- Added default values for the `attributes` parameter to two more build methods.
- Extended the op-decls.td unit test to cover these build methods.
Differential Revision: https://reviews.llvm.org/D83839
This adds a `parseOptionalAttribute` method to the OpAsmParser that allows for parsing optional attributes, in a similar fashion to how optional types are parsed. This also enables the use of attribute values as the first element of an assembly format optional group.
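As a rough illustration (not part of the patch; the op, the attribute name, and the surrounding logic are made up), a hand-written parser could use the new hook like this:
```c++
#include "mlir/IR/OpImplementation.h"

using namespace mlir;

// Hypothetical custom parser; only parseOptionalAttribute comes from this
// change, everything else is illustrative.
static ParseResult parseMyOp(OpAsmParser &parser, OperationState &result) {
  Attribute attr;
  // OptionalParseResult: no value means the attribute was simply absent;
  // otherwise it carries the success/failure of the actual parse.
  OptionalParseResult parsed = parser.parseOptionalAttribute(attr);
  if (parsed.hasValue()) {
    if (failed(*parsed))
      return failure();
    result.addAttribute("value", attr);
  }
  return parser.parseOptionalAttrDict(result.attributes);
}
```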
Differential Revision: https://reviews.llvm.org/D83712
Summary: Currently forward decls are included with all the op classes. But there are cases (say when splitting up headers) where one wants the forward decls but not all the classes. Add an option to enable this. This does not change any current behavior (some further refactoring is probably due here).
Differential Revision: https://reviews.llvm.org/D83727
- Provide a default value for the `ArrayRef<NamedAttribute> attributes` parameter of
the collective params build method.
- Change the `genSeparateArgParamBuilder` function to not generate build methods
that may be ambiguous with the new collective params build method.
- This change should help eliminate passing an empty NamedAttribute ArrayRef when the
collective params build method is used.
- Extend op-decl.td unit test to make sure the ambiguous build methods are not
generated.
Differential Revision: https://reviews.llvm.org/D83517
The namespace can be specified using the `cppNamespace` field. This matches the functionality already present on dialects, enums, etc. This fixes problems with using interfaces on operations in a different namespace than the interface was defined in.
Differential Revision: https://reviews.llvm.org/D83604
Introduce a pass to convert parallel affine.for ops into 1-D affine.parallel ops.
Run it using --affine-parallelize. This removes test-detect-parallel, the pass for
checking parallel affine.for ops.
Signed-off-by: Yash Jain <yash.jain@polymagelabs.com>
Differential Revision: https://reviews.llvm.org/D83193
- Create a pass that generates bugs based on trivially defined behavior for the purpose of testing the MLIR Reduce Tool.
- Implement the functionality inside the pass to crash mlir-opt in the presence of an operation with the name "crashOp"; see the sketch after this list.
- Register the pass as a test pass in the mlir-opt tool.
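A minimal sketch of what such a test pass could look like (class name, pass argument, and the exact crash mechanism are illustrative assumptions, not the in-tree implementation):
```c++
#include "mlir/IR/Operation.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Pass/PassRegistry.h"
#include <cstdlib>

using namespace mlir;

namespace {
// Walks all nested operations and deliberately exits abnormally when an op
// whose name contains "crashOp" is found, giving the reduction tool an
// "interesting" failure to preserve while reducing.
struct TestCrasherPass
    : public PassWrapper<TestCrasherPass, OperationPass<>> {
  void runOnOperation() override {
    getOperation()->walk([](Operation *op) {
      if (op->getName().getStringRef().contains("crashOp"))
        std::exit(1); // simulate a crash for the interestingness test
    });
  }
};
} // namespace

// Illustrative registration hook, to be called from the tool's initialization.
void registerTestCrasherPass() {
  PassRegistration<TestCrasherPass>(
      "test-crash-on-crashop",
      "Crash when an operation named 'crashOp' is present");
}
```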
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D83422
Create the framework and testing environment for MLIR Reduce - a tool
with the objective to reduce large test cases into smaller ones while
preserving their interesting behavior.
Implement the functionality to parse command line arguments, parse the
MLIR test cases into modules and run the interestingness tests on
the modules.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D82803
An operation can specify that an operand or result type matches the
type of another operand, result, or attribute via the `AllTypesMatch`
or `TypesMatchWith` constraints.
Use these constraints to also resolve types automatically in the
generated assembly parser.
This way, only the attribute needs to be listed in `assemblyFormat`,
e.g. for constant operations.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D78434
Create the framework and testing environment for MLIR Reduce - a tool
with the objective to reduce large test cases into smaller ones while
preserving their interesting behavior.
Implement the framework to parse the command line arguments, parse the
input MLIR test case into a module and call reduction passes on the MLIR module.
Implement the Tester class which allows the different reduction passes to test the
interesting behavior of the generated reduced variants of the test case and keep track
of the most reduced generated variant.
Introduce a pass to convert parallel affine.for ops into 1-D
affine.parallel ops. Run it using --affine-parallelize. This removes
test-detect-parallel, the pass for checking parallel affine.for ops.
Differential Revision: https://reviews.llvm.org/D82672
This enables better support for traits such as SameOperandsAndResultType, and for other situations in which a variadic operand may be resolved from a non-variadic one.
Differential Revision: https://reviews.llvm.org/D83011
This revision adds support to ODS for generating interfaces for attributes and types, in addition to operations. These interfaces can be specified using `AttrInterface` and `TypeInterface` in place of `OpInterface`. All of the features of `OpInterface` are supported except for the `verify` method, which does not have a matching representation in the Attribute/Type world. Generating these interfaces can be done using `gen-(attr|type)-interface-(defs|decls|docs)`.
Differential Revision: https://reviews.llvm.org/D81884
Also fixed a bug in the type inference generator where operands and
attributes are interleaved.
Differential Revision: https://reviews.llvm.org/D82819
To get more meaningful performance out of workloads going through
the vulkan-runner, we need to use buffers from GPU device memory, as access to
system memory is significantly slower for GPUs with dedicated memory. This adds
code to do a copy through a staging buffer, since GPU memory cannot always be
mapped on the host.
Differential Revision: https://reviews.llvm.org/D82504
Using fully qualified names wherever possible avoids ambiguous class and function names. This is a follow-up to D82371.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D82471
Summary: The Pass class exists in both the mlir and the llvm namespaces. Use the fully qualified class name to avoid any ambiguities.
Reviewers: rriddle
Reviewed By: rriddle
Subscribers: mehdi_amini, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D82371
Add an option to filter which ops OpDefinitionsGen runs on. This enables having multiple ops together in the same TD file but generating different CC files for them (useful if one wants to use multiclasses or split one dialect into multiple libraries). There is probably a more general query possible here (e.g., split out all ops that don't have a verify method, or that are commutative), but filtering based on op name (e.g., test.a_op) seemed a reasonable start and didn't require inventing a query specification mechanism.
Differential Revision: https://reviews.llvm.org/D82319
Summary:
Currently, the TableGen rewrite generates redundant native calls in MLIR DRR files. This is a problem as some native calls may involve significant computations (e.g. when performing constant propagation where every value in a large tensor is touched).
The pattern was as follows:
```c++
if (native-call(args)) tblgen_attrs.emplace_back(rewriter, attribute, native-call(args))
```
The replacement pattern computes `native-call(args)` once and then uses it both in the `if` condition and the `emplace_back` call.
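For illustration, using the same schematic notation as above, the generated code now looks roughly like this (the temporary's name is made up):
```c++
if (auto nativeCallRes = native-call(args))
  tblgen_attrs.emplace_back(rewriter, attribute, nativeCallRes);
```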
Differential Revision: https://reviews.llvm.org/D82101
Summary:
Fixed build of D81618
Add a pattern for expanding tanh op into exp form.
A `tanh` is expanded into:
1) (1 - exp(-2x)) / (1 + exp(-2x)), if x >= 0
2) (exp(2x) - 1) / (exp(2x) + 1),  if x < 0.
Differential Revision: https://reviews.llvm.org/D82040
Summary:
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).
During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.
Deciding on a type-polymorphic behavior, and implementing it, is left for future work.
Reviewers: aartbik
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D81935
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).
During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.
Deciding on a type-polymorphic behavior, and implementing it, is left for future work.
Differential Revision: https://reviews.llvm.org/D79762
This reverts commit 32c757e4f8.
Broke the build bot:
******************** TEST 'MLIR :: Examples/standalone/test.toy' FAILED ********************
[...]
/tmp/ci-KIMiRFcVZt/lib/libMLIRLinalgToLLVM.a(LinalgToLLVM.cpp.o): In function `(anonymous namespace)::ConvertLinalgToLLVMPass::runOnOperation()':
LinalgToLLVM.cpp:(.text._ZN12_GLOBAL__N_123ConvertLinalgToLLVMPass14runOnOperationEv+0x100): undefined reference to `mlir::populateExpandTanhPattern(mlir::OwningRewritePatternList&, mlir::MLIRContext*)'
Summary:
Add a pattern for expanding tanh op into exp form.
A `tanh` is expanded into:
1) (1 - exp(-2x)) / (1 + exp(-2x)), if x >= 0
2) (exp(2x) - 1) / (exp(2x) + 1),  if x < 0.
Differential Revision: https://reviews.llvm.org/D81618
Summary:
* extra ';' in the following files:
mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
mlir/lib/Dialect/Shape/IR/Shape.cpp
* base class ‘mlir::ConvertVectorToSCFBase<ConvertVectorToSCFPass>’
should be explicitly initialized in the copy constructor [-Wextra] in
mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
* warning: ‘bool Expression::operator==(const Expression&) const’
defined but not used [-Wunused-function] in
mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp
Differential Revision: https://reviews.llvm.org/D81673
This parameter gives developers the freedom to choose their desired function
signature conversion for preparing their functions for buffer placement. It is
introduced for BufferAssignmentFuncOpConverter, as well as for
BufferAssignmentReturnOpConverter and BufferAssignmentCallOpConverter, to adapt
the return and call operations to the selected function signature conversion.
If the parameter is set, buffer placement also does not deallocate the returned
buffers.
Differential Revision: https://reviews.llvm.org/D81137
This allows verifying op-independent attributes (e.g., attributes that do not require the op to have been created) before constructing an operation. These include checking whether required attributes are defined or constraints on attributes (such as I32 attribute). This is not perfect (e.g., if one had a disjunctive constraint where one part relied on the op and the other didn't, then this would not try to extract the op-independent part from the op-dependent one).
The next step is to move these out to a trait that could be verified earlier than in the generated method. The first use case is for inferring the return type while constructing the op. At that point you don't have an Operation yet, which otherwise forces one to duplicate the same checks, e.g., verifying that attribute A is defined before querying A in the shape function. Instead this allows one to invoke a method to verify all the traits and, if this is checked first during verification, then all other traits could use attributes knowing they have been verified.
It is a little bit funny to have these on the adaptor, but I see the adaptor as a place to collect information about the op before it is constructed (e.g., avoiding stringly typed accessors, verifying what is possible to verify before the op is constructed) while being cheap to use even with a constructed op (so it acts as a layer of indirection for an op that is constructed or being constructed). And from that point of view it made sense to me.
Differential Revision: https://reviews.llvm.org/D80842
Summary:
`mlir-rocm-runner` is introduced in this commit to execute GPU modules on the ROCm
platform. A small wrapper to encapsulate ROCm's HIP runtime API is also included
in the commit.
Due to behavior of ROCm, raw pointers inside memrefs passed to `gpu.launch`
must be modified on the host side to properly capture the pointer values
addressable on the GPU.
LLVM MC is used to assemble the AMD GCN ISA coming out of
`ConvertGPUKernelToBlobPass` into binary form, and LLD is used to produce a shared
ELF object which can be loaded by the ROCm HIP runtime.
gfx900 is the default target used right now, although it could be altered via
an option in `mlir-rocm-runner`. Future revisions may consider using ROCm Agent
Enumerator to detect the right target on the system.
Notice AMDGPU Code Object V2 is used in this revision. Future enhancements may
upgrade to AMDGPU Code Object V3.
Bitcode libraries in ROCm-Device-Libs, which implement math routines exposed in
the `rocdl` dialect, are not yet linked; this is left as a TODO in the logic.
Reviewers: herhut
Subscribers: mgorny, tpr, dexonsmith, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits
Tags: #mlir, #llvm
Differential Revision: https://reviews.llvm.org/D80676
This revision adds a helper function to hoist alloc/dealloc pairs and
alloca ops out of the immediately enclosing scf::ForOp if both conditions are true:
1. all operands are defined outside the loop.
2. all uses are ViewLikeOp or DeallocOp.
This is now considered Linalg-specific and will be generalized on a per-need basis.
Differential Revision: https://reviews.llvm.org/D81152
This utility factors out the machinery required to add iterArgs and yield values to an scf.ForOp.
Differential Revision: https://reviews.llvm.org/D80656
https://reviews.llvm.org/D79246 introduces alignment propagation for vector transfer operations. Unfortunately, the alignment calculation is incorrect and can result in crashes.
This revision fixes the calculation by using the natural alignment of the memref elemental type, instead of the resulting vector type.
If more alignment is desired, it can be done in 2 ways:
1. use a proper vector.type_cast to transform a memref<axbxcxdxf32> into a memref<axbxvector<cxdxf32>> giving a natural alignment of vector<cxdxf32>
2. add an alignment attribute to vector transfer operations and propagate it.
With this change the alignment in the relevant tests goes down from 128 to 4.
Lastly, a few minor cleanups are performed and the custom `isMinorIdentityMap` is deprecated.
Differential Revision: https://reviews.llvm.org/D80734
This allows constructing an operand adaptor from an existing op (useful for commonalizing verification, as I want to do in a follow-up).
I also add the ability to use member initializers for the generated adaptor constructors for convenience.
Differential Revision: https://reviews.llvm.org/D80667
Make the ConvertKernelFuncToCubin pass generic:
- Rename to ConvertKernelFuncToBlob.
- Allow specifying triple, target chip, target features.
- Initializing LLVM backend is supplied by a callback function.
- Lowering process from MLIR module to LLVM module is via another callback.
- Change mlir-cuda-runner to adopt the revised pass.
- Add new tests for lowering to ROCm HSA code object (HSACO).
- Tests for CUDA and ROCm are kept in separate directories.
Differential Revision: https://reviews.llvm.org/D80142
Take advantage of equality constraints to generate the type inference interface.
This is used for equality and trivially built types. The type inference method
is only generated when no type inference trait is specified already.
This reorders verification, which changes some test error messages.
Differential Revision: https://reviews.llvm.org/D80484
Summary:
Add DynamicMemRefType, which can reference either a statically ranked StridedMemRefType or an UnrankedMemRefType, so that runner utils only need to be implemented once.
There is definitely room for more cleanup and unification, but I will keep that for follow-ups.
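As a rough sketch (the function names and the printing are made up; only DynamicMemRefType and its construction from ranked or unranked descriptors come from this change), a runner util written once against it could look like:
```c++
#include "mlir/ExecutionEngine/CRunnerUtils.h"
#include <cstdio>

// Works for any rank: DynamicMemRefType normalizes ranked and unranked
// descriptors into one view exposing rank/data/offset/sizes/strides.
template <typename T>
static void printFirstElement(const DynamicMemRefType<T> &memRef) {
  std::printf("rank = %lld, memRef[0] = %g\n",
              static_cast<long long>(memRef.rank),
              static_cast<double>(memRef.data[memRef.offset]));
}

extern "C" void print_first_element_f32(UnrankedMemRefType<float> *unranked) {
  printFirstElement(DynamicMemRefType<float>(*unranked));
}

extern "C" void print_first_element_2d_f32(StridedMemRefType<float, 2> *ranked) {
  printFirstElement(DynamicMemRefType<float>(*ranked));
}
```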
Reviewers: nicolasvasilache
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80513
* Enables use with more variadic-sized operands;
* Generates convenience accessors for attributes;
- The accessors are named the same as in ODS and return the attribute
type (not the convenience type); derived attributes are not included.
This is a first step toward changing the adaptor to support verifying argument
constraints before the op is even created. To keep this change smaller, it does not
change the name of the adaptor nor require its use, except for ops with variadic operands.
I considered creating a separate adaptor but decided against that, given that operands also require attributes in general (and definitely for verification of operands and attributes).
Adds cooperative matrix support for arithmetic and cast
instructions. It also adds the cooperative matrix store, muladd and matrixlength
instructions, which are part of the extension.
Differential Revision: https://reviews.llvm.org/D80181
Due to the similar APIs between CUDA and ROCm (HIP), the
ConvertGpuLaunchFuncToCudaCalls pass can be used on both platforms with some
refactoring.
In this commit:
- Migrate ConvertLaunchFuncToCudaCalls from GPUToCUDA to GPUCommon, and rename.
- Rename runtime wrapper APIs to be platform-neutral.
- Let GPU binary annotation attribute be specifiable as a PassOption.
- Naming changes within the implementation and tests.
Subsequent patches will introduce ROCm-specific tests and runtime wrapper
APIs.
Differential Revision: https://reviews.llvm.org/D80167
This reverts commit cdb6f05e2d.
The build is broken with:
You have called ADD_LIBRARY for library obj.MLIRGPUtoCUDATransforms without any source files. This typically indicates a problem with your CMakeLists.txt file