Add an option to filter which ops OpDefinitionsGen runs on. This enables having multiple ops together in the same TD file but generating different CC files for them (useful if one wants to use multiclasses or split one dialect into multiple libraries). There is probably a more general query mechanism possible here (e.g., split out all ops that don't have a verify method, or that are commutative), but filtering based on op name (e.g., test.a_op) seemed a reasonable start and didn't require inventing a query specification mechanism.
Differential Revision: https://reviews.llvm.org/D82319
Summary:
Currently, the code TableGen generates from MLIR DRR files contains redundant native calls. This is a problem as some native calls may involve significant computation (e.g. when performing constant propagation where every value in a large tensor is touched).
The generated pattern was as follows:
```c++
if (native-call(args)) tblgen_attrs.emplace_back(rewriter, attribute, native-call(args))
```
The replacement pattern computes `native-call(args)` once and then uses the result both in the `if` condition and in the `emplace_back` call.
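The new shape, sketched in the same pseudo-code (the temporary's name is illustrative):
```c++
// native-call(args) is evaluated exactly once; its result drives both
// the guard and the attribute that gets recorded.
if (auto nativeCallRes = native-call(args))
  tblgen_attrs.emplace_back(rewriter, attribute, nativeCallRes);
```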
Differential Revision: https://reviews.llvm.org/D82101
Summary:
Fixed build of D81618
Add a pattern for expanding tanh op into exp form.
A `tanh` is expanded into:
1) (1 - exp(-2x)) / (1 + exp(-2x)), if x >= 0
2) (exp(2x) - 1) / (exp(2x) + 1), if x < 0.
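For reference, a minimal scalar sketch of the expansion in plain C++ (not the MLIR pattern itself); the branch selection keeps the exponent non-positive, so `exp` cannot overflow:
```c++
#include <cmath>

// Numerically stable tanh using the two branches above: the argument
// passed to exp is always <= 0, so its result stays within (0, 1].
double expandedTanh(double x) {
  if (x >= 0.0) {
    double e = std::exp(-2.0 * x); // branch 1: x >= 0
    return (1.0 - e) / (1.0 + e);
  }
  double e = std::exp(2.0 * x);    // branch 2: x < 0
  return (e - 1.0) / (e + 1.0);
}
```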
Differential Revision: https://reviews.llvm.org/D82040
Summary:
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).
During the replacement, the i32 version had to be dropped because only the EDSC operators +, *, etc. support type inference.
Deciding on a type-polymorphic behavior, and implementing it, is left for future work.
Reviewers: aartbik
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D81935
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).
During the replacement, the i32 version had to be dropped because only the EDSC operators +, *, etc. support type inference.
Deciding on a type-polymorphic behavior, and implementing it, is left for future work.
Differential Revision: https://reviews.llvm.org/D79762
This reverts commit 32c757e4f8.
Broke the build bot:
******************** TEST 'MLIR :: Examples/standalone/test.toy' FAILED ********************
[...]
/tmp/ci-KIMiRFcVZt/lib/libMLIRLinalgToLLVM.a(LinalgToLLVM.cpp.o): In function `(anonymous namespace)::ConvertLinalgToLLVMPass::runOnOperation()':
LinalgToLLVM.cpp:(.text._ZN12_GLOBAL__N_123ConvertLinalgToLLVMPass14runOnOperationEv+0x100): undefined reference to `mlir::populateExpandTanhPattern(mlir::OwningRewritePatternList&, mlir::MLIRContext*)'
Summary:
Add a pattern for expanding tanh op into exp form.
A `tanh` is expanded into:
1) (1 - exp(-2x)) / (1 + exp(-2x)), if x >= 0
2) (exp(2x) - 1) / (exp(2x) + 1), if x < 0.
Differential Revision: https://reviews.llvm.org/D81618
Summary:
Fix several compiler warnings:
* extra ';' in the following files:
mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
mlir/lib/Dialect/Shape/IR/Shape.cpp
* base class ‘mlir::ConvertVectorToSCFBase<ConvertVectorToSCFPass>’
should be explicitly initialized in the copy constructor [-Wextra] in
mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
* warning: ‘bool Expression::operator==(const Expression&) const’
defined but not used [-Wunused-function] in
mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp
Differential Revision: https://reviews.llvm.org/D81673
This parameter gives the developers the freedom to choose their desired function
signature conversion for preparing their functions for buffer placement. It is
introduced for BufferAssignmentFuncOpConverter, and also for
BufferAssignmentReturnOpConverter and BufferAssignmentCallOpConverter, to adapt
the return and call operations to the selected function signature conversion.
If the parameter is set, buffer placement also won't deallocate the returned
buffers.
Differential Revision: https://reviews.llvm.org/D81137
This allows verifying op-independent attributes (e.g., attributes that do not require the op to have been created) before constructing an operation. These include checking whether required attributes are defined and checking constraints on attributes (such as an I32 attribute). This is not perfect (e.g., if one had a disjunctive constraint where one part relies on the op and the other doesn't, then this would not try to extract the op-independent part from the op-dependent one).
The next step is to move these out to a trait that could be verified earlier than in the generated method. The first use case is inferring the return type while constructing the op. At that point you don't have an Operation yet, which forces one to duplicate the same checks, e.g., verifying that attribute A is defined before querying A in the shape function. Instead, this allows one to invoke a method to verify all the traits and, if this is checked first during verification, then all other traits can use attributes knowing they have been verified.
It is a little bit funny to have these on the adaptor, but I see the adaptor as a place to collect information about the op before the op is constructed (e.g., avoiding stringly typed accessors, verifying what is possible to verify before the op is constructed) while being cheap to use even with a constructed op (so it is a layer of indirection between the constructed op and the op being constructed). And from that point of view it made sense to me.
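As a rough sketch of the intended use (op and attribute names hypothetical), the adaptor can vet op-independent constraints before any Operation exists:
```c++
// Build the adaptor from raw operands + attributes, then verify the
// op-independent constraints (required attributes present, attribute
// type constraints hold) before ever creating the op.
MyOpAdaptor adaptor(operands, attributes);
if (failed(adaptor.verify(loc)))
  return failure(); // e.g., required I32 attribute missing or mistyped
```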
Differential Revision: https://reviews.llvm.org/D80842
Summary:
`mlir-rocm-runner` is introduced in this commit to execute GPU modules on ROCm
platform. A small wrapper to encapsulate ROCm's HIP runtime API is also inside
the commit.
Due to behavior of ROCm, raw pointers inside memrefs passed to `gpu.launch`
must be modified on the host side to properly capture the pointer values
addressable on the GPU.
LLVM MC is used to assemble the AMD GCN ISA coming out of
`ConvertGPUKernelToBlobPass` into binary form, and LLD is used to produce a shared
ELF object which can be loaded by the ROCm HIP runtime.
gfx900 is the default target used right now, although it can be altered via
an option in `mlir-rocm-runner`. Future revisions may consider using the ROCm Agent
Enumerator to detect the right target on the system.
Notice AMDGPU Code Object V2 is used in this revision. Future enhancements may
upgrade to AMDGPU Code Object V3.
Bitcode libraries in ROCm-Device-Libs, which implement the math routines exposed in
the `rocdl` dialect, are not yet linked; this is left as a TODO in the logic.
Reviewers: herhut
Subscribers: mgorny, tpr, dexonsmith, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits
Tags: #mlir, #llvm
Differential Revision: https://reviews.llvm.org/D80676
This revision adds a helper function to hoist alloc/dealloc pairs and
alloca ops out of the immediately enclosing scf::ForOp if both conditions are true:
1. all operands are defined outside the loop.
2. all uses are ViewLikeOp or DeallocOp.
This is now considered Linalg-specific and will be generalized on a per-need basis.
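A simplified sketch of the two conditions above (helper and API names assumed, not the exact implementation):
```c++
// An alloc-like op can be hoisted out of the enclosing scf::ForOp when
// (1) its operands are loop-invariant and (2) every use is a view or a
// dealloc, so hoisting cannot change observable behavior.
static bool canHoist(Operation *allocOp, scf::ForOp forOp) {
  for (Value operand : allocOp->getOperands())
    if (!forOp.isDefinedOutsideOfLoop(operand))
      return false; // condition 1 violated
  for (Operation *user : allocOp->getUsers())
    if (!isa<ViewLikeOpInterface>(user) && !isa<DeallocOp>(user))
      return false; // condition 2 violated
  return true;
}
```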
Differential Revision: https://reviews.llvm.org/D81152
This utility factors out the machinery required to add iterArgs and yield values to an scf.ForOp.
Differential Revision: https://reviews.llvm.org/D80656
https://reviews.llvm.org/D79246 introduces alignment propagation for vector transfer operations. Unfortunately, the alignment calculation is incorrect and can result in crashes.
This revision fixes the calculation by using the natural alignment of the memref elemental type, instead of the resulting vector type.
If more alignment is desired, it can be done in 2 ways:
1. use a proper vector.type_cast to transform a memref<axbxcxdxf32> into a memref<axbxvector<cxdxf32>> giving a natural alignment of vector<cxdxf32>
2. add an alignment attribute to vector transfer operations and propagate it.
With this change the alignment in the relevant tests goes down from 128 to 4.
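To illustrate the numbers with plain C++ arithmetic (the vector<4x8xf32> shape is an assumed example, chosen to match the 128-byte figure):
```c++
#include <cstdio>

int main() {
  // After the fix: natural alignment of the elemental type f32.
  unsigned eltAlign = alignof(float);        // 4 bytes
  // Before the fix: alignment derived from the whole vector type,
  // e.g. vector<4x8xf32> = 32 floats treated as one aligned unit.
  unsigned vecAlign = 4 * 8 * sizeof(float); // 128 bytes
  printf("alignment: %u -> %u\n", vecAlign, eltAlign);
  return 0;
}
```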
Lastly, a few minor cleanups are performed and the custom `isMinorIdentityMap` is deprecated.
Differential Revision: https://reviews.llvm.org/D80734
This allows constructing an operand adaptor from an existing op (useful for commonalizing verification, as I want to do in a follow-up).
I also add the ability to use member initializers for the generated adaptor constructors for convenience.
Differential Revision: https://reviews.llvm.org/D80667
Make the ConvertKernelFuncToCubin pass generic:
- Rename to ConvertKernelFuncToBlob.
- Allow specifying triple, target chip, target features.
- Initializing the LLVM backend is supplied by a callback function.
- The lowering process from MLIR module to LLVM module goes via another callback.
- Change mlir-cuda-runner to adopt the revised pass.
- Add new tests for lowering to ROCm HSA code object (HSACO).
- Tests for CUDA and ROCm are kept in separate directories.
Differential Revision: https://reviews.llvm.org/D80142
Take advantage of equality constraints to generate the type inference interface.
This is used for equality and trivially built types. The type inference method
is only generated when no type inference trait is specified already.
This reorders verification, which changes some test error messages.
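For illustration, a hedged sketch of the kind of method this can generate for an op whose result type is constrained to equal its first operand's type (op name hypothetical):
```c++
// Generated only because no type inference trait was already specified;
// result #0 is constrained equal to operand #0, so its type is copied.
LogicalResult MyOp::inferReturnTypes(
    MLIRContext *context, Optional<Location> location, ValueRange operands,
    DictionaryAttr attributes, RegionRange regions,
    SmallVectorImpl<Type> &inferredReturnTypes) {
  inferredReturnTypes.push_back(operands[0].getType());
  return success();
}
```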
Differential Revision: https://reviews.llvm.org/D80484
Summary:
Add DynamicMemRefType, which can reference either a statically ranked StridedMemRefType or an UnrankedMemRefType, so that runner utils only need to be implemented once.
There is definitely room for more clean up and unification, but I will keep that for follow-ups.
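A simplified sketch of the shape of such a type (fields illustrative, simplified from the actual header):
```c++
// One dynamic view over both descriptor kinds: rank and layout live in
// runtime fields, so a runner utility (e.g. a printer) is written once
// instead of once per static rank.
template <typename T>
struct DynamicMemRefType {
  int64_t rank;
  T *basePtr;
  T *data;
  int64_t offset;
  const int64_t *sizes;   // `rank` entries
  const int64_t *strides; // `rank` entries
  // Constructors (omitted) populate these fields from either a
  // StridedMemRefType<T, N> or an UnrankedMemRefType<T>.
};
```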
Reviewers: nicolasvasilache
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80513
* Enables use with more variadic-sized operands;
* Generates convenience accessors for attributes;
- The accessors are named the same as in ODS and return the attribute
type (not a convenience type); derived attributes are not included.
This is a first step toward changing the adaptor to support verifying argument
constraints before the op is even created. To keep this change smaller, it does not
change the name of the adaptor, nor does it require the adaptor except for ops with variadic operands.
I considered creating a separate adaptor but decided against it, given that operands also require attributes in general (and definitely for verification of operands and attributes).
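A hedged sketch of a generated accessor (op/attribute names hypothetical; `odsAttrs` stands for the adaptor's attribute storage):
```c++
// Named after the ODS attribute and returning the attribute type itself,
// not a converted convenience type; derived attributes get no accessor.
IntegerAttr MyOpAdaptor::value() {
  return odsAttrs.get("value").cast<IntegerAttr>();
}
```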
Differential Revision: https://reviews.llvm.org/D80420
Adds cooperative matrix support for arithmetic and cast
instructions. It also adds the cooperative matrix store, muladd and matrixlength
instructions, which are part of the extension.
Differential Revision: https://reviews.llvm.org/D80181
Due to similar APIs between CUDA and ROCm (HIP),
ConvertGpuLaunchFuncToCudaCalls pass could be used on both platforms with some
refactoring.
In this commit:
- Migrate ConvertLaunchFuncToCudaCalls from GPUToCUDA to GPUCommon, and rename.
- Rename runtime wrapper APIs to be platform-neutral.
- Let GPU binary annotation attribute be specifiable as a PassOption.
- Naming changes within the implementation and tests.
Subsequent patches will introduce ROCm-specific tests and runtime wrapper
APIs.
Differential Revision: https://reviews.llvm.org/D80167
This reverts commit cdb6f05e2d.
The build is broken with:
You have called ADD_LIBRARY for library obj.MLIRGPUtoCUDATransforms without any source files. This typically indicates a problem with your CMakeLists.txt file
Due to similar APIs between CUDA and ROCm (HIP),
ConvertGpuLaunchFuncToCudaCalls pass could be used on both platforms with some
refactoring.
In this commit:
- Migrate ConvertLaunchFuncToCudaCalls from GPUToCUDA to GPUCommon, and rename.
- Rename runtime wrapper APIs to be platform-neutral.
- Let GPU binary annotation attribute be specifiable as a PassOption.
- Naming changes within the implementation and tests.
Subsequent patches will introduce ROCm-specific tests and runtime wrapper
APIs.
Differential Revision: https://reviews.llvm.org/D80167
Enclose verifier code for AttrSizedOperandSegments and AttrSizedResultSegments
in a nested code block to avoid symbol collision.
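A sketch of the resulting shape (checks elided); the braces are the point, since both traits can otherwise declare identically named locals in the same verifier:
```c++
// Verify AttrSizedOperandSegments.
{
  auto sizeAttr =
      op.getAttrOfType<DenseIntElementsAttr>("operand_segment_sizes");
  // ... check the segment sizes sum to the operand count ...
}
// Verify AttrSizedResultSegments: reuses the local name without clashing.
{
  auto sizeAttr =
      op.getAttrOfType<DenseIntElementsAttr>("result_segment_sizes");
  // ... same checks against the result count ...
}
```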
Differential Revision: https://reviews.llvm.org/D80250
Summary: This revision adds support for assembly formats with optional attributes. It elides optional attributes that are part of the syntax from the attribute dictionary.
Reviewers: ftynse, Kayjukh
Reviewed By: ftynse, Kayjukh
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, jurahul, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80113
The JitRunner library is logically very close to the execution engine,
and shares similar dependencies.
find -name "*.cpp" -exec sed -i "s/Support\/JitRunner/ExecutionEngine\/JitRunner/" "{}" \;
Differential Revision: https://reviews.llvm.org/D79899
The following Conversions are affected: LoopToStandard -> SCFToStandard,
LoopsToGPU -> SCFToGPU, VectorToLoops -> VectorToSCF. Full file paths are
affected. Additionally, drop the 'Convert' prefix from filenames living under
lib/Conversion where applicable.
API names and CLI options for pass testing are also renamed when applicable. In
particular, LoopsToGPU contains several passes that apply to different kinds of
loops (`for` or `parallel`), for which the original names are preserved.
Differential Revision: https://reviews.llvm.org/D79940
The Vulkan runtime wrapper will be compiled to a shared library
that is loaded by the JIT runner. Depending on LLVM libraries
means that LLVM symbols will be compiled into the shared library.
That can cause problems if we are using it with other shared
libraries depending on LLVM, notably Mesa, the open-source graphics
driver framework. The Vulkan API wrappers invoked by the JIT runner
link to the system libvulkan.so. If it's Mesa providing the
implementation, Mesa will normally try to load the system libLLVM.so
for its shader compilation. That causes issues because the JIT runner
already loaded the Vulkan runtime wrapper, which has LLVM symbols
compiled in, so the system linker will instruct Mesa to use those symbols
instead.
Differential Revision: https://reviews.llvm.org/D79860
This normalizes the name of the tablegen file to match the name of the generated
files (SideEffectInterfaces.h.inc) and the other Interface tablegen files,
which all end in Interface(s).td.
Differential Revision: https://reviews.llvm.org/D79517
This is a wrapper around a vector of NamedAttributes that keeps track of whether it is sorted and makes some minimal effort to remain sorted (doing more, e.g., appending attributes in sorted order, could be done in a follow-up). If a DictionaryAttr is queried, it caches the returned DictionaryAttr along with whether it was sorted.
Change MutableDictionaryAttr to always return a non-null Attribute even when empty (reserving null for errors). To this end, change the getter to take a context as input so that the empty DictionaryAttr can be queried. Also create one instance of the empty dictionary attribute that can be reused without needing to lock the context, etc.
Update the infer type op interface to use DictionaryAttr and use NamedAttrList to avoid incurring multiple conversion costs.
Fix a bug in the sorting helper function.
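A simplified sketch of the wrapper's layout (member names illustrative):
```c++
// Vector of attributes plus a cache: the pointer half holds the last
// DictionaryAttr built from `attrs`, and the int half records whether
// `attrs` was sorted; mutation invalidates the cache.
class NamedAttrList {
  SmallVector<NamedAttribute, 4> attrs;
  mutable llvm::PointerIntPair<Attribute, 1, bool> dictionarySorted;
};
```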
Differential Revision: https://reviews.llvm.org/D79463
vulkan-runtime-wrappers does not need MLIRSPIRVSerialization,
which is used by the ConvertGpuLaunchFuncToVulkanLaunchFunc pass
under the hood.
Differential Revision: https://reviews.llvm.org/D79577
SPIR-V ops can mix operands and attributes in the definition. These
operands and attributes are serialized in the exact order of the definition
to match SPIR-V binary format requirements. This can cause excessive
bloat in the generated code because we emit code to handle each
operand/attribute separately. So here we first probe to check whether all
the operands come ahead of the attributes; if so, we can serialize all operands
together.
This removes ~1000 lines of code from the generated inc file.
Differential Revision: https://reviews.llvm.org/D79446
These template functions are used in the serializer, where we can
actually directly query the opcode from the op's definition and
use that in the auto-generated serialization logic.
This removes a set of templates accounting for 319 lines from
the auto-generated inc file.
Differential Revision: https://reviews.llvm.org/D79444
We see intermittent build errors on the windows buildbot because
mlir-opt is including Linalg headers which haven't been built yet.
This dependence should be resolved by declaring a PUBLIC dependence
on the Linalg library when building MLIROptMain.
Summary:
Adds the loop unroll transformation for loop::ForOp.
Adds support for promoting the bodies of single-iteration loop::ForOps into their containing blocks.
Adds check tests for loop::ForOps with dynamic and static lower/upper bounds and step.
Care was taken to share code (where possible) with the AffineForOp unroll transformation to ease maintenance and a potential future transition to a LoopLike construct on which loop transformations for different loop types can be implemented.
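As a usage sketch (the utility's name and signature are assumed here, mirroring the AffineForOp counterpart mentioned above):
```c++
// Inside a function pass: attempt to unroll each loop::ForOp by a
// constant factor; single-iteration loops produced along the way are
// promoted into their containing blocks.
getFunction().walk([&](loop::ForOp forOp) {
  if (failed(loopUnrollByFactor(forOp, /*unrollFactor=*/4)))
    signalPassFailure();
});
```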
Reviewers: ftynse, nicolasvasilache
Reviewed By: ftynse
Subscribers: bondhugula, mgorny, zzheng, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79184