This is the start of the MLIR benchmarks. It sets up a command-line
tool along with conventions to define and run benchmarks
using MLIR's Python bindings.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D115174
Although CMake should be platform-independent, we've observed
that some aspects of Python detection don't work on all platforms,
even with recent versions of CMake. This appears to be due to bugs
in the Python detection logic, especially when the NumPy component
is required and not located within the Python installation.
As a workaround, this patch first searches for "Development" before
searching for "Development.Module", which seems to work around the
issue.
Differential Revision: https://reviews.llvm.org/D118148
This is in preparation for switching `-tensor-constant-bufferize` and `-arith-bufferize` to BufferizableOpInterface-based implementations.
Differential Revision: https://reviews.llvm.org/D118324
This commit switches the `tensor-bufferize` pass over to BufferizableOpInterface-based bufferization.
Differential Revision: https://reviews.llvm.org/D118246
The pass currently cannot handle to_memref(to_tensor(x)) foldings where a cast is necessary; such foldings are required with the new unified bufferization. There is already a canonicalization pattern that handles them, so it should be used during this pass.
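A minimal sketch of that idea, written as a standalone helper (assuming the ops live in the bufferization dialect and that a greedy rewrite is acceptable here; this is illustrative, not the exact code in the patch):

```cpp
#include "mlir/Dialect/Bufferization/IR/Bufferization.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

// Reuse the existing to_memref(to_tensor(x)) canonicalizations, which already
// insert a memref.cast when the source and target types differ.
static mlir::LogicalResult foldToMemrefToTensorPairs(mlir::Operation *root) {
  mlir::MLIRContext *ctx = root->getContext();
  mlir::RewritePatternSet patterns(ctx);
  mlir::bufferization::ToMemrefOp::getCanonicalizationPatterns(patterns, ctx);
  return mlir::applyPatternsAndFoldGreedily(root, std::move(patterns));
}
```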
Differential Revision: https://reviews.llvm.org/D117988
Stop the Cpp target from emitting unused labels. The previously generated
code produced warnings if `-Wunused-label` was passed to the compiler.
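For illustration (hand-written, not literal emitter output), the previously emitted code looked roughly like this, which trips the warning:

```cpp
#include <cstdint>

// With -Wunused-label, `label1` is diagnosed because no goto targets it.
int32_t foo(int32_t a) {
label1:                    // warning: label 'label1' defined but not used
  int32_t result = a + 1;
  return result;
}
```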
Co-authored-by: Simon Camphausen <simon.camphausen@iml.fraunhofer.de>
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D118154
Prefix "affine-" to affine transform passes that were missing it -- to
avoid ambiguity and for uniformity. There were only two needed this.
Move mispaced affine coalescing test case file.
NFC.
Differential Revision: https://reviews.llvm.org/D118314
OwningRewritePatternList has been deprecated for ~10 months now, so we can remove
the leftover using directives at this point.
Differential Revision: https://reviews.llvm.org/D118287
These transformations already operate on memref operations (as part of
splitting up the standard dialect). Now that the operations have moved,
it's time for these transformations to move as well.
Differential Revision: https://reviews.llvm.org/D118285
We already convert to BitVector internally, and other APIs (namely Operation::eraseOperands)
already use BitVector as well. Switching over provides a common format across these
APIs and also reduces the number of format conversions necessary.
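For example (an illustrative snippet, not code from this change), erasing several operands can go through a single BitVector-based call:

```cpp
#include "llvm/ADT/BitVector.h"
#include "mlir/IR/Operation.h"

// Erase operands 0 and 2 of `op` in one call (assumes `op` has at least
// three operands), using the BitVector format Operation::eraseOperands takes.
static void eraseFirstAndThirdOperand(mlir::Operation *op) {
  llvm::BitVector toErase(op->getNumOperands());
  toErase.set(0);
  toErase.set(2);
  op->eraseOperands(toErase);
}
```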
Fixes #53325
Differential Revision: https://reviews.llvm.org/D118083
The option parser currently does not properly handle nested options, meaning that
in some cases we can print pass pipelines that we can't actually parse back in. For example,
per #52885 we currently can't parse inliner pipelines that have nested pipeline strings.
This commit adds handling for string values (e.g. "...") and nested options
(e.g. `foo{baz{bar=10 pizza=11}}`).
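As a sketch of the round-trip this enables (using the inliner's `default-pipeline` option as the nested-pipeline example; error handling kept minimal, and the exact pipeline string is illustrative):

```cpp
#include "mlir/Pass/PassManager.h"
#include "mlir/Pass/PassRegistry.h"
#include "llvm/Support/raw_ostream.h"

// Parse a pipeline whose inliner pass carries a nested pipeline string, then
// print it back; the printed form should now be parseable again.
static mlir::LogicalResult roundTripPipeline(mlir::MLIRContext &ctx) {
  mlir::PassManager pm(&ctx);
  if (mlir::failed(mlir::parsePassPipeline(
          "inline{default-pipeline=canonicalize}", pm)))
    return mlir::failure();
  pm.printAsTextualPipeline(llvm::outs());
  return mlir::success();
}
```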
Fixes #52885
Differential Revision: https://reviews.llvm.org/D118078
Rationale:
Demonstrates the maximum tile size allowed for the f32 <= bf16 x bf16 op
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D118277
This diff modifies the tablegen specification and code generation for
BitEnumAttr attributes in MLIR Operation Definition Specification (ODS) files.
Specifically:
- there is a new tablegen class for "none" values (i.e. no bits set)
- single-bit enum cases are specified via bit index (i.e. [0, 31]) instead of
the resulting enum integer value
- there is a new tablegen class to represent a "grouped" bitwise OR of other
enum values
This diff is intended as an initial step towards improving "fastmath"
optimization support in MLIR, to allow more precise control of whether certain
floating point optimizations are applied in MLIR passes. "Fast" math options
for floating point MLIR operations would (following subsequent RFC and
discussion) be specified by using the improved enum bit support in this diff.
For example, a "fast" enum value would act as an alias for a group of other
cases (e.g. finite-math-only, no-signed-zeros, etc.), in a way that is similar
to support in C/C++ compilers (clang, gcc).
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D117029
This is part of splitting up the standard dialect. The move makes sense anyway,
given that the memref dialect already holds memref.atomic_rmw, which is the non-region
sibling operation of std.generic_atomic_rmw (the relationship is even clearer given that
they have nearly the same description modulo how they represent the inner computation).
Differential Revision: https://reviews.llvm.org/D118209
The GPU dialect currently contains an explicit reference to LLVMFuncOp
during verification to handle the situation where the kernel has already been
converted. This commit changes that reference to instead use FunctionOpInterface,
which has two main benefits:
* It allows for removing an otherwise unnecessary dependency on the LLVM dialect
* It removes hardcoded assumptions about the lowering path and use of the GPU dialect
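A rough sketch of the interface-based check, written against a recent MLIR tree (the helper name and surrounding lookup are illustrative, not the actual verifier code):

```cpp
#include "mlir/Interfaces/FunctionInterfaces.h"
#include <optional>

// Query the referenced kernel through FunctionOpInterface instead of casting
// to a concrete LLVMFuncOp, so any function-like op is handled uniformly.
static std::optional<unsigned> getKernelArgCount(mlir::Operation *kernel) {
  if (auto fn = llvm::dyn_cast<mlir::FunctionOpInterface>(kernel))
    return fn.getNumArguments();
  return std::nullopt;
}
```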
Differential Revision: https://reviews.llvm.org/D118172
This reverts commit ef82063207.
- It conflicts with the existing llvm::size in STLExtras, which will now
never be called.
- Calling it without llvm:: breaks C++17 compat
This is for compatibility with existing bufferization passes. Also clean up memref type generation a bit.
Differential Revision: https://reviews.llvm.org/D118243
This revision adds enough support to allow InlineAsmOp to work properly with indirect memory constraints "*m".
These require an explicit "elementtype" TypeAttr on the operands, which needs to be provided to pass LLVM verification.
Reviewed By: bkramer
Differential Revision: https://reviews.llvm.org/D118006
The existing implementation called DenseMap::insert, which is a no-op if the map already contains an entry with the same key.
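A small self-contained illustration of the pitfall (not the code from the patch):

```cpp
#include "llvm/ADT/DenseMap.h"

void example() {
  llvm::DenseMap<int, int> map;
  map.insert({7, 1});
  map.insert({7, 2}); // No-op: the existing {7, 1} entry is kept.
  map[7] = 2;         // Overwrites the mapped value as intended.
}
```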
Differential Revision: https://reviews.llvm.org/D118165
If we are extracting an element, it is more useful to push the index_cast past the
extraction. This increases the chance that the tensor.extract can be evaluated at
compile time.
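A rough sketch of such a rewrite, written against recent MLIR APIs (this illustrates the transformation, not the actual pattern added by this patch):

```cpp
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/PatternMatch.h"

// Rewrite extract(index_cast(%t))[%i] into index_cast(extract(%t)[%i]).
struct ExtractOfIndexCast
    : public mlir::OpRewritePattern<mlir::tensor::ExtractOp> {
  using OpRewritePattern::OpRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(mlir::tensor::ExtractOp extractOp,
                  mlir::PatternRewriter &rewriter) const override {
    auto castOp =
        extractOp.getTensor().getDefiningOp<mlir::arith::IndexCastOp>();
    if (!castOp)
      return mlir::failure();
    // Extract from the cast's source tensor, then cast the scalar element.
    mlir::Value element = rewriter.create<mlir::tensor::ExtractOp>(
        extractOp.getLoc(), castOp.getIn(), extractOp.getIndices());
    rewriter.replaceOpWithNewOp<mlir::arith::IndexCastOp>(
        extractOp, extractOp.getType(), element);
    return mlir::success();
  }
};
```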
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D118204
There is not much benefit to reshaping a tensor.from_elements vs. reloading it.
Updated to propagate shape manipulations into the output type of
tensor.from_elements.
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D118201
Add methods to:
- get the block argument that is tied to an opOperand
- get the yield value that is tied to an output opOperand
Differential Revision: https://reviews.llvm.org/D118085
Fusion of reshape ops by linearization incorrectly inverted the
indexing map before linearizing dimensions. This led to incorrect
indexing maps being used in the fused operation.
Differential Revision: https://reviews.llvm.org/D117908
This pattern is not written to handle operations that have `linalg.index`
operations in their body, i.e., operations that have index semantics.
Differential Revision: https://reviews.llvm.org/D117856
This is mostly a copy of the existing tensor.from_elements bufferization. Once TensorInterfaceImpl.cpp is moved to the tensor dialect, the existing rewrite pattern can be deleted.
Differential Revision: https://reviews.llvm.org/D117775
This is mostly a copy of the existing tensor.generate bufferization. Once TensorInterfaceImpl.cpp is moved to the tensor dialect, the existing rewrite pattern can be deleted.
Differential Revision: https://reviews.llvm.org/D117770