Some rewriters take more iterations to converge; add a parameter to override
the built-in maximum iteration count.
Fix PR48073.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D91553
This replaces the old type decomposition logic that was previously mixed
into bufferization, and makes it easily accessible.
This also deletes TestFinalizingBufferize, because after we remove the type
decomposition, it doesn't do anything that is not already provided by
func-bufferize.
Differential Revision: https://reviews.llvm.org/D90899
This adds a simple definition of a "workshare loop" operation for
the OpenMP MLIR dialect, excluding the "reduction" and "allocate"
clauses and without a custom parser and pretty printer.
The schedule clause also does not yet accept the modifiers that are
permitted in OpenMP 5.0.
Co-authored-by: Kiran Chandramohan <kiran.chandramohan@arm.com>
Reviewed By: ftynse, clementval
Differential Revision: https://reviews.llvm.org/D86071
scf.parallel is currently not a good fit for tiling on tensors.
Instead, provide a path to parallelism directly through scf.for.
For now, this transformation ignores the distribution scheme and always does a block-cyclic mapping (where block is the tile size).
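A rough sketch of the block-cyclic mapping on scf.for follows. All names are illustrative placeholders (this is not the literal pass output), a zero lower bound is assumed, and tile clamping at the boundary is left as `...` in the style used by the async example further below:
```
// Hypothetical sketch: tiles of size %ts are assigned block-cyclically to
// %num_procs processors; processor %pid visits tiles %pid, %pid + %num_procs, ...
%offset = muli %pid, %ts : index
%stride = muli %num_procs, %ts : index
scf.for %tile_start = %offset to %ub step %stride {
  // End of the current tile, clamped to %ub.
  %tile_end = ... : index
  scf.for %i = %tile_start to %tile_end step %c1 {
    "do_some_compute"(%i) : (index) -> ()
  }
}
```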
Differential revision: https://reviews.llvm.org/D90475
Motivated by a refactoring in the new sparse code (yet to be merged), this avoids some lengthy code duplication.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D91465
Support multi-dimension vector for InsertMap/ExtractMap op and update the
transformations. Currently, the relation between IDs and dimensions is implicitly
deduced from the types. We can then calculate an AffineMap based on it. In the
future the AffineMap could be part of the operation itself.
Differential Revision: https://reviews.llvm.org/D90995
Depends On D89958
1. Adds `async.group`/`async.await_all` to group together multiple async tokens/values.
2. Rewrites the scf.parallel operation into multiple concurrent async.execute operations over non-overlapping subranges of the original loop.
Example:
```
scf.parallel (%i, %j) = (%lbi, %lbj) to (%ubi, %ubj) step (%si, %sj) {
  "do_some_compute"(%i, %j): () -> ()
}
```
Converted to:
```
%c0 = constant 0 : index
%c1 = constant 1 : index
// Compute block sizes for each induction variable.
%num_blocks_i = ... : index
%num_blocks_j = ... : index
%block_size_i = ... : index
%block_size_j = ... : index
// Create an async group to track async execute ops.
%group = async.create_group
scf.for %bi = %c0 to %num_blocks_i step %c1 {
  %block_start_i = ... : index
  %block_end_i = ... : index
  scf.for %bj = %c0 to %num_blocks_j step %c1 {
    %block_start_j = ... : index
    %block_end_j = ... : index
    // Execute the body of the original parallel operation for the current
    // block.
    %token = async.execute {
      scf.for %i = %block_start_i to %block_end_i step %si {
        scf.for %j = %block_start_j to %block_end_j step %sj {
          "do_some_compute"(%i, %j): () -> ()
        }
      }
    }
    // Add the produced async token to the group.
    async.add_to_group %token, %group
  }
}
// Await completion of all async.execute operations.
async.await_all %group
```
In this example, the outer loops launch the inner block-level loops as separate
async.execute operations, which are executed concurrently.
At the end, it waits for the completion of all async.execute operations.
Reviewed By: ftynse, mehdi_amini
Differential Revision: https://reviews.llvm.org/D89963
The index type does not have a bitsize, and hence the size of corresponding allocations cannot be computed. Instead, the promotion pass now has an explicit option to specify the size of the index type.
Differential Revision: https://reviews.llvm.org/D91360
This exposes a hook to configure legality of operations such that only
`scf.parallel` operations that have mapping attributes are marked as
illegal. Consequently, the transformation can now also be applied to
mixed forms.
Differential Revision: https://reviews.llvm.org/D91340
A recent refactoring removed the need to interleave verifier passes and instead opted to verify during the normal execution of passes instead. As such, the old verify pass is no longer necessary and can be removed.
Differential Revision: https://reviews.llvm.org/D91212
This removes the need to have an explicit `cast<>` given that we always know it `isa` instance of the interface.
Differential Revision: https://reviews.llvm.org/D91304
Some users have native C++ data types that correspond to floating point values stored within a DenseElementsAttr for which there is no corresponding builtin C++ data type (e.g. bfloat16/half/etc.). This revision allows such users to use those native types directly, and removes the need to go through APFloat when the much faster native value path is available.
Differential Revision: https://reviews.llvm.org/D91402
- Move isSupportedMemRefType() to ConvertToLLVMPatterns and check if the
memref element type is supported there.
Differential Revision: https://reviews.llvm.org/D91374
The previous code defined it as allocating a new memref for its result.
However, this is not how it is treated by the dialect conversion framework,
which does the equivalent of inserting and folding it away internally
(even independent of any canonicalization patterns that we have
defined).
The semantics as they were previously written were also very
constraining: Nontrivial analysis is needed to prove that the new
allocation isn't needed for correctness (e.g. to avoid aliasing).
By removing those semantics, we avoid losing that information.
Differential Revision: https://reviews.llvm.org/D91382
We lower them to a std.global_memref (uniqued by constant value) + a
std.get_global_memref to produce the corresponding memref value.
This allows removing Linalg's somewhat hacky lowering of tensor
constants, now that std properly supports this.
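For illustration, a rough before/after sketch (the global's symbol name is hypothetical; the actual uniquing scheme is up to the lowering):
```
// Before: a constant tensor value inside a function.
%cst = constant dense<[1.0, 2.0]> : tensor<2xf32>

// After (sketch): a module-level global, uniqued by value, plus a retrieval of
// the corresponding memref value inside the function.
global_memref "private" constant @__constant_2xf32 : memref<2xf32> = dense<[1.0, 2.0]>
%0 = get_global_memref @__constant_2xf32 : memref<2xf32>
```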
Differential Revision: https://reviews.llvm.org/D91306
- Remove the default valued arguments from these functions.
- Besides FuncOp, it looks like no other in-tree op is using these functions.
Differential Revision: https://reviews.llvm.org/D91369
The tokens are already handled by the lexer. This revision exposes them
through the parser interface.
This revision also adds missing functions for question mark parsing and
completes the list of valid punctuation tokens in the documentation.
Differential Revision: https://reviews.llvm.org/D90907
Add an ODS-backed generator of default builders. This currently does not
support operations with attribute arguments, for which the builder is
just ignored. Attribute support will be introduced separately for
builders and accessors.
Default builders are always generated with the same number of result and
operand groups as the ODS specification, i.e. one group per each operand
or result. Optional elements accept None but cannot be omitted. Variadic
groups accept iterable objects and cannot be replaced with a single
object.
For some operations, it is possible to infer the result type given the
traits, but most traits rely on inline pieces of C++ that we cannot
(yet) forward to Python bindings. Since the Ops where the inference is
possible (having the `SameOperandAndResultTypes` trait or
`TypeMatchesWith` without transform field) are a small minority, they
also require the result type to make the builder syntax more consistent.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D91190
This change does two main things:
1) An operation might have multiple dependences to the same
producer. Not tracking them correctly can result in incorrect code
generation with fusion. To rectify this, the dependence tracking
needs to also record the operand number in the consumer.
2) Improve the logic used to find the fused loops making it easier to
follow. The only constraint for fusion is that linalg ops (on
buffers) have update semantics for the result. Fusion should be
such that each iteration of the fused loop (which is also a
tiled loop) touches only one (disjoint) tile of the output. This
could be relaxed by allowing for recomputation, which is the default
when operands are tensors, or can be made legal with promotion of
the fused view (in the future).
Differential Revision: https://reviews.llvm.org/D90579
Exposing the C versions of the methods of the sparse runtime support lib
through header files will enable using the same methods in an MLIR program
as well as a C++ program, which will simplify future benchmarking comparisons
(e.g. comparing MLIR generated code with eigen for Matrix Market sparse matrices).
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D91316
This CL integrates the new sparse annotations (heretofore merely added as fully
transparent attributes) more tightly with the generic linalg op in order to add
verification of the annotations' consistency as well as to make other
passes more aware of their presence (in the long run, rewriting rules must
preserve the integrity of the annotations).
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D91224
Previously, the textual form of the pass pipeline would implicitly nest;
here we opt for the explicit form instead, which is less surprising.
This also avoids asserting in the bindings when passing a pass pipeline
with incorrect nesting.
Differential Revision: https://reviews.llvm.org/D91233
The MLIR_ASYNCRUNTIME_EXPORT macro was being defined to be either
__declspec(dllexport) or __declspec(dllimport), depending on whether
mlir_c_runner_utils_EXPORTS is defined. The latter was a copy/paste
error and should have been mlir_async_runtime_EXPORTS.
Additionally, the uses of that macro in the .cpp file were unnecessary,
as only function declarations need to be exported, not their definitions.
Differential Revision: https://reviews.llvm.org/D91196
This adds getters for `llvm.align` and `llvm.noalias` strings that are used
as attribute names in the llvm dialect.
Differential Revision: https://reviews.llvm.org/D91166
This patch converts elementwise ops on tensors to linalg.generic ops
with the same elementwise op in the payload (except rewritten to
operate on scalars, obviously). This is a great form for later fusion to
clean up.
E.g.
```
// Compute: %arg0 + %arg1 - %arg2
func @f(%arg0: tensor<?xf32>, %arg1: tensor<?xf32>, %arg2: tensor<?xf32>) -> tensor<?xf32> {
%0 = addf %arg0, %arg1 : tensor<?xf32>
%1 = subf %0, %arg2 : tensor<?xf32>
return %1 : tensor<?xf32>
}
```
Running this through
`mlir-opt -convert-elementwise-to-linalg -linalg-fusion-for-tensor-ops` we get:
```
func @f(%arg0: tensor<?xf32>, %arg1: tensor<?xf32>, %arg2: tensor<?xf32>) -> tensor<?xf32> {
%0 = linalg.generic {indexing_maps = [#map0, #map0, #map0, #map0], iterator_types = ["parallel"]} ins(%arg0, %arg1, %arg2 : tensor<?xf32>, tensor<?xf32>, tensor<?xf32>) {
^bb0(%arg3: f32, %arg4: f32, %arg5: f32): // no predecessors
%1 = addf %arg3, %arg4 : f32
%2 = subf %1, %arg5 : f32
linalg.yield %2 : f32
} -> tensor<?xf32>
return %0 : tensor<?xf32>
}
```
So the elementwise ops on tensors have nicely collapsed into a single
linalg.generic, which is the form we want for further transformations.
Differential Revision: https://reviews.llvm.org/D90354
This patch adds an `ElementwiseMappable` trait as discussed in the RFC
here:
https://llvm.discourse.group/t/rfc-std-elementwise-ops-on-tensors/2113/23
This trait can power a number of transformations and analyses.
A subsequent patch adds a convert-elementwise-to-linalg pass that exhibits
how this trait allows writing generic transformations.
See https://reviews.llvm.org/D90354 for that patch.
This trait slightly changes some verifier messages, but the diagnostics
are usually about as good. I fiddled with the ordering of the trait in
the .td file trait lists to minimize the changes here.
Differential Revision: https://reviews.llvm.org/D90731
Introduce an ODS/Tablegen backend producing Op wrappers for Python bindings
based on the ODS operation definition. Usage:
mlir-tblgen -gen-python-op-bindings -Iinclude <path/to/Ops.td> \
-bind-dialect=<dialect-name>
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D90960
Slicing, that is element access with a `[begin:end:step]` structure, is
a common Python idiom for sequence-like containers. It is also necessary
to support custom accessors for operations with variadic operands and
results (an operation can return the slice of its operands that corresponds
to a given variadic group).
Add generic utility to support slicing in Python bindings and use it
for operation operands and results.
Depends On D90923
Reviewed By: stellaraccident, mehdi_amini
Differential Revision: https://reviews.llvm.org/D90936
VectorInsertDynamicOp in the SPIR-V dialect: add a conversion from
vector.insertelement to the SPIR-V VectorInsertDynamicOp.
Differential Revision: https://reviews.llvm.org/D90927
This allows us to omit one level of indirection when querying
the information from the underlying attribute.
Reviewed By: hanchung, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D91080
The pass combines the patterns of the ExpandAtomic, ExpandMemRefReshape, and
StdExpandDivs passes. It is meant to legalize STD for conversion to LLVM.
Differential Revision: https://reviews.llvm.org/D91082
- Change the syntax for FuncOp to be `func <visibility>? @name` instead of printing the
visibility in the attribute dictionary (see the example below).
- Since printFunctionLikeOp() and parseFunctionLikeOp() are also used by other
operations, make the "inline visibility" an opt-in feature.
- Updated unit test to use and check the new syntax.
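A brief sketch of the change (the helper name is illustrative):
```
// Old form: visibility kept in the attribute dictionary.
func @helper() attributes {sym_visibility = "private"} {
  return
}

// New form: visibility printed inline; public visibility remains implicit.
func private @helper() {
  return
}
```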
Differential Revision: https://reviews.llvm.org/D90859
* Wires them in the same way that peer-dialect test passes are registered.
* Fixes the build for -DLLVM_INCLUDE_TESTS=OFF.
Differential Revision: https://reviews.llvm.org/D91022
I ran into this pattern when converting elementwise ops like
`addf %arg0, %arg0 : tensor<?xf32>` to linalg. Redundant arguments can
also easily arise from linalg-fusion-for-tensor-ops.
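As a sketch of the new canonicalization (reusing the `#map0` alias style of the earlier linalg.generic example), a generic with a repeated input collapses to one that takes the tensor only once:
```
// Before: %arg0 is passed twice and the payload uses two block arguments.
%0 = linalg.generic {indexing_maps = [#map0, #map0, #map0], iterator_types = ["parallel"]} ins(%arg0, %arg0 : tensor<?xf32>, tensor<?xf32>) {
^bb0(%a: f32, %b: f32):
  %1 = addf %a, %b : f32
  linalg.yield %1 : f32
} -> tensor<?xf32>

// After (sketch): the duplicate operand is dropped and both uses are rewired
// to the single remaining block argument.
%0 = linalg.generic {indexing_maps = [#map0, #map0], iterator_types = ["parallel"]} ins(%arg0 : tensor<?xf32>) {
^bb0(%a: f32):
  %1 = addf %a, %a : f32
  linalg.yield %1 : f32
} -> tensor<?xf32>
```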
Also, fix some small bugs in the logic in
LinalgStructuredOpsInterface.td.
Differential Revision: https://reviews.llvm.org/D90812
We were discussing on discord regarding the need for extension-based systems like Python to dynamically link against MLIR (or else you can only have one extension that depends on it). Currently, when I set that up, I piggy-backed off of the flag that enables build libLLVM.so and libMLIR.so and depended on libMLIR.so from the python extension if shared library building was enabled. However, this is less than ideal.
In the current setup, libMLIR.so exports both all symbols from the C++ API and the C-API. The former is a kitchen sink and the latter is curated. We should be splitting them and for things that are properly factored to depend on the C-API, they should have the option to *only* depend on the C-API, and we should build that shared library no matter what. Its presence isn't just an optimization: it is a key part of the system.
To do this right, I needed to:
* Introduce visibility macros into mlir-c/Support.h. These should work on both *nix and windows as-is.
* Create a new libMLIRPublicAPI.so with just the mlir-c object files.
* Compile the C-API with -fvisibility=hidden.
* Conditionally depend on the libMLIR.so from libMLIRPublicAPI.so if building libMLIR.so (otherwise, also links against the static libs and will produce a mondo libMLIRPublicAPI.so).
* Disable re-exporting of static library symbols that come in as transitive deps.
This gives us a dynamic linked C-API layer that is minimal and should work as-is on all platforms. Since we don't support libMLIR.so building on Windows yet (and it is not very DLL friendly), this will fall back to a mondo build of libMLIRPublicAPI.so, which has its uses (it is also the most size conscious way to go if you happen to know exactly what you need).
Sizes (release/stripped, Ubuntu 20.04):
Shared library build:
libMLIRPublicAPI.so: 121Kb
_mlir.cpython-38-x86_64-linux-gnu.so: 1.4Mb
mlir-capi-ir-test: 135Kb
libMLIR.so: 21Mb
Static build:
libMLIRPublicAPI.so: 5.5Mb (since this is a "static" build, this includes the MLIR implementation as non-exported code).
_mlir.cpython-38-x86_64-linux-gnu.so: 1.4Mb
mlir-capi-ir-test: 44Kb
Things like npcomp and circt which bring their own dialects/transforms/etc would still need the shared library build and code that links against libMLIR.so (since it is all C++ interop stuff), but hopefully things that only depend on the public C-API can just have the one narrow dep.
I spot checked everything with nm, and it looks good in terms of what is exporting/importing from each layer.
I'm not in a hurry to land this, but if it is controversial, I'll probably split off the Support.h and API visibility macro changes, since we should set that pattern regardless.
Reviewed By: mehdi_amini, benvanik
Differential Revision: https://reviews.llvm.org/D90824
This functionality is superseded by the BufferResultsToOutParams pass (see
https://reviews.llvm.org/D90071) for users that require buffers to be
out-params. That pass should be run immediately after all tensors are gone from
the program (before buffer optimizations and deallocation insertion), such as
immediately after a "finalizing" bufferize pass.
The -test-finalizing-bufferize pass now defaults to what used to be the
`allowMemrefFunctionResults=true` flag, and the
finalizing-bufferize-allowed-memref-results.mlir file is moved
to test/Transforms/finalizing-bufferize.mlir.
Differential Revision: https://reviews.llvm.org/D90778
The LinalgDependenceGraph and alias analysis provide the necessary analysis for the Linalg fusion on buffers case.
However this is not enough for linalg on tensors which require proper memory effects to play nicely with DCE and other transformations.
This revision adds side effects to Linalg ops that were previously missing and has 2 consequences:
1. one example in the copy removal pass now fails since the linalg.generic op has side effects and the pass does not perform alias analysis / distinguish between reads and writes.
2. a few examples in fusion-tensor.mlir need to return the resulting tensor otherwise DCE automatically kicks in as part of greedy pattern application.
Differential Revision: https://reviews.llvm.org/D90762
VectorExtractDynamicOp in the SPIR-V dialect: add a conversion from
vector.extractelement to the SPIR-V VectorExtractDynamicOp.
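A minimal sketch of the conversion, assuming a 1-D vector and an i32 index:
```
// vector dialect:
%e = vector.extractelement %v[%idx : i32] : vector<4xf32>
// SPIR-V dialect (sketch):
%e = spv.VectorExtractDynamic %v[%idx] : vector<4xf32>, i32
```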
Differential Revision: https://reviews.llvm.org/D90679
Per the spec, vector sizes 8 and 16 are allowed when the Vector16 capability is present.
This change relaxes the vector size restriction to accept these sizes.
Differential Revision: https://reviews.llvm.org/D90683
- The ODS description was using an old syntax that was updated during the review.
This fixes the ODS description to match the current syntax.
Differential Revision: https://reviews.llvm.org/D90797
- Eliminate duplicated information about mapping from memref -> its descriptor fields
by consolidating that mapping in two functions: getMemRefDescriptorFields and
getUnrankedMemRefDescriptorFields.
- Change convertMemRefType() and convertUnrankedMemRefType() to use these
functions.
- Remove convertMemrefSignature and convertUnrankedMemrefSignature.
Differential Revision: https://reviews.llvm.org/D90707
The new construct represents a generic loop with two regions: one executed
before the loop condition is verified and another executed after it. This construct
can be used to express both a "while" loop and a "do-while" loop, depending on
where the main payload is located. It is intended as an intermediate
abstraction for lowering, which will be added later. This form is relatively
easy to target from higher-level abstractions and supports transformations such
as loop rotation and LICM.
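A hedged sketch of the form (the payload op names are placeholders):
```
%res = scf.while (%arg = %init) : (i32) -> i32 {
  // "Before" region: executed first on every iteration; decides whether to
  // continue and forwards values to the "after" region.
  %cond = "payload.condition"(%arg) : (i32) -> i1
  scf.condition(%cond) %arg : i32
} do {
^bb0(%arg1: i32):
  // "After" region: the main body for a "while"-style loop; yields the values
  // for the next iteration.
  %next = "payload.body"(%arg1) : (i32) -> i32
  scf.yield %next : i32
}
```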
Differential Revision: https://reviews.llvm.org/D90255
This delegates control of the buffering to the user of the API. This
seems like a safer option, as messages are immediately propagated to the
user, which may lead to less surprising behavior during debugging, for
instance.
In terms of performance, a user can add a buffered stream on the other
side of the callback.
Differential Revision: https://reviews.llvm.org/D90726
This is exposing the basic functionalities (create, nest, addPass, run) of
the PassManager through the C API in the new header: `include/mlir-c/Pass.h`.
In order to exercise it in the unit-test, a basic TableGen backend is
also provided to generate a simple C wrapper around the pass
constructor. It is used to expose the libTransforms passes to the C API.
Reviewed By: stellaraccident, ftynse
Differential Revision: https://reviews.llvm.org/D90667
- Verify that attributes parsed using a custom parser do not have duplicates.
- If there are duplicates in the attribute dictionary in the input, they get caught during
dictionary parsing.
- This check verifies that there is no duplication between the parsed dictionary and any
attributes that might be added by the custom parser (or when the custom parsing code
adds duplicate attributes).
- Fixes https://bugs.llvm.org/show_bug.cgi?id=48025
Differential Revision: https://reviews.llvm.org/D90502
Previously, they were only defined for `FuncOp`.
To support this, `FunctionLike` needs a way to get an updated type
from the concrete operation. This adds a new hook for that purpose,
called `getTypeWithoutArgsAndResults`.
For now, `FunctionLike` continues to assume the type is
`FunctionType`, and concrete operations that use another type can hide
the `getType`, `setType`, and `getTypeWithoutArgsAndResults` methods.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D90363
The OpenMP dialect include is only needed for translation
and is not required in LLVM dialect.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D90510
This is error-prone behavior; I frequently have ~20 min debugging sessions when I hit
an unexpected implicit nesting. This default makes the C++ API safer for users.
Depends On D90669
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D90671
This simplifies a few parts of the pass manager; in particular, we no longer add as many
verifier passes as there are passes in the pipeline, and we can now enable/disable the
verifier after the fact on an already built PassManager.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D90669
This header was an early attempt at a crude C API for bindings,
but it isn't used and is redundant with the new API. At this point it only
contributes to confusion.
Differential Revision: https://reviews.llvm.org/D90643
This patch renames AffineParallelNormalize to AffineLoopNormalize to make it
more generic and able to hold more loop normalization transformations in
the future for affine.for and affine.parallel ops. Eventually, it could also be
extended to support scf.for and scf.parallel. As a starting point for affine.for,
the patch also adds support for removing single-iteration affine.for ops to
the pass.
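A small sketch of the single-iteration case (op names are illustrative; the induction variable is shown replaced by a constant since the lower bound is zero):
```
// Before: a single-iteration loop.
affine.for %i = 0 to 1 {
  "test.foo"(%i) : (index) -> ()
}

// After (sketch): the loop is removed and its body promoted, with %i replaced
// by its only value.
%c0 = constant 0 : index
"test.foo"(%c0) : (index) -> ()
```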
Differential Revision: https://reviews.llvm.org/D90267
This revision refactors the base Op/AbstractOperation classes to reduce the amount of generated code size when defining a new operation. The current scheme involves taking the address of functions defined directly on Op and Trait classes. This is problematic because even when these functions are empty/unused, we still end up with these functions being defined in the main executable. In this revision, we switch to using SFINAE and template type filtering to remove functions that are not needed/used. For example, if an operation does not define a custom `print` method, we shouldn't define a templated `printAssembly` method for it. The same applies to parsing/folding/verification/etc. This dropped MLIR code size for a large downstream library by ~10% (~1 mb in an opt build).
Differential Revision: https://reviews.llvm.org/D90196
- Add standard dialect operations to define global variables with memref types and to
retrieve the memref for a named global variable.
- Extend unit tests to test verification for these operations.
Differential Revision: https://reviews.llvm.org/D90337
This is the most basic possible finalizing bufferization pass, which I
also think is sufficient for most new use cases. The more concentrated
nature of this pass also greatly clarifies the invariants that it
requires on its input to safely transform the program (see the
pass description in Passes.td).
With this pass, I have now upstreamed practically all of the
bufferizations from npcomp (the exception being std.constant, which can
be upstreamed when std.global_memref lands:
https://llvm.discourse.group/t/rfc-global-variables-in-mlir/2076/16 )
Differential Revision: https://reviews.llvm.org/D90205
Leaking macros isn't a good practice when defining headers. This
requires duplicating the macro definition in every header, but
that seems like a better tradeoff right now.
Differential Revision: https://reviews.llvm.org/D90633
This pass allows removing getResultConversionKind from
BufferizeTypeConverter. It replaces the AppendToArgumentsList
functionality. As far as I could tell, the only use of this functionality
is to perform the transformation that is implemented in this pass.
Future patches will remove the getResultConversionKind machinery from
BufferizeTypeConverter, but sending this patch for individual review for
clarity.
Differential Revision: https://reviews.llvm.org/D90071
This commit adds a new library that merges/combines a number of spv
modules into a combined one. The library has a single entry point:
combine(...).
To combine a number of MLIR spv modules, we move all the module-level ops
from all the input modules into one big combined module. To that end, the
combination process can proceed in 2 phases:
(1) resolving conflicts between pairs of ops from different modules
(2) deduplicating equivalent ops/sub-ops in the merged module (TODO).
This patch implements only the first phase.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D90477
The bufferization patterns are moved to the .cpp file, which is
preferred in the codebase when it makes sense.
The LinalgToStandard patterns are kept in a header because they are
expected to be used individually. However, they are moved to
LinalgToStandard.h which is the file corresponding to where they are
defined.
This also removes TensorCastOpConverter, which is handled by
populateStdBufferizePatterns now. Eventually, the constant op lowering
will be handled as well, but there are currently holdups on moving
it (see https://reviews.llvm.org/D89916).
Differential Revision: https://reviews.llvm.org/D90254