This commit starts a new pass and patterns for converting Linalg
named ops to generic ops. This enables us to leverage the flexibility
of generic ops during transformations. Right now only linalg.conv
is supported; others will be added when useful.
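For illustration, here is roughly what the generic form looks like compared to a named op; this is a simple elementwise sketch rather than the conv pattern itself, and the syntax is approximate:
```
#id = affine_map<(d0, d1) -> (d0, d1)>
// A generic op spells out what a named op implies: the indexing maps, the
// iterator types, and the scalar computation in the region, all of which a
// transformation can inspect and modify directly.
linalg.generic {indexing_maps = [#id, #id],
                iterator_types = ["parallel", "parallel"]}
    ins(%A : memref<?x?xf32>)
   outs(%B : memref<?x?xf32>) {
^bb0(%a: f32, %b: f32):
  %0 = addf %a, %b : f32
  linalg.yield %0 : f32
}
```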
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D91357
For intrinsics with multiple returns where one or more operands are overloaded, the overloaded type is inferred from the corresponding field of the resulting struct, instead of accessing the result directly.
As such, the hasResult parameter of LLVM_IntrOpBase (and derived classes) is replaced with numResults. TableGen for intrinsics is also updated to populate this field with the total number of results.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D91680
This allows for operations that exclusively affect symbol operations to better describe their side effects.
Differential Revision: https://reviews.llvm.org/D91581
Rationale:
Make sure preconditions are already tested during verification.
Currently, the only way a sparse rewriting rule can fail is if
(1) the linalg op does not have sparse annotations, or
(2) a yet-to-be-handled operation is encountered inside the op.
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D91748
Refactoring/clean-up step needed to add support for producer-consumer fusion
with multi-store producer loops and, in general, to implement more general
loop fusion strategies in Affine. It introduces the following changes:
- AffineLoopFusion pass now uses loop fusion utilities more broadly to compute
fusion legality (canFuseLoops utility) and perform the fusion transformation
(fuseLoops utility).
- Loop fusion utilities have been extended to deal with AffineLoopFusion
requirements and assumptions while preserving the current functionality of
both the loop fusion utilities and the AffineLoopFusion pass within a unified
implementation.
'FusionStrategy' has been introduced for this purpose and, in the future, it
will allow us to have a single loop fusion core implementation that will produce
different fusion outputs depending on the strategy used.
- Improve separation of concerns for legality and profitability analysis:
'isFusionProfitable' no longer filters out illegal scenarios that 'canFuse'
didn't detect, or the other way around. 'canFuse' now takes loop dependences
into account to determine the fusion loop depth (producer-consumer fusion only).
- As a result, maximal fusion now doesn't require any profitability analysis.
- Slices are now computed only once and reused across the legality, profitability
and fusion transformation steps (producer-consumer).
- Refactor some utilities and remove redundant copies of them.
This patch is NFCI and should preserve the existing functionality of both the
AffineLoopFusion pass and the affine fusion utilities.
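For reference, the producer-consumer case handled by canFuseLoops/fuseLoops looks roughly as follows (illustrative IR, not taken from the patch):
```
// Producer loop writes %buf, consumer loop reads it.
affine.for %i = 0 to 100 {
  %v = affine.load %in[%i] : memref<100xf32>
  affine.store %v, %buf[%i] : memref<100xf32>
}
affine.for %j = 0 to 100 {
  %w = affine.load %buf[%j] : memref<100xf32>
  affine.store %w, %out[%j] : memref<100xf32>
}

// After producer-consumer fusion at depth 1, both bodies live in the same
// loop and the %buf accesses become local to each iteration.
affine.for %i = 0 to 100 {
  %v = affine.load %in[%i] : memref<100xf32>
  affine.store %v, %buf[%i] : memref<100xf32>
  %w = affine.load %buf[%i] : memref<100xf32>
  affine.store %w, %out[%i] : memref<100xf32>
}
```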
Reviewed By: andydavis1, bondhugula
Differential Revision: https://reviews.llvm.org/D90798
This commit does the renaming mentioned in the title in order to bring
the 'spv' dialect closer to the MLIR naming conventions.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D91715
Make the interface match that of ConvertToLLVMPattern::getDataPtr() (to be removed in a separate change).
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D91599
The side effect infrastructure is based on the Effect and Resource class
templates, whose instantiations are constructed as thread-local singletons.
With this scheme, it is impossible to further parameterize either of those,
or the EffectInstance class that contains pointers to Effect and Resource
instances. Such a parameterization is necessary to express more detailed side
effects, e.g. those of a loop or a function call with affine operations inside,
where it is possible to precisely specify the slices of accessed buffers.
Include an additional Attribute in the EffectInstance class for further
parameterization. This allows leveraging the dialect-specific
registration and uniquing capabilities of the attribute infrastructure
without requiring Effect or Resource instantiations to be attached to a
dialect themselves.
Split out the generic part of the side effect Tablegen classes into a
separate file to avoid generating built-in MemoryEffect interfaces when
processing any .td file that includes SideEffectInterfaceBase.td.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D91493
- Add `mlirElementsAttrGetType` C API.
- Add `def_buffer` binding to PyDenseElementsAttribute.
- Implement the protocol to access the buffer.
Differential Revision: https://reviews.llvm.org/D91021
* I had missed the note about "Standard size" in the docs. On Windows, the 'l' types are 32-bit.
* This fixes the only failing MLIR-Python test on Windows.
Differential Revision: https://reviews.llvm.org/D91283
On some platforms (like WebAssembly), alignof(mlir::AttributeStorage) is 4 instead of 8. As a result, the program crashes since PointerLikeTypeTraits<mlir::Attribute>::NumLowBitsAvailable is 3.
So I explicitly set the alignment of mlir::AttributeStorage to 64 bits, and set PointerLikeTypeTraits<mlir::Attribute>::NumLowBitsAvailable according to it.
I also fixed another related error (alignof(NamedAttribute) -> alignof(DictionaryAttributeStorage)) based on the reviewers' comments.
Reviewed By: dblaikie, rriddle
Differential Revision: https://reviews.llvm.org/D91062
As discussed in https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020
this CL is the start of sparse tensor compiler support in MLIR. Starting with a
"dense" kernel expressed in the Linalg dialect together with per-dimension
sparsity annotations on the tensors, the compiler automatically lowers the
kernel to sparse code using the methods described in Fredrik Kjolstad's thesis.
Many details are still TBD. For example, the sparse "bufferization" is purely
done locally since we don't have a global solution for propagating sparsity
yet. Furthermore, code to input and output the sparse tensors is missing.
Nevertheless, with some hand modifications, the generated MLIR can be
easily converted into runnable code already.
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D90994
std.alloc only supports memrefs with identity layout, which means we can simplify the lowering to LLVM and compute strides only from (static and dynamic) sizes.
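As a small sketch of the invariant this relies on (illustrative sizes, not from the patch), with an identity layout the strides are just the suffix products of the sizes, so no layout map needs to be evaluated:
```
// For memref<?x4x8xf32> with the default identity layout, the strides are
// [4 * 8, 8, 1] = [32, 8, 1]; the dynamic size %d only enters the total
// allocation size.
%m = alloc(%d) : memref<?x4x8xf32>
```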
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D91549
This commit does the renaming mentioned in the title in order to bring
the `spv` dialect closer to the MLIR naming conventions.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D91609
This utility function is helpful for dialect-specific builders that need
to access the context through the location, where the location itself may be
either provided as an argument or recovered from the implicit location stack.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D91623
The shape of the result of a dynamic_tensor_from_elements is defined via its
result type and operands. We already fold dim operations when they reference
one of the statically sized dimensions. Now, also fold dim on the dynamically
sized dimensions by picking the corresponding operand.
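A minimal sketch of the new fold (illustrative names, approximate syntax):
```
%c0 = constant 0 : index
%t = dynamic_tensor_from_elements %m, %n {
^bb0(%i : index, %j : index):
  %e = ... : f32
  yield %e : f32
} : tensor<?x?xf32>
// Dimension 0 is dynamic, so the dim op now folds to the matching operand %m.
%d0 = dim %t, %c0 : tensor<?x?xf32>
```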
Differential Revision: https://reviews.llvm.org/D91616
It may be necessary for interface methods to process or return variables with
the interface class type, in particular for attribute and type interfaces that
can return modified attributes and types that implement the same interface.
However, the code generated by ODS in this case would not compile because the
signature (and the body if provided) appear in the definition of the Model
class and before the interface class, which derives from the Model. Change the ODS
interface method generator to emit only method declarations in the Model class
itself, and emit method definitions after the interface class. Mark the methods as "inline"
since their definitions are still emitted in the header and are no longer
implicitly inline. Add a forward declaration of the interface class before the
Concept+Model classes to make the class name usable in declarations.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D91499
In ODS, attributes of an operation can be provided as a part of the "arguments"
field, together with operands. Such attributes are accepted by the op builder
and have accessors generated.
Implement similar functionality for ODS-generated op-specific Python bindings:
the `__init__` method now accepts arguments together with operands, in the same
order as in the ODS `arguments` field; the instance properties are introduced
to OpView classes to access the attributes.
This initial implementation accepts and returns instances of the corresponding
attribute class, and not the underlying values since the mapping scheme of the
value types between C++, C and Python is not yet clear. Default-valued
attributes are not supported as that would require Python to be able to parse
C++ literals.
Since attributes in ODS are tightly related to the actual C++ type system,
provide a separate Tablegen file with the mapping between ODS storage type for
attributes (typically, the underlying C++ attribute class), and the
corresponding class name. So far, this might look unnecessary since all names
match exactly, but this is not necessarily the case for non-standard,
out-of-tree attributes, which may also be placed in non-default namespaces or
Python modules. This also allows out-of-tree users to generate Python bindings
without having to modify the bindings generator itself. Storage type was
preferred over the Tablegen "def" of the attribute class because ODS
essentially encodes attribute _constraints_ rather than classes, e.g. there may
be many Tablegen "def"s in the ODS that correspond to the same attribute type
with additional constraints.
The presence of the explicit mapping requires a change in the .td file
structure: instead of just calling the bindings generator directly on the main
ODS file of the dialect, it becomes necessary to create a new file that
includes the main ODS file of the dialect and provides the mapping for
attribute types. Arguably, this approach offers better separability of the
Python bindings in the build system as the main dialect no longer needs to know
that it is being processed by the bindings generator.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D91542
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.
Differential Revision: https://reviews.llvm.org/D91572
This has been a long-standing TODO and cleans up a bit of IR/. It will also make it easier to move FuncOp out of IR/ at some point in the future. For now, Module.h and Function.h just forward to BuiltinDialect.h. These files will be removed in a follow-up.
Differential Revision: https://reviews.llvm.org/D91571
Some rewriters take more iterations to converge; add a parameter to override
the built-in maximum iteration count.
Fixes PR48073.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D91553
This replaces the old type decomposition logic that was previously mixed
into bufferization, and makes it easily accessible.
This also deletes TestFinalizingBufferize, because after we remove the type
decomposition, it doesn't do anything that is not already provided by
func-bufferize.
Differential Revision: https://reviews.llvm.org/D90899
The current code allows strided layouts, but the number of elements allocated is ambiguous. It could be either the number of elements in the shape (the current implementation), or the number of elements required to avoid indexing out-of-bounds with the given maps (which would require evaluating the layout map).
If we require canonical layouts, the two will be the same.
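For example (illustrative), with a non-identity layout the two interpretations diverge:
```
// The shape has 4 elements, but the layout map addresses offsets 0, 2, 4 and 6,
// so avoiding out-of-bounds accesses would require room for 7 elements.
%0 = alloc() : memref<4xf32, affine_map<(d0) -> (d0 * 2)>>
```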
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D91523
This adds a simple definition of a "workshare loop" operation for
the OpenMP MLIR dialect, excluding the "reduction" and "allocate"
clauses and without a custom parser and pretty printer.
The schedule clause also does not yet accept the modifiers that are
permitted in OpenMP 5.0.
Co-authored-by: Kiran Chandramohan <kiran.chandramohan@arm.com>
Reviewed By: ftynse, clementval
Differential Revision: https://reviews.llvm.org/D86071
The logic for vectors of booleans was missing. This patch adds the logic and
a test for it.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D91403
scf.parallel is currently not a good fit for tiling on tensors.
Instead provide a path to parallelism directly through scf.for.
For now, this transformation ignores the distribution scheme and always does a block-cyclic mapping (where block is the tile size).
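A rough sketch of why scf.for fits tensors (hypothetical 1-D example; the subtensor ops and tile arithmetic are only indicative): the loop threads the result tensor through iter_args, so every iteration produces a new value instead of writing to shared memory:
```
// Each iteration handles one tile of size %ts and inserts its result back
// into the accumulated tensor.
%res = scf.for %iv = %c0 to %dim step %ts
    iter_args(%acc = %init) -> (tensor<?xf32>) {
  %in_tile = subtensor %in[%iv] [%ts] [1] : tensor<?xf32> to tensor<?xf32>
  %out_tile = ... : tensor<?xf32>   // tiled computation on %in_tile
  %upd = subtensor_insert %out_tile into %acc[%iv] [%ts] [1]
      : tensor<?xf32> into tensor<?xf32>
  scf.yield %upd : tensor<?xf32>
}
```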
Differential revision: https://reviews.llvm.org/D90475
Motivated by a refactoring in the new sparse code (yet to be merged), this avoids some lengthy code duplication.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D91465
Support multi-dimensional vectors in the InsertMap/ExtractMap ops and update the
transformations. Currently the relation between IDs and dimensions is implicitly
deduced from the types; an AffineMap can then be calculated from it. In the
future the AffineMap could be part of the operation itself.
Differential Revision: https://reviews.llvm.org/D90995
Depends On D89958
1. Adds `async.group`/`async.await_all` to group together multiple async tokens/values.
2. Rewrites the scf.parallel operation into multiple concurrent async.execute operations over non-overlapping subranges of the original loop.
Example:
```
scf.parallel (%i, %j) = (%lbi, %lbj) to (%ubi, %ubj) step (%si, %sj) {
  "do_some_compute"(%i, %j): () -> ()
}
```
Converted to:
```
%c0 = constant 0 : index
%c1 = constant 1 : index

// Compute block sizes for each induction variable.
%num_blocks_i = ... : index
%num_blocks_j = ... : index
%block_size_i = ... : index
%block_size_j = ... : index

// Create an async group to track async execute ops.
%group = async.create_group

scf.for %bi = %c0 to %num_blocks_i step %c1 {
  %block_start_i = ... : index
  %block_end_i = ... : index

  scf.for %bj = %c0 to %num_blocks_j step %c1 {
    %block_start_j = ... : index
    %block_end_j = ... : index

    // Execute the body of the original parallel operation for the current
    // block.
    %token = async.execute {
      scf.for %i = %block_start_i to %block_end_i step %si {
        scf.for %j = %block_start_j to %block_end_j step %sj {
          "do_some_compute"(%i, %j): () -> ()
        }
      }
    }

    // Add the produced async token to the group.
    async.add_to_group %token, %group
  }
}

// Await completion of all async.execute operations.
async.await_all %group
```
In this example the outer loop launches the inner block-level loops as separate
async.execute operations, which will be executed concurrently.
At the end it waits for the completion of all async.execute operations.
Reviewed By: ftynse, mehdi_amini
Differential Revision: https://reviews.llvm.org/D89963
The index type does not have a bitsize and hence the size of corresponding allocations cannot be computed. Instead, the promotion pass now has an explicit option to specify the size of index.
Differential Revision: https://reviews.llvm.org/D91360
This exposes a hook to configure legality of operations such that only
`scf.parallel` operations that have mapping attributes are marked as
illegal. Consequently, the transformation can now also be applied to
mixed forms.
Differential Revision: https://reviews.llvm.org/D91340