This patch replaces the root-terminal vectorization approach implemented in the
Affine vectorizer with a topological order approach that vectorizes all the
operations within the target loop nest. These are the most important changes
introduced by the new algorithm:
* Removed tracking of root and terminal ops. Existing vectorization
functionality is preserved and extended so that loop nests without
root-terminal chains can be vectorized.
* Vectorizing a loop nest now only requires a single topological traversal.
* A new vector loop nest is built incrementally as vectorization proceeds. The
original scalar loop nest is kept intact, so no cloning guard is needed to
recover it if vectorization fails. This approach also simplifies the
challenging task of replacing a loop operation in the middle of vectorization
without invalidating the analysis information that depends on the original
loop.
* Vectorization of specific operations has been implemented as independent
steps, preparing them to be moved to a potential vectorization interface.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D97442
This allows storage instances to store data that isn't uniqued in the context, or to contain otherwise non-trivial logic, in the rare situations where that occurs. Storage instances with trivial destructors will still have their destructor skipped. A consequence of this is that the storage instance definition must be visible from the place that registers the type.
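As a rough illustration (the storage class and its payload below are hypothetical), a type storage can now own data that is not allocated in the context, such as a std::string, and rely on its destructor running when the context is destroyed:
```
#include "mlir/IR/TypeSupport.h"

#include <new>
#include <string>

using namespace mlir;

// Hypothetical storage owning heap data that is not uniqued in the context.
// Its non-trivial destructor now runs when the MLIRContext is destroyed;
// storages with trivial destructors are still skipped.
struct BlobTypeStorage : public TypeStorage {
  using KeyTy = std::string;

  BlobTypeStorage(const KeyTy &key) : blob(key) {}
  bool operator==(const KeyTy &key) const { return key == blob; }

  static BlobTypeStorage *construct(TypeStorageAllocator &allocator,
                                    const KeyTy &key) {
    // Because the destructor must be callable later, this definition has to be
    // visible where the owning type is registered (the addTypes<...>() call).
    return new (allocator.allocate<BlobTypeStorage>()) BlobTypeStorage(key);
  }

  std::string blob;
};
```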
Differential Revision: https://reviews.llvm.org/D98311
Data layout information makes it possible to answer questions about the size
and alignment properties of a type. It enables, among other things, the
generation of various linear memory addressing schemes for containers of
abstract types and deeper reasoning about vectors. This introduces the
subsystem for modeling data layouts in MLIR.
The data layout subsystem is designed to scale to MLIR's open type and
operation system. At the top level, it consists of attribute interfaces that
can be implemented by concrete data layout specifications; type interfaces that
should be implemented by types subject to data layout; operation interfaces
that must be implemented by operations that can serve as data layout scopes
(e.g., modules); and dialect interfaces for data layout properties unrelated to
specific types. Built-in types are handled specially to decrease the overall
query cost.
A concrete default implementation of these interfaces is provided in the new
Target dialect. Defaults for built-in types that match the current behavior are
also provided.
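For illustration, a minimal sketch of how such a layout might be queried from C++, assuming a scope operation that implements DataLayoutOpInterface and a DataLayout caching helper with these query methods:
```
#include <cstdint>

#include "mlir/Interfaces/DataLayoutInterfaces.h"
#include "llvm/Support/raw_ostream.h"

using namespace mlir;

// Print the size and alignment of `type` as seen from the given layout scope
// (e.g., a module). The DataLayout object caches results of repeated queries.
void reportLayout(DataLayoutOpInterface scope, Type type) {
  DataLayout layout(scope);
  uint64_t sizeInBytes = layout.getTypeSize(type);
  uint64_t sizeInBits = layout.getTypeSizeInBits(type);
  uint64_t abiAlign = layout.getTypeABIAlignment(type);
  uint64_t prefAlign = layout.getTypePreferredAlignment(type);
  llvm::outs() << "size: " << sizeInBytes << " bytes (" << sizeInBits
               << " bits), ABI alignment: " << abiAlign
               << ", preferred alignment: " << prefAlign << "\n";
}
```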
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D97067
verifyCompatibleShapes is not transitive. Create an n-ary version and
update SameOperandShapes and SameOperandAndResultShapes traits to use
it.
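To make the non-transitivity concrete, here is a small sketch (treat the exact helper signatures as assumptions): tensor<2xf32> and tensor<3xf32> are each pairwise compatible with tensor<?xf32>, yet the three shapes are not mutually compatible, which only an n-ary check over the whole set can detect:
```
#include <cassert>

#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/IR/TypeUtilities.h"
#include "llvm/ADT/SmallVector.h"

using namespace mlir;

// Pairwise checks succeed through the dynamic shape, but 2 and 3 clash, so
// the set as a whole must be rejected.
void checkShapes(MLIRContext *ctx) {
  Type f32 = FloatType::getF32(ctx);
  Type t2 = RankedTensorType::get({2}, f32);
  Type tDyn = RankedTensorType::get({ShapedType::kDynamicSize}, f32);
  Type t3 = RankedTensorType::get({3}, f32);

  assert(succeeded(verifyCompatibleShape(t2, tDyn))); // 2 vs ?
  assert(succeeded(verifyCompatibleShape(tDyn, t3))); // ? vs 3
  SmallVector<Type, 3> all = {t2, tDyn, t3};
  assert(failed(verifyCompatibleShapes(TypeRange(all)))); // 2 vs ? vs 3
}
```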
Differential Revision: https://reviews.llvm.org/D98331
Clean-up after D98279, remove one call to createConvertGPUKernelToBlobPass().
Depends On D98203
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D98360
If MLIR_CUDA_RUNNER_ENABLED is set, register a 'gpu-to-cubin' conversion pass with mlir-opt.
The next step is to switch CUDA integration tests from mlir-cuda-runner to mlir-opt + mlir-cpu-runner and remove mlir-cuda-runner.
Depends On D98279
Reviewed By: herhut, rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D98203
This patch adds support for vectorizing loops with 'iter_args' when those loops
are not a vector dimension. This allows vectorizing outer loops with an inner
'iter_args' loop (e.g., reductions). Vectorizing scenarios where 'iter_args'
loops are vector dimensions would require more work (e.g., analysis,
generating horizontal reduction, etc.) not included in this patch.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D97892
The dialect separation was introduced to demarcate ops operating in different
type systems. This is no longer the case now that the LLVM dialect has migrated
to using built-in vector types, so the original reason for the separation no
longer holds. Squash the two dialects into one.
The decrease in code size isn't large: the ops originally in LLVM_AVX512 are
preserved because they match LLVM IR intrinsics specialized for vector element
bitwidth. However, it is still conceptually beneficial to have only one
dialect. I originally considered using Tablegen multiclasses to define both
the type-polymorphic op and its two intrinsic-related instantiations, but
decided against it given both the complexity of the required Tablegen input and
its dissimilarity with the rest of the ODS-defined ops, both of which could
result in very poor maintainability.
Depends On D98327
Reviewed By: nicolasvasilache, springerm
Differential Revision: https://reviews.llvm.org/D98328
Based on the following discussion:
https://llvm.discourse.group/t/rfc-memref-memory-shape-as-attribute/2229
The goal of the change is to give the memory space property a more expressive
representation than "magic" integer values. It also allows for a cleaner ASM
form:
```
gpu.func @test(%arg0: memref<100xf32, "workgroup">)
// instead of
gpu.func @test(%arg0: memref<100xf32, 3>)
```
Explanation for choosing `Attribute` instead of a plain `string`:
* `Attribute` classes allow the use of a more type-safe API based on RTTI.
* `Attribute` classes provide a faster comparison operator based on
pointer comparison, in contrast to generic string comparison.
* `Attribute` allows storing more complex things, such as structs or
dictionaries, which enables a richer memory space hierarchy.
This commit preserves the old integer-based API and implements it on top
of the new one.
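As a rough sketch of the resulting C++ surface (assuming the attribute-based accessor is named getMemorySpace() and the preserved integer view getMemorySpaceAsInt(); treat the exact names as assumptions):
```
#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

// New API: the memory space is an arbitrary Attribute (string, integer, ...).
Attribute getSpace(MemRefType type) { return type.getMemorySpace(); }

// Old API, kept for compatibility: collapses the attribute back to an integer
// where possible, so existing integer-based clients keep working.
unsigned getSpaceAsInt(MemRefType type) { return type.getMemorySpaceAsInt(); }
```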
Depends on D97476
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D96145
This class provides efficient implementations of use-related symbol queries, such as collecting the users of a symbol, replacing all uses, etc. It provides similar benefits for use-related queries as SymbolTableCollection did for lookup queries.
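A minimal usage sketch, assuming the class in question is SymbolUserMap and that it is built on top of a SymbolTableCollection:
```
#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/SymbolTable.h"

using namespace mlir;

// Count the users of `symbolOp` within `module`. The map computes use lists
// once up front; subsequent queries are cheap lookups. It also supports bulk
// updates such as replacing all uses when a symbol is renamed.
unsigned countSymbolUsers(ModuleOp module, Operation *symbolOp) {
  SymbolTableCollection symbolTables;
  SymbolUserMap userMap(symbolTables, module.getOperation());
  return userMap.getUsers(symbolOp).size();
}
```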
Differential Revision: https://reviews.llvm.org/D98071
This allows the caller to distinguish between a parse error and an
unmatched keyword. It fixes the redundant error that was emitted by the
caller when the generated parser failed.
Differential Revision: https://reviews.llvm.org/D98162
Instead of storing an array of LoopOpt attributes, which were just
wrapping std::pair<enum, int> anyway, we can have an attribute storing
a sorted ArrayRef<std::pair<enum, int>> as a single unit. This improves
both the textual format and the general API. Note that we're limiting
the options to fit into an int64_t by design, but this isn't a new
constraint.
Building the LoopOptions attribute likely deserves a dedicated builder
for efficiency reasons; that will be the subject of a future patch.
Differential Revision: https://reviews.llvm.org/D98105
This makes it easy to compose the distribution computation with
other affine computations.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D98171
Move Target/LLVMIR.h to Target/LLVMIR/Import.h to better reflect the purpose of
this file. Also move all LLVM IR target tests under the LLVMIR directory.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D98178
* Only leaf packages are non-namespace packages. This allows most of the top levels to be split into different directories or deployment packages. In the previous state, the presence of __init__.py files at each level meant that the entire tree could only ever exist in one physical directory on the path.
* This changes the API usage slightly: `import mlir` will no longer do a deep import of `mlir.ir`, etc. This may necessitate some client code changes.
* Dialect gen code was restructured so that the user is responsible for providing the `my_dialect.py` file, which then must import its peer `_my_dialect_ops_gen`. This gives complete control of the dialect namespace to the user instead of to tablegen code, allowing further dialect-specific python APIs.
* Correspondingly, the previous extension modules `_my_dialect.py` are now `_my_dialect_ops_ext.py`.
* Now that the `linalg` namespace is open, moved the `linalg_opdsl` tool into it.
* This may require some corresponding downstream adjustments to npcomp, circt, et al:
  * Probably some shallow imports need to be converted to deep imports (i.e., `import mlir` no longer brings in the world).
* Each tablegen generated dialect now needs an explicit `foo.py` which does a `from ._foo_ops_gen import *`. This is similar to the way that generated code operates in the C++ world.
* If providing dialect op extensions, those need to be moved from `_foo.py` -> `_foo_ops_ext.py`.
Differential Revision: https://reviews.llvm.org/D98096
Lowerings for min, max, prod, and sum reduction operations on int and float
values. This includes reduction tests for both cases.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D97893
split_at can return an error if the split index is out of bounds. If the
user knows that the index can never be out of bounds, it's safe to use
extent tensors. This has a straightforward lowering to std.subtensor.
Differential Revision: https://reviews.llvm.org/D98177
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere. For ops that
don't have a SPIR-V spec counterpart, we use spv.mlir.snake_case.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D98014
Normally tensors will be stored in buffers before converting to SPIR-V,
given that is how large amounts of data are sent to the GPU. However,
SPIR-V supports converting from tensors directly too. This is for the
cases where the tensor contains only a small number of elements and it
makes sense to directly inline them as a small data array in the shader.
To handle this, the conversion might internally create new local
variables. SPIR-V consumers in GPU drivers may or may not optimize those
away, so this has implications for register pressure. Therefore, a
threshold is used to control when the patterns should kick in.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D98052
The two dialects are largely redundant. The former was introduced as a mirror
of the latter operating on LLVM dialect types. This is no longer necessary
since the LLVM dialect operates on built-in types. Combine the two dialects.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D98060
With the new vector.load/store operations, there is no need to go through
unmasked transfer operations (which would be canonicalized to load/store anyway).
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D98056
This patch is a follow-up on D97217. It adds a new 'Skip' result to the Operation visitor
so that a callback can stop the ongoing visit of an operation/block/region and
continue visiting the next one without fully interrupting the walk. Skipping is
needed to be able to erase an operation/block in pre-order without continuing
to visit the internals of that operation/block.
Related to the skipping mechanism, the patch also introduces the following changes:
* Added new TestIRVisitors pass with basic testing for the IR visitors.
* Fixed missing early increment ranges in visitor implementation.
* Updated documentation of walk methods to include erasure information and walk
order information.
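A minimal sketch of the skip result in action (the isDeadWrapper predicate is hypothetical): erase an operation during a pre-order walk and skip its body so the walk never visits the erased IR:
```
#include "mlir/IR/Operation.h"
#include "mlir/IR/Visitors.h"

using namespace mlir;

// Hypothetical predicate deciding that `op` and everything nested inside it
// should be deleted.
bool isDeadWrapper(Operation *op);

void eraseDeadWrappers(Operation *root) {
  root->walk<WalkOrder::PreOrder>([](Operation *op) {
    if (isDeadWrapper(op)) {
      op->erase();
      // Skip the erased op's regions but keep visiting its siblings.
      return WalkResult::skip();
    }
    return WalkResult::advance();
  });
}
```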
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D97820
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere. For ops that
don't have a SPIR-V spec counterpart, we use spv.mlir.snake_case.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D98016
To unify the naming scheme across all ops in the SPIR-V dialect,
we are moving from spv.camelCase to spv.CamelCase everywhere.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D97918
* Mostly imported from experimental repo as-is with cosmetic changes.
* Temporarily left out emission code (for building ops at runtime) to keep review size down.
* Documentation and lit tests added fresh.
* Sample op library that represents current Linalg named ops included.
Differential Revision: https://reviews.llvm.org/D97995
Reduction updates should be masked, just like the loads and stores.
Note that, alternatively, we could use the fact that masked values are
zero for += updates and mask invariants to get this working, but that
would not work for *= updates. Masking the update itself is cleanest.
This change also replaces the constant mask with a broadcast of "true"
since this constant folds much better for various folding patterns.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D98000
Canonicalize the iter_args of an scf::ForOp that involve a tensor_load and
for which only the last loop iteration is actually visible outside of the
loop. The canonicalization looks for a pattern such as:
```
%t0 = ... : tensor_type
%0 = scf.for ... iter_args(%bb0 = %t0) -> (tensor_type) {
...
// %m is either tensor_to_memref(%bb0) or defined above the loop
%m... : memref_type
... // uses of %m with potential inplace updates
%new_tensor = tensor_load %m : memref_type
...
scf.yield %new_tensor : tensor_type
}
```
`%bb0` may have either 0 or 1 use. If it has 1 use it must be exactly a
`%m = tensor_to_memref %bb0` op that feeds into the yielded `tensor_load`
op.
If no aliasing write of `%new_tensor` occurs between tensor_load and yield
then the value %0 visible outside of the loop is the last `tensor_load`
produced in the loop.
For now, we approximate the absence of aliasing by only supporting the case
when the tensor_load is the operation immediately preceding the yield.
The canonicalization rewrites the pattern as:
```
// %m is either a tensor_to_memref or defined above
%m... : memref_type
scf.for ... { // no iter_args
... // uses of %m with potential inplace updates
}
%0 = tensor_load %m : memref_type
```
Differential revision: https://reviews.llvm.org/D97953
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D97919
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from `spv.camelCase` to `spv.CamelCase` everywhere.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D97917
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere.
Differential Revision: https://reviews.llvm.org/D97920
The value type of the attribute can be specified by either overriding the typeBuilder field on the AttrDef, or by providing a parameter of type `AttributeSelfTypeParameter`. This removes the need to define custom storage class constructors for attributes that have a value type other than NoneType.
Differential Revision: https://reviews.llvm.org/D97590
Add a folder to rewrite a sequence such as:
```
%t1 = ...
%v = vector.transfer_read %t0[%c0...], {masked = [false...]} :
tensor<static_sizesxf32>, vector<static_sizesxf32>
%t2 = vector.transfer_write %v, %t1[%c0...] {masked = [false...]} :
vector<static_sizesxf32>, tensor<static_sizesxf32>
```
into:
```
%t0
```
The producer of t1 may or may not be DCE'd depending on whether it is a
block argument or has side effects.
Differential revision: https://reviews.llvm.org/D97934