Move the existing optimizations for transfer ops on tensors to the folder and
canonicalization patterns. This handles the write-after-write and
read-after-write cases, and also adds the write-after-read case.
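For example, in the read-after-write case a read of the tensor produced by a
write with matching indices folds directly to the written vector (a minimal
sketch; values and shapes are illustrative):
```
%w = vector.transfer_write %v, %t[%c0, %c0]
    : vector<4xf32>, tensor<4x4xf32>
%r = vector.transfer_read %w[%c0, %c0], %pad
    : tensor<4x4xf32>, vector<4xf32>
// Folds so that all uses of %r are replaced by %v.
```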
Differential Revision: https://reviews.llvm.org/D100597
The existing implementation supports the static schedule for Fortran do loops.
This patch implements the dynamic variant of the same concept.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D97393
The ArmSVE dialect has fallen behind the recent changes in how the Vector
dialect interacts with backend vector dialects and the MLIR -> LLVM IR
translation module. This patch cleans up ArmSVE initialization within
Vector and removes the need for an LLVMArmSVE dialect.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D100171
When Linalg named ops support was added, captures were omitted
from the body builder. This revision adds support for captures,
which allows us to write FillOp in a more idiomatic fashion using
the _linalg_ops_ext mixin support.
This raises an issue in the generation of `_linalg_ops_gen.py`, where
```
  @property
  def result(self):
    return self.operation.results[0] if len(self.operation.results) > 1 else None
```
the condition should be `== 1`.
This will be fixed in a separate commit.
Differential Revision: https://reviews.llvm.org/D100363
This offers the ability to pass numpy arrays to the corresponding
memref arguments.
Reviewed By: mehdi_amini, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D100077
This allows for walking all nested locations of a given location, and is generally useful when processing locations.
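For example, a fused location nests several child locations, and a walk visits
all of them (illustrative IR):
```
// Walking this location visits the fused location, the file location,
// and the call-site location together with its callee and caller.
%0 = constant 0 : i32 loc(fused["a.mlir":1:1, callsite("b.mlir":2:2 at "c.mlir":3:3)])
```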
Differential Revision: https://reviews.llvm.org/D100437
In the long run, we want to unify the dot product codegen solutions between
all target architectures, but this intrinsic enables experimenting with
AVX-specific implementations in the meantime.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D100593
We were using llvm::nulls, but that isn't thread-safe, so we switch to giving each thread its own null stream.
Differential Revision: https://reviews.llvm.org/D100578
This matches the current support provided to operations, and allows attaching traits and interfaces and using the DeclareInterfaceMethods utility. This was missed when attribute/type generation was first added.
Differential Revision: https://reviews.llvm.org/D100233
Rationale:
Now that vector<?xindex> is allowed, the restriction on vectorization
of index types in the sparse compiler can be removed. This also
requires generalizing the scatter/gather index types.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D100522
Add an iterator for ReductionNode traversal and use a range to indicate the
region we would like to keep. Refactor the interaction between
Pass/Tester/ReductionNode.
Now it will be easier to add new traversal types and OpReducers.
Reviewed By: jpienaar, rriddle
Differential Revision: https://reviews.llvm.org/D99713
This reverts commit a32846b1d0.
The build is broken with -DBUILD_SHARED_LIBS=ON:
tools/mlir/lib/Reducer/CMakeFiles/obj.MLIRReduce.dir/Tester.cpp.o: In function `mlir::Tester::isInteresting(mlir::ModuleOp) const':
Tester.cpp:(.text._ZNK4mlir6Tester13isInterestingENS_8ModuleOpE+0xa8): undefined reference to `mlir::OpPrintingFlags::OpPrintingFlags()'
Tester.cpp:(.text._ZNK4mlir6Tester13isInterestingENS_8ModuleOpE+0xc6): undefined reference to `mlir::Operation::print(llvm::raw_ostream&, mlir::OpPrintingFlags)'
Add an iterator for ReductionNode traversal and use a range to indicate the region we would like to keep. Refactor the interaction between Pass/Tester/ReductionNode.
Now it will be easier to add new traversal types and OpReducers.
Reviewed By: jpienaar, rriddle
Differential Revision: https://reviews.llvm.org/D99713
This allows custom types and attributes to parse a dimension list that
isn't necessarily terminated with `xtype`, for example something like:
#tf.shape<4x5>
Differential Revision: https://reviews.llvm.org/D100432
This patch collects the operations that have users inside a for loop and uses
this set when detecting and hoisting loop-invariant operations.
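A minimal sketch of the effect (illustrative, using the std/affine ops of the
time):
```
affine.for %i = 0 to 10 {
  // %inv does not depend on %i; once its users are collected, it is
  // detected as loop-invariant and hoisted out of the loop.
  %inv = addf %a, %b : f32
  affine.store %inv, %m[%i] : memref<10xf32>
}
```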
Reviewed By: bondhugula, vinayaka-polymage
Differential Revision: https://reviews.llvm.org/D99761
Handles lowering conv2d to linalg's convolution operator. This implementation
only supports floating point values but handles all strides, dilations, and
padding values.
Differential Revision: https://reviews.llvm.org/D100061
Per the SPIR-V spec "2.16.2. Validation Rules for Shader Capabilities":
  Composite objects in the StorageBuffer, PhysicalStorageBuffer,
  Uniform, and PushConstant Storage Classes must be explicitly
  laid out.
For other cases we don't need to attach the struct offsets.
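For instance (a sketch; the concrete offsets depend on the layout rules):
```
// Needs explicit layout: members carry byte offsets.
!spv.ptr<!spv.struct<(f32 [0], vector<4xf32> [16])>, StorageBuffer>
// Function storage class: no offsets required.
!spv.ptr<!spv.struct<(f32, vector<4xf32>)>, Function>
```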
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D100386
The patch updates the tiling pass to add the tile offsets to the indices returned by the linalg index operations.
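Sketch of the rewrite inside a tile loop with induction variable %iv
(illustrative):
```
// The tile-local index returned by linalg.index is shifted by the
// tile offset %iv to recover the index in the full iteration space.
%local = linalg.index 0 : index
%global = addi %iv, %local : index
```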
Differential Revision: https://reviews.llvm.org/D100379
The patch extends the linalg to loop lowering pass to replace all linalg index operations by the induction variables of the generated loop nests.
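Sketched on a single dimension (illustrative):
```
scf.for %i = %c0 to %ub step %c1 {
  // All uses of `linalg.index 0` in the original linalg body are
  // replaced by the induction variable %i of the generated loop.
}
```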
Differential Revision: https://reviews.llvm.org/D100364
This patch introduces the necessary infrastructure changes to implement
cost-modelling for detensoring. In particular, it introduces the
following changes:
- An extension to the dialect conversion framework to selectively
convert a subset of non-entry BB arguments.
- An extension to the branch conversion pattern to selectively convert
a subset of a branch's operands.
- An interface for detensoring cost-modelling.
- Two simple implementations of two different cost models.
This sets the stage to explore cost-modelling for detensoring in an
easier way. We still need to come up with better cost models.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D99945
Depends On D95311
The previous automatic-ref-counting pass worked with high-level async operations (e.g. async.execute); however, async value reference counting is a runtime implementation detail.
The new pass mostly relies on the liveness analysis to place drop_ref operations, and does better verification of CFGs with different liveIn sets in block successors.
This is an almost-NFC change: no new reference counting ideas, just a cleanup of the previous version.
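A rough sketch of the pass output; the exact op spelling is an assumption
based on the async runtime ops introduced in D95311:
```
%token = async.runtime.create : !async.token
// ... last use of %token according to the liveness analysis ...
async.runtime.drop_ref %token {count = 1 : i64} : !async.token
```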
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D95390
This is similar to the definition of llvm.switch, providing
unstructured branch-based control flow. It differs from the LLVM
operation in that it accepts any signless integer (not only an i32),
takes no branch weights (the same as the Branch and CondBranch ops),
and has a slightly different syntax for the default case that includes
it in the list of cases with an explicit `default` keyword.
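For example (operand and block names illustrative):
```
switch %flag : i32, [
  default: ^bb1(%a : i32),
  42: ^bb2(%b : i32),
  43: ^bb3(%c : i32)
]
```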
Also included are several canonicalizers.
See https://llvm.discourse.group/t/rfc-add-std-switch-and-scf-switch/3090
Reviewed By: rriddle, bondhugula
Differential Revision: https://reviews.llvm.org/D99925
The stride should be calculated with the converted array element
type, not the original input type.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D100337
These patterns have been used as a prerequisite step for lowering
to SPIR-V. But they don't involve SPIR-V dialect ops; they are
pure memref/vector op transformations. Given that we now have a
dedicated MemRef dialect, this moves them to MemRef/Transforms/,
which is a more suitable place to host them, and allows them to be
used by others.
This commit just moves code around and renames patterns/passes
accordingly. The CMakeLists.txt files for existing MemRef libraries
are also improved along the way.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D100326
Symbols are now supported in the integer emptiness check. Remove some outdated assertions checking that there are no symbols.
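For example, emptiness of a set constrained by a symbol, such as the
following, can now be checked (illustrative):
```
// Empty for every value of the symbol s0: d0 >= s0 and d0 <= s0 - 1
// cannot hold at the same time.
affine_set<(d0)[s0] : (d0 - s0 >= 0, s0 - d0 - 1 >= 0)>
```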
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D100327
This CL introduces a generic attribute (called "encoding") on tensors.
The attribute currently does not carry any concrete information, but the type
system already correctly determines that tensor<8xi1,123> != tensor<8xi1,321>.
The attribute will be given meaning through an interface in subsequent CLs.
See ongoing discussion on discourse:
[RFC] Introduce a sparse tensor type to core MLIR
https://llvm.discourse.group/t/rfc-introduce-a-sparse-tensor-type-to-core-mlir/2944
A sparse tensor will look something like this:
```
// named alias with all properties we hold dear:
#CSR = {
  // individual named attributes
}
// actual sparse tensor type:
tensor<?x?xf64, #CSR>
```
I see the following rough 5-step plan going forward:
(1) introduce this format attribute in this CL, currently still empty
(2) introduce attribute interface that gives it "meaning", focused on sparse in first phase
(3) rewrite sparse compiler to use new type, remove linalg interface and "glue"
(4) teach passes to deal with new attribute, by rejecting/asserting on non-empty attribute as simplest solution, or doing meaningful rewrite in the longer run
(5) add FE support, document, test, publicize new features, extend "format" meaning to other domains if useful
Reviewed By: stellaraccident, bondhugula
Differential Revision: https://reviews.llvm.org/D99548
We will soon be adding non-AVX512 operations to MLIR, such as AVX's rsqrt. In https://reviews.llvm.org/D99818 several possibilities were discussed, namely to (1) add non-AVX512 ops to the AVX512 dialect, (2) add more dialects (e.g. AVX dialect for AVX rsqrt), and (3) expand the scope of the AVX512 to include these SIMD x86 ops, thereby renaming the dialect to something more accurate such as X86Vector.
Consensus was reached on option (3), which this patch implements.
Reviewed By: aartbik, ftynse, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D100119
Fusing a constant with a linalg.generic operation can result in the
fused operation being illegal since the loop bound computation
fails. Avoid such fusions.
Differential Revision: https://reviews.llvm.org/D100272
The `linalg.index` operation provides access to the iteration indexes of immediately enclosing linalg operations. It takes a dimension `dim` attribute and returns the iteration index in the given dimension. Having `linalg.index` allows us to unify `linalg.generic` and `linalg.indexed_generic` and also enables index access in named operations.
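For example (a sketch; shapes and maps are illustrative):
```
#map = affine_map<(d0, d1) -> (d0, d1)>
linalg.generic {indexing_maps = [#map],
                iterator_types = ["parallel", "parallel"]}
    outs(%out : memref<8x16xindex>) {
^bb0(%arg0: index):
  %i = linalg.index 0 : index  // index along dimension 0
  %j = linalg.index 1 : index  // index along dimension 1
  %sum = addi %i, %j : index
  linalg.yield %sum : index
}
```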
Differential Revision: https://reviews.llvm.org/D100292