An OpBuilder already exists for GEPs on typed pointers that do not use any struct indices, but no such builder taking an explicit `basePtrType` exists for GEPs on opaque pointers.
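A hedged usage sketch of the new builder; the exact parameter list is an assumption here, and `resultType`, `basePtrType`, `basePtr`, and `indices` are placeholder values:
```
// Hedged sketch: the parameter list of the new GEPOp builder is assumed,
// not copied from the patch.
Value gep = builder.create<LLVM::GEPOp>(loc, resultType, basePtrType,
                                        basePtr, indices);
```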
Differential Revision: https://reviews.llvm.org/D129376
Clean up checks for alloc-like ops in analysis. Use the analysis
utility to properly check for the desired kind of effects. The previous
local utility worked for all practical purposes but wasn't sound and
duplicated code locally. Instead, use `mlir::hasSingleEffect`.
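A minimal sketch of such a check, assuming `mlir::hasSingleEffect` is parameterized on the effect type and takes the op plus the value the effect applies to; the helper name `isAllocLikeOf` is just for illustration:
```
#include "mlir/Interfaces/SideEffectInterfaces.h"

// Hedged sketch: the template/argument form of mlir::hasSingleEffect is an
// assumption here.
static bool isAllocLikeOf(mlir::Operation *op, mlir::Value result) {
  return mlir::hasSingleEffect<mlir::MemoryEffects::Allocate>(op, result);
}
```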
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D129439
```
// -----// IR Dump Before LowerLinalgMicrokernels (iree-vmvx-lower-linalg-microkernels) //----- //
```
I've been meaning to suggest this for a long time, and I think the only reason we don't have it is that we didn't always have `getArgument()` handy when printing these comments. When debugging or putting a pipeline together based on such dumps, I often find myself grepping for the argument name of the pass (which is often related to the pass name, but not universally).
This change allows the user of LivenessBlockInfo to specify an op within the block and get the set of all values that are live as of that op. Semantically, it relies on having a dominance-based region with ordered operations. For DFG regions, computing liveness statically this way doesn't really make sense; it likely needs to be done at runtime.
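A hedged usage sketch; the name of the new per-op query on `LivenessBlockInfo` is an assumption here, and `funcOp`/`op` are placeholders:
```
#include "mlir/Analysis/Liveness.h"

// Hedged sketch: `currentlyLiveValues` is an assumed name for the new query.
mlir::Liveness liveness(funcOp);
const mlir::LivenessBlockInfo *blockInfo = liveness.getLiveness(op->getBlock());
auto liveValues = blockInfo->currentlyLiveValues(op);
```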
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D129447
A previous commit (f2b94bd) added some unnecessary statements that
dereferenced operations only to get the operations back. This patch
removes the unnecessary statements.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D129913
This patch allows custom attribute and type builders to return
something other than the C++ type of the attribute or type.
This is useful for attributes or types that may perform extra work during
construction (e.g. canonicalization) that could result in a different
kind of attribute or type being returned.
Reviewed By: rriddle, lattner
Differential Revision: https://reviews.llvm.org/D129792
This patch adds a pattern to decompose a `linalg.generic` operation that
- has only parallel iterator types
- has more than 2 statements (including the yield)
into multiple `linalg.generic` operations such that each operation has
a single statement and a yield.
The pattern added here just splits the matching `linalg.generic` into
two `linalg.generic`s, one containing the first statement and the
other containing the remaining statements. The same pattern can be
applied repeatedly to the second op to ultimately fully decompose the
generic op.
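A hedged sketch of driving such a pattern to a fixed point with the greedy rewriter; the populate-function name is an assumption, not taken from this patch:
```
// Hedged sketch: `populateDecomposeLinalgOpsPattern` is an assumed name.
RewritePatternSet patterns(ctx);
linalg::populateDecomposeLinalgOpsPattern(patterns);
if (failed(applyPatternsAndFoldGreedily(funcOp, std::move(patterns))))
  signalPassFailure();
```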
Differential Revision: https://reviews.llvm.org/D129704
The visitor functions for `Region` and `Block` types did not always
check the value returned by recursive calls. This caused the top-level
visitor invocation to return `WalkResult::advance()` even if one or more
recursive invocations returned `WalkResult::interrupt()`. This patch
fixes the problem by checking whether any recursive call is interrupted and,
if so, returning `WalkResult::interrupt()`.
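An illustrative sketch of the fix pattern (not the exact upstream code); `visitOperation` stands in for the recursive visitor:
```
// Illustrative sketch: propagate interruption from recursive calls instead of
// discarding it.
WalkResult visitBlock(Block *block) {
  for (Operation &op : *block) {
    if (visitOperation(&op).wasInterrupted())
      return WalkResult::interrupt();
  }
  return WalkResult::advance();
}
```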
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D129718
A new sparse_tensor operation allows for
custom reduction code to be injected during
linalg.generic lowering for sparse tensors.
An identity value is provided to indicate
the starting value of the reduction. A single
block region is required to contain the
custom reduce computation.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D128004
This is a NFC change to make it easier to update this canonicalization
for more use cases. The refactoring makes things easier to
understand/adapt.
Differential Revision: https://reviews.llvm.org/D129829
This diff adds an integration test which does element-wise multiplication of two sparse 3-d tensors of size 3x3x5.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D129638
In `linalg::tileConsumerAndFuseProducers`, there are two levels of
tiling and fusion; we partition the tile sizes and only use one
half for each of them. The partition uses the first non-parallel
dimension *after* interchange as the boundary. However, concrete
tiling happens *together with* loop interchange, so we still need
to provide the partial tile sizes *before* the interchange.
Otherwise there is an inconsistency, which this patch fixes.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D129804
This patch modifies the implementation of `RewritePatternSet::add` to perfectly forward its arguments to pattern constructors. Without this, code like the following compiles but, due to the limited lifetime of the temporary TypeConverter, can produce unexpected behavior:
```
RewritePatternSet patterns(context);
patterns.add<SomeOpConversion, OtherOpConversion>(TypeConverter(), context);
if (failed(applyPartialConversion(getOperation(), target, std::move(patterns))))
return signalPassFailure();
```
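A minimal sketch of one way to avoid the dangling reference is to keep the TypeConverter alive for the duration of the conversion:
```
TypeConverter typeConverter;
RewritePatternSet patterns(context);
patterns.add<SomeOpConversion, OtherOpConversion>(typeConverter, context);
if (failed(applyPartialConversion(getOperation(), target, std::move(patterns))))
  return signalPassFailure();
```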
The patch also changes the linalg fusion pattern implementation to correctly fill the test pattern set given the new behavior.
Author: Laszlo Kindrat <laszlokindrat@gmail.com>
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D129601
This is useful because MPInt.h defines identically-named functions that
operate on MPInts, which would otherwise become the only candidates for
overload resolution when calling e.g. ceilDiv from the mlir::presburger
namespace (iff MPInt.h is included). So to access the 64-bit overloads, an
explicit call to mlir::ceilDiv would be required. This patch adds `using`
declarations allowing overload resolution to transparently call the right
function.
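A generic illustration of the underlying C++ name-hiding rule, with placeholder names rather than the actual MLIR declarations:
```
#include <cstdint>

namespace outer {
// 64-bit overload (positive operands only, for illustration).
inline std::int64_t ceilDiv(std::int64_t a, std::int64_t b) {
  return (a + b - 1) / b;
}

namespace inner {
struct BigInt {};
inline BigInt ceilDiv(const BigInt &, const BigInt &) { return {}; }

// Without this using-declaration, ceilDiv(std::int64_t(3), std::int64_t(2))
// written inside `inner` only sees the BigInt overload and fails to compile;
// with it, both overloads participate in overload resolution.
using outer::ceilDiv;

inline std::int64_t demo() {
  return ceilDiv(std::int64_t(3), std::int64_t(2));
}
} // namespace inner
} // namespace outer
```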
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D129820
This commit extends the `raise` statements on errors in user-provided
code with `from e` clauses that attach the original exception to the one
being raised. This makes it easier to debug the root cause of the
error.
Reviewed By: SaurabhJha
Differential Revision: https://reviews.llvm.org/D129762
is out of range. Both intrinsics return a poison value.
Consequently, mark the intrinsics speculatable.
Differential Revision: https://reviews.llvm.org/D129656
The benchmark currently fails to run because it cannot find the `func`
symbol when using a `FuncOp`. I suppose that the breakage was introduced
by the extraction of the func dialect from the builtin dialect, which
wasn't reflected in the benchmark yet.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D129738
Fixed some new memory leaks after the migration to the new
bufferization. One is expected; the other may need
some more careful analysis.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D129805
After a recent bufferization improvement, this test
started failing due to missing zero-initialization.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D129800
The lds_barrier op allows workgroups to wait at a barrier for
operations to/from their local data store (LDS) to complete without
incurring the performance penalties of a full memory fence.
Reviewed By: nirvedhmeshram
Differential Revision: https://reviews.llvm.org/D129522
bufferization.writable is used in most cases instead. All remaining test cases are updated. Some code that is no longer needed is deleted.
Differential Revision: https://reviews.llvm.org/D129739
This revision removes the LinalgPromotion pattern and adds a `transform.structured.promotion` op.
Since the LinalgPromotion transform allows the injection of arbitrary C++ via lambdas, the current
transform op does not handle it.
It is left for future work to decide what the right transform op control is for those cases.
Note the underlying implementation remains unchanged and the mechanism is still controllable by
lambdas from the API.
During this refactoring it was also determined that the `dynamicBuffers` option does not actually
change the behavior of the algorithm.
This also shows that the related test is wrong (and dangerous).
Both the option and the test are therefore removed.
Lastly, a test that connects patterns using the filter-based mechanism is removed: all the independent
pieces are already tested separately.
Context: https://discourse.llvm.org/t/psa-retire-linalg-filter-based-patterns/63785
Differential Revision: https://reviews.llvm.org/D129649
PatternApplicator doesn't have a constructor that
accepts only a `RewritePatternSet`, as currently used in the example
code in PatternRewriter.md. Instead, one has to turn it into a
`FrozenRewritePatternSet` first.
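A minimal sketch of the corrected usage:
```
RewritePatternSet patterns(ctx);
// ... populate `patterns` ...
FrozenRewritePatternSet frozenPatterns(std::move(patterns));
PatternApplicator applicator(frozenPatterns);
applicator.applyDefaultCostModel();
```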
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D125236
This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.
Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function. (That way, the allocation happens at the call site.)
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplacability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.
Differential Revision: https://reviews.llvm.org/D129356
- Adds verification for `nvgpu.mma.sync` op
- Adds tests to `mlir/test/Dialect/NVGPU/invalid.mlir`
- The `nvgpu.mma.sync` verifier caught a bug and triggered a failure in the m16n8k4_tf32_f32 variant in `mlir/test/Conversion/NVGPUToNVVM/nvgpu-to-nvvm.mlir`
- The output shape of the vector holding thread-level accumulators was inconsistent; it is fixed in this change
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D129400
This pass tests patterns that are already tested elsewhere by applying them in a semi-targeted
fashion using anchor function and op names.
From now on, targeted tests should use the transform dialect interpreter.
Differential Revision: https://reviews.llvm.org/D129627