This commit separates the bufferization logic from the bufferization pass in Linalg. This allows other dialects to use ComprehensiveBufferize more easily.
This commit mainly moves files to a new directory and adds a new build target.
Differential Revision: https://reviews.llvm.org/D112989
AllocationCallbacks functions allocate/deallocate only. They no longer set the insertion point.
This is in preparation for decoupling ComprehensiveBufferize from the Linalg dialect.
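For illustration, a minimal C++ sketch of the decoupled style; the struct and member names here are assumptions, not the verbatim upstream API:

  #include "mlir/IR/Builders.h"
  #include "mlir/IR/BuiltinTypes.h"
  #include <functional>

  using namespace mlir;

  // The callbacks only create the alloc/dealloc ops; positioning the
  // builder is now the caller's responsibility.
  struct AllocationCallbacksSketch {
    std::function<Value(OpBuilder &, Location, MemRefType)> allocationFn;
    std::function<void(OpBuilder &, Location, Value)> deallocationFn;
  };

  // Caller side: set the insertion point explicitly, then invoke.
  Value allocateAt(OpBuilder &b, Operation *anchor, Location loc,
                   MemRefType type, AllocationCallbacksSketch &cb) {
    OpBuilder::InsertionGuard guard(b); // restores the insertion point
    b.setInsertionPoint(anchor);
    return cb.allocationFn(b, loc, type);
  }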
Differential Revision: https://reviews.llvm.org/D112991
Move dialect-specific and analysis-specific functions out of BufferizationAliasInfo. BufferizationAliasInfo's only job now is to keep track of aliases.
This is in preparation for further decoupling ComprehensiveBufferize from various dialects.
Differential Revision: https://reviews.llvm.org/D112992
By default, OpResult buffers are writable. But there are ops (e.g., ConstantOp) for which this is not the case.
The purpose of this commit is to further decouple Comprehensive Bufferize from the Standard dialect.
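A sketch of what such a writability hook could look like; the function name and the use of the ConstantLike trait are illustrative assumptions:

  #include "mlir/IR/OpDefinition.h"
  #include "mlir/IR/Operation.h"

  // Results are writable by default; constant-like ops opt out, since
  // writing their buffer in place would clobber the constant data.
  bool isResultBufferWritable(mlir::OpResult result) {
    mlir::Operation *op = result.getDefiningOp();
    return !op->hasTrait<mlir::OpTrait::ConstantLike>();
  }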
Differential Revision: https://reviews.llvm.org/D112908
This is in preparation for decoupling BufferizableOpInterface, Comprehensive Bufferize, and the dialects.
The goal of this CL is to make `getResultBuffer` (and other `bufferize` functions) independent of `LinalgOps`.
Differential Revision: https://reviews.llvm.org/D112907
These two methods are redundant and removed:
* `bufferizesToAliasOnly`: If not `bufferizesToMemoryRead` and not `bufferizesToMemoryWrite` but `getAliasingOpResult` returns a non-null value, we know that this OpOperand is alias-only. This method now has a default implementation and does not have to be implemented.
* `getInplaceableOpResult`: The analysis does not differentiate between "inplaceable" and "aliasing". The only thing that matters is whether or not OpOperand and OpResult are aliasing. That is the key property that makes buffer copies necessary.
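The default implementation described above amounts to the following sketch (not verbatim upstream code; the three queries are assumed to be provided by the interface):

  #include "mlir/IR/Value.h"

  bool bufferizesToMemoryRead(mlir::OpOperand &opOperand);
  bool bufferizesToMemoryWrite(mlir::OpOperand &opOperand);
  mlir::OpResult getAliasingOpResult(mlir::OpOperand &opOperand);

  // Alias-only: no read, no write, but an aliasing result exists.
  bool bufferizesToAliasOnly(mlir::OpOperand &opOperand) {
    return !bufferizesToMemoryRead(opOperand) &&
           !bufferizesToMemoryWrite(opOperand) &&
           static_cast<bool>(getAliasingOpResult(opOperand));
  }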
Differential Revision: https://reviews.llvm.org/D112902
The earlier reduction "scalarization" was only applied to a chain of
*innermost* *for* loops. This revision generalizes this to any
nesting of for- and while-loops, which means reductions can be
implemented with far fewer load and store operations. The chaining
is implemented with a forest of yield statements (though not as bad
as it would be if we also included the while-induction).
Fixes https://bugs.llvm.org/show_bug.cgi?id=52311
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D113078
This allows external users of Comprehensive Bufferize to specify their own InitTensorOp elimination procedures.
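The shape of such a hook might look as follows; the callback type, name, and parameters are purely illustrative assumptions:

  #include "mlir/IR/BuiltinOps.h"
  #include "mlir/Support/LogicalResult.h"
  #include <functional>

  class BufferizationAliasInfo; // analysis state, see earlier commits

  // Hypothetical user-provided InitTensorOp elimination procedure.
  using InitTensorEliminationFn = std::function<mlir::LogicalResult(
      mlir::FuncOp funcOp, BufferizationAliasInfo &aliasInfo)>;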
Differential Revision: https://reviews.llvm.org/D112686
This better decouples the transfer read/write from the vector-only rewrite of conv.
This form is close to ready to plop into a new vector.conv op, with the vector.transfer operations to be generalized as part of generic vectorization, once the properties of ConvolutionOpInterface are inferred from the indexing maps.
This also results in a nice perf boost in the dw == 1 cases.
Differential Revision: https://reviews.llvm.org/D112822
This refactoring prepares conv1d vectorization for a future integration into
the generic codegen path.
Once transfer_read / transfer_write vectorization also supports sliding windows,
the special pattern for conv can disappear.
This will also likely need a vector.conv operation.
Differential Revision: https://reviews.llvm.org/D112797
The current setup of LinalgTransformationFilter allows a
transformation to trigger when either
1) the StringAttr is not set and no filter identifier is specified, or
2) the StringAttr is set and its value matches (one of) the provided
identifiers.
This misses the case where the transformation should trigger either
when the attribute is not set or when its value matches (one of) the
provided identifiers. Since `Identifier` does not allow empty strings,
add a boolean option to match when the attribute is not set. This
option is off by default.
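A sketch of the resulting matching logic; the attribute name and the `matchByDefault` option name are assumptions for illustration:

  #include "llvm/ADT/STLExtras.h"
  #include "mlir/IR/Operation.h"

  bool filterMatches(mlir::Operation *op,
                     llvm::ArrayRef<mlir::StringAttr> matchDisjunction,
                     bool matchByDefault) {
    auto attr = op->getAttrOfType<mlir::StringAttr>(
        "__internal_linalg_transform__");
    // No attribute: match if no filter was given, or if the new
    // (default-off) option requests matching unset attributes.
    if (!attr)
      return matchDisjunction.empty() || matchByDefault;
    // Attribute set: match when its value is (one of) the identifiers.
    return llvm::is_contained(matchDisjunction, attr);
  }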
Differential Revision: https://reviews.llvm.org/D113057
The 2-D case can be rewritten to generate far fewer instructions and a single vector.shuffle, which seems to provide a nice performance boost.
Add this arrow to our quiver by exposing it with a new vector transform option.
Differential Revision: https://reviews.llvm.org/D113062
We'd like to take a progressive approach towards convolution op
CodeGen, by 1) tiling it to fit compute hierarchy first, and then
2) tiling along window dimensions with size 1 to reduce the problem
to be matmul-like. After that, we can 3) downscale high-D convolution
ops to low-D by removing the size-1 window dimensions. The final
step would be 4) vectorizing the low-D convolution op directly.
We have patterns for 1), 2), and 4). This commit adds a pattern for
3) for `linalg.conv_2d_nhwc_hwcf` ops as a starter. Supporting other
high-D convolution ops should be similar and mechanical.
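For illustration, one way such a pattern could be driven; the pattern class name and the plumbing below are assumptions, not necessarily the exact API:

  #include "mlir/IR/BuiltinOps.h"
  #include "mlir/Transforms/GreedyPatternRewriteDriver.h"

  void downscaleConvs(mlir::FuncOp func) {
    mlir::MLIRContext *ctx = func.getContext();
    mlir::RewritePatternSet patterns(ctx);
    // Hypothetical registration of the size-1 window-dim rewrite
    // (pattern assumed declared in the Linalg transforms headers).
    patterns.add<DownscaleSizeOneWindowed2DConvolution>(ctx);
    (void)mlir::applyPatternsAndFoldGreedily(func, std::move(patterns));
  }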
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112928
This commit moves parts of the existing bufferization code into external op interface implementations. Furthermore, Comprehensive Bufferize is adapted to use the new interface.
Future commits will decouple the interface and its op implementations from Comprehensive Bufferize and the Linalg dialect, as well as split them into multiple files with their own build targets. This commit leaves the file structure and build rules mostly unchanged.
Differential Revision: https://reviews.llvm.org/D112900
This commit adds a new op interface: BufferizableOpInterface. In the future, ops that implement this interface can be bufferized using Comprehensive Bufferize.
Note: The interface methods of this interface correspond to the "op interface" in ComprehensiveBufferize.cpp.
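Based on the method names mentioned elsewhere in this log, a rough C++ view of the interface surface (the exact signatures are assumptions):

  #include "mlir/IR/Builders.h"
  #include "mlir/Support/LogicalResult.h"

  class BufferizableOpInterfaceSketch {
  public:
    // Would the operand's data be read / written when bufferized?
    virtual bool bufferizesToMemoryRead(mlir::OpOperand &operand) = 0;
    virtual bool bufferizesToMemoryWrite(mlir::OpOperand &operand) = 0;
    // Which result (if any) aliases the operand's future buffer?
    virtual mlir::OpResult getAliasingOpResult(mlir::OpOperand &operand) = 0;
    // Rewrite the op on buffers once in-place decisions are made.
    virtual mlir::LogicalResult bufferize(mlir::OpBuilder &b) = 0;
    virtual ~BufferizableOpInterfaceSketch() = default;
  };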
Differential Revision: https://reviews.llvm.org/D112974
In order to support fusion with mma matrix types we need to be able to
execute elementwise operations on them. This adds an op to support some
basic elementwise operations. It is not a full solution, as it only
supports a limited set of operations. Ideally we would want to be able
to fuse with more kinds of operations.
Differential Revision: https://reviews.llvm.org/D112857
wmma intrinsics have a large number of combinations; ideally we want to be
able to target all the different variants. To avoid a combinatorial explosion
in the number of MLIR ops, we use attributes to represent the different
variations of load/store/mma ops. We also generate TableGen helpers to know
which combinations are available. Using this, we can avoid hardcoding a path
for specific shapes and can support more types.
This patch also adds boilerplate for tf32 op support.
Differential Revision: https://reviews.llvm.org/D112689
This makes the class usable with types that do not provide their own operator<.
Update MLIR Linalg ComprehensiveBufferize to take advantage of the new template param.
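A generic illustration of the technique (the class touched by this commit is not named in the message, so std::set stands in here):

  #include <set>

  struct Item {
    int id;
  };
  // The comparator is supplied as a template parameter, so Item itself
  // does not need to define operator<.
  struct ItemCompare {
    bool operator()(const Item &a, const Item &b) const {
      return a.id < b.id;
    }
  };

  int main() {
    std::set<Item, ItemCompare> items;
    items.insert({2});
    items.insert({1}); // ordered via ItemCompare, not Item::operator<
    return 0;
  }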
Differential Revision: https://reviews.llvm.org/D112052
The list of operations that neither read nor write, but create an alias when bufferized in place, is getting longer. This commit adds a helper function so that we do not have to spell out the entire list each time.
Differential Revision: https://reviews.llvm.org/D112515
When the operand is a subview, we don't infer in_bounds, and some default cases (e.g., the case in the tests) will crash with `operand is NULL` when converting to LLVM.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D112772
Add a strategy pass that pads and hoists after tiling and fusion.
Depends On D112412
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112480
Adding a padding and hoisting pattern, a test pass, and tests. The patch prepares the split of tiling/fusion and padding.
Depends On D112255
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112412
Using [1] to represent the shape of a scalar is incorrect and will break with vectors of size 1.
- remove redundant helper functions
- fix a couple of style warnings
Reviewed By: cota
Differential Revision: https://reviews.llvm.org/D112764
Adapt hoistPaddingOnTensors to leave replacing and erasing the old pad tensor operation to the caller. This change makes the function pattern-friendly.
Depends On D112003
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112255
Adapt the rewriteAsPaddedOp method to use an OpBuilder instead of the PatternRewriter.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112003
* Move SmallVectors outside of inner loops to avoid frequent
allocations and deallocations (see the sketch after this list).
* Calculate the linearized index and call flat range getters to
avoid internal shape querying behind `getValue`.
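A minimal sketch of the first point (all names hypothetical): the vector is created once and its storage reused, instead of being re-allocated on every iteration.

  #include "llvm/ADT/SmallVector.h"

  void visitAll(unsigned numElements, unsigned rank) {
    llvm::SmallVector<uint64_t, 8> indices; // hoisted out of the loop
    for (unsigned i = 0; i < numElements; ++i) {
      indices.clear(); // keeps capacity: no fresh allocation per element
      for (unsigned d = 0; d < rank; ++d)
        indices.push_back(i * rank + d);
      // ... consume `indices` ...
    }
  }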
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D112099
Add llvm.mlir.global_ctors and llvm.mlir.global_dtors ops, and support
translating them to the LLVM global_ctors/global_dtors global variables.
Differential Revision: https://reviews.llvm.org/D112524
This patch adds the inclusive clause (which was missed in the previous
reorganization - https://reviews.llvm.org/D110903) to the omp.wsloop
operation, and adds a test for validating it.
It also fixes the order clause, which was not accepting any values. It now
accepts "concurrent" as a value, as specified in the standard.
Reviewed By: kiranchandramohan, peixin, clementval
Differential Revision: https://reviews.llvm.org/D112198
Analyze ops in a pseudo-random order to see if any assertions are triggered. Randomizing the order of analysis likely worsens the quality of the bufferization result (more out-of-place bufferizations). However, assertions should never fail, as that would indicate a problem with our implementation.
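A minimal sketch of the idea (names are hypothetical); a fixed seed keeps failing runs reproducible while still permuting the analysis order:

  #include <algorithm>
  #include <random>
  #include <vector>

  namespace mlir { class Operation; }
  void analyzeOp(mlir::Operation *op); // per-op analysis entry point

  void analyzeInRandomOrder(std::vector<mlir::Operation *> &ops,
                            unsigned seed) {
    std::mt19937 rng(seed);
    std::shuffle(ops.begin(), ops.end(), rng);
    for (mlir::Operation *op : ops)
      analyzeOp(op);
  }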
Differential Revision: https://reviews.llvm.org/D112581
This patch supports the atomic construct (read and write) following
section 2.17.7 of the OpenMP 5.0 standard. Tests and a verifier for the
construct are also added.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D111992
This also fixes the vector.shuffle C++ builder which had an incorrect type assumption that triggers with this new rewrite.
The vector.shuffle semantics were correct though.
Differential Revision: https://reviews.llvm.org/D112578
The current implementation invokes materializations
whenever an input operand does not have a mapping for the
desired type, i.e. it requires materialization at the earliest possible
point. This conflicts with the goal of dialect conversion (and also the
current documentation), which states that a materialization is only
required if it is supposed to persist after the
conversion process has finished.
This revision refactors this such that whenever a target
materialization "might" be necessary, we insert an
unrealized_conversion_cast to act as a temporary materialization.
This allows for deferring the invocation of the user
materialization hooks until the end of the conversion process,
where we have a better sense of whether they are
actually necessary. This has several benefits:
* In some cases a target materialization hook is no longer
necessary
When performing a full conversion, there are some situations
where only a temporary materialization is necessary. Moving forward,
users in these situations won't need to provide any target
materializations, as the temporary materializations do not require
user-provided materialization hooks.
* getRemappedValue can now handle values that haven't been
converted yet
Before this commit, getting the remapped value of a value that
hadn't been converted yet was not well supported (making it
difficult or impossible to convert multiple operations in many
situations). This commit updates getRemappedValue to properly
handle this case by inserting temporary materializations when
necessary.
Another code-health benefit is that with this change we
can move most of the complexity related to materializations
to the end of the conversion process, instead of handling it
ad hoc while the conversion is happening.
Differential Revision: https://reviews.llvm.org/D111620
The result of a dyn_cast should be checked, and we should bail out if the cast failed.
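The guarded pattern this fix applies, sketched with an arbitrary op type for illustration:

  #include "mlir/IR/BuiltinOps.h"
  #include "mlir/Support/LogicalResult.h"

  mlir::LogicalResult handleOp(mlir::Operation *op) {
    auto func = llvm::dyn_cast<mlir::FuncOp>(op);
    if (!func)
      return mlir::failure(); // bail out; never use a null dyn_cast result
    // ... `func` is safe to use here ...
    return mlir::success();
  }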
Reviewed By: sjarus, NatashaKnk
Differential Revision: https://reviews.llvm.org/D112574
Rationale:
The currently used trait demanded that all types be the same,
which is not true (since the sparse part may change and the dim sizes
may be relaxed). This revision uses the correct trait and makes the
rank-match test explicit in the verify method.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D112576
This refactoring adds a few "event" functions (start/end loop-seq/loop) to
improve the readability of the core codegen function. It also prepares sparse
tensor output codegen, where these "event" functions will provide convenient
placeholders to start or stop insertion bookkeeping.
This revision also includes a few minor changes that had been
pending in my local workspace.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D112506