When the source tensor of a tensor.insert_slice is not equivalent to an in-place buffer, an extra copy is necessary. This revision adds the missing copy.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D106587
If this pass executes without shape inference, it's possible for unranked tensors
to appear in the IR. This pass should gracefully handle unranked tensors.
Differential Revision: https://reviews.llvm.org/D106617
- Change findDealloc() to return Optional<Operation *> and return None if > 1
dealloc is associated with the given alloc.
- Add findDeallocs() to return all deallocs associated with the given alloc.
- Fix current uses of findDealloc() to bail out if > 1 dealloc is found.
Differential Revision: https://reviews.llvm.org/D106456
Fix affine.for empty loop body folder in the presence of yield values.
The existing pattern ignored iter_args/yield values and thus crashed
when yield values had uses.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D106121
AffineForOp's folding hook is expected to fold away trivially empty
affine.for. This allows simplification to happen as part of the
canonicalizer and from wherever the folding hook is used. While more
complex analysis-based zero-trip-count detection is available from other
passes in analysis and transforms, simple and inexpensive folding had
been missing.
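As a minimal, hypothetical sketch of the kind of IR this folding targets (op
names are real, shapes and identifiers are illustrative): a trivially empty
affine.for whose body only forwards its iter_args can be replaced by its init
values:

  func @fold_empty_loop(%init: f32) -> f32 {
    // The body only forwards the iter_arg, so the result equals %init
    // regardless of the trip count; the folder replaces %res with %init.
    %res = affine.for %i = 0 to 10 iter_args(%arg = %init) -> (f32) {
      affine.yield %arg : f32
    }
    return %res : f32
  }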
Also, update/improve affine.for op documentation clarifying semantics of
the result values for zero trip count loops.
Differential Revision: https://reviews.llvm.org/D106123
Introduce a new rewrite driver (MultiOpPatternRewriteDriver) to rewrite
a supplied list of ops and, optionally, other ops affected by the
rewrites. Provide a knob to restrict rewrites strictly to the supplied
ops or to also include affected ops (but still not completely unrelated ops).
This rewrite driver is commonly needed to run any simplification and
cleanup at the end of a transforms pass or transforms utility in a way
that only simplifies relevant IR. This makes it easy to write test cases
while not performing unrelated whole IR simplification that may
invalidate other state at the caller.
The introduced utility provides more freedom to developers of transforms
and transform utilities to perform focused and local simplification. In
several cases, it provides greater efficiency as well as more
simplification when compared to repeatedly calling
`applyOpPatternsAndFold`; in other cases, it avoids the need to
undesirably call `applyPatternsAndFoldGreedily` to do unrelated
simplification in a FuncOp.
Update a few transformations that were earlier using
applyOpPatternsAndFold (SimplifyAffineStructures,
affineDataCopyGenerate, a linalg transform).
TODO:
- OpPatternRewriteDriver can be removed as it's a special case of
MultiOpPatternRewriteDriver, i.e., both can be merged.
Differential Revision: https://reviews.llvm.org/D106232
This patch allows iterating over typed enums via the ADT/Sequence utility.
It also changes the original design to better separate concerns:
- `StrongInt` only deals with safe `intmax_t` operations.
- `SafeIntIterator` presents the iterator and reverse iterator
interface but only deals with safe `StrongInt` internally.
- `iota_range` only deals with `SafeIntIterator` internally.
This design ensures that operations are always valid. In particular,
"Out of bounds" assertions fire when:
- the `value_type` is not representable as an `intmax_t`
- iterator operations make internal computation underflow/overflow
- the internal representation cannot be converted back to `value_type`
Differential Revision: https://reviews.llvm.org/D106279
This allows the caller to use non-const functions, e.g., `getOperandNumber`, etc. It
is expected that OpOperand is not modified in a callback function.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D106322
Bufferization handles all unknown ops conservatively. The patch ensures that accessing the dimension of an output tensor does not prevent in-place bufferization.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D106356
The `reifyReturnTypeShapesPerResultDim` method supports shape
inference for results that are ranked types. These are used lower in
the code generation stack than its counterpart `reifyReturnTypeShapes`,
which also supports unranked types and is more suited for use higher
up the compilation stack. To have separation of concerns, this method
is split into its own interface.
See discussion : https://llvm.discourse.group/t/better-layering-for-infershapedtypeopinterface/3823
Differential Revision: https://reviews.llvm.org/D106133
This is the first step to support software pipelining for scf.for loops.
This is only the transformation to create the pipelined kernel and
prologue/epilogue.
The scheduling needs to be given by the user, as many different algorithms
and heuristics could be applied.
This currently doesn't handle loop arguments; that will be added in a
follow-up patch.
Differential Revision: https://reviews.llvm.org/D105868
This makes the scope of this pass more explicit. The name
of this pass predates fusion on tensors using tile + fuse, and hence
the confusion.
Differential Revision: https://reviews.llvm.org/D106132
Added shape inference handling for convolution operations. This includes
conv2d, conv3d, depthwise_conv2d, and transpose_conv2d. With transpose_conv2d
we use the specified output shape when possible; however, we shape propagate
if the output shape attribute has dynamic values.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D105645
Add pattern to fold a TensorCast into a PadTensorOp if the cast removes static size information.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D106278
Insert ops replacing pad_tensor in front of the associated transfer_write / insert_slice op. Otherwise we may end up with invalid IR if one of the remaining transfer_write / insert_slice operands is defined after the pad_tensor op.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D106162
This simplifies the vector to LLVM lowering. Previously, both vector.load/store and vector.transfer_read/write lowered directly to LLVM. With this commit, there is a single path to LLVM vector load/store instructions and vector.transfer_read/write ops must first be lowered to vector.load/store ops.
* Remove vector.transfer_read/write to LLVM lowering.
* Allow non-unit memref strides on all but the most minor dimension for vector.load/store ops.
* Add maxTransferRank option to populateVectorTransferLoweringPatterns.
* vector.transfer_reads with changing element type can no longer be lowered to LLVM. (This functionality is needed only for SPIRV.)
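As a hedged illustration of the new path (shapes and names are hypothetical),
an in-bounds 1-D read such as:

  func @read(%m: memref<8x16xf32>, %i: index, %j: index) -> vector<16xf32> {
    %pad = constant 0.0 : f32
    // With this change, the transfer_read below is first rewritten to
    //   %v = vector.load %m[%i, %j] : memref<8x16xf32>, vector<16xf32>
    // and only the vector.load is then lowered to LLVM.
    %v = vector.transfer_read %m[%i, %j], %pad {in_bounds = [true]} : memref<8x16xf32>, vector<16xf32>
    return %v : vector<16xf32>
  }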
Differential Revision: https://reviews.llvm.org/D106118
Removed inconsistent name prefixes, added consistency checks
on debug strings, added more assertions to verify assumptions
that may be lifted in the future.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D106108
The dialect-specific cast between builtin (ex-standard) types and LLVM
dialect types was introduced a long time before built-in support for
unrealized_conversion_cast. It has a similar purpose, but is restricted
to compatible builtin and LLVM dialect types, which may hamper
progressive lowering and composition with types from other dialects.
Replace llvm.mlir.cast with unrealized_conversion_cast, and drop the
operation that became unnecessary.
Also make unrealized_conversion_cast legal by default in
LLVMConversionTarget, as the majority of conversions using it are partial
conversions that actually want the casts to persist in the IR. The
standard-to-llvm conversion, which is still expected to run last, cleans
up the remaining casts.
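For example (hypothetical value and types), where a partial conversion
previously materialized `%0 = llvm.mlir.cast %arg : index to i64`, it now
materializes the builtin cast instead:

  func @cast(%arg: index) -> i64 {
    // Same purpose as the old llvm.mlir.cast, but not restricted to
    // compatible builtin/LLVM type pairs.
    %0 = builtin.unrealized_conversion_cast %arg : index to i64
    return %0 : i64
  }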
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D105880
This may be necessary in partial multi-stage conversion when a container type
from dialect A containing types from dialect B goes through the conversion
where only dialect A is converted to the LLVM dialect. We will need to keep a
pointer-to-non-LLVM type in the IR until a further conversion can convert
dialect B types to LLVM types.
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D106076
Changes include the following:
1. Single iteration reduction loops being sibling-fused at the innermost insertion level
are skipped from being considered as sequential loops.
Otherwise, the slice bounds of these loops are reset.
2. Promote loops that are skipped in the previous step into outer loops.
3. Two utility functions - buildSliceTripCountMap, getSliceIterationCount - are moved from
mlir/lib/Transforms/Utils/LoopFusionUtils.cpp to mlir/lib/Analysis/Utils.cpp
Reviewed By: bondhugula, vinayaka-polymage
Differential Revision: https://reviews.llvm.org/D104249
This format was missing from the support library. Although there are some
subtleties reading in an external format for int64 as double, there is no
good reason to omit support for this data type from the support library.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D106016
Arbitrary shifts have some complications, but shifts by invariants
(viz. a tensor index expression only on the left-hand side) can be easily
handled with the conjunctive rule.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D106002
Instead of relying on ad hoc bounds calculations, use a projection-based
implementation. This simplifies the implementation and finds more static
constant sizes than previously.
Differential Revision: https://reviews.llvm.org/D106054
Tiling can be enabled with `linalg-tile-pad-tensor-ops`. Only scf::ForOp can be generated at the moment.
Differential Revision: https://reviews.llvm.org/D105460
Factor out the functionality into a new function, so that it can be used for creating PadTensorOp tiles.
Differential Revision: https://reviews.llvm.org/D105458
Adds zero-preserving unary operators from std. Also adds xor.
Performs minor refactoring to remove the "zero" node, and pushes
the irregular logic for negi (not supported in std) into one place.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D105928
Clean up corner cases related to elemental tensor / buffer type return values that would previously fail.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D105857
Previously, linalg bufferization always had to be conservative at function boundaries and assume the most dynamic strided memref layout.
This revision introduces a mechanism to specify a linalg.buffer_layout function argument attribute that carries an affine map used to set a less pessimistic layout.
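A hedged sketch of what such an annotation could look like (identity layout,
hypothetical function and types):

  func @foo(%A: tensor<?x?xf32> {linalg.buffer_layout = affine_map<(d0, d1) -> (d0, d1)>}) -> tensor<?x?xf32> {
    // The attribute tells bufferization it may assume an identity (contiguous)
    // layout for %A instead of the fully dynamic strided default.
    return %A : tensor<?x?xf32>
  }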
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D105859
Integral AND and OR follow the simple conjunction and disjunction rules
for lattice building. This revision also completes some of the Merger
refactoring by moving the remaining merger-specific parts from
sparsification into the utils files.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D105851
Pool operations perform the same shape propagation. Included the shape
propagation and tests for avg_pool2d and max_pool2d.
Differential Revision: https://reviews.llvm.org/D105665
Right now, we only accept x/c with nonzero c, since this
conceptually can be treated as an x*(1/c) conjunction for both
FP and INT as far as lattice computations go. The codegen
keeps the division, though, to preserve precise semantics.
See discussion:
https://llvm.discourse.group/t/sparse-tensors-in-mlir/3389/28
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D105731
Previously, comprehensive bufferization of scf.yield did not have enough information
to detect whether an enclosing scf::for bbarg would bufferize to a buffer equivalent
to that of the matching scf::yield operand.
As a consequence, a separate sanity check step was required to determine whether
bufferization occurred properly.
This late check would miss the case of calling a function in a loop.
Instead, we now pass and update aliasInfo during bufferization, making it possible to
improve bufferization of scf::yield and drop that post-pass check.
Add an example use case that was failing previously.
This slightly modifies the error conditions, which are also updated as part of this
revision.
Differential Revision: https://reviews.llvm.org/D105803
The trait was inconsistent with the other broadcasting logic here. Also
fix the printing here to use ? rather than -1 in the error.
Differential Revision: https://reviews.llvm.org/D105748
Use a modeling similar to SCF ParallelOp to support arbitrary parallel
reductions. The two main differences are: (1) reductions are named and declared
beforehand similarly to functions using a special op that provides the neutral
element, the reduction code and optionally the atomic reduction code; (2)
reductions go through memory instead because this is closer to the OpenMP
semantics.
See https://llvm.discourse.group/t/rfc-openmp-reduction-support/3367.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D105358
After the MemRef has been split out of the Standard dialect, the
conversion to the LLVM dialect remained as a huge monolithic pass.
This is undesirable for the same complexity management reasons as having
a huge Standard dialect itself, and is even more confusing given the
existence of a separate dialect. Extract the conversion of the MemRef
dialect operations to LLVM into a separate library and a separate
conversion pass.
Reviewed By: herhut, silvas
Differential Revision: https://reviews.llvm.org/D105625
`GeneralizePadTensorOpPattern` might generate `tensor.dim` op so the
TensorDialect should be marked legal. This pattern should also
transform all `linalg.pad_tensor` ops, so mark those as illegal. Those
changes were missed in a previous change:
https://reviews.llvm.org/D105293
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D105642
This reverts commit 6c0fd4db79.
This simple implementation is unfortunately not extensible and needs to be reverted.
The extensible way should be to extend https://reviews.llvm.org/D104321.
Introduce the exp and log function in OpDSL. Add the soft plus operator to test the emitted IR in Python and C++.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D105420
Verify the number of results matches exactly the number of output tensors. Simplify the FillOp verification since part of it became redundant.
Differential Revision: https://reviews.llvm.org/D105427
Simplify vector unrolling pattern to be more aligned with rest of the
patterns and be closer to vector distribution.
The new implementation uses ExtractStridedSlice/InsertStridedSlice
instead of the Tuple ops. After this change the ops based on Tuple don't
have any more uses, so they can be removed.
This allows removing a significant amount of dead code and will allow
extending the unrolling code going forward.
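As an illustrative sketch (hypothetical shapes and names), the unrolled pieces
are now carried by strided-slice ops rather than tuples:

  func @slices(%v: vector<8xf32>, %w: vector<8xf32>) -> vector<8xf32> {
    // Extract a piece of the source vector ...
    %piece = vector.extract_strided_slice %v {offsets = [0], sizes = [4], strides = [1]} : vector<8xf32> to vector<4xf32>
    // ... and insert it back into a destination vector.
    %res = vector.insert_strided_slice %piece, %w {offsets = [0], strides = [1]} : vector<4xf32> into vector<8xf32>
    return %res : vector<8xf32>
  }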
Differential Revision: https://reviews.llvm.org/D105381
Add the rewrite of PadTensorOp to InitTensor + InsertSlice before the
bufferization analysis starts.
This is exercised via a more advanced integration test.
Since the new behavior triggers folding, 2 tests need to be updated.
One of those seems to exhibit a folding issue with `switch` and is modified.
Differential Revision: https://reviews.llvm.org/D105549
Refactor the original code to rewrite a PadTensorOp into a
sequence of InitTensorOp, FillOp and InsertSliceOp without
vectorization by default. `GenericPadTensorOpVectorizationPattern`
provides a customized OptimizeCopyFn to vectorize the
copying step.
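A minimal sketch of the rewrite, with hypothetical static shapes (the actual
pattern also handles dynamic shapes):

  func @pad(%t: tensor<4x5xf32>) -> tensor<6x8xf32> {
    %cst = constant 0.0 : f32
    // The pattern rewrites this pad roughly into:
    //   %init = linalg.init_tensor [6, 8] : tensor<6x8xf32>
    //   %fill = linalg.fill(%cst, %init) : f32, tensor<6x8xf32> -> tensor<6x8xf32>
    //   %res  = tensor.insert_slice %t into %fill[0, 0] [4, 5] [1, 1]
    //           : tensor<4x5xf32> into tensor<6x8xf32>
    %pad = linalg.pad_tensor %t low[0, 0] high[2, 3] {
    ^bb0(%i: index, %j: index):
      linalg.yield %cst : f32
    } : tensor<4x5xf32> to tensor<6x8xf32>
    return %pad : tensor<6x8xf32>
  }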
Reviewed By: silvas, nicolasvasilache, springerm
Differential Revision: https://reviews.llvm.org/D105293
The `bufferizesToMemoryRead` condition was too optimistic in the case
of operands that map to a block argument.
This is the case for ForOp and TiledLoopOp.
For such ops, forward the call to all uses of the matching BBArg.
Differential Revision: https://reviews.llvm.org/D105540
When an affine.if operation is returning/yielding results and has a
trivially true or false condition, then its 'then' or 'else' block,
respectively, is promoted to the affine.if's parent block and then, the
affine.if operation is replaced by the correct results/yield values.
Relevant test cases are also added.
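A small hypothetical example of the kind of IR this now handles:

  #set = affine_set<(d0) : (0 >= 0)>
  func @fold_if(%a: f32, %b: f32) -> f32 {
    %c = constant 0 : index
    // The condition is trivially true, so the 'then' block is promoted
    // into the parent block and %r is replaced by %a.
    %r = affine.if #set(%c) -> f32 {
      affine.yield %a : f32
    } else {
      affine.yield %b : f32
    }
    return %r : f32
  }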
Signed-off-by: Srishti Srivastava <srishti.srivastava@polymagelabs.com>
Differential Revision: https://reviews.llvm.org/D105418
Split out GPU ops library from GPU transforms. This allows libraries to
depend on GPU Ops without needing/building its transforms.
Differential Revision: https://reviews.llvm.org/D105472
The normalizeAffineForOp and normalizeAffineParallel methods were
misplaced in the AffineLoopNormalize pass file while their declarations
were in affine utils. Move these to affine Utils.cpp. NFC.
Differential Revision: https://reviews.llvm.org/D105468
This changes the custom syntax of the emitc.include operation for standard includes.
Reviewed By: marbre
Differential Revision: https://reviews.llvm.org/D105281
Remove `getDynOperands` and `createOrFoldDimOp` from MemRef.h to decouple MemRef a bit from Tensor. These two functions are used in other dialects/transforms.
Differential Revision: https://reviews.llvm.org/D105260
This revision extends the sparse compiler support from fp/int addition and multiplication to fp/int negation and subtraction, thereby increasing the scope of sparse kernels that can be compiled.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D105306
The implementation has become too unwieldy and cognitive overhead wins.
Instead compress the implementation in preparation for additional lowering paths.
This is a resubmit of https://reviews.llvm.org/D105359 without ordering ambiguities.
Differential Revision: https://reviews.llvm.org/D105367
Fusion by linearization should not happen when
- The reshape is expanding and it is a consumer
- The reshape is collapsing and is a producer.
The bug introduced in this logic by some recent refactoring resulted
in a crash.
To enforce this (negative) use case, add a test that reproduces the
error and verifies the fix.
Differential Revision: https://reviews.llvm.org/D104970
The implementation has become too unwieldy and cognitive overhead wins.
Instead compress the implementation in preparation for additional lowering paths.
Differential Revision: https://reviews.llvm.org/D105359
Add the min operation to OpDSL and introduce a min pooling operation to test the implementation. The patch is a sibling of the max operation patch https://reviews.llvm.org/D105203 and the min operation is again lowered to a compare and select pair.
Differential Revision: https://reviews.llvm.org/D105345
To make TensorExp clearer, this change refactors the e0/e1 fields into a union: e0/e1 for a binary op tensor expression, and tensor_num for a tensor-kinded tensor expression.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D105303
Add the max operation to the OpDSL and introduce a max pooling operation to test the implementation. As MLIR has no builtin max operation, the max function is lowered to a compare and select pair.
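A hedged sketch of the compare-and-select lowering of the max function body
(names and the exact predicate are illustrative):

  func @max(%a: f32, %b: f32) -> f32 {
    // max(a, b) lowered to a compare and select pair.
    %cond = cmpf ogt, %a, %b : f32
    %max = select %cond, %a, %b : f32
    return %max : f32
  }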
Differential Revision: https://reviews.llvm.org/D105203
Added InferReturnTypeComponents for NAry operations, reshape, and reverse.
With the additional tosa-infer-shapes pass, we can infer/propagate shapes
across a set of TOSA operations. The current version does not modify the
FuncOp type; instead, it inserts an unrealized conversion cast prior to any
new non-matching returns.
Differential Revision: https://reviews.llvm.org/D105312
Rationale:
Follow-up on migrating lattice and tensor expression related methods into the new utility.
This also prepares the next step of generalizing the op kinds that are handled.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D105219
ConstantOps are only supported in the ModulePass because they require a GlobalCreator object that must be constructed from a ModuleOp.
If the standalone FunctionPass encounters a ConstantOp, bufferization fails.
Differential revision: https://reviews.llvm.org/D105156
This revision drops the comprehensive bufferization Function pass, which has issues when trying to bufferize constants.
Instead, only support the comprehensive-module-bufferize by default.
Differential Revision: https://reviews.llvm.org/D105228
Also add an integration test that connects all the dots end to end, including with cast to unranked tensor for external library calls.
Differential Revision: https://reviews.llvm.org/D105106
Cross function boundary bufferization support is added.
This is enabled by cross-function boundary alias analysis, for which the bufferization process is extended: it can now modify the BufferizationAliasInfo as new ops are introduced.
A number of simplifying assumptions are made:
1. by default we bufferize to the most dynamic strided memref type, further memref::CastOp canonicalizations are expected to clean up the IR.
2. in the current implementation, the stride information is always erased at function boundaries. A subsequent pass will be required to analyze the meet of all call ops to a function and decide whether more static buffer types can be used. This will potentially clone functions when it is deemed profitable to do so (e.g. when the stride-1 dimension may vary).
3. external functions always bufferize to the most dynamic strided memref version. This may require special annotations for specifying that particular operands of top-level functions have a contiguous buffer layout.
An alternative to point 3. would be to support tensor layout annotations, which is currently not supported in MLIR.
Differential revision: https://reviews.llvm.org/D104873
* Split memref.dim into two operations: memref.dim and tensor.dim (see the sketch after this list). Both ops have the same builder interface and op argument names, so that they can be used with templates in patterns that apply to both tensors and memrefs (e.g., some patterns in Linalg).
* Add constant materializer to TensorDialect (needed for folding in affine.apply etc.).
* Remove some MemRefDialect dependencies, make some explicit.
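A minimal hypothetical example of the split:

  func @dims(%t: tensor<?x4xf32>, %m: memref<?x4xf32>) -> (index, index) {
    %c0 = constant 0 : index
    // Querying a dimension of a tensor value now uses tensor.dim ...
    %d0 = tensor.dim %t, %c0 : tensor<?x4xf32>
    // ... while querying a dimension of a memref value uses memref.dim.
    %d1 = memref.dim %m, %c0 : memref<?x4xf32>
    return %d0, %d1 : index, index
  }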
Differential Revision: https://reviews.llvm.org/D105165
Uses the elementwise interface to generalize the canonicalization pattern and adds a new
pattern for the vector.contract case.
Differential Revision: https://reviews.llvm.org/D104343
The scf.execute_region op is used to allow multiple blocks within SCF constructs. If the containing region allows multiple blocks, inline the region.
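A hedged sketch (hypothetical example): since a function body already allows
multiple blocks, the canonicalization can inline the execute_region below,
turning its blocks into blocks of the parent region:

  func @inline_region(%c: i1, %a: i32, %b: i32) -> i32 {
    // This multi-block region can be folded into the func body.
    %r = scf.execute_region -> i32 {
      cond_br %c, ^bb1, ^bb2
    ^bb1:
      scf.yield %a : i32
    ^bb2:
      scf.yield %b : i32
    }
    return %r : i32
  }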
Differential Revision: https://reviews.llvm.org/D104960