Even when all loops are parallel, reading the output value is still allowed, so
we don't have to handle reduction loops differently.
Differential Revision: https://reviews.llvm.org/D109851
Add the addTileLoopIvsToIndexOpResults method to shift the IndexOp results after tiling.
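For illustration, a minimal sketch of the shift itself, using today's arith
dialect and invented names (this is not the actual upstream helper):
```
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/Linalg/IR/Linalg.h"

using namespace mlir;

// Hedged sketch: offset a linalg.index result by the enclosing tile loop IV
// so the op keeps producing the global (untiled) iteration index.
static void shiftIndexByTileLoopIv(OpBuilder &b, linalg::IndexOp indexOp,
                                   Value tileLoopIv) {
  b.setInsertionPointAfter(indexOp);
  Value shifted = b.create<arith::AddIOp>(indexOp.getLoc(),
                                          indexOp.getResult(), tileLoopIv);
  // Redirect all other uses of the index to the shifted value; the add
  // itself must keep using the original result.
  indexOp.getResult().replaceAllUsesExcept(shifted, shifted.getDefiningOp());
}
```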
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D109761
The pattern was returning success even when it did no work, causing pattern
application to run up to the max iteration count and then fail.
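As a general illustration, consider a made-up pattern (not the one fixed
here): it must report failure when it leaves the IR unchanged so the greedy
driver can reach a fixpoint.
```
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Made-up example: report failure() when no rewrite happens; returning
// success() without changing the IR makes the driver loop until the max
// iteration count.
struct ExamplePattern : public RewritePattern {
  ExamplePattern(MLIRContext *ctx)
      : RewritePattern(MatchAnyOpTypeTag(), /*benefit=*/1, ctx) {}

  LogicalResult matchAndRewrite(Operation *op,
                                PatternRewriter &rewriter) const override {
    if (op->getNumResults() != 1)
      return failure(); // nothing to do: say so instead of claiming success
    // ... perform the actual rewrite via `rewriter` here ...
    return success();
  }
};
```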
Reviewed By: nicolasvasilache, mravishankar
Differential Revision: https://reviews.llvm.org/D109791
The TosaOp definition had an artificial constraint that the input/output types
needed to be ranked to invoke the quantization builder. This is incorrect, as an
unranked tensor could still be quantized.
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D109863
We are having issues running the integration test of the sparse compiler
on AArch64 (crashing in the lib). This revision adds more assertions.
Reviewed By: jsetoain
Differential Revision: https://reviews.llvm.org/D109861
This enables the sparsification of more kernels, such as convolutions
where there is an x(i+j) subscript. It also enables more tensor invariants
such as x(1) or other affine subscripts such as x(i+1). Currently, we
reject sparsity altogether for such tensors. Despite this restriction,
however, we can already handle a lot more kernels with compound subscripts
for dense access (viz. convolution with dense input and sparse filter).
Some unit tests and an integration test demonstrate new capability.
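For reference, the compound subscript x(i+j) of a 1-D convolution corresponds
to an indexing map built roughly as follows; this is a minimal sketch using
MLIR's affine-map C++ API, and the helper name is invented:
```
#include "mlir/IR/AffineExpr.h"
#include "mlir/IR/AffineMap.h"

using namespace mlir;

// Sketch: the indexing map for a compound subscript x(i+j), with output
// index i and filter index j as the two loop dimensions.
AffineMap buildConvInputMap(MLIRContext *ctx) {
  AffineExpr i, j;
  bindDims(ctx, i, j);
  // x(i+j): two dims, no symbols, single result expression i + j.
  return AffineMap::get(/*dimCount=*/2, /*symbolCount=*/0, i + j);
}
```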
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D109783
Adds a new rewrite directive `returnType` that can be added at the end of an op's
argument list to explicitly specify return types.
```
(OpX $v0, $v1, (returnType "$_builder.getI32Type()"))
```
Pass in a bound value to copy its return type, or pass a native code call to
dynamically create new types.
```
(OpX $v0, $v1, (returnType $v0, (NativeCodeCall<"..."> $v1)))
```
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D109472
There are two main versions of depthwise conv, depending on whether the
multiplier is 1 or not. In cases where m == 1, we should use the version without
the multiplier channel, as it allows for greater optimization.
Add lowerings for the quantized/float versions with a multiplier of one.
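The shape relationship that distinguishes the two kernels, as a hedged sketch
(the helper below is hypothetical):
```
#include <cstdint>

// Hypothetical helper: with multiplier M, a depthwise conv produces C * M
// output channels; for M == 1 the channel-expansion dimension is redundant,
// so the simpler multiplier-free kernel applies.
int64_t depthwiseOutChannels(int64_t inputChannels, int64_t multiplier) {
  return inputChannels * multiplier; // M == 1 leaves the channel count as-is
}
```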
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D108959
This revision fixes a corner case that could appear due to incorrect insertion point behavior in comprehensive bufferization.
Differential Revision: https://reviews.llvm.org/D109830
AliasInfo can now use union-find for a much more efficient implementation.
This brings no functional changes but large performance gains on more complex examples.
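A minimal sketch of the union-find idea using LLVM's existing helper; this is
an illustration, not the actual AliasInfo code:
```
#include "llvm/ADT/EquivalenceClasses.h"

// Merging alias sets and testing membership become near-constant-time
// operations with a union-find structure such as llvm::EquivalenceClasses.
bool demoAliasSets() {
  llvm::EquivalenceClasses<int> aliasSets;
  aliasSets.unionSets(1, 2); // values 1 and 2 now form one alias set
  aliasSets.unionSets(2, 3); // transitively, 1, 2, and 3 all alias
  return aliasSets.isEquivalent(1, 3); // true
}
```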
Differential Revision: https://reviews.llvm.org/D109819
This seems in line with the intent and how we build tools around it.
Update the description for the flag accordingly.
Also use an injected thread pool in MLIROptMain; threads are now created
up-front and reused across split buffers.
Differential Revision: https://reviews.llvm.org/D109802
Both copy and alloc ops use the memref dialect after this change.
Reviewed By: silvas, mehdi_amini
Differential Revision: https://reviews.llvm.org/D109480
Add the makeComposedExtractSliceOp method that creates an ExtractSliceOp and folds chains of ExtractSliceOps by summing their offsets and multiplying their strides.
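Per dimension, the composition works out as follows; a hedged arithmetic
sketch with invented scalar names (for a unit outer stride the folded offset
reduces to the plain sum):
```
#include <cstdint>

// Chained slices: inner(i) = outer(o2 + i * s2)
//                          = base[o1 + s1 * (o2 + i * s2)]
//                          = base[(o1 + s1 * o2) + i * (s1 * s2)]
int64_t foldedOffset(int64_t o1, int64_t s1, int64_t o2) {
  return o1 + s1 * o2; // reduces to o1 + o2 when the outer stride is 1
}
int64_t foldedStride(int64_t s1, int64_t s2) {
  return s1 * s2;
}
```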
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D109601
This tiling option scalarizes all dynamic dimensions, i.e., it tiles all dynamic dimensions by 1.
This option is useful for linalg ops with partly dynamic tensor dimensions. E.g., such ops can appear in the partial iteration after loop peeling. After scalarizing dynamic dims, those ops can be vectorized.
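A sketch of how such tile sizes could be derived; this is an illustration, not
the option's actual implementation (a tile size of 0 conventionally means "do
not tile" in linalg):
```
#include "mlir/IR/BuiltinTypes.h"
#include "llvm/ADT/SmallVector.h"

using namespace mlir;

// Tile every dynamic dimension by 1 (scalarize it) and leave static
// dimensions untiled. Assumes a ranked shaped type.
SmallVector<int64_t> scalarizeDynDimsTileSizes(ShapedType type) {
  SmallVector<int64_t> tileSizes;
  for (int64_t dim : type.getShape())
    tileSizes.push_back(ShapedType::isDynamic(dim) ? 1 : 0);
  return tileSizes;
}
```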
Differential Revision: https://reviews.llvm.org/D109268
Only scf.for loops are supported at the moment. linalg.tiled_loop support will be added in a subsequent commit.
Only static tensor sizes are supported. Loops for dynamic tensor sizes can be peeled, but the generated code is not optimal due to a missing canonicalization pattern.
Differential Revision: https://reviews.llvm.org/D109043
This revision allows hoisting static alloc/dealloc pairs as high as possible during ComprehensiveBufferization.
This also aligns such allocated buffers to 128B by default.
This change exposed some issues with insertion points and a missing copy, which are also fixed in this revision; tests are updated accordingly.
Differential Revision: https://reviews.llvm.org/D109684
Previously, we would insert a DimOp and rely on later canonicalizations.
Unfortunately, reifyShape-style rewrites are no longer canonicalizations.
This introduces undesirable pass dependencies.
Instead, immediately reify the result shape and avoid the DimOp altogether.
This is akin to a local folding, which avoids introducing more reliance on `-resolve-shaped-type-result-dims` (similar to compositions of `affine.apply` by construction to avoid chains of size > 1).
It does not completely get rid of the reliance on the pass as the process is merely local: calling the pass may still be necessary for global effects. Indeed, one of the tests still requires the pass.
Differential Revision: https://reviews.llvm.org/D109571
The conversion pattern is particularly useful for converting block arguments
in the master op.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D109610
Tosa.while shape inference requires repeatedly running shape inference across
the body of the loop until the types become static, as we do not know the number
of iterations required by the loop body. Once the least specific arguments are
known, they are propagated to both regions.
To determine the final result type, the least restrictive types are taken from
all yields.
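In pseudocode, the fixpoint looks roughly like this; every helper below is
declared only for the sketch and does not exist upstream:
```
#include "mlir/IR/Types.h"

using mlir::Type;

// Invented helpers for this sketch:
Type inferBodyAndGetYieldType(Type argTy); // re-run body shape inference
Type leastRestrictiveType(Type a, Type b); // join two inferred types

// Re-infer the loop body until the block argument types stop changing,
// then the joined type is the final result type.
Type inferWhileArgType(Type initialOperandType) {
  Type argTy = initialOperandType;
  while (true) {
    Type yieldedTy = inferBodyAndGetYieldType(argTy);
    // Keep only what both agree on: the least restrictive type.
    Type joined = leastRestrictiveType(argTy, yieldedTy);
    if (joined == argTy)
      break; // fixpoint: another iteration cannot change the types
    argTy = joined; // propagate the less specific type to both regions
  }
  return argTy;
}
```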
Differential Revision: https://reviews.llvm.org/D108801
The original version of the bufferization pattern for linalg.generic would
manually clone operations within the region to the bufferized clone of the
operation. This triggers legality requirements on those operations in the
conversion infra. Instead, this now uses the rewriter to inline the region,
avoiding those legality requirements.
Differential Revision: https://reviews.llvm.org/D109581
Generate an scf.for instead of an scf.if for the partial iteration. This is for consistency reasons: The peeling of linalg.tiled_loop also uses another loop for the partial iteration.
Note: Canonicalization patterns may rewrite partial iterations to scf.if afterwards.
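For intuition, the split-point arithmetic behind the peeling, as a hedged
C-style sketch with invented names:
```
#include <cstdint>

// The main loop stops at the last multiple of `step` reachable from `lb`;
// the remainder becomes a second loop that executes at most once, matching
// the scf.for-based partial iteration described above.
void peeledLoops(int64_t lb, int64_t ub, int64_t step) {
  int64_t splitBound = lb + ((ub - lb) / step) * step;
  for (int64_t iv = lb; iv < splitBound; iv += step) {
    // ... full iterations of size `step` ...
  }
  for (int64_t iv = splitBound; iv < ub; iv += step) {
    // ... partial iteration: runs zero times or once ...
  }
}
```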
Differential Revision: https://reviews.llvm.org/D109568
Extend the signature of the tile loop nest region builder to take all operand values to use and not just the scf::For iterArgs. This change allows us to pass in all block arguments of TiledLoop and use them directly instead of replacing them after the loop generation.
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D109569
This revision fixes the traversal order of extract_slice during the inplace analysis.
It was previously thought that such ops could be analyzed at the very end.
This is unfortunately not true, as the AliasInfo for dependents of these ops needs to be updated.
This change allows the aliases introduced by the bufferization of extract_slice to be properly propagated.
Differential Revision: https://reviews.llvm.org/D109519
This PR adds the missing AtomicRMWKind::min/max cases, which we would like to use for min/max reduction loop vectorization.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D104881
This renames the primary methods for creating a zero value to `getZero`
instead of `getNullValue` and renames predicates like `isAllOnesValue`
to simply `isAllOnes`. This achieves two things:
1) This starts standardizing predicates across the LLVM codebase,
following (in this case) ConstantInt. The word "Value" doesn't
convey anything of merit, and is missing from some of the other names.
2) Calling an integer "null" doesn't make any sense. The original sin
here is mine and I've regretted it for years. This moves us to calling
it "zero" instead, which is correct!
APInt is widely used and I don't think anyone is keen to take massive source
breakage on anything so core, at least not all in one go. As such, this
doesn't actually delete any entrypoints, it "soft deprecates" them with a
comment.
Included in this patch are changes to a bunch of the codebase, but there are
more. We should normalize SelectionDAG and other APIs as well, which would
make the API change more mechanical.
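A hedged example of the new spellings in client code:
```
#include "llvm/ADT/APInt.h"

using namespace llvm;

// The old names still compile; they are only soft-deprecated.
bool demoApIntNames() {
  APInt zero = APInt::getZero(64);    // was: APInt::getNullValue(64)
  APInt ones = APInt::getAllOnes(64); // was: APInt::getAllOnesValue(64)
  return zero.isZero() && ones.isAllOnes();
}
```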
Differential Revision: https://reviews.llvm.org/D109483