Add operations abs, ceil, floor, and neg to the C++ API and Python API.
Add test cases.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D121339
This patch moves the test cases from
`mlir/test/Target/LLVMIR/openmp-llvm-bad-schedule-modifier.mlir` to
`mlir/test/Dialect/OpenMP/invalid.mlir` as they test the verifier
(not the translation to LLVM IR).
Reviewed By: NimishMishra
Differential Revision: https://reviews.llvm.org/D120877
This patch adds lowering from omp.atomic.update to LLVM IR. Whenever a
special LLVM IR instruction is available for the operation, an `atomicrmw`
instruction is emitted; otherwise, a compare-exchange-loop-based update
is emitted.
Depends on D119522
Reviewed By: ftynse, peixin
Differential Revision: https://reviews.llvm.org/D119657
Currently when we fold an empty loop, we assume that any loop
with iterArgs returns its iterArgs in order, which is not always
the case. It may return values defined outside of the loop or
return its iterArgs out of order. This patch adds support for
those cases.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D120776
This revision adds support for the linalg.index operation in the sparse compiler
pipeline. In essence, this adds the ability to refer to indices in
the tensor index expression, as illustrated below:
Y[i, j, k, l, m] = T[i, j, k, l, m] * i * j
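A rough sketch of a 2-D variant of such a kernel in IR form (`#trait` and `#SparseMatrix` are hypothetical aliases; the real kernels live in the sparse compiler tests):
```mlir
#trait = {
  indexing_maps = [
    affine_map<(i, j) -> (i, j)>,  // T
    affine_map<(i, j) -> (i, j)>   // Y
  ],
  iterator_types = ["parallel", "parallel"]
}
// Y[i, j] = T[i, j] * i
%0 = linalg.generic #trait
    ins(%t : tensor<?x?xi64, #SparseMatrix>)
    outs(%y : tensor<?x?xi64>) {
^bb0(%a: i64, %out: i64):
  %i = linalg.index 0 : index            // refer to index i in the expression
  %ii = arith.index_cast %i : index to i64
  %p = arith.muli %a, %ii : i64
  linalg.yield %p : i64
} -> tensor<?x?xi64>
```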
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D121251
It's fine to use any integer (vector) values regardless of the
signedness. The opcode decides how to interpret the bits.
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D121238
A lot of test passes are currently anchored on FuncOp, but this dependency
is generally just historical. A majority of these test passes can run on
any operation, or can operate on a specific interface
(FunctionOpInterface/SymbolOpInterface).
This allows for greatly reducing the API dependency on FuncOp, which
is slated to be moved out of the Builtin dialect.
Differential Revision: https://reviews.llvm.org/D121191
Commit rG1a2bb03edab9d7aa31beb587d0c863acc6715d27 introduced a pattern
to convert dynamic dimensions in operands of `GenericOp`s to static
values based on indexing maps and shapes of other operands. The logic
is directly applicable to any `LinalgOp`. Move that pattern to an
`OpInterfaceRewritePattern`.
Differential Revision: https://reviews.llvm.org/D120968
This is a pass that can be used by downstream consumers directly
to avoid the boilerplate of wrapping the `populate*Patterns` functions in a pass.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D121222
A `tensor.cast` consumer can be folded with its producer. This is
beneficial only if the result of the tensor cast is more static than
the source. This patch adds a utility function to check that this is
the case, and adds a couple of canonicalization patterns that fold an
operation with `tensor.cast` consumers.
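For illustration, a sketch of the kind of fold this enables (using linalg.fill as the producer; the exact patterns are in the patch):
```mlir
// The cast result is more static than its source, so it can fold into the producer.
%0 = linalg.fill(%cst, %init) : f32, tensor<?x?xf32> -> tensor<?x?xf32>
%1 = tensor.cast %0 : tensor<?x?xf32> to tensor<4x8xf32>
// After folding, the producer yields the static type directly:
//   %i = tensor.cast %init : tensor<?x?xf32> to tensor<4x8xf32>
//   %1 = linalg.fill(%cst, %i) : f32, tensor<4x8xf32> -> tensor<4x8xf32>
```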
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D120950
With the recent improvements to OpDSL it is cheap to reintroduce a linalg.copy operation.
This operation is needed in at least 2 cases:
1. for copies that may want to change the elemental type (e.g. cast, truncate, quantize, etc)
2. to specify new tensors that should bufferize to a copy operation. The linalg.generic form
always folds away, which is not always the right call.
Differential Revision: https://reviews.llvm.org/D121230
Allow pointwise operations to take rank zero input tensors similarly to scalar inputs. Use an empty indexing map to broadcast rank zero tensors to the iteration domain of the operation.
Depends On D120734
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D120807
Simplify tests that use `linalg.fill_rng_2d` to focus on testing the `const` and `index` functions. Additionally, clean up emit_misc.py to use simpler test functions and fix an error message in config.py.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D120734
Add a FillOpInterface similar to the contraction and convolution op interfaces. The FillOpInterface is a preparation step to replace linalg.fill by its OpDSL version linalg.fill_tensor. The interface implements the `value()`, `output()`, and `result()` methods that by default are not available on linalg.fill_tensor.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D120725
In quantized computation, there are casting ops around computation ops.
Reorder the ops to make reduce-to-contract actually work.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D120760
The current StandardToLLVM conversion patterns only really handle
the Func dialect. The pass itself adds patterns for Arithmetic/CFToLLVM, but
those should be/will be split out in a followup. This commit focuses solely
on being an NFC rename.
Aside from the directory change, the pattern and pass creation API have been renamed:
* populateStdToLLVMFuncOpConversionPattern -> populateFuncToLLVMFuncOpConversionPattern
* populateStdToLLVMConversionPatterns -> populateFuncToLLVMConversionPatterns
* createLowerToLLVMPass -> createConvertFuncToLLVMPass
Differential Revision: https://reviews.llvm.org/D120778
The default lowering of vector transpose operations generates a large sequence of
scalar extract/insert operations, one pair for each scalar element in the input tensor.
In other words, the vector transpose is scalarized. However, there are transpose
patterns where one or more adjacent high-order dimensions are not transposed (for
example, in the transpose pattern [1, 0, 2, 3], dimensions 2 and 3 are not transposed).
This patch improves the lowering of those cases by not scalarizing them and extracting/
inserting a full n-D vector, where 'n' is the number of adjacent high-order dimensions
not being transposed. By doing so, we prevent the scalarization of the code and generate a
more performant vector version.
Paradoxically, this patch shouldn't improve the performance of transpose operations if
we are using LLVM. The LLVM pipeline is able to optimize away some of the extract/insert
operations and the SLP vectorizer converts the scalar operations back to their vector
form. However, scalarizing a vector version of the code in MLIR and relying on the SLP
vectorizer to reconstruct the vector code again is highly undesirable for several reasons.
Reviewed By: nicolasvasilache, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D120601
This patch extends the existing if combining canonicalization to also handle the case where a value returned by the first if is used within the body of the second if.
This patch also extends if combining to support ifs whose conditions are logical negations of each other.
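A sketch of the newly supported case ("test.*" ops are placeholders):
```mlir
%0 = scf.if %cond -> (i32) {
  %a = "test.a"() : () -> i32
  scf.yield %a : i32
} else {
  %b = "test.b"() : () -> i32
  scf.yield %b : i32
}
scf.if %cond {
  "test.use"(%0) : (i32) -> ()  // uses a value returned by the first if
}
// The two ifs can now be merged into a single scf.if on %cond.
```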
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D120924
We can simplify an extractvalue of an insertvalue to extract out of the base of the insertvalue, if the insert and extract are at distinct and non-prefixed indices.
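For example (a sketch in LLVM dialect syntax, which may differ slightly between versions):
```mlir
%0 = llvm.insertvalue %v, %base[0] : !llvm.struct<(i32, f32)>
// Index 1 is distinct from (and not a prefix of) index 0, so the insert
// cannot affect the extracted value:
%1 = llvm.extractvalue %0[1] : !llvm.struct<(i32, f32)>
// Simplifies to:
//   %1 = llvm.extractvalue %base[1] : !llvm.struct<(i32, f32)>
```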
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D120915
Extend isLoopMemoryParallel check to include locally allocated memrefs.
This strengthens and also speeds up the dependence check used by the
utility by excluding locally allocated memrefs where appropriate.
Additional memref dialect ops can be supported exhaustively via proper
interfaces.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D120617
This commit deletes the old dialect conversion-based bufferization patterns, which are now obsolete.
Differential Revision: https://reviews.llvm.org/D120883
Add support for integer and float types into the data layout subsystem with
default logic similar to LLVM IR. Given the flexibility of the subsystem, the
logic can be easily overridden by operations if necessary. This provides the
connection necessary, e.g., for the GPU target where alignment requirements for
integers and floats differ from those provided by default (although still
compatible with the LLVM IR model). Previously, it was impossible to use
non-default alignment requirements for integer and float types, which could
lead to incorrect address and size calculations when targeting GPUs.
Depends On D120737
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D120739
This patch adds assemblyFormat for `omp.critical.declare`, `omp.atomic.read`,
`omp.atomic.write`, `omp.atomic.update` and `omp.atomic.capture`.
Also remove the clauses from `parseClauses` that are no longer needed
thanks to the new assemblyFormats.
Reviewed By: NimishMishra, rriddle
Differential Revision: https://reviews.llvm.org/D120248
The remnants of Standard were renamed to Func, but the test directory
remained named Standard. In addition to fixing the name, this commit
also moves the tests for operations not in the Func dialect to the proper
parent dialect test directory.
The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:
* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect
See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061
Differential Revision: https://reviews.llvm.org/D120624
As discussed in https://reviews.llvm.org/D119743, scf.parallel would continuously stack allocate since the alloca op was placed in the wsloop rather than the omp.parallel. This PR is the second stage of the fix for that problem. Specifically, we now introduce an alloca scope around the inlined body of the scf.parallel and enable a canonicalization to hoist the allocations to the surrounding allocation scope (e.g. omp.parallel).
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D120423
The llvm.mlir.global operation accepts a region as initializer. This region
corresponds to an LLVM IR constant expression and therefore should not accept
operations with side effects. Add a corresponding verifier.
Reviewed By: wsmoses, bondhugula
Differential Revision: https://reviews.llvm.org/D120632
The revision extends OpDSL with unary and binary function attributes. A function attribute makes the operations used in the body of a structured operation configurable. For example, a pooling operation may take an aggregation function attribute that specifies if the op shall implement a min or a max pooling. The goal of this revision is to define fewer and more flexible operations.
We may thus, for example, define an element-wise op:
```
linalg.elem(lhs, rhs, outs=[out], op=BinaryFn.mul)
```
If the op argument is not set, the default operation is used.
Depends On D120109
Reviewed By: nicolasvasilache, aartbik
Differential Revision: https://reviews.llvm.org/D120110
Add a pattern matcher for ExtractSliceOp when its source is a constant.
The matching heuristics can be governed by the control function since
generating a new constant is not always beneficial.
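A sketch of the rewrite, assuming the control function accepts the match:
```mlir
%cst = arith.constant dense<[0, 1, 2, 3]> : tensor<4xi32>
%0 = tensor.extract_slice %cst[1] [2] [1] : tensor<4xi32> to tensor<2xi32>
// May be rewritten into a new, smaller constant:
//   %0 = arith.constant dense<[1, 2]> : tensor<2xi32>
```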
Differential Revision: https://reviews.llvm.org/D119605
If we have a chain of `tensor.insert_slice` ops inserting some
`tensor.pad` op into a `linalg.fill` and ranges do not overlap,
we can also elide the `tensor.pad` later.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D120446
Fold tensor.insert_slice(tensor.pad(<input>), linalg.fill) into
tensor.insert_slice(<input>, linalg.fill) if the padding value and
the filling value are the same.
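A sketch of the fold, assuming the pad value and the fill value are both %cst:
```mlir
%fill = linalg.fill(%cst, %init) : f32, tensor<8x8xf32> -> tensor<8x8xf32>
%pad = tensor.pad %src low[1, 1] high[1, 1] {
^bb0(%i: index, %j: index):
  tensor.yield %cst : f32
} : tensor<2x2xf32> to tensor<4x4xf32>
%0 = tensor.insert_slice %pad into %fill[0, 0] [4, 4] [1, 1]
    : tensor<4x4xf32> into tensor<8x8xf32>
// Folds to inserting the unpadded source, with offsets shifted by the low pad:
//   %0 = tensor.insert_slice %src into %fill[1, 1] [2, 2] [1, 1]
//       : tensor<2x2xf32> into tensor<8x8xf32>
```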
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D120410
Improve the LinalgOp verification to ensure the iterator types are known. Previously, unknown iterator types were ignored without warning, which could lead to confusing bugs.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D120649
The AVX2 lowering for transpose operations is only applicable to f32 vector types.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D120427
The existing AVX2 lowering patterns for the transpose op only trigger if the
input vector is 2-D. This patch extends the patterns to trigger for n-D vectors
which are effectively 2-D vectors (e.g., vector<1x4x1x8x1xf32>). The main constraint
for the generalized AVX2 patterns to be applicable to these vectors is that the
dimensions that are greater than one must be transposed. Otherwise, the existing
patterns are not applicable.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D119505
This change gives an explicit order of verifier execution and adds
`hasRegionVerifier` and `verifyWithRegions` to increase the granularity
of verifier classification. The order is as follows:
1. InternalOpTrait verifiers run first; they can be run independently.
2. `verifyInvariants`, which is generated by ODS; it verifies types,
attributes, etc.
3. Other traits/interfaces that have marked their verifier as
`verifyTrait` or `verifyWithRegions=0`.
4. The custom verifier defined in the op, if it has marked
`hasVerifier=1`.
If an operation has regions, then it may have a second phase:
5. Traits/interfaces that have marked their verifier as
`verifyRegionTrait` or
`verifyWithRegions=1`. This implies the verifier needs to access the
operations in its regions.
6. The custom verifier defined in the op, if it has marked
`hasRegionVerifier=1`.
Note that the second phase runs after the operations in the
regions are verified. Based on this verification order, you can
avoid verifying the same things twice.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D116789
Previously, OpDSL operations used hardcoded type conversion operations (cast or cast_unsigned). Supporting signed and unsigned casts thus meant implementing two different operations. Type function attributes allow us to define a single operation that has a cast type function attribute, which at operation instantiation time may be set to cast or cast_unsigned. We may, for example, define a matmul operation with a cast argument:
```
@linalg_structured_op
def matmul(A=TensorDef(T1, S.M, S.K), B=TensorDef(T2, S.K, S.N), C=TensorDef(U, S.M, S.N, output=True),
cast=TypeFnAttrDef(default=TypeFn.cast)):
C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```
When instantiating the operation the attribute may be set to the desired cast function:
```
linalg.matmul(lhs, rhs, outs=[out], cast=TypeFn.cast_unsigned)
```
The revision introduces an enum in the Linalg dialect that maps one-to-one to the type functions defined by OpDSL.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D119718
This transformation is useful for breaking dependencies between consecutive loop
iterations by increasing the size of a temporary buffer. This is usually
combined with heavy software pipelining.
Differential Revision: https://reviews.llvm.org/D119406
This adds a variable op, emitted as a C/C++ local variable, which can be
used if the `emitc.constant` op is not sufficient.
As an example, the canonicalization pass would transform
```mlir
%0 = "emitc.constant"() {value = 0 : i32} : () -> i32
%1 = "emitc.constant"() {value = 0 : i32} : () -> i32
%2 = emitc.apply "&"(%0) : (i32) -> !emitc.ptr<i32>
%3 = emitc.apply "&"(%1) : (i32) -> !emitc.ptr<i32>
emitc.call "write"(%2, %3) : (!emitc.ptr<i32>, !emitc.ptr<i32>) -> ()
```
into
```mlir
%0 = "emitc.constant"() {value = 0 : i32} : () -> i32
%1 = emitc.apply "&"(%0) : (i32) -> !emitc.ptr<i32>
%2 = emitc.apply "&"(%0) : (i32) -> !emitc.ptr<i32>
emitc.call "write"(%1, %2) : (!emitc.ptr<i32>, !emitc.ptr<i32>) -> ()
```
resulting in pointer aliasing, as %1 and %2 point to the same address.
In such a case, the `emitc.variable` operation can be used instead.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D120098
The current implementation of ShuffleVectorOp assumes all vectors are
non-scalable. LLVM IR allows shufflevector operations on scalable vectors,
and the current translation between the LLVM dialect and LLVM IR does the
right thing when the shuffle mask is all zeroes. This is required to
do a splat operation on a scalable vector, but it doesn't make sense
for scalable vectors outside of that operation, i.e., with non-all-zero
masks.
Differential Revision: https://reviews.llvm.org/D118371
In D115022, we introduced an optimization where OpResults of a `linalg.generic` may bufferize in-place with an "in" OpOperand if the corresponding "out" OpOperand is not used in the computation.
This optimization can lead to unexpected behavior if the newly chosen OpOperand is in the same alias set as another OpOperand (that is used in the computation). In that case, the newly chosen OpOperand must bufferize out-of-place. This can be confusing to users, as always choosing the "out" OpOperand (regardless of whether it is used) would be expected when having the notion of "destination-passing style" in mind.
With this change, we go back to always bufferizing in-place with "out" OpOperands by default, but letting users override the behavior with a bufferization option.
Differential Revision: https://reviews.llvm.org/D120182
Given a cmpf of either uitofp or sitofp and a constant, attempt to canonicalize it to a cmpi.
This PR adapts the equivalent code within LLVM to apply to MLIR arith.
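A sketch of the idea (the exact predicate/rounding handling follows the ported LLVM logic):
```mlir
%f = arith.sitofp %i : i32 to f32
%c = arith.constant 4.0 : f32
%r = arith.cmpf oge, %f, %c : f32
// Can canonicalize to a comparison on the original integer:
//   %c4 = arith.constant 4 : i32
//   %r = arith.cmpi sge, %i, %c4 : i32
```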
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D117257
+ Compare the block size with the unrollable inner dimension.
+ Reduce nesting in the code and simplify the IR building a bit.
Reviewed By: cota
Differential Revision: https://reviews.llvm.org/D120075
The related functionality is moved over to the bufferization dialect. Test cases are cleaned up a bit.
Differential Revision: https://reviews.llvm.org/D120191
This commit adds a canonicalization pattern to the `linalg.generic` op
for static shape inference. If any of the inputs or outputs has a
static shape or is cast from a tensor of static shape, then the
shapes of all the inputs and outputs can be inferred by using the
affine map of the static-shape input/output.
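A sketch of the inference with hypothetical shapes:
```mlir
#id = affine_map<(d0, d1) -> (d0, d1)>
%0 = tensor.cast %arg0 : tensor<4x8xf32> to tensor<?x?xf32>
%1 = linalg.generic {indexing_maps = [#id, #id],
                     iterator_types = ["parallel", "parallel"]}
    ins(%0 : tensor<?x?xf32>) outs(%init : tensor<?x?xf32>) {
^bb0(%a: f32, %b: f32):
  linalg.yield %a : f32
} -> tensor<?x?xf32>
// The cast source has a static shape, so the canonicalization can rewrite the
// generic to operate on tensor<4x8xf32> and cast the result back to the
// original dynamic type for existing users.
```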
Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D118929
This patch adds assemblyFormat for omp.sections operation.
Some existing functions have been altered to fit the custom directive
in assemblyFormat. This has led to their call sites being modified too,
but those will be removed in later patches, when other operations get
their assemblyFormat. Not all operations were changed in one patch, for
ease of review.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D120176
Add `BufferizableOpInterface::verifyAnalysis`. Ops can implement this method to check for expected invariants and limitations.
The purpose of this change is to introduce a modular way of checking assertions such as `assertScfForAliasingProperties`.
Differential Revision: https://reviews.llvm.org/D120189
This patch adds assemblyFormat for omp.parallel operation.
Some existing functions have been altered to fit the custom directive
in assemblyFormat. This has led to their call sites being modified too,
but those will be removed in later patches, when other operations get
their assemblyFormat. Not all operations were changed in one patch, for
ease of review.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D120157
This patch removes the following clauses from OpenMP Dialect:
- private
- firstprivate
- lastprivate
- shared
- default
- copyin
- copyprivate
The privatization clauses are being handled in the flang frontend. The
data copying clauses are not being handled anywhere for now. Once
we have a better picture of how to handle these clauses in OpenMP
Dialect, we can add these. For the time being, removing unneeded
clauses.
For detailed discussion about this refer to [[ https://discourse.llvm.org/t/rfc-privatisation-in-openmp-dialect/3526 | Privatisation in OpenMP dialect ]]
Reviewed By: kiranchandramohan, clementval
Differential Revision: https://reviews.llvm.org/D120029
This is a bit awkward since ExtractOp allows both `f32` and
`vector<1xf32>` results for a scalar extraction. Allow both, but make
inference return the scalar to make this as NFC as possible.
This change modifies the handling of trailing dimensions with unknown
extent. Users of the getReassociationIndicesForReshape helper should
see benefits when transforming reshape-like operations into
expand/collapse pairs if the higher-rank type has trailing unknown
dimensions.
The motivating example is a reshape from tensor<16x1x?xi32> to
tensor<16xi32> that can be modeled as collapsing the three dimensions.
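In IR form, that collapse looks roughly like:
```mlir
// The three dimensions collapse into one; the dynamic extent must be 1 at
// runtime for the static extent 16 to hold.
%0 = tensor.collapse_shape %arg0 [[0, 1, 2]]
    : tensor<16x1x?xi32> into tensor<16xi32>
```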
Differential Revision: https://reviews.llvm.org/D119730
Previously, NaNs would be dropped in favor of bounded values, which was
strictly incorrect. Now the min/max operations propagate this
information. Not all uses of min/max need this, but the given change
will help protect future additions, and this prevents the need for an
additional cmpf and select operation to handle NaNs.
Differential Revision: https://reviews.llvm.org/D120020
This op is added to allow MLIR code running on multi-GPU systems to
select the GPU they want to execute operations on when no GPU is
otherwise specified.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D119883
In SPIR-V, resources are represented as global variables that
are bound to a certain descriptor. SPIR-V requires those global
variables to be declared as aliased if multiple ones are bound
to the same slot. Such aliased decorations can cause issues
for transcompilers like SPIRV-Cross when converting to source
shading languages like MSL.
So this commit adds a pass to perform analysis of aliased
resources and see if we can unify them into one.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D119872
This would create a double free when a memref is passed twice to the
same op. This wasn't a problem at the time the pass was written but is
common since the introduction of scf.while.
There's a latent non-determinism that's triggered by the test, but this
change is messy enough as-is so I'll leave that for later.
Differential Revision: https://reviews.llvm.org/D120044
Previously, the `gpu-kernel-outlining` pass also performed index-computation sinking into gpu.launch before the actual outlining.
Split the op sinking out of the `gpu-kernel-outlining` pass into a separate pass, so users can use their own sinking pass before outlining.
To achieve the old behavior, users will need to call both passes: `-gpu-launch-sink-index-computations -gpu-kernel-outlining`.
Differential Revision: https://reviews.llvm.org/D119932
A very small refactoring, but with a big impact on tests that expect an exact order.
This revision fixes the tests, but also makes them less brittle for similar
minor changes in the future!
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D119992
This commit adds a pattern to wrap a tensor.pad op with
an scf.if op to separate the cases where we don't need padding
(all pad sizes are actually zeros) and where we indeed need
padding.
This pattern is meant to handle padding inside tiled loops.
Under such cases the padding sizes typically depend on the
loop induction variables. Splitting them would allow treating
perfect tiles and edge tiles separately.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D117018
Fusion of `linalg.generic` with
`tensor.expand_shape/tensor.collapse_shape` currently handles fusion
with reshape by expanding the dimensionality of the `linalg.generic`
operation. This helps fuse elementwise operations better since they
are fused at the highest dimensionality while keeping all indexing
maps involved projected permutations. The intent of these patterns is to push
the reshape to the boundaries of functions.
The presence of named ops (or other ops across which the reshape
cannot be propagated) stops the propagation to the edges of the
function. At this stage, the converse patterns that fold the reshapes
with generic ops by collapsing the dimensions of the generic op can
push the reshape towards edges. In particular it helps the case where
reshapes exist in between named ops and generic ops.
`linalg.named_op` -> `tensor.expand_shape` -> `linalg.generic`
Pushing the reshape down will help fusion of `linalg.named_op` ->
`linalg.generic` using tile + fuse transformations.
This pattern is intended to replace the following patterns
1) FoldReshapeByLinearization : These patterns create indexing maps
that are not projected permutations that affect future
transformations. They are only useful for folding unit-dimensions.
2) PushReshapeByExpansion : This pattern has the same functionality
but has some restrictions
a) It tries to avoid creating new reshapes, which limits its
applicability. The pattern added here can achieve the same
functionality through use of the `controlFn` that allows clients
of the pattern freedom to make this decision.
b) It does not work for ops with indexing semantics.
These patterns will be deprecated in a future patch.
Differential Revision: https://reviews.llvm.org/D119365
This allows users to register a callback that can annotate operations
during software pipelining, so that they can tell what part of the
pipeline each op corresponds to.
Differential Revision: https://reviews.llvm.org/D119866
Casting between scalable vectors and fixed-length vectors doesn't make
sense. If one of the operands is scalable, the other has to be scalable
to be able to guarantee they have the same shape at runtime.
Differential Revision: https://reviews.llvm.org/D119568
This patch changes the syntax of omp.atomic.update to allow the other
dialects to modify the variable with appropriate operations in the
region.
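A sketch of the region-based form (types and exact syntax approximate):
```mlir
omp.atomic.update %x : memref<i32> {
^bb0(%xval: i32):
  // Ops from any dialect may compute the updated value here.
  %newval = arith.addi %xval, %expr : i32
  omp.yield(%newval : i32)
}
```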
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D119522
Add a verifier for the gpu.alloc op to verify that the dimension operand counts
and symbol operand counts are the same as their memref counterparts.
Differential Revision: https://reviews.llvm.org/D117427
Also, it seems Khronos has changed the HTML spec format, so a small adjustment to the script was needed.
Base op parsing is also probably broken.
Differential Revision: https://reviews.llvm.org/D119678
Adds a pointer type to EmitC. The emission of pointers was so far only
possible by using the `emitc.opaque` type.
Co-authored-by: Simon Camphausen <simon.camphausen@iml.fraunhofer.de>
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D119337
Index attributes had no default value, which means the attribute values had to be set on the operation. This revision adds a default parameter to `IndexAttrDef`. After the change, every index attribute has to define a default value. For example, we may define the following strides attribute:
```
strides=IndexAttrDef(S.SH, S.SW, default=[1, 1])
```
When using the operation the default stride is used if the strides attribute is not set. The mechanism is implemented using `DefaultValuedAttr`.
Additionally, the revision uses the term index attribute instead of attribute more consistently, which is a preparation for follow-up revisions that will introduce function attributes.
Depends On D119125
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D119126
Previously, OpDSL did not support rank polymorphism, which required a separate implementation of linalg.fill. This revision extends OpDSL to support rank polymorphism for a limited class of operations that access only scalars and tensors of rank zero. At operation instantiation time, it scales these scalar computations to multi-dimensional pointwise computations by replacing the empty indexing maps with identity index maps. The revision does not change the DSL itself; instead it adapts the Python emitter and the YAML generator to generate different indexing maps and iterators depending on the rank of the first output.
Additionally, the revision introduces a `linalg.fill_tensor` operation that in a future revision shall replace the current handwritten `linalg.fill` operation. `linalg.fill_tensor` is thus only temporarily available and will be renamed to `linalg.fill`.
Reviewed By: nicolasvasilache, stellaraccident
Differential Revision: https://reviews.llvm.org/D119003
Add new operations to the gpu dialect to represent device side
asynchronous copies. This also adds the lowering of those operations to
nvvm dialect.
Those ops are meant to be low level and map directly to llvm dialects
like nvvm or rocdl.
We can further add higher level of abstraction by building on top of
those operations.
This has been discussed here:
https://discourse.llvm.org/t/modeling-gpu-async-copy-ampere-feature/4924
Differential Revision: https://reviews.llvm.org/D119191
If the result operand has a unit leading dim, it is removed from all operands.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D119206
Fix fold-memref-subview-ops for affine.load/store. We need to expand out
the affine apply on its operands.
Differential Revision: https://reviews.llvm.org/D119402
Reuse the higher precision F32 approximation for the F16 one (by expanding and
truncating). This is partly RFC as I'm not sure what the expectations are here
(e.g., these are only for F32 and should not be expanded, that reusing
higher-precision ones for lower precision is undesirable due to increased
compute cost and only approximations per exact type is preferred, or this is
appropriate [at least as fallback] but we need to see how to make it more
generic across all the patterns here).
Differential Revision: https://reviews.llvm.org/D118968
For 0-D as well as 1-D vectors, both these patterns should
return a failure as there is no need to collapse the shape
of the source. Previously, only 1-D vectors were handled. This
patch handles the 0-D case as well.
Reviewed By: Benoit, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D119202
There are a few different test passes that check elementwise fusion in
Linalg. Consolidate them to a single pass controlled by different pass
options (in keeping with how `TestLinalgTransforms` exists).
Fix the verification function of spirv::ConstantOp to allow nested
array attributes.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D118939
* Implement `FlatAffineConstraints::getConstantBound(EQ)`.
* Inject a simpler constraint for loops that have at most 1 iteration.
* Take into account constant EQ bounds of FlatAffineConstraints dims/symbols during canonicalization of the resulting affine map in `canonicalizeMinMaxOp`.
Differential Revision: https://reviews.llvm.org/D119153
This is both more efficient and more ergonomic to use, as inverting a
bit vector is trivial while inverting a set is annoying.
Sadly this leaks into a bunch of APIs downstream, so adapt them as well.
This would be NFC, but there is an ordering dependency in MemRefOps's
computeMemRefRankReductionMask. This is now deterministic, previously it
was dependent on SmallDenseSet's unspecified iteration order.
Differential Revision: https://reviews.llvm.org/D119076
Adapt `tileConsumerAndFuseProducers` to return failure if the generated tile loop nest is empty since all tile sizes are zero. Additionally, fix `LinalgTileAndFuseTensorOpsPattern` to return success if the pattern applied successfully.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D118878
Induction variable calculation was ignoring the scf.for step value. Fix it to get
the correct induction variable value in the prologue.
Differential Revision: https://reviews.llvm.org/D118932
-- This commit adds a canonicalization pattern on scf.while to remove
the loop invariant arguments.
-- An argument is considered loop invariant if the iteration argument value is
the same as the corresponding one being yielded (at the same position) in both
the before/after block of scf.while.
-- For the arguments removed, their use within scf.while and their corresponding
scf.while's result are replaced with their corresponding initial value.
Signed-off-by: Abhishek Varma <abhishek.varma@polymagelabs.com>
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D116923
This is completely unused upstream, and does not really have well-defined semantics
on what this is supposed to do or how it fits into the ecosystem. Given that, as part of
splitting up the standard dialect, it's best to just remove this behavior, instead of trying
to awkwardly fit it somewhere upstream. Downstream users are encouraged to
define their own operations that clearly can define the semantics of this.
This also uncovered several lingering uses of ConstantOp that weren't
updated to use arith::ConstantOp, and worked during conversions because
the constant was removed/converted into something else before
verification.
See https://llvm.discourse.group/t/standard-dialect-the-final-chapter/ for more discussion.
Differential Revision: https://reviews.llvm.org/D118654
This is part of the larger effort to split the standard dialect. This will also allow for pruning some
additional dependencies on Standard (done in a followup).
Differential Revision: https://reviews.llvm.org/D118202
This revision avoids incorrect hoisting of alloca'd buffers across an AutomaticAllocationScope boundary.
In the more general case, we will probably need a ParallelScope-like interface.
Differential Revision: https://reviews.llvm.org/D118768
Use type inference when building the TransferWriteOp in the TransferWritePermutationLowering. Previously, the result type has been set to Type() which triggers an assertion if the pattern is used with tensors instead of memrefs.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D118758
Following the discussion in D118318, mark `arith.addf/mulf` commutative.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D118600
Support affine.load/store ops in fold-memref-subview ops pass. The
existing pass just "inlines" the subview operation on load/stores by
inserting affine.apply ops in front of the memref load/store ops. This
is by design always consistent with the semantics of affine.load/store
ops, and the same approach works even more naturally/intuitively with the
latter.
Differential Revision: https://reviews.llvm.org/D118565
Update SCF pass cmd line names to prefix `scf`. This is consistent with
guidelines/convention on how to name dialect passes. This also avoids
ambiguity on the context given the multiple `for` operations in the
tree.
NFC.
Differential Revision: https://reviews.llvm.org/D118564
There was a bug where some of the OpOperands needed in the replacement op were not in scope.
It does not matter where the replacement op is inserted. Any insertion point is OK as long as there are no dominance errors. In the worst case, the newly inserted op will bufferize out-of-place. This is no worse than not eliminating the InitTensorOp at all.
Differential Revision: https://reviews.llvm.org/D117685
The bufferization of arith.constant ops is also switched over to BufferizableOpInterface-based bufferization. The old implementation is deleted. Both implementations utilize GlobalCreator, now renamed to just `getGlobalFor`.
GlobalCreator no longer maintains a set of all created allocations to avoid duplicate allocations of the same constant. Instead, `getGlobalFor` scans the module to see if there is already a global allocation with the same constant value.
For compatibility reasons, it is still possible to create a pass that bufferizes only `arith.constant`. This pass (createConstantBufferizePass) can be deleted once all users have switched over to One-Shot bufferization.
Differential Revision: https://reviews.llvm.org/D118483
This patch adds the vector.scan op, which computes the
scan of a given n-D vector. It requires specifying the operator,
the identity element, and whether the scan is inclusive or
exclusive.
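For example (a sketch; the exact syntax is defined by the op's assembly format):
```mlir
// Inclusive running sum along dimension 1; also returns the final accumulator.
%dest, %accum = vector.scan <add>, %src, %init
    {inclusive = true, reduction_dim = 1 : i64}
    : vector<4x8xf32>, vector<4xf32>
```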
TEST: Added test in ops.mlir
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D117171
This is in preparation of switching `-tensor-constant-bufferize` and `-arith-bufferize` to BufferizableOpInterface-based implementations.
Differential Revision: https://reviews.llvm.org/D118324
This commit switches the `tensor-bufferize` pass over to BufferizableOpInterface-based bufferization.
Differential Revision: https://reviews.llvm.org/D118246
The pass currently cannot handle to_memref(to_tensor(x)) folding where a cast is necessary. This is required with the new unified bufferization. There is already a canonicalization pattern that handles such foldings, and it should be used during this pass.
Differential Revision: https://reviews.llvm.org/D117988
Prefix "affine-" to affine transform passes that were missing it -- to
avoid ambiguity and for uniformity. There were only two needed this.
Move mispaced affine coalescing test case file.
NFC.
Differential Revision: https://reviews.llvm.org/D118314
These transformations already operate on memref operations (as part of
splitting up the standard dialect). Now that the operations have moved,
it's time for these transformations to move as well.
Differential Revision: https://reviews.llvm.org/D118285
This is part of splitting up the standard dialect. The move makes sense anyways,
given that the memref dialect already holds memref.atomic_rmw which is the non-region
sibling operation of std.generic_atomic_rmw (the relationship is even more clear given
they have nearly the same description, modulo how they represent the inner computation).
Differential Revision: https://reviews.llvm.org/D118209
The GPU dialect currently contains an explicit reference to LLVMFuncOp
during verification to handle the situation where the kernel has already been
converted. This commit changes that reference to instead use FunctionOpInterface,
which has two main benefits:
* It allows for removing an otherwise unnecessary dependency on the LLVM dialect
* It removes hardcoded assumptions about the lowering path and use of the GPU dialect
Differential Revision: https://reviews.llvm.org/D118172
This is for compatibility with existing bufferization passes. Also clean up memref type generation a bit.
Differential Revision: https://reviews.llvm.org/D118243
If we are extracting, it is more useful to push the index_cast past the
extraction. This increases the chance that the tensor.extract can be evaluated at
compile time.
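A sketch of the rewrite:
```mlir
%0 = arith.index_cast %t : tensor<4xi32> to tensor<4xindex>
%1 = tensor.extract %0[%c0] : tensor<4xindex>
// Becomes (so the extract can fold if %t is a constant):
//   %e = tensor.extract %t[%c0] : tensor<4xi32>
//   %1 = arith.index_cast %e : i32 to index
```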
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D118204
There is not much of a benefit to reshaping a tensor.from_elements vs. reloading it.
Updated to propagate shape manipulations into the output type of
tensor.from_elements.
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D118201
Fusion of reshape ops by linearization incorrectly inverted the
indexing map before linearizing dimensions. This leads to incorrect
indexing maps used in the fused operation.
Differential Revision: https://reviews.llvm.org/D117908
This pattern is not written to handle operations with `linalg.index`
operations in its body, i.e. operations that have index semantics.
Differential Revision: https://reviews.llvm.org/D117856
This is mostly a copy of the existing tensor.from_elements bufferization. Once TensorInterfaceImpl.cpp is moved to the tensor dialect, the existing rewrite pattern can be deleted.
Differential Revision: https://reviews.llvm.org/D117775
This is mostly a copy of the existing tensor.generate bufferization. Once TensorInterfaceImpl.cpp is moved to the tensor dialect, the existing rewrite pattern can be deleted.
Differential Revision: https://reviews.llvm.org/D117770
This patch supports the atomic construct (capture) following section 2.17.7 of the OpenMP 5.0 standard. Also added tests for the same.
Reviewed By: peixin, kiranchandramohan
Differential Revision: https://reviews.llvm.org/D115851
Add a transpose option to hoist padding to transpose the padded tensor before storing it into the packed tensor. The early transpose improves the memory access patterns of the actual compute kernel. The patch introduces a transpose right after the hoisted pad tensor and a second transpose inside the compute loop. The second transpose can either be fused into the compute operation or will canonicalize away when lowering to vector instructions.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D117893
Both insertion points are valid. This is to make BufferizableOpInteface-based bufferization compatible with existing partial bufferization test cases. (So less changes are necessary to unit tests.)
Differential Revision: https://reviews.llvm.org/D117986
This is the only op that is not supported via BufferizableOpInterfaceImpl bufferization. Once this op is supported we can switch `tensor-bufferize` over to the new unified bufferization.
Differential Revision: https://reviews.llvm.org/D117985
When two clamp ops are in a row, they can be canonicalized into a single clamp
that uses the most constrained range.
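A sketch with hypothetical ranges (attribute names as on the TOSA clamp op):
```mlir
%0 = "tosa.clamp"(%in) {min_int = 2 : i64, max_int = 10 : i64,
                        min_fp = 2.0 : f32, max_fp = 10.0 : f32}
    : (tensor<4xi32>) -> tensor<4xi32>
%1 = "tosa.clamp"(%0) {min_int = 4 : i64, max_int = 12 : i64,
                       min_fp = 4.0 : f32, max_fp = 12.0 : f32}
    : (tensor<4xi32>) -> tensor<4xi32>
// Canonicalizes to a single clamp over the intersection [4, 10].
```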
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D117934
Rationale:
Although file I/O is a bit alien to MLIR itself, we provide two convenient ways
for sparse tensor I/O. The input part was already there (behind the swiss army
knife sparse_tensor.new). Now we have a sparse_tensor.out to write out data. As
before, the ops are kept vague and may change in the future. For now this
allows us to compare TACO vs MLIR very easily.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D117850
Implement a Taylor series approximation for atan and add an atan2 lowering
that uses atan's approximation. This includes tests for edge cases and tests
for each quadrant.
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D115682
In some cases, fusion can produce illegal operations if after fusion
the range of some of the loops cannot be computed from the shapes of its
operands. Check for this case and abort the fusion if this happens.
Differential Revision: https://reviews.llvm.org/D117602
This allows to pipe sequences of `mlir-opt -split-input-file | mlir-opt -split-input-file`.
Depends On D117750
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D117756
PDLDialect, being a somewhat user-facing dialect whose ops contain exclusively other PDL ops in their regions, can take advantage of `OpAsmOpInterface` to provide nicer IR.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D117828
An unbound OperationOp in the matcher (i.e., one with no uses) is already disallowed by the verifier. However, an OperationOp in the rewriter is not side-effect free -- it's creating an op!
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D117825
This commit explicitly states that negative values and values exceeding
vector dimensions are allowed in vector.create_mask (but not in
vector.constant_mask). These values are now truncated when
canonicalizing vector.create_mask to vector.constant_mask.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D116069
When computing the new type of a collapse_shape operation, we need to at least
take into account whether the type has an identity layout, in which case we can
easily support dynamic strides. Otherwise, the canonicalizer creates invalid
IR.
Longer term, both the verifier and the canonicalizer need to be extended to
support the general case.
Differential Revision: https://reviews.llvm.org/D117772
If not allow-return-memref, raise an error if a new memory allocation is returned/yielded from a block. We do not check for new allocations directly, but for ops that yield/return values that are not equivalent to values defined outside of the current block.
Note: We still need to check that scf.for yield values and bbArgs are aliasing to ensure that getAliasingOpOperand/getAliasingOpResult is correct.
Differential Revision: https://reviews.llvm.org/D116687
This op is needed for unit testing in a subsequent revision. (This is the first op that has a block that yields equivalent values via the op's results.)
Note: Bufferization of scf.execute_region ops with multiple blocks is not yet supported.
Differential Revision: https://reviews.llvm.org/D117424
When the min/max are the total range of the value, it is a no-op as the values
are already restricted to that range.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D117625
This commit refactors the FunctionLike trait into an interface (FunctionOpInterface).
FunctionLike as it is today is already a pseudo-interface, with many users checking the
presence of the trait and then manually calling into functionality implemented in the
function_like_impl namespace. By transitioning to an interface, these accesses are much
cleaner (ideally with no direct calls to the impl namespace outside of the implementation
of the derived function operations, e.g. for parsing/printing utilities).
I've tried to maintain as much compatibility with the current state as possible, while
also trying to clean up as much of the cruft as possible. The general migration plan for
current users of FunctionLike is as follows:
* function_like_impl -> function_interface_impl
Realistically most user calls should remove references to functions within this namespace
outside of a very narrow set (e.g. parsing/printing utilities). Calls to the attribute name
accessors should be migrated to the `FunctionOpInterface::` equivalent, most everything
else should be updated to be driven through an instance of the interface.
* OpTrait::FunctionLike -> FunctionOpInterface
`hasTrait` checks will need to be moved to isa, along with the other various Trait vs
Interface API differences.
* populateFunctionLikeTypeConversionPattern -> populateFunctionOpInterfaceTypeConversionPattern
Fixes #52917
Differential Revision: https://reviews.llvm.org/D117272
The chunk size in the schedule clause is an integer expression, which can
be either a constant integer or an integer variable. Fix the schedule clause in
MLIR Op Def to support integer expression with different bit width.
Reviewed By: shraiysh
Differential Revision: https://reviews.llvm.org/D116073
The input type of a linalg.generic can be less dynamic than its output
type. If this is the case moving a reshape across the generic op would
create invalid IR, as expand_shape cannot expand arbitrary dynamic
dimensions.
Check that the reshape is actually valid before creating the
expand_shape. This exposes the existing verification logic in reshape
utils and removes the incomplete custom implementation in fusion.
Differential Revision: https://reviews.llvm.org/D116600
The leading space that is always printed at the beginning of regions is not consistent with other parts of the printing API. Moreover, this leading space can lead to undesirable assembly formats:
```
attr-dict-with-keyword $region
```
Prints as:
```
// Two spaces between `}` and `{`
attributes {foo} { ... }
```
Moreover, the leading space results in the odd generic op format:
```
"test.op"() ( {...}) : () -> ()
```
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D117411
The translation from the MLIR LLVM dialect to LLVM IR includes a mechanism that
ensures the successors of a block to be different blocks in case block
arguments are passed to them since the opposite cannot be expressed in LLVM IR.
This mechanism previously only worked for functions because it was written
prior to the introduction of other region-carrying operations such as the
OpenMP dialect, which also translates directly to LLVM IR. Modify this
mechanism to handle all regions in the module and not only functions.
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D117548
This revision fixes a bug where the iterative algorithm would walk back def-use chains to an incorrect operand.
This exposed opportunities for a larger refactoring and behavior improvement.
The new algorithm has improved folding behavior and proceeds by tracking both the
permutation of the extraction position and the internal vector permutation.
Multiple partial intersection cases with a candidate insertOp are supported.
The refactoring of the implementation should also help it generalize to strided insert/extract op.
This also subsumes the previous `foldExtractOpFromTranspose` which is now a simple special case and can be deleted.
Differential Revision: https://reviews.llvm.org/D117322
In some cases, the result of an initTensorOp may have an attribute.
However, the Attribute was not passed to `inferResultType`, failing the
verifier. Therefore, propagate the Attribute to `inferResultType`.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D117192
Clean up the return value of the affineDataCopyGenerate utility. Return the
actual success/failure status instead of the "number of bytes", which
isn't used in the codebase in any way. The success/failure status
wasn't being sent out earlier.
Differential Revision: https://reviews.llvm.org/D117209
If any of the operands is NaN, return the operand instead of a new constant.
When the rhs operand is a constant, the second arith.cmpf+select ops will be folded away.
https://reviews.llvm.org/D117010 marks the two ops commutative, which will place the constant on the rhs.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D117011
Enable constant folding of ops within the math dialect, and introduce constant folders for ceil and log2
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D117085
This change makes it possible to use a different buffer deallocation strategy. E.g., `-buffer-deallocation` can be used, which also works for allocations that are not in destination-passing style.
Differential Revision: https://reviews.llvm.org/D117096
This op is an example of how to deal with ops whose OpResult may alias with one of multiple OpOperands.
Differential Revision: https://reviews.llvm.org/D116868
Given a while loop whose condition is given by a cmp, don't recompute the comparison (or its inverse) in the after region; instead use a constant, since the original condition must be true if we branched to the after region.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D117047
Most convolution operations need explicit padding of the input to
ensure all accesses are inbounds. In such cases, having a pad
operation can be a significant overhead. One way to reduce that
overhead is to try to fuse the pad operation with the producer of its
source.
A sequence
```
linalg.generic -> linalg.pad_tensor
```
can be replaced with
```
linalg.fill -> tensor.extract_slice -> linalg.generic ->
tensor.insert_slice.
```
if the `linalg.generic` has all parallel iterator types.
Differential Revision: https://reviews.llvm.org/D116418
This patch adds corresponding memory effects to mlir llvm-dialect load/store/addressof ops, which thus enables canonicalizations of those ops (like dead code elimination) that rely on the effect interface
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D117041
Moved all TOSA decomposition patterns so that they can be optionally populated
and used by external rewrites. This avoids decomposing TOSA operations when
backends may benefit from the non-decomposed version.
Reviewed By: rsuderman, mehdi_amini
Differential Revision: https://reviews.llvm.org/D116526
Given an if of the following form, simplify it by eliminating the not and swapping the regions:
```
scf.if not(c) {
  yield origTrue
} else {
  yield origFalse
}
```
becomes
```
scf.if c {
  yield origFalse
} else {
  yield origTrue
}
```
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D116990
Previously, CallOps did not have any aliasing OpResult/OpOperand pairs. Therefore, CallOps were mostly ignored by the analysis and buffer copies were not inserted when necessary.
This commit introduces the following changes:
* Function bbArgs are writable by default. A function can now be bufferized without inspecting its callers.
* Callers must introduce buffer copies of function arguments when necessary. If a function is external, the caller must conservatively assume that a function argument is modified by the callee after bufferization. If the function is not external, the caller inspects the callee to determine if a function argument is modified.
Differential Revision: https://reviews.llvm.org/D116457
D115722 added a DL spec to GPU modules. It happens that the DL
default interface implementation is sensitive to the name of the
DL spec attribute. This patch is fixing the name of the attribute
to be the expected one.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D116956
Given a select whose result is an i1, we can eliminate the conditional in the select completely by adding a few arithmetic operations.
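A sketch of one such rewrite, written with today's arith ops (the op spellings at the time of the patch may have differed):
```mlir
// select(%c, %a, %b) where %a, %b : i1
%true = arith.constant true
%not_c = arith.xori %c, %true : i1
%t = arith.andi %c, %a : i1
%f = arith.andi %not_c, %b : i1
%r = arith.ori %t, %f : i1  // equivalent to the original select
```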
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D116839
This revision fixes SubviewOp, InsertSliceOp, ExtractSliceOp construction during bufferization
where not all offset/size/stride operands were properly specified.
A test that exhibited problematic behaviors related to incorrect memref casts is introduced.
Init tensor optimization is disabled in the testing func bufferize pass.
Differential Revision: https://reviews.llvm.org/D116899
init_tensor elimination is arguably a pre-optimization that should be separated from comprehensive bufferization.
In any case it is still experimental and easily results in wrong IR with violated SSA def-use orderings.
Isolate the optimization behind a flag, separate the test cases, and add a test case that would result in wrong IR.
Differential Revision: https://reviews.llvm.org/D116936
We allow the omission of a map in memref.reinterpret_cast under the assumption
that the cast might cast to an identity layout. This change adds verification
that the static knowledge that is present in the reinterpret_cast supports
this assumption.
Differential Revision: https://reviews.llvm.org/D116601
This patch changes the syntax of omp.atomic.read to take the address of
the destination, instead of having the value in a result. This will allow
using omp.atomic.read operation within an omp.atomic.capture operation
thus making its implementation less complex.
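A sketch of the new form (exact syntax may differ), where the destination address appears on the left of the `=`:
```mlir
// Atomically read the value at %x and store it to the address %v.
omp.atomic.read %v = %x : memref<i32>
```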
Reviewed By: peixin
Differential Revision: https://reviews.llvm.org/D116396
Fix crash in the presence of yield values. Multiple fixes to affine loop
tiling pre-condition checks and return status. Do not signal pass
failure on a failure to tile since the IR is still valid. Detect index
set computation failure in checkIfHyperrectangular and return failure.
Replace assertions with proper status return. Move checks to an
appropriate place earlier in the utility before mutation happens.
Differential Revision: https://reviews.llvm.org/D116738
With this change, the analysis takes a look at OpOperands instead of OpResults. OpOperands can bufferize out-of-place (even if they have no aliasing OpResults). The analysis no longer cares about OpResults.
Previously, only OpResults could bufferize out-of-place, so OpOperands that have no aliasing OpResults were never copied by Comprehensive Bufferize. This does not fit well with the new CallOp bufferization that is introduced in a subsequent change. In essence, called FuncOps can then be treated as "black boxes" that may read/write to any bbArg, even if they do not return anything.
Differential Revision: https://reviews.llvm.org/D115706
This is in preparation of fixing CallOp bufferization. Add explicit linalg.inplaceable attrs to all bbArgs, except for the ones where inplaceability should be decided by the analysis.
Differential Revision: https://reviews.llvm.org/D115840
Recent commits added a possibility for indices in LLVM dialect GEP operations
to be supplied directly as constant attributes to ensure they remain such until
translation to LLVM IR happens. Make this required for indexing into LLVM
struct types to match LLVM IR requirements; otherwise, the translation would
assert on constructing such IR.
For better compatibility with MLIR-style operation construction interface,
allow GEP operations to be constructed programmatically using Values pointing
to known constant operations as struct indices.
Depends On D116758
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D116759
This makes LLVM dialect constants work with `m_constant` matchers. Implement
the folding hook for this operation as required by the trait. This in turn
allows LLVM::ConstantOp to properly participate in constant-folding.
Depends On D116757
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D116758
In LLVM IR, the GEP indices that correspond to structures are required to be
i32 constants. MLIR models constants as just values defined by special
operations, and there is no verification that it is the case for structure
indices in GEP. Furthermore, some common transformations such as control flow
simplification may lead to the operands becoming non-constant. Make it possible
to directly supply constant values to LLVM GEPOp to guarantee they remain
constant until the translation to LLVM IR. This is not yet a requirement and
the verifier is not modified; this will be introduced separately.
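A hypothetical sketch with typed pointers: the struct index is the literal 1, stored as an attribute, while the leading index may remain an SSA value:
```
// The attribute index 1 selects the f32 field and cannot be
// perturbed by transformations before LLVM IR translation.
%f = llvm.getelementptr %p[%c0, 1]
    : (!llvm.ptr<struct<(i32, f32)>>, i64) -> !llvm.ptr<f32>
```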
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D116757
This patch adds two folds. One for a repeated xor (e.g. xor(xor(x, a), a)) and one for a repeated trunc (e.g. trunc(trunc(x))).
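A sketch of the two folds, written with arith dialect ops for illustration:
```
%0 = arith.xori %x, %a : i32
%1 = arith.xori %0, %a : i32        // folds to %x
%2 = arith.trunci %y : i64 to i32
%3 = arith.trunci %2 : i32 to i16   // folds to arith.trunci %y : i64 to i16
```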
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D116383
This patch adds omp.atomic.write lowering to LLVM IR.
Also, changed the syntax to use an equals sign instead of a comma to
make it more intuitive.
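A sketch of the new syntax (clauses elided, types illustrative):
```
// The address being stored to and the value are now joined by `=`.
omp.atomic.write %addr = %value : memref<i32>, i32
```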
Reviewed By: kiranchandramohan, peixin
Differential Revision: https://reviews.llvm.org/D116416
`tensor.collapse_shape` op when fused with a consumer elementwise
`linalg.generic` operation results in creation of tensor.expand_shape
ops. In purely dynamic cases this can end up with a dynamic dimension
being expanded to more than one dynamic dimension. This is disallowed
by the semantics of the `tensor.expand_shape` operation. (While the
transformation is itself correct, it's a gap in the specification of
`tensor.expand_shape` that is the issue.) So disallow fusions that
result in such a pattern.
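A hypothetical instance of the disallowed result:
```
// One dynamic dimension expanded into two dynamic dimensions: there is
// no way to determine how the dynamic size splits, so this is invalid.
%e = tensor.expand_shape %t [[0, 1]] : tensor<?xf32> into tensor<?x?xf32>
```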
Differential Revision: https://reviews.llvm.org/D116703
This change simplifies BufferizableOpInterface and other functions. Overall, the API will get smaller: Functions related to custom IR traversal are deleted entirely. This will make it easier to write BufferizableOpInterface implementations.
This is also in preparation of unifying Comprehensive Bufferize and core bufferization. While Comprehensive Bufferize could theoretically maintain its own IR traversal, there is no reason to do so, because all bufferize implementations in BufferizableOpInterface have to support partial bufferization anyway. And we can share a larger part of the code base between the two bufferizations.
Differential Revision: https://reviews.llvm.org/D116448
Until now, bufferization assumed that the yielded tensor of a linalg.tiled_loop is an output tensor. This is not necessarily the case.
Differential Revision: https://reviews.llvm.org/D116685
Verify only that the outer type is LLVM-compatible; the elemental types, if
present, are already checked by the type verifiers. This makes some LLVM dialect
operations compatible with mixed-dialect types that appear during progressive
lowering.
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D116671
There is no need to inspect the ReturnOp of the called function.
This change also refactors the bufferization of CallOps in such a way that `lookupBuffer` is called only a single time. This is important for a later change that fixes CallOp bufferization. (There is currently a TODO among the test cases.)
Note: This change modifies a test case but is marked as NFC. There is no change of functionality, but FuncOps with empty bodies are now reported with a different error message.
Differential Revision: https://reviews.llvm.org/D116446
Such CallOps were not handled properly. When computing the new result types (and replacement values) of a CallOp, non-tensor return values were not accounted for.
Differential Revision: https://reviews.llvm.org/D116445
Depends On D115008
This change opens the way for D115012, and removes some corner cases in `CodegenUtils.cpp`. The `SparseTensorAttrDefs.td` already specifies that we allow `0` bitwidth for the two overhead types and that it is interpreted to mean the architecture's native width.
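A sketch of an encoding that opts into the native width (attribute names as in `SparseTensorAttrDefs.td` at the time):
```
#CSR = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ],
  pointerBitWidth = 0,  // 0 means the architecture's native width
  indexBitWidth = 0
}>
```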
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D115010
Presently the result type verification checks if the type is used by a `pdl::OperationOp` inside the matcher. This is unnecessarily restrictive; the type could come from a `pdl::OperandOp` or `pdl::OperandsOp` and still be inferrable.
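A sketch of a pattern the relaxed verification now accepts (op names hypothetical): the result type comes from a matched operand, not from another operation in the matcher, yet is still inferable at rewrite time.
```
pdl.pattern : benefit(1) {
  %type = pdl.type
  %operand = pdl.operand : %type
  %root = pdl.operation "foo.op"(%operand : !pdl.value)
  pdl.rewrite %root {
    // The result type %type is known because it was bound to the operand.
    %new = pdl.operation "bar.op" -> (%type : !pdl.type)
    pdl.replace %root with %new
  }
}
```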
Reviewed By: rriddle, Mogball
Differential Revision: https://reviews.llvm.org/D116083
After https://reviews.llvm.org/D115821 it became possible to create
`tensor<elem_type>` with a single `tensor.from_elements` operation without
collapsing the tensor shape from `tensor<1xelem_type>` to `tensor<elem_type>`.
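A minimal sketch of the now-direct form:
```
// Creates a rank-0 tensor from a scalar, with no intermediate
// tensor<1xf32> plus shape collapse.
%t = tensor.from_elements %elem : tensor<f32>
```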
Differential Revision: https://reviews.llvm.org/D115891
Two canonicalizations for `select %x, 1, 0`:
if the return type is i1, simply return the condition %x; otherwise, extui %x to the return type.
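A sketch of the two rewrites, written with today's `arith.select` spelling for illustration (%true/%false and %c1_i32/%c0_i32 denote constants 1 and 0 of the respective types):
```
%0 = arith.select %x, %true, %false : i1       // -> %x
%1 = arith.select %x, %c1_i32, %c0_i32 : i32   // -> arith.extui %x : i1 to i32
```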
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D116517
Replace and(ext(a),ext(b)) with ext(and(a,b)). This both eliminates an instruction and performs the computation (the and/or) on a smaller type.
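A sketch of the rewrite in the arith dialect:
```
%a1 = arith.extui %a : i8 to i32
%b1 = arith.extui %b : i8 to i32
%r  = arith.andi %a1, %b1 : i32
// becomes a narrow and followed by a single extension:
%n  = arith.andi %a, %b : i8
%r2 = arith.extui %n : i8 to i32
```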
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D116519
This patch creates folds for cmpi(ext(%x : i1, iN) != 0) -> %x.
In essence, this matches a comparison that checks whether the extension of a boolean is nonzero, which is equivalent to the original boolean condition.
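A sketch of the fold in the arith dialect (%c0_i32 denotes the constant 0):
```
%e = arith.extui %x : i1 to i32
%r = arith.cmpi ne, %e, %c0_i32 : i32   // folds to %x
```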
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D116504
MLIR already exposes landing pads, the invoke op, and the personality function on LLVM functions. With this intrinsic it should be possible to implement exception handling via the exception handling mechanisms provided by the Itanium ABI.
Differential Revision: https://reviews.llvm.org/D116436
Per the discussion in https://reviews.llvm.org/D116345 it makes sense
to move AtomicRMWOp out of the standard dialect. This was accentuated by the
need to add a fold involving a memref::cast. The only dialect
that would permit this is the memref dialect (keeping it in the standard dialect
or moving it to the arithmetic dialect would require those dialects to have a
dependency on the memref dialect, which breaks linking).
As the AtomicRMWKind enum is used throughout, this has been moved to Arith.
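A sketch of the kind of fold that motivated the move (names hypothetical):
```
%c = memref.cast %m : memref<10xf32> to memref<?xf32>
%r = memref.atomic_rmw addf %v, %c[%i] : (f32, memref<?xf32>) -> f32
// folds to operate directly on %m:
%r2 = memref.atomic_rmw addf %v, %m[%i] : (f32, memref<10xf32>) -> f32
```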
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D116392
vector.transfer operations do not have rank-reducing semantics.
Bail on illegal rank-reduction: we need to check that the rank-reduced
dims are exactly the leading dims. I.e. the following is illegal:
```
%0 = vector.transfer_write %v, %t[0,0], %cst :
vector<2x4xf32>, tensor<2x4xf32>
%1 = tensor.insert_slice %0 into %tt[0,0,0][2,1,4][1,1,1] :
tensor<2x4xf32> into tensor<2x1x4xf32>
```
Cannot fold into:
```
%0 = vector.transfer_write %v, %t[0,0,0], %cst :
vector<2x4xf32>, tensor<2x1x4xf32>
```
For this, check that the trailing `vectorRank` dims of the insert_slice result
tensor match the trailing dims of the inferred result tensor.
Differential Revision: https://reviews.llvm.org/D116409
The semantics of the ops that implement the
`OffsetSizeAndStrideOpInterface` are that if the number of offsets,
sizes, or strides is less than the rank of the source, then default
values are filled along the trailing dimensions (0 for offsets, the
source dimension for sizes, and 1 for strides). This is confusing,
especially with rank-reducing semantics. The immediate issue is that
the methods of `OffsetSizeAndStrideOpInterface` assume that the
number of values is the same as the source rank. This causes
out-of-bounds errors.
So simplify the specification of `OffsetSizeAndStrideOpInterface`
to make it invalid to specify a number of offsets/sizes/strides not
equal to the source rank.
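For instance, with a rank-2 source all three lists must now have exactly two entries; a minimal sketch:
```
// Offsets, sizes, and strides each match the source rank of 2.
%0 = tensor.extract_slice %t[2, 0][4, 8][1, 1]
    : tensor<8x8xf32> to tensor<4x8xf32>
```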
Differential Revision: https://reviews.llvm.org/D115677
* There is no reason to forbid that case
* Also, the user would get a very unfriendly error like `expected result type with offset = -9223372036854775808 instead of 1`
Differential Revision: https://reviews.llvm.org/D114678
These conversions are better suited to be applied at the whole-tensor
level. Applying these as canonicalizations ends up triggering such
canonicalizations at all levels of the stack, which might be
undesirable. For example, some of the resulting code patterns won't
bufferize in-place and need additional stack buffers. It is best to be
more deliberate about when these canonicalizations apply.
Differential Revision: https://reviews.llvm.org/D115912
TOSA's canonicalizers that change dense operations should be moved to a
separate optimization pass to avoid canonicalizing to operations not supported
by relevant backends.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D115890
When the input and output of a pool2d op are both 1x1, it can be canonicalized to a no-op.
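A sketch of the pattern (attribute spelling approximate for the time):
```
// 1x1 spatial input, 1x1 kernel, unit stride, no padding: pooling is
// the identity, so the op is replaced by its input.
%0 = "tosa.avg_pool2d"(%arg)
    {kernel = [1, 1], stride = [1, 1], pad = [0, 0, 0, 0]}
    : (tensor<1x1x1x8xf32>) -> tensor<1x1x1x8xf32>
```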
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D115908