To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from `spv.camelCase` to `spv.CamelCase` everywhere.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D97919
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from `spv.camelCase` to `spv.CamelCase` everywhere.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D97917
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from `spv.camelCase` to `spv.CamelCase` everywhere.
Differential Revision: https://reviews.llvm.org/D97920
The value type of the attribute can be specified either by overriding the `typeBuilder` field on the `AttrDef` or by providing a parameter of type `AttributeSelfTypeParameter`. This removes the need to define custom storage class constructors for attributes that have a value type other than `NoneType`.
Differential Revision: https://reviews.llvm.org/D97590
Add a folder to rewrite a sequence such as:
```
%t1 = ...
%v = vector.transfer_read %t0[%c0...], {masked = [false...]} :
  tensor<static_sizesxf32>, vector<static_sizesxf32>
%t2 = vector.transfer_write %v, %t1[%c0...] {masked = [false...]} :
  vector<static_sizesxf32>, tensor<static_sizesxf32>
```
into:
```
%t0
```
The producer of t1 may or may not be DCE'd depending on whether it is a
block argument or has side effects.
Differential Revision: https://reviews.llvm.org/D97934
There is no need for the interface implementations to be exposed; opaque
registration functions are sufficient for all users, similarly to passes.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D97852
Add a Loop Option attribute and generate llvm metadata attached to
branch instructions to control code generation.
Reviewed By: ftynse, mehdi_amini
Differential Revision: https://reviews.llvm.org/D96820
Exhaustive testing found that it is possible for a while loop
to appear in between chainable for loops. As long as we don't
scalarize reductions in while loops, this means we need to
terminate the chain at the while loop. This also refactors the
reduction code into more readable helper methods.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D97886
The support for attributes closely maps that of Types (basically 1-1), given that Attributes are defined in exactly the same way as Types. All of the current ODS TypeDef classes get an Attr equivalent. The generation of the attribute classes themselves shares the same generator as types.
Differential Revision: https://reviews.llvm.org/D97589
This better matches the actual IR concept that is being modeled, and is consistent with how the rest of PDL is structured.
Differential Revision: https://reviews.llvm.org/D95718
This type represents a range of positional values. It will be used in followup revisions to add support for variadic constructs to PDL, such as operand and result ranges.
Differential Revision: https://reviews.llvm.org/D95717
SubTensorInsertOp requires that the dest type and result
type match. Just folding the tensor.cast operation violates this and
creates verification errors during canonicalization. Also fix other
canonicalization methods that weren't inserting casts properly.
Differential Revision: https://reviews.llvm.org/D97800
Unlike the definition in TensorFlow and TOSA, the output is [N,H,W,C,M]. This can make transforms easier in Linalg because the indexing maps are plain. E.g., to determine whether there is a dependency between the fill op and the depthwise conv op, the current pipeline only recognizes the dependency if they all use projected affine maps.
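For illustration, a minimal NumPy sketch of this layout (a naive reference assuming stride 1 and no padding, not the Linalg implementation itself):
```
import numpy as np

# Naive reference: input [N,H,W,C], filter [KH,KW,C,M] -> output [N,OH,OW,C,M].
def depthwise_conv2d_nhwcm(x, w):
  n, h, wd, c = x.shape
  kh, kw, _, m = w.shape
  oh, ow = h - kh + 1, wd - kw + 1
  out = np.zeros((n, oh, ow, c, m), dtype=x.dtype)
  for i in range(kh):
    for j in range(kw):
      # C and M remain separate output dims instead of being fused
      # into a single C*M dim as in the TensorFlow/TOSA definition.
      out += np.einsum('nhwc,cm->nhwcm', x[:, i:i + oh, j:j + ow, :], w[i, j])
  return out
```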
Reviewed By: asaadaldien
Differential Revision: https://reviews.llvm.org/D97798
This offers the ability to create a JIT and invoke a function by passing
ctypes pointers to the argument and the result.
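A short sketch of the intended usage, following the pattern in the tests; the `lowerToLLVM` helper is a stand-in for a pass pipeline such as convert-std-to-llvm, elided here:
```
import ctypes
from mlir.ir import Context, Module
from mlir.execution_engine import ExecutionEngine

with Context():
  module = Module.parse(r"""
func @add(%arg0: i32, %arg1: i32) -> i32 attributes { llvm.emit_c_interface } {
  %0 = addi %arg0, %arg1 : i32
  return %0 : i32
}""")
  # Assumes the module is first lowered to the LLVM dialect (elided).
  engine = ExecutionEngine(lowerToLLVM(module))
  c_int_p = ctypes.c_int * 1
  arg0, arg1, res = c_int_p(40), c_int_p(2), c_int_p(-1)
  # Arguments and the result slot are passed as ctypes pointers.
  engine.invoke("add", arg0, arg1, res)
  # res[0] == 42
```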
Differential Revision: https://reviews.llvm.org/D97523
This adds minimalistic bindings for the execution engine, making it
possible to invoke the JIT from the C API. This is still quite early and
experimental and shouldn't be considered stable in any way.
Differential Revision: https://reviews.llvm.org/D96651
This gets rid of a dubious `shape_eq %a, %a` fold that folded `shape_eq`
even when `%a` is not an Attribute.
Differential Revision: https://reviews.llvm.org/D97728
Use `StringLiteral` for the function return type if it is known to return
constant string literals only.
This makes it visible to API users that such values can be safely
stored, since they refer to constant data, which will never be deallocated.
A `StringRef` is in general not safe to store long term,
since it might refer to temporary data allocated on the heap.
Add `inline` and `constexpr` method support to `OpMethod`.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D97390
Some variables became unused after D97383 landed. We should generate one symbol per attrUse.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D97794
Some elementwise operations are not scalarizable, vectorizable, or tensorizable.
Split `ElementwiseMappable` trait into the following, more precise traits.
- `Elementwise`
- `Scalarizable`
- `Vectorizable`
- `Tensorizable`
This allows for reuse of `Elementwise` in dialects like HLO.
Differential Revision: https://reviews.llvm.org/D97674
This patch continues the detensoring implementation by detensoring
internal control flow in functions.
In order to detensorize functions, all the non-entry blocks' arguments
are detensored and branches between such blocks are properly updated to
reflect the detensored types as well. The function entry block (signature)
is left intact.
This continues work towards handling github/google/iree#1159.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D97148
* Moves `batch_matmul`, `matmul`, `matvec`, `vectmat`, `dot` to the new mechanism.
* This is not just an NFC change: in addition to using a new code generation mechanism, it also activates symbolic casting, allowing mixed-precision operands and results.
* These definitions were generated from DSL by the tool: https://github.com/stellaraccident/mlir-linalgpy/blob/main/mlir_linalg/oplib/core.py (will be upstreamed in a subsequent set of changes).
Reviewed By: nicolasvasilache, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D97719
Add canonicalizers to subtensor_insert operations that
propagate the constant arguments within offsets, sizes and
strides. Also add a pattern to propagate tensor_cast operations.
Differential Revision: https://reviews.llvm.org/D97704
Lowers the transpose operation to a generic linalg op when the permutation
is a constant value.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D97508
For ops that produce tensor types and implement the shaped type component interface, the type inference interface can be used. Create a grouping of these together to make it easier to specify (it cannot be added to a list of traits, but must rather be appended/concatenated to one, as it isn't a trait but a list of traits).
Differential Revision: https://reviews.llvm.org/D97636
The universal index was maintained if dense indices were still
in place and lattice points followed. However, it should only
be kept if any of those following lattice points actually
consumes the universal index. This change also fixes an
inaccuracy where a broadcast was missing around a vector invariant.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D97594
This enables this kind of construct in the DSL to generate a named op that is polymorphic over numeric type variables `T1`, `T2`, and `U`, generating the correct arithmetic casts at construction time:
```
@tc_def_op
def polymorphic_matmul(A=TensorDef(T1, S.M, S.K),
                       B=TensorDef(T2, S.K, S.N),
                       C=TensorDef(U, S.M, S.N, output=True)):
  implements(ContractionOpInterface)
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```
Presently, this only supports type variables that are bound to the element type of one of the arguments, although a further extension that allows binding a type variable to an attribute would allow some more expressiveness and may be useful for some formulations. This is left to a future patch. In addition, this patch does not yet materialize the verifier support which ensures that types are bound correctly (for such simple examples, failing to do so will yield IR that fails verification; it just won't yet fail with a precise error).
Note that the full grid of extension/truncation/int<->float conversions is supported, but many of them are lossy, and higher-level code needs to be mindful of numerics (it is not the job of this level).
As-is, this should be sufficient for most integer matmul scenarios we work with in typical quantization schemes.
Differential Revision: https://reviews.llvm.org/D97603
Includes a lowering for tosa.const, tosa.if, and tosa.while to Standard/SCF dialects. TosaToStandard is
used for constant lowerings and TosaToSCF handles the if/while ops.
Resubmission of https://reviews.llvm.org/D97518 with ASAN fixes.
Differential Revision: https://reviews.llvm.org/D97529
Allows querying regions too via the generated OpAdaptors. This does not yet move region verification to the adaptor nor require regions for ops where needed.
Differential Revision: https://reviews.llvm.org/D97519
Both identity ops can be lowered by replacing their results with their
inputs. We keep this as a linalg lowering as other backends may choose to
create copies.
Differential Revision: https://reviews.llvm.org/D97517
Similar to mask-load/store and compress/expand, the gather and
scatter operations now allow for higher-dimension uses. Note that
to support the mixed-type index, the new syntax is:
vector.gather %base [%i,%j] [%kvector] ....
The first client of this generalization is the sparse compiler,
which needs to define gathers and scatters on dense operands
of higher dimensions too.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D97422
Lowering from the tosa.reshape op to linalg.reshape. For same-rank or
non-collapsed/expanded cases, two linalg.reshapes are inserted.
Differential Revision: https://reviews.llvm.org/D97439
Right now they multiply before casting, which means they would frequently
overflow. There are various reasonable ways to do this, but until we
have robust op description infra, this is a simple and safe default. More
careful treatments are likely to be hardware specific, as well (e.g.
using an i8*i8->i16 mul instruction).
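A quick NumPy illustration of the difference (hypothetical values, not the lowering itself):
```
import numpy as np

a = np.array([100], dtype=np.int8)
b = np.array([100], dtype=np.int8)

# Multiplying in i8 wraps: 100 * 100 = 10000 does not fit in int8.
wrapped = a * b                                  # -> [16] (10000 mod 256)
# Casting to a wider type first preserves the exact product.
exact = a.astype(np.int32) * b.astype(np.int32)  # -> [10000]
```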
Reviewed By: nicolasvasilache, mravishankar
Differential Revision: https://reviews.llvm.org/D97505
If one operand is not used in the formula, it will be considered a
shaped operand, and the results of its indexing map will be the first
reduction dims.
Depends On D97383
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D97384
'getAttrs' has been explicitly marked deprecated. This patch refactors
to use Operation::getAttrs().
Reviewed By: csigg
Differential Revision: https://reviews.llvm.org/D97546
Fixes a bug in the affine fusion pipeline where an incorrect fusion was
performed even though a Call Op that potentially modifies memrefs under
consideration exists between the source and target.
Fixes part of https://bugs.llvm.org/show_bug.cgi?id=49220
Reviewed By: bondhugula, dcaballe
Differential Revision: https://reviews.llvm.org/D97252
Includes a lowering for tosa.const, tosa.if, and tosa.while to Standard/SCF dialects. TosaToStandard is
used for constant lowerings and TosaToSCF handles the if/while ops.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D97352