The existing vector transforms reduce the dimension of transfer_read
ops. However, beyond a certain point, the vector op actually has
to be reduced to a scalar load, since we can't load a zero-dimension
vector. This patch handles that case.
Note that in the longer term, it may be preferable to support
zero-dimension vectors. See
https://llvm.discourse.group/t/should-we-have-0-d-vectors/3097.
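A rough before/after sketch of the rewrite (op names, shapes, and SSA names are illustrative, not taken verbatim from the patch):
```
// Before: reducing further would require reading a zero-dimension vector.
%v = vector.transfer_read %m[%i], %pad : memref<?xf32>, vector<1xf32>

// After (sketch): a scalar load, re-broadcast to the single-element vector.
%s = memref.load %m[%i] : memref<?xf32>
%v = vector.broadcast %s : f32 to vector<1xf32>
```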
Differential Revision: https://reviews.llvm.org/D103432
This test ensures that an error is generated from the Python side when running a module pass on a function. The test used to instantiate ViewOpGraph; however, that pass was changed into a general "any op" pass in D106253, so a different pass must be used in this test.
Differential Revision: https://reviews.llvm.org/D107424
* New pass option `max-label-len`: Truncate attributes/result types that exceed this number of characters.
* New pass option `print-attrs`: Activate/deactivate rendering of attributes.
* New pass option `print-result-types`: Activate/deactivate rendering of result types.
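For illustration, the options might be combined on the command line roughly as follows (the exact flag spellings are assumed, not verbatim from the patch):
```
mlir-opt --view-op-graph='max-label-len=20 print-attrs=0 print-result-types=0' input.mlir
```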
Differential Revision: https://reviews.llvm.org/D106337
* Visualize blocks and regions as subgraphs.
* Generate DOT file directly instead of using `GraphTraits`. `GraphTraits` does not support subgraphs.
Differential Revision: https://reviews.llvm.org/D106253
Also makes the style consistent with the surrounding
text that appears on the same webpage in the MLIR docs.
Reviewed By: grosul1
Differential Revision: https://reviews.llvm.org/D107418
We can propagate the shape from tosa.cond_if operands into the true/false
regions, and then through the connected blocks. Using the tosa.yield ops,
we can then determine all possible return types.
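For illustration, consider a cond_if in generic form whose unranked result can be refined from the yielded values (shapes are hypothetical):
```
%r = "tosa.cond_if"(%cond, %x) ({
^bb0(%arg0: tensor<4xf32>):
  "tosa.yield"(%arg0) : (tensor<4xf32>) -> ()
}, {
^bb0(%arg0: tensor<4xf32>):
  "tosa.yield"(%arg0) : (tensor<4xf32>) -> ()
}) : (tensor<i1>, tensor<4xf32>) -> tensor<*xf32>
// Sketch: both regions yield tensor<4xf32>, so the result type can be
// refined from tensor<*xf32> to tensor<4xf32>.
```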
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D105940
Handles shape inference for identity, cast, and rescale. These were missed
during the initial elementwise work. This also adds resize shape propagation,
which covers both attribute-based and input-type-based propagation.
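As a sketch of the cast case (types here are illustrative), the operand's shape carries over to the result while only the element type changes:
```
// Before inference: the result is unranked.
%0 = "tosa.cast"(%arg0) : (tensor<4x8xi32>) -> tensor<*xf32>
// After inference (sketch): the operand's shape is propagated.
%0 = "tosa.cast"(%arg0) : (tensor<4x8xi32>) -> tensor<4x8xf32>
```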
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D105845
This prevents an explosion of threads, given that each file gets its own context and thus its own thread pool. We don't really need a thread pool for the LSP contexts anyway, so it's better to just disable threading.
mlir/test/transforms/loop-fusion.mlir was too big, so it has been split into mlir/test/transforms/loop-fusion.mlir, mlir/test/transforms/loop-fusion-2.mlir, mlir/test/transforms/loop-fusion-3.mlir,
and mlir/test/transforms/loop-fusion-4.mlir. Further tests can be added in mlir/test/transforms/loop-fusion-4.mlir.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D106473
This patch adds the critical construct to the OpenMP dialect. The
implementation models the definition in section 2.17.1 of the OpenMP 5
standard. A name and a hint can be specified. The name is a global entity
or has external linkage; it is modelled as a FlatSymbolRefAttr. The hint
is modelled as an integer enum attribute.
Lowering to LLVM IR is also implemented, using the OpenMP IRBuilder.
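A rough sketch of the modeling (the assembly syntax here is assumed, not verbatim from the patch):
```
// The declaration carries the hint; the region op refers to it by symbol.
omp.critical.declare @mutex hint(contended)

func @use() {
  omp.critical(@mutex) {
    // Executed by at most one thread at a time.
    omp.terminator
  }
  return
}
```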
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D107135
Store both interfaceID and objectID as the key for the interface registration callback.
Otherwise, the implementation allows registering only one external model per object in a single dialect.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D107274
Silence a clang-tidy warning in AffineOps.cpp due to its inability to see
through the typeswitch. NFC.
Differential Revision: https://reviews.llvm.org/D106125
There are cases in which it is not desirable to fully compose the bound map with the operands when adding lower/upper bounds to a `FlatAffineConstraints`.
E.g., this is the case when bounds should be expressed in terms of the operands only (and not the operands' dependencies). This also makes `addLowerOrUpperBound` usable together with operands that are defined through semi-affine expressions.
Differential Revision: https://reviews.llvm.org/D107221
Bounds such as `dim_{pos} <= c_1 * dim_x + ...` where `x == pos` are invalid: the bounded dimension must not appear on the right-hand side (e.g., `d0 <= 2 * d0 + 5`). `addLowerOrUpperBound` previously added an incorrect inequality to the set. Such cases are now explicitly rejected.
Differential Revision: https://reviews.llvm.org/D107220
Add a ForLoopBoundSpecialization pass, which specializes scf.for loops into a "main loop", in which `step` divides the iteration space evenly, and an scf.if that handles the last, partial iteration.
This transformation is useful for vectorization and loop tiling. E.g., when vectorizing loads/stores, programs will spend most of their time in the main loop, in which only unmasked loads/stores are used. Only in the last iteration (the scf.if) are slower masked loads/stores used.
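A rough shape of the transformation (bounds, names, and payload are illustrative):
```
// Before: the step (4) need not divide the trip count evenly.
scf.for %iv = %c0 to %ub step %c4 {
  "payload.op"(%iv) : (index) -> ()
}

// After (sketch): a main loop over the evenly divisible part, plus an
// scf.if guarding the final partial iteration.
scf.for %iv = %c0 to %split step %c4 {  // %split: %ub rounded down to a multiple of 4
  "payload.op"(%iv) : (index) -> ()
}
scf.if %has_partial_iteration {         // true iff %split != %ub
  "payload.op"(%split) : (index) -> ()
}
```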
Subsequent commits will apply this transformation in the SparseDialect and in Linalg's loop tiling.
Differential Revision: https://reviews.llvm.org/D105804
There was a slight mismatch between the double-based COO and the actual
numerical type in the final sparse tensor storage (due to external formats
always using double). This minor revision removes that inconsistency by using
a properly typed COO and casting during the "add" method instead. This also
prepares alternative ways of initializing the COO object.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D107310
We had a [bad bug](69655864ee) over in CIRCT
caused by accidentally passing around PatternRewriter
by value. There is no reason to support copy/assignment
of the pattern rewriter, so disable it.
Differential Revision: https://reviews.llvm.org/D107232
This patch fixes a bug in the existing implementation of detectAsFloorDiv,
where floordivs whose numerator has a non-zero constant term, and floordivs
whose numerator consists only of a constant term, were not being detected.
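For illustration, divisions of the following forms are now detected (these maps are hypothetical examples, not from the patch):
```
// Numerator with a non-zero constant term:
#map0 = affine_map<(d0) -> ((d0 + 2) floordiv 4)>
// Numerator consisting only of a constant term:
#map1 = affine_map<(d0) -> (42 floordiv 5)>
```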
Reviewed By: vinayaka-polymage
Differential Revision: https://reviews.llvm.org/D107214
The `DataLayout` class currently contains the member `layoutStack`, which is hidden behind a preprocessor region dependent on the NDEBUG macro. Code-wise this makes a lot of sense, as the `layoutStack` is used for extra assertions that users will want when compiling a debug build.
However, it has the uncomfortable consequence of leading to a different ABI in Debug and Release builds. This is a bit annoying for downstream projects, as they may want to build against a stable Release of MLIR in Release mode but still be able to debug their own project depending on MLIR.
This patch changes the related uses of NDEBUG to LLVM_ENABLE_ABI_BREAKING_CHECKS. As this macro is computed at configure time of LLVM, it does not change based on the compiler settings of a downstream project the way NDEBUG would.
Differential Revision: https://reviews.llvm.org/D107227
Introduces a conversion from one (sparse) tensor type to another
(sparse) tensor type. See the operation doc for details. Actual
codegen for all cases is still TBD.
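For example (a sketch; the encoding attribute is abbreviated and uses the syntax current at the time of writing):
```
#CSR = #sparse_tensor.encoding<{ dimLevelType = ["dense", "compressed"] }>

// Convert a dense tensor to its sparse counterpart.
%1 = sparse_tensor.convert %0 : tensor<8x8xf64> to tensor<8x8xf64, #CSR>
```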
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D107205
NFC. Clean up stale doc comments on the memref replacement utility and
rename some variables in it to avoid confusion.
Differential Revision: https://reviews.llvm.org/D107144
This allows for reusing the same output channel when the extension reloads after updating the server. Currently, whenever the extension restarts, a new output channel is created (which can lead to a large number of seemingly dead output channels).
Quite a few things were out of date or just not
organized well. This revision updates the extension
name, repo, icon, and many other components in
preparation for publishing the extension to the
marketplace.
If the source value to load is bool, and we have native storage
capability support for the source bitwidth, we still cannot directly
rewrite uses; we need to perform casting to bool first.
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D107119
If the source value to store is bool, and we have native storage
capability support for the target bitwidth, we still cannot directly
store; we need to perform casting to match the target memref
element's bitwidth.
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D107114
Rationale:
External file formats always store the values as doubles, so this was
hard coded in the memory resident COO scheme used to pass data into the
final sparse storage scheme during setup. However, with alternative methods
of setting up these temporary COO schemes on the horizon, it is time to
properly template this data structure.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D107001
The effect name is used by tablegen when generating the getEffects method of the SideEffectInterfaces. It is currently unqualified even though the class is contained within the mlir namespace, leading to compiler errors when `using namespace mlir;` isn't used before including the generated cpp file.
This patch fixes that by simply fully qualifying the class name.
Differential Revision: https://reviews.llvm.org/D107171
The presence of AffineIfOp inside AffineFor prevents fusion of the other loops. For example:
```
affine.for %i0 = 0 to 10 {
  affine.store %cf7, %a[%i0] : memref<10xf32>
}
affine.for %i1 = 0 to 10 {
  %v0 = affine.load %a[%i1] : memref<10xf32>
  affine.store %v0, %b[%i1] : memref<10xf32>
}
affine.for %i2 = 0 to 10 {
  affine.if #set(%i2) {
    %v0 = affine.load %b[%i2] : memref<10xf32>
  }
}
```
The first two loops were not fused because of the `affine.if` inside the last `affine.for`.
The issue seems to come from a conservative constraint that does not allow fusion if there are ops whose number of regions != 0 (affine.if is one of them).
This patch just removes that constraint when `affine.if` is inside `affine.for`. The existing `canFuseLoops` method is able to handle `affine.if` correctly.
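With the constraint removed, the first two loops can fuse while the loop containing `affine.if` is left intact, roughly:
```
affine.for %i0 = 0 to 10 {
  affine.store %cf7, %a[%i0] : memref<10xf32>
  %v0 = affine.load %a[%i0] : memref<10xf32>
  affine.store %v0, %b[%i0] : memref<10xf32>
}
affine.for %i2 = 0 to 10 {
  affine.if #set(%i2) {
    %v0 = affine.load %b[%i2] : memref<10xf32>
  }
}
```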
Reviewed By: bondhugula, vinayaka-polymage
Differential Revision: https://reviews.llvm.org/D105963
When we vectorize a scalar constant, the vector constant is inserted before its
first user if the scalar constant is defined outside the loops to be vectorized.
It is possible that the vector constant does not dominate all its users. To fix
the problem, we find the innermost vectorized loop that encloses that first user
and insert the vector constant at the top of the loop body.
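A sketch of the resulting placement (op spellings, shapes, and the payload are illustrative):
```
affine.for %i = 0 to 128 step 128 {   // innermost vectorized loop
  // The vector constant is materialized at the top of the loop body,
  // where it dominates all of its vectorized users.
  %vcst = constant dense<1.0> : vector<128xf32>
  %sum = addf %vcst, %vcst : vector<128xf32>
}
```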
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D106609
The make-broadcastable logic needs the output shape to determine whether the
operation includes additional broadcasting. Include some canonicalizations
for TOSA to remove unneeded reshapes.
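For instance, a reshape to an identical shape can now be folded away (a hypothetical example):
```
// Before: a no-op reshape.
%0 = "tosa.reshape"(%arg0) {new_shape = [2, 3]} : (tensor<2x3xf32>) -> tensor<2x3xf32>
// After canonicalization (sketch): uses of %0 are replaced with %arg0.
```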
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D106846