This flag prints the IR after a pass only when the pass fails. This makes it easier to inspect the invalid IR without having to print after every pass in the pipeline.
Differential Revision: https://reviews.llvm.org/D101853
Skip the sparsification pass for Linalg ops without annotated tensors
(or cases that are not properly handled yet).
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D102787
Also, fix a small typo where the "unsigned" splat variants were not
being created with an unsigned type.
Differential Revision: https://reviews.llvm.org/D102797
vector.transfer_read and vector.transfer_write operations are converted
to llvm intrinsics with specific alignment information; however, there
doesn't seem to be a way in llvm to take information from llvm.assume
intrinsics and use it to change this alignment information. In any
event, due to the structure of the llvm.assume intrinsic, applying
this information at the llvm level is more cumbersome. Instead, let's
generate the masked vector load and store intrinsics with the right
alignment information from MLIR in the first place. Since
we're bothering to do this, let's just emit the proper alignment for
loads, stores, scatter, and gather ops too.
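As a rough illustration of where that alignment can come from on the MLIR
side (this helper is hypothetical, not the code added by this patch):

    // Derive a byte alignment for the masked load/store intrinsic from the
    // memref element type via MLIR's data layout, rather than hoping LLVM
    // recovers it from llvm.assume.
    static uint64_t getElementAlignment(const mlir::DataLayout &layout,
                                        mlir::MemRefType memrefType) {
      return layout.getTypeABIAlignment(memrefType.getElementType());
    }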
Differential Revision: https://reviews.llvm.org/D100444
The patch extends the yaml code generation to support the following new OpDSL constructs:
- captures
- constants
- iteration index accesses
- predefined types
These changes have been introduced by revision
https://reviews.llvm.org/D101364.
Differential Revision: https://reviews.llvm.org/D102075
VectorTransferPermutationMapLoweringPatterns can be enabled via a pass option. These additional patterns lower permutation maps to minor identity maps with broadcasting, if possible, allowing for more efficient vector load/stores. The option is deactivated by default.
Differential Revision: https://reviews.llvm.org/D102593
LinalgOps that are all parallel do not use the value of the `outs`
tensor. The semantics are that the `outs` tensor is fully
overwritten. Using anything other than `init_tensor` can add false
dependencies between operations, when the use is just for the shape of
the tensor. Adding a canonicalization to always use `init_tensor` in
such cases breaks this dependence.
Differential Revision: https://reviews.llvm.org/D102561
The original interfaces are not safe to call during dialect conversion.
This is because some ops (e.g. `dynamic_reshape(input, target_shape)`)
depend on the values of their operands to calculate the output shape.
However, the operands may be out of reach during dialect conversion (e.g.
when converting from the tensor world to the buffer world). This patch
provides a new kind of interface that accepts user-provided operands to
solve this problem.
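As a rough sketch of the idea, using the dynamic_reshape example above
(the method name and signature are my approximation of the new interface,
not necessarily its exact form):

    // The caller passes in the operands to use (e.g. the already-converted
    // values), instead of the hook reading the op's own operands, which may
    // be invalid in the middle of a dialect conversion.
    LogicalResult DynamicReshapeOp::reifyReturnTypeShapes(
        OpBuilder &builder, ValueRange operands,
        SmallVectorImpl<Value> &reifiedReturnShapes) {
      // operands[1] is the user-provided target_shape value.
      reifiedReturnShapes.push_back(operands[1]);
      return success();
    }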
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D102317
"[mlir] Speed up Lexer::getEncodedSourceLocation"
This reverts commit 3043be9d2d and commit
861d69a525.
This change resulted in printing textual MLIR that can't be parsed; see
review thread https://reviews.llvm.org/D102567 for details.
This is a hook that allows for providing custom initialization of the pattern, e.g. setting bounded recursion or the debug name, without needing to define a custom constructor. A non-virtual hook was chosen to avoid polluting the vtable with code that we really just want to be inlined when constructing the pattern. The alternative would be to define a constructor for each pattern; this unfortunately creates a lot of otherwise unnecessary boilerplate for many patterns, whereas a hook provides a much simpler/cleaner interface for the very common case.
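As a rough illustration of the intended usage (the pattern and op names
below are hypothetical; setDebugName and setHasBoundedRewriteRecursion are
the existing pattern utilities this hook makes easy to call):

    using namespace mlir;

    // Hypothetical pattern; MyCastOp is a placeholder op.
    struct MyCastFoldPattern : public OpRewritePattern<MyCastOp> {
      using OpRewritePattern<MyCastOp>::OpRewritePattern;

      // Called by the pattern infrastructure after construction, so no
      // custom constructor is needed just for this setup.
      void initialize() {
        setDebugName("my-cast-fold");
        setHasBoundedRewriteRecursion();
      }

      LogicalResult matchAndRewrite(MyCastOp op,
                                    PatternRewriter &rewriter) const override {
        // ... actual rewrite logic ...
        return failure();
      }
    };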
Differential Revision: https://reviews.llvm.org/D102440
The version is used by LSP clients to ignore stale diagnostics, and can be used in a followup to help verify incremental changes.
Differential Revision: https://reviews.llvm.org/D102644
This adds the ability to specify a location when creating BlockArguments.
Notably Value::getLoc() will return this correctly, which makes diagnostics
more precise (e.g. the example in test-legalize-type-conversion.mlir).
This is currently optional to avoid breaking any existing code - if
absent, the BlockArgument defaults to using the location of its enclosing
operation (preserving existing behavior).
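A minimal sketch of what this looks like from C++ (assuming the location is
passed as an extra argument to Block::addArgument, per my reading of this
change):

    #include <cassert>

    // Attach a location when adding a block argument so diagnostics on the
    // argument point somewhere meaningful.
    void addLocatedArgument(mlir::Block *block, mlir::OpBuilder &builder,
                            mlir::Location argLoc) {
      mlir::BlockArgument arg =
          block->addArgument(builder.getF32Type(), argLoc);
      // Value::getLoc() now reports the argument's own location.
      assert(arg.getLoc() == argLoc);
    }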
The bulk of this change is plumbing location tracking through the parser
and printer to make sure it can round trip (in -mlir-print-debuginfo
mode). This is complete for generic operations, but requires manual
adoption for custom ops.
I added support for function-like ops to round trip their argument
locations - they print correctly, but when parsing the locations are
dropped on the floor. I intend to fix this, but it will require more
invasive plumbing through "function_like_impl" stuff so I think it
best to split it out to its own patch.
Differential Revision: https://reviews.llvm.org/D102567
During affine loop fusion, create private memrefs for escaping memrefs
too under the conditions that:
-- the source is not removed after fusion, and
-- the destination does not write to the memref.
This creates more fusion opportunities as illustrated in the test case.
Reviewed By: bondhugula, ayzhuang
Differential Revision: https://reviews.llvm.org/D102604
- Enables inferring the return type for ConstShape, taking into account valid return types;
- The compatible-return-type function could be reused; leaving that for a follow-up refactoring;
Differential Revision: https://reviews.llvm.org/D102182
The experimental flag for "inplace" bufferization in the sparse
compiler can be replaced with the new inplace attribute. This gives
a uniform way of expressing the more efficient form of bufferization.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D102538
The initial version of pooling assumed normalization was across all elements
equally. TOSA actually requires that the normalization be based on how
many elements were actually summed (so edges are not artificially dimmer).
Updated the lowering to reflect this change with corresponding tests.
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D102540
Translate ExitDataOp with delete and copyout operands to runtime calls.
This is done in a similar way as D101504.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D102381
Broadcast dimensions of vector transfer ops are always in-bounds. This is consistent with the fact that the starting position of a transfer is always in-bounds.
Differential Revision: https://reviews.llvm.org/D102566
This brings it in line with the builtin unrealized_conversion_cast,
which memref.buffer_cast is a specialized version of.
Differential Revision: https://reviews.llvm.org/D102608
At the moment `MlirModule`s can be converted to `MlirOperation`s, but not
the other way around (at least not without going around the C API). This
makes it impossible to e.g. run passes over a `ModuleOp` created through
`mlirOperationCreate`.
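For example, something along these lines becomes possible (the name of the
conversion entry point, mlirModuleFromOperation, is my assumption about what
this change adds):

    #include "mlir-c/IR.h"
    #include "mlir-c/Pass.h"

    // Run a pass pipeline over a module that was built as a generic
    // operation (e.g. via mlirOperationCreate).
    static void runOnGenericModuleOp(MlirPassManager pm, MlirOperation op) {
      MlirModule module = mlirModuleFromOperation(op);
      if (!mlirModuleIsNull(module))
        (void)mlirPassManagerRun(pm, module);
    }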
Reviewed By: nicolasvasilache, mehdi_amini
Differential Revision: https://reviews.llvm.org/D102497
Add TransferWritePermutationLowering, which replaces permutation maps of TransferWriteOps with vector.transpose.
Differential Revision: https://reviews.llvm.org/D102548
Provide an option to specify optimization level when creating an
ExecutionEngine via the MLIR JIT Python binding. Not only is the
specified optimization level used for code generation, but all LLVM
optimization passes at the optimization level are also run prior to
machine code generation (akin to the mlir-cpu-runner tool).
The default opt level remains at level two (-O2).
Contributions in part from Prashant Kumar <prashantk@polymagelabs.com>.
Differential Revision: https://reviews.llvm.org/D102551
We are moving from just dense/compressed to more general dim level
types, so we need more than just an "i1" array for annotations.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D102520
This change allows the SRC and DST of dma_start operations to be located in the
same memory space. This applies to both the Affine dialect and Memref dialect
versions of these Ops. The documentation has been updated to reflect this by
explicitly stating that overlapping memory locations are not supported
(undefined behavior).
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D102274
test/lib/Transforms/ has bitrotted and become something of a dumping ground for testing pretty much any part of the project. This revision cleans this up and moves the files within to a directory that reflects what is actually being tested.
Differential Revision: https://reviews.llvm.org/D102456
Lower the elementwise div op to the linalg dialect. Since TOSA only supports integer division, that is the only version currently implemented.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D102430
Support OpImageQuerySize in spirv dialect
co-authored-by: Alan Liu <alanliu.yf@gmail.com>
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D102029
Create a copy of vector-to-loops.mlir and adapt the test for
ProgressiveVectorToSCF. Fix a small bug in getExtractOp() triggered by
this test.
Differential Revision: https://reviews.llvm.org/D102388
Do not rely on pass labels to detect if the pattern was already applied in the past (which allowed for some extra optimizations to avoid redundant InsertOps and ExtractOps). Instead, check if these optimizations can be applied on-the-fly.
This also fixes a bug where vector.insert and vector.extract ops sometimes disappeared in the middle of the pass because they were folded away, but the next application of the pattern expected them to be there.
Differential Revision: https://reviews.llvm.org/D102206
Converting to integers requires rounding (for floating-point values) and
clipping to the min/max values of the destination range. Added this
behavior and updated the tests appropriately.
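Conceptually the new behavior amounts to something like the following
generic sketch (not the actual lowering code; the destination type here is
just an example):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <limits>

    // Round a floating-point value to the nearest integer, then clip it to
    // the representable range of the destination type (int8_t here).
    int8_t roundAndClipToI8(float value) {
      long rounded = std::lround(value);
      long lo = std::numeric_limits<int8_t>::min();
      long hi = std::numeric_limits<int8_t>::max();
      return static_cast<int8_t>(std::clamp(rounded, lo, hi));
    }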
Reviewed By: sjarus, silvas
Differential Revision: https://reviews.llvm.org/D102375