Previously, this API would return the PyObjectRef, rather than the
underlying PyOperation.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D101416
Canonicalizations for subtensor operations defaulted to using the
rank-reduced version of the operation, but the cast inserted to get
back the original type would be illegal if the rank was actually
reduced. Instead, make the canonicalization preserve the rank of the
operation.
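For illustration, a sketch of the rank-preserving form now produced (shapes and values are made up):
```
%1 = subtensor %0[0, 0] [4, 1] [1, 1] : tensor<8x8xf32> to tensor<4x1xf32>
// rather than the rank-reduced tensor<4xf32> plus an illegal rank-expanding cast
```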
Differential Revision: https://reviews.llvm.org/D101258
The current canonicalization did not remap operation results correctly
and attempted to erase tiledLoop, which is incorrect if not all tensor
results are folded.
Tensor inputs, if not used in the body of TiledLoopOp, can be removed.
memref::CastOp can be folded into TiledLoopOp as well.
Differential Revision: https://reviews.llvm.org/D101445
Both `shape.broadcast` and `shape.cstr_broadcastable` accept dynamic and static
extent tensors. If their operands are cast, we can use the original value
instead.
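For example (a sketch; names and types are illustrative):
```
%c = tensor.cast %s : tensor<3xindex> to tensor<?xindex>
%b = shape.broadcast %c, %t : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>
// canonicalizes so that %s feeds shape.broadcast directly, bypassing the cast
```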
Differential Revision: https://reviews.llvm.org/D101376
Add a section attribute to LLVM_GlobalOp. During module translation, the
attribute value is propagated to the LLVM IR global.
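For example (sketch; the symbol name is made up):
```
llvm.mlir.global internal constant @sectioned("data") {section = ".mysection"} : !llvm.array<4 x i8>
```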
Reviewed By: sgrechanik, ftynse, mehdi_amini
Differential Revision: https://reviews.llvm.org/D100947
This adds `mlirOperationSetOperand` to the IR C API, similar to the
function to get an operand.
In the Python API, this adds `operands[index] = value` syntax, similar
to the syntax to get an operand with `operands[index]`.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D101398
Add the `getCapsule()` and `createFromCapsule()` methods to the
PyValue class, as well as the necessary interoperability.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D101090
MatMul and FullyConnected have transposed dimensions for the weights.
Also, remove the unneeded tensor reshape for the bias.
Differential Revision: https://reviews.llvm.org/D101220
Quantized negation can be performed using higher-bit operations.
The minimal bit width is picked to perform the operation.
Differential Revision: https://reviews.llvm.org/D101225
Explicitly check for the uninitialized state to prevent crashes in edge cases where the derived analysis creates a lattice element for a value that hasn't been visited yet.
Like `print-ir-after-all` and `-before-all`, this allows inspecting IR for
debugging purposes. While the former allow inspection only between passes, this
change makes it possible to follow the rewrites that happen within passes.
Differential Revision: https://reviews.llvm.org/D100940
Empty extent tensor operands were only removed when they were defined as a
constant. Additionally, we can remove them if they are known to be empty by
their type `tensor<0xindex>`.
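A sketch (names are illustrative):
```
// %e : tensor<0xindex> is known empty from its type alone, even though it is
// not defined by a constant, so it can be dropped:
%b = shape.broadcast %e, %s : tensor<0xindex>, tensor<?xindex> -> tensor<?xindex>
// folds to just %s
```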
Differential Revision: https://reviews.llvm.org/D101351
Splat constant folding was limited to `std.constant` operations. Instead, use
the constant matcher and apply splat constant folding to any constant-like
operation that holds a splat attribute.
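For illustration, one plausible instance of such a fold (not necessarily one touched by this change):
```
%c = constant dense<3.0> : vector<4xf32>   // splat constant
%e = vector.extract %c[2] : vector<4xf32>  // can fold to constant 3.0 : f32
```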
Differential Revision: https://reviews.llvm.org/D101301
This revision takes the forward value propagation engine in SCCP and refactors it into a more generalized forward dataflow analysis framework. This framework allows for propagating information about values across the various control flow constructs in MLIR, and removes the need for users to reinvent the traversal (often not as completely). There are a few aspects of the traversal that were conservative for SCCP and that should be relaxed to support the needs of different value analyses. To keep this revision simple, these conservative behaviors will be left in (note that this won't produce an incorrect result, but may produce more conservative results than necessary in certain edge cases, e.g. region entry arguments for non-region branch interface operations). The framework also only focuses on computing lattices for values, given the SCCP origins, but this is something to relax as needed in the future.
Given that this logic is already in SCCP, a majority of this commit is NFC. The more interesting parts are the interface glue that clients interact with.
Differential Revision: https://reviews.llvm.org/D100915
The new "encoding" field in tensor types so far had no meaning. This revision introduces:
1. an encoding attribute interface in IR, for verifying tensors against encodings in general
2. an attribute in the Tensor dialect, `#tensor.sparse<dict>`, plus a concrete sparse tensor API (see the sketch below)
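A hypothetical use of the encoding (the dictionary contents below are illustrative, not normative):
```
#CSR = #tensor.sparse<{sparseDimLevelType = ["dense", "compressed"]}>
func private @matvec(tensor<64x64xf64, #CSR>)
```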
Active discussion:
https://llvm.discourse.group/t/rfc-introduce-a-sparse-tensor-type-to-core-mlir/2944/
Reviewed By: silvas, penpornk, bixia
Differential Revision: https://reviews.llvm.org/D101008
Add two canonicalizations for scf.if.
1) A canonicalization that allows users of a condition within an if to assume the condition
is true in the true region (and false in the else region).
2) A canonicalization that removes yielded values that are equivalent to the condition
or its negation, as sketched below.
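A sketch of both patterns:
```
%0 = scf.if %cond -> (i1) {
  scf.yield %cond : i1   // %cond is known true here, so this yield folds to true
} else {
  scf.yield %cond : i1   // %cond is known false here, so this yield folds to false
}
// A result that yields the condition (or its negation) can be replaced by
// %cond (or its negation) directly.
```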
Differential Revision: https://reviews.llvm.org/D101012
In LLVM_ENABLE_STATS=0 builds, `llvm::Statistic` maps to `llvm::NoopStatistic`
but has 3 mostly unused pointers. GlobalOpt considers that the pointers can
potentially retain allocated objects, so GlobalOpt cannot optimize out the
`NoopStatistic` variables (see D69428 for more context), wasting 23KiB for stage
2 clang.
This patch makes `NoopStatistic` empty and thus reclaims the wasted space. The
clang size is even smaller than applying D69428 (slightly smaller in both .bss and
.text).
```
# This means the D69428 optimization on clang is mostly nullified by this patch.
HEAD+D69428: size(.bss) = 0x0725a8
HEAD+D101211: size(.bss) = 0x072238
# bloaty - HEAD+D69428 vs HEAD+D101211
# With D101211, we also save a lot of string table space (.rodata).
FILE SIZE VM SIZE
-------------- --------------
-0.0% -32 -0.0% -24 .eh_frame
-0.0% -336 [ = ] 0 .symtab
-0.0% -360 [ = ] 0 .strtab
[ = ] 0 -0.2% -880 .bss
-0.0% -2.11Ki -0.0% -2.11Ki .rodata
-0.0% -2.89Ki -0.0% -2.89Ki .text
-0.0% -5.71Ki -0.0% -5.88Ki TOTAL
```
Note: LoopFuse is a disabled pass. For now this patch adds
`#if LLVM_ENABLE_STATS` so `OptimizationRemarkMissed` is skipped in
LLVM_ENABLE_STATS==0 builds. If these `OptimizationRemarkMissed` are useful in
LLVM_ENABLE_STATS==0 builds, we can replace `llvm::Statistic` with
`llvm::TrackingStatistic`, or use a different abstraction to keep track of the strings.
Similarly, skip the code in `mlir/lib/Pass/PassStatistics.cpp` which
calls `getName`/`getDesc`/`getValue`.
Reviewed By: lattner
Differential Revision: https://reviews.llvm.org/D101211
This tidies up the code a bit:
* Eliminate the ctx member, which doesn't need to be stored.
* Rename verify(Operation) to make it more clear that it is
doing more than verifyOperation and that the dominance check
isn't being done multiple times.
* Rename mayNotHaveTerminator, which confusingly suggested that it was
unknown whether the block had a terminator, when it is really
about whether it is legal for the block to have a terminator.
* Some minor optimizations: don't check for RegionKindInterface
if there are no regions. Don't do two passes over the
operations in a block in OperationVerifier::verifyDominance when
one will do.
The optimizations are actually a measurable (but minor) win in some
CIRCT cases.
Differential Revision: https://reviews.llvm.org/D101267
Ensure the correct type is preserved during folding and canonicalization.
`shape.broadcast` of a single operand can only be folded away if the argument
type matches the result type.
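A sketch of the folding constraint:
```
// Folds to %s only because the operand and result types match:
%b = shape.broadcast %s : tensor<?xindex> -> tensor<?xindex>
```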
Differential Revision: https://reviews.llvm.org/D101158
Includes tests and implementation for both integer and floating-point values.
Both nearest-neighbor and bilinear interpolation are included.
Differential Revision: https://reviews.llvm.org/D101009
Mask vectors are handled similarly to data vectors in N-D TransferWriteOp. They are copied into a temporary memory buffer, which can be indexed into with non-constant values.
Differential Revision: https://reviews.llvm.org/D101136
This commit adds support for broadcast dimensions in permutation maps of vector transfer ops.
Also fixes a bug in VectorToSCF that generated incorrect in-bounds checks for broadcast dimensions.
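For example (sketch; a broadcast dimension is expressed as a constant 0 result in the map):
```
%v = vector.transfer_read %A[%i, %j], %pad
  {permutation_map = affine_map<(d0, d1) -> (0, d1)>}
  : memref<?x?xf32>, vector<4x8xf32>
```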
Differential Revision: https://reviews.llvm.org/D101019
This commit adds support for dimension permutations in permutation maps of vector transfer ops.
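For example (sketch of a transposed read):
```
%v = vector.transfer_read %A[%i, %j], %pad
  {permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
  : memref<?x?xf32>, vector<4x8xf32>
```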
Differential Revision: https://reviews.llvm.org/D101007
Fix style/clang-tidy warnings, trim stale includes and forward
declarations, and clean up/fix stale comments.
Differential Revision: https://reviews.llvm.org/D101021
Strided 1D vector transfer ops are 1D transfers operating on a memref dimension different from the last one. Such transfer ops do not access contiguous memory blocks (vectors), but access memory in a strided fashion. In the absence of a mask, strided 1D vector transfer ops can also be lowered using matrix.column.major.* LLVM instructions (in a later commit).
Subsequent commits will extend the pass to handle the remaining missing permutation maps (broadcasts, transposes, etc.).
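A sketch of such a strided transfer (reading along the major dimension, so consecutive vector elements are a full row apart in memory):
```
%v = vector.transfer_read %A[%i, %j], %pad
  {permutation_map = affine_map<(d0, d1) -> (d0)>}
  : memref<?x?xf32>, vector<4xf32>
```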
Differential Revision: https://reviews.llvm.org/D100946
Eliminate empty shapes from the operands, partially fold all constant shape
operands, and fix normal folding.
Differential Revision: https://reviews.llvm.org/D100634
The interchange option attached to the linalg-to-loops lowering affects only the loops and does not update the memory accesses generated in the body of the operation. Instead of performing the interchange during the loop lowering, use the interchange pattern.
Differential Revision: https://reviews.llvm.org/D100758
Example:
```
%0 = linalg.init_tensor : tensor<...>
%1 = linalg.generic ... outs(%0: tensor<...>)
%2 = linalg.generic ... outs(%0: tensor<...>)
```
The memref allocated as a result of `init_tensor` bufferization can be incorrectly overwritten by the second linalg.generic operation.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D100921
This covers some of the basic documentation, but is still missing some documentation/examples of features provided by the server. Feature documentation will be added in a followup.
Differential Revision: https://reviews.llvm.org/D100690
This utilizes the mlir-lsp server to provide language services for MLIR files opened in vscode. The extension currently supports syntax highlighting, as well as tracking definitions/uses/source locations for SSA values and blocks.
Differential Revision: https://reviews.llvm.org/D100607
This commit adds a basic LSP server for MLIR that supports resolving references and definitions. Several components of the setup are simplified to keep the size of this commit down, and will be built out in later commits. A followup commit will add a vscode language client that communicates with this server, paving the way for better IDE experience when interfacing with MLIR files.
The structure of this tool is similar to mlir-opt and mlir-translate, i.e. the implementation is structured as a library that users can call into to implement entry points that contain the dialects/passes that they are interested in.
Note: This commit contains several files, namely those in `mlir-lsp-server/lsp`, that have been copied from the LSP code in clangd and adapted for use in MLIR. This copying was decided as the best initial path forward (discussed offline by several stakeholders in MLIR and clangd) given the different needs of our MLIR server and the one for clangd. If a strong desire/need for unification arises in the future, the existence of these files in mlir-lsp-server can be reconsidered.
Differential Revision: https://reviews.llvm.org/D100439
This information isn't useful for general compilation, but is useful for building tools that process .mlir files. This class will be used in a followup to start building an LSP language server for MLIR.
Differential Revision: https://reviews.llvm.org/D100438
This will prevent fusion that spans all dims and generates a
`(d0, d1, ...) -> ()` reshape, which isn't legal.
Differential Revision: https://reviews.llvm.org/D100805
Break up the dependency between SCF ops and the substituteMin helper and make a
more generic version of AffineMinSCFCanonicalization. This reduces dependencies
between Linalg and SCF and will allow the logic to be used with other kinds of
ops (like ID ops).
Differential Revision: https://reviews.llvm.org/D100321
Previously, any terminator without the ReturnLike and BranchOpInterface traits (e.g. scf.condition) was causing the pass to fail.
Differential Revision: https://reviews.llvm.org/D100832
Instead of always running the region builder, check if the generalized op has a region attached. If so, inline the existing region instead of calling the region builder. This change circumvents a problem with named operations that have a region builder taking captures, which the generalization pass does not know about.
Differential Revision: https://reviews.llvm.org/D100880
The current implementation allows for TransferWriteOps with broadcasts that do not make sense. E.g., a broadcast could write a vector into a single (scalar) memory location, which is effectively the same as writing only the last element of the vector.
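A sketch of the kind of op that is now rejected:
```
// Every element of %v would land in the same scalar location:
vector.transfer_write %v, %A[%i]
  {permutation_map = affine_map<(d0) -> (0)>}
  : vector<4xf32>, memref<?xf32>
```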
Differential Revision: https://reviews.llvm.org/D100842
Currently, it is only possible to register an operation or a type
when the TypeID is defined at compile time. The same holds for InterfaceMaps,
which can only be defined with compile-time defined interfaces.
With those changes, it is now possible to register types/operations
with custom TypeIDs. This is necessary to define new operations/types
at runtime.
Differential Revision: https://reviews.llvm.org/D99084
std.xor ops on bool are lowered to spv.LogicalNotEqual. For Boolean values, xor
and not-equal are the same thing.
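For example:
```
%r = xor %a, %b : i1
// lowers to
%r = spv.LogicalNotEqual %a, %b : i1
```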
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D100817
The patch extends the vectorization pass to lower linalg index operations to vector code. It allocates constant 1d vectors that enumerate the indexes along the iteration dimensions and broadcasts/transposes these 1d vectors to the iteration space.
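A sketch of the vectors generated for an index along the inner dimension of an 8x4 iteration space (values are illustrative):
```
%enum = constant dense<[0, 1, 2, 3]> : vector<4xindex>  // enumerates dimension 1
%idxs = vector.broadcast %enum : vector<4xindex> to vector<8x4xindex>
```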
Differential Revision: https://reviews.llvm.org/D100373
Add a new ProgressiveVectorToSCF pass that lowers vector transfer ops to SCF by gradually unpacking one dimension at a time. Unpacking stops at 1D, but can be configured to stop earlier, should the HW support (N>1)-D vectors.
The current implementation cannot handle permutation maps, masks, tensor types and unrolling yet. These will be added in subsequent commits. Once features are on par with VectorToSCF, this implementation will replace VectorToSCF.
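A sketch of one unpacking step (buffer handling elided; names are made up):
```
// A 2-D transfer becomes a loop over the major dimension of 1-D transfers:
scf.for %i = %c0 to %c4 step %c1 {
  %v1d = vector.transfer_read %A[%i, %c0], %pad : memref<4x8xf32>, vector<8xf32>
  // ... store %v1d into a temporary buffer at position %i ...
}
```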
Differential Revision: https://reviews.llvm.org/D100622
Some Math operations do not have an equivalent in LLVM. In these cases,
allow a low-priority fallback of calling the libm functions. This provides
functionality but is not a performant option.
Differential Revision: https://reviews.llvm.org/D100367
This patch extends the control-flow cost-model for detensoring by
implementing a forward-looking pass on block arguments that should be
detensored. This makes sure that if a (to-be-detensored) block argument
"escapes" its block through the terminator, then the successor arguments
are also detensored.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D100457
The patch replaces the index operations in the body of fused producers and linearizes the indices after expansion.
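A sketch, assuming an iteration dimension of size 8 is expanded into two dimensions (names are illustrative):
```
%i0 = linalg.index 0 : index   // outer expanded index
%i1 = linalg.index 1 : index   // inner expanded index
%c8 = constant 8 : index
%t  = muli %i0, %c8 : index
%li = addi %t, %i1 : index     // linearized index used by the fused producer body
```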
Differential Revision: https://reviews.llvm.org/D100479
Update the dimensions of the index operations to account for dropped dimensions and replace the index operations of dropped dimensions by zero.
Differential Revision: https://reviews.llvm.org/D100395
This patch adds the UnnamedAddr attribute to the GlobalOp in the LLVM
dialect. The attribute is also translated to and from LLVM IR.
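For example (sketch; keyword placement follows the custom syntax):
```
llvm.mlir.global internal unnamed_addr constant @g(42 : i64) : i64
```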
This is meant to be used in a follow up patch to lower OpenACC/OpenMP ops to
call to kmp and tgt runtime calls (D100678).
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D100677
The patch enables the library call lowering for linalg operations that contain index operations.
Differential Revision: https://reviews.llvm.org/D100537
Expose the debug flag as a readable and assignable property of a
dedicated class instead of a write-only function. Actually test that
setting the flag works. Move the test to a dedicated file; it has no
relation to context_managers.py, where it was added.
Arguably, this should be promoted from mlir.ir to the mlir module, but we are
not re-exporting the latter, and this functionality is purposefully
hidden, so it can stay in IR for now. Drop unnecessary export code.
Refactor the C API and put Debug into a separate library; fix it to actually
set the flag to the given value.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D100757
Instead of interchanging loops during the loop lowering this pass performs the interchange by permuting the indexing maps. It also updates the iterator types and the index accesses in the body of the operation.
Differential Revision: https://reviews.llvm.org/D100627
Move the existing optimization for transfer ops on tensors to the folder and
canonicalization patterns. This handles the write-after-write and
read-after-write cases, and also adds the write-after-read case.
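A sketch of the write-after-write case on tensors:
```
%w0 = vector.transfer_write %v0, %t[%c0] : vector<4xf32>, tensor<8xf32>
%w1 = vector.transfer_write %v1, %w0[%c0] : vector<4xf32>, tensor<8xf32>
// %w0 has no other use and the second write covers the first,
// so the first write folds away.
```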
Differential Revision: https://reviews.llvm.org/D100597
The existing implementation supports the static schedule for Fortran do loops. This
patch implements the dynamic variant of the same concept.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D97393
ArmSVE dialect is behind the recent changes in how the Vector dialect
interacts with backend vector dialects and the MLIR -> LLVM IR
translation module. This patch cleans up ArmSVE initialization within
Vector and removes the need for an LLVMArmSVE dialect.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D100171
When Linalg named ops support was added, captures were omitted
from the body builder. This revision adds support for captures
which allows us to write FillOp in a more idiomatic fashion using
the _linalg_ops_ext mixin support.
This raises an issue in the generation of `_linalg_ops_gen.py` where
```
@property
def result(self):
return self.operation.results[0] if len(self.operation.results) > 1 else None
```
The condition should be `== 1`.
This will be fixed in a separate commit.
Differential Revision: https://reviews.llvm.org/D100363
This offers the ability to pass numpy arrays to the corresponding
memref argument.
Reviewed By: mehdi_amini, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D100077
This allows for walking all nested locations of a given location, and is generally useful when processing locations.
Differential Revision: https://reviews.llvm.org/D100437