Previously this checked if the entire symbolic numerator was divisible by the
denominator, which is never the case when this function is called. Fixed this to
check only the non-const coefficients in the numerator, which was what was
intended and documented.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123592
`extract` was incorrectly folded when the source came from a broadcast that
was both adding new rank and broadcasting the inner dimension.
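A hedged sketch of the offending pattern (shapes are ours, not from the patch):
```mlir
// The broadcast both adds a leading dimension and stretches the trailing
// unit dimension, so the extract must not fold away to %src.
%b = vector.broadcast %src : vector<1xf32> to vector<4x8xf32>
%e = vector.extract %b[0] : vector<4x8xf32>  // yields vector<8xf32>, not %src
```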
Differential Revision: https://reviews.llvm.org/D123867
When the sample value is zero, everything is the same except that failure to
pivot does not imply emptiness. So, leave it to the user to mark as empty if
necessary, if they know the sample value is strictly negative. This is needed
for an upcoming symbolic lexmin heuristic.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123604
Changes the algorithm of LICM to support graph regions (no guarantee of topologically sorted order). Also fixes an issue where ops with recursive side effects and regions would not be hoisted if any nested ops used operands that were defined within the nested region.
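A minimal sketch of the second fix (the ops are hypothetical placeholders, not from the patch): the nested op only uses a value defined within the nested region, so the region-holding op is still loop invariant and should be hoisted.
```mlir
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%c4 = arith.constant 4 : index
scf.for %i = %c0 to %c4 step %c1 {
  // Hypothetical op with only recursive side effects; %inner is defined
  // inside its own region, so it does not pin the op to the loop.
  "test.recursive_effects"() ({
    %inner = arith.constant 7 : i32
    "test.use"(%inner) : (i32) -> ()
  }) : () -> ()
}
```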
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D122465
Operation clone is currently faulty.
Suppose you have a block as follows:
```
(%x0 : i32) {
%x1 = f(%x0)
return %x1
}
```
The test case we have is that we want to "unroll" this, changing it to compute `f(f(x0))` instead of just `f(x0)`. We do so by making a copy of the body at the end of the block and replacing the uses of the argument in the copied operations with the value returned from the original body.
This is implemented as follows:
1) map the block arguments to the returned value (`map[x0] = x1`).
2) clone the body
Now for this small example, this works as intended and we get the following.
```
(%x0 : i32) {
%x1 = f(%x0)
%x2 = f(%x1)
return %x2
}
```
This is because the current logic to clone `x1 = f(x0)` first looks up the arguments in the map (which finds `x0` maps to `x1` from the initialization), and then sets the map of the result to the cloned result (`map[x1] = x2`).
However, this fails if `x0` is not an argument to the op, but instead used inside the region, like below.
```
(%x0 : i32) {
%x1 = f() {
yield %x0
}
return %x1
}
```
This is because cloning an op currently first looks up the args (none), sets the map of the result (`map[%x1] = %x2`), and then clones the regions. This results in the following, which is clearly illegal:
```
(%x0 : i32) {
%x1 = f() {
yield %x0
}
%x2 = f() {
yield %x2
}
return %x2
}
```
Diving deeper, this is partially due to the ordering (which is what this PR fixes), as well as to how region cloning works. Namely, it will first clone with the mapping, and then remap all operands. Since the ordering above now has a map of `x0 -> x1` and `x1 -> x2`, we end up with the incorrect behavior shown here.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D122531
LLVM IR is moving towards adoption of opaque pointer types. These require extra
information to be passed when constructing some operations, in particular GEP
and Alloca. Adapt the builders of said operations and modify the translation
code to handle both opaque and non-opaque pointers.
This incidentally adds the translation for Alloca alignment and fixes the translation
of struct-related GEP indices that must be constant.
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D123792
Similar to the existing pattern for reordering cast(transpose), this makes a
transpose follow another transpose and increases the chance of embedding the
transposition inside a contraction op. Cast ops are in fact just special
instances of elementwise ops.
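A minimal sketch of the rewrite (shapes are ours, assumed for illustration):
```mlir
// Before: the elementwise op consumes two identically permuted transposes.
%ta = vector.transpose %a, [1, 0] : vector<4x2xf32> to vector<2x4xf32>
%tb = vector.transpose %b, [1, 0] : vector<4x2xf32> to vector<2x4xf32>
%r0 = arith.addf %ta, %tb : vector<2x4xf32>

// After: the elementwise op runs first and a single transpose follows it.
%s = arith.addf %a, %b : vector<4x2xf32>
%r1 = vector.transpose %s, [1, 0] : vector<4x2xf32> to vector<2x4xf32>
```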
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D123596
In order to increase parallelism, verification of certain ops that have
regions and the IsIsolatedFromAbove trait is delayed. That means the region
verifier may access invalid ops, which may lead to a crash.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D122771
This dialect provides operations that can be used to control transformation of
the IR using a different portion of the IR. It refers to the IR being
transformed as payload IR, and to the IR guiding the transformation as
transform IR.
The main use case for this dialect is orchestrating fine-grain transformations
on individual operations or sets thereof. For example, it may involve finding
loop-like operations with specific properties (e.g., large size) in the payload
IR, applying loop tiling to those and only those operations, and then applying
loop unrolling to the inner loops produced by the previous transformations. As
such, it is not intended as a replacement for the pass infrastructure, nor for
the pattern rewriting infrastructure. In the most common case, the transform IR
will be processed and applied to payload IR by a pass. Transformations
expressed by the transform dialect may be implemented using the pattern
infrastructure or any other relevant MLIR component.
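As a purely hypothetical sketch of that use case (these op names are illustrative only; this patch introduces just the dialect scaffolding and the extension mechanism, not these ops):
```mlir
// Match large loop-like payload ops, tile them, then unroll the inner loops.
%0 = transform.example.match_loops_larger_than { threshold = 1024 }
%1 = transform.example.tile %0 { sizes = [32, 32] }
transform.example.unroll %1 { factor = 4 }
```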
This dialect is designed to be extensible, that is, clients of this dialect are
allowed to inject additional operations into it using the
`TransformDialectExtension` mechanism newly introduced in this patch. This allows the
dialect to avoid a dependency on the implementation of the transformation as
well as to avoid introducing dialect-specific transform dialects.
See https://discourse.llvm.org/t/rfc-interfaces-and-dialects-for-precise-ir-transformation-control/60927.
Reviewed By: nicolasvasilache, Mogball, rriddle
Differential Revision: https://reviews.llvm.org/D123135
Move the operations that correspond to LLVM IR intrinsics in a separate .td
file. This makes it easier to maintain the intrinsics and decreases the compile
time of LLVMDialect.cpp by ~25%.
Depends On D123310
Reviewed By: wsmoses, jacquesguan
Differential Revision: https://reviews.llvm.org/D123315
LLVM IR has introduced and is moving forward with the concept of opaque
pointers, i.e. pointer types that are not carrying around the pointee type.
Instead, memory-related operations indicate the type of the data being accessed
through the opaque pointer. Introduce the initial support for opaque pointers
in the LLVM dialect:
- `LLVMPointerType` to support omitting the element type;
- alloca/load/store/gep to support opaque pointers in their operands and
results; this requires alloca and gep to store the element type as an
attribute;
- memory-related intrinsics to support opaque pointers in their operands;
- translation to LLVM IR for the ops above is no longer using methods
deprecated in LLVM API due to the introduction of opaque pointers.
Unlike LLVM IR, MLIR can afford to support both opaque and non-opaque pointers
at the same time and simplify the transition. Translation to LLVM IR of MLIR
that involves opaque pointers requires the LLVMContext to be configured to
always use opaque pointers.
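A short sketch of the two pointer flavors side by side (assembly spelling assumed):
```mlir
// Non-opaque: the pointee type is carried by the pointer type itself.
%p = llvm.alloca %sz x f32 : (i64) -> !llvm.ptr<f32>
// Opaque: !llvm.ptr omits the element type, so alloca keeps the element
// type on the op as an attribute instead.
%q = llvm.alloca %sz x f32 : (i64) -> !llvm.ptr
```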
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D123310
This change adds three new operations to the GPU dialect: gpu.mma.sync,
gpu.mma.ldmatrix, and gpu.lane_id. The former two are meant to target
the lower level nvvm.mma.sync and nvvm.ldmatrix instructions, respectively.
Lowerings are added for the new GPU operations for conversion to
NVVM.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D123647
Support unrolling for vector.transpose following the same interface as
other vector unrolling ops.
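For instance, given an (assumed) target native shape of 2x2, the transpose below would be decomposed into four 2x2 transposes on extracted slices, reassembled with insert ops:
```mlir
%t = vector.transpose %v, [1, 0] : vector<4x4xf32> to vector<4x4xf32>
```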
Differential Revision: https://reviews.llvm.org/D123688
StrEnumAttr has been deprecated in favour of EnumAttr, a solution based on AttrDef (https://reviews.llvm.org/D115181). This patch removes StrEnumAttr, along with all the custom ODS logic required to handle it.
See https://discourse.llvm.org/t/psa-stop-using-strenumattr-do-use-enumattr/5710 on how to transition to EnumAttr. In short,
```
// Before
def MyEnumAttr : StrEnumAttr<"MyEnum", "", [
  StrEnumAttrCase<"A">,
  StrEnumAttrCase<"B">
]>;

// After (pick an integer enum of your choice)
def MyEnum : I32EnumAttr<"MyEnum", "", [
  I32EnumAttrCase<"A", 0>,
  I32EnumAttrCase<"B", 1>
]> {
  // Don't generate a C++ class! We want to use the AttrDef.
  let genSpecializedAttr = 0;
}

// Define the AttrDef.
def MyEnumAttr : EnumAttr<MyDialect, MyEnum, "my_enum">;
```
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D120834
Enable specifying additional include directories to search. This is
consistent with what one can do with clangd (although there it is part of
more general compilation options) and the Python LSP. We would in general expect
these to be provided by a compilation database equivalent.
Differential Revision: https://reviews.llvm.org/D123474
This patch adds thread_local to llvm.mlir.global and adds translation for dso_local and addr_space to and from LLVM IR.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D123412
This revision replaces the current type cast constant folder with a new common type cast constant folder function template.
It covers all the former folders and supports folding constant splats and vectors.
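For example (a minimal sketch), a cast of a splat constant now folds directly:
```mlir
%cst = arith.constant dense<2.5> : vector<4xf32>
%0 = arith.fptosi %cst : vector<4xf32> to vector<4xi32>
// folds to
%folded = arith.constant dense<2> : vector<4xi32>
```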
Differential Revision: https://reviews.llvm.org/D123489
This change generalizes the fusion of `tensor.expand_shape` ->
`linalg.generic` op by collapsing to handle cases where only a subset
of the reassociations specified in the `tensor.expand_shape` are valid
to be collapsed.
The method that does the collapsing is refactored to allow it to be a
generic utility when required.
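A hedged sketch of the kind of case now handled (shapes are illustrative):
```mlir
// The expansion has reassociations [[0, 1], [2]]; if the consumer's
// indexing maps only permit collapsing the [0, 1] group, fusion now
// collapses that subset instead of bailing out entirely.
%e = tensor.expand_shape %arg [[0, 1], [2]]
    : tensor<?x32xf32> into tensor<?x8x32xf32>
```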
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D123153
This patch adds the task construct according to Section 2.10.1 of OpenMP 5.0.
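A minimal sketch of the new construct (clauses omitted; spelling assumed):
```mlir
omp.task {
  // task body
  omp.terminator
}
```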
Reviewed By: peixin, kiranchandramohan, abidmalikwaterloo
Differential Revision: https://reviews.llvm.org/D123575
This patch removes inheritance from PresburgerSpace in IntegerRelation and
instead makes the space a member of these classes.
This is required for three reasons:
- It prevents implicit casting to PresburgerSpace.
- Not all functions of PresburgerSpace need to be exposed by the deriving classes.
- IntegerRelation and IntegerPolyhedron are defined in a PresburgerSpace. It
makes more sense for the space to be a member instead of them inheriting from
a space.
Reviewed By: arjunp, ftynse
Differential Revision: https://reviews.llvm.org/D123585
With this change, there's going to be a clear distinction between LLVM
and MLIR pass manager options (e.g. `-mlir-print-after-all` vs
`-print-after-all`). This change is desirable from the point of view of
projects that depend on both LLVM and MLIR, e.g. Flang.
For consistency, all pass manager options in MLIR are prefixed with
`mlir-`, even options that don't have equivalents in LLVM.
Differential Revision: https://reviews.llvm.org/D123495
Lookup iter_arg buffers using `lookupBuffer` instead of always creating a new `ToMemrefOp`. Also cast all yielded buffers (if necessary), regardless of whether they are an equivalent buffer or a new allocation.
Note: This should have been part of D123369.
Differential Revision: https://reviews.llvm.org/D123383
Switch the CUDA runtime wrappers for GPU mem alloc/free to async. The
semantics of the GPU dialect ops (gpu.alloc/dealloc) and of the wrappers they
lower to (gpu-to-llvm) are those of the async versions -- however, these were
being incorrectly mapped to cuMemAlloc/cuMemFree instead of
cuMemAllocAsync/cuMemFreeAsync.
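For reference, the async op pair whose lowering changed (a minimal sketch):
```mlir
%memref, %token = gpu.alloc async [%dep] (%size) : memref<?xf32>
%done = gpu.dealloc async [%token] %memref : memref<?xf32>
```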
Reviewed By: csigg
Differential Revision: https://reviews.llvm.org/D123482
This supports the threadprivate directive in the OpenMP dialect following
the OpenMP 5.1 standard [2.21.2], and lowers it to LLVM IR using the OpenMP
IRBuilder.
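A minimal sketch (op spelling assumed from this patch):
```mlir
%0 = llvm.mlir.addressof @gvar : !llvm.ptr<i32>
%1 = omp.threadprivate %0 : !llvm.ptr<i32> -> !llvm.ptr<i32>
```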
Reviewed By: kiranchandramohan, shraiysh, arnamoy10
Differential Revision: https://reviews.llvm.org/D123350
Adding annotations on an as-needed basis, currently only for memrefCopy, but in general all C API functions that take pointers to memory allocated/initialized inside the jit-compiled code must be annotated to be able to run with msan.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123557
Use the new pass manager.
This also removes the ability to run arbitrary sets of passes. Not sure if this functionality is used, but it doesn't seem to be tested.
No need to initialize passes outside of constructing the PassBuilder with the new pass manager.
Reland: Fixed custom calls to `-lower-matrix-intrinsics` in integration tests by replacing them with `-O0 -enable-matrix`.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123425
The method to add elementwise ops fusion patterns pulls in many other
patterns by default. The patterns to pull in along with the
elementwise op fusion should be up to the caller. Split the method to
pull in just the elementwise ops fusion pattern. Other cleanup changes
include
- Move the pattern for constant folding of generic ops (currently only
  constant folds transpose) into a separate file, because it is not
  related to fusion.
- Drop the uber LinalgElementwiseFusionOptions. With the
populateElementwiseOpsFusionPatterns being split, this has no
utility now.
- Drop defaults for the control function.
- Fusion of splat constants with generic ops doesn't need a control
  function. It is always good to do.
Differential Revision: https://reviews.llvm.org/D123236
Use the new pass manager.
This also removes the ability to run arbitrary sets of passes. Not sure if this functionality is used, but it doesn't seem to be tested.
No need to initialize passes outside of constructing the PassBuilder with the new pass manager.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123425
Normalize some of the division and inequality expressions used,
which can improve performance. Also deduplicate some of the
normalization functionality throughout the Presburger library.
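As a hedged illustration of the kind of normalization involved (the concrete example is ours, not from the patch):
```latex
% Dividing out the common factor of a division expression:
\left\lfloor \frac{2x + 2}{4} \right\rfloor
  = \left\lfloor \frac{x + 1}{2} \right\rfloor
% Scaling an inequality by the gcd of its coefficients:
2x + 4y - 6 \ge 0 \;\Longleftrightarrow\; x + 2y - 3 \ge 0
```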
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123314
When making the subtract implementation non-recursive, tail calls were
implemented by incrementing the level but not pushing a frame, and returning
was implemented as returning to the level corresponding to the number of frames in the stack.
This is incorrect, as there could be a case where we tail-recurse at `level`,
and then recurse at `level + 1`, pushing a frame. However, because the previous
frame was missing, this new frame would be interpreted as corresponding to
`level` and not `level + 1`. Fix this by removing the special handling of tail
calls and just doing them as normal recursion, as this is the simplest correct
implementation and handling them specifically would be a premature optimization.
The impact of this bug is only on performance, as it can only lead to
unnecessary subtractions of the same disjuncts multiple times. As subtraction
is idempotent, and rationally empty disjuncts are always discarded, this
does not affect the output, so this patch does not include a regression test.
(This also does not affect termination.)
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123327
Rewrite tensor::ExtractSliceOp(vector::TransferWriteOp) to vector::TransferWriteOp(tensor::ExtractSliceOp) if the full slice is overwritten and inserted into another tensor. After this rewrite, the operations bufferize in-place since all of them work on the same %iter_arg slice.
For example:
```mlir
%0 = vector.transfer_write %vec, %init_tensor[%c0, %c0]
: vector<8x16xf32>, tensor<8x16xf32>
%1 = tensor.extract_slice %0[0, 0] [%sz0, %sz1] [1, 1]
: tensor<8x16xf32> to tensor<?x?xf32>
%r = tensor.insert_slice %1 into %iter_arg[%iv0, %iv1] [%sz0, %sz1] [1, 1]
: tensor<?x?xf32> into tensor<27x37xf32>
```
folds to
```mlir
%0 = tensor.extract_slice %iter_arg[%iv0, %iv1] [%sz0, %sz1] [1, 1]
: tensor<27x37xf32> to tensor<?x?xf32>
%1 = vector.transfer_write %vec, %0[%c0, %c0]
: vector<8x16xf32>, tensor<?x?xf32>
%r = tensor.insert_slice %1 into %iter_arg[%iv0, %iv1] [%sz0, %sz1] [1, 1]
: tensor<?x?xf32> into tensor<27x37xf32>
```
Reviewed By: nicolasvasilache, hanchung
Differential Revision: https://reviews.llvm.org/D123190
Insert a buffer copy unless the dims are guaranteed to be collapsible. In the verifier, accept collapses unless they are guaranteed to be non-collapsible.
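A hedged sketch of the memref case (shapes are illustrative):
```mlir
// The dynamic source layout may be non-contiguous, so the collapse is not
// guaranteed to be possible in place; bufferization inserts a copy first.
%0 = memref.collapse_shape %m [[0, 1]]
    : memref<?x?xf32> into memref<?xf32>
```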
Differential Revision: https://reviews.llvm.org/D123316
Insert a cast if the two tensors passed to `arith.select` (which have identical tensor types) end up with different layout maps after bufferization.
Differential Revision: https://reviews.llvm.org/D123321
This patch revamps the BranchOpInterface a bit and allows a proper implementation of what was previously `getMutableSuccessorOperands` for operations which internally produce some of the successor's block arguments. A motivating example for this would be an invoke op with an error handling path:
```
invoke %function(%0)
label ^success ^error(%1 : i32)
^error(%e: !error, %arg0 : i32):
...
```
The advantages of this are that any users of `BranchOpInterface` can still reason about the remaining block argument operands (such as `%1` in the example above), as well as make use of the modifying capabilities to add more operands, erase an operand, etc.
The way this patch implements that functionality is via a new class called `SuccessorOperands`, which is now returned by `getSuccessorOperands`. It basically contains an `unsigned` denoting how many operation-produced operands exist, as well as a `MutableOperandRange`, which holds the usual forwarded operands we are used to. The produced operands are assumed to be the first few block arguments, followed by the forwarded operands. The role of `SuccessorOperands` is to provide various utility functions to modify and query the successor arguments of a `BranchOpInterface`.
Differential Revision: https://reviews.llvm.org/D123062
Reland Note: Adds a fix to properly mark a commutative operation as folded if we change the order
of its operands. This was uncovered by the fact that we no longer re-process constants.
This avoids accidentally reversing the order of constants during successive
application, e.g. when running the canonicalizer. This helps reduce the number
of iterations, and also avoids unnecessary changes to input IR.
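A sketch of the symptom (our example): with two constants feeding a commutative op, successive canonicalizer runs could keep swapping the operands.
```mlir
%c1 = arith.constant 1 : i32
%c2 = arith.constant 2 : i32
// The operand order of this commutative op no longer flips on each run.
%sum = arith.addi %c1, %c2 : i32
```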
Fixes #51892
Differential Revision: https://reviews.llvm.org/D122692
In addition, fixed a small bug with padding incorrectly inferring the output shape for dynamic inputs in convolution.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D121872
This case is handled by neither the folding nor the canonicalization
patterns. The folding pattern cannot generate new broadcast ops,
so it should be handled by the canonicalization pattern.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D123307
Subtraction was previously implemented recursively. This refactors it to be
non-recursive to avoid issues with potential stack overflows.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123248
This patch enhances the CSE pass to deal with simple cases of duplicated
operations with MemoryEffects.
It allows the CSE pass to safely remove duplicate operations with the
MemoryEffects::Read effect that have no other side-effecting operations in
between; other MemoryEffects::Read operations are allowed in between.
The use case is pretty simple so far so we can build on top of it to add
more features.
This patch is also meant to avoid a dedicated CSE pass in FIR and was
brought together after discussion on https://reviews.llvm.org/D112711.
It does not currently cover the full range of use cases described in
https://reviews.llvm.org/D112711 but the idea is to gradually enhance
the MLIR CSE pass to handle common use cases that can be used by
other dialects.
This patch takes advantage of the new CSE capabilities in FIR.
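A minimal sketch of the simple case now covered:
```mlir
// Both loads only have the Read effect and no write to %m occurs in
// between, so the second load can be replaced by the first.
%0 = memref.load %m[%i] : memref<8xf32>
%1 = memref.load %m[%i] : memref<8xf32>
```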
Reviewed By: mehdi_amini, rriddle, schweitz
Differential Revision: https://reviews.llvm.org/D122801
This commit refactors the expected form of native constraint and rewrite
functions, and greatly reduces the complexity required of users when
defining a native function. Namely, this commit adds automatic processing
of the necessary PDLValue glue code, and allows users to define
constraint/rewrite functions using the C++ types that they actually want to
use.
As an example, let's look at a simple rewrite as it is defined today:
```
static void rewriteFn(PatternRewriter &rewriter, PDLResultList &results,
                      ArrayRef<PDLValue> args) {
  ValueRange operandValues = args[0].cast<ValueRange>();
  TypeRange typeValues = args[1].cast<TypeRange>();
  ...
  // Create an operation at some point and pass it back to PDL.
  Operation *op = rewriter.create<SomeOp>(...);
  results.push_back(op);
}
```
After this commit, that same rewrite could be defined as:
```
static Operation *rewriteFn(PatternRewriter &rewriter, ValueRange operandValues,
                            TypeRange typeValues) {
  ...
  // Create an operation at some point and pass it back to PDL.
  return rewriter.create<SomeOp>(...);
}
```
Differential Revision: https://reviews.llvm.org/D122086
Rationale:
Allocating the temporary buffers for access pattern expansion on the stack
(using alloca) is a bit too aggressive, since it easily runs out of stack space
for large enveloping tensor dimensions. This revision switches these buffers
to dynamic allocation with explicit alloc/dealloc pairs.
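In a sketch, the change amounts to:
```mlir
// Before: expansion buffers lived on the stack.
%buf0 = memref.alloca(%d) : memref<?xf64>
// After: explicit heap allocation, freed once the expansion completes.
%buf1 = memref.alloc(%d) : memref<?xf64>
memref.dealloc %buf1 : memref<?xf64>
```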
Reviewed By: bixia, wrengr
Differential Revision: https://reviews.llvm.org/D123253
Adds `mlirBlockDetach` to the CAPI to remove a block from its parent
region. Use it in the Python bindings to implement
`Block.append_to(region)`.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123165
Refactor the operation of subtraction by
- removing the usage of SimplexRollbackScopeExit since this
can't be used in the iterative version
- reducing the number of stack variables to make the
iterative version easier to follow
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123156
Support returning arbitrary tensors from functions, even those that are
not equivalent to a function argument. To that end, additional information
is gathered during the analysis phase; in particular, which function args
alias which return values.
Also fix bugs in the current implementation when returning equivalent
tensors. Various unit tests are added to ensure that we have better test
coverage.
Note: Returning non-equivalent tensors is only allowed when
allowReturnAllocs is enabled. This functionality is useful for unit
testing and compatibility with other bufferizations such as the sparse
compiler. This is also towards using ModuleBufferization as a
replacement for --func-bufferize.
Differential Revision: https://reviews.llvm.org/D119120
* Bufferize FuncOp bodies and boundaries in the same loop. This is in preparation of moving FuncOp bufferization into an external model implementation.
* As a side effect, stop bufferization earlier if there was an error. (Do not continue bufferizing; this produces fewer error messages.)
* Run equivalence analysis of CallOps before the main analysis. This is needed so that equivalence info is propagated properly.
Differential Revision: https://reviews.llvm.org/D123208
* Store bbArg indices instead of BlockArguments, so that args can be changed during bufferization.
* Use type aliases for better readability.
Differential Revision: https://reviews.llvm.org/D123191
https://reviews.llvm.org/D122641 introduced fixes to the ExpandShapeOp verifier
but also introduced an artificial layout limitation that prevents the consideration of transposed layouts.
This revision fixes the omissions and reimplements the logic using saturated
arithmetic, which is more idiomatic and avoids leaking internal implementation details.
Test cases are added for transposed layouts.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D122845
Returning `std::array<uint8_t, N>` from the hashing functions is better ergonomics than returning a `StringRef`:
* When returning `StringRef`, client code is "jumping through hoops" to do string manipulations instead of dealing with fixed array of bytes directly, which is more natural
* Returning `std::array<uint8_t, N>` avoids the need for the hasher classes to keep a field just for the purpose of wrapping it and returning it as a `StringRef`
As part of this patch also:
* Introduce `TruncatedBLAKE3` which is useful for using BLAKE3 as the hasher type for `HashBuilder` with non-default hash sizes.
* Make `MD5Result` inherit from `std::array<uint8_t, 16>` which improves & simplifies its API.
Differential Revision: https://reviews.llvm.org/D123100
This avoids a rather big bug where we were reserving
dense space for the ptr/idx arrays in the first sparse dimension.
For example, using CSR for a 140874 x 140874 matrix with
3977139 nonzeros would reserve the full 19845483876 entries.
This revision fixes this for now, but we need to revisit
the reservation heuristic to make this better.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D123166