The reconciliation pass now supports chains of casts, rather than limiting reconciliation to pairs of unrealized casts.
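As a minimal sketch (the types here are hypothetical), a chain that round-trips back to the producer's type can now be folded as a whole:
```
// %0 round-trips through two intermediate types and back.
%1 = builtin.unrealized_conversion_cast %0 : i64 to f64
%2 = builtin.unrealized_conversion_cast %1 : f64 to vector<2xf32>
%3 = builtin.unrealized_conversion_cast %2 : vector<2xf32> to i64
// All uses of %3 can be replaced with %0, erasing the whole chain.
```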
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D130711
This patch adds a constant folder for AtanOp; it only supports single- and double-precision floating-point values.
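A minimal sketch of the effect (the values are illustrative):
```
// Before folding:
%c = arith.constant 1.0 : f64
%0 = math.atan %c : f64
// After folding, %0 becomes a constant holding atan(1.0), i.e. pi/4:
%0 = arith.constant 0.78539816339744828 : f64
```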
Differential Revision: https://reviews.llvm.org/D130983
In some cases, the recursion will grow the `visited` hash table and
invalidate the cached iterator (caught with ASan).
Differential Revision: https://reviews.llvm.org/D131027
In the ROCm runtime (and probably CUDA as well), all kernel arguments
are aligned. Therefore, enable using bare pointers for memref
arguments to kernels when these memrefs have static shape and a
trivial layout.
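As an illustration (the module and kernel names are hypothetical), a statically shaped memref argument such as the one below can now be passed as a single bare pointer:
```
gpu.module @kernels {
  // Static shape, trivial layout: eligible for bare-pointer lowering.
  gpu.func @kernel(%arg0: memref<16x16xf32>) kernel {
    gpu.return
  }
}
// %arg0 lowers to one pointer instead of the full memref descriptor
// (allocated and aligned pointers, offset, sizes, and strides).
```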
This is a substantial optimization for launching kernels that use
memrefs with known static sizes, since the kernel launch packet no
longer includes information already known to the kernel. This can
enable packing the kernel launch arguments into the launch packet
itself instead of allocating an entire separate structure to hold
unneeded memref information.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D130716
This rewriting was no longer functional after the recent migration to one-shot
bufferization. This revision makes it work again, with a CHECK test to ensure
fusion happens. Note that the functionality is also exercised by several
integration tests.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D130996
The generic allocation and deallocation instructions, which are optionally used during the MemRef -> LLVM conversion, should have names specifically bound to their origin, that is, the conversion pass itself.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D130588
This patch adds a constant folder for TanhOp; it only supports single- and double-precision floating-point values.
Differential Revision: https://reviews.llvm.org/D130960
This patch adds a constant folder for TanOp; it only supports single- and double-precision floating-point values.
Differential Revision: https://reviews.llvm.org/D130873
Previously, DenseArrayAttr used VectorType for its shaped type.
VectorType is problematic for arrays because it doesn't support zero
dimensions, meaning that an empty array would have `vector<i32>` as its
type. ElementsAttr would think that an empty dense array is size 1, not
0. This patch switches over to TensorType, which does support zero
dimensions.
Fixes #56860
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D130921
Adds an optional attribute to support tensor cores on the F32 datatype by lowering to `mma.sync` with TF32 operands. Since TF32 is not a native datatype in LLVM, we add `tf32Enabled` as an attribute so that the IR is aware of the `MmaSyncOp` datatype. Additionally, this patch adds placeholders for nvgpu-to-nvgpu transformations targeting the higher-precision tf32x3.
For mma.sync on f32 input using tensor cores there are two possibilities:
(a) tf32 (1 `mma.sync` per warp-level matrix-multiply-accumulate)
(b) tf32x3 (3 `mma.sync` per warp-level matrix-multiply-accumulate)
Typically, tf32 tensor core acceleration comes at a cost in accuracy from missing precision bits: while f32 has 23 precision bits, tf32 has only 10. tf32x3 aims to recover the precision bits by splitting each operand into two tf32 values and issuing three `mma.sync` tensor core operations.
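A sketch of variant (a); the operand shapes and names are illustrative:
```
// One warp-level m16n8k4 multiply-accumulate on f32 data, executed on
// tensor cores with TF32 operands because of the tf32Enabled attribute.
%d = nvgpu.mma.sync(%a, %b, %c) {mmaShape = [16, 8, 4], tf32Enabled}
    : (vector<2x1xf32>, vector<1x1xf32>, vector<2x2xf32>) -> vector<2x2xf32>
```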
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D130294
This attribute is technical debt from the early stages of MLIR, before
ElementsAttr was an interface and when it was more difficult for
dialects to define their own types of attributes. At present it isn't
used at all in-tree (aside from being convenient for eliding other
ElementsAttrs), and it has seen little to no evolution in the past three years.
Differential Revision: https://reviews.llvm.org/D129917
This attribute is intended to cover the current set of use cases that abuse
DenseElementsAttr, e.g. when the data is large. Using resources for large
data is one of the major reasons why they were added: they can be
deallocated mid-compilation, they support a wide variety of data origins
(e.g., heap-allocated, mmap'd, etc.), they can support mutation, and more.
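As a sketch of how this can look in the textual format (the op name and blob contents are hypothetical; the leading word of the blob encodes its alignment):
```
"some.op"() {values = dense_resource<some_blob> : tensor<4xi32>} : () -> ()

{-#
  dialect_resources: {
    builtin: {
      some_blob: "0x0800000001000000020000000300000004000000"
    }
  }
#-}
```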
I considered at length not having a builtin variant of this, and instead
having multiple versions of this attribute for dialects that are interested,
but they all boiled down to the exact same attribute definition. Given the
generality of this attribute, it feels more aligned to keep it next to DenseArrayAttr
(given that DenseArrayAttr covers the "small" case, and DenseResourceElementsAttr
covers the "large" case). The underlying infra used to build this attribute is
general, and having a builtin attribute doesn't preclude users from defining
their own when it makes sense (they can even share a blob manager with the
builtin dialect to avoid data duplication).
Differential Revision: https://reviews.llvm.org/D130022
The DialectResourceBlobManager class provides functionality for managing resource blobs
in a generic, dialect-agnostic fashion. In addition to this class, a dialect interface and custom
resource handle are provided to simplify referencing and interacting with the manager. These
classes intend to simplify the work required for dialects that want to manage resource blobs
during compilation, such as for large elements attrs. The old manager for the resource example
in the test dialect has been updated to use this, which provides a cleaner and more consistent API.
This commit also adds new HeapAsmResourceBlob and ImmortalAsmResourceBlob to simplify
creating resource blobs in common scenarios.
Differential Revision: https://reviews.llvm.org/D130021
Attempting to apply the range analysis to real code revealed that
trunci wasn't correctly handling the case where truncation would
create wider ranges - for example, if we truncate [255, 257] : i16 to
i8, the result can be 255, 0, or 1, which isn't a contiguous range of
values.
The previous implementation would naively map this to [255, 1], which
would cause issues with unsigned ranges and unification.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D130501
This is the follow-up to https://reviews.llvm.org/D130730; it goes through upstream code and removes the creation of constant values in favour of using constant indices in GEP directly. This leads to less, more readable code and more compact IR as well.
Differential Revision: https://reviews.llvm.org/D130731
Scope of changes:
1) Added a new function to generate loop versioning
2) Added support for the if clause to the applySimd function
3) Added tests which confirm that lowering is successful
If ifCond is specified, the collapsed loop is duplicated and an if branch
is added. The duplicated loop is executed if the simd ifCond evaluates to false.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D129368
Signed-off-by: Dominik Adamski <dominik.adamski@amd.com>
The "optimization" would replace the AffineMap for an empty shape with a 0 to represent its indexing (stride * dimension) logic. Meanwhile other pieces of core logic (such as getStridesAndOffset and makeStridedLinearLayoutMap) require strides for all dimensions to ensure no aliasing can occur which would occur if the shape was not empty. For now, this optimization is removed as different pieces of core types disagree on this, so the optimization should be caller supplied or should be consistent throughout the infrastructure.
Differential Revision: https://reviews.llvm.org/D130772
This patch adds a constant folder for ExpM1Op; it only supports single- and double-precision floating-point values.
Differential Revision: https://reviews.llvm.org/D130567
This patch removes the `type` field from `Attribute` along with the
`Attribute::getType` accessor.
Going forward, this means that attributes in MLIR will no longer have
types as a first-class concept. This patch lays the groundwork to
incrementally remove or refactor code that relies on generic attributes
being typed. The immediate impact will be on attributes that rely on
`Attribute` containing a type, such as `IntegerAttr`,
`DenseElementsAttr`, and `ml_program::ExternAttr`, which will now need
to define a type parameter on their storage classes. This will save
memory as all other attribute kinds will no longer contain a type.
Moreover, it will not be possible to generically query the type of an
attribute directly. This patch provides an attribute interface
`TypedAttr` that implements only one method, `getType`, which can be
used to generically query the types of attributes that implement the
interface. This interface can be used to retain the concept of a "typed
attribute". The ODS-generated accessor for a `type` parameter
automatically implements this method.
Next steps will be to refactor the assembly formats of certain operations
that rely on `parseAttribute(type)` and `printAttributeWithoutType` to
remove special handling of type elision until `type` can be removed from
the dialect parsing hook entirely; and incrementally remove uses of
`TypedAttr`.
Reviewed By: lattner, rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D130092
When dead-code analysis is run at the scope of a function, call ops to
other functions at the same level were being marked as unreachable,
since the analysis optimistically assumes the call op to have no known
predecessors and that all predecessors are known, but the callee would
never get visited.
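An illustrative reproducer (the function names are hypothetical):
```
// With the analysis scoped to @caller, @callee is not nested under the
// analysis root and never gets visited, so the call used to be marked
// unreachable.
func.func @callee(%arg0: i32) -> i32 {
  return %arg0 : i32
}
func.func @caller() -> i32 {
  %c1 = arith.constant 1 : i32
  %0 = func.call @callee(%c1) : (i32) -> i32
  return %0 : i32
}
```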
This patch fixes the bug by checking if a referenced function is above
the top-level op of the analysis, and is thus considered an external
callable.
Fixes #56830
Reviewed By: zero9178
Differential Revision: https://reviews.llvm.org/D130829
Added a commutativity utility pattern and a function to populate it. The pattern sorts the operands of an op in ascending order of the "key" associated with each operand iff the op is commutative. This sorting is stable.
The function is intended to be used inside passes to simplify the matching of commutative operations. After the application of the above-mentioned pattern, the commutative operands have a deterministic order in which they occur in an op, so the matching of large DAGs becomes much simpler, i.e., it requires far fewer checks to be written by a user in their pattern-matching function.
The "key" associated with an operand is the list of the "AncestorKeys" associated with the ancestors of this operand, in a breadth-first order.
The operand of any op is produced by a set of ops and block arguments. Each of these ops and block arguments is called an "ancestor" of this operand.
Now, the "AncestorKey" associated with:
1. A block argument is `{type: BLOCK_ARGUMENT, opName: ""}`.
2. A non-constant-like op, for example, `arith.addi`, is `{type: NON_CONSTANT_OP, opName: "arith.addi"}`.
3. A constant-like op, for example, `arith.constant`, is `{type: CONSTANT_OP, opName: "arith.constant"}`.
So, if an operand, say `A`, was produced as follows:
```
`<block argument>` `<block argument>`
\ /
\ /
`arith.subi` `arith.constant`
\ /
`arith.addi`
|
returns `A`
```
Then, the block arguments and operations present in the backward slice of `A`, in the breadth-first order are:
`arith.addi`, `arith.subi`, `arith.constant`, `<block argument>`, and `<block argument>`.
Thus, the "key" associated with operand `A` is:
```
{
{type: NON_CONSTANT_OP, opName: "arith.addi"},
{type: NON_CONSTANT_OP, opName: "arith.subi"},
{type: CONSTANT_OP, opName: "arith.constant"},
{type: BLOCK_ARGUMENT, opName: ""},
{type: BLOCK_ARGUMENT, opName: ""}
}
```
Now, if "keyA" is the key associated with operand `A` and "keyB" is the key associated with operand `B`, then:
"keyA" < "keyB" iff:
1. In the first unequal pair of corresponding AncestorKeys, the AncestorKey in operand `A` is smaller, or,
2. Both the AncestorKeys in every pair are the same and the size of operand `A`'s "key" is smaller.
AncestorKeys of type `BLOCK_ARGUMENT` are considered the smallest, those of type `CONSTANT_OP`, the largest, and `NON_CONSTANT_OP` types come in between. Within the types `NON_CONSTANT_OP` and `CONSTANT_OP`, the smaller ones are the ones with smaller op names (lexicographically).
---
Some examples of such a sorting:
Assume that the sorting is being applied to `foo.commutative`, which is a commutative op.
Example 1:
> %1 = foo.const 0
> %2 = foo.mul <block argument>, <block argument>
> %3 = foo.commutative %1, %2
Here,
1. The key associated with %1 is:
```
{
{CONSTANT_OP, "foo.const"}
}
```
2. The key associated with %2 is:
```
{
{NON_CONSTANT_OP, "foo.mul"},
{BLOCK_ARGUMENT, ""},
{BLOCK_ARGUMENT, ""}
}
```
The key of %2 < the key of %1
Thus, the sorted `foo.commutative` is:
> %3 = foo.commutative %2, %1
Example 2:
> %1 = foo.const 0
> %2 = foo.mul <block argument>, <block argument>
> %3 = foo.mul %2, %1
> %4 = foo.add %2, %1
> %5 = foo.commutative %1, %2, %3, %4
Here,
1. The key associated with %1 is:
```
{
{CONSTANT_OP, "foo.const"}
}
```
2. The key associated with %2 is:
```
{
{NON_CONSTANT_OP, "foo.mul"},
{BLOCK_ARGUMENT, ""},
{BLOCK_ARGUMENT, ""}
}
```
3. The key associated with %3 is:
```
{
{NON_CONSTANT_OP, "foo.mul"},
{NON_CONSTANT_OP, "foo.mul"},
{CONSTANT_OP, "foo.const"},
{BLOCK_ARGUMENT, ""},
{BLOCK_ARGUMENT, ""}
}
```
4. The key associated with %4 is:
```
{
{NON_CONSTANT_OP, "foo.add"},
{NON_CONSTANT_OP, "foo.mul"},
{CONSTANT_OP, "foo.const"},
{BLOCK_ARGUMENT, ""},
{BLOCK_ARGUMENT, ""}
}
```
Thus, the sorted `foo.commutative` is:
> %5 = foo.commutative %4, %3, %2, %1
Signed-off-by: Srishti Srivastava <srishti.srivastava@polymagelabs.com>
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D124750
Fix the hardcoded check for `FuncOp` in the `getCommonBlock` utility: the
check should have been for an op that starts an affine scope. The
incorrect block returned in turn causes dependence analysis to function
incorrectly.
This change allows affine store-load forwarding to work correctly inside
any ops that start an affine scope.
Reviewed By: ftynse, dcaballe
Differential Revision: https://reviews.llvm.org/D130749
The implementation and API of the GEP op have gotten a bit convoluted over time. Issues with it are:
* Misleading naming: `indices` actually only contains the dynamic indices, not all of them. To get the number of indices you need to query the size of `structIndices`.
* Very difficult to iterate over all indices properly: one had to iterate over `structIndices`, check whether an element is the magic constant `kDynamicIndex`, and, if it is, access the next value in `indices`, etc.
* Inconvenient to build: one either has to create lots of constant ops for every index or use an odd split of passing both a `ValueRange` and an `ArrayRef<int32_t>` filled with `kDynamicIndex` at the correct places.
* Implementation doing verification in the build method
and more.
This patch attempts to address all these issues via convenience classes and reworking the way GEP Op works:
* Adds a `GEPArg` class which is a sum type of an `int32_t` and a `Value`, used to provide a single, convenient `ArrayRef<GEPArg>` in the builders instead of the previous `ValueRange` + `ArrayRef<int32_t>` builders.
* Adds `GEPIndicesAdapter`, a class for easy random access and iteration over the indices of a GEP. It is generic and flexible enough to instead return, e.g., a corresponding `Attribute` for an operand inside `fold`.
* Rename `structIndices` to `rawConstantIndices` and `indices` to `dynamicIndices`: `rawConstantIndices` signifies one shouldn't access it directly as it is encoded, and `dynamicIndices` is more accurate and also frees up the `indices` name.
* Add `getIndices` returning a `GEPIndicesAdapter` to easily iterate over the GEP Ops indices.
* Move the verification/asserts out of the build method and into the `verify` method emitting op error messages.
* Add convenient builder methods making use of `GEPArg`.
* Add a canonicalizer turning dynamic indices with constant values into constant indices, to have a canonical representation (sketched below).
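A sketch of the new canonicalization (the pointer types are illustrative):
```
// Before: a dynamic index that is trivially constant.
%c1 = llvm.mlir.constant(1 : i32) : i32
%p = llvm.getelementptr %base[%c1] : (!llvm.ptr<f32>, i32) -> !llvm.ptr<f32>
// After: the index is folded into the constant-index form.
%p = llvm.getelementptr %base[1] : (!llvm.ptr<f32>) -> !llvm.ptr<f32>
```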
The only breaking change is for any users building GEPOps that have so far used the old `ValueRange` + `ArrayRef<int32_t>` builder as well as those using the generic syntax.
Another follow-up patch then goes through upstream code and makes use of the new `ArrayRef<GEPArg>` builder to remove a lot of code building constants for GEP indices.
Differential Revision: https://reviews.llvm.org/D130730
Adding a complex value whose real and imaginary parts are both 0 can be folded away.
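A minimal sketch (the element type is illustrative):
```
// %zero is 0.0 + 0.0i, so the addition folds to the other operand.
%zero = complex.constant [0.0 : f32, 0.0 : f32] : complex<f32>
%0 = complex.add %z, %zero : complex<f32>
// All uses of %0 can be replaced with %z.
```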
NOTE: this type of canonicalization can be written in an easier, tidier format using `complex.number` once the constant op supports custom attributes.
Differential Revision: https://reviews.llvm.org/D130748
A group of functions in the Affine dialect provides a mechanism for
building folded-by-construction operations. These functions used to
accept a `RewriterBase` reference because they may need to erase the
operations that were folded and notify the rewriter when called from
rewrite patterns. Adopt a different approach: postpone the builder
notification of the op creation until we are certain that the op will
not be folded away. This removes the need to notify the rewriter about
op deletion following op construction in case of successful folding, and
removes a bunch of one-off `IRRewriter` instances in transform code that
may mess up insertion points.
Reviewed By: springerm, mravishankar
Differential Revision: https://reviews.llvm.org/D130616
We can canonicalize two consecutive complex.conj ops by simply removing both conjugate operations.
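For example (the element type is illustrative):
```
// Conjugation is an involution: conj(conj(%z)) == %z.
%0 = complex.conj %z : complex<f32>
%1 = complex.conj %0 : complex<f32>
// %1 folds to %z; %0 is erased if it has no other uses.
```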
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D130684
It is more useful to use ComplexType as the type of the attribute than to
use the element type as the attribute type. This means that when using this
attribute in complex::ConstantOp, we just need to check whether
the types match.
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D130703
This is used to fix a bug in SymbolTable::replaceAllSymbolUses where we replaced symbols that
we shouldn't.
Differential Revision: https://reviews.llvm.org/D130693
The current implementation of decomposition of Linalg operations wouldn't
work if the `outs` operand values were used within the body of the
operation. Relax this restriction. This potentially sets the stage for
decomposing ops with reduction iterator types (but is not done here
since it requires more study).
Differential Revision: https://reviews.llvm.org/D130527
While the tiling interface provides a mechanism for operations to be
tiled into a tiled version of the op (or another op at the same level of
abstraction), the `generateScalarImplementation` method added here is
the "exit point" after all transformations have been done. Ops that
implement this method are expected to generate IR that is directly
lowerable to backend dialects like the LLVM or SPIR-V dialects.
Differential Revision: https://reviews.llvm.org/D130612
This supports lowering from the parse tree to MLIR and translation from
MLIR to LLVM IR using OMPIRBuilder for the OpenMP simdlen clause on the
SIMD construct.
Reviewed By: shraiysh, peixin, arnamoy10
Differential Revision: https://reviews.llvm.org/D130195
This commit folds a `tensor.cast` op into a `tensor.collapse_shape` op
when the following two conditions are met:
1. the `tensor.collapse_shape` op consumes the result of the `tensor.cast` op, and
2. the `tensor.cast` op casts to a more dynamic version of the source tensor.
This is added as a canonicalization pattern on the `tensor.collapse_shape` op.
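A sketch of the pattern (the shapes are illustrative):
```
// Before: the cast makes the source more dynamic before the collapse.
%0 = tensor.cast %t : tensor<4x8xf32> to tensor<?x?xf32>
%1 = tensor.collapse_shape %0 [[0, 1]] : tensor<?x?xf32> into tensor<?xf32>
// After: the collapse consumes the static source directly, with a cast
// back to the original result type.
%2 = tensor.collapse_shape %t [[0, 1]] : tensor<4x8xf32> into tensor<32xf32>
%3 = tensor.cast %2 : tensor<32xf32> to tensor<?xf32>
```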
Signed-Off-By: Gaurav Shukla <gaurav@nod-labs.com>
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D130650
* https://discourse.llvm.org/t/rfc-removing-the-quant-dialect/3643/8
* Removes most ops. Leaves the casts, given the final comment (more can be removed in a follow-up).
* There are a few uses in Tosa keeping some of the utilities alive. In a follow-up, I will probably elect to just move simplified versions of them into Tosa itself rather than keeping this quasi-library dependency.
Differential Revision: https://reviews.llvm.org/D120204
This commit extends UnifyAliasedResourcePass to handle the case
where aliased resources have different vector sizes. (It still
requires all scalar types to be of the same bitwidth.) This is
effectively reusing the code for handling different-bitwidth
scalar types.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D130671
This commit fixes the spv.CompositeConstruct assembly to list operand
types, to enable vector construction out of smaller vectors. Validation
is also fixed to properly check the cases of vector construction.
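For example, with operand types listed in the assembly, a construction like the following (illustrative) becomes expressible:
```
// A 4-element vector built from two 2-element vectors.
%0 = spv.CompositeConstruct %a, %b : (vector<2xf32>, vector<2xf32>) -> vector<4xf32>
```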
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D130669
This patch adds canonicalization conditions for omp.atomic.update,
eliminating it when it becomes just a write or a no-op due to other
changes during canonicalization.
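For instance, an update region that yields its argument unchanged is a no-op and can be erased (a minimal sketch; the address type is illustrative):
```
// %x is only "updated" to its current value, so the op can be removed.
omp.atomic.update %x : memref<i32> {
^bb0(%old: i32):
  omp.yield(%old : i32)
}
```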
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D126531
The sparse compiler failed on the provided test (where the sparse kernel is nested in an scf structural operator).
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D130609