OptionalAttr does the wrapping with Optional explicitly in ODS, while a
default-valued optional attribute doesn't (it follows DefaultValuedAttr
in this behavior and is meant as a drop-in replacement when updating old
behavior). Update when the non-null check is emitted to account for
this. Also add a variant for optional default-valued string attributes
to provide the same convenience as default-valued string attributes.
This fixes some fallout from D131282. Currently, add_tablegen() adds the tablegen target to LLVM_EXPORTS and associates the install with LLVMExports. For non-standalone builds, this means that you end up with mlir-tblgen and clang-tblgen in LLVMExports.
However, these projects should instead be using MLIR_EXPORTS/MLIRTargets and CLANG_EXPORTS/ClangTargets. To fix this, add an extra EXPORT option and make use of get_target_export_arg() to create the correct export argument.
Reviewed By: ashay-github
Differential Revision: https://reviews.llvm.org/D131565
This change adds a helper function for computing a topological sorting of a list of ops. For example, this can be useful in transforms where a subset of ops should be cloned without dominance errors.
The analysis reuses the existing implementation in TopologicalSortUtils.cpp.
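As a hedged sketch of a typical use when cloning ops, assuming a helper roughly named `computeTopologicalSorting` living in a TopologicalSortUtils header (both the name and the header location are assumptions here, not quoted from the patch):
```
// Sketch only: the sorting helper and its header are assumed for illustration.
#include "mlir/IR/BlockAndValueMapping.h"
#include "mlir/IR/Builders.h"
#include "mlir/Transforms/TopologicalSortUtils.h" // assumed location
#include "llvm/ADT/SmallVector.h"

using namespace mlir;

// Clone `opsToClone` at the builder's insertion point. Sorting the list
// topologically first guarantees every producer is cloned before its users,
// so the clones never violate dominance.
static void cloneTopologically(ArrayRef<Operation *> opsToClone,
                               OpBuilder &builder,
                               BlockAndValueMapping &mapping) {
  SmallVector<Operation *> sorted(opsToClone.begin(), opsToClone.end());
  computeTopologicalSorting(sorted); // assumed in-place sorting helper
  for (Operation *op : sorted)
    builder.clone(*op, mapping);
}
```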
Differential Revision: https://reviews.llvm.org/D131669
An optional attribute dictionary before a region in an assembly format
is a potential format ambiguity because they both start with `{`.
Fixes #53077
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131636
This patch adds support for string literals as `custom` directive
arguments. This can be useful for re-using custom parsers and printers
when arguments have a known value. For example:
```
ParseResult parseTypedAttr(AsmParser &parser, Attribute &attr, Type type) {
  return parser.parseAttribute(attr, type);
}
void printTypedAttr(AsmPrinter &printer, Attribute attr, Type type) {
  printer.printAttributeWithoutType(attr);
}
```
And in TableGen:
```
def FooOp : ... {
  let arguments = (ins AnyAttr:$a);
  let assemblyFormat = [{ custom<TypedAttr>($a, "$_builder.getI1Type()")
                          attr-dict }];
}
def BarOp : ... {
  let arguments = (ins AnyAttr:$a);
  let assemblyFormat = [{ custom<TypedAttr>($a, "$_builder.getIndexType()")
                          attr-dict }];
}
```
This avoids writing two separate sets of custom parsers and printers.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131603
This reland includes changes to the Python bindings.
Switch variadic operand and result segment size attributes to use the
dense i32 array. Dense integer arrays were introduced primarily to
represent index lists. They are a better fit for segment sizes than
dense elements attrs.
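For illustration, a minimal sketch of building such a segment-size attribute through the dense i32 array builder API (the attribute name shown is the conventional one, not something introduced by this patch):
```
#include "mlir/IR/Builders.h"
#include "mlir/IR/Operation.h"

using namespace mlir;

// Sketch: record that an op's operands split into groups of 1, 2, and 0
// operands as a DenseI32ArrayAttr rather than a vector<i32> elements attr.
void setOperandSegments(Operation *op, Builder &builder) {
  op->setAttr("operand_segment_sizes",
              builder.getDenseI32ArrayAttr({1, 2, 0}));
}
```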
Depends on D131801
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131803
Switch variadic operand and result segment size attributes to use the
dense i32 array. Dense integer arrays were introduced primarily to
represent index lists. They are a better fit for segment sizes than
dense elements attrs.
Depends on D131738
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D131702
When emitting the declarations for interface methods defined in ODS,
also emit their descriptions as C++ comments. This makes the
documentation accessible to C++ tooling such as IDEs, which offers better
usability than reading it from the .td files or the website.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D130478
This commit moves MemRef memory space to SPIR-V storage class
conversion out of the main SPIR-V type converter. Now the mapping
should happen as a preliminary step before performing the final
conversion to SPIR-V. Flows are expected to write their own memory
space mappings like the `MapMemRefStorageClassPass` to handle
memory space mappings according to their needs.
This is needed because SPIR-V is serving multiple client APIs,
including Vulkan and OpenCL. Different client APIs might want
to use different storage classes for buffers in a particular
memory space, e.g., `StorageBuffer` for Vulkan vs. `CrossWorkgroup`
for OpenCL when converting the default 0 memory space. Hardcoding
a specific mapping makes that hard. While it's possible to embed
selection logic further inside the main type converter, it would
make the main type converter even more complicated. So it's better to
separate the concerns, as mapping the memory space is really
concretizing the meaning of those numeric memory spaces in the
particular context of SPIR-V lowering.
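As a rough sketch of the intended flow (the pass factory name below is an assumption for illustration, derived from the pass named above, and its declaration/header is omitted):
```
#include "mlir/Pass/PassManager.h"

using namespace mlir;

// Sketch: run the client-API-specific memory-space -> storage-class mapping
// as its own preliminary pass, then the regular SPIR-V conversion passes.
void addVulkanStyleSPIRVLowering(OpPassManager &pm) {
  // E.g. map the default memory space 0 to StorageBuffer for Vulkan.
  pm.addPass(createMapMemRefStorageClassPass()); // assumed factory name
  // ... followed by the usual *-to-SPIR-V conversion passes using the main
  // type converter, which no longer hardcodes a memory space mapping.
}
```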
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D131410
Previously we were using IntegerAttr to back all SPIR-V enum
attributes. Therefore all such attributes were printed like
IntegerAttr in the IR, which is barely readable and breaks
round-trippability of the IR. This commit changes to use
`EnumAttr` as the base directly so that we can have separate
attribute definitions and better IR printing.
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D131311
Prior to this patch, the `add_tablegen()` macro in
llvm/cmake/modules/TableGen.cmake added the install rule only if
`project` matched `LLVM` or `MLIR`. This patch adds an optional
`DESTINATION` argument, which, if non-empty, decides whether (and where)
to install the tablegen tool, thus eliminating the need for
project-specific overrides. This patch also updates all other
invocations of the `add_tablegen()` macro.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D131282
This has previously been done for `mlir-opt` and `mlir-reduce`, and roughly the same approach is taken here.
The use case for having a separate library is that it is easier for downstream projects to build custom TableGen backends/executables that work on top of the utilities defined in `mlir/TableGen`.
The customization point here is the same one as for any upstream TableGen backends: One can add a new generator by simply creating a global instance of `mlir::GenRegistration`.
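For example, a downstream backend can hook itself in with a global registration; the generator name and body below are made up for illustration:
```
#include "mlir/TableGen/GenInfo.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/TableGen/Record.h"

// A made-up backend that simply echoes the input file name; returning false
// signals success, following the convention of the upstream generators.
static bool emitMyDocs(const llvm::RecordKeeper &records,
                       llvm::raw_ostream &os) {
  os << "// Generated from " << records.getInputFilename() << "\n";
  return false;
}

// Global instance: registers the backend under the `-gen-my-docs` flag.
static mlir::GenRegistration genMyDocs("gen-my-docs",
                                       "Emit my custom documentation",
                                       emitMyDocs);
```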
Differential Revision: https://reviews.llvm.org/D131112
Using if (TARGET ${LLVM_NATIVE_ARCH}) only works if MLIR is built
together with LLVM, but not for standalone builds of MLIR. The
correct way to check this is
if (${LLVM_NATIVE_ARCH} IN_LIST LLVM_TARGETS_TO_BUILD), as the
LLVM build system exports LLVM_TARGETS_TO_BUILD.
To avoid repeating the same check many times, add an
MLIR_ENABLE_EXECUTION_ENGINE variable.
Differential Revision: https://reviews.llvm.org/D131071
Adds an optional attribute to support tensor cores on the F32 datatype by lowering to `mma.sync` with TF32 operands. Since TF32 is not a native datatype in LLVM, we are adding `tf32Enabled` as an attribute to allow the IR to be aware of the `MmaSyncOp` datatype. Additionally, this patch adds placeholders for nvgpu-to-nvgpu transformations targeting the higher-precision tf32x3 variant.
For mma.sync on f32 input using tensor cores there are two possibilities:
(a) tf32 (1 `mma.sync` per warp-level matrix-multiply-accumulate)
(b) tf32x3 (3 `mma.sync` per warp-level matrix-multiply-accumulate)
Typically, tf32 tensor core acceleration comes at a cost of accuracy from missing precision bits. While f32 has 23 precision bits, tf32 has only 10 precision bits. tf32x3 aims to recover the precision bits by splitting each operand into two tf32 values and issuing three `mma.sync` tensor core operations.
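As a scalar illustration of the tf32x3 idea (this is not the actual NVGPU lowering, just the arithmetic it relies on; the helper names are made up):
```
#include <cstdint>
#include <cstring>

// Round an f32 value to tf32 precision by clearing the low 13 mantissa bits,
// keeping the sign, the 8 exponent bits, and the top 10 mantissa bits.
static float roundToTF32(float x) {
  uint32_t bits;
  std::memcpy(&bits, &x, sizeof(bits));
  bits &= 0xFFFFE000u;
  float result;
  std::memcpy(&result, &bits, sizeof(result));
  return result;
}

// tf32x3: approximate an f32 multiply with three tf32 multiplies. Each
// operand is split into a "big" tf32 part plus a "small" remainder; the
// small * small term is dropped, which is why three (not four) mma.sync
// operations per multiply-accumulate suffice.
static float mulTF32x3(float a, float b) {
  float aBig = roundToTF32(a), aSmall = roundToTF32(a - aBig);
  float bBig = roundToTF32(b), bSmall = roundToTF32(b - bBig);
  return aBig * bBig + aBig * bSmall + aSmall * bBig;
}
```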
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D130294
This patch removes the `type` field from `Attribute` along with the
`Attribute::getType` accessor.
Going forward, this means that attributes in MLIR will no longer have
types as a first-class concept. This patch lays the groundwork to
incrementally remove or refactor code that relies on generic attributes
being typed. The immediate impact will be on attributes that rely on
`Attribute` containing a type, such as `IntegerAttr`,
`DenseElementsAttr`, and `ml_program::ExternAttr`, which will now need
to define a type parameter on their storage classes. This will save
memory as all other attribute kinds will no longer contain a type.
Moreover, it will not be possible to generically query the type of an
attribute directly. This patch provides an attribute interface
`TypedAttr` that implements only one method, `getType`, which can be
used to generically query the types of attributes that implement the
interface. This interface can be used to retain the concept of a "typed
attribute". The ODS-generated accessor for a `type` parameter
automatically implements this method.
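For example, generic code that previously called `attr.getType()` would now query the interface:
```
#include "mlir/IR/BuiltinAttributeInterfaces.h"

using namespace mlir;

// Returns the type of `attr` if it implements TypedAttr, or a null Type
// otherwise; there is no longer a generic Attribute::getType to call.
static Type getTypeIfTyped(Attribute attr) {
  if (auto typed = attr.dyn_cast<TypedAttr>())
    return typed.getType();
  return Type();
}
```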
Next steps will be to refactor the assembly formats of certain operations
that rely on `parseAttribute(type)` and `printAttributeWithoutType` to
remove special handling of type elision until `type` can be removed from
the dialect parsing hook entirely; and incrementally remove uses of
`TypedAttr`.
Reviewed By: lattner, rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D130092
Added a commutativity utility pattern and a function to populate it. The pattern sorts the operands of an op in ascending order of the "key" associated with each operand iff the op is commutative. This sorting is stable.
The function is intended to be used inside passes to simplify the matching of commutative operations. After the application of the above-mentioned pattern, since the commutative operands now have a deterministic order in which they occur in an op, the matching of large DAGs becomes much simpler, i.e., it requires far fewer checks to be written by a user in their pattern matching function.
The "key" associated with an operand is the list of the "AncestorKeys" associated with the ancestors of this operand, in a breadth-first order.
The operand of any op is produced by a set of ops and block arguments. Each of these ops and block arguments is called an "ancestor" of this operand.
Now, the "AncestorKey" associated with:
1. A block argument is `{type: BLOCK_ARGUMENT, opName: ""}`.
2. A non-constant-like op, for example, `arith.addi`, is `{type: NON_CONSTANT_OP, opName: "arith.addi"}`.
3. A constant-like op, for example, `arith.constant`, is `{type: CONSTANT_OP, opName: "arith.constant"}`.
So, if an operand, say `A`, was produced as follows:
```
`<block argument>`   `<block argument>`
          \             /
           \           /
          `arith.subi`       `arith.constant`
                   \             /
                   `arith.addi`
                        |
                   returns `A`
```
Then, the block arguments and operations present in the backward slice of `A`, in the breadth-first order are:
`arith.addi`, `arith.subi`, `arith.constant`, `<block argument>`, and `<block argument>`.
Thus, the "key" associated with operand `A` is:
```
{
{type: NON_CONSTANT_OP, opName: "arith.addi"},
{type: NON_CONSTANT_OP, opName: "arith.subi"},
{type: CONSTANT_OP, opName: "arith.constant"},
{type: BLOCK_ARGUMENT, opName: ""},
{type: BLOCK_ARGUMENT, opName: ""}
}
```
Now, if "keyA" is the key associated with operand `A` and "keyB" is the key associated with operand `B`, then:
"keyA" < "keyB" iff:
1. In the first unequal pair of corresponding AncestorKeys, the AncestorKey in operand `A` is smaller, or,
2. Both the AncestorKeys in every pair are the same and the size of operand `A`'s "key" is smaller.
AncestorKeys of type `BLOCK_ARGUMENT` are considered the smallest, those of type `CONSTANT_OP`, the largest, and `NON_CONSTANT_OP` types come in between. Within the types `NON_CONSTANT_OP` and `CONSTANT_OP`, the smaller ones are the ones with smaller op names (lexicographically).
---
Some examples of such a sorting:
Assume that the sorting is being applied to `foo.commutative`, which is a commutative op.
Example 1:
> %1 = foo.const 0
> %2 = foo.mul <block argument>, <block argument>
> %3 = foo.commutative %1, %2
Here,
1. The key associated with %1 is:
```
{
{CONSTANT_OP, "foo.const"}
}
```
2. The key associated with %2 is:
```
{
{NON_CONSTANT_OP, "foo.mul"},
{BLOCK_ARGUMENT, ""},
{BLOCK_ARGUMENT, ""}
}
```
The key of %2 < the key of %1
Thus, the sorted `foo.commutative` is:
> %3 = foo.commutative %2, %1
Example 2:
> %1 = foo.const 0
> %2 = foo.mul <block argument>, <block argument>
> %3 = foo.mul %2, %1
> %4 = foo.add %2, %1
> %5 = foo.commutative %1, %2, %3, %4
Here,
1. The key associated with %1 is:
```
{
{CONSTANT_OP, "foo.const"}
}
```
2. The key associated with %2 is:
```
{
{NON_CONSTANT_OP, "foo.mul"},
{BLOCK_ARGUMENT, ""}
}
```
3. The key associated with %3 is:
```
{
{NON_CONSTANT_OP, "foo.mul"},
{NON_CONSTANT_OP, "foo.mul"},
{CONSTANT_OP, "foo.const"},
{BLOCK_ARGUMENT, ""},
{BLOCK_ARGUMENT, ""}
}
```
4. The key associated with %4 is:
```
{
{NON_CONSTANT_OP, "foo.add"},
{NON_CONSTANT_OP, "foo.mul"},
{CONSTANT_OP, "foo.const"},
{BLOCK_ARGUMENT, ""},
{BLOCK_ARGUMENT, ""}
}
```
Thus, the sorted `foo.commutative` is:
> %5 = foo.commutative %4, %3, %2, %1
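A sketch of wiring the utility into a pass follows; the populate-function name is assumed from the description above, not quoted from the header, and the greedy driver is just one way to apply the pattern:
```
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

using namespace mlir;

// Sort the operands of all commutative ops under `root` into the canonical
// order described above, so later pattern matching can assume that order.
static void sortCommutativeOperands(Operation *root) {
  RewritePatternSet patterns(root->getContext());
  populateCommutativityUtilsPatterns(patterns); // assumed populate function
  (void)applyPatternsAndFoldGreedily(root, std::move(patterns));
}
```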
Signed-off-by: Srishti Srivastava <srishti.srivastava@polymagelabs.com>
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D124750
The current Parser library is solely focused on providing API for
the textual MLIR format, but MLIR will soon also provide a binary
format. This commit renames the current Parser library to AsmParser to
better correspond to what the library is actually intended for. A new
Parser library is added which will act as a unified parser interface
between both text and binary formats. Most parser clients are
unaffected, given that the unified interface is essentially the same as
the current interface. Only clients that rely on utilizing the
AsmParserState, or those that want to parse Attributes/Types need to be
updated to point to the AsmParser library.
Differential Revision: https://reviews.llvm.org/D129605
First of all, putting `LLVM_TOOLS_INSTALL_DIR` there breaks our NixOS
builds, because `LLVM_TOOLS_INSTALL_DIR`, defined the same as
`CMAKE_INSTALL_BINDIR`, becomes an *absolute* path, and when
downstream projects then try to install there too, this breaks because our
builds always install to fresh directories for isolation's sake.
Second of all, note that `LLVM_TOOLS_INSTALL_DIR` stands out against the
other specially crafted `LLVM_CONFIG_*` variables substituted in
`llvm/cmake/modules/LLVMConfig.cmake.in`.
@beanz added it in d0e1c2a550 to fix a
dangling reference in `AddLLVM`, but I am suspicious of how this
variable doesn't follow the pattern.
Those other ones are carefully made to be build-time vs install-time
variables depending on which `LLVMConfig.cmake` is being generated, are
carefully made relative as appropriate, etc. etc. For my NixOS use-case
they are also fine because they are never used as downstream install
variables, only for reading not writing.
To avoid the problems I face, and to restore symmetry, I deleted the
exported variable and arranged to have many `${project}_TOOLS_INSTALL_DIR`s.
`AddLLVM` now instead expects each project to define its own, and they
do so based on `CMAKE_INSTALL_BINDIR`. `LLVMConfig` still exports
`LLVM_TOOLS_BINARY_DIR` which is the location for the tools defined in
the usual way, matching the other remaining exported variables.
For the `AddLLVM` changes, I tried to copy the existing pattern of
internal vs. non-internal (i.e. for-LLVM vs. for-downstream) function/macro
names, but it would be good to confirm I did that correctly.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D117977
The `tileAndFuseLinalgOps` is a legacy approach for tiling + fusion of
Linalg operations. Since it was also intended to work on operations
with buffer operands, this method had fairly complex logic to make
sure tile and fuse was correct even with side-effecting linalg ops.
While complex, it still wasnt robust enough. This patch deprecates
this method and thereby deprecating the tiling + fusion method for ops
with buffer semantics. Note that the core transformation to do fusion
of a producer with a tiled consumer still exists. The deprecation here
only removes methods that auto-magically tried to tile and fuse
correctly in presence of side-effects.
The `tileAndFuseLinalgOps` also works with operations with tensor
semantics. There are at least two other ways the same functionality
exists.
1) The `tileConsumerAndFuseProducers` method. This does a similar
transformation, but using a slightly different logic to
automatically figure out the legal tile + fuse code. Note that this
is also to be deprecated soon.
2) The preferred way uses the `TilingInterface` for tile + fuse, and
relies on the caller to set the tiling options correctly to ensure
that the generated code is correct.
As proof that (2) is equivalent to the functionality provided by
`tileAndFuseLinalgOps`, relevant tests have been moved to use the
interface, where the test driver sets the tile sizes appropriately to
generate the expected code.
Differential Revision: https://reviews.llvm.org/D129901
This one required more changes than ideal due to an overlapping generated name
with different return types. Changed getIndexingMaps to getIndexingMapsArray to
move it out of the way and to highlight that it returns (more expensively) a
SmallVector, and uses the prefixed name for the Attribute.
Differential Revision: https://reviews.llvm.org/D129919
For AttrDef declarations, place code specified in extraClassDefinition into the generated *.cpp.inc file.
Reviewed By: Mogball, rriddle
Differential Revision: https://reviews.llvm.org/D129574
This patch allows custom attribute and type builders to return
something other than the C++ type of the attribute or type.
This is useful for attributes or types that may perform extra work during
construction (e.g. canonicalization) that could result in a different
kind of attribute or type being returned.
Reviewed By: rriddle, lattner
Differential Revision: https://reviews.llvm.org/D129792
This patch adds a pattern to decompose a `linalg.generic` operation
that
- has only parallel iterator types
- has more than 2 statements (including the yield)
into multiple `linalg.generic` operations such that each operation has
a single statement and a yield.
The pattern added here just splits the matching `linalg.generic` into
two `linalg.generic`s, one containing the first statement, and the
other containing the remaining statements. The same pattern can be applied
repeatedly on the second op to ultimately fully decompose the generic
op.
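A sketch of applying the decomposition repeatedly through the greedy driver; the populate-function name below is assumed, not quoted from the header:
```
#include "mlir/Dialect/Linalg/Transforms/Transforms.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

using namespace mlir;

// Each rewrite peels off one statement; the greedy driver reapplies the
// pattern until every linalg.generic has a single statement plus its yield.
static void decomposeGenericOps(Operation *root) {
  RewritePatternSet patterns(root->getContext());
  linalg::populateDecomposeLinalgOpsPattern(patterns); // assumed name
  (void)applyPatternsAndFoldGreedily(root, std::move(patterns));
}
```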
Differential Revision: https://reviews.llvm.org/D129704
This pass tests patterns that are already tested elsewhere by applying them in a semi-targeted
fashion using anchor function and op names.
From now on, targeted tests should use the transform dialect interpreter.
Differential Revision: https://reviews.llvm.org/D129627
This required changing a bit of how attributes/types are parsed. A new
`KeywordSwitch` class was added to AsmParser that provides a StringSwitch
like API for parsing keywords with a set of potential matches. It intends to
both provide a cleaner API, and enable injection for code completion. This
required changing the API of `generated(Attr|Type)Parser` to handle the
parsing of the keyword, instead of having the user do it. Most upstream
dialects use the autogenerated handling and didn't require a direct update.
Differential Revision: https://reviews.llvm.org/D129267
Previously default attributes were only usable by way of the ODS generated
accessors, but this was undesirable as
1. The ODS getters could construct an Attribute on each get request;
2. For non-C++ uses this would require either duplicating some of the default
attribute generation or generating additional bindings to generate methods;
3. Accessing op.getAttr("foo") and op.getFoo() would return different results.
Generate a method to populate default attributes that can be used to address
these.
This merely adds this facility but does not employ it by default on any path.
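As a hedged sketch of how a caller might employ it (the op name and the generated hook's exact signature are assumptions here):
```
#include "mlir/IR/OperationSupport.h"

using namespace mlir;

// Materialize default attribute values into an op being built so that later
// op.getAttr("foo") lookups see the same value the ODS getter would return.
static void addDefaultAttrs(OperationState &state) {
  MyOp::populateDefaultAttrs(state.name, state.attributes); // assumed hook
}
```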
Differential Revision: https://reviews.llvm.org/D128962
With SCCP and integer range analysis ported to the new framework, this old framework is redundant. Delete it.
Depends on D128866
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D128867
This patch introduces an implementation of dense data-flow analysis. Dense
data-flow analysis attaches a lattice before and after the execution of every
operation. The lattice state is propagated across operations by a user-defined
transfer function. The state is joined across control-flow and callgraph edges.
The patch provides an example pass that uses both a dense and a sparse analysis
together.
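As a toy illustration of the dense idea (deliberately not the framework's actual API): attach a lattice before and after every operation of a straight-line block and push state through a transfer function.
```
#include <set>
#include <string>
#include <utility>
#include <vector>

// Toy "ops": each may define one value and read several others.
struct ToyOp {
  std::string def;
  std::vector<std::string> uses;
};

// Lattice: the set of values known to be defined at a program point.
using Lattice = std::set<std::string>;

// User-defined transfer function: state after an op given the state before.
static Lattice transfer(const ToyOp &op, Lattice before) {
  if (!op.def.empty())
    before.insert(op.def);
  return before;
}

// Dense propagation: a (before, after) lattice pair is kept per operation.
// Joins across control-flow and callgraph edges are omitted for brevity.
static std::vector<std::pair<Lattice, Lattice>>
runDense(const std::vector<ToyOp> &ops) {
  std::vector<std::pair<Lattice, Lattice>> states;
  Lattice current;
  for (const ToyOp &op : ops) {
    Lattice after = transfer(op, current);
    states.emplace_back(current, after);
    current = after;
  }
  return states;
}
```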
Depends on D127139
Reviewed By: rriddle, phisiart
Differential Revision: https://reviews.llvm.org/D127173
By making TypeInterfaces and AttrInterfaces into Types and Attrs respectively, it becomes possible to use them anywhere a Type or Attr may go, that is, within the arguments and results of an Op definition, in a RewritePattern, etc.
Prior to this change users had to separately define a Type or Attr, with a predicate to check whether a type or attribute implements a given interface. Such code will be redundant now.
Removing such occurrences in upstream dialects will be part of a separate patch.
As part of implementing this patch, slight refactoring had to be done. In particular, the Interface's cppClassName field was renamed to cppInterfaceName as it "clashed" with the TypeConstraint's cppClassName: the Interface's cppClassName expected just the class name, without any namespaces, while the TypeConstraint's cppClassName expected a fully qualified class name.
Differential Revision: https://reviews.llvm.org/D129209
With the exceptions of AttrOrTypeParameter and DerivedAttr, all of MLIR consistently uses $_ctxt as the substitute variable for the MLIRContext in TableGen C++ code.
Usually this does not matter unless one were to reuse some code in multiple fields, but it is still needlessly inconsistent and prone to error.
This patch fixes that by consistently using $_ctxt everywhere.
Differential Revision: https://reviews.llvm.org/D129153
This patch implements the analysis state classes needed for sparse data-flow analysis and implements a dead-code analysis using those states to determine liveness of blocks, control-flow edges, region predecessors, and function callsites.
Depends on D126751
Reviewed By: rriddle, phisiart
Differential Revision: https://reviews.llvm.org/D127064
`enableSplitting` simply enables/disables whether we should split
or use the full buffer. `insertMarkerInOutput` toggles whether split markers
should be inserted in between processed output chunks.
These options allow for merging the duplicate code paths we have
when splitting is optional.
Differential Revision: https://reviews.llvm.org/D128764
This patch adds a `convertFromStorage` field to attribute or type parameters that can implement more complex logic for converting from the parameter's C++ storage type (e.g. `Optional<SmallVector<T>>`) to its C++ type (e.g. `Optional<ArrayRef<T>>`).
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D128293