This diff causes mlir-tblgen to generate code for an additional builder for
operations whose return type can be inferred *AND* where an attribute in the
argument list can be "unwrapped." (Previously, the unwrapped build function
was only generated for builders with explicit return types in separate or
aggregate form.) As an example, this builder might be used by code that creates
operations that implement the `SameOperandsAndResultType` interface. A test case
was created.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D124043
This diff allows the EnumAttr class to be used for bit enum attributes (in
addition to previously supported integer enum attributes). While integer
and bit enum attributes share many common implementation aspects, parsing
bit enum values requires a separate implementation. This is accomplished
by creating empty parser and printer strings in the EnumAttrInfo record,
and having derived classes (specific to bit and integer enums) override with
an appropriate parser/printer string.
To support existing bit enums that may use a vertical bar separator, the
parser is modified to support the | token.
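As a hedged illustration (the op and attribute names here are hypothetical, not taken from the patch), a bit enum attribute whose cases are combined with the vertical bar separator that the parser now accepts might look like:
```mlir
// Hypothetical test op carrying a bit enum attribute; `read` and `write`
// are combined using the newly supported `|` token.
"test.op"() {flags = #test.bit_flags<read|write>} : () -> ()
```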
Tests were added for bit enums alongside integer enums.
Future diffs for fastmath attributes in the arithmetic dialect will use these
changes.
(resubmission of an earlier abandoned diff, updated to reflect subsequent changes
in the repository)
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D123880
This diff introduces a tablegen field for bit enum attributes
(`printBitEnumPrimaryGroups`) to control printing when the enum uses "group"
cases. An example would be an implementation that uses a `fastmath` enum value
as an alias for individual fastmath flags. The proposed field would allow
printing of simply `fast` for the enum value, instead of the more verbose list
that would include `fast` as well as the individual flags (e.g. `reassoc,nnan,
ninf,nsz,arcp,contract,afn,fast`).
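As a hedged illustration (the fastmath syntax sketched here is anticipated future work for the arithmetic dialect, so the exact form may differ), the field would change the printed attribute from the full expansion to just the group case:
```mlir
// With printBitEnumPrimaryGroups set, the group case prints alone:
%0 = arith.addf %a, %b fastmath<fast> : f32
// instead of the verbose expansion of every flag implied by `fast`:
%1 = arith.addf %a, %b fastmath<reassoc,nnan,ninf,nsz,arcp,contract,afn,fast> : f32
```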
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D123871
The verifier of llvm.mlir.addressof did not properly account for opaque pointers, that is, cases where the pointer type does not have an element type equal to the type of the referenced global or function. This patch fixes that by skipping the element type check if the pointer is opaque.
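A minimal sketch of the kind of IR the verifier now accepts (the global name is hypothetical); with an opaque result pointer there is no element type to compare against the type of @g:
```mlir
llvm.mlir.global external @g(42 : i32) : i32
llvm.func @addr_of() -> !llvm.ptr {
  %0 = llvm.mlir.addressof @g : !llvm.ptr
  llvm.return %0 : !llvm.ptr
}
```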
Differential Revision: https://reviews.llvm.org/D124333
After https://reviews.llvm.org/D119743 added the `AutomaticAllocationScope`
trait to loop-like constructs, the vector transfer full/partial splitting pass
started inserting allocations for temporaries within the closest loop rather
than the closest function (or other allocation scope such as `async.execute`).
While this is correct as long as the lowered code takes care of automatic
deallocation at the end of each iteration of the loop, this interferes with
downstream optimizations that expect `alloca`s to be at the function level.
Step over loops when looking for the closest allocation scope in vector
transfer full/partial splitting pass thus restoring the original behavior.
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D124366
This is likely preferable to having it crash if one were to specify an opaque pointer type, and the actual element type is unused either way.
Differential Revision: https://reviews.llvm.org/D124334
The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enums instead of numbered items that are matched back to the enum.
Fixes GitHub issue #53389
Reviewed by: aartbik, mehdi_amini
Differential Revision: https://reviews.llvm.org/D123876
Run `one-shot-bufferize` instead of `linalg-comprehensive-module-bufferize` and move some test cases to their respective dialects.
Differential Revision: https://reviews.llvm.org/D124323
Now that dialect constructors are generated in the .cpp file, we can
drop all of the dependent dialect includes from the .h file.
Differential Revision: https://reviews.llvm.org/D124298
By generating in the .h file, we were forcing dialects to include
a lot of additional header files because:
* Fields of the dialect, e.g. std::unique_ptr<>, were unable to use
forward declarations.
* Dependent dialects are loaded in the constructor, requiring the
full definition of each dependent dialect (which, depending on
the file structure of the dialect, may include the operations).
By generating in the .cpp we get much faster builds, and also
better align with the rest of the code base.
Fixes #55044
Differential Revision: https://reviews.llvm.org/D124297
As a fallback mechanism, if no entry was supplied for a given address space, the size or alignment for a pointer type with the default address space is returned instead.
This code currently crashes with opaque pointers, as it tries to construct a typed pointer type from the opaque pointer type, leading to a null pointer dereference when fetching the element type.
This patch fixes the issue by handling the opaque pointer cases explicitly.
Differential Revision: https://reviews.llvm.org/D124290
Using opaque pointers in function signatures leads to an attempt to recursively convert all types, including sub types in LLVM types. In the case of LLVM pointers, it may not have a subtype aka element type if it is opaque which would then lead to a null pointer dereference.
Differential Revision: https://reviews.llvm.org/D124291
This change fixes `CollapsedLayoutMap` for cases where the collapsed
dims are size 1. The cases where inner most dims are size 1 and
noncontiguous can be represented by the strided form and therefore can
be allowed. For such cases, the new stride should be of the next entry
in an association whose dimension is not size 1. If the next entry is
dynamic, it's not possible to decide which stride to use at compilation
time and the stride is set to dynamic.
Differential Revision: https://reviews.llvm.org/D124137
Currently, the sequence of Transform dialect operations only supports a single
use of each operand (verified by the `transform.sequence` operation). This was
originally motivated by the need to guard against accessing a payload IR
operation associated with a transform IR value after this operation has likely
been rewritten by a transformation. However, not all Transform dialect
operations rewrite the payload IR; in particular, "navigation" operations such as
`transform.pdl_match` do not.
Introduce memory effects to the Transform dialect operations to describe their
effect on the payload IR and on the mapping between payload IR operations and
transform IR values. Use these effects to replace the single-use rule, allowing
repeated reads and disallowing use-after-free, where operations with the "free"
effect are considered to "consume" the transform IR value and rewrite the
corresponding payload IR operations. As an additional improvement, this
enables code motion transformation on the transform IR itself.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D124181
The bubble-up logic was written assuming that the slice operation is
always a normal slice that outputs a tensor of the same rank.
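Assuming the slice op in question is tensor.extract_slice, the difference is roughly the following (a sketch, not taken from the patch):
```mlir
// A "normal" slice preserves the rank of the source tensor:
%a = tensor.extract_slice %t[0, 0, 0] [1, 16, 32] [1, 1, 1]
    : tensor<8x16x32xf32> to tensor<1x16x32xf32>
// A rank-reducing slice drops the unit dimension and was not handled correctly:
%b = tensor.extract_slice %t[0, 0, 0] [1, 16, 32] [1, 1, 1]
    : tensor<8x16x32xf32> to tensor<16x32xf32>
```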
Differential Revision: https://reviews.llvm.org/D124283
This allows printing the users of an operation, as proposed in GitHub issue #53286.
To be able to refer to operations with no results, these operations are assigned an
ID in SSANameState.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D124048
Add a shape func op for use (primarily) in the shape function_library op. Allows
setting the default dialect for somewhat simpler authoring. This is a minimal version
of the ops needed.
Differential Revision: https://reviews.llvm.org/D124055
If there is only a single element in the vector, then we can
just extract that element to compute the final result.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D124129
This makes the API easier to use. Also allows us to check for incorrect API usage for easier debugging.
Differential Revision: https://reviews.llvm.org/D124265
The `hasFilter` field is not needed. Instead, the filter accepts ops by default if no ALLOW rule was specified.
Differential Revision: https://reviews.llvm.org/D124264
vector.broadcast can inject all size-one dimensions. If it is
followed by a vector.shape_cast back to the original type, we can
cancel the op pair, like cancelling consecutive shape_cast ops.
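A minimal sketch of the pair that can now be cancelled (shapes are illustrative):
```mlir
// The broadcast only injects size-one dimensions ...
%b = vector.broadcast %v : vector<4xf32> to vector<1x1x4xf32>
// ... and the shape_cast goes back to the original type, so %c folds to %v.
%c = vector.shape_cast %b : vector<1x1x4xf32> to vector<4xf32>
```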
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D124094
* Move Module Bufferization to the bufferization dialect. The implementation is split into `OneShotModuleBufferize.cpp` and `FuncBufferizableOpInterfaceImpl.cpp`, so that the external model implementation can be easily moved to the func dialect in the future.
* Split and clean up test cases. A few test cases are still remaining in Linalg and will be updated separately.
* `linalg.inplaceable` is renamed to `bufferization.writable` to accurately reflect its current usage.
* Attributes and their verifiers are moved from the Linalg dialect to the Bufferization dialect.
* Expand documentation.
* Add a new flag to One-Shot Bufferize to allow for function boundary bufferization.
Differential Revision: https://reviews.llvm.org/D122229
The layout postprocessing step was removed and is now part of the FuncOp bufferization. If the user specified a certain layout map for a tensor function arg, use that layout map directly when bufferizing the function signature. Previously, the bufferization used a generic layout map for every tensor function arg and then updated function signatures and CallOps in a separate step.
Differential Revision: https://reviews.llvm.org/D122228
FuncOps are now less special. They must still be analyzed + bufferized in a certain order, but they are now bufferized same as other ops that have a region: Bufferize the op first (`bufferize` interface method), then bufferize the region body with other bufferization patterns. In the case of FuncOps, the function signature is bufferized together with ReturnOps. Similar to how, e.g., scf.for ops are bufferized together with scf.yield ops.
This change is essentially a reimplementation of the FuncOp bufferization, but mostly NFC from a user's perspective (apart from error messages). This change is in preparation of moving the code to the bufferization dialect.
Differential Revision: https://reviews.llvm.org/D123214
The bufferization driver was previously using a GreedyPatternRewriter. This was problematic because bufferization must traverse ops top-to-bottom. The GreedyPatternRewriter was previously configured via `useTopDownTraversal`, but this was a hack; this API was just meant for performance improvements and should not affect the result of the rewrite.
Differential Revision: https://reviews.llvm.org/D123618
This patch replaces the current fold function with the common constant fold function in order to cover the constant splat case.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D124236
This patch replaces some code with matchPattern and moves it before the constant folder function in order to avoid redundant invocations.
Differential Revision: https://reviews.llvm.org/D124235
This seems more natural than having it as a static method of ExpandShapeOp.
Also fix a typo ("the the" -> "the").
Differential Revision: https://reviews.llvm.org/D124234
These scripts do not appear to require bash, and while /bin/sh
is not guaranteed either, it is more commonly available.
Fixes tests on NixOS and in certain sandbox build environments.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D124205
Insert the select op before the combiner op when vectorizing a
reduction loop that needs a mask, so the vectorized reduction loop
can pass isLoopParallel check and be transformed correctly in later
passes.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D124047
When Location tracking support for block arguments was added, we
discussed various approaches to threading support for this through
function-like argument parsing. At the time, we added a parallel array
of locations that could hold this. It turns out that that approach was
verbose and error-prone; roughly no one adopted it.
This patch takes a different approach, adding an optional source
locator to the UnresolvedOperand class. This fits much more naturally
into the standard structure we use for representing locators, and gives
all the function like dialects locator support for free (e.g. see the
test adding an example for the LLVM dialect).
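A hedged sketch of what this enables in function-like ops (illustrative, not the actual test from the patch): an optional locator can now be attached directly to each entry block argument.
```mlir
llvm.func @arg_locs(%arg0: i32 loc("arg0_source"), %arg1: i64 loc(unknown)) {
  llvm.return
}
```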
Differential Revision: https://reviews.llvm.org/D124188
Previously, checking that a fixed point has been reached was counted as a full
iteration. As this "iteration" never changes the IR, this seems
counter-intuitive.
Differential Revision: https://reviews.llvm.org/D123641
This introduces a pair of ops to the Transform dialect that connect it to PDL
patterns. Transform dialect relies on PDL for matching the Payload IR ops that
are about to be transformed. For this purpose, it provides a container op for
patterns, a "pdl_match" op and transform interface implementations that call
into the pattern matching infrastructure.
To enable the caching of compiled patterns, this also provides the extension
mechanism for TransformState. Extensions allow one to store additional
information in the TransformState and thus communicate it between different
Transform dialect operations when they are applied. They can be added and
removed when applying transform ops. An extension containing a symbol table in
which the pattern names are resolved and a pattern compilation cache is
introduced as the first client.
Depends On D123664
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D124007
The current implementation of takeBody first clears the Region, before then taking ownership of the blocks of the other region. The issue here, however, is that when clearing the region, it does not take into account references of operations to each other. In particular, blocks are deleted from front to back, and operations within a block are very likely to be deleted despite still having uses, causing an assertion to trigger [0].
This patch fixes that issue by simply calling dropAllReferences() before clearing the blocks.
[0] 9a8bb4bc63/mlir/lib/IR/Operation.cpp (L154)
Differential Revision: https://reviews.llvm.org/D123913
Prior to this patch, `cloneInto` would do a simple walk over the blocks and contained operations and clone and map them as it encounters them. As a finishing touch, it then remaps any successors and operands it has remapped during that process.
This is generally fine, but sadly leads to a lot of uses of both operations and blocks from the source region in the cloned operations in the target region. Those uses lead to writes to the use-def lists of those operations, making `cloneInto` never thread-safe.
This patch reimplements `cloneInto` in three steps to avoid ever creating any extra uses on elements in the source region:
* It first creates the mapping of all blocks and block operands
* It then clones all operations to create the mapping of all operation results, but does not yet clone any regions or set the operands
* After all operation results have been mapped, it now sets the operations' operands and clones their regions.
That way it is now possible to call `cloneInto` from multiple threads if the Region or Operation is isolated-from-above. This allows creating copies of functions or using `mlir::inlineCall` with the same source region from multiple threads. In the general case, the method is thread-safe if, through cloning, no new uses of `Value`s from outside the cloned Operation/Region are created. This can be ensured by mapping any outside operands via the `BlockAndValueMapping` to `Value`s owned by the caller thread.
While I was at it, I also reworked the `clone` method of `Operation` a little bit and added a proper options class to avoid having a `cloneWithoutRegionsAndOperands` method, and be more extensible in the future. `cloneWithoutRegions` is now also a simple wrapper that calls `clone` with the proper options set. That way all the operation cloning code is now contained solely within `clone`.
Differential Revision: https://reviews.llvm.org/D123917
Add async dependencies support for gpu.launch op: this allows specifying
a list of async tokens ("streams") as dependencies for the launch.
Update the GPU kernel outlining pass lowering to propagate async
dependencies from gpu.launch to gpu.launch_func op. Previously, a new
stream was being created and destroyed for a kernel launch. The async
deps support allows the kernel launch to be serialized on an existing
stream.
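A rough sketch of the new form (sizes and token names are illustrative; the exact assembly may differ slightly):
```mlir
%c1 = arith.constant 1 : index
%t0 = gpu.wait async
%t1 = gpu.launch async [%t0]
          blocks(%bx, %by, %bz) in (%gx = %c1, %gy = %c1, %gz = %c1)
          threads(%tx, %ty, %tz) in (%sx = %c1, %sy = %c1, %sz = %c1) {
  gpu.terminator
}
```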
Differential Revision: https://reviews.llvm.org/D123499
This patch adds lowering support for atomic read and write constructs.
Also added is pointer modelling code to allow FIR pointer-like types to
be inferred and converted while lowering.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D122725
Co-authored-by: Kiran Chandramohan <kiran.chandramohan@arm.com>
This patch handles empty hint value for critical and atomic constructs.
This also adds checks and tests for hint clause on atomic constructs.
Reviewed By: peixin, kiranchandramohan, NimishMishra
Differential Revision: https://reviews.llvm.org/D123186
Add a helper used to implement the build methods generated by ods-gen. The change reduces code size and compilation time since all structured op builders use the same build method. The change reduces the LinalgOps.cpp compilation time from 10.2s to 9.8s (debug build).
Depends On D123987
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D124003
The revision avoids template methods for parsing and printing that are replicated for every named operation. Instead, the new methods take a regionBuilder argument. The revision reduces the compile time of LinalgOps.cpp from 11.2 to 10.2 seconds (debug build).
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D123987
NFC. Drop trailing end-of-line whitespace in the GPU async ops' printer
whenever the list of async deps is empty.
Reviewed By: mehdi_amini, rriddle
Differential Revision: https://reviews.llvm.org/D123754
Add RegionBranchOpInterface to the affine.for op so that transforms relying
on RegionBranchOpInterface can support affine.for, e.g. the
buffer-deallocation pass.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D123568
Writes into tensors that are defined outside of a repetitive region, but with the write happening inside of the repetitive region, were previously not considered conflicts. This was incorrect.
E.g.:
```
%0 = ... : tensor<?xf32>
scf.for ... {
"reading_op"(%0) : tensor<?xf32>
%1 = "writing_op"(%0) : tensor<?xf32> -> tensor<?xf32>
...
}
```
In the above example, "writing_op" should be out-of-place.
This commit fixes the bufferization for any op that declares its repetitive semantics via RegionBranchOpInterface.
This patch adds a check of the supported reduction kinds for ScanOp to avoid using and/or/xor for floating-point types.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D123977
Introduce a method on PyMlirContext (and plumb it through to Python) to
invalidate all of the operations in the live operations map and clear
it. Since Python has no notion of private data, an end-developer could
reach into some 3rd party API which uses the MLIR Python API (that is
behaving correctly with regard to holding references) and grab a
reference to an MLIR Python Operation, preventing it from being
deconstructed out of the live operations map. This allows the API
developer to clear the map when it calls C++ code which could delete
operations, protecting itself from its users.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D123895
getUpperBound() is analogous to getLowerBound(), except that it computes the upper
bound; it is used in range analysis.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D124020
Sequence is an important transform combination primitive that just indicates
transform ops being applied in a row. The simplest version fails
immediately if any transformation in the sequence fails. Introducing this
operation allows one to start placing transform IR within other IR.
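A rough sketch of the intended usage (the exact syntax at this revision may differ):
```mlir
transform.sequence {
^bb0(%arg0: !pdl.operation):
  // Transform ops are applied here one after another; the sequence
  // fails as soon as any of them fails.
}
```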
Depends On D123135
Reviewed By: Mogball, rriddle
Differential Revision: https://reviews.llvm.org/D123664
This patch adds a new function `mlirDenseElementsAttrBFloat16Get()`,
which accepts the shaped type, the number of BFloat16 values, and a
pointer to an array of BFloat16 values, each of which is a `uint16_t`
value.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D123981
The printer is now resilient to invalid IR and will already automatically
fall back to the generic form on invalid IR. Using the generic printer on
pass failure was a conservative option before the printer was made
failsafe.
Reviewed By: lattner, rriddle, jpienaar, bondhugula
Differential Revision: https://reviews.llvm.org/D123915
Fold away the gpu.memcpy op when the only uses of its destination are
the memcpy op in question and its allocation and deallocation
ops.
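A minimal sketch of the pattern that now folds away (types are illustrative):
```mlir
%dst = gpu.alloc () : memref<16xf32>
// %dst is never read afterwards, so the copy (and eventually the
// alloc/dealloc pair) is dead and can be removed.
gpu.memcpy %dst, %src : memref<16xf32>, memref<16xf32>
gpu.dealloc %dst : memref<16xf32>
```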
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D121279
Add helper functions to check if an op may be executed multiple times based on RegionBranchOpInterface.
Differential Revision: https://reviews.llvm.org/D123789
This reverts commit af0285122f.
The test "libomp::loop_dispatch.c" on builder
openmp-gcc-x86_64-linux-debian fails from time to time.
See #54969. This patch is unrelated.
This patch removes the inheritance of MultiAffineFunction from IntegerPolyhedron
and instead makes IntegerPolyhedron a member.
This patch removes virtualization in MultiAffineFunction and also removes
unnecessary functions inherited from IntegerPolyhedron.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D123921
The OMPScheduleType enum stores the constants from libomp's internal sched_type in kmp.h and is used by several kmp API functions. The enum values have an internal structure, namely each scheduling algorithm exists in four variants: unordered, ordered, nomerge unordered, and nomerge ordered.
This patch (basically a followup to D114940) splits the "ordered" and "nomerge" bits into separate flags, as was already done for "monotonic" and "nonmonotonic", so we can apply bit-flag operations on them. It also now contains all possible combinations according to kmp's sched_type. Derivation of the OMPScheduleType enum from clause parameters has been moved from MLIR's OpenMPToLLVMIRTranslation.cpp to OpenMPIRBuilder to make it available for clang as well. Since the primary purpose of the flag is the binary interface to libomp, it has been made more private to LLVMFrontend. The primary interface for generating a worksharing-loop using OpenMPIRBuilder code becomes `applyWorkshareLoop`, which derives the OMPScheduleType automatically and calls the appropriate emitter function.
While this is mostly a NFC refactor, it still applies the following functional changes:
* The logic from OpenMPToLLVMIRTranslation to derive the OMPScheduleType also applies to clang. Most notably, it now applies the nonmonotonic flag for non-static schedules by default.
* In OpenMPToLLVMIRTranslation, the nonmonotonic default flag was previously not applied if the simd modifier was used. I assume this was a bug, since the effect was due to `loop.schedule_modifier()` returning `mlir::omp::ScheduleModifier::none` instead of `llvm::Optional::None`.
* In OpenMPToLLVMIRTranslation, the nonmonotonic default flag was set even if ordered was specified, in breach of what the comment before it, citing the OpenMP specification, says. I assume this was an oversight.
The ordered flag with parameter was not considered in this patch. Changes will need to be made (e.g. adding/modifying function parameters) when support for it is added. The lengthy names of the enum values can be discussed, for the moment this is avoiding reusing previously existing enum value names such as `StaticChunked` to avoid confusion.
Reviewed By: peixin
Differential Revision: https://reviews.llvm.org/D123403
Reproducers that resulted in triggering the following asserts:
mlir::NamedAttribute::NamedAttribute(mlir::StringAttr, mlir::Attribute)
mlir/lib/IR/Attributes.cpp:29:3
consumeToken
mlir/lib/Parser/Parser.h:126
Differential Revision: https://reviews.llvm.org/D122240
This patch modifies mergeLocalIds to not delete duplicate local ids in
`this` relation. This allows the ordering of the final local ids for `this`
to be determined more easily, which is generally required when other objects
refer to these local ids.
Reviewed By: arjunp
Differential Revision: https://reviews.llvm.org/D123866
This revision folds a transpose of a splat into a new splat with the transposed vector type. For a splat, there is no need to actually perform the transpose; it is more effective to just build a new splat as the result.
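A minimal sketch of the fold (shapes are illustrative):
```mlir
%s = vector.splat %f : vector<4x8xf32>
%t = vector.transpose %s, [1, 0] : vector<4x8xf32> to vector<8x4xf32>
// folds to a splat of the transposed type:
%t2 = vector.splat %f : vector<8x4xf32>
```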
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D123765
This class is a helper for 'parser-like' use cases of LogicalResult
where the implicit conversion to bool is tolerable. It is used by the
operation asmparsers, but is more generic functionality that is closely
aligned with LogicalResult. Hoist it up to LogicalResult.h to make it
more accessible. This is part of Issue #54884
Differential Revision: https://reviews.llvm.org/D123760
The generic form of the op is too verbose and in some cases not
readable. On pass failure, ops have been so far printed in generic form
to provide a (stronger) guarantee that the IR print succeeds. However,
in a large number of pass failure cases, the IR is still valid and
the custom printers for the ops will succeed. In fact, readability is
highly desirable post pass failure. This revision provides an option to
print ops in their custom/pretty-printed form on IR failure -- this
option is unsafe and there is no guarantee it will succeed. It's
disabled by default and can be turned on only if needed.
Differential Revision: https://reviews.llvm.org/D123893
This patch takes advantage of the Commutative trait on operations
to remove identical commutative operations where the operands are swapped.
The second operation below can be removed since `arith.addi` is commutative.
```
%1 = arith.addi %a, %b : i32
%2 = arith.addi %b, %a : i32
```
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D123492
This helps to prevent tsan failures when users inadvertently mutate the
context in a non-safe way.
Differential Revision: https://reviews.llvm.org/D112021
This technique results in an explosion in compile time, resulting from a
huge number of std::tuple/concat instantiations. This technique is replaced
by simpler metaprogramming and results in a significant reduction in
compile time. A local debug/asan build saw a 4x speedup in the processing
of ArithmeticOps.h.inc, and given the nature of this change every dialect
should see similar reductions in compile time.
Differential Revision: https://reviews.llvm.org/D123360
Previously this checked if the entire symbolic numerator was divisible by the
denominator, which is never the case when this function is called. Fixed this to
check only the non-const coefficients in the numerator, which was what was
intended and documented.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123592
extract was incorrectly folded when the source was coming from a
broadcast that was both adding new dimensions and broadcasting the inner
dimension.
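A sketch of the problematic case (shapes are illustrative, using the assembly of that time): folding the extract back to the broadcast source would be wrong because the inner dimension was broadcast from 1 to 8.
```mlir
%b = vector.broadcast %v : vector<1xf32> to vector<4x8xf32>
// %e has type vector<8xf32>; it is not simply %v.
%e = vector.extract %b[1] : vector<4x8xf32>
```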
Differential Revision: https://reviews.llvm.org/D123867
When the sample value is zero, everything is the same except that failure to
pivot does not imply emptiness. So, leave it to the user to mark as empty if
necessary, if they know the sample value is strictly negative. This is needed
for an upcoming symbolic lexmin heuristic.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123604
Changes the algorithm of LICM to support graph regions (no guarantee of topologically sorted order). Also fixes an issue where ops with recursive side effects and regions would not be hoisted if any nested ops used operands that were defined within the nested region.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D122465
Operation clone is currently faulty.
Suppose you have a block like as follows:
```
(%x0 : i32) {
%x1 = f(%x0)
return %x1
}
```
The test case we have is that we want to "unroll" this, in which we want to change it to compute `f(f(x0))` instead of just `f(x0)`. We do so by making a copy of the body at the end of the block and setting the uses of the argument in the copied operations to the value returned from the original block.
This is implemented as follows:
1) map the block arguments to the returned value (`map[x0] = x1`).
2) clone the body
Now for this small example, this works as intended and we get the following.
```
(%x0 : i32) {
%x1 = f(%x0)
%x2 = f(%x1)
return %x2
}
```
This is because the current logic to clone `x1 = f(x0)` first looks up the arguments in the map (which finds `x0` maps to `x1` from the initialization), and then sets the map of the result to the cloned result (`map[x1] = x2`).
However, this fails if `x0` is not an argument to the op, but instead used inside the region, like below.
```
(%x0 : i32) {
%x1 = f() {
yield %x0
}
return %x1
}
```
This is because cloning an op currently first looks up the args (none), sets the map of the result (`map[%x1] = %x2`), and then clones the regions. This results in the following, which is clearly illegal:
```
(%x0 : i32) {
%x1 = f() {
yield %x0
}
%x2 = f() {
yield %x2
}
return %x2
}
```
Diving deeper, this is partially due to the ordering (which is what this PR fixes), as well as how region cloning works. Namely, it will first clone with the mapping, and then it will remap all operands. Since the ordering above now has a map of `x0 -> x1` and `x1 -> x2`, we end up with the incorrect behavior here.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D122531
This diff moves `EnumAttr` tablegen definitions (specifically, `IntEnumAttr` and
`BitEnumAttr`-related classes) from `OpBase.td` to `EnumAttr.td`. No
functionality is changed.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D123551
LLVM IR is moving towards adoption of opaque pointer types. These require extra
information to be passed when constructing some operations, in particular GEP
and Alloca. Adapt the builders of said operations and modify the translation
code to handle both opaque and non-opaque pointers.
This incidentally adds the translation for Alloca alignment and fixes the translation
of struct-related GEP indices that must be constant.
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D123792
Similar to the existing pattern for reordering cast(transpose),
this makes a transpose follow another transpose and increases the chance
of embedding the transposition inside a contraction op. After all,
cast ops are just special instances of elementwise ops.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D123596
In order to increase parallelism, certain ops that have regions and the
IsIsolatedFromAbove trait will have their verification delayed. That
means the region verifier may access invalid ops, which may lead to a
crash.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D122771
Copy the implementation of SparseCompiler from python/tools to taco/tools until we have a common place to install it. Modify TACO to use this SparseCompiler for compilation and jitting.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D123696
This dialect provides operations that can be used to control transformation of
the IR using a different portion of the IR. It refers to the IR being
transformed as payload IR, and to the IR guiding the transformation as
transform IR.
The main use case for this dialect is orchestrating fine-grain transformations
on individual operations or sets thereof. For example, it may involve finding
loop-like operations with specific properties (e.g., large size) in the payload
IR, applying loop tiling to those and only those operations, and then applying
loop unrolling to the inner loops produced by the previous transformations. As
such, it is not intended as a replacement for the pass infrastructure, nor for
the pattern rewriting infrastructure. In the most common case, the transform IR
will be processed and applied to payload IR by a pass. Transformations
expressed by the transform dialect may be implemented using the pattern
infrastructure or any other relevant MLIR component.
This dialect is designed to be extensible, that is, clients of this dialect are
allowed to inject additional operations into this dialect using the
`TransformDialectExtension` mechanism newly introduced in this patch. This allows the
dialect to avoid a dependency on the implementation of the transformation as
well as to avoid introducing dialect-specific transform dialects.
See https://discourse.llvm.org/t/rfc-interfaces-and-dialects-for-precise-ir-transformation-control/60927.
Reviewed By: nicolasvasilache, Mogball, rriddle
Differential Revision: https://reviews.llvm.org/D123135
Move the operations that correspond to LLVM IR intrinsics in a separate .td
file. This makes it easier to maintain the intrinsics and decreases the compile
time of LLVMDialect.cpp by ~25%.
Depends On D123310
Reviewed By: wsmoses, jacquesguan
Differential Revision: https://reviews.llvm.org/D123315
LLVM IR has introduced and is moving forward with the concept of opaque
pointers, i.e. pointer types that are not carrying around the pointee type.
Instead, memory-related operations indicate the type of the data being accessed
through the opaque pointer. Introduce the initial support for opaque pointers
in the LLVM dialect:
- `LLVMPointerType` to support omitting the element type;
- alloca/load/store/gep to support opaque pointers in their operands and
results; this requires alloca and gep to store the element type as an
attribute;
- memory-related intrinsics to support opaque pointers in their operands;
- translation to LLVM IR for the ops above is no longer using methods
deprecated in LLVM API due to the introduction of opaque pointers.
Unlike LLVM IR, MLIR can afford to support both opaque and non-opaque pointers
at the same time and simplify the transition. Translation to LLVM IR of MLIR
that involves opaque pointers requires the LLVMContext to be configured to
always use opaque pointers.
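A hedged sketch of the difference in the LLVM dialect assembly (the exact opaque-pointer syntax may differ slightly from this revision):
```mlir
// Non-opaque: the element type is part of the pointer type.
%0 = llvm.load %p : !llvm.ptr<i32>
// Opaque: the element type is carried by the memory operation instead.
%1 = llvm.load %q : !llvm.ptr -> i32
```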
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D123310
This change adds three new operations to the GPU dialect: gpu.mma.sync,
gpu.mma.ldmatrix, and gpu.lane_id. The former two are meant to target
the lower level nvvm.mma.sync and nvvm.ldmatrix instructions, respectively.
Lowerings are added for the new GPU operations for conversion to
NVVM.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D123647
Support unrolling for vector.transpose following the same interface as
other vector unrolling ops.
Differential Revision: https://reviews.llvm.org/D123688
StrEnumAttr has been deprecated in favour of EnumAttr, a solution based on AttrDef (https://reviews.llvm.org/D115181). This patch removes StrEnumAttr, along with all the custom ODS logic required to handle it.
See https://discourse.llvm.org/t/psa-stop-using-strenumattr-do-use-enumattr/5710 on how to transition to EnumAttr. In short,
```
// Before
def MyEnumAttr : StrEnumAttr<"MyEnum", "", [
StrEnumAttrCase<"A">,
StrEnumAttrCase<"B">
]>;
// After (pick an integer enum of your choice)
def MyEnum : I32EnumAttr<"MyEnum", "", [
I32EnumAttrCase<"A", 0>,
I32EnumAttrCase<"B", 1>
]> {
// Don't generate a C++ class! We want to use the AttrDef
let genSpecializedAttr = 0;
}
// Define the AttrDef
def MyEnum : EnumAttr<MyDialect, MyEnum, "my_enum">;
```
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D120834
Enable specifying additional include directories to search. This is
consistent with what one can do with clangd (although there it takes the form of
more general compilation options) and the Python LSP. We would in general expect
these to be provided by a compilation database equivalent.
Differential Revision: https://reviews.llvm.org/D123474
This patch adds thread_local to llvm.mlir.global and adds translation for dso_local and addr_space to and from LLVM IR.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D123412
This revision replaces the current type cast constant folder with a new common type cast constant folding function template.
It covers all the former folders and supports folding constant splats and vectors.
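For example (a sketch of the kind of fold now covered, assuming the arith cast ops are among those affected):
```mlir
%c = arith.constant dense<42> : vector<4xi32>
%f = arith.sitofp %c : vector<4xi32> to vector<4xf32>
// can now fold directly to a splat constant of the result type:
%f2 = arith.constant dense<42.0> : vector<4xf32>
```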
Differential Revision: https://reviews.llvm.org/D123489
This change generalizes the fusion of `tensor.expand_shape` ->
`linalg.generic` op by collapsing to handle cases where only a subset
of the reassociations specified in the `tensor.expand_shape` are valid
to be collapsed.
The method that does the collapsing is refactored to allow it to be a
generic utility when required.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D123153
This patch adds tasking construct according to Section 2.10.1 of OpenMP 5.0
Reviewed By: peixin, kiranchandramohan, abidmalikwaterloo
Differential Revision: https://reviews.llvm.org/D123575
This patch removes inheritance from PresburgerSpace in IntegerRelation and
instead makes it a member of these classes.
This is required for three reasons:
- It prevents implicit casting to PresburgerSpace.
- Not all functions of PresburgerSpace need to be exposed by the deriving classes.
- IntegerRelation and IntegerPolyhedron are defined in a PresburgerSpace. It
makes more sense for the space to be a member instead of them inheriting from
a space.
Reviewed By: arjunp, ftynse
Differential Revision: https://reviews.llvm.org/D123585
With this change, there's going to be a clear distinction between LLVM
and MLIR pass manager options (e.g. `-mlir-print-after-all` vs
`-print-after-all`). This change is desirable from the point of view of
projects that depend on both LLVM and MLIR, e.g. Flang.
For consistency, all pass manager options in MLIR are prefixed with
`mlir-`, even options that don't have equivalents in LLVM.
Differential Revision: https://reviews.llvm.org/D123495
Lookup iter_arg buffers using `lookupBuffer` instead of always creating a new `ToMemrefOp`. Also cast all yielded buffers (if necessary), regardless of whether they are an equivalent buffer or a new allocation.
Note: This should have been part of D123369.
Differential Revision: https://reviews.llvm.org/D123383
Switch CUDA runtime wrapper for GPU mem alloc/free to async. The
semantics of the GPU dialect ops (gpu.alloc/dealloc) and the wrappers they
lower to (gpu-to-llvm) are for the async versions -- however, this was
being incorrectly mapped to cuMemAlloc/cuMemFree instead of
cuMemAllocAsync/cuMemFreeAsync.
Reviewed By: csigg
Differential Revision: https://reviews.llvm.org/D123482
This supports the threadprivate directive in OpenMP dialect following
the OpenMP 5.1 [2.21.2] standard. Also adds lowering to LLVM IR using the OpenMP
IRBuilder.
Reviewed By: kiranchandramohan, shraiysh, arnamoy10
Differential Revision: https://reviews.llvm.org/D123350
Adding annotations on an as-needed basis, currently only for memrefCopy, but in general all C API functions that take pointers to memory allocated/initialized inside the jit-compiled code must be annotated to be able to run with msan.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123557
Use the new pass manager.
This also removes the ability to run arbitrary sets of passes. Not sure if this functionality is used, but it doesn't seem to be tested.
No need to initialize passes outside of constructing the PassBuilder with the new pass manager.
Reland: Fixed custom calls to `-lower-matrix-intrinsics` in integration tests by replacing them with `-O0 -enable-matrix`.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123425
The method to add elementwise ops fusion patterns pulls in many other
patterns by default. The patterns to pull in along with the
elementwise op fusion should be up to the caller. Split the method to
pull in just the elementwise ops fusion pattern. Other cleanup changes
include
- Move the pattern for constant folding of generic ops (currently only
constant folds transpose) into a separate file, cause it is not
related to fusion
- Drop the uber LinalgElementwiseFusionOptions. With the
populateElementwiseOpsFusionPatterns being split, this has no
utility now.
- Drop defaults for the control function.
- Fusion of splat constants with generic ops doesn't need a control
function. It is always good to do.
Differential Revision: https://reviews.llvm.org/D123236
Use the new pass manager.
This also removes the ability to run arbitrary sets of passes. Not sure if this functionality is used, but it doesn't seem to be tested.
No need to initialize passes outside of constructing the PassBuilder with the new pass manager.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123425
This avoids emitting errors in situations where the user doesn't have a server
setup, and doesn't mean to (e.g. when they merely want syntax highlighting).
Differential Revision: https://reviews.llvm.org/D123240
We currently proactively create language clients for every workspace folder,
and every language. This makes startup time more costly, and also emits errors
for missing language servers in contexts that the user currently isn't in. For example,
if a user opens a .mlir file we don't want to emit errors about .pdll files. We also don't
want to emit errors for missing servers in workspace folders that don't even utilize
MLIR.
This commit refactors client creation to lazy-load when a document that requires the
server is opened.
Differential Revision: https://reviews.llvm.org/D123184
In a previous commit we added proper support for separate configurations
per workspace folder, but that effectively broke support for processing out-of-workspace
files. Given how useful this is (e.g. when iterating on a test case in /tmp), this
commit refactors server creation to support this again. We support this case using
a "fallback" server that specifically handles files not within the workspace. This uses
the configuration settings for the current workspace itself (not the specific folder).
Differential Revision: https://reviews.llvm.org/D123183
We don't actually have any documentation today for how to
declaratively define a dialect. This commit rectifies that and properly
documents how to define a Dialect in tablegen, and details all of
the possible fields.
Differential Revision: https://reviews.llvm.org/D123258
OpBase is currently extremely bloated with constructs. This
commit continues the current process of cleaning this up, by splitting
out dialect definition constructs. This maps the ODS side more closely
to the C++ side.
Differential Revision: https://reviews.llvm.org/D123257
Normalize some of the division and inequality expressions used,
which can improve performance. Also deduplicate some of the
normalization functionality throughout the Presburger library.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123314
When making the subtract implementation non-recursive, tail calls were
implemented by incrementing the level but not pushing a frame, and returning
was implemented as returning to the level corresponding to the number of frames in the stack.
This is incorrect, as there could be a case where we tail-recurse at `level`,
and then recurse at `level + 1`, pushing a frame. However, because the previous
frame was missing, this new frame would be interpreted as corresponding to
`level` and not `level + 1`. Fix this by removing the special handling of tail
calls and just doing them as normal recursion, as this is the simplest correct
implementation and handling them specifically would be a premature optimization.
The impact of this bug is only on performance as this can only lead to
unnecessary subtractions of the same disjuncts multiple times. As subtraction
is idempotent, and rationally empty disjuncts are always discarded, this
does not affect the output, so this patch does not include a regression test.
(This also does not affect termination.)
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123327
This patch contains several ODS-level optimizations to attribute getters and getting.
1. OpAdaptors, when provided a DictionaryAttr, will instantiate an OperationName so that adaptor attribute getters can use cached identifiers.
2. Verifiers will take advantage of attributes stored in sorted order to get all required (non-optional, non-default valued, and non-derived) attributes in one pass over the attribute dictionary and verify that they are present.
3. ODS-generated attribute getters will use "subrange" lookup. Because the attributes are stored in sorted order and ODS knows which attributes are required, the number of required attributes less than and greater than each attribute can be computed. When searching for an attribute, the ends of the search range can be dropped.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D122430
This patch makes the inheritance of PWMAFunction from PresburgerSpace private.
The reasoning for this patch is to prevent implicit conversion to
PresburgerSpace from PWMAFunction and to not expose all functions exposed by
PresburgerSpace in PWMAFunction.
Reviewed By: arjunp
Differential Revision: https://reviews.llvm.org/D123076
Rewrite tensor::ExtractSliceOp(vector::TransferWriteOp) to vector::TransferWriteOp(tensor::ExtractSliceOp) if the full slice is overwritten and inserted into another tensor. After this rewrite, the operations bufferize in-place since all of them work on the same %iter_arg slice.
For example:
```mlir
%0 = vector.transfer_write %vec, %init_tensor[%c0, %c0]
: vector<8x16xf32>, tensor<8x16xf32>
%1 = tensor.extract_slice %0[0, 0] [%sz0, %sz1] [1, 1]
: tensor<8x16xf32> to tensor<?x?xf32>
%r = tensor.insert_slice %1 into %iter_arg[%iv0, %iv1] [%sz0, %sz1] [1, 1]
: tensor<?x?xf32> into tensor<27x37xf32>
```
folds to
```mlir
%0 = tensor.extract_slice %iter_arg[%iv0, %iv1] [%sz0, %sz1] [1, 1]
: tensor<27x37xf32> to tensor<?x?xf32>
%1 = vector.transfer_write %vec, %0[%c0, %c0]
: vector<8x16xf32>, tensor<?x?xf32>
%r = tensor.insert_slice %1 into %iter_arg[%iv0, %iv1] [%sz0, %sz1] [1, 1]
: tensor<?x?xf32> into tensor<27x37xf32>
```
Reviewed By: nicolasvasilache, hanchung
Differential Revision: https://reviews.llvm.org/D123190
Clarify that the in_bounds attribute is specified for the vector dimensions.
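For example (a sketch): the two in_bounds entries below correspond to the dimensions of the result vector, not to the dimensions of the source memref.
```mlir
%v = vector.transfer_read %A[%i, %j], %pad {in_bounds = [true, false]}
    : memref<?x?xf32>, vector<4x8xf32>
```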
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D123188
Insert a buffer copy unless the dims are guaranteed to be collapsible. In the verifier, accept collapses unless they are guaranteed to be non-collapsible.
Differential Revision: https://reviews.llvm.org/D123316
Insert a cast if the two tensors with identical layout (that are passed to `arith.select`) have different layout maps after bufferization.
Differential Revision: https://reviews.llvm.org/D123321
This patch revamps the BranchOpInterface a bit and allows a proper implementation of what was previously `getMutableSuccessorOperands` for operations that internally produce values for some of the successor block arguments. A motivating example for this would be an invoke op with an error handling path:
```
invoke %function(%0)
label ^success ^error(%1 : i32)
^error(%e: !error, %arg0 : i32):
...
```
The advantages of this are that any users of `BranchOpInterface` can still argue over remaining block argument operands (such as `%1` in the example above), as well as make use of the modifying capabilities to add more operands, erase an operand etc.
The way this patch implements that functionality is via a new class called `SuccessorOperands`, which is now returned by `getSuccessorOperands`. It basically contains an `unsigned` denoting how many operation-produced operands exist, as well as a `MutableOperandRange`, which are the usual forwarded operands we are used to. The produced operands are assumed to be the first few block arguments, followed by the forwarded operands afterwards. The role of `SuccessorOperands` is to provide various utility functions to modify and query the successor arguments from a `BranchOpInterface`.
Differential Revision: https://reviews.llvm.org/D123062
This diff contains:
- Parameterization of bit enum attributes in OpBase.td by bit width (e.g. 32
and 64). Previously, all enums were 32 bits. This brings enum functionality in
line with other integer attributes, and allows for bit enums greater than 32
bits.
- SPIRV and Vector dialects were updated to use bit enum attributes with an
explicit bit width
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123095
Reland Note: Adds a fix to properly mark a commutative operation as folded if we change the order
of its operands. This was uncovered by the fact that we no longer re-process constants.
This avoids accidentally reversing the order of constants during successive
application, e.g. when running the canonicalizer. This helps reduce the number
of iterations, and also avoids unnecessary changes to input IR.
Fixes #51892
Differential Revision: https://reviews.llvm.org/D122692
In addition, fixed a small bug with padding incorrectly inferring the output shape for dynamic inputs in convolution.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D121872
This case is handled by neither the folding nor the canonicalization
patterns. The folding pattern cannot generate new broadcast ops,
so it should be handled by the canonicalization pattern.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D123307
In several cases, a doc is being generated from a .td file that includes
files containing other dialects. Specify the dialect for which the
documentation is being generated explicitly.
Subtraction was previously implemented recursively. This refactors it to be
non-recursive to avoid issues with potential stack overflows.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123248
This patch enhances the CSE pass to deal with simple cases of duplicated
operations with MemoryEffects.
It allows the CSE pass to safely remove duplicate operations that have the
MemoryEffects::Read effect and no other side-effecting operations in
between them. Other MemoryEffects::Read operations are allowed in between.
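A minimal sketch of the kind of duplicate that can now be eliminated (ops are illustrative):
```mlir
%0 = memref.load %m[%i] : memref<8xf32>
// No side-effecting operation in between ...
%1 = memref.load %m[%i] : memref<8xf32>
// ... so %1 can be CSE'd and all its uses replaced with %0.
```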
The use case is pretty simple so far so we can build on top of it to add
more features.
This patch is also meant to avoid a dedicated CSE pass in FIR and was
brought together after discussion on https://reviews.llvm.org/D112711.
It does not currently cover the full range of use cases described in
https://reviews.llvm.org/D112711 but the idea is to gradually enhance
the MLIR CSE pass to handle common use cases that can be used by
other dialects.
This patch takes advantage of the new CSE capabilities in FIR.
Reviewed By: mehdi_amini, rriddle, schweitz
Differential Revision: https://reviews.llvm.org/D122801
This commit refactors the expected form of native constraint and rewrite
functions, and greatly reduces the necessary user complexity required when
defining a native function. Namely, this commit adds in automatic processing
of the necessary PDLValue glue code, and allows for users to define
constraint/rewrite functions using the C++ types that they actually want to
use.
As an example, lets see a simple example rewrite defined today:
```
static void rewriteFn(PatternRewriter &rewriter, PDLResultList &results,
ArrayRef<PDLValue> args) {
ValueRange operandValues = args[0].cast<ValueRange>();
TypeRange typeValues = args[1].cast<TypeRange>();
...
// Create an operation at some point and pass it back to PDL.
Operation *op = rewriter.create<SomeOp>(...);
results.push_back(op);
}
```
After this commit, that same rewrite could be defined as:
```
static Operation *rewriteFn(PatternRewriter &rewriter, ValueRange operandValues,
TypeRange typeValues) {
...
// Create an operation at some point and pass it back to PDL.
return rewriter.create<SomeOp>(...);
}
```
Differential Revision: https://reviews.llvm.org/D122086
Rationale:
Allocating the temporary buffers for access pattern expansion on the stack
(using alloca) is a bit too aggressive, since it easily runs out of stack space
for large enveloping tensor dimensions. This revision switches these buffers to
dynamic allocation with explicit alloc/dealloc pairs.
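A minimal sketch of the change in the generated IR (the buffer type is illustrative):
```mlir
// Before: stack allocation, which can overflow for large dimensions.
%buf = memref.alloca(%n) : memref<?xindex>
// After: heap allocation with an explicit matching deallocation.
%buf2 = memref.alloc(%n) : memref<?xindex>
// ... use %buf2 ...
memref.dealloc %buf2 : memref<?xindex>
```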
Reviewed By: bixia, wrengr
Differential Revision: https://reviews.llvm.org/D123253
Adds `mlirBlockDetach` to the CAPI to remove a block from its parent
region. Use it in the Python bindings to implement
`Block.append_to(region)`.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123165
Refactor the operation of subtraction by
- removing the usage of SimplexRollbackScopeExit since this
can't be used in the iterative version
- reducing the number of stack variables to make the
iterative version easier to follow
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D123156
Support returning arbitrary tensors from functions. Even those that are
not equivalent. To that end, additional information is gathered during
the analysis phase. In particular, which function args are aliasing with
which return values.
Also fix bugs in the current implementation when returning equivalent
tensors. Various unit tests are added to ensure that we have better test
coverage.
Note: Returning non-equivalent tensors is only allowed when
allowReturnAllocs is enabled. This functionality is useful for unit
testing and compatibility with other bufferizations such as the sparse
compiler. This is also towards using ModuleBufferization as a
replacement for --func-bufferize.
Differential Revision: https://reviews.llvm.org/D119120
* Bufferize FuncOp bodies and boundaries in the same loop. This is in preparation of moving FuncOp bufferization into an external model implementation.
* As a side effect, stop bufferization earlier if there was an error. (Do not continue bufferization, fewer error messages.)
* Run equivalence analysis of CallOps before the main analysis. This is needed so that equivalence info is propagated properly.
Differential Revision: https://reviews.llvm.org/D123208
It appears that the DialectRegistry::addExtension template was never
instantiated because it contains an obvious compilation error. Fix it.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D123199
* Store bbArg indices instead of BlockArguments, so that args can be changed during bufferization.
* Use type aliases for better readability.
Differential Revision: https://reviews.llvm.org/D123191
https://reviews.llvm.org/D122641 introduced fixes to the ExpandShapeOp verifier
but also introduced an artificial layout limitation that prevents the consideration of transposed layouts.
This revision fixes the omissions and reimplements the logic using saturated arithmetic, which is more
idiomatic and avoids leaking internal implementation details.
Tests cases are added for transposed layouts.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D122845
Returning `std::array<uint8_t, N>` instead of a `StringRef` provides better ergonomics for using the hashing functions:
* When returning `StringRef`, client code is "jumping through hoops" to do string manipulations instead of dealing with fixed array of bytes directly, which is more natural
* Returning `std::array<uint8_t, N>` avoids the need for the hasher classes to keep a field just for the purpose of wrapping it and returning it as a `StringRef`
As part of this patch also:
* Introduce `TruncatedBLAKE3` which is useful for using BLAKE3 as the hasher type for `HashBuilder` with non-default hash sizes.
* Make `MD5Result` inherit from `std::array<uint8_t, 16>` which improves & simplifies its API.
Differential Revision: https://reviews.llvm.org/D123100