The current OpBuilder has a set of virtual functions only because PatternRewriter inherits from it for convenience; the PatternRewriter needs to know about IR mutations for correctness. This revision makes the relationship explicit by having users register a listener with the builder instead of relying on inheritance/vtables. Users still need to properly transfer the listener when creating new builders, but this has several benefits:
* More than one builder can be created during pattern rewrites (assuming the listener is properly forwarded).
* OpBuilder no longer requires a vtable, and thus does not incur that cost when a listener isn't present.
Differential Revision: https://reviews.llvm.org/D79206
Summary:
Maps ZeroExtendIOp and TruncateIOp to spirv::UConvertOp and spirv::SConvertOp.
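A minimal sketch of the intended conversion (hypothetical IR; the exact assembly may differ slightly):
```
// standard dialect:
%a = zexti %x : i16 to i32
%b = trunci %y : i32 to i16
// after conversion (roughly):
%a2 = spv.UConvert %x : i16 to i32
%b2 = spv.SConvert %y : i32 to i16
```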
Depends On D78974
Differential Revision: https://reviews.llvm.org/D79143
Summary:
The current implementation in SPIRVTypeConverter just unconditionally turns
everything into 32-bit if it doesn't meet the requirements of extensions or
capabilities. In this case, we can load a 32-bit value and then do bit
extraction to get the value.
Differential Revision: https://reviews.llvm.org/D78974
- Extract common logic between -convert-gpu-to-nvvm and -convert-gpu-to-rocdl.
- Cope with the fact that alloca operates on different addrspaces between NVVM
and ROCDL.
- Modernize unit tests for ROCDL dialect.
Differential Revision: https://reviews.llvm.org/D79021
Summary:
This revision cleans up a layer of complexity in ScopedContext and uses InsertGuard instead of previously manual bookkeeping.
The method `getBuilder` is renamed to `getBuilderRef` and spurious copies of OpBuilder are tracked.
This results in some canonicalizations not happening anymore in the Linalg matmul to vector test. This test is retired because relying on DRRs for this has been shaky at best. The solution will be better support to write fused passes in C++ with more idiomatic pattern composition and application.
Differential Revision: https://reviews.llvm.org/D79208
This revision adds support to allow named ops to lower to loops.
linalg.batch_matmul successfully lowers to loops and to LLVM.
In the process, the test also activates lowering of Linalg to affine loops.
However, padded convolutions do not lower to affine.load at the moment, so this revision overrides the type of the underlying load / store operation.
Differential Revision: https://reviews.llvm.org/D79135
There are three op conversion modes: Partial, Full, and Analysis. This change modifies the Partial mode to optionally take a set of non-legalizable ops. If this parameter is specified, all ops that are not legalizable (i.e. would cause full conversion to fail) are tracked throughout the partial legalization.
Differential Revision: https://reviews.llvm.org/D78788
This commit marks AllocLikeOp as MemAlloc in StandardOps.
Also, the Linalg dependency analysis now uses memory effects to detect
allocations. This allows the dependency analysis to be more general and
to recognize other allocation-like operations.
Differential Revision: https://reviews.llvm.org/D78705
Summary:
The purpose of this is to aid in having code behave differently on
Operations based on their Dialect without caring about the specific
Op. Additionally this is consistent with most other types supporting
isa<> and dyn_cast<>.
A Dialect matches isa<> based only on its namespace and relies on each
namespace being unique.
Differential Revision: https://reviews.llvm.org/D79088
This revision allows masked vector transfers with m-D buffers and n-D vectors to
progressively lower to m-D buffer and 1-D vector transfers.
For a vector.transfer_read, assuming a `memref<(leading_dims) x (major_dims) x (minor_dims) x type>` and a `vector<(minor_dims) x type>` are involved in the transfer, this generates pseudo-IR resembling:
```
if (any_of(%ivs_major + %offsets, <, major_dims)) {
  %v = vector_transfer_read(
         {%offsets_leading, %ivs_major + %offsets_major, %offsets_minor},
         %ivs_minor) :
       memref<(leading_dims) x (major_dims) x (minor_dims) x type>,
       vector<(minor_dims) x type>;
} else {
  %v = splat(vector<(minor_dims) x type>, %fill)
}
```
Differential Revision: https://reviews.llvm.org/D79062
This range allows for performing many different operations on successor operands, including erasing/adding/setting. This removes the need for the explicit canEraseSuccessorOperand and eraseSuccessorOperand methods.
Differential Revision: https://reviews.llvm.org/D79077
Currently a declaration won't be generated if the method has a default implementation, meaning that operations that want to override the default have to explicitly declare the method in extraClassDeclaration. This revision adds an optional list parameter to DeclareOpInterfaceMethods to allow for specifying a set of methods that should always have declarations generated, even if there is a default.
Differential Revision: https://reviews.llvm.org/D79030
This provides a general hash and comparison for checking if two operations are equivalent. This revision also optimizes the handling of result types to take advantage of how result types are stored on the operation.
Differential Revision: https://reviews.llvm.org/D79029
This class allows for mutating an operand range in-place, and provides a vector-like API for adding/erasing/setting operands. ODS now uses this class to generate mutable wrappers for named operands, with the name `MutableOperandRange <operand-name>Mutable()`.
Differential Revision: https://reviews.llvm.org/D78892
The current implementation uses CrashRecoveryContext, but this only supports recovering in a certain number of cases. This revision adds a signal handler to support even more situations.
This revision was able to properly generate a reproducer for a segfault in the Inliner that the existing recovery couldn't handle.
Differential Revision: https://reviews.llvm.org/D78315
This revision adds a mode to the crash reproducer generator to attempt to generate a more local reproducer. This will attempt to generate a reproducer right before the offending pass that fails. This is useful for the majority of failures that are specific to a single pass, and situations where some passes in the pipeline are not registered with a specific tool.
Differential Revision: https://reviews.llvm.org/D78314
This moves the threading check to runOnOperation. This produces a much cleaner interface for the adaptor pass, and will allow for the ability to enable/disable threading in a much cleaner way in the future.
Differential Revision: https://reviews.llvm.org/D78313
Makes the relationship and function clearer. Accordingly rename getAttrList to getMutableAttrDict.
Differential Revision: https://reviews.llvm.org/D79125
Refactor the sorting performed by getWithSorted into a static member function so that callers can get the same sorting behavior.
Differential Revision: https://reviews.llvm.org/D79011
The instructions used to convert std.cmpi cannot have i1 types
according to the SPIR-V specification. A different set of operations is
specified in the SPIR-V spec for comparing boolean types. Enhance the
StandardToSPIRV lowering to target these instructions when the operands
to the std.cmpi operation are of i1 type.
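For illustration (a hedged sketch; the exact assembly may differ), an equality comparison on i1 operands such as the following would now target a SPIR-V logical instruction rather than spv.IEqual, whose operands may not be booleans:
```
%r = cmpi "eq", %a, %b : i1
// lowers to something like:
%r_spv = spv.LogicalEqual %a, %b : i1
```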
Differential Revision: https://reviews.llvm.org/D79049
On certain targets std.subview should be able to take memrefs from non-zero
addrspaces. Improve lowering logic to llvm dialect and amend the tests.
Differential Revision: https://reviews.llvm.org/D79024
Enhance lowering logic and tests so vector.transfer_read and
vector.transfer_write take memrefs on non-zero addrspaces.
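A rough sketch of the newly supported case (hypothetical IR, assuming a 1-D identity permutation map); note the trailing `3` marking a non-zero memory space:
```
%f0 = constant 0.0 : f32
%v = vector.transfer_read %A[%i], %f0
       {permutation_map = affine_map<(d0) -> (d0)>}
     : memref<?xf32, 3>, vector<4xf32>
```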
Differential Revision: https://reviews.llvm.org/D79023
(A previous version of this, dd2c639c3c, was
reverted.)
Introduce op trait PolyhedralScope for ops to define a new scope for
polyhedral optimization / affine dialect purposes, thus generalizing
such scopes beyond FuncOp. Ops to which this trait is attached will
define a new scope for the consideration of SSA values as valid symbols
for the purposes of polyhedral analysis and optimization. Update methods
that check for dim/symbol validity to work based on this trait.
Differential Revision: https://reviews.llvm.org/D79060
OperationHandle mostly existed to mirror the behavior of ValueHandle.
This has become unnecessary and can be retired.
Differential Revision: https://reviews.llvm.org/D78692
Previously, they would only verify `isa<DictionaryAttr>` on such attrs,
which resulted in crashes down the line from code assuming that the
verifier was doing the more thorough check introduced in this patch.
The key change here is for StructAttr to use
`CPred<"$_self.isa<" # name # ">()">` instead of `isa<DictionaryAttr>`.
To test this, introduce struct attrs to the test dialect. Previously,
StructAttr was only being tested by unittests/, which didn't verify how
StructAttr interacted with ODS.
Differential Revision: https://reviews.llvm.org/D78975
Summary:
When creating an operation with
* `AttrSizedOperandSegments` trait
* Variadic operands of only non-buildable types
* assemblyFormat to automatically generate the parser
the `builder` local variable is used, but never declared.
This adds a fix as well as a test for this case as existing ones use buildable types only.
Reviewers: rriddle, Kayjukh, grosser
Reviewed By: Kayjukh
Subscribers: mehdi_amini, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, frgossen, llvm-commits
Tags: #mlir, #llvm
Differential Revision: https://reviews.llvm.org/D79004
Summary:
This change results in tests also being changed to prevent dead
affine.load operations from being folded away during rewrites.
Also move AffineStoreOp and AffineLoadOp to an ODS file.
Differential Revision: https://reviews.llvm.org/D78930
As we start defining more complex Ops, we increasingly see the need for
Ops-with-regions to be able to construct Ops within their regions in
their ::build methods. However, these methods only have access to
Builder, and not OpBuilder. Creating a local instance of OpBuilder
inside ::build and using it fails to trigger the operation creation
hooks in derived builders (e.g., ConversionPatternRewriter). In this
case, we risk breaking the logic of the derived builder. At the same
time, OpBuilder::create, which is by far the largest user of ::build
already passes "this" as the first argument, so an OpBuilder instance is
already available.
Update all ::build methods in all Ops in MLIR and Flang to take
"OpBuilder &" instead of "Builder *". Note the change from pointer and
to reference to comply with the common style in MLIR, this also ensures
all other users must change their ::build methods.
Differential Revision: https://reviews.llvm.org/D78713
We have provided a generic buffer assignment transformation ported from
TensorFlow. This generic transformation pass automatically analyzes the values
and their aliases (also in other blocks) and returns the valid positions for
Alloc and Dealloc operations. To find these positions, the algorithm uses the
block Dominator and Post-Dominator analyses. In our proposed algorithm, we have
considered aliasing, liveness, nested regions, branches, conditional branches,
critical edges, and independence from custom block terminators. This
implementation doesn't support block loops. However, we have considered this in
our design. For this purpose, it is only required to have a loop analysis to
insert Alloc and Dealloc operations outside of these loops in some special
cases.
Differential Revision: https://reviews.llvm.org/D78484
This method has been commented as deprecated for a while. Remove
it and replace all uses with the equivalent getCalledOperand().
I also made a few cleanups in here. For example, removing the use
of getElementType on a pointer when we could just use getFunctionType
from the call.
Differential Revision: https://reviews.llvm.org/D78882
Introduce op trait `PolyhedralScope` for ops to define a new scope for
polyhedral optimization / affine dialect purposes, thus generalizing
such scopes beyond FuncOp. Ops to which this trait is attached will
define a new scope for the consideration of SSA values as valid symbols
for the purposes of polyhedral analysis and optimization. Update methods
that check for dim/symbol validity to work based on this trait.
Differential Revision: https://reviews.llvm.org/D78863
- Adds a folder for integer division by one with the `divi_signed` and `divi_unsigned` ops (see the sketch after this list).
- Creates tests for scalar and tensor versions of these ops.
- Modifies the test in `parallel-loop-collapsing.mlir` so that it doesn't assume division by one will be in the output.
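A minimal sketch of the new fold (hypothetical IR):
```
%c1 = constant 1 : i32
%0 = divi_signed %arg0, %c1 : i32
// the folder replaces uses of %0 with %arg0; divi_unsigned folds analogously
```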
Differential Revision: https://reviews.llvm.org/D78518
This revision adds support for propagating constants across symbol-based callgraph edges. It uses the existing Call/CallableOpInterfaces to detect the dataflow edges, and propagates constants through arguments and out of returns.
Differential Revision: https://reviews.llvm.org/D78592
This provides a much cleaner interface into Symbols, and allows users to start injecting op-specific information. For example, a derived op can now indicate that its symbol can be discarded if it is use_empty. This would let us drop unused external functions, which generally have public visibility.
This revision also adds a new `extraTraitClassDeclaration` field to ODS OpInterface to allow for injecting declarations into the trait class that gets attached to the operations.
Differential Revision: https://reviews.llvm.org/D78522
Many ops with this trait have `getBody()` and `getBodyBuilder()` methods defined in `extraClassDeclaration` in tablegen. The `getBody()` implementation is the same across all these ops, but `getBodyBuilder()` can return builders with varying insertion points set. In this PR, `getBody()` is moved into the `SingleBlockImplicitTerminator` struct and `getBodyBuilder()` is replaced with `OpBuilder::atBlock(End|Terminator)(op.getBody())`.
Differential Revision: https://reviews.llvm.org/D78864
'From' and 'To' should be reversed. And now we must explicitly
call the registration function given that MLIR moved away from
static registration.
Differential Revision: https://reviews.llvm.org/D78934
Instead of using llvm_unreachable to guard against fusing linalg.conv,
reject fusing linalg.conv in isFusableInto.
tileLinalgOpImpl is a templated function now and it can operate on
loop.parallel. So we should avoid calling into getForInductionVarOwner,
which always assumes loop.for.
Differential Revision: https://reviews.llvm.org/D78936
Summary: This revision extends the lowering of vector transfers to work with n-D memref and 1-D vector where the permutation map is an identity on the most minor dimensions (1 for now).
Differential Revision: https://reviews.llvm.org/D78925
Summary:
Order classes by purpose and alphabetically to make it slightly easier
to read through the file.
Reviewers: ftynse!
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78914
The latest changes to the Liveness analysis caused a warning related to an
unused variable. This commit fixes that warning.
Differential Revision: https://reviews.llvm.org/D78912
Summary:
Previously operations like std.load created methods for obtaining their
effects but did not inherit from the SideEffect interfaces when their
parameters were decorated with the information. The resulting situation
was that passes had no information on the SideEffects of std.load/store
and had to treat them more cautiously. This adds the inheritance
information when creating the methods.
As a side effect, many tests are modified, as they were using std.load
for testing and this operation would be folded away as part of pattern
rewriting. Tests are modified to use store or to return the result of the
std.load.
Reviewers: mravishankar, antiagainst, nicolasvasilache, herhut, aartbik, ftynse!
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, bader, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78802
Summary:
These attributes have been replaced by the gpu.module and gpu.func
operations, respectively.
Differential Revision: https://reviews.llvm.org/D78803
Certain classes of operations, such as FuncOp, are known to never have operands. This revision adds a bit to operation to detect this case and avoid allocating the unnecessary operand storage. This saves 1 word for each instance of these operations.
Differential Revision: https://reviews.llvm.org/D78876
This revision refactors the structure of the operand storage such that there is no additional memory cost for resizable operand lists until it is required. This is done by using two different internal representations for the operand storage:
* One using trailing operands
* One using a dynamically allocated std::vector<OpOperand>
This allows for removing the resizable operand list bit, and will free up APIs from needing to workaround non-resizable operand lists.
Differential Revision: https://reviews.llvm.org/D78875
This revision optimizes resizable operand lists by allocating them in the same location as the trailing operands. This has numerous benefits:
* If the operation has at least one operand at construction time, there is 0 additional memory overhead to the resizable storage.
* There is less pointer arithmetic necessary as the resizable storage is now only used when the operands are dynamically allocated.
Differential Revision: https://reviews.llvm.org/D78854
Summary:
This introduces a new SourceMgr::FindLocForLineAndColumn method that
uses the OffsetCache in SourceMgr::SrcBuffer to do a constant time
lookup for the line number (once the cache is populated).
Use this method in MLIR's SourceMgrDiagnosticHandler::convertLocToSMLoc,
replacing the O(n) scanning logic. This resolves a long standing TODO
in MLIR, and makes one of my usecases go dramatically faster (which is
currently producing many diagnostics in a 40MB SourceBuffer).
NFC, this is just a performance speedup and cleanup.
Reviewers: rriddle!, ftynse!
Subscribers: hiraditya, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78868
LinalgOps.cpp:232:71: error: specialization of 'template<class GenericOpType> static mlir::LogicalResult {anonymous}::BlockArgsVerifier<GenericOpType>::verify(GenericOpType, mlir::Block&)' in different namespace [-fpermissive]
`addArgument()` is not undoable and should not be used in a
ConversionPattern; therefore replace `splitBlock()` with
`createBlock()`, which creates a block with the specified arguments.
Differential Revision: https://reviews.llvm.org/D78731
Summary: There was a memory corruption issue where the ArrayRef<StringRef> outlived the data it referenced. Directly passing the data will avoid the issue.
Reviewers: rriddle
Reviewed By: rriddle
Subscribers: mehdi_amini, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78850
Summary: Added support for sparse string elements. This is a follow-up to the original DenseStringElements.
Differential Revision: https://reviews.llvm.org/D78844
Fix affine dialect documentation on valid dimensional values: they also
include affine.parallel IVs.
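For instance (a small hypothetical example), the affine.parallel induction variable is a valid dimensional value inside its body:
```
affine.parallel (%i) = (0) to (10) {
  %v = affine.load %A[%i] : memref<10xf32>
}
```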
Differential Revision: https://reviews.llvm.org/D78855
Summary:
* Follows the convention of the tablegen-generated dialects.
* Ensures that vague linkage rules place the definitions in the dialect's object files.
* Allows code that uses RTTI to include MLIR headers (compiled without RTTI) without
type_info link errors.
Reviewers: rriddle
Reviewed By: rriddle
Subscribers: mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, Joonsoo, grosul1, frgossen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78039
- Implement a first constant fold for shape.shape_of (more ops coming in subsequent patches)
- Implement the right builder interfaces for ShapeType and other types
- Splits shape.constant into shape.const_size and shape.const_shape which plays better with dyn_cast and building vs one polymorphic op.
Also, fix the RUN line in ops.mlir to properly verify round-tripping.
The current implementation of this method performs the replacement directly, and thus doesn't support proper backtracking.
Differential Revision: https://reviews.llvm.org/D78790
The elements of a DictionaryAttr are sorted by name. In many situations, e.g NamedAttributeList, we can guarantee that the elements are sorted on construction and remove the need to perform extra checks. In places with lots of calls to attribute methods, this leads to a good performance improvement.
Differential Revision: https://reviews.llvm.org/D78781
Summary:
This is to specify that ParallelOp does not have side effects on its own
but has the effects of all operations executed in its region.
Differential Revision: https://reviews.llvm.org/D78707
This can help provide a common interface for view-like
ops so that for example Linalg's dependency analysis
can avoid relying on concrete ops.
Differential Revision: https://reviews.llvm.org/D78645
Now both Operation::operand_range and Operation::result_range have
.begin() and .end() for range-based for loops, and we have
ValueRange for wrapping a single Value. We can remove the SmallVector
materialization!
Differential Revision: https://reviews.llvm.org/D78766
Ensure that `gpu.func` is only used within the dedicated `gpu.module`.
Implement the constraint in the GPU dialect and adapt the test cases.
Differential Revision: https://reviews.llvm.org/D78541
Summary:
Implemented a DenseStringElements attr for handling arrays / tensors of strings. This includes the
necessary logic for parsing and printing the attribute from MLIR's text format.
To store the attribute we perform a single allocation that includes all wrapped string data tightly packed.
This means no padding characters and no null terminators (as they could be present in the string). The
buffer starts with a chunk of data that represents an array of StringRefs, which contain address pointers
into the string data along with the length of each wrapped string. At this point there is no sparse
representation; however, strings are not typically represented sparsely.
Differential Revision: https://reviews.llvm.org/D78600
This revision removes the multi use-list to ensure that each result gets its own. This decision was made by doing some extensive benchmarking of programs that actually use multiple results. This results in a size increase of 1-word per result >1, but the common case of 1-result remains unaffected. A side benefit is that 0-result operations now shrink by 1-word.
Differential Revision: https://reviews.llvm.org/D78701
The implementation of fusion on tensor was initially planned for just
GenericOps (and maybe IndexedGenericOps). With addition of
linalg.tensor_reshape, and potentially other such non-structured ops,
refactor the existing implementation to allow easier specification of
fusion between different linalg operations on tensors.
Differential Revision: https://reviews.llvm.org/D78463
367229e100 retired ValueHandle but
mistakenly removed the implementation for `negate` which was not
tested and would result in linking errors.
This revision adds the implementation back and provides a test.
The current Liveness analysis does not support operations with nested regions.
This causes issues when querying liveness information about blocks nested within
operations. Furthermore, the live-in and live-out sets are not computed properly
in these cases.
Differential Revision: https://reviews.llvm.org/D77714
It currently requires that the condition match the shape of the selected value, but this is only really useful for things like masks. This revision allows for the use of i1 to mean that all of the vector/tensor is selected. This also matches the behavior of LLVM select. A benefit of this change is that transformations that want to generate selects, like those on the CFG, don't have to special case vector/tensor. Previously the only way to generate a select from an i1 was to use a splat, but that doesn't support dynamically shaped/unranked tensors.
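A rough sketch of the new behavior (pseudo-IR; the exact select assembly is elided):
```
// %cond : i1, %a and %b : vector<4xf32>
%r = select %cond, %a, %b : vector<4xf32>
// previously the condition had to match the value shape, e.g. obtained via
//   %m = splat %cond : vector<4xi1>
```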
Differential Revision: https://reviews.llvm.org/D78690
This revision adds support for canonicalizing the following:
```
br ^bb1
^bb1
  br ^bbN(...)

// canonicalizes to:
br ^bbN(...)
```
Differential Revision: https://reviews.llvm.org/D78683
This revision adds support for canonicalizing the following:
```
cond_br %cond, ^bb1(A, ..., N), ^bb1(A, ..., N)

// canonicalizes to:
br ^bb1(A, ..., N)
```
If the operands to the successor are different and the cond_br is the only predecessor, we emit selects for the branch operands.
```
cond_br %cond, ^bb1(A), ^bb1(B)

// canonicalizes to:
%select = select %cond, A, B
br ^bb1(%select)
```
Differential Revision: https://reviews.llvm.org/D78682
Summary:
This test is in a different file because it contains a literal NUL
character, which causes various tools to treat it as a binary file.
Hence it is useful to have this test kept in a separate, rarely-changing
file.
Differential Revision: https://reviews.llvm.org/D78689
Summary:
Use a nested symbol to identify the kernel to be invoked by a `LaunchFuncOp` in the GPU dialect.
This replaces the two attributes that were used to identify the kernel module and the kernel within separately.
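Roughly, the attribute change looks as follows (a hedged sketch of the attributes only; the names are from memory and the surrounding assembly is elided):
```
// before: two separate attributes on gpu.launch_func
//   kernel = "kernel_1", kernel_module = @kernels
// after: a single nested symbol reference
//   kernel = @kernels::@kernel_1
```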
Differential Revision: https://reviews.llvm.org/D78551
Summary:
Use the shortcut `kernel` for the `gpu.kernel` attribute of `gpu.func`.
The parser supports this, and test cases are easier to read.
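A small sketch of what such a test can look like (assuming the usual gpu.module / gpu.func structure):
```
gpu.module @kernels {
  gpu.func @kernel_1(%arg0: f32) kernel {
    gpu.return
  }
}
```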
Differential Revision: https://reviews.llvm.org/D78542
Summary:
Fix a broken test case in the `invalid.mlir` lit test case.
`expect` was missing its `e`.
Differential Revision: https://reviews.llvm.org/D78540
We also need to lock the LLVMDialect mutex when initializing
LLVM targets or destroying llvm modules concurrently. Added another
scoped lock to that effect.
Differential Revision: https://reviews.llvm.org/D78580
The buffer allocated by a promotion can be subject to other transformations afterward. For example, it could be vectorized, in which case we need to ensure that this buffer is memory-aligned.
Differential Revision: https://reviews.llvm.org/D78556
This revision is the first in a set of improvements that aim at allowing
more generalized named Linalg op generation from a mathematical
specification.
This revision allows creating a new op and checks that the parser,
printer and verifier are hooked up properly.
This opened up a few design points that will be addressed in the future:
1. A named linalg op has a static region builder instead of an
explicitly parsed region. This is not currently compatible with
assemblyFormat so a custom parser / printer are needed.
2. The convention for structured ops and tensor return values needs to
evolve to allow tensor-land and buffer land specifications to agree
3. ReferenceIndexingMaps and referenceIterators will need to become
static to allow building attributes at parse time.
4. Error messages will be improved once we have 3. and we pretty print
in custom form.
Differential Revision: https://reviews.llvm.org/D78327
Unfortunately FileCheck ignores directives with whitespace between the directive and the colon (`CHECK :` for example), thus most of the directives of this test were ignored.
Differential Revision: https://reviews.llvm.org/D78548
This is possible by adding two new ControlFlowInterface additions:
- A new interface, RegionBranchOpInterface
This interface allows for region holding operations to describe how control flows between regions. This interface initially contains two methods:
* getSuccessorEntryOperands
Returns the operands of this operation used as the entry arguments when entering the region at `index`, which was specified as a successor by `getSuccessorRegions`. These operands should correspond 1-1 with the successor inputs specified in `getSuccessorRegions`, and may be a subset of the entry arguments for that region.
* getSuccessorRegions
Returns the viable successors of a region, or the possible successor when branching from the parent op. This allows for describing which regions may be executed when entering an operation, and which regions are executed after having executed another region of the parent op. For example, a structured loop operation may always enter into the loop body region. The loop body region may branch back to itself, or exit to the operation.
- A trait, ReturnLike
This trait signals that a terminator exits a region and forwards all of its operands as "exiting" values.
These additions allow for performing more general dataflow analysis in the presence of region holding operations.
Differential Revision: https://reviews.llvm.org/D78447
This revision adds the initial pass for performing SCCP generically in MLIR. SCCP is an algorithm for propagating constants across control flow, and optimistically assumes all values to be constant unless proven otherwise. It currently supports branching control, with support for regions and inter-procedural propagation being added in followups.
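A minimal sketch of the kind of propagation this enables (hypothetical IR):
```
%c1 = constant 1 : i32
cond_br %cond, ^bb1(%c1 : i32), ^bb2(%c1 : i32)

^bb1(%x: i32):
  // SCCP proves %x is always the constant 1 here and can replace its uses.
  %y = addi %x, %c1 : i32   // folds to the constant 2
```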
Differential Revision: https://reviews.llvm.org/D78397
The promotion transformation currently promotes all input and output buffers of the transformed op. The user might want to promote only some of these buffers.
Differential Revision: https://reviews.llvm.org/D78498
The previous code resulted in a mismatch between block argument types and
the successor operands of predecessors when a type conversion was needed
in a multi-block case. It was assuming the replaced result types matched
the region result types.
Also, slightly improve the debug output from the inliner.
Differential Revision: https://reviews.llvm.org/D78415
Summary:
Generate a method that produces a DictionaryAttr with the attribute values of
an op's derived attributes. If a conversion back from the derived attribute C++
type to Attribute is not defined, then attempting to materialize such an
op's derived attributes results in a runtime failure.
This allows treating derived attributes and regular attributes of an op in a
more uniform manner where needed. The derived attributes are not added to the
operation but are returned as a new attribute instead.
Differential Revision: https://reviews.llvm.org/D78302
Fix intra-tile upper bound setting in a scenario where the tile size was
larger than the trip count.
Differential Revision: https://reviews.llvm.org/D78505
memref types with dynamic dimensions do not have a compile-time
known size. They should be mapped to SPIR-V runtime array types.
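Roughly, the intended mapping (a hedged sketch; the actual SPIR-V types also involve the usual struct/pointer wrapping and strides):
```
// memref<8xf32>  ->  fixed-size  !spv.array<8 x f32>
// memref<?xf32>  ->  runtime     !spv.rtarray<f32>
```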
Differential Revision: https://reviews.llvm.org/D78197
Update misleading line in conclusions. Although the application to IR
objects is stated earlier, the concluding section contradicts it in
isolation.
Differential Revision: https://reviews.llvm.org/D78446
Rename mlir::tileCodeGen -> mlir::tilePerfectlyNested to be consistent.
NFC clean up tiling utility code, drop dead code, better comments.
Expose isPerfectlyNested and reuse.
Differential Revision: https://reviews.llvm.org/D78423
Summary:
Workgroup size is written into the kernel. So to properly model a Vulkan
launch, we have to skip the local workgroup size for the vulkan launch
call op.
Differential Revision: https://reviews.llvm.org/D78307
Summary:
The tests referred to in Chapter 3 of the tutorial were missing from the tutorial test
directory; this adds those missing tests. This also cleans up some stale directory paths and code
snippets used throughout the tutorial.
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, aartbik, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D76809
Summary:
Rather than having a full, recursive, lowering of vector.broadcast
to LLVM IR, it is much more elegant to have a progressive lowering
of each vector.broadcast into a lower dimensional vector.broadcast,
until only elementary vector operations remain. This results
in step-wise code that is easier to understand.
Also makes some optimizations in the generated code.
Reviewers: nicolasvasilache, mehdi_amini, andydavis1, grosul1
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, frgossen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78071
add_llvm_library() sometimes needs access to the dependencies in order to
generate new targets. Using DEPENDS allows this.
Differential Revision: https://reviews.llvm.org/D78321
Libraries declared as target_link_libraries() do not also need
to be declared as dependencies using add_dependencies().
Differential Revision: https://reviews.llvm.org/D78320