This revision allows for attaching "debug labels" to patterns, and adds support to FrozenRewritePatternSet for filtering patterns based on these labels (in addition to the debug name of the pattern). This will greatly simplify writing tests targeted at specific patterns (in cases where many patterns may interact), and will also simplify debugging pattern application by observing how application changes when enabling/disabling specific patterns.
To enable better reuse of pattern rewrite options between passes, this revision also adds a new PassUtil.td file to the Rewrite/ library that will allow for passes to easily hook into a common interface for pattern debugging. Two options are used to seed this utility, `disable-patterns` and `enable-patterns`, which are used to enable the filtering behavior indicated above.
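As a rough sketch of how the filtering could be driven (assuming the usual MLIR headers and namespaces; `MyPatternA`/`MyPatternB` and `context` are hypothetical):

```cpp
RewritePatternSet patterns(context);
patterns.add<MyPatternA, MyPatternB>(context);

// Filtering happens when freezing the set: patterns whose debug name or
// debug label appears in `disabled` are dropped; if `enabled` is non-empty,
// only matching patterns are kept.
std::vector<std::string> disabled = {"MyPatternA"};
std::vector<std::string> enabled;
FrozenRewritePatternSet frozen(std::move(patterns), disabled, enabled);
```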
Differential Revision: https://reviews.llvm.org/D102441
I backed this off to make the previous patch easier to wrangle, but now
this is an efficient query and it is better to not replace it in CSE.
Differential Revision: https://reviews.llvm.org/D103494
The previous impl densely scanned the entire region starting with an op
when dominators were created, creating a DominatorTree for every region.
This is extremely expensive up front -- particularly for clients like
Linalg/Transforms/Fusion.cpp that construct DominanceInfo for a single
query. It is also extremely memory wasteful for IRs that use single
block regions commonly (e.g. affine.for) because it's making a
dominator tree for a region that has trivial dominance. The
implementation also had numerous minor inefficiencies, e.g.
doing multiple walks of the region tree or tryGetBlocksInSameRegion
building a DenseMap that it didn't need.
This patch switches to an approach where [Post]DominanceInfo is free
to construct, and which lazily constructs DominatorTree's for any
multiblock regions that it needs. This avoids the up-front cost
entirely, making its runtime proportional to the complexity of the
region tree instead of # ops in a region. This also avoids the memory
and time cost of creating DominatorTree's for single block regions.
Finally, this rewrites the implementation for simplicity and to avoid
the constant-factor problems the old implementation had.
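A minimal sketch of the lazy scheme (not the actual implementation; the cache shape here is an assumption):

```cpp
using DomTree = llvm::DominatorTreeBase<Block, /*IsPostDom=*/false>;
llvm::DenseMap<Region *, std::unique_ptr<DomTree>> domTrees;

DomTree &getDomTree(Region *region) {
  // Built on first query only; callers short-circuit single-block regions
  // (region->hasOneBlock()) since their dominance is trivial.
  std::unique_ptr<DomTree> &tree = domTrees[region];
  if (!tree) {
    tree = std::make_unique<DomTree>();
    tree->recalculate(*region);
  }
  return *tree;
}
```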
Differential Revision: https://reviews.llvm.org/D103384
CSE is the only client of this API; refactor it a bit to pull the query
internal, making changes to DominanceInfo easier. This commit also
improves some comments.
* Add `hasCanonicalizer` option to Dialect.
* Initialize canonicalizer with dialect-wide canonicalization patterns.
* Add test case to TestDialect.
Dialect-wide canonicalization patterns are useful if a canonicalization pattern does not conceptually associate with any single operation, i.e., it should not be registered as part of an operation's `getCanonicalizationPatterns` function. E.g., this is the case for canonicalization patterns that match an op interface.
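A sketch of what opting in looks like (`MyDialect` and the pattern are hypothetical):

```cpp
// Enabled in ODS with `let hasCanonicalizer = 1;` on the dialect, which
// declares this hook.
void MyDialect::getCanonicalizationPatterns(RewritePatternSet &results) const {
  // A natural home for patterns keyed on an op interface or trait rather
  // than a single concrete operation.
  results.add<MyInterfaceBasedPattern>(getContext());
}
```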
Differential Revision: https://reviews.llvm.org/D103226
This allows C++ clients of the Canonicalize pass to specify their own
Config option struct to control how Canonicalize works, increasing reusability.
This also allows controlling these settings for the default Canonicalize pass
using command line options. This is useful for testing and for playing with
things on the command line.
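For example, a sketch of the C++ usage (field and function names as of this change; `pm` is an assumed PassManager):

```cpp
GreedyRewriteConfig config;
config.useTopDownTraversal = true;   // process ops top-down
config.maxIterations = 5;            // cap the rewrite iterations
pm.addPass(createCanonicalizerPass(config));
```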
Differential Revision: https://reviews.llvm.org/D103069
This exposes the iterations and top-down processing as flags, and also
allows controlling whether region simplification is desirable for a client.
This allows deleting some duplicated entrypoints to
applyPatternsAndFoldGreedily.
This also deletes the Constant Preprocessing pass, which isn't worth it
on balance.
All defaults are kept the same, so no one should see a behavior change.
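A sketch of the consolidated entry point (`op` and `frozenPatterns` are assumed to exist):

```cpp
GreedyRewriteConfig config;
config.enableRegionSimplification = false;  // this client opts out
// One entry point instead of several near-duplicate overloads.
(void)applyPatternsAndFoldGreedily(op, frozenPatterns, config);
```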
Differential Revision: https://reviews.llvm.org/D102988
Steps for normalizing dynamic memrefs with a tiled layout map:
1. Check if original map is tiled layout. Only tiled layout is supported.
2. Create normalized memrefType. Dimensions that include dynamic dimensions
in the map output will be dynamic dimensions.
3. Create new maps to calculate each dimension size of new memref.
In tiled layout, the dimension size can be calculated by replacing
"floordiv <tile size>" with "ceildiv <tile size>" and
"mod <tile size>" with "<tile size>".
4. Create AffineApplyOp to apply the new maps. The output of AffineApplyOp is
dynamicSizes for new AllocOp.
5. Add the new dynamic sizes in new AllocOp.
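As a worked illustration of step 3, consider a 1-D memref with tile size 4 (a sketch assuming an MLIRContext `ctx`; not code from this patch):

```cpp
// Layout: memref<?xf32, (d0) -> (d0 floordiv 4, d0 mod 4)> normalizes to
// memref<?x4xf32>, whose sizes follow step 3:
//   dim 0: "d0 floordiv 4" becomes "d0 ceildiv 4" -> size0 = ceildiv(N, 4)
//   dim 1: "d0 mod 4" becomes the tile size       -> size1 = 4
AffineExpr d0 = getAffineDimExpr(0, ctx);
AffineMap size0Map = AffineMap::get(1, 0, d0.ceilDiv(4));
// Per steps 4 and 5, an AffineApplyOp applying size0Map to the original
// dynamic size N produces the dynamic size operand of the new AllocOp.
```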
This patch also sets the MemRefsNormalizable trait on CastOp and DimOp since
they are used with dynamic memrefs.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D97655
The previous implementation did not handle casting behavior properly and
did not consider aliases.
Differential Revision: https://reviews.llvm.org/D102785
During affine loop fusion, create private memrefs for escaping memrefs
too under the conditions that:
-- the source is not removed after fusion, and
-- the destination does not write to the memref.
This creates more fusion opportunities as illustrated in the test case.
Reviewed By: bondhugula, ayzhuang
Differential Revision: https://reviews.llvm.org/D102604
Splitting the memref dialect led to the introduction of several dependencies
to avoid compilation issues. The canonicalize pass also depends on the
memref dialect, but it shouldn't. This patch resolves the dependencies
and the unintuitive includes are removed. However, the dependency moves
to the constructor of the std dialect.
Differential Revision: https://reviews.llvm.org/D102060
The current design uses a unique entry for each argument/result attribute, with the name of the entry being something like "arg0". This provides for a somewhat sparse design, but ends up being much more expensive (from a runtime perspective) in practice. The design requires building a string every time we lookup the dictionary for a specific arg/result, and also requires N attribute lookups when collecting all of the arg/result attribute dictionaries.
This revision restructures the design to instead have an ArrayAttr that contains all of the attribute dictionaries for arguments and another for results. This design reduces the number of attribute name lookups to 1, and allows for O(1) lookup for individual element dictionaries. The major downside is that we can end up with larger memory usage, as the ArrayAttr contains an entry for each element even if that element has no attributes. If the memory usage becomes too problematic, we can experiment with a more sparse structure that still provides a lot of the wins in this revision.
This dropped the compilation time of a somewhat large TensorFlow model from ~650 seconds to ~400 seconds.
Differential Revision: https://reviews.llvm.org/D102035
This is to make the difference between this and
an AliasAnalysis clearer.
For example, given a sequence of subviews that create values
A -> B -> C -> D:
BufferViewFlowAnalysis::resolve(B) => {B, C, D}
AliasAnalysis::resolve(B) => {A, B, C, D}
Differential Revision: https://reviews.llvm.org/D100838
This revision takes the forward value propagation engine in SCCP and refactors it into a more generalized forward dataflow analysis framework. This framework allows for propagating information about values across the various control flow constructs in MLIR, and removes the need for users to reinvent the traversal (often not as completely). There are a few aspects of the traversal that were conservative for SCCP and that should be relaxed to support the needs of different value analyses. To keep this revision simple, these conservative behaviors are left in (note that this won't produce an incorrect result, but may produce more conservative results than necessary in certain edge cases, e.g. region entry arguments for non-region-branch-interface operations). The framework also only focuses on computing lattices for values, given the SCCP origins, but this is something to relax as needed in the future.
Given that this logic is already in SCCP, a majority of this commit is NFC. The more interesting parts are the interface glue that clients interact with.
Differential Revision: https://reviews.llvm.org/D100915
This patch introduces the necessary infrastructure changes to implement
cost-modelling for detensoring. In particular, it introduces the
following changes:
- An extension to the dialect conversion framework to selectively
convert a sub-set of non-entry BB arguments.
- An extension to the branch conversion pattern to selectively convert
a sub-set of a branch's operands.
- An interface for detensoring cost-modelling.
- Two simple implementations of two different cost models.
This sets the stage for exploring cost-modelling for detensoring in an
easier way. We still need to come up with better cost models.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D99945
`ConversionPatternRewriter::applySignatureConversion` did not have a way
to apply a signature conversion that involved materializations.
Differential Revision: https://reviews.llvm.org/D99782
Setting up the builder from a block requires looking up a parent operation
to get a context. By instead setting up the builder with an explicit
context, we can support invoking this helper in the absence of a parent
operation.
Fixes a bug in affine fusion pipeline where an incorrect slice is computed.
After the slice computation is done, the original domain of the source is
compared with the new domain that will result if the fusion succeeds. The
new domain must be a subset of the original domain for the slice to be
valid. If the slice computed is incorrect, fusion based on such a slice is
avoided.
Relevant test cases are added/edited.
Fixes https://bugs.llvm.org/show_bug.cgi?id=49203
Differential Revision: https://reviews.llvm.org/D98239
If an operation has been inserted as a key into the known-values
hashtable, then it cannot be modified in a way that changes its hash.
This change avoids modifying the operands of any previously recorded
operation, which prevents their hash from changing.
In an SSACFG region, it is impossible to visit an operation before
visiting its operands, so this is not a problem. This situation can only
happen in regions without strict dominance, such as graph regions.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D99486
This allows for the conversion to match `A(B()) -> C()` with a pattern matching
`A` and marking `B` for deletion.
Also add better assertions when an operation is erased while still having uses.
Differential Revision: https://reviews.llvm.org/D99442
Add a new clone operation to the memref dialect. This operation implicitly
copies data from a source buffer to a new buffer. In contrast to the linalg.copy
operation, this operation does not accept a target buffer as an argument.
Instead, this operation performs a conceptual allocation which does not need to
be performed manually.
Furthermore, this operation resolves the BufferDeallocation pass's dependency
on the linalg dialect. In addition, we also extended the canonicalization
patterns to fold clone operations. The copy removal pass has been removed.
Differential Revision: https://reviews.llvm.org/D99172
This reverts commit 361b7d125b by Chris
Lattner <clattner@nondot.org> dated Fri Mar 19 21:22:15 2021 -0700.
The change to the greedy rewriter driver picking a different order was
made without adequate analysis of the trade-offs and experimentation. A
change like this has far reaching consequences on transformation
pipelines, and a major impact upstream and downstream. For example, one
can’t be sure that it doesn’t slow down a large number of cases by small
amounts or create other issues. More discussion here:
https://llvm.discourse.group/t/speeding-up-canonicalize/3015/25
Reverting this so that improvements to the traversal order can be made
on a clean slate, in bigger steps, and with a higher bar.
Differential Revision: https://reviews.llvm.org/D99329
In particular for graph regions, the terminator is just a historical
artifact of the generalization of MLIR from CFG regions. Operations like
Module don't need a terminator, and before Module migrated to being an
operation with a region, none was needed.
To validate the feature, the ModuleOp is migrated to use this trait and
the ModuleTerminator operation is deleted.
This patch is likely to break clients; if you're in one of these cases:
- you may iterate on a ModuleOp with `getBody()->without_terminator()`;
the solution is simple: just remove the `->without_terminator()`!
- you created a builder with `Builder::atBlockTerminator(module_body)`,
just use `Builder::atBlockEnd(module_body)` instead.
- you were handling ModuleTerminator: it isn't needed anymore.
- for generic code, a `Block::mayNotHaveTerminator()` may be used.
Differential Revision: https://reviews.llvm.org/D98468
To match an interface or trait, users currently have to use the `MatchAny` tag. This tag can be quite problematic for compile time for things like the canonicalizer, as the `MatchAny` patterns may get applied to *every* operation. This revision adds better support by bucketing interface/trait patterns based on which registered operations have them registered. This means that moving forward we will only attempt to match these patterns to operations that have this interface registered. To simplify defining patterns that match traits and interfaces, two new utility classes have been added: OpTraitRewritePattern and OpInterfaceRewritePattern.
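A sketch of a pattern using one of the new classes (the pattern itself is hypothetical):

```cpp
struct SimplifyCasts : public OpInterfaceRewritePattern<CastOpInterface> {
  using OpInterfaceRewritePattern::OpInterfaceRewritePattern;

  LogicalResult matchAndRewrite(CastOpInterface op,
                                PatternRewriter &rewriter) const override {
    // Real logic would rewrite `op` here; the driver only ever feeds this
    // pattern operations registered with CastOpInterface.
    return failure();
  }
};
// OpTraitRewritePattern<TraitType> works analogously for traits.
```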
Differential Revision: https://reviews.llvm.org/D98986
This nicely aligns the naming with RewritePatternSet. This type isn't
as widely used, but we keep a using declaration to help with
downstream consumption of this change.
Differential Revision: https://reviews.llvm.org/D99131
This doesn't change APIs; this just cleans up the many in-tree uses of these
names to use the new preferred names. We'll keep the old names around for a
couple weeks to help transitions.
Differential Revision: https://reviews.llvm.org/D99127
GreedyPatternRewriteDriver was changed from bottom-up traversal to top-down traversal. Not all passes work yet with that change in traversal order. To give some time for fixing, add an option that allows switching back to bottom-up traversal. Use this option in FusionOfTensorOpsPass, which fails otherwise.
Differential Revision: https://reviews.llvm.org/D99059
This updates the codebase to pass the context when creating an instance of
OwningRewritePatternList, and starts removing extraneous MLIRContext
parameters. There are many, many more to be removed.
Differential Revision: https://reviews.llvm.org/D99028
This reapplies b5d9a3c / https://reviews.llvm.org/D98609 with a one line fix in
processExistingConstants to skip() when erasing a constant we've already seen.
Original commit message:
1) Change the canonicalizer to walk the function in top-down order instead of
bottom-up order. This composes well with the "top down" nature of constant
folding and simplification, reducing iterations and re-evaluation of ops in
simple cases.
2) Explicitly enter existing constants into the OperationFolder table before
canonicalizing. Previously we would "constant fold" them and rematerialize
them, wastefully recreating a bunch of constants, which led to pointless
memory traffic.
Both changes together provide a 33% speedup for canonicalize on some mid-size
CIRCT examples.
One artifact of this change is that the constants generated in normal pattern
application get inserted at the top of the function as the patterns are applied.
Because of this, we get "inverted" constants more often, which is an aesthetic
change to the IR but does permute some testcases.
Differential Revision: https://reviews.llvm.org/D99006
This allows for notifying callers when operations/blocks get erased, which is especially useful for the greedy pattern driver. The current greedy pattern driver "throws away" all information on constants in the operation folder because it doesn't know if they get erased or not. By passing in RewriterBase, we can directly track this and prevent the need for the pattern driver to rediscover all of the existing constants. In some situations this cuts the compile time of the canonicalizer in half.
Differential Revision: https://reviews.llvm.org/D98755
When deleting operations in DCE, the algorithm uses a post-order walk of
the IR to ensure that value uses are erased before value defs. Graph
regions do not have the same structural invariants as SSA CFG, and this
post order walk could delete value defs before uses. This problem is
guaranteed to occur when there is a cycle in the use-def graph.
This change makes DCE no longer rely on visiting operations and blocks in
any meaningful order. Instead, we explicitly drop all uses of a
value before deleting it.
Reviewed By: mehdi_amini, rriddle
Differential Revision: https://reviews.llvm.org/D98919
This reverts commit b5d9a3c923.
The commit introduced a memory error in canonicalization/operation
walking that is exposed when compiled with ASAN. It leads to crashes in
some "release" configurations.
We know that all ConstantLike operations have one result and no operands,
so check this first before doing the trait check. This change speeds up
Canonicalize on a CIRCT testcase by ~5%.
Differential Revision: https://reviews.llvm.org/D98615
Two changes:
1) Change the canonicalizer to walk the function in top-down order instead of
bottom-up order. This composes well with the "top down" nature of constant
folding and simplification, reducing iterations and re-evaluation of ops in
simple cases.
2) Explicitly enter existing constants into the OperationFolder table before
canonicalizing. Previously we would "constant fold" them and rematerialize
them, wastefully recreating a bunch of constants, which led to pointless
memory traffic.
Both changes together provide a 33% speedup for canonicalize on some mid-size
CIRCT examples.
One artifact of this change is that the constants generated in normal pattern
application get inserted at the top of the function as the patterns are applied.
Because of this, we get "inverted" constants more often, which is an aesthetic
change to the IR but does permute some testcases.
Differential Revision: https://reviews.llvm.org/D98609
The current implementation has some inefficiencies that become noticeable when running on large modules. This revision optimizes the code, and updates some outdated idioms with newer utilities. The main components of this optimization include:
* Add an overload of Block::eraseArguments that allows for O(N) erasure of disjoint arguments.
* Don't process entry block arguments given that we don't erase them at this point.
* Don't track individual operation results, given that we don't erase them. We can just track the parent operation.
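A usage sketch of the new overload (assuming it takes the argument indices to erase):

```cpp
// Erase a disjoint set of block arguments in one pass, rather than paying
// an O(N) shift per individually erased argument.
SmallVector<unsigned, 4> argIndices = {0, 2, 5};
block->eraseArguments(argIndices);
```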
Differential Revision: https://reviews.llvm.org/D98309
This makes it easy to compose the distribution computation with
other affine computations.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D98171
The current implementation of Value involves a pointer-int pair with several different kinds of owners, i.e. BlockArgumentImpl*, Operation*, TrailingOpResult*. This design arose from the desire to save memory overhead for operations that have a very small number of results (generally 0-2). There are, unfortunately, many problematic aspects of the current implementation that make Values difficult to work with or just inefficient.
Operation result types are stored as a separate array on the Operation. This is very inefficient for many reasons: we use TupleType for multiple results, which can lead to huge amounts of memory usage if multi-result operations change types frequently (they do). It also means that simple methods like Value::getType/Value::setType now require complex logic to get to the desired type.
Value only has one pointer bit free, severely limiting the ability to use it in things like PointerUnion/PointerIntPair. Given that we store the kind of a Value along with the "owner" pointer, we only leave one bit free for users of Value. This creates situations where we end up nesting PointerUnions to be able to use Value in one.
As noted above, most of the methods in Value need to branch on at least 3 different cases, which is inefficient, possibly error prone, and verbose. The current storage of results also creates problems for utilities like ValueRange/TypeRange, which want to efficiently store base pointers to ranges (of which Operation* isn't really useful as one).
This revision greatly simplifies the implementation of Value by the introduction of a new ValueImpl class. This class contains all of the state shared between all of the various derived value classes; i.e. the use list, the type, and the kind. This shared implementation class provides several large benefits:
* Most of the methods on value are now branchless, and often one-liners.
* The "kind" of the value is now stored in ValueImpl instead of Value
This frees up all of Value's pointer bits, allowing for users to take full advantage of PointerUnion/PointerIntPair/etc. It also allows for storing more operation results as "inline", 6 now instead of 2, freeing up 1 word per new inline result.
* Operation result types are now stored in the result, instead of a side array
This drops the size of zero-result operations by 1 word. It also removes the memory-crushing use of TupleType for operation results (which could lead to hundreds of megabytes of "dead" TupleTypes in the context). This also allowed restructuring ValueRange, making it simpler and one word smaller.
This revision does come with two conceptual downsides:
* Operation::getResultTypes no longer returns an ArrayRef<Type>
This conceptually makes some usages slower, as the iterator increment is slightly more complex.
* OpResult::getOwner is slightly more expensive, as it now requires a little bit of arithmetic
From profiling, neither of the conceptual downsides have resulted in any perceivable hit to performance. Given the advantages of the new design, most compiles are slightly faster.
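As a small illustration of the freed-up pointer bits (a sketch, not from this patch):

```cpp
// Previously Value had a single free bit, forcing nested PointerUnions;
// with ValueImpl it composes directly in pointer-like containers.
llvm::PointerUnion<Value, Attribute> valueOrAttr;
```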
Differential Revision: https://reviews.llvm.org/D97804
This patch continues detensorizing implementation by detensoring
internal control flow in functions.
In order to detensorize functions, all the non-entry block's arguments
are detensored and branches between such blocks are properly updated to
reflect the detensored types as well. Function entry block (signature)
is left intact.
This continues work towards handling github/google/iree#1159.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D97148
Just a pure method renaming.
It is a preparation step for replacing "memory space as raw integer"
with more generic "memory space as attribute", which will be done in
separate commit.
The `MemRefType::getMemorySpace` method will return `Attribute` and
become the main API, while `getMemorySpaceAsInt` will be declared as
deprecated and will be replaced in all in-tree dialects (also in separate
commits).
Reviewed By: mehdi_amini, rriddle
Differential Revision: https://reviews.llvm.org/D97476
This also exposed a bug in Dialect loading where it was not correctly identifying identifiers that had the dialect namespace as a prefix.
Differential Revision: https://reviews.llvm.org/D97431
Fixes a bug in the affine fusion pipeline where an incorrect fusion is performed
despite a call op that potentially modifies memrefs under consideration
existing between the source and target.
Fixes part of https://bugs.llvm.org/show_bug.cgi?id=49220
Reviewed By: bondhugula, dcaballe
Differential Revision: https://reviews.llvm.org/D97252
This patch handles defining ops between the source and dest loop nests, and prevents loop nests with `iter_args` from being fused.
If there is any SSA value in the dest loop nest whose defining op has a dependence on the source loop nest, we cannot fuse the loop nests.
If there is an `affine.for` with `iter_args`, prevent it from being fused.
Reviewed By: dcaballe, bondhugula
Differential Revision: https://reviews.llvm.org/D97030
This prevents a bug in the pass instrumentation implementation where the main thread would end up with a different pass manager in different runs of the pass.
llvm::parallelTransformReduce does not schedule work on the caller thread, which becomes very costly for
the inliner where a majority of SCCs are small, often ~1 element. The switch to llvm::parallelForEach solves this,
and also aligns the implementation with the PassManager (which realistically should share the same implementation).
This change dropped compile time on an internal benchmark by ~1 second (25%).
Differential Revision: https://reviews.llvm.org/D96086
Affine parallel ops may contain and yield results from MemRefsNormalizable ops in the loop body. Thus, both affine.parallel and affine.yield should have the MemRefsNormalizable trait.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D96821
The current implementation of the tilePerfectlyNested utility doesn't handle
non-unit step sizes. We have added support to perform tiling
correctly even if the step size of the loop to be tiled is non-unit.
Fixes https://bugs.llvm.org/show_bug.cgi?id=49188.
Differential Revision: https://reviews.llvm.org/D97037
This commit fixes a bug in affine fusion pipeline where an
incorrect fusion is performed despite a dealloc op being present
between a producer and a consumer. This is done by creating a
node for dealloc op in the MDG.
Reviewed By: bondhugula, dcaballe
Differential Revision: https://reviews.llvm.org/D97032
This commit introduced a cyclic dependency:
The memref dialect depends on Standard because it uses ConstantIndexOp.
Std depends on the MemRef dialect in its EDSC/Intrinsics.h.
Working on a fix.
This reverts commit 8aa6c3765b.
Create the memref dialect and move several dialect-specific ops without
dependencies on other ops from the std dialect to this dialect.
Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
DeallocOp -> MemRef_DeallocOp
MemRefCastOp -> MemRef_CastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
TransposeOp -> MemRef_TransposeOp
ViewOp -> MemRef_ViewOp
The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
Differential Revision: https://reviews.llvm.org/D96425
Separating the AffineMapAccessInterface from AffineRead/WriteOp interface so that dialects which extend Affine capabilities (e.g. PlaidML PXA = parallel extensions for Affine) can utilize relevant passes (e.g. MemRef normalization).
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D96284
SliceAnalysis originally was developed in the context of affine.for within mlfunc.
It predates the notion of regions.
This revision updates it to not hardcode specific ops like scf::ForOp.
When rooted at an op, the behavior of the slice computation changes as it recurses into the regions of the op. It no longer supports gathering all values transitively depending on a loop induction variable.
Additional variants rooted at a Value are added to also support the existing behavior.
Differential revision: https://reviews.llvm.org/D96702
These properties were useful for a few things before traits had a better integration story, but don't really carry their weight well these days. Most of these properties are already checked via traits in most of the code. It is better to align the system around traits, and improve the performance/cost of traits in general.
Differential Revision: https://reviews.llvm.org/D96088
This makes ignoring a result explicit by the user, and helps to prevent accidental errors with dropped results. Marking LogicalResult as nodiscard was always the intention from the beginning, but got lost along the way.
Differential Revision: https://reviews.llvm.org/D95841
In dialect conversion infrastructure, source materialization applies as part of
the finalization procedure to results of the newly produced operations that
replace previously existing values with values having a different type.
However, such operations may be created to replace operations created in other
patterns. At this point, it is possible that the results of the _original_
operation are still in use and have mismatching types, but the results of the
_intermediate_ operation that performed the type change are not in use, leading
to the absence of source materialization. For example,
%0 = dialect.produce : !dialect.A
dialect.use %0 : !dialect.A
can be replaced with
%0 = dialect.other : !dialect.A
%1 = dialect.produce : !dialect.A // replaced, scheduled for removal
dialect.use %1 : !dialect.A
and then with
%0 = dialect.final : !dialect.B
%1 = dialect.other : !dialect.A // replaced, scheduled for removal
%2 = dialect.produce : !dialect.A // replaced, scheduled for removal
dialect.use %2 : !dialect.A
in the same rewriting, but only the %1->%0 replacement is currently considered.
Change the logic in dialect conversion to look up all values that were replaced
by the given value and perform source materialization if any of those values
is still in use with mismatching types. This is performed by computing the
inverse value replacement mapping. This arguably expensive manipulation is
performed only if there were some type-changing replacements. An alternative
could be to consider all replaced operations and not only those that resulted
in type changes, but it would harm pattern-level composability: the pattern
that performed the non-type-changing replacement would have to be made aware of
the type converter in order to call the materialization hook.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95626
We could extend this with an interface to allow a dialect to perform a type
conversion, but that would make the folder create operations, which isn't
the case at the moment and isn't necessarily always desirable.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95991
In dialect conversion, signature conversions essentially perform block argument
replacement and are added to the general value remapping. However, the replaced
values were not tracked, so if a signature conversion was rolled back, the
construction of operand lists for the following patterns could have obtained
block arguments from the mapping and given them to the pattern, leading to
use-after-free. Keep track of signature conversions similarly to normal block
argument replacement, and erase such replacements from the general mapping when
the conversion is rolled back.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95688
Currently, for an scf.parallel (i, j, k), after the loop is collapsed to 1D, the
IVs would be traversed as for an scf.parallel (k, j, i).
Differential Revision: https://reviews.llvm.org/D95693
This patch adds support for producer-consumer fusion scenarios with
multiple producer stores to the AffineLoopFusion pass. The patch
introduces some changes to the producer-consumer algorithm, including:
* For a given consumer loop, producer-consumer fusion iterates over its
producer candidates until a fixed point is reached.
* Producer candidates are gathered beforehand for each iteration of the
consumer loop and visited in reverse program order (not strictly guaranteed)
to maximize the number of loops fused per iteration.
In general, these changes were needed to simplify the multi-store producer
support and remove some of the workarounds that were introduced in the past
to support more fusion cases under the single-store producer limitation.
This patch also preserves the existing functionality of AffineLoopFusion with
one minor change in behavior. Producer-consumer fusion didn't fuse scenarios
with escaping memrefs and multiple outgoing edges (from a single store).
Multi-store producer scenarios will usually (always?) have multiple outgoing
edges so we couldn't fuse any with escaping memrefs, which would greatly limit
the applicability of this new feature. Therefore, the patch enables fusion for
these scenarios. Please see the modified tests for specific details.
Reviewed By: andydavis1, bondhugula
Differential Revision: https://reviews.llvm.org/D92876
This extracts the implementation of getType, setType, and getBody from
FunctionSupport.h into the mlir::impl namespace and defines them
generically in FunctionSupport.cpp. This allows them to be used
elsewhere for any FunctionLike ops that use FunctionType for their
type signature.
Using the new helpers, FuncOpSignatureConversion is generalized to
work with all such FunctionLike ops. Convenience helpers are added to
configure the pattern for a given concrete FunctionLike op type.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95021
Add a check for regions that do not implement the RegionBranchOpInterface. This is not
allowed in the current deallocation steps. Furthermore, we handle edge cases
where a single region is attached and the parent operation has no results.
This fixes: https://bugs.llvm.org/show_bug.cgi?id=48575
Differential Revision: https://reviews.llvm.org/D94586
This revision adds a new `replaceOpWithIf` hook that replaces uses of an operation that satisfy a given functor. If all uses are replaced, the operation gets erased in a similar manner to `replaceOp`. DialectConversion support will be added in a followup as this requires adjusting how replacements are tracked there.
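A usage sketch (`op`, `newValues`, and `targetBlock` are assumed to exist):

```cpp
// Replace only the uses of `op` located in `targetBlock`; if that happens
// to cover all uses, `op` is erased just like with replaceOp.
rewriter.replaceOpWithIf(op, newValues, [&](OpOperand &operand) {
  return operand.getOwner()->getBlock() == targetBlock;
});
```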
Differential Revision: https://reviews.llvm.org/D94632
This corrects the last 2 issues caught by tests when causing dialect
conversion rollbacks to occur.
Differential Revision: https://reviews.llvm.org/D94623
The getDynOperands functionality is commonly used in a number of passes. Refactored to
use a helper function and avoid code duplication.
Differential Revision: https://reviews.llvm.org/D94340
This revision adds a new `initialize(MLIRContext *)` hook to passes that allows for them to initialize any heavy state before the first execution of the pass. A concrete use case of this is with patterns that rely on PDL, given that PDL is compiled at run time it is imperative that compilation results are cached as much as possible. The first use of this hook is in the Canonicalizer, which has the added benefit of reducing the number of expensive accesses to the context when collecting patterns.
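A sketch of a pass using the new hook (written with the later type names for readability; `MyPass` is hypothetical):

```cpp
struct MyPass : public PassWrapper<MyPass, OperationPass<>> {
  LogicalResult initialize(MLIRContext *context) override {
    // Build expensive, immutable state once, not on every pass run; for
    // PDL-based patterns this is where compilation results get cached.
    RewritePatternSet owned(context);
    // ... populate `owned` ...
    patterns = FrozenRewritePatternSet(std::move(owned));
    return success();
  }
  void runOnOperation() override {
    (void)applyPatternsAndFoldGreedily(getOperation(), patterns);
  }
  FrozenRewritePatternSet patterns;
};
```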
Differential Revision: https://reviews.llvm.org/D93147
This class used to serve a few useful purposes:
* Allowed containing a null DictionaryAttr
* Provided some simple mutable API around a DictionaryAttr
The first of which is no longer an issue now that there is much better caching support for attributes in general, and a cache in the context for empty dictionaries. The second results in more trouble than it's worth because it mutates the internal dictionary on every action, leading to a potentially large number of dictionary copies. NamedAttrList is a much better alternative for the second use case, and should be modified as needed to better fit its usage as a DictionaryAttrBuilder.
Differential Revision: https://reviews.llvm.org/D93442
This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified.
Differential Revision: https://reviews.llvm.org/D93432
The current condition implies that the target materialization will be
called even if the new operand type is legal, but slightly
different. For example, if there is a bufferization pattern that changes
memref layout, then target materialization for an illegal type
(TensorType) would be called.
Differential Revision: https://reviews.llvm.org/D93126
Now that passes have support for running nested pipelines, the inliner can now allow for users to provide proper nested pipelines to use for optimization during inlining. This revision also changes the behavior of optimization during inlining to optimize before attempting to inline, which should lead to a more accurate cost model and prevents the need for users to schedule additional duplicate cleanup passes before/after the inliner that would already be run during inlining.
Differential Revision: https://reviews.llvm.org/D91211
This reverts commit 0d48d265db.
This reapplies the following commit, with a fix for CAPI/ir.c:
[mlir] Start splitting the `tensor` dialect out of `std`.
This starts by moving `std.extract_element` to `tensor.extract` (this
mirrors the naming of `vector.extract`).
Curiously, `std.extract_element` supposedly works on vectors as well,
and this patch removes that functionality. I would tend to do that in
a separate patch, but I couldn't find any downstream users relying on
this, and the fact that we have `vector.extract` made it seem safe
enough to lump in here.
This also sets up the `tensor` dialect as a dependency of the `std`
dialect, as some ops that currently live in `std` depend on
`tensor.extract` via their canonicalization patterns.
Part of RFC: https://llvm.discourse.group/t/rfc-split-the-tensor-dialect-from-std/2347/2
Differential Revision: https://reviews.llvm.org/D92991
This starts by moving `std.extract_element` to `tensor.extract` (this
mirrors the naming of `vector.extract`).
Curiously, `std.extract_element` supposedly works on vectors as well,
and this patch removes that functionality. I would tend to do that in
a separate patch, but I couldn't find any downstream users relying on
this, and the fact that we have `vector.extract` made it seem safe
enough to lump in here.
This also sets up the `tensor` dialect as a dependency of the `std`
dialect, as some ops that currently live in `std` depend on
`tensor.extract` via their canonicalization patterns.
Part of RFC: https://llvm.discourse.group/t/rfc-split-the-tensor-dialect-from-std/2347/2
Differential Revision: https://reviews.llvm.org/D92991
OperationFolder currently uses ConstantOp as a backup when trying to materialize a constant after an operation is folded. This dependency isn't really useful or necessary given that dialects can/should provide a `materializeConstant` implementation.
Fixes PR#44866
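For reference, a sketch of the dialect hook the folder relies on instead (`MyDialect`/`MyConstantOp` are hypothetical):

```cpp
// Called by OperationFolder to materialize a folded attribute as an op.
Operation *MyDialect::materializeConstant(OpBuilder &builder, Attribute value,
                                          Type type, Location loc) {
  // Assumes MyConstantOp has a (type, value) builder.
  return builder.create<MyConstantOp>(loc, type, value);
}
```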
Differential Revision: https://reviews.llvm.org/D92980
This fixes a subtle bug where SCCP could incorrectly optimize a private callable while waiting for its arguments to be resolved.
Fixes PR#48457
Differential Revision: https://reviews.llvm.org/D92976
This is part of a larger refactoring that better congregates the builtin structures under the BuiltinDialect. This also removes the problematic "standard" naming that clashes with the "standard" dialect, which is not defined within IR/. A temporary forward is placed in StandardTypes.h to allow time for downstream users to replace references.
Differential Revision: https://reviews.llvm.org/D92435
Memrefs with an affine_map in the results of a normalizable operation were
not normalized by the `--normalize-memrefs` option. This patch normalizes
them.
Differential Revision: https://reviews.llvm.org/D88719
Extended the promote-buffers-to-stack pass to support dynamically shaped allocas.
The conversion is limited by the rank of the underlying tensor.
An option is added to the pass to adjust the given rank.
Differential Revision: https://reviews.llvm.org/D91969
Given that OpState already implicitly converts to Operation*, this seems reasonable.
The alternative would be to add more functions to OpState that forward to Operation.
Reviewed By: rriddle, ftynse
Differential Revision: https://reviews.llvm.org/D92266
- Address TODO in scf-bufferize: the argument materialization issue is
now fixed and the code is now in Transforms/Bufferize.cpp
- Tighten up finalizing-bufferize to avoid creating invalid IR when
operand types potentially change
- Tidy up the testing of func-bufferize, and move appropriate tests
to a new finalizing-bufferize.mlir
- The new stricter checking in finalizing-bufferize revealed that we
needed a DimOp conversion pattern (found when integrating into npcomp).
Previously, the conversion infrastructure was blindly changing the
operand type during finalization, which happened to work due to
DimOp's tensor/memref polymorphism, but is generally not encouraged
(the new pattern is the way to tell the conversion infrastructure that
it is legal to change that type).
The rewrite logic has an optimization to drop a cast operation after
rewriting block arguments if the cast operation has no users. This is
unsafe as there might be a pending rewrite that replaced the cast operation
itself and hence would trigger a second free.
Instead, do not remove the casts and leave it up to a later canonicalization
to do so.
Differential Revision: https://reviews.llvm.org/D92184
This enables partial bufferization that includes function signatures. To test this, this
change also makes func-bufferize partial and adds a dedicated finalizing-bufferize pass.
Differential Revision: https://reviews.llvm.org/D92032
Block merging in MLIR will incorrectly merge blocks with operations whose values are used outside of that block. This change forbids this behavior and provides a test where it is illegal to perform such a merge.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D91745
These utilities are more closely associated with the buffer
optimizations and buffer deallocation than with the dialect conversion
stuff in Bufferize.h. So move them out.
This makes Bufferize.h very easy to understand and completely focused on
dialect conversion.
Differential Revision: https://reviews.llvm.org/D91563
Refactoring/clean-up step needed to add support for producer-consumer fusion
with multi-store producer loops and, in general, to implement more general
loop fusion strategies in Affine. It introduces the following changes:
- AffineLoopFusion pass now uses loop fusion utilities more broadly to compute
fusion legality (canFuseLoops utility) and perform the fusion transformation
(fuseLoops utility).
- Loop fusion utilities have been extended to deal with AffineLoopFusion
requirements and assumptions while preserving both loop fusion utilities and
AffineLoopFusion current functionality within a unified implementation.
'FusionStrategy' has been introduced for this purpose and, in the future, it
will allow us to have a single loop fusion core implementation that will produce
different fusion outputs depending on the strategy used.
- Improve separation of concerns for legality and profitability analysis:
'isFusionProfitable' no longer filters out illegal scenarios that 'canFuse'
didn't detect, or the other way around. 'canFuse' now takes loop dependences
into account to determine the fusion loop depth (producer-consumer fusion only).
- As a result, maximal fusion now doesn't require any profitability analysis.
- Slices are now computed only once and reused across the legality, profitability
and fusion transformation steps (producer-consumer).
- Refactor some utilities and remove redundant copies of them.
This patch is NFCI and should preserve the existing functionality of both the
AffineLoopFusion pass and the affine fusion utilities.
Reviewed By: andydavis1, bondhugula
Differential Revision: https://reviews.llvm.org/D90798
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.
Differential Revision: https://reviews.llvm.org/D91572
Some rewriters take more iterations to converge; add a parameter to override
the built-in maximum iteration count.
Fix PR48073.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D91553
This replaces the old type decomposition logic that was previously mixed
into bufferization, and makes it easily accessible.
This also deletes TestFinalizingBufferize, because after we remove the type
decomposition, it doesn't do anything that is not already provided by
func-bufferize.
Differential Revision: https://reviews.llvm.org/D90899
The index type does not have a bitsize and hence the size of corresponding allocations cannot be computed. Instead, the promotion pass now has an explicit option to specify the size of an index.
Differential Revision: https://reviews.llvm.org/D91360
The previous logic for inlining a region A with N blocks into region B
would produce incorrect results on rollback for N greater than 1. This
rollback logic would leave blocks 1..N-1 in region B and only move block 0
back to region A.
The new inlining action recording stores the block move actions from N-1
down to 0. Now on rollback, block 0 is moved back to region A and then
blocks 1..N-1 are appended to the list of blocks in region A.
Differential Revision: https://reviews.llvm.org/D91185
This functionality is superseded by the BufferResultsToOutParams pass (see
https://reviews.llvm.org/D90071) for users that require buffers to be
out-params. That pass should be run immediately after all tensors are gone from
the program (before buffer optimizations and deallocation insertion), such as
immediately after a "finalizing" bufferize pass.
The -test-finalizing-bufferize pass now defaults to what used to be the
`allowMemrefFunctionResults=true` flag, and the
finalizing-bufferize-allowed-memref-results.mlir file is moved
to test/Transforms/finalizing-bufferize.mlir.
Differential Revision: https://reviews.llvm.org/D90778
This is the most basic possible finalizing bufferization pass, which I
also think is sufficient for most new use cases. The more concentrated
nature of this pass also greatly clarifies the invariants that it
requires on its input to safely transform the program (see the
pass description in Passes.td).
With this pass, I have now upstreamed practically all of the
bufferizations from npcomp (the exception being std.constant, which can
be upstreamed when std.global_memref lands:
https://llvm.discourse.group/t/rfc-global-variables-in-mlir/2076/16 )
Differential Revision: https://reviews.llvm.org/D90205
This pass allows removing getResultConversionKind from
BufferizeTypeConverter. This pass replaces the AppendToArgumentsList
functionality. As far as I could tell, the only use of this functionality
is to perform the transformation that is implemented in this pass.
Future patches will remove the getResultConversionKind machinery from
BufferizeTypeConverter, but sending this patch for individual review for
clarity.
Differential Revision: https://reviews.llvm.org/D90071
Often times the legality of inlining can change depending on if the callable is going to be inlined in-place, or cloned. For example, some operations are not allowed to be duplicated and can only be inlined if the original callable will cease to exist afterwards. The new `wouldBeCloned` flag allows for dialects to hook into this when determining legality.
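A sketch of a dialect hooking into the new flag (the interface subclass and its policy are illustrative):

```cpp
struct MyDialectInlinerInterface : public DialectInlinerInterface {
  using DialectInlinerInterface::DialectInlinerInterface;

  bool isLegalToInline(Region *dest, Region *src, bool wouldBeCloned,
                       BlockAndValueMapping &valueMapping) const final {
    // Ops that may not be duplicated are only inlinable when the source
    // will cease to exist afterwards, i.e. when it is not being cloned.
    return !wouldBeCloned;
  }
};
```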
Differential Revision: https://reviews.llvm.org/D90360
In certain situations it isn't legal to inline a call operation, but this isn't something that is possible (at least not easily) to prevent with the current hooks. This revision adds a new hook so that dialects with call operations that shouldn't be inlined can prevent it.
Differential Revision: https://reviews.llvm.org/D90359
This class represents a rewrite pattern list that has been frozen, and thus immutable. This replaces the uses of OwningRewritePatternList in pattern driver related API, such as dialect conversion. When PDL becomes more prevalent, this API will allow for optimizing a set of patterns once without the need to do this per run of a pass.
Differential Revision: https://reviews.llvm.org/D89104
There are several pieces of pattern rewriting infra in IR/ that really shouldn't be there. This revision moves those pieces to a better location such that they are easier to evolve in the future (e.g. with PDL). More concretely, this revision does the following:
* Create a Transforms/GreedyPatternRewriteDriver.h and move the apply*andFold methods there.
The definitions for these methods are already in Transforms/ so it doesn't make sense for the declarations to be in IR.
* Create a new lib/Rewrite library and move PatternApplicator there.
This new library will be focused on applying rewrites, and will also include compiling rewrites with PDL.
Differential Revision: https://reviews.llvm.org/D89103
The Pattern class was originally intended to be used for solely matching operations, but that use never materialized. All of the pattern infrastructure uses RewritePattern, and the infrastructure for pure matching (Matchers.h) is implemented inline. This means that this class isn't a useful abstraction at the moment, so this revision refactors it to solely encapsulate the "metadata" of a pattern. The metadata includes the various state describing a pattern: benefit, root operation, etc. The API on PatternApplicator is updated to now operate on `Pattern`s as nothing special from `RewritePattern` is necessary.
This refactoring is also necessary for the upcoming use of PDL patterns alongside C++ rewrite patterns.
Differential Revision: https://reviews.llvm.org/D86258
Added an optimization pass to convert heap-based allocs to stack-based allocas in
buffer placement. Added the corresponding test file.
Differential Revision: https://reviews.llvm.org/D89688
Before this change, we would run all `maxIterations` iterations if the first iteration changed the op.
After this change, we exit the loop as soon as an iteration hasn't changed the op.
Assuming that we have reached a fixed point when an iteration doesn't change the op, this doesn't affect correctness.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D89981
A "structural" type conversion is one where the underlying ops are
completely agnostic to the actual types involved and simply need to update
their types. An example of this is scf.if -- the scf.if op and the
corresponding scf.yield ops need to update their types according to the
TypeConverter, but otherwise don't care what type conversions are happening.
To test the structural type conversions, it is convenient to define a
bufferize pass for a dialect, which exercises them nicely.
Differential Revision: https://reviews.llvm.org/D89757
The current BufferPlacement transformation contains several concepts for
hoisting allocations. However, more advanced hoisting techniques should not be
integrated into the BufferPlacement transformation. Hence, this CL refactors the
current BufferPlacement pass into three separate pieces: BufferDeallocation and
BufferAllocation(Loop)Hoisting. Moreover, it extends the hoisting functionality
by allowing allocations to be moved out of loops.
Differential Revision: https://reviews.llvm.org/D87756
This reduces the symbol lookups from O(NM) to O(1), greatly speeding up both passes. For a large MLIR module this shaved seconds off the compilation time.
Differential Revision: https://reviews.llvm.org/D89522
This revision contains two optimizations related to symbol checking:
* Optimize SymbolOpInterface to only check for a name attribute if the operation is an optional symbol.
This removes an otherwise unnecessary attribute lookup from a majority of symbols.
* Add a new SymbolTableCollection class to represent a collection of SymbolTables.
This allows for performing non-flat symbol lookups in O(1) time by caching SymbolTables for symbol table operations. This class is very useful for algorithms that operate on multiple symbol tables, either recursively or not.
Differential Revision: https://reviews.llvm.org/D89505
The opposite of tensor_to_memref is tensor_load.
- Add some basic tensor_load/tensor_to_memref folding.
- Add source/target materializations to BufferizeTypeConverter.
- Add an example std bufferization pattern/pass that shows how the
materializations work together (more std bufferization patterns to come
in subsequent commits).
- In coming commits, I'll document how to write composable
bufferization passes/patterns and update the other in-tree
bufferization passes to match this convention. The populate* functions
will of course continue to be exposed for power users.
The naming on tensor_load/tensor_to_memref and their pretty forms are
not very intuitive. I'm open to any suggestions here. One key
observation is that the memref type must always be the one specified in
the pretty form, since the tensor type can be inferred from the memref
type but not vice-versa.
With this, I've been able to replace all my custom bufferization type
converters in npcomp with BufferizeTypeConverter!
Part of the plan discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17
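A sketch of what the added source materialization might look like (the callback shape and builder call are assumptions based on the era's API):

```cpp
typeConverter.addSourceMaterialization(
    [](OpBuilder &builder, TensorType type, ValueRange inputs,
       Location loc) -> Value {
      // memref -> tensor: wrap the buffer back into a tensor value.
      assert(inputs.size() == 1 && "expected a single memref input");
      return builder.create<TensorLoadOp>(loc, type, inputs[0]);
    });
```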
Differential Revision: https://reviews.llvm.org/D89437
This revision introduces support for buffer allocation for any named linalg op.
To avoid template instantiating many ops, a new ConversionPattern is created to capture the LinalgOp interface.
Some APIs are updated to remain consistent with MLIR style:
`OwningRewritePatternList * -> OwningRewritePatternList &`
`BufferAssignmentTypeConverter * -> BufferAssignmentTypeConverter &`
Differential revision: https://reviews.llvm.org/D89226
This revision also inserts an end-to-end test that lowers tensors to buffers all the way to executable code on CPU.
Differential revision: https://reviews.llvm.org/D88998
The documentation for the NormalizeMemRefs pass and the associated MemRefsNormalizable
trait was confusing and not on the website. This update clarifies the language
around the difference between a MemRef Type, an operation that accesses the value of
MemRef Type, and better documents the limitations of the current implementation.
This patch also includes some basic debugging information for the pass so people
might have a chance of figuring out why it doesn't work on their code.
Differential Revision: https://reviews.llvm.org/D88532
Normalizing memrefs failed when a caller of a symbolic use in a function
could not be cast to `CallOp`. This patch avoids the failure by checking
the result of the cast. If the caller cannot be cast to `CallOp`,
it is skipped.
Differential Revision: https://reviews.llvm.org/D87746
Add support to tile affine.for ops with parametric sizes (i.e., SSA
values). Currently supports hyper-rectangular loop nests with constant
lower bounds only. Move methods
- moveLoopBody(*)
- getTileableBands(*)
- checkTilingLegality(*)
- tilePerfectlyNested(*)
- constructTiledIndexSetHyperRect(*)
to allow reuse with the constant tile size API. Add a test pass
-test-affine-parametric-tile to test parametric tiling.
Differential Revision: https://reviews.llvm.org/D87353