The patch extends the vectorization pass to lower linalg index operations to vector code. It allocates constant 1-D vectors that enumerate the indices along the iteration dimensions and broadcasts/transposes these 1-D vectors to the iteration space.
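As a rough illustration (shapes, values, and op spellings below are hypothetical, not taken from the patch), vectorizing an index along the innermost dimension of an 8x4 iteration space might look like:
```
// Enumerate the indices of the vectorized dimension as a constant 1-D vector...
%idx = constant dense<[0, 1, 2, 3]> : vector<4xindex>
// ...and broadcast it to the full iteration space (a vector.transpose would
// follow if the enumerated dimension were not the innermost one).
%idx_space = vector.broadcast %idx : vector<4xindex> to vector<8x4xindex>
```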
Differential Revision: https://reviews.llvm.org/D100373
This allows for walking all nested locations of a given location, and is generally useful when processing locations.
Differential Revision: https://reviews.llvm.org/D100437
We were using llvm::nulls(), but that isn't thread-safe, so we switch to giving each thread its own null stream.
Differential Revision: https://reviews.llvm.org/D100578
This CL introduces a generic attribute (called "encoding") on tensors.
The attribute currently does not carry any concrete information, but the type
system already correctly determines that tensor<8xi1,123> != tensor<8xi1,321>.
The attribute will be given meaning through an interface in subsequent CLs.
See ongoing discussion on discourse:
[RFC] Introduce a sparse tensor type to core MLIR
https://llvm.discourse.group/t/rfc-introduce-a-sparse-tensor-type-to-core-mlir/2944
A sparse tensor will look something like this:
```
// named alias with all properties we hold dear:
#CSR = {
// individual named attributes
}
// actual sparse tensor type:
tensor<?x?xf64, #CSR>
```
I see the following rough 5 step plan going forward:
(1) introduce this format attribute in this CL, currently still empty
(2) introduce attribute interface that gives it "meaning", focused on sparse in first phase
(3) rewrite sparse compiler to use new type, remove linalg interface and "glue"
(4) teach passes to deal with new attribute, by rejecting/asserting on non-empty attribute as simplest solution, or doing meaningful rewrite in the longer run
(5) add FE support, document, test, publicize new features, extend "format" meaning to other domains if useful
Reviewed By: stellaraccident, bondhugula
Differential Revision: https://reviews.llvm.org/D99548
The patch enables the use of index type in vectors. It is a prerequisite to support vectorization for indexed Linalg operations. This refactoring became possible due to the newly introduced data layout infrastructure. The data layout of a module defines the bitwidth of the index type needed to verify bitcasts and similar vector operations.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D99948
This revision tightens up the handling of attributes for both named
and generic linalg ops.
To demonstrate the IR validity, a working e2e Linalg example is added.
Differential Revision: https://reviews.llvm.org/D99430
This allows for the conversion to match `A(B()) -> C()` with a pattern matching
`A` and marking `B` for deletion.
Also add better assertions when an operation is erased while still having uses.
Differential Revision: https://reviews.llvm.org/D99442
Convert transfer_read ops with permutation maps into simpler
transfer_read ops with a minor identity map plus vector.broadcast and vector.transpose.
Also convert transfer_read ops whose leading dimensions are broadcast into
transfer_read ops of lower rank.
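For illustration only (types, maps, and values are made up, and %A is assumed to be a previously defined memref), such a rewrite might look like:
```
%c0 = constant 0 : index
%f0 = constant 0.0 : f32
// A transfer_read with a transposing permutation map...
%0 = vector.transfer_read %A[%c0, %c0], %f0
    {permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
    : memref<?x?xf32>, vector<4x8xf32>
// ...is rewritten into a transfer_read with a minor identity map plus a transpose:
%1 = vector.transfer_read %A[%c0, %c0], %f0 : memref<?x?xf32>, vector<8x4xf32>
%2 = vector.transpose %1, [1, 0] : vector<8x4xf32> to vector<4x8xf32>
```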
Differential Revision: https://reviews.llvm.org/D99019
The `mayNotHaveTerminator` helper was initially on Block but moved to the
verifier before landing, and wasn't removed from its original place,
where it is unused.
In particular for Graph Regions, the need for a terminator is just a
historical artifact of the generalization of MLIR from CFG regions.
Operations like Module don't need a terminator, and before Module
migrated to being an operation with a region, none was needed.
To validate the feature, the ModuleOp is migrated to use this trait and
the ModuleTerminator operation is deleted.
This patch is likely to break clients. If you're in this case:
- you may iterate on a ModuleOp with `getBody()->without_terminator()`;
the solution is simple: just remove the `->without_terminator()`!
- you created a builder with `Builder::atBlockTerminator(module_body)`,
just use `Builder::atBlockEnd(module_body)` instead.
- you were handling ModuleTerminator: it isn't needed anymore.
- for generic code, a `Block::mayNotHaveTerminator()` may be used.
Differential Revision: https://reviews.llvm.org/D98468
This avoided some conversion overhead on a model in TypeUniquer when
converting from ArrayRef -> TypeRange.
Differential Revision: https://reviews.llvm.org/D99300
ModuleOp is a natural place to provide scoped data layout information. However,
it is undesirable for ModuleOp to implement the entirety of
DataLayoutOpInterface because that would require either pushing the interface
inside the IR library instead of a separate library, or putting the default
implementation of the interface as inline functions in headers leading to
binary bloat. Instead, ModuleOp accepts an arbitrary data layout spec attribute
and has a dedicated hook to extract it, and DataLayout is modified to know
about ModuleOp particularities.
Reviewed By: herhut, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D98500
To match an interface or trait, users currently have to use the `MatchAny` tag. This tag can be quite problematic for compile time for things like the canonicalizer, as the `MatchAny` patterns may get applied to *every* operation. This revision adds better support by bucketing interface/trait patterns based on which registered operations have them registered. This means that moving forward we will only attempt to match these patterns to operations that have this interface registered. To simplify defining patterns that match traits and interfaces, two new utility classes have been added: OpTraitRewritePattern and OpInterfaceRewritePattern.
Differential Revision: https://reviews.llvm.org/D98986
This is an assumption that is made in numerous places in the code. In
particular, in the code generated by mlir-tblgen for operand/result accessors
in ops with attr-sized operand or result lists. Make sure to verify this
assumption.
Note that the operation traits are verified before running the custom op
verifier, which can expect the trait verifier to have passed, but some traits
may be verified before the AttrSizedOperand/ResultTrait and should not make
such assumptions.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D99183
Supporting ranges in the byte code requires additional complexity, given that a range can't easily be represented as an opaque void *, as is possible with the existing bytecode value types (Attribute, Type, Value, etc.). To enable representing a range with void *, an auxiliary storage is used for the actual range itself, with the pointer being passed around in the normal byte code memory. For type ranges, a TypeRange is stored. For value ranges, a ValueRange is stored. The above problem represents a majority of the complexity involved in this revision; the rest is adapting/adding byte code operations to support the changes made to the PDL interpreter in the parent revision.
After this revision, PDL will have initial end-to-end support for variadic operands/results.
Differential Revision: https://reviews.llvm.org/D95723
This has numerous benefits, given the overly clunky nature of CreateNativeOp:
* Users can now call into arbitrary rewrite functions from inside of PDL, allowing for more natural interleaving of PDL/C++ and enabling for more of the pattern to be in PDL.
* Removes the need for an additional set of C++ functions/registry/etc. The new ApplyNativeRewriteOp will use the same PDLRewriteFunction as the existing RewriteOp. This reduces the API surface area exposed to users.
This revision also introduces a new PDLResultList class. This class is used to provide results of native rewrite functions back to PDL. We introduce a new class instead of using a SmallVector to simplify the work necessary for variadics, given that ranges will require some changes to the structure of PDLValue.
Differential Revision: https://reviews.llvm.org/D95720
The patch in question broke the build with shared libraries due to
missing dependencies, one of which would have been circular between
MLIRStandard and MLIRMemRef if added. Fix this by moving more code
around and swapping the dependency direction. MLIRMemRef now depends on
MLIRStandard, but MLIRStandard does _not_ depend on MLIRMemRef.
Arguably, this is the right direction anyway since numerous libraries
depend on MLIRStandard and don't necessarily need to depend on
MLIRMemRef.
Other notable changes include:
- some EDSC code is moved inline to MemRef/EDSC/Intrinsics.h because it
creates MemRef dialect operations;
- a utility function related to shape moved to BuiltinTypes.h/cpp
because it only relates to shaped types and not any particular dialect
(standard dialect is erroneously believed to contain MemRefType);
- a Python test for the standard dialect is disabled completely because
the ops it tests moved to the new MemRef dialect, but it is not
exposed to Python bindings, and the change for that is non-trivial.
This patch introduces progressive lowering patterns for rewriting
vector.transfer_read/write to vector.load/store and vector.broadcast
in certain supported cases.
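A minimal sketch of one such case (assuming an in-bounds, identity-map, contiguous transfer; %A and %i are assumed to be defined earlier):
```
%f0 = constant 0.0 : f32
// A simple 1-D transfer_read of a contiguous, in-bounds slice...
%v = vector.transfer_read %A[%i], %f0 : memref<?xf32>, vector<16xf32>
// ...can be progressively lowered to a plain vector.load:
%v2 = vector.load %A[%i] : memref<?xf32>, vector<16xf32>
```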
Reviewed By: dcaballe, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D97822
This allows for storage instances to store data that isn't uniqued in the context, or contain otherwise non-trivial logic, in the rare situations that they occur. Storage instances with trivial destructors will still have their destructor skipped. A consequence of this is that the storage instance definition must be visible from the place that registers the type.
Differential Revision: https://reviews.llvm.org/D98311
verifyCompatibleShapes is not transitive. Create an n-ary version and
update SameOperandShapes and SameOperandAndResultShapes traits to use
it.
Differential Revision: https://reviews.llvm.org/D98331
The current implementation has some inefficiencies that become noticeable when running on large modules. This revision optimizes the code, and updates some outdated idioms with newer utilities. The main components of this optimization include:
* Add an overload of Block::eraseArguments that allows for O(N) erasure of disjoint arguments.
* Don't process entry block arguments given that we don't erase them at this point.
* Don't track individual operation results, given that we don't erase them. We can just track the parent operation.
Differential Revision: https://reviews.llvm.org/D98309
Based on the following discussion:
https://llvm.discourse.group/t/rfc-memref-memory-shape-as-attribute/2229
The goal of the change is to make the memory space property have a more
expressive representation, rather than "magic" integer values.
It allows for a cleaner ASM form:
```
gpu.func @test(%arg0: memref<100xf32, "workgroup">)
// instead of
gpu.func @test(%arg0: memref<100xf32, 3>)
```
Explanation for the `Attribute` choice instead of a plain `string`:
* `Attribute` classes allow using a more type-safe API based on RTTI.
* `Attribute` classes provide a faster comparison operator based on
pointer comparison, in contrast to generic string comparison.
* `Attribute` allows storing more complex things, like structs or dictionaries,
which enables a more complex memory space hierarchy.
This commit preserves the old integer-based API and implements it on top
of the new one.
Depends on D97476
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D96145
This method allows for removing multiple disjoint operands at once, reducing the need to erase operands individually (which results in shifting the operand list).
Differential Revision: https://reviews.llvm.org/D98290
This class provides efficient implementations of symbol queries related to uses, such as collecting the users of a symbol, replacing all uses, etc. This provides similar benefits to use related queries, as SymbolTableCollection did for lookup queries.
Differential Revision: https://reviews.llvm.org/D98071
This will allow for removing the duplicated type documentation from LangRef and instead link to the builtin dialect documentation.
Differential Revision: https://reviews.llvm.org/D98093
This patch is a follow-up on D97217. It adds a new 'Skip' result to the Operation visitor
so that a callback can stop the ongoing visit of an operation/block/region and
continue visiting the next one without fully interrupting the walk. Skipping is
needed to be able to erase an operation/block in pre-order without continuing to
visit the internals of that operation/block.
Related to the skipping mechanism, the patch also introduces the following changes:
* Added new TestIRVisitors pass with basic testing for the IR visitors.
* Fixed missing early increment ranges in visitor implementation.
* Updated documentation of walk methods to include erasure information and walk
order information.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D97820
This patch extends the Region, Block and Operation visitors to also support pre-order walks.
We introduce a new template argument that dictates the walk order (only pre-order and
post-order are supported for now). The default order for Regions, Blocks and Operations is
post-order. Mixed orders (e.g., Region/Block pre-order + Operation post-order) could easily
be implemented, as shown in NumberOfExecutions.cpp.
Reviewed By: rriddle, frgossen, bondhugula
Differential Revision: https://reviews.llvm.org/D97217
In .mlir modules with large amounts of attributes, e.g. a function with a large number of argument attributes, the string comparison filtering greatly affects compile time. This revision switches to using a SmallDenseSet in these situations, resulting in over a 10x speed up in some situations.
Differential Revision: https://reviews.llvm.org/D97980
Now that attributes can be generated using ODS, we can move the builtin attributes as well. This revision removes a majority of the builtin attributes with a few left for followup revisions. The attributes moved to ODS in this revision are: AffineMapAttr, ArrayAttr, DictionaryAttr, IntegerSetAttr, StringAttr, SymbolRefAttr, TypeAttr, and UnitAttr.
Differential Revision: https://reviews.llvm.org/D97591
The current implementation of Value involves a pointer int pair with several different kinds of owners, i.e. BlockArgumentImpl*, Operation *, TrailingOpResult*. This design arose from the desire to save memory overhead for operations that have a very small number of results (generally 0-2). There are, unfortunately, many problematic aspects of the current implementation that make Values difficult to work with or just inefficient.
Operation result types are stored as a separate array on the Operation. This is very inefficient for many reasons: we use TupleType for multiple results, which can lead to huge amounts of memory usage if multi-result operations change types frequently (they do). It also means that simple methods like Value::getType/Value::setType now require complex logic to get to the desired type.
Value only has one pointer bit free, severely limiting the ability to use it in things like PointerUnion/PointerIntPair. Given that we store the kind of a Value along with the "owner" pointer, we only leave one bit free for users of Value. This creates situations where we end up nesting PointerUnions to be able to use Value in one.
As noted above, most of the methods in Value need to branch on at least 3 different cases, which is inefficient, possibly error prone, and verbose. The current storage of results also creates problems for utilities like ValueRange/TypeRange, which want to efficiently store base pointers to ranges (of which Operation* isn't really useful as one).
This revision greatly simplifies the implementation of Value by the introduction of a new ValueImpl class. This class contains all of the state shared between all of the various derived value classes; i.e. the use list, the type, and the kind. This shared implementation class provides several large benefits:
* Most of the methods on value are now branchless, and often one-liners.
* The "kind" of the value is now stored in ValueImpl instead of Value
This frees up all of Value's pointer bits, allowing for users to take full advantage of PointerUnion/PointerIntPair/etc. It also allows for storing more operation results as "inline", 6 now instead of 2, freeing up 1 word per new inline result.
* Operation result types are now stored in the result, instead of a side array
This drops the size of zero-result operations by 1 word. It also removes the memory-crushing use of TupleType for operation results (which could lead to hundreds of megabytes of "dead" TupleTypes in the context). This also allowed restructuring ValueRange, making it simpler and one word smaller.
This revision does come with two conceptual downsides:
* Operation::getResultTypes no longer returns an ArrayRef<Type>
This conceptually makes some usages slower, as the iterator increment is slightly more complex.
* OpResult::getOwner is slightly more expensive, as it now requires a little bit of arithmetic
From profiling, neither of the conceptual downsides has resulted in any perceptible hit to performance. Given the advantages of the new design, most compiles are slightly faster.
Differential Revision: https://reviews.llvm.org/D97804
Some elementwise operations are not scalarizable, vectorizable, or tensorizable.
Split the `ElementwiseMappable` trait into the following, more precise traits:
- `Elementwise`
- `Scalarizable`
- `Vectorizable`
- `Tensorizable`
This allows for reuse of `Elementwise` in dialects like HLO.
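For instance, an op carrying these traits can apply uniformly to scalars, vectors, and tensors (a sketch using standard-dialect `addf`; the operands are assumed to be defined elsewhere):
```
%s = addf %a, %b : f32            // scalar form
%v = addf %c, %d : vector<4xf32>  // vectorized form
%t = addf %e, %f : tensor<4xf32>  // tensorized form
```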
Differential Revision: https://reviews.llvm.org/D97674
Just a pure method renaming.
It is a preparation step for replacing "memory space as raw integer"
with the more generic "memory space as attribute", which will be done in a
separate commit.
The `MemRefType::getMemorySpace` method will return `Attribute` and
become the main API, while `getMemorySpaceAsInt` will be declared as
deprecated and will be replaced in all in-tree dialects (also in separate
commits).
Reviewed By: mehdi_amini, rriddle
Differential Revision: https://reviews.llvm.org/D97476
Move the results in line with the op instead. This results in each
operation having its own result types recorded instead of a single tuple type,
but comes with the benefit that mutations no longer incur uniquing. We ran into
cases where updating the result type of an operation led to very large memory
usage.
Differential Revision: https://reviews.llvm.org/D97652
Not only is this likely more efficient than BitVector::find_first(), but
also, if the BitVector is empty, find_first() returns -1, which
llvm::drop_begin isn't robust against.
This also exposed a bug in Dialect loading where it was not correctly identifying identifiers that had the dialect namespace as a prefix.
Differential Revision: https://reviews.llvm.org/D97431
A majority of operations have a very small number of interfaces, which means that the cost of using a hash map is generally larger for interface lookups than just a binary search. In the future when there are a number of operations with large amounts of interfaces, we can switch to a hybrid approach that optimizes lookups based on the number of interfaces. For now, however, a binary search is the best approach.
This dropped compile time on a largish TF MLIR module by 20% (half a second).
Differential Revision: https://reviews.llvm.org/D96085
This revision adds the infrastructure for `Debug Actions`. This is a DEBUG only
API that allows for external entities to control various aspects of compiler
execution. This is conceptually similar to something like DebugCounters in LLVM, but at a lower level. This framework doesn't make any assumptions about how the higher level driver is controlling the execution, it merely provides a framework for connecting the two together. This means that on top of DebugCounter functionality, we could also provide more interesting drivers such as interactive execution. A high level overview of the workflow surrounding debug actions is
shown below:
* Compiler developer defines an `action` that is taken by a pass,
transformation, or utility that they are developing.
* Depending on the needs, the developer dispatches various queries, pertaining
to this action, to an `action manager` that will provide an answer as to
what behavior the action should take.
* An external entity registers an `action handler` with the action manager,
and provides the logic to resolve queries on actions.
The exact definition of an `external entity` is left opaque, to allow for more
interesting handlers.
This framework was proposed here: https://llvm.discourse.group/t/rfc-debug-actions-in-mlir-debug-counters-for-the-modern-world
Differential Revision: https://reviews.llvm.org/D84986
`verifyConstructionInvariants` is intended to allow for verifying the invariants of an attribute/type on construction, and `getChecked` is intended to enable more graceful error handling aside from an assert. There are a few problems with the current implementation of these methods:
* `verifyConstructionInvariants` requires an mlir::Location for emitting errors, which is prohibitively costly in the situations that would most likely use them, e.g. the parser.
This creates an unfortunate code duplication between the verifier code and the parser code, given that the parser operates on llvm::SMLoc and it is an undesirable overhead to pre-emptively convert from that to an mlir::Location.
* `getChecked` effectively requires duplicating the definition of the `get` method, creating a quite clunky workflow due to the subtle difference in its signature.
This revision aims to tackle the above problems by refactoring the implementation to use a callback for error emission. Using a callback allows for deferring the costly part of error emission until it is actually necessary.
Due to the necessary signature change in each instance of these methods, this revision also takes this opportunity to cleanup the definition of these methods by:
* restructuring the signature of `getChecked` such that it can be generated from the same code block as the `get` method.
* renaming `verifyConstructionInvariants` to `verify` to match the naming scheme of the rest of the compiler.
Differential Revision: https://reviews.llvm.org/D97100
Allow clients to create a new ShapedType of the same "container" type
but with a different element type or shape. The first use case is when refining
shape during shape inference without needing to consider which
ShapedType is being refined.
Differential Revision: https://reviews.llvm.org/D96682
Dialects themselves do not support repeated addition of interfaces with the
same TypeID. However, in case of delayed registration, the registry may contain
such an interface, or have the same interface registered several times due to,
e.g., dependencies. Make sure delayed registration does not attempt to add
an interface with the same TypeID more than once.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96606
Rationale:
This computation failed ASAN for the following input
(integer overflow during 4032000000000000000 * 100):
tensor<100x200x300x400x500x600x700x800xf32>
This change adds simple overflow detection in
debug mode (which we run more regularly than ASAN).
Arguably this is an unrealistic tensor input, but
in the context of sparse tensors, we may start to
see cases like this.
Bug:
https://bugs.llvm.org/show_bug.cgi?id=49136
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96530
The AffineMap in the MemRef inferred by SubViewOp may have uncompressed symbols which result in type mismatch on otherwise unused symbols. Make the computation of the AffineMap compress those unused symbols which results in better canonical types.
Additionally, improve the error message to report which inferred type was expected.
Differential Revision: https://reviews.llvm.org/D96551
MLIRContext allows its users to access directly to the DialectRegistry it
contains. While sometimes useful for registering additional dialects on an
already existing context, this breaks the encapsulation by essentially giving
raw accesses to a part of the context's internal state. Remove this mutable
access and instead provide a method to append a given DialectRegistry to the
one already contained in the context. Also provide a shortcut mechanism to
construct a context from an already existing registry, which seems to be a
common use case in the wild. Keep read-only access to the registry contained in
the context in case it needs to be copied or used for constructing another
context.
With this change, DialectRegistry is no longer concerned with loading the
dialects and deciding whether to invoke delayed interface registration. Loading
is concentrated in the MLIRContext, and the functionality of the registry
better reflects its name.
Depends On D96137
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96331
This introduces a mechanism to register interfaces for a dialect without making
the dialect itself depend on the interface. The registration request happens on
DialectRegistry and, if the dialect has not been loaded yet, the actual
registration is delayed until the dialect is loaded. It requires
DialectRegistry to become aware of the context that contains it and the context
to expose methods for querying if a dialect is loaded.
This mechanism will enable a simple extension mechanism for dialects that can
have interfaces defined outside of the dialect code. It is particularly helpful
for, e.g., translation to LLVM IR where we don't want the dialect itself to
depend on LLVM IR libraries.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96137
These properties were useful for a few things before traits had a better integration story, but don't really carry their weight well these days. Most of these properties are already checked via traits in most of the code. It is better to align the system around traits, and improve the performance/cost of traits in general.
Differential Revision: https://reviews.llvm.org/D96088
This reverts commit 511dd4f438 along with
a couple fixes.
Original message:
Now the context is the first, rather than the last input.
This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.
Phabricator: https://reviews.llvm.org/D96111
Now the context is the first, rather than the last input.
This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.
Differential Revision: https://reviews.llvm.org/D96111
The `AffineMap` class follows the same semantics as Type and Attribute.
It is an immutable object, so it makes sense to mark its methods as const.
Part of its API is already marked const; this change just makes the API consistent.
Reviewed By: ftynse, bondhugula
Differential Revision: https://reviews.llvm.org/D96026
This makes ignoring a result an explicit act on the user's part, and helps to prevent accidental errors with dropped results. Marking LogicalResult as nodiscard was always the intention from the beginning, but got lost along the way.
Differential Revision: https://reviews.llvm.org/D95841
This revision adds two new classes, RewriterBase and IRRewriter. RewriterBase is a new shared base class between IRRewriter and PatternRewriter. PatternRewriter will continue to be the base class used to perform rewrites within a rewrite pattern. IRRewriter on the other hand, is a new class that allows for tracking IR rewrites from outside of a rewrite pattern. In this revision all of the old API from PatternRewriter is moved to RewriterBase, but the distinction between IRRewriter and PatternRewriter is kept on the chance that a necessary API divergence happens in the future.
Currently if you want to have some utility that transforms a piece of IR and share it between pattern and non-pattern code, you have to duplicate it. This revision enables the creation of utilities that can be invoked from rewrite patterns and normal transformation code:
```c++
void someSharedUtility(RewriterBase &rewriter, ...) {
// Some interesting IR mutation here.
}
// Some RewritePattern
LogicalResult MyPattern::matchAndRewrite(Operation *op, PatternRewriter &rewriter) {
...
someSharedUtility(rewriter, ...);
...
}
// Some Pass
void MyPass::runOnOperation() {
...
IRRewriter rewriter(...);
someSharedUtility(rewriter, ...);
}
```
Differential Revision: https://reviews.llvm.org/D94638
* Fix missing `type` keyword in alias printing.
* Add a test for a large tuple type alias & rerun output to verify the printed
form can be parsed (which caught the above).
Tuples can occupy quite a lot of space; instead of printing out the tuple type
everywhere, just use the type alias if the type is larger (an arbitrary bound
was chosen for now).
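A hypothetical example of the resulting printed form (the exact size bound at which the alias kicks in, and the names, are made up):
```
// The alias is emitted with the `type` keyword and reused below:
!large_tuple = type tuple<i32, i32, i32, i32, i32, i32, i32, i32>
func @use(%arg0: !large_tuple) {
  return
}
```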
Differential Revision: https://reviews.llvm.org/D95707
Update ElementsAttr::isValidIndex to handle an ElementsAttr holding a scalar. A scalar will have rank 0.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95663
This class is looking up a dialect prefix on the identifier on initialization
and keeping a pointer to the Dialect when found.
The NamedAttribute key is now a DialectIdentifier.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D95418
Expand the existing helper to handle the common case of verifying that existing
and inferred types are compatible. This considers arrays equivalent if they
have the same size and pairwise-compatible elements.
The subview verifier in the rank-reduced case is plainly skipping verification
when the resulting type is a memref with empty affine map. This is generally incorrect.
Instead, form the actual expected rank-reduced MemRefType that takes into account the projected-out size-1 dimensions. Then, check the canonicalized expected rank-reduced type against the canonicalized candidate type.
Differential Revision: https://reviews.llvm.org/D95316
This prevents needless reinitialization for clients that want to reuse a pass manager multiple times. A new `getRegistryHash` function is exposed by the context to give a rough indicator of when the context registry has changed.
Differential Revision: https://reviews.llvm.org/D95493
This extracts the implementation of getType, setType, and getBody from
FunctionSupport.h into the mlir::impl namespace and defines them
generically in FunctionSupport.cpp. This allows them to be used
elsewhere for any FunctionLike ops that use FunctionType for their
type signature.
Using the new helpers, FuncOpSignatureConversion is generalized to
work with all such FunctionLike ops. Convenience helpers are added to
configure the pattern for a given concrete FunctionLike op type.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95021
An `unrealized_conversion_cast` operation represents an unrealized conversion
from one set of types to another, that is used to enable the inter-mixing of
different type systems. This operation should not be attributed any special
representational or execution semantics, and is generally only intended to be
used to satisfy the temporary intermixing of type systems during the conversion
of one type system to another.
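For example (dialect and type names below are placeholders), a partially-converted value can be bridged between two type systems like so:
```
// Temporarily reconcile a value of the source type with a use expecting the target type:
%1 = unrealized_conversion_cast %0 : !dialect_a.my_type to !dialect_b.my_type
```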
This operation was discussed in the following RFC (and ODM):
https://llvm.discourse.group/t/open-meeting-1-14-dialect-conversion-and-type-conversion-the-question-of-cast-operations/
Differential Revision: https://reviews.llvm.org/D94832
A cast-like operation is one that converts from a set of input types to a set of output types. The arity of the inputs may be from 0-N, whereas the arity of the outputs may be anything from 1-N. Cast-like operations are removable in cases where they produce a "no-op", i.e when the input types and output types match 1-1.
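As a trivial sketch (the op name is hypothetical), a cast whose input and output types match 1-1 is such a no-op:
```
// The cast below converts i64 -> i64 and is removable, replacing %1 with %0:
%1 = "some_dialect.cast"(%0) : (i64) -> i64
```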
Differential Revision: https://reviews.llvm.org/D94831
In prehistorical times, AffineApplyOp was allowed to produce multiple values.
This allowed the creation of intricate SSA use-def chains.
AffineApplyNormalizer was originally introduced as a means of reusing the AffineMap::compose method to write SSA use-def chains.
Unfortunately, symbols that were produced by an AffineApplyOp needed to be promoted to dims and reordered for the mathematical composition to be valid.
Since then, single result AffineApplyOp became the law of the land but the original assumptions were not revisited.
This revision revisits these assumptions and retires AffineApplyNormalizer.
Differential Revision: https://reviews.llvm.org/D94920
This revision adds a new `replaceOpWithIf` hook that replaces uses of an operation that satisfy a given functor. If all uses are replaced, the operation gets erased in a similar manner to `replaceOp`. DialectConversion support will be added in a followup as this requires adjusting how replacements are tracked there.
Differential Revision: https://reviews.llvm.org/D94632
The type tablegen backend now has enough support to represent these types well enough, so we can now move them to be declaratively defined.
Differential Revision: https://reviews.llvm.org/D94275
The functions will be removed by January 20th.
All call sites within MLIR have been converted in previous changes.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D94191
A previous patch made Value::getType() be resilient to null values which was
considered to be too sweeping. This is a more targeted change which requires
deabstracting some templates.
A middle ground would be to make ValueTypeIterator be tolerant to null values.
Differential Revision: https://reviews.llvm.org/D93908
The asmprinter would crash when dumping IR objects that had their
operands dropped. With this change, we now get this output, which
makes op->dump() style debugging more useful.
%5 = "firrtl.eq"(<<NULL>>, <<NULL>>) : (<<NULL TYPE>>, <<NULL TYPE>>) -> !firrtl.uint<1>
Previously the asmprinter would crash getting the types of the null operands.
Differential Revision: https://reviews.llvm.org/D93869
This class used to serve a few useful purposes:
* Allowed containing a null DictionaryAttr
* Provided some simple mutable API around a DictionaryAttr
The first of which is no longer an issue now that there is much better caching support for attributes in general, and a cache in the context for empty dictionaries. The second results in more trouble than it's worth because it mutates the internal dictionary on every action, leading to a potentially large number of dictionary copies. NamedAttrList is a much better alternative for the second use case, and should be modified as needed to better fit its usage as a DictionaryAttrBuilder.
Differential Revision: https://reviews.llvm.org/D93442
This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified.
Differential Revision: https://reviews.llvm.org/D93432
Previous behavior would fail if inserting an operation that already
existed. Now SymbolTable::insert can also be used as a way to make a
symbol's name unique even after insertion.
Further TODOs have been left regarding naming and consistent-behavior
considerations.
Differential Revision: https://reviews.llvm.org/D93349
This exposes several issues with the current generation that this revision also fixes.
* TypeDef now allows specifying the base class to use when generating.
* TypeDef now inherits from DialectType, which allows for using it as a TypeConstraint
* Parser/Printers are now no longer generated in the header (removing duplicate symbols), and are now only generated when necessary.
- Now that generatedTypeParser/Printer are only generated in the definition file,
existing users will need to manually expose this functionality when necessary.
* ::get() is no longer generated for singleton types, because it isn't necessary.
Differential Revision: https://reviews.llvm.org/D93270
This revision adds a new `printNewline` hook to OpAsmPrinter that allows for printing a newline within the custom format of an operation, that is then indented to the start of the operation. Support for the declarative assembly format is also added, in the form of a `\n` literal.
Differential Revision: https://reviews.llvm.org/D93151