This revision adds 0-d vector support to vector.transfer ops.
In the process, numerous cleanups are applied, in particular around normalizing
and reducing the number of builders.
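For illustration, a minimal sketch of the kind of 0-d transfer this enables (types and value names are made up; the default permutation map is elided):
```
%v = vector.transfer_read %m[], %pad : memref<f32>, vector<f32>
vector.transfer_write %v, %m[] : vector<f32>, memref<f32>
```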
Reviewed By: ThomasRaoux, springerm
Differential Revision: https://reviews.llvm.org/D114803
However, since CallOps have no aliasing OpResults, their OpOperands always bufferize out-of-place.
This change removes `bufferizesToMemoryWrite` from `CallOpInterface`. This method was called, but its return value did not matter.
Differential Revision: https://reviews.llvm.org/D114616
The proper test for sparse tensor outputs is a single condition that holds throughout
the whole tensor index expression (not a general conjunction, since a conjunction
may include other conditions that cause cancellation).
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D114810
This revision reintroduces tensor.insert_slice verification which seems
to have vanished over time: a verifier was initially introduced in cf9503c1b7
but for some reason the invalid.mlir was not properly updated; as time passed the verifier was not called anymore and later the code was deleted.
As a consequence, a non-negligible portion of tests had gone astray using invalid
tensor.insert_slice semantics and needed to be fixed.
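For reference, a sketch of a well-formed tensor.insert_slice that the reinstated verifier accepts (shapes made up):
```
%r = tensor.insert_slice %src into %dst[0, 1] [2, 3] [1, 1]
    : tensor<2x3xf32> into tensor<8x8xf32>
```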
Also, extract isRankReducedType from TensorOps for better reuse.
Originally, this facility was used by both tensor and memref forms but
it got copied around as dialects were split.
Differential Revision: https://reviews.llvm.org/D114715
The canonical type of the result of the `memref.subview` needs to make
sure that the previously dropped unit-dimensions are the ones dropped
for the canonicalized type as well. This means the generic
`inferRankReducedResultType` cannot be used. Instead, the currently
dropped dimensions need to be queried and the same ones dropped.
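For example, given a hypothetical rank-reducing subview such as the one below, the canonicalized result type must drop the same unit dimension that the original op dropped:
```
%0 = memref.subview %src[0, 0, 0] [1, 16, 4] [1, 1, 1]
    : memref<1x16x4xf32> to memref<16x4xf32>
```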
Reviewed By: nicolasvasilache, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D114751
For a 1x1 weight and a stride of 1, the input/weight can be reshaped and passed into a fully connected op, then reshaped back.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D114757
Add the decompose patterns that lower higher dimensional convolutions to lower dimensional ones to CodegenStrategy and use CodegenStrategy to test the decompose patterns. Additionally, remove the assertion that checks the anchor op name is set in the CodegenStrategyTest pass. Removing the assertion allows us to simplify the pipelines used in the interchange and decompose tests.
Depends On D114797
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114798
The revision updates the convolution decomposition patterns to take a linalg transformation filter. The transformation filter will allow using the patterns from CodegenStrategy in a later revision.
Depends On D114690
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114797
Add support for an empty anchor op string in vectorization. An empty anchor op string is useful after fusion when there are multiple different operations to vectorize.
Depends On D114689
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114690
This patch introduces a new conversion to convert bufferization.clone operations
into a memref.alloc and a memref.copy operation. This transformation is needed to
transform all remaining clones that "survive" all previous transformations before
a given program is lowered further (e.g., to LLVM). Otherwise, these operations
cannot be handled anymore and lead to compile errors.
See: https://llvm.discourse.group/t/bufferization-error-related-to-memref-clone/4665
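A rough sketch of the conversion on a 1-d memref (types and SSA names are illustrative only):
```
// Before:
%1 = bufferization.clone %0 : memref<?xf32> to memref<?xf32>

// After:
%c0 = arith.constant 0 : index
%d = memref.dim %0, %c0 : memref<?xf32>
%1 = memref.alloc(%d) : memref<?xf32>
memref.copy %0, %1 : memref<?xf32> to memref<?xf32>
```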
Differential Revision: https://reviews.llvm.org/D114233
* set_symbol_name, get_symbol_name, set_visibility, get_visibility, replace_all_symbol_uses, walk_symbol_tables
* In integrations I've been doing, I've been reaching for all of these to do both general IR manipulation and module merging.
* I don't love the replace_all_symbol_uses underlying APIs since they necessitate SYMBOL_COUNT walks and have various sharp edges. I'm hoping that whatever emerges eventually for this can still retain this simple API as a one-shot.
Differential Revision: https://reviews.llvm.org/D114687
Moves sparse tensor output support forward by generalizing from injective
insertions only to include reductions. This revision accepts the case with all
parallel outer and all reduction inner loops, since that can be handled with
an injective insertion still. Next revision will allow the inner parallel loop
to move inward (but that will require "access pattern expansion" aka "workspace").
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D114399
We now check that the maximum index of any dimensional identifier appearing
in the result expressions is less than the dimCount argument (the number of
dimensional identifiers) passed to AffineMap::get(), and that the maximum index
of any symbolic identifier appearing in the result expressions is less than the
symbolCount argument (the number of symbolic identifiers) passed to AffineMap::get().
Reviewed By: nicolasvasilache, bondhugula
Differential Revision: https://reviews.llvm.org/D114238
This changes the op to produce `AnyVectorOfAnyRank`, mostly following the existing code for 1-D vectors.
Depends On D114598
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114550
This changes the op to produce `AnyVectorOfAnyRank` and implements this by just
inserting the element (skipping the shuffle that we do for the 1-D case).
Depends On D114549
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114598
There is special logic for InsertSliceOp to check if a memcpy is needed. This change extracts that piece of code and makes it a PostAnalysisStep.
The purpose of this change is to untangle `bufferize` from BufferizationAliasInfo. (Not fully there yet.)
Differential Revision: https://reviews.llvm.org/D114513
This is commit 4 of 4 for the multi-root matching in PDL, discussed in https://llvm.discourse.group/t/rfc-multi-root-pdl-patterns-for-kernel-matching/4148 (topic flagged for review).
This PR integrates the various components (root ordering algorithm, nondeterministic execution of PDL bytecode) to implement multi-root PDL matching. The main idea is for the pattern to specify multiple candidate roots. The PDL-to-PDLInterp lowering selects one of these roots and "hangs" the pattern from this root, traversing the edges downwards (from an operation to its operands) when possible and upwards (from a value to its uses) when needed. The root is selected by invoking the optimal matching multiple times, once for each candidate root, and the connectors are determined from the optimal matching. The costs in the directed graph are equal to the number of upward edges that need to be traversed when connecting the given two candidate roots. It can be shown that, for this choice of the cost function, "hanging" the pattern from an inner node is no better than hanging it from the optimal root.
The following four main additions were implemented as a part of this PR:
1. The OperationPos predicate has been extended to allow tracing the operation accepting a value (the opposite of the operation defining a value).
2. A predicate checking if two values are not equal - this is useful to ensure that we do not traverse an edge back downwards after we traversed it upwards.
3. A function for building the cost graph among the candidate roots.
4. Updated buildPredicateList, building the predicates after the optimal branching has been determined.
Testing: unit tests (an integration test to follow once the stack of commits has landed)
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D108550
This is commit 2 of 4 for the multi-root matching in PDL, discussed in https://llvm.discourse.group/t/rfc-multi-root-pdl-patterns-for-kernel-matching/4148 (topic flagged for review).
This commit implements the features needed for the execution of the new operations pdl_interp.get_accepting_ops, pdl_interp.choose_op:
1. The implementation of the generation and execution of the two ops.
2. The addition of a stack of bytecode positions within the ByteCodeExecutor. This is needed because in pdl_interp.choose_op, we iterate over the values returned by pdl_interp.get_accepting_ops until we reach finalize. When we reach finalize, we need to return to the position marked on the stack.
3. The functionality to extend the lifetime of values that cross the nondeterministic choice. The existing bytecode generator allocates the values to memory positions by representing the liveness of values as a collection of disjoint intervals over the matcher positions. This is akin to register allocation, and substantially reduces the footprint of the bytecode executor. However, with the iterative operation pdl_interp.choose_op, execution "returns" back, so any value whose original liveness crosses the nondeterministic choice must have its lifetime extended until finalize.
Testing: pdl-bytecode.mlir test
Reviewed By: rriddle, Mogball
Differential Revision: https://reviews.llvm.org/D108547
This is commit 1 of 4 for the multi-root matching in PDL, discussed in https://llvm.discourse.group/t/rfc-multi-root-pdl-patterns-for-kernel-matching/4148 (topic flagged for review).
These operations are:
* pdl.get_accepting_ops: Returns a list of operations accepting the given value or a range of values at the specified position. Thus if there are two operations `%op1 = "foo"(%val)` and `%op2 = "bar"(%val)` accepting a value at position 0, `%ops = pdl_interp.get_accepting_ops of %val : !pdl.value at 0` will return both of them. This allows us to traverse upwards from a value to operations accepting the value.
* pdl.choose_op: Iteratively chooses one operation from a range of operations. Therefore, writing `%op = pdl_interp.choose_op from %ops` in the example above will select either `%op1` or `%op2`.
Testing: Added the corresponding test cases to mlir/test/Dialect/PDLInterp/ops.mlir.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D108543
Use composition instead of inheritance for storing dialect-specific bufferization state. This is in preparation of adding "tensor dialect"-specific bufferization state.
Differential Revision: https://reviews.llvm.org/D114508
Bufferization of function boundaries is extracted from ComprehensiveBufferize into a separate file. This will become its own build target in the future.
Differential Revision: https://reviews.llvm.org/D114226
These methods for results and arguments would create an ArrayRef full
of nullptrs when there were no argument attributes. This is problematic
because this result could not be passed to the FuncOp::build creator
without causing a segfault. Now the list will have empty attributes.
Differential Revision: https://reviews.llvm.org/D114358
Add a helper function to ControlFlowInterfaces for checking if two ops
are in mutually exclusive regions according to RegionBranchOpInterface.
Utilize this new helper in Linalg ComprehensiveBufferize. This makes the
analysis independent of the SCF dialect and generalizes it to other ops
that implement RegionBranchOpInterface.
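For instance, the two regions of an scf.if are mutually exclusive (a minimal sketch):
```
scf.if %cond {
  // Ops in this region ...
} else {
  // ... can never execute together with ops in this region.
}
```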
Differential Revision: https://reviews.llvm.org/D114220
* Implement `FlatAffineConstraints::getConstantBound(EQ)`.
* Inject a simpler constraint for loops that have at most 1 iteration.
* Take into account constant EQ bounds of FlatAffineConstraint dims/symbols during canonicalization of the resulting affine map in `canonicalizeMinMaxOp`.
Differential Revision: https://reviews.llvm.org/D114138
For synthesizing an op's implementation of the generated interface
from {Min|Max}Version, we need to define an `initializer` and
`mergeAction`. The `initializer` specifies the initial version,
and `mergeAction` specifies how version specifications from
different parts of the op should be merged to generate the final
version requirement.
Previously we used the specified version enum as the type for both
the initializer and thus the final return type. This means we needed
to perform a `static_cast` over some hopefully unused number (`~0u`)
as the initializer. This is quite opaque and not really guaranteed
to work. Also, there are ops that have an enum attribute where some
values declare version requirements (e.g., enumerant `B` requires
v1.1+) but some do not (e.g., enumerant `A` requires nothing). A
concrete op instance with `A` will still declare that it implements the
version interface (because interface implementation is static for
an op) even though there are actually no version requirements.
So this commit changes to use a more explicit `llvm::Optional`
to wrap around the returned version enum. This should make it
clearer.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D108312
Add the makeComposedPadHighOp method which creates a new PadTensorOp if necessary. If the source to pad is actually the result of a sequence of padded LinalgOps, the method checks if padding is needed or if we can use the padded result of the padded LinalgOp sequence directly.
Example:
```
%0 = tensor.extract_slice %arg0 [%iv0, %iv1] [%sz0, %sz1]
%1 = linalg.pad_tensor %0 low[0, 0] high[...] { linalg.yield %cst }
%2 = linalg.matmul ins(...) outs(%1)
%3 = tensor.extract_slice %2 [0, 0] [%sz0, %sz1]
```
When padding %3, the method returns %2 instead of introducing
```
%4 = linalg.pad_tensor %3 low[0, 0] high[...] { linalg.yield %cst }
```
Depends On D114161
Reviewed By: nicolasvasilache, pifon2a
Differential Revision: https://reviews.llvm.org/D114175
The alloc/dealloc pair generation callback is central to the
bufferization algorithm; it modifies the state in a way that affects
correctness. It is not really a configurable option. Moving it to
BufferizationState removes what was probably the reason it was added
as a callback in the first place.
Differential Revision: https://reviews.llvm.org/D114417
Remove duplicate `Pass` suffix from view-op-graph pass class name. The
extra suffix would lead to methods like registerViewOpGraphPassPass
being generated.
Differential Revision: https://reviews.llvm.org/D114459
Padding can now explicitly specify the padding value when a non-zero value is wanted.
This also includes bypassing pads when the pad does nothing.
Differential Revision: https://reviews.llvm.org/D113611
Transpose convolution decomposition is now performed in a separate pass. This
allows padding / constant propagation to be performed at the TOSA level. It
also adds support for striding when there is no dilation.
Differential Revision: https://reviews.llvm.org/D114409
This revision makes concrete use of 0-d vectors to extend the semantics of
InsertElementOp.
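In the 0-d case the position is simply omitted; a minimal sketch of the extended form:
```
%v1 = vector.insertelement %s, %v[] : vector<f32>
```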
Reviewed By: dcaballe, pifon2a
Differential Revision: https://reviews.llvm.org/D114388
This revision starts making concrete use of 0-d vectors to extend the semantics of
ExtractElementOp.
In the process, a new VectorOfAnyRank Tablegen class is added to OpBase.td to allow a progressive transition to supporting 0-d vectors by gradually opting in.
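Analogously to the insert case, a 0-d extract omits the position (sketch):
```
%s = vector.extractelement %v[] : vector<f32>
```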
Differential Revision: https://reviews.llvm.org/D114387
The interface method `bufferize` controls how (and in what order) nested ops are traversed. This simplifies bufferization of scf::ForOps and scf::IfOps, which used to need special rules in scf::YieldOp.
Differential Revision: https://reviews.llvm.org/D114057
The only user of futures returned from ThreadPool is llvm-reduce after
D113857.
There should be no cases where multiple threads wait on the same future,
so there should be no need to return std::shared_future<>. Instead return
plain std::future<>.
If users need to share a future between multiple threads, they can share
the futures themselves.
Reviewed By: Meinersbur, mehdi_amini
Differential Revision: https://reviews.llvm.org/D114363
Refactored two new parser APIs parseGenericOperationAfterOperands and
parseCustomOperationName out of parseGenericOperation and parseCustomOperation.
Motivation: Sometimes an op can be printed in a special way if certain criteria
are met. While parsing, we need to handle all the variants.
`parseGenericOperationAfterOperands` is handy in situations where we have already
parsed the operands and decide to fall back to default parsing.
`parseCustomOperationName` is useful when we need to know details (dialect,
operation name, etc.) about a parsed token meant to be an MLIR operation.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D113719
This patch adds functionality to parse FlatAffineConstraints from a
StringRef with the intention to be used for unit tests. This should
make the construction of FlatAffineConstraints easier for testing
purposes.
The patch contains an example usage of the functionality in a unit test that
uses FlatAffineConstraints.
Reviewed By: bondhugula, grosser
Differential Revision: https://reviews.llvm.org/D113275
This reverts commit 3028bca6a9.
For some reason using FallbackModel works with CMake and does not work
with bazel. Using `ExternalModel` works. I will check what's going on
and resubmit tomorrow.
Remove the interface from op defs in MemRefOps.td and make it an external model.
This is the first PR of many that will move bufferization-related ops, interfaces, passes to Dialect/Bufferize.
RFC: https://llvm.discourse.group/t/rfc-dialect-for-bufferization-related-ops/4712
It is still debated if the comprehensive bufferization has to be moved there as well, so for now I am just moving the "gradual" bufferization.
Differential Revision: https://reviews.llvm.org/D114147
This reverts commit a9e236bed8.
This broke the Windows build:
mlir\include\mlir/Dialect/X86Vector/Transforms.h(28): error C2061: syntax error: identifier 'uint'
MLIR supports recursive types but they could not be handled by the conversion
infrastructure directly as it would result in infinite recursion in
`convertType` for elemental types. Support this case by keeping the "call
stack" of nested type conversions in the TypeConverter class and by passing it
as an optional argument to the individual conversion callback. The callback can
then check if a specific type is present on the stack more than once to detect
and handle the recursive case.
This approach is preferred to the alternative approach of having a separate
callback dedicated to handling only the recursive case as the latter was
observed to introduce ~3% time overhead on a 50MB IR file even if it did not
contain recursive types.
This approach is also preferred to keeping a local stack in type converters
that need to handle recursive types as that would compose poorly in case of
out-of-tree or cross-project extensions.
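As an illustration, a self-referential LLVM dialect struct is the kind of recursive type this enables converting (the struct name is made up):
```
!llvm.struct<"node", (i64, ptr<struct<"node">>)>
```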
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D113579
The purpose of the change is to make clear whether the user is
retrieving the original function or the wrapper function, in line with
the invoke commands. This new functionality is useful for users that
already have defined their own packed interface, so they do not want the
extra layer of indirection, or for users wanting to look at the
resulting primary function rather than the wrapper function.
All locations, except the python bindings now have a `lookupPacked`
method that matches the original `lookup` functionality. `lookup`
still exists, but with new semantics.
- `lookup` returns the function with a given name. If `bool f(int,int)`
is compiled, `lookup` will return a reference to `bool(*f)(int,int)`.
- `lookupPacked` returns the packed wrapper of the function with the
given name. If `bool f(int,int)` is compiled, `lookupPacked` will return
`void(*mlir_f)(void**)`.
Differential Revision: https://reviews.llvm.org/D114352
Remove the tile and fuse test pass that has been replaced by codegen strategy.
Depends On D114067
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114068
Add a pattern to apply the new tile and fuse on tensors method. Integrate the pattern into the CodegenStrategy and use the CodegenStrategy to implement the tests.
Depends On D114012
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114067
Tile and fuse failed if the outermost tile loop is a reduction dimension. Add the necessary check to handle outermost reductions and introduce a test case to verify the change.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114012
Step towards removing the hard-coded behavior for this trait and instead using a common interface.
Differential Revision: https://reviews.llvm.org/D114208
- Adds hooks that allow SerializeTo* passes to arbitrarily transform
the produced LLVM Module before it is passed to the code generation
passes.
- Uses these hooks within the SerializeToHsaco pass in order to run
LLVM optimizations and to set the optimization level on the
TargetMachine.
- Adds an optLevel parameter to SerializeToHsaco
Future work may include moving much of what's been added to
SerializeToHsaco to SerializeToBlob, but that would require
confirmation from the NVVM backend maintainers that it would be
appropriate to do so.
Depends on D114107
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D114113
As discussed in D109579, this patch exposes `runRegionDCE` and
`eraseUnreachableBlocks` so they can be used as separate utilities in
other passes.
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D114160
`vector::InsertElementOp` and `vector::ExtractElementOp` have had their `position`
operand changed to accept `AnySignlessIntegerOrIndex` for better operability with
operations that use `index`, such as affine loops.
LLVM's `extractelement` and `insertelement` can also accept `i64`, so lowering
directly to these operations without explicitly inserting casts is allowed. SPIRV's
equivalent ops can also accept `i64`.
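For example, an `index` position can now be used directly (sketch):
```
%i = arith.constant 2 : index
%e = vector.extractelement %v[%i : index] : vector<4xf32>
```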
Reviewed By: nicolasvasilache, jpienaar
Differential Revision: https://reviews.llvm.org/D114139
Add a new BufferizableOpInterface method `isNotConflicting` that can be used to implement custom analysis rules.
Differential Revision: https://reviews.llvm.org/D113961
NamedAttribute is currently represented as an std::pair, but this
creates an extremely clunky .first/.second API. This commit
converts it to a class, with better accessors (getName/getValue)
and also opens the door for more convenient API in the future.
Differential Revision: https://reviews.llvm.org/D113956
The current implementation is quite clunky; OperationName stores either an Identifier
or an AbstractOperation that corresponds to an operation. This has several problems:
* OperationNames created before and after an operation is registered are different
* Accessing the identifier name/dialect/etc. from an OperationName is overly branchy
- it needs to dyn_cast a PointerUnion to check the state
This commit refactors this such that we create a single information struct for every
operation name, even operations that aren't registered yet. When an OperationName is
created for an unregistered operation, we only populate the name field. When the
operation is registered, we populate the remaining fields. With this we now have two
new classes: OperationName and RegisteredOperationName. These both point to the
same underlying operation information struct, but only RegisteredOperationName can
assume that the operation is actually registered. This leads to a much cleaner API, and
we can also move some AbstractOperation functionality directly to OperationName.
Differential Revision: https://reviews.llvm.org/D114049
There seems to be a consensus that we should allow 0D vectors:
https://llvm.discourse.group/t/should-we-have-0-d-vectors/3097
This commit is only the first step: it changes the verifier and the parser to
allow vectors like `vector<f32>` (but does not allow explicit 0 dimensions,
i.e., `vector<0xf32>` is not allowed).
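A minimal sketch of what is now accepted (and what remains rejected):
```
// Accepted: a rank-0 vector type.
func @zero_d(%arg0: vector<f32>) -> vector<f32> {
  return %arg0 : vector<f32>
}
// Still rejected: vector<0xf32>
```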
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114086
This struct was added and was intended to be used, but it was missed in the original patch.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D114041
This allows for using SFINAE partial specialization for DenseMapInfo.
In MLIR, this is particularly useful as it will allow for defining partial
specializations that support all Attribute, Op, and Type classes without
needing to specialize DenseMapInfo for each individual class.
Differential Revision: https://reviews.llvm.org/D113641
The LLVM switch op currently only permits i32. Both LLVM IR and the MLIR Standard switch permit other integer types, leading to an illegal state when lowering an i8 switch from MLIR Standard.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D113955
For semi-affine expressions, whenever the RHS of a floordiv, ceildiv, mod
or product expression is a symbolic expression, we introduce a local variable
representing the result, and store the floordiv/ceildiv, mod or product
affine expression in LocalExprs. In this way the expression is flattened,
and trivial addition- and subtraction-related simplifications are performed.
We also add rule-based matching in the simplifyAdd function for detecting and
transforming "expr - q * (expr floordiv q)" to "expr mod q", where q is a symbolic expression.
Differential Revision: https://reviews.llvm.org/D112808
This patch extends the existing functionality of computing an explicit
representation for local variables, to also return the explicit representation
itself, instead of only the inequality pairs.
This is required for a future patch to remove redundant local ids based on
their explicit representation.
Reviewed By: arjunp
Differential Revision: https://reviews.llvm.org/D113814
This reverts commit 94992670fc.
Build is broken with:
tools/mlir/include/mlir/Dialect/LLVMIR/LLVMOps.cpp.inc:23996:3: error: no matching function for call to 'printSwitchOpCases'
printSwitchOpCases(_odsPrinter, *this, getValue().getType(), getCaseValuesAttr(), getCaseDestinations(), getCaseOperands(), getCaseOperands().getTypes());
^~~~~~~~~~~~~~~~~~
The LLVM switch op currently only permits i32. Both LLVM IR and the MLIR Standard switch permit other integer types, leading to an illegal state when lowering an i8 switch from MLIR Standard.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D113955
This revision contains all "sparsification" ops and rewriting necessary to support sparse output tensors when the kernel has no reduction (viz. insertions occur in lexicographic order and are "injective"). This will be later generalized to allow reductions too. Also, this first revision only supports sparse 1-d tensors (viz. vectors) as output in the runtime support library. This will be generalized to n-d tensors shortly. But this way, the revision is kept to a manageable size.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D113705
Multiplication by one can be removed during canonicalization. This optimizes away unneeded operations.
Differential Revision: https://reviews.llvm.org/D113807
This is in preparation for dropping them altogether and using insert/extract-based patterns.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D113928
This is analogous to what LLVM's PatternMatch.h supports,
but LLVM calls it m_Value for both the binding and
nonbinding versions.
This is upstreamed from CIRCT, where it is used.
Differential Revision: https://reviews.llvm.org/D113905
OperationLegalizer::isIllegal returns false if an operation's legality wasn't
registered by the user, and we expect the same behaviour when the dynamic legality
callback returns None; instead, true was returned.
Differential Revision: https://reviews.llvm.org/D113267
This change makes it possible to set up custom mappings in a PostAnalysisStep. Some users of Comprehensive Bufferize have custom tensor types and it is most convenient to just reuse the same bvm.
Also add some more assertions.
Differential Revision: https://reviews.llvm.org/D113726
Names should be consistent across all operations, otherwise painful bugs will surface.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D113762
This reverts commit bec488b818.
This commit introduced a layering violation between MLIR libraries.
Reverting for now while discussing on the original review thread.
This patch adds functionality to parse FlatAffineConstraints from a
StringRef with the intention to be used for unit tests. This should
make the construction of FlatAffineConstraints easier for testing
purposes.
The patch contains an example usage of the functionality in a unit test that
uses FlatAffineConstraints.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D113275
This was part of the very first discussion here, but it was not upstreamed before as we
didn't use it yet. Fix that for pending updates. This just adds the op; a
follow-up will add the lowering to codegen.
Per discussion on discord and various feature requests across bindings (Haskell and Rust bindings authors have asked me directly), we should be building a link-ready MLIR-C dylib which exports the C API and can be used without linking to anything else.
This patch:
* Adds a new MLIR-C aggregate shared library (libMLIR-C.so), which is similar in name and function to libLLVM-C.so.
* It is guarded by the new CMake option MLIR_BUILD_MLIR_C_DYLIB, which has a similar purpose/name to the LLVM_BUILD_LLVM_C_DYLIB option.
* On all platforms, this will work with static, BUILD_SHARED_LIBS, and libMLIR builds, if supported:
* In static builds: libMLIR-C.so will export the CAPI symbols and statically link all dependencies into itself.
* In BUILD_SHARED_LIBS: libMLIR-C.so will export the CAPI symbols and have dynamic dependencies on implementation shared libraries.
* In libMLIR.so mode: same as static. libMLIR.so was not finished for actual linking use within the project. An eventual relayering so that libMLIR-C.so depends on libMLIR.so is possible but requires first re-engineering the latter to use the aggregate facility.
* On Linux, exported symbols are filtered to only the CAPI. On others (MacOS, Windows), all symbols are exported. Unless global visibility is hidden, a CMake status message is printed indicating that this has not yet been implemented. The library should still work, but it will be larger and more likely to conflict until fixed. Someone should look at lifting the corresponding support from libLLVM-C.so and adapting. Or, for special uses, just build with `-DCMAKE_CXX_VISIBILITY_PRESET=hidden -DCMAKE_C_VISIBILITY_PRESET=hidden`.
* Includes fixes to execution engine symbol export macros to enable default visibility. Without this, the advice to use hidden visibility would have resulted in test failures and unusable execution engine support libraries.
Differential Revision: https://reviews.llvm.org/D113731
With `-Os` turned on, this results in a 2-5% binary size reduction
(depending on the original binary). Without it, the binary size
is essentially unchanged.
Depends on D113128
Differential Revision: https://reviews.llvm.org/D113331
This helper struct allows users of ComprehensiveBufferize to inject "post analysis" steps that are implemented after the analysis but before the bufferization.
Differential Revision: https://reviews.llvm.org/D113458
* Some long names were added and the script decided to change whitespace in a lot of places
* `ImageOperand` was renamed to `ImageOperands` in the spec
* Some *NV enums were renamed to *KHR (the spec actually maintains both variants with the same value, but the script pulled only the *KHR versions)
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D113667
Support loads with broadcast and the elementwise divf op, and remove the
hardcoded restriction on the vector size. Picking the right size should
be enforced by the user; conversion to llvm/spirv will fail if it is not
supported.
Differential Revision: https://reviews.llvm.org/D113618
* Move "linalg.inplaceable" attr name literals to BufferizableOpInterface.
* Use `memref.copy` by default. Override to `linalg.copy` in ComprehensiveBufferizePass.
These are the last remaining code dependencies on Linalg in Comprehensive Bufferize. The next commit will make ComprehensiveBufferize independent of the Linalg dialect.
Differential Revision: https://reviews.llvm.org/D113457
This revision adds an implementation of 2-D vector.transpose for 4x8 and 8x8 for
AVX2 and surfaces it to the Linalg level of control.
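The targeted shapes correspond to 2-D transposes of the form below (a sketch; the lowering details differ per shape):
```
%t = vector.transpose %v, [1, 0] : vector<8x8xf32> to vector<8x8xf32>
```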
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D113347
This decouples the printing/parsing from the "context" in which the parsing occurs.
This will allow invoking these methods directly using an OpAsmParser/OpAsmPrinter.
Differential Revision: https://reviews.llvm.org/D113637
Move helper functions for traversing reverse use-def chains. These are useful for implementing custom optimizations (e.g., custom InitTensorOp eliminations).
Also move over the AllocationCallbacks struct. This is in preparation for decoupling ComprehensiveBufferize from various dialects.
Differential Revision: https://reviews.llvm.org/D113386
mapped_iterator is a useful abstraction for applying a
map function over an existing iterator, but our current
usage ends up allocating storage/making indirect calls
even when the map function is a known function, which
is horribly inefficient. This commit refactors the usage
of mapped_iterator to avoid this, and allows for directly
referencing the map function when dereferencing.
Fixes PR52319
Differential Revision: https://reviews.llvm.org/D113511
* Sprinkle `inline` on a few small and hot hashing/uniquing methods
* Use the faster DenseMapInfo hash functions instead of
llvm::hash_value.
This provides a speed up of a few percent in workloads with lots of
attributes.
Identifier and StringAttr essentially serve the same purpose, i.e. to hold a string value. Keeping these seemingly identical pieces of functionality separate has caused problems in certain situations:
* Identifier has nice accessors that StringAttr doesn't
* Identifier can't be used as an Attribute, meaning strings are often duplicated between Identifier/StringAttr (e.g. in PDL)
The only thing that Identifier has that StringAttr doesn't is support for caching a dialect that is referenced by the string (e.g. dialect.foo). This functionality is added to StringAttr, as this is useful for StringAttr in generally the same ways it was useful for Identifier.
Differential Revision: https://reviews.llvm.org/D113536
This makes `getResultBuffer` in ComprehensiveBufferize independent of the SCF, Affine and Linalg dialects. This commit is in preparation of decoupling op interface implementations from ComprehensiveBufferize.
Differential Revision: https://reviews.llvm.org/D113380
The specific description is [[ https://llvm.discourse.group/t/adding-unsigned-integer-ceil-and-floor-in-std-dialect/4541 | Adding unsigned integer ceil in Std Dialect ]].
When we lower ceilDivOp, the code below is generated. Sometimes we know that m and n are unsigned integers, in which case the judgments about positive and negative values are redundant.
So we add unsigned operations to simplify the generated instructions.
```
ceilDiv(n, m)
x = (m > 0) ? -1 : 1
return (n*m>0) ? ((n+x) / m) + 1 : - (-n / m)
```
unsigned operations:
```
ceilDivU(n, m)
return n ==0 ? 0 : ((n - 1) / m) + 1
```
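In IR terms, assuming the new op lands with the `arith.ceildivui` mnemonic, this would look like:
```
// Assumed op name; unsigned ceiling division avoids the sign checks above.
%q = arith.ceildivui %n, %m : i32
```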
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D113363
BufferizationAliasInfo is used in BufferizableOpInterface::bufferize implementations, so it should be part of the same build target as BufferizableOpInterface.
This commit is in preparation of decoupling the ComprehensiveBufferize from the various dialects.
Differential Revision: https://reviews.llvm.org/D113378
* Store inplace bufferization decisions in `inplaceBufferized`.
* Remove `InPlaceSpec`. Use a bool instead.
* Use `BufferizableOpInterface::bufferizesToWritableMemory` and `bufferizesToWritableMemory` instead of `getInPlace(BlockArgument)`. The analysis does not care about inplacability of block arguments. It only cares whether the buffer can be written to or not.
* The `kInPlaceResultsAttrName` op attribute is for testing purposes only.
This commit further decouples BufferizationAliasInfo from other dialects such as SCF.
Differential Revision: https://reviews.llvm.org/D113375
After replacing the init_tensor with a new value, the new value must be inserted into the corresponding union/equivalence sets.
Differential Revision: https://reviews.llvm.org/D113374
Use the existing helper instead of handling only a subset of the index lowering
arithmetic. Also relax the restriction on the memref rank for the GPU mma ops,
as we can now support any rank.
Differential Revision: https://reviews.llvm.org/D113383
Remove the getSmallestBoundingIndex method that has no uses anymore.
Depends On D113548
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D113549
Replace the getSmallestBoundingIndex method used in padding by getConstantUpperBoundForIndex that uses flat affine constraints to compute a constant upper bound.
Depends On D113398
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D113546
Add a flag to control if CodegenStrategy runs the EnablePass between the transformations.
Depends On D113382
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D113409
Remove unused members and store the indexing and packing loops in SmallVector.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D113398
Remove the padding options from the tiling options since padding is now implemented by a separate pattern/pass introduced in https://reviews.llvm.org/D112412.
The revision removes tile-and-pad-tensors.mlir and replaces it with pad.mlir, which tests padding in isolation (without tiling). Similarly, hoist-padding.mlir is replaced by pad-and-hoist.mlir introduced in https://reviews.llvm.org/D112713.
Depends On D112838
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D113382
This is useful for ops such as scf::IfOp, which always bufferize in-place.
This commit is in preparation of decoupling BufferizationAliasInfo from the SCF dialect.
Differential Revision: https://reviews.llvm.org/D113339
Enables using the same iterator interface for these even though the underlying storage is different.
Differential Revision: https://reviews.llvm.org/D113512
This breaking change requires removing printing of the mnemonic in the print()
method on Type/Attribute classes.
This makes it consistent with the parsing code, which already handles the
mnemonic outside of the parsing method.
This likely won't break the build for anyone, but tests will start
failing for dialects downstream. The fix is trivial and looks like
going from:
void emitc::OpaqueType::print(DialectAsmPrinter &printer) const {
printer << "opaque<\"";
to:
void emitc::OpaqueAttr::print(DialectAsmPrinter &printer) const {
printer << "<\"";
Reviewed By: rriddle, aartbik
Differential Revision: https://reviews.llvm.org/D113334
Add a new `useDefaultTypePrinterParser` boolean settings on the dialect
(default to false for now) that emits the boilerplate to dispatch type
parsing/printing to the auto-generated method.
We will likely turn this on by default in the future.
Differential Revision: https://reviews.llvm.org/D113332
Add a new `useDefaultAttributePrinterParser` boolean settings on the dialect
(default to false for now) that emits the boilerplate to dispatch attribute
parsing/printing to the auto-generated method.
We will likely turn this on by default in the future.
Differential Revision: https://reviews.llvm.org/D113329
This predates the templated variant, and has been simply forwarding
to getSplatValue<Attribute> for some time. Removing this makes the
API a bit more uniform, and also helps prevent users from thinking
it is "cheap".
There are several aspects of the API that either aren't easy to use or make it
deceptively easy to do the wrong thing. The main change of this commit
is to remove all of the `getValue<T>`/`getFlatValue<T>` from ElementsAttr
and instead provide operator[] methods on the ranges returned by
`getValues<T>`. This provides a much more convenient API for the value
ranges. It also removes the easy-to-be-inefficient nature of
getValue/getFlatValue, which under the hood would construct a new range for
the type `T`. Constructing a range is not necessarily cheap in all cases, and
could lead to very poor performance if used within a loop; i.e. if you were to
naively write something like:
```
DenseElementsAttr attr = ...;
for (int i = 0; i < size; ++i) {
  // We are internally rebuilding the APFloat value range on each iteration!!
  APFloat it = attr.getFlatValue<APFloat>(i);
}
```
Differential Revision: https://reviews.llvm.org/D113229
Add a new directive `either` to specify that the operands can be matched in either order.
Reviewed By: jpienaar, Mogball
Differential Revision: https://reviews.llvm.org/D110666
Generate a static function for matching the type/attribute to reduce the
memory footprint.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D110199
Add pad_const field to tosa.pad.
Add builders to enable optional construction of pad_const in pad op.
Update documentation of tosa.clamp to match spec wording.
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D113322
Declarative attribute and type formats with assembly formats. Define an
`assemblyFormat` field in attribute and type defs with a `mnemonic` to
generate a parser and printer.
```tablegen
def MyAttr : AttrDef<MyDialect, "MyAttr"> {
  let parameters = (ins "int64_t":$count, "AffineMap":$map);
  let mnemonic = "my_attr";
  let assemblyFormat = "`<` $count `,` $map `>`";
}
```
Use `struct` to define a comma-separated list of key-value pairs:
```tablegen
def MyType : TypeDef<MyDialect, "MyType"> {
  let parameters = (ins "int":$one, "int":$two, "int":$three);
  let mnemonic = "my_attr";
  let assemblyFormat = "`<` $three `:` struct($one, $two) `>`";
}
```
Use `struct(*)` to capture all parameters.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111594
Added the omp.sections and omp.section operations according to
section 2.8.1 of the OpenMP 5.0 standard.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D110844
[NFC] This patch fixes URLs containing "master". Old URLs were either broken or
redirecting to the new URL.
Reviewed By: #libc, ldionne, mehdi_amini
Differential Revision: https://reviews.llvm.org/D113186
This commit separates the bufferization from the bufferization pass in Linalg. This allows other dialects to use ComprehensiveBufferize more easily.
This commit mainly moves files to a new directory and adds a new build target.
Differential Revision: https://reviews.llvm.org/D112989
AllocationCallbacks functions allocate/deallocate only. They no longer set the insertion point.
This is in preparation of decoupling ComprehensiveBufferize from the Linalg dialect.
Differential Revision: https://reviews.llvm.org/D112991
Move dialect-specific and analysis-specific function out of BufferizationAliasInfo. BufferizationAliasInfo's only job now is to keep track of aliases.
This is in preparation of further decoupling ComprehensiveBufferize from various dialects.
Differential Revision: https://reviews.llvm.org/D112992
By default, OpResult buffers are writable. But there are ops (e.g., ConstantOp) for which this is not the case.
The purpose of this commit is to further decouple Comprehensive Bufferize from the Standard dialect.
Differential Revision: https://reviews.llvm.org/D112908
This is in preparation of decoupling BufferizableOpInterface, Comprehensive Bufferize and dialects.
The goal of this CL is to make `getResultBuffer` (and other `bufferize` functions) independent of `LinalgOps`.
Differential Revision: https://reviews.llvm.org/D112907
These two methods are redundant and removed:
* `bufferizesToAliasOnly`: If not `bufferizesToMemoryRead` and not `bufferizesToMemoryWrite` but `getAliasingOpResult` returns a non-null value, we know that this OpOperand is alias-only. This method now has a default implementation and does not have to be implemented.
* `getInplaceableOpResult`: The analysis does not differentiate between "inplaceable" and "aliasing". The only thing that matters is whether or not OpOperand and OpResult are aliasing. That is the key property that makes buffer copies necessary.
Differential Revision: https://reviews.llvm.org/D112902
- String binary search does 1 less string comparison
- Identifier linear scan on large attribute list is switched to string binary search
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D112970
This allows for external users of Comprehensive Bufferize to specify their own InitTensorOp elimination procedures.
Differential Revision: https://reviews.llvm.org/D112686
The main benefits of this change are faster access to operands
(no need to compute the offset, as it is now right after the
operation) and simpler code (no need to manage a lot of the "is the
operand storage trailing" logic we had to manage before). The major
downside, though, is that operand-holding operations now
grow in size by 1 word (as no matter how we do this change, there
will need to be some additional bookkeeping).
Differential Revision: https://reviews.llvm.org/D111695
A quick grep for NDEBUG in MLIR revealed a use in DebugActions.h that breaks the ABI. This patch changes the use of NDEBUG to LLVM_ENABLE_ABI_BREAKING_CHECKS, which has the advantage of being independent of whether clients build their own app in debug or release, as it is purely dependent on how MLIR itself was built.
Differential Revision: https://reviews.llvm.org/D113088
- Provide the operator overloads for constructing (semi-)affine expressions in
Python by combining existing expressions with constants.
- Make AffineExpr, AffineMap and IntegerSet hashable in Python.
- Expose the AffineExpr composition functionality.
Reviewed By: gysit, aoyal
Differential Revision: https://reviews.llvm.org/D113010
The current setup of LinalgTransformationFilter allows a
transformation to trigger when either
1) The StringAttr is not set and no filter identifier is specified.
2) The StringAttr is set and its value matches (one of) the provided
identifiers.
This misses the case where the transformation should trigger either
when the attribute is not set or when its value matches (one of) the
provided identifiers. Since `Identifier` does not allow empty strings,
add a boolean option to match when the attribute is not set. This
option is off by default.
Differential Revision: https://reviews.llvm.org/D113057
The 2-D case can be rewritten to generate far fewer instructions and a single vector.shuffle, which seems to provide a nice performance boost.
Add this arrow to our quiver by exposing it with a new vector transform option.
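A rough sketch of the idea on a small 2x2 case (the actual generated IR may differ):
```
%f = vector.shape_cast %v : vector<2x2xf32> to vector<4xf32>
%s = vector.shuffle %f, %f [0, 2, 1, 3] : vector<4xf32>, vector<4xf32>
%t = vector.shape_cast %s : vector<4xf32> to vector<2x2xf32>
```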
Differential Revision: https://reviews.llvm.org/D113062