Distribute loops generated in Linalg to processors.
This change adds infrastructure to distribute the loops generated in
Linalg to processors at the time of generation. This addresses the use
case where loops are instantiated only so that they can be distributed.
The option to distribute is added to TilingOptions for now, allowing
the distribution to be specified as a transformation option, just like
tiling and promotion.
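As a rough illustration, a client might configure the new option along these lines (a minimal sketch assuming the LinalgLoopDistributionOptions/ProcInfo/DistributionMethod hooks introduced here; exact names and signatures are illustrative, not authoritative):
```
#include "mlir/Dialect/Linalg/Transforms/Transforms.h"
#include "mlir/Dialect/StandardOps/IR/Ops.h"

using namespace mlir;
using namespace mlir::linalg;

LinalgTilingOptions makeDistributedTilingOptions() {
  LinalgLoopDistributionOptions distribution;
  // Supplies a processor id and processor count for each distributed loop.
  distribution.procInfo = [](OpBuilder &b, Location loc,
                             ArrayRef<Range> parallelLoopRanges) {
    SmallVector<ProcInfo, 2> procInfo(parallelLoopRanges.size());
    for (ProcInfo &pi : procInfo) {
      // Trivial 1-processor "grid" for illustration; a real client would
      // materialize e.g. gpu.block_id / gpu.grid_dim values here.
      pi.procId = b.create<ConstantIndexOp>(loc, 0);
      pi.nprocs = b.create<ConstantIndexOp>(loc, 1);
    }
    return procInfo;
  };
  distribution.distributionMethod = {DistributionMethod::Cyclic,
                                     DistributionMethod::Cyclic};
  // Distribution composes with tiling, just like promotion does.
  return LinalgTilingOptions()
      .setTileSizes({8, 16})
      .setLoopType(LinalgTilingLoopType::ParallelLoops)
      .setDistributionOptions(distribution);
}
```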
Differential Revision: https://reviews.llvm.org/D85147
- Fix the ODS framework to suppress build methods that infer result types when
they are ambiguous with the collective variants. This applies to operations with
a single variadic input whose result types can be inferred.
- Extend OpBuildGenTest to test these kinds of ops.
Differential Revision: https://reviews.llvm.org/D85060
Previously, this memory leaked on the heap. Since an MlirOperationState is not intended to be used again after mlirOperationCreate, the patch simply frees the memory in mlirOperationCreate instead of adding any new API.
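The intended usage pattern, sketched against the C API as of this change (the op name below is a placeholder):
```
#include "mlir-c/IR.h"

void createOp(MlirContext ctx) {
  MlirLocation loc = mlirLocationUnknownGet(ctx);
  MlirOperationState state = mlirOperationStateGet("dialect.op", loc);
  // The state is consumed by mlirOperationCreate; it must not be reused, and
  // whatever it allocated on the heap is freed inside the call.
  MlirOperation op = mlirOperationCreate(&state);
  // `state` is dead here; only `op` needs explicit destruction.
  mlirOperationDestroy(op);
}
```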
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D85629
This diff attempts to resolve the TODO in `getOpIndexSet` (formerly
known as `getInstIndexSet`), which states "Add support to handle IfInsts
surrounding `op`".
Major changes in this diff:
1. Overload `getIndexSet`. The overloaded version considers both
`AffineForOp` and `AffineIfOp`.
2. Update `getInstIndexSet` accordingly: rename it to `getOpIndexSet` and
base its implementation on a new API, `getIVs`, instead of `getLoopIVs`.
3. Add `addAffineIfOpDomain` to `FlatAffineConstraints`, which extracts
new constraints from the integer set of an `AffineIfOp` and merges them into
the current constraint system.
4. Update how a `Value` is classified as a dim or a symbol for the
`ValuePositionMap` in `buildDimAndSymbolPositionMaps`.
Differential Revision: https://reviews.llvm.org/D84698
This patch also fixes a minor issue: shape.rank should be allowed to
return !shape.size. The dialect doc has such an example for shape.rank.
Differential Revision: https://reviews.llvm.org/D85556
This reverts commit 9f24640b7e.
We hit some deadlocks on thread exit in some configurations: the TLS exit handler takes a lock.
Temporarily reverting this change while we debug what is going on.
This revision aims to provide a new API, `checkTilingLegality`, to
verify that the loop tiling result still satisfies the dependence
constraints of the original loop nest.
Previously, there was no check for the validity of tiling. For instance:
```
func @diagonal_dependence() {
  %A = alloc() : memref<64x64xf32>
  affine.for %i = 0 to 64 {
    affine.for %j = 0 to 64 {
      %0 = affine.load %A[%j, %i] : memref<64x64xf32>
      %1 = affine.load %A[%i, %j - 1] : memref<64x64xf32>
      %2 = addf %0, %1 : f32
      affine.store %2, %A[%i, %j] : memref<64x64xf32>
    }
  }
  return
}
```
You can find more information about this example in Section 3.11 of [1].
In general, there are three dependences here: two flow dependences, one
in direction `(i, j) = (0, 1)` (notation that depicts a vector in the 2D
iteration space) and one in `(i, j) = (1, -1)`; and one anti dependence
in direction `(-1, 1)`.
Since two of them run along the diagonal in opposite directions, the
default tiling method in `affine`, which tiles the iteration space into
rectangles, violates the legality condition proposed by Irigoin and
Triolet [2]: no two tiles may depend on each other, while in the `affine`
tiling case two rectangles along the same diagonal are indeed mutually
dependent, which violates the rule.
This diff puts together a validator that checks whether the rule from [2]
would be violated when applying the default tiling method in `affine`.
The canonical way to perform such validation is to examine the effect of
adding the constraint from Irigoin and Triolet to the existing
dependence constraints.
Since we already have the prior knowledge that `affine` tiles in a
hyper-rectangular way, and that the resulting tiles will be scheduled in the
same order as their respective loop indices, we can simplify the
solution to checking that all dependence components are non-negative
along the tiling dimensions.
We put this algorithm into a new API called `checkTilingLegality` in
`LoopTiling.cpp`. This function iterates over every `load`/`store` pair
and, whenever there is a dependence between them, retrieves the dependence
components and checks that none of them is negative. It returns
`failure` if the legality condition is violated.
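A condensed sketch of that check, written against the dependence-analysis utilities in `mlir/Analysis` (the actual implementation in `LoopTiling.cpp` may differ in details):
```
#include "mlir/Analysis/AffineAnalysis.h"
#include "mlir/Analysis/AffineStructures.h"
#include "mlir/Dialect/Affine/IR/AffineOps.h"
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

static LogicalResult checkTilingLegality(MutableArrayRef<AffineForOp> origLoops) {
  // Gather all affine load/store ops in the nest.
  SmallVector<Operation *, 8> loadAndStoreOps;
  origLoops[0].getOperation()->walk([&](Operation *op) {
    if (isa<AffineLoadOp>(op) || isa<AffineStoreOp>(op))
      loadAndStoreOps.push_back(op);
  });

  unsigned numLoops = origLoops.size();
  for (Operation *srcOp : loadAndStoreOps) {
    for (Operation *dstOp : loadAndStoreOps) {
      MemRefAccess srcAccess(srcOp), dstAccess(dstOp);
      for (unsigned depth = 1; depth <= numLoops + 1; ++depth) {
        FlatAffineConstraints dependenceConstraints;
        SmallVector<DependenceComponent, 2> depComps;
        DependenceResult result = checkMemrefAccessDependence(
            srcAccess, dstAccess, depth, &dependenceConstraints, &depComps);
        if (!hasDependence(result))
          continue;
        // Rectangular tiling is legal only if every dependence component is
        // non-negative along the tiled dimensions.
        for (const DependenceComponent &comp : depComps)
          if (comp.lb.hasValue() && comp.lb.getValue() < 0)
            return failure();
      }
    }
  }
  return success();
}
```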
[1] Uday Bondhugula. Effective Automatic Parallelization and Locality Optimization Using the Polyhedral Model. https://dl.acm.org/doi/book/10.5555/1559029
[2] F. Irigoin and R. Triolet. Supernode Partitioning. https://dl.acm.org/doi/10.1145/73560.73588
Differential Revision: https://reviews.llvm.org/D84882
Implement the Reduction Tree Pass framework as part of the MLIR Reduce tool. This is a parameterizable pass that allows for the implementation of custom reduction passes in the tool.
Implement the FunctionReducer class as an example of a Reducer class parameter for the instantiation of a Reduction Tree Pass.
Create a pass pipeline with a Reduction Tree Pass, with the FunctionReducer class specified as its parameter.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D83969
This also beefs up the test coverage:
- Make unranked memref testing consistent with ranked memrefs.
- Add testing for the invalid element type cases.
This is not quite NFC: index types are now allowed in unranked memrefs.
Differential Revision: https://reviews.llvm.org/D85541
This simple patch translates the num_threads and if clauses of the parallel
operation, and includes test cases.
A minor change was made to the parsing of the if clause: it now parses AnyType
and returns the parsed type. Test cases were updated accordingly.
Reviewed by: SouraVX
Differential Revision: https://reviews.llvm.org/D84798
This revision refactors the default definitions of the attribute and type `classof` methods to use the TypeID of the concrete class instead of invoking the `kindof` method. The TypeID is already used as part of uniquing, and this removes the need for users to define any of the type casting utilities themselves.
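Schematically, the defaulted method now amounts to a TypeID comparison (a sketch; the real definition lives in the type/attribute base-class infrastructure):
```
#include "mlir/IR/Types.h"

// Illustrative stand-in for any concrete type class: no user-written
// `kindof` is needed, since the TypeID uniquely identifies the class.
template <typename ConcreteT>
bool defaultClassof(mlir::Type type) {
  return type.getTypeID() == mlir::TypeID::get<ConcreteT>();
}
```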
Differential Revision: https://reviews.llvm.org/D85356
Subclass data is useful when a certain amount of memory is allocated but not all of it is used. For Type, that has not been the case for a while, and the subclass data just takes up a full `unsigned`. Removing it frees up ~8 bytes for almost every type instance.
Differential Revision: https://reviews.llvm.org/D85348
This class allows for defining thread-local objects that have a non-static lifetime. The internals of the cache use a static thread_local map between the various non-static objects and the desired value type. When a non-static object is destroyed, it simply nulls out its entry in the static map. This leaves an entry in the map but erases the data for the associated value. The current use cases are in the MLIRContext, meaning that the static map holds ~1-2 items, which is not costly enough to warrant the complexity of pruning. If a use case arises that requires pruning of the map, the functionality can be added.
This is especially useful in the context of MLIR for implementing thread-local caching of context-level objects that would otherwise have very high lock contention. This revision adds a thread-local cache in the MLIRContext for attributes, identifiers, and types to reduce some of the locking burden. This led to a speedup of several hundred milliseconds when compiling a conversion pass on a very large MLIR module (>300K operations).
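Illustrative usage of the class (assuming the interface from mlir/Support/ThreadLocalCache.h added here, with `get()` returning the calling thread's value):
```
#include "mlir/Support/ThreadLocalCache.h"
#include "llvm/ADT/DenseMap.h"

struct ContextState {
  // Each thread lazily materializes its own map; all per-thread values die
  // with this (non-static) ContextState object.
  mlir::ThreadLocalCache<llvm::DenseMap<const void *, int>> localCache;

  int &lookup(const void *key) {
    // No lock needed: `get()` hands back the calling thread's own instance.
    return localCache.get()[key];
  }
};
```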
Differential Revision: https://reviews.llvm.org/D82597
This allows for bucketing the different possible storage types, with each bucket having its own allocator/mutex/instance map. This greatly reduces the amount of lock contention when multi-threading is enabled. On some non-trivial .mlir modules (>300K operations), this led to a compile-time decrease of around half a second (>25%) for a single conversion pass.
Differential Revision: https://reviews.llvm.org/D82596
This change adds initial support needed to generate OpenCL-compliant SPIR-V.
If the Kernel capability is declared, the memory model becomes OpenCL.
If the Addresses capability is declared, the addressing model becomes Physical64.
Additionally, for the Kernel capability, interface variable ABI attributes are not
generated, since the entry point function is expected to take normal arguments.
Differential Revision: https://reviews.llvm.org/D85196
This revision adds a folding pattern that replaces affine.min ops by the actual min value when it can be determined statically from the strides and bounds of the enclosing scf loops.
This matches the type of expressions that Linalg produces during tiling and simplifies boundary checks. For now Linalg depends on both Affine and SCF, but they do not depend on each other, so the pattern is added to Linalg.
In the future this will move to a more appropriate place once that is determined.
The canonicalization of AffineMinOp operations in the context of enclosing scf.for and scf.parallel proceeds by:
1. building an affine map where uses of the induction variable of a loop
are replaced by `%lb + %step * floordiv(%iv - %lb, %step)` expressions.
2. checking if any of the results of this affine map divides all the other
results (in which case it is also guaranteed to be the min).
3. replacing the AffineMinOp by the result of (2).
The algorithm is functional in simple parametric tiling cases by using semi-affine maps. However, simplifications of such semi-affine maps are not yet available, so the canonicalization does not yet succeed in those cases.
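Step 1 in isolation, as a sketch with the AffineExpr API (the dims here stand for `%iv`, `%lb` and `%step`, which would be IR values in the actual pattern):
```
#include "mlir/IR/AffineExpr.h"

mlir::AffineExpr quantizedIV(mlir::MLIRContext *ctx) {
  mlir::AffineExpr iv, lb, step;
  mlir::bindDims(ctx, iv, lb, step);
  // %lb + %step * floordiv(%iv - %lb, %step); semi-affine because %step is
  // not a constant, which is exactly the simplification gap noted above.
  return lb + step * (iv - lb).floorDiv(step);
}
```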
Differential Revision: https://reviews.llvm.org/D82009
Using a shuffle for the last recursive step in progressive lowering not only
results in much more compact IR, but also in more efficient code (since the
backend is no longer confused about subvector aliasing for longer vectors).
E.g. the following
%f = vector.shape_cast %v0: vector<1024xf32> to vector<32x32xf32>
yields much better x86-64 code that runs 3x faster than the original.
Reviewed By: bkramer, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D85482
This patch moves the registration to a method on the MLIRContext: getOrCreateDialect<ConcreteDialect>()
This method requires the dialect to provide a static getDialectNamespace()
and to store a TypeID on the Dialect itself, which allows lazily creating
a dialect when it is not yet loaded in the context.
As a side effect, duplicate registration of the same dialect is no longer
an issue.
To limit the boilerplate, TableGen dialect generation is modified to
emit the constructor entirely and to invoke a separate "init()" method
that the user implements.
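For a hand-written dialect, the new scheme looks roughly like this (a sketch; the constructor is shown schematically and ToyDialect is a hypothetical example):
```
#include "mlir/IR/Dialect.h"
#include "mlir/IR/MLIRContext.h"

class ToyDialect : public mlir::Dialect {
public:
  explicit ToyDialect(mlir::MLIRContext *ctx)
      : Dialect(getDialectNamespace(), ctx, mlir::TypeID::get<ToyDialect>()) {}
  static llvm::StringRef getDialectNamespace() { return "toy"; }
};

void loadToy(mlir::MLIRContext &ctx) {
  // Creates and registers the dialect on first use; later calls just return
  // the existing instance, so duplicate registration is harmless.
  ToyDialect *toy = ctx.getOrCreateDialect<ToyDialect>();
  (void)toy;
}
```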
Differential Revision: https://reviews.llvm.org/D85495
The original modeling of LLVM IR types in the MLIR LLVM dialect wrapped LLVM IR
types and therefore required the LLVMContext in which they were created to
outlive them; this was solved by placing the LLVMContext inside the dialect,
thus giving it the lifetime of the MLIRContext. This has led to numerous issues
caused by the lack of thread-safety of LLVMContext and by the need to re-create
LLVM IR modules, obtained by translating from MLIR, in different LLVM contexts
to enable parallel compilation. Similarly, an llvm::Module had been introduced
to keep track of identified structure types that could not be modeled properly.
A recent series of commits changed the modeling of LLVM IR types in the MLIR
LLVM dialect so that it no longer wraps LLVM IR types and has no dependence on
LLVMContext and changed the ownership model of the translated LLVM IR modules.
Remove the LLVMContext and LLVM modules from the implementation of the MLIR LLVM
dialect and clean up the remaining uses.
The only part of LLVM IR that remains necessary for the LLVM dialect is the
data layout. It should be moved from the dialect level to the module level and
replaced with an MLIR-based representation, removing the LLVMDialect's
dependency on the LLVM IR library.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85445
Historically, LLVMDialect has been required in the conversion from LLVM IR in
order to be able to construct types. This is no longer necessary with the new
type model and the dialect can be replaced with a local LLVM context.
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85444
Due to the original type system implementation, LLVMDialect in MLIR contains an
LLVMContext in which the relevant objects (types, metadata) are created. When
an MLIR module using the LLVM dialect (and the related intrinsic-based dialects
NVVM, ROCDL, AVX512) is converted to LLVM IR, the resulting module could only
live in the LLVMContext owned by the dialect. The type system no longer relies
on that LLVMContext, so this limitation can be removed. Instead, translation
functions now take a reference to an LLVMContext in which the LLVM IR module
should be constructed. The caller of the translation functions is responsible
for ensuring the same LLVMContext is not used concurrently, as the translation
no longer uses a dialect-wide context lock.
As an additional bonus, this change removes the need to recreate the LLVM IR
module in a different LLVMContext through printing and parsing back, decreasing
the compilation overhead in JIT and GPU-kernel-to-blob passes.
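The new calling convention, sketched (the caller owns the LLVMContext and guarantees it is not shared across concurrent translations):
```
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "mlir/IR/Module.h"
#include "mlir/Target/LLVMIR.h"

std::unique_ptr<llvm::Module> lower(mlir::ModuleOp mlirModule,
                                    llvm::LLVMContext &llvmContext) {
  // The LLVM IR module is created in the caller-provided context, so it can
  // outlive any MLIR-side state and be handed to a per-thread JIT directly.
  return mlir::translateModuleToLLVMIR(mlirModule, llvmContext);
}
```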
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85443
This new pattern mixes vector.transpose and direct lowering to vector.reduce.
This allows more progressive lowering than immediately going to insert/extract and
composes more nicely with other canonicalizations.
This has two use cases:
1. for very wide vectors the generated IR may be much smaller
2. when we have a custom lowering for transpose ops we can target it directly
rather than rely on LLVM
Differential Revision: https://reviews.llvm.org/D85428
When any of the memrefs in a structured linalg op has a zero dimension, the op becomes dead.
This is consistent with the fact that linalg ops deduce their loop bounds from their operands.
Note however that this is not the case for `tensor<0xelt_type>`, which is a special convention
that must be lowered away into either `memref<elt_type>` or just `elt_type` before this
canonicalization can kick in.
Differential Revision: https://reviews.llvm.org/D85413
The RewritePattern will become one of several, and will be part of the LLVM conversion pass (instead of a separate pass following LLVM conversion).
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D84946
The historical modeling of the LLVM dialect types wrapped LLVM IR types and
therefore needed access to the instance of LLVMContext stored in the
LLVMDialect. The new modeling does not rely on that and only needs the
MLIRContext that is used for uniquing, similarly to other MLIR types. Change
the LLVMType::get<Kind>Ty functions to take `MLIRContext *` instead of
`LLVMDialect *` as their first argument. This brings the code base closer to
completely removing the dependence of the LLVMDialect on LLVMContext,
together with additional support for thread-safety of its use.
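At a call site, the change looks like this (getInt32Ty chosen as a representative of the get<Kind>Ty family):
```
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"

mlir::LLVM::LLVMType makeI32(mlir::MLIRContext *mlirContext) {
  // Previously: LLVMType::getInt32Ty(llvmDialect). Only the MLIRContext used
  // for uniquing is needed now, as with other MLIR types.
  return mlir::LLVM::LLVMType::getInt32Ty(mlirContext);
}
```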
Depends On D85371
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85372
This prepares for the removal of llvm::Module and LLVMContext from the
mlir::LLVMDialect.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85371
The intrinsics were already supported, and vector.transfer_read/write lowered
directly into these operations. By providing them as individual ops, however,
clients can use them directly, and it opens up progressively lowering transfer
operations at higher levels (rather than the direct lowering to LLVM IR done now).
Reviewed By: bkramer
Differential Revision: https://reviews.llvm.org/D85357
The previous type model in the LLVM dialect did not support identified structure
types properly and therefore could use stateless translations implemented as
free functions. The new model supports identified structs and must keep track
of the identified structure types present in the target context (LLVMContext or
MLIRContext) to avoid creating duplicate structs due to LLVM's type
auto-renaming. Expose the stateful type translation classes and use them during
translation, storing the state as part of ModuleTranslation.
Drop the test type translation mechanism that is no longer necessary and update
the tests to exercise type translation as part of the main translation flow.
Update the code in vector-to-LLVM dialect conversion that relied on stateless
translation to use the new class in a stateless manner.
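A hedged sketch of that stateless use, assuming the exposed class is spelled TypeToLLVMIRTranslator with a translateType entry point (treat both as illustrative):
```
#include "mlir/Target/LLVMIR/TypeTranslation.h"

llvm::Type *convertType(mlir::LLVM::LLVMType type,
                        llvm::LLVMContext &llvmContext) {
  // A locally constructed translator carries its identified-struct state only
  // for the duration of this call, i.e. it is used in a stateless manner.
  mlir::LLVM::TypeToLLVMIRTranslator translator(llvmContext);
  return translator.translateType(type);
}
```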
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85297
Per Vulkan's SPIR-V environment spec: "While the OpSRem and OpSMod
instructions are supported by the Vulkan environment, they require
non-negative values and thus do not enable additional functionality
beyond what OpUMod provides."
The `getOffsetForBitwidth` function is used for lowering std.load
and std.store, whose indices are of `index` type and cannot be
negative, so we should be okay to use spv.UMod directly here. Also
made the comment explicit about this assumption.
Differential Revision: https://reviews.llvm.org/D83714
If Int16 is not available, 16-bit integers inside the Workgroup storage
class should be emulated via 32-bit integers. This was previously
broken because the capability-querying logic incorrectly intercepted
all storage classes when it was meant to handle only interface storage
classes. Adjusted where we return to fix this.
Differential Revision: https://reviews.llvm.org/D85308
`promoteMemRefDescriptors` also converts the types of every operand, not only
the memref-typed ones; the name `promoteMemRefDescriptors` does not imply that.
Differential Revision: https://reviews.llvm.org/D85325
Previously, `LinalgOperand` was defined with `Type<Or<..,>>`, which produces
hard-to-read error messages when it is not matched, e.g.:
```
'linalg.generic' op operand #0 must be anonymous_326, but got ....
```
This is simply because the `description` property is not properly set.
This diff switches to `AnyTypeOf` for `LinalgOperand`, which automatically
generates a description based on the allowed types provided.
As a result, the error message now becomes:
```
'linalg.generic' op operand #0 must be ranked tensor of any type values or strided memref of any type values, but got ...
```
This is clearer and more informative.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D84428