Vector to SCF conversion still had issues due to the interaction with the natural alignment derived from the LLVM data layout. One traditional workaround is to allocate with explicit alignment. However, this does not always work for vector sizes that are not powers of 2.
This revision implements a more portable mechanism where the intermediate allocation is always a memref of the elemental vector type. AllocOp is extended to use the natural LLVM DataLayout alignment for non-scalar types when no alignment is specified in the first place.
An integration test is added that exercises the transfer to scf.for + scalar lowering with a 5x5 transposition.
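For illustration, a minimal sketch of the new intermediate buffer for such a 5x5 case (names are hypothetical):

```mlir
// The buffer is a memref of the elemental vector type; since no alignment
// attribute is given, AllocOp now picks the natural LLVM DataLayout
// alignment for the vector element type.
%tmp = alloc() : memref<5xvector<5xf32>>
```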
Differential Revision: https://reviews.llvm.org/D87150
* Resolves todos from D87091.
* Also modifies PyConcreteAttribute to follow suit (should be useful for ElementsAttr and friends).
* Adds a test to ensure that the ShapedType base class functions as expected.
Differential Revision: https://reviews.llvm.org/D87208
Based on the PyType and PyConcreteType classes, this patch implements the bindings of Shaped Type, Tensor Type and MemRef Type subclasses.
The Tensor Type and MemRef Type are bound as ranked and unranked separately.
This patch adds the ***GetChecked C API to make sure the python side can get a valid type or a nullptr.
Shaped type is not itself a standard type; it is the base class for vectors, memrefs and tensors. This patch binds the PyShapedType class as the base class of the Vector Type, Tensor Type and MemRef Type subclasses.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D87091
These fields will be used to choose/influence patterns for
SPIR-V code generation.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D87106
This refactors the standalone-translate executable to use mlirTranslateMain() declared in Translation.h and further applies D87129.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D87131
Drops the include on InitAllDialects.h, as dialects are now initialized in the translation passes.
Differential Revision: https://reviews.llvm.org/D87129
Historically, the operations in MLIR's LLVM dialect only checked that the
operands are of LLVM dialect type, without more detailed constraints. This was
due to LLVM dialect types wrapping LLVM IR types and having clunky verification
methods. With the new first-class modeling, it is possible to define type
constraints similarly to other dialects and use them to enforce some
correctness rules in verifiers instead of having LLVM assert during translation
to LLVM IR. This hardening discovered several issues where MLIR was producing
LLVM dialect operations that cannot exist in LLVM IR.
Depends On D85900
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85901
When allowed, use 32-bit indices rather than 64-bit indices in the
SIMD computation of masks. This runs up to 2x and 4x faster on
a number of AVX2 and AVX512 microbenchmarks, respectively.
Reviewed By: bkramer
Differential Revision: https://reviews.llvm.org/D87116
Its handling is similar to optional attributes, except for the
getter method.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D87055
This is the first bit from D73546. Primarily setting up the corresponding test. Will add more pretty printers in a separate revision.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D86937
This allows building an OpPassManager from a StringRef instead of an
Identifier, which enables building pipelines without an MLIRContext.
An Identifier is still cached on-demand on the OpPassManager for efficiency
during the IR traversal.
Sizes of tiles (subviews) are bigger by 1 than they should be. Consider a 1D
convolution without batches or channels, and let m iterate over the output and
n over the kernel; the input is then accessed at m + n. During tiling, subview
sizes for convolutions are computed by applying the requested tile size
together with the kernel size to the above expression, so for a tile size of 2
the subview size becomes 2 + size(n), which is bigger by one than it should be,
since we move the kernel only once. Concretely, with tile size t = 2 and kernel
size k = 3, the needed input window is [0, (t-1) + (k-1)], i.e. t + k - 1 = 4
elements, while composing the half-open ranges directly yields t + k = 5. The
underlying problem is that the range is not turned into a closed interval
before the composition. This commit fixes the problem by first turning ranges
into closed intervals by subtracting 1 and, after the composition, back into
half-open ranges by adding 1.
Differential Revision: https://reviews.llvm.org/D86638
This patch adds basic support for vectorization of uniform values to the SuperVectorizer.
For now, only values invariant to the target vector loops are considered uniform. This
enables the vectorization of loops that use function arguments and definitions external
to the vector loops. We could extend uniform support in the future if we implement some
kind of divergence analysis algorithm.
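As a rough sketch of a loop that now vectorizes, assuming a function argument
used inside the loop (names are hypothetical):

```mlir
func @scale(%s: f32, %in: memref<128xf32>, %out: memref<128xf32>) {
  affine.for %i = 0 to 128 {
    %0 = affine.load %in[%i] : memref<128xf32>
    // %s is invariant to the target vector loop, i.e. uniform, so it can
    // simply be broadcast into a vector.
    %1 = mulf %0, %s : f32
    affine.store %1, %out[%i] : memref<128xf32>
  }
  return
}
```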
Reviewed By: nicolasvasilache, aartbik
Differential Revision: https://reviews.llvm.org/D86756
This defers the check for traits to execution time instead of forcing it at pipeline creation.
In particular, this makes pipeline creation tolerant of dialects not yet being loaded in the context.
Reviewed By: rriddle, GMNGeoffrey
Differential Revision: https://reviews.llvm.org/D86915
Make use of affine memory op interfaces in AffineLoopInvariantCodeMotion so
that it can also work on affine.vector_load and affine.vector_store ops.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D86986
Make sure that memory ops that are defined inside the loop are registered
as such in 'defineOp'. In the test provided, the 'mulf' op was hoisted
outside the loop nest even when its 'affine.load' operand was not.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D86982
Instead of storing a StringRef, we keep an Identifier, which otherwise requires a lock on the context to retrieve.
This will allow getting an Identifier for any registered Operation for "free".
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D86994
In this PR, the users of BufferPlacement can configure
BufferAssignmentTypeConverter. These new configurations give the user more
freedom in converting function signatures and in return and call
operation conversions.
These are the new features:
- Accepting callback functions for decomposing types (i.e. 1 to N type
conversion such as unpacking tuple types).
- Defining ResultConversionKind for specifying whether a function result
with a certain type should be appended to the function arguments list or
should be kept as function result. (Usage:
converter.setResultConversionKind<MemRefType>(AppendToArgumentList))
- Accepting callback functions for composing or decomposing values (i.e. N
to 1 and 1 to N value conversion).
Differential Revision: https://reviews.llvm.org/D85133
This reverts commit 94f5d24877 because
it caused the following tests to fail:
MLIR :: Dialect/Linalg/tensors-to-buffers.mlir
MLIR :: Transforms/buffer-placement-preparation-allowed-memref-results.mlir
MLIR :: Transforms/buffer-placement-preparation.mlir
Added 128-byte alignment to alloc ops created in the VectorToSCF pass.
128-byte alignment was already introduced to this pass, but not to all alloc
ops. This commit changes that by adding 128-byte alignment to the remaining ops.
The point of specifying alignment is to prevent possible memory alignment errors
on weakly tested architectures.
Differential Revision: https://reviews.llvm.org/D86454
Based on the PyType and PyConcreteType classes, this patch implements the bindings of Complex Type, Vector Type and Tuple Type subclasses.
For the convenience of type checking, this patch defines a `mlirTypeIsAIntegerOrFloat` function to check whether the given type is an integer or float type.
The three subclasses in this patch share a similar binding strategy:
- The function pointer `isaFunction` points to `mlirTypeIsA***`.
- The `mlir***TypeGet` C API is bound with the `get_***` method in the python side.
- The Complex Type and Vector Type check whether the given type is an integer or float type.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D86785
Unsigned and Signless attributes use uintN_t and signed attributes use intN_t, where N is the fixed width. The 1-bit variants use bool.
Differential Revision: https://reviews.llvm.org/D86739
This patch adds the missing operands to the acc.loop operation. Only the device_type
information is not part of the operation for now.
Reviewed By: rriddle, kiranchandramohan
Differential Revision: https://reviews.llvm.org/D86753
Clients who rely on the Context loading dialects from the global
registry can call `mlir::enableGlobalDialectRegistry(true);` before
creating an MLIRContext.
Differential Revision: https://reviews.llvm.org/D86897
This adds some initial support for regions and does not support formatting the specific arguments of a region. For now this can be achieved by using a custom directive that formats the arguments and then parses the region.
Differential Revision: https://reviews.llvm.org/D86760
Symbol names are a special form of StringAttr that get treated specially in certain areas, such as formatting. This revision adds a special derived attr for them in ODS and adds support in the assemblyFormat for formatting them properly.
Differential Revision: https://reviews.llvm.org/D86759
This revision adds support for custom directives to the declarative assembly format. This allows for users to use C++ for printing and parsing subsections of an otherwise declaratively specified format. The custom directive is structured as follows:
```
custom-directive ::= `custom` `<` UserDirective `>` `(` Params `)`
```
`user-directive` is used as a suffix when this directive is used during printing and parsing. When parsing, `parseUserDirective` will be invoked. When printing, `printUserDirective` will be invoked. The first parameter to these methods must be a reference to either the OpAsmParser or OpAsmPrinter. The types of the rest of the parameters depend on the `Params` specified in the assembly format.
Differential Revision: https://reviews.llvm.org/D84719
Full diagnostic was:
warning: base class ‘class mlir::OptReductionBase<mlir::OptReductionPass>’ should be explicitly initialized in the copy constructor [-Wextra]
* This is just enough to create regions/blocks and iterate over them.
* Does not yet implement the preferred iteration strategy (python pseudo containers).
* Refinements need to come after doing basic mappings of operations and values so that the whole hierarchy can be used.
Differential Revision: https://reviews.llvm.org/D86683
The tensor_reshape op was fusible only in the collapsing case. Now we
propagate the op to all the operands so there is a further chance to fuse it
with a generic op. The pre-conditions are:
1) The producer is not an indexed_generic op.
2) All the shapes of the operands are the same.
3) All the indexing maps are identity.
4) All the loops are parallel loops.
5) The producer has a single user.
It is possible to fuse the ops if the producer is an indexed_generic op. We
still can compute the original indices. E.g., if the reshape op collapses d0
and d1, we can use DimOp to get the width of d1, and calculate the index
`d0 * width + d1`. Then replace all the uses with it. However, this pattern is
not implemented in the patch.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86314
This is intended to ease the transition for clients with a lot of
dependencies. It'll be removed in the coming weeks.
Differential Revision: https://reviews.llvm.org/D86755
The prior diff that introduced `addAffineIfOpDomain` missed appending
constraints from the ifOp domain. This revision fixes this problem.
Differential Revision: https://reviews.llvm.org/D86421
Adding a conversion pattern for the parallel Operation. This will
help convert a parallel operation using the standard dialect to a
parallel operation using the LLVM dialect. The type conversion of the block
arguments in a parallel region is controlled by the pattern for the
parallel Operation. Without this pattern, a parallel Operation with
block arguments cannot be converted from standard to LLVM dialect.
Other OpenMP operations without regions are marked as legal. When
translation of OpenMP operations with regions is added, patterns
for these operations can also be added.
Also uses all the standard to llvm patterns. Patterns of other dialects
can be added later if needed.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D86273
When dealing with dialects that will result in function calls to
external libraries, it is important to be able to handle maps, as some
dialects may require mapped data. Before this patch, to detect
whether normalization can apply or not, operations were compared to an
explicit list of operations (`alloc`, `dealloc`, `return`) or to the
presence of specific operation interfaces (`AffineReadOpInterface`,
`AffineWriteOpInterface`, `AffineDMAStartOp`, or `AffineDMAWaitOp`).
This patch adds a trait, `MemRefsNormalizable`, to determine if an
operation can have its `memrefs` normalized.
This trait can be used in turn by dialects to assert that such
operations are compatible with normalization of `memrefs` with
nontrivial memory layout specification. An example is given in the
lit tests.
Differential Revision: https://reviews.llvm.org/D86236
This patch allows passing the gpu module name to the SPIR-V
module during conversion. This has many benefits, as we can look up
the converted SPIR-V kernel in the symbol table.
In order to avoid symbol conflicts, `"__spv__"` is added to the
gpu module name to form the new one.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86384
This patch introduces a hook to encode the descriptor set
and binding number into `spv.globalVariable`'s symbolic name. This
allows preserving this information, and at the same time legalizing
the global variable for the conversion to LLVM dialect.
This is required for `mlir-spirv-cpu-runner` to convert kernel
arguments into LLVM.
Also, a couple of nits:
- removed an unused comment
- changed to a capital letter in the comment
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86515
This makes OpPassManager more of a "container" of passes and no longer responsible for driving the execution.
As such we also make it publicly constructible, which will allow building arbitrary pipelines decoupled from the execution. We'll make use of this facility to expose "dynamic pipelines" in the future.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D86391
This patch updates the type conversion section of the documentation.
It includes the modelling of array strides and the mapping of the
naturally padded structs.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86674
This patch adds an optional name to SPIR-V module.
This will help with lowering from GPU dialect (so that we
can pass the kernel module name) and will be more naturally
aligned with `GPUModuleOp`/`ModuleOp`.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86386
As discussed in D86576, noundef attribute is removed from masked store/load/gather/scatter's
pointer operands.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D86656
This patch adds NoUndef to Intrinsics.td.
The attribute is attached to llvm.assume's operand, because llvm.assume(undef)
is UB.
It is attached to pointer operands of several memory accessing intrinsics
as well.
This change makes ValueTracking::getGuaranteedNonPoisonOps' intrinsic check
unnecessary, so it is removed.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D86576
The PDL Interpreter dialect provides a lower level abstraction compared to the PDL dialect, and is targeted towards low level optimization and interpreter code generation. The dialect operations encapsulate low-level pattern match and rewrite "primitives", such as navigating the IR (Operation::getOperand), creating new operations (OpBuilder::create), etc. Many of the operations within this dialect also fuse branching control flow with some form of a predicate comparison operation. This type of fusion reduces the amount of work that an interpreter must do when executing.
An example of this representation is shown below:
```mlir
// The following high level PDL pattern:
pdl.pattern : benefit(1) {
  %resultType = pdl.type
  %inputOperand = pdl.input
  %root, %results = pdl.operation "foo.op"(%inputOperand) -> %resultType
  pdl.rewrite %root {
    pdl.replace %root with (%inputOperand)
  }
}
}
// May be represented in the interpreter dialect as follows:
module {
  func @matcher(%arg0: !pdl.operation) {
    pdl_interp.check_operation_name of %arg0 is "foo.op" -> ^bb2, ^bb1
  ^bb1:
    pdl_interp.return
  ^bb2:
    pdl_interp.check_operand_count of %arg0 is 1 -> ^bb3, ^bb1
  ^bb3:
    pdl_interp.check_result_count of %arg0 is 1 -> ^bb4, ^bb1
  ^bb4:
    %0 = pdl_interp.get_operand 0 of %arg0
    pdl_interp.is_not_null %0 : !pdl.value -> ^bb5, ^bb1
  ^bb5:
    %1 = pdl_interp.get_result 0 of %arg0
    pdl_interp.is_not_null %1 : !pdl.value -> ^bb6, ^bb1
  ^bb6:
    pdl_interp.record_match @rewriters::@rewriter(%0, %arg0 : !pdl.value, !pdl.operation) : benefit(1), loc([%arg0]), root("foo.op") -> ^bb1
  }
  module @rewriters {
    func @rewriter(%arg0: !pdl.value, %arg1: !pdl.operation) {
      pdl_interp.replace %arg1 with(%arg0)
      pdl_interp.return
    }
  }
}
```
Differential Revision: https://reviews.llvm.org/D84579
This assertion does not achieve what it meant to do originally, as it
would fire only when applied to an unregistered operation, which is a
fairly rare circumstance (it needs a dialect or context allowing
unregistered operation in the input in the first place).
Instead we relax it to only fire when it should have matched but didn't
because of the misconfiguration.
Differential Revision: https://reviews.llvm.org/D86588
Instead of using the TypeConverter, infer the type of the alloca based
on the init value. This allows some ambiguous types like multidimensional
vectors to be converted correctly.
Differential Revision: https://reviews.llvm.org/D86582
Provides a fast, generic way of setting a mask up to a certain
point. Potential use cases that may benefit are create_mask
and transfer_read/write operations in the vector dialect.
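For reference, this is the kind of mask that benefits, sketched with a
hypothetical runtime bound %b:

```mlir
func @first_b_lanes(%b: index) -> vector<16xi1> {
  // Lanes [0, %b) are set to true, the remaining lanes to false.
  %mask = vector.create_mask %b : vector<16xi1>
  return %mask : vector<16xi1>
}
```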
Reviewed By: bkramer
Differential Revision: https://reviews.llvm.org/D86501
Based on the PyType and PyConcreteType classes, this patch implements the bindings of Index Type, Floating Point Type and None Type subclasses.
These three subclasses share the same binding strategy:
- The function pointer `isaFunction` points to `mlirTypeIsA***`.
- The `mlir***TypeGet` C API is bound with the `***Type` constructor in the python side.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D86466
* Generic mlir.ir.Attribute class.
* First standard attribute (mlir.ir.StringAttr), following the same pattern as generic vs standard types.
* NamedAttribute class.
Differential Revision: https://reviews.llvm.org/D86250
This will allow out-of-tree translations to register the dialects they expect
to see in their input, on the model of getDependentDialects() for passes.
Differential Revision: https://reviews.llvm.org/D86409
This patch updates the SPIR-V to LLVM conversion manual.
Particularly, the following sections are added:
- `spv.EntryPoint`/`spv.ExecutionMode` handling
- Mapping for `spv.AccessChain`
- Change in allowed storage classes for `spv.globalVariable`
- Change of the runner section name
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86288
Binding MemRefs of f16 needs special handling as the type is not supported on
CPU. There was a bug in the type used.
Differential Revision: https://reviews.llvm.org/D86328
Refactor the way the reduction tree pass works in the MLIR Reduce tool by introducing a set of utilities that facilitate the implementation of new Reducer classes to be used in the passes.
This will allow for the fast implementation of general transformations that operate on all mlir modules as well as custom transformations for different dialects.
These utilities allow for the implementation of Reducer classes by simply defining a method that indexes the operations/blocks/regions to be transformed and a method to perform the deletion or transformation based on the indexes.
Create the transformSpace class member in the ReductionNode class to keep track of the indexes that have already been transformed or deleted at the current level.
Delete the FunctionReducer class and replace it with the OpReducer class to reflect this new API, while performing the same transformation and allowing the instantiation of a reduction pass for different types of operations at the module's highest hierarchical level.
Modify the SinglePath Traversal method to reflect the use of the new API.
Reviewed: jpienaar
Differential Revision: https://reviews.llvm.org/D85591
Add a folder to the affine.parallel op so that loop bounds expressions are canonicalized.
Additionally, a new AffineParallelNormalizePass is added to adjust affine.parallel ops so that the lower bound is always 0 and the upper bound always represents a range with a step size of 1.
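A sketch of the normalization on a constant-bound loop (the rewritten body
shown here is an assumption about how the original induction value is
recovered, not the exact pass output):

```mlir
// Before: lower bound 2, step 4.
affine.parallel (%i) = (2) to (34) step (4) {
  "test.use"(%i) : (index) -> ()
}
// After: lower bound 0, step 1; uses see the recovered value 2 + 4 * %i.
affine.parallel (%i) = (0) to (8) {
  %v = affine.apply affine_map<(d0) -> (d0 * 4 + 2)>(%i)
  "test.use"(%v) : (index) -> ()
}
```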
Differential Revision: https://reviews.llvm.org/D84998
Removed the Standard to LLVM conversion patterns that were previously
pulled in for testing purposes. This helps to separate the conversion
to LLVM dialect of the MLIR module with both SPIR-V and Standard
dialects in it (particularly helpful for SPIR-V cpu runner). Also,
tests were changed accordingly.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86285
This patch adds the capability to perform constraint redundancy checks for `FlatAffineConstraints` using `Simplex`, via a new member function `FlatAffineConstraints::removeRedundantConstraints`. The pre-existing redundancy detection algorithm runs a full rational emptiness check for each inequality separately for checking redundancy. Leveraging the existing `Simplex` infrastructure, in this patch we have an algorithm for redundancy checks that can check each constraint by performing pivots on the tableau, which provides an alternative to running Fourier-Motzkin elimination for each constraint separately.
Differential Revision: https://reviews.llvm.org/D84935
- This utility to merge a block anywhere into another one can help inline single
block regions into other blocks.
- Modified patterns test to use the new function.
Differential Revision: https://reviews.llvm.org/D86251
Add the unsigned complements to the existing FPToSI and SIToFP operations in the
standard dialect, with one-to-one lowerings to the corresponding LLVM operations.
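A sketch of the new ops and their lowerings:

```mlir
%0 = fptoui %f : f32 to i32  // lowers to llvm.fptoui
%1 = uitofp %u : i32 to f32  // lowers to llvm.uitofp
```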
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D85557
PDL presents a high level abstraction for the rewrite pattern infrastructure available in MLIR. This abstraction allows for representing patterns transforming MLIR, as MLIR. This allows for applying all of the benefits that the general MLIR infrastructure provides, to the infrastructure itself. This means that pattern matching can be more easily verified for correctness, targeted by frontends, and optimized.
PDL abstracts over various different aspects of patterns and core MLIR data structures. Patterns are specified via a `pdl.pattern` operation. These operations contain a region body for the "matcher" code, and terminate with a `pdl.rewrite` that either dispatches to an external rewriter or contains a region for the rewrite specified via `pdl`. The types of values in `pdl` are handle types to MLIR C++ types, with `!pdl.attribute`, `!pdl.operation`, and `!pdl.type` directly mapping to `mlir::Attribute`, `mlir::Operation*`, and `mlir::Type` respectively.
An example pattern is shown below:
```mlir
// pdl.pattern contains metadata similarly to a `RewritePattern`.
pdl.pattern : benefit(1) {
  // External input operand values are specified via `pdl.input` operations.
  // Result types are constrained via `pdl.type` operations.
  %resultType = pdl.type
  %inputOperand = pdl.input
  %root, %results = pdl.operation "foo.op"(%inputOperand) -> %resultType
  pdl.rewrite(%root) {
    pdl.replace %root with (%inputOperand)
  }
}
```
This is a culmination of the work originally discussed here: https://groups.google.com/a/tensorflow.org/g/mlir/c/j_bn74ByxlQ
Differential Revision: https://reviews.llvm.org/D84578
Provide C API for MLIR standard attributes. Since standard attributes live
under lib/IR in core MLIR, place the C APIs in the IR library as well (standard
ops will go in a separate library).
Affine map and integer set attributes are only exposed as placeholder types
with IsA support due to the lack of C APIs for the corresponding types.
Integer and floating point attribute APIs expecting APInt and APFloat are not
exposed pending decision on how to support APInt and APFloat.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D86143
* The binding for Type is trivial and should be non-controversial.
* The way that I define the IntegerType should serve as a pattern for what I want to do next.
* I propose defining the rest of the standard types in this fashion and then generalizing for dialect types as necessary.
* Essentially, creating/accessing a concrete Type (vs interacting with the string form) is done by "casting" to the concrete type (i.e. IntegerType can be constructed with a Type and will throw if the cast is illegal).
* This deviates from some of our previous discussions about global objects but I think produces a usable API and we should go this way.
Differential Revision: https://reviews.llvm.org/D86179
If the memref has rank N > 1, this pass emits N-1 loops around the
TransferRead op and transforms the op itself into a 1-D read. Since vectors
must have a static shape while memrefs need not, the pass emits an if condition
to prevent out-of-bounds accesses in case some memref dimension is smaller
than the corresponding dimension of the targeted vector. This logic is fine,
but the authors forgot to apply the `permutation_map` to the loops' upper
bounds, so the if condition compares the induction variable to an incorrect
loop upper bound (a dimension of the memref) when the `permutation_map` is not
the identity map. This commit fixes that.
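For illustration, a transposing read where the vector's first dimension maps
to the memref's second (a sketch, not the exact test case):

```mlir
func @transposed_read(%A: memref<?x?xf32>) -> vector<4x8xf32> {
  %c0 = constant 0 : index
  %pad = constant 0.0 : f32
  // The loop over the vector's dim 0 must be bounded by dim 1 of %A,
  // which is what applying permutation_map to the upper bounds achieves.
  %v = vector.transfer_read %A[%c0, %c0], %pad
      {permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
      : memref<?x?xf32>, vector<4x8xf32>
  return %v : vector<4x8xf32>
}
```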
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects in the context as it encounters them
during parsing. This is the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example, in the Toy
tutorial the compiler only needs to load the Toy dialect in the Context; all
the others (linalg, affine, std, LLVM, ...) are loaded automatically depending
on the optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
  virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
  mlir::DialectRegistry registry;
  registry.insert<mlir::standalone::StandaloneDialect>();
  registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Differential Revision: https://reviews.llvm.org/D85622
The documentation needs a refresh now that "kinds" are no longer a concept. This revision also adds mentions of a few other new concepts, e.g. traits and interfaces.
Differential Revision: https://reviews.llvm.org/D86182
This greatly simplifies a large portion of the underlying infrastructure, allows for lookups of singleton classes to be much more efficient and always thread-safe (no locking). As a result of this, the dialect symbol registry has been removed as it is no longer necessary.
For users broken by this change, an alert was sent out (https://llvm.discourse.group/t/removing-kinds-from-attributes-and-types) that helps prevent a majority of the breakage surface area. All that should be necessary, if the advice in that alert was followed, is removing the kind passed to the ::get methods.
Differential Revision: https://reviews.llvm.org/D86121
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects in the context as it encounters them
during parsing. This is the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example, in the Toy
tutorial the compiler only needs to load the Toy dialect in the Context; all
the others (linalg, affine, std, LLVM, ...) are loaded automatically depending
on the optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
  virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
  mlir::DialectRegistry registry;
  mlir::registerDialect<mlir::standalone::StandaloneDialect>();
  mlir::registerDialect<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Extends LinalgDistribution options to allow more general distributions.
The signature of the callback changes to pass in the ranges for all
the parallel loops, and to expect a vector with the Values to use for the
processor-id and number-of-processors for each of the parallel loops.
Differential Revision: https://reviews.llvm.org/D86095
There should be a std.floor op equivalent to std.ceil. This includes
matching lowerings for SPIRV, NVVM, ROCDL, and LLVM.
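A sketch of the new op, assuming the mnemonic mirrors ceilf:

```mlir
%0 = floorf %arg0 : f32
```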
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D85940
Create a reduction pass that accepts an optimization pass as an argument
and only replaces the golden module in the pipeline if the output of the
optimization pass is smaller than the input and still exhibits the
interesting behavior.
Add a -test-pass option to test individual passes in the MLIR Reduce
tool.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D84783
This patch adds more op/type conversion support
necessary for `spirv-runner`:
- EntryPoint/ExecutionMode: currently removed, since we assume
having only one kernel function in the kernel module.
- The StorageBuffer storage class is now supported. We are not
concerned with multithreading, so this is fine for now.
- Type conversion enhanced: regular offsets and strides
for structs and arrays are now supported (based on
`VulkanLayoutUtils`).
- Support for `spv.AccessChain`, which is modelled with the GEP op
in the LLVM dialect.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86109
When the operand to the linalg.tensor_reshape op is a splat constant,
the result can be replaced with a splat constant of the same value but
different type.
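A sketch of the fold, assuming a collapsing reshape of a 2-D splat
(reassociation-map syntax as of this patch):

```mlir
// Before:
%cst = constant dense<1.0> : tensor<4x2xf32>
%0 = linalg.tensor_reshape %cst [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<4x2xf32> into tensor<8xf32>
// After: a splat constant of the result type.
%0 = constant dense<1.0> : tensor<8xf32>
```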
Differential Revision: https://reviews.llvm.org/D86117
Provide C API for MLIR standard types. Since standard types live under lib/IR
in core MLIR, place the C APIs in the IR library as well (standard ops will go
into a separate library). This also defines a placeholder for affine maps that
are necessary to construct a memref, but are not yet exposed to the C API.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D86094
The function makes too strong an assumption regarding the parent FuncOp,
which gets broken when the FuncOp is first lowered to an LLVM function.
In this fix we generalize the assumption to an allocation scope and
add an assertion to produce a user-friendly message in case the assumption
is broken.
Differential Revision: https://reviews.llvm.org/D86086
This function is available on llvm::Type and has been used by some clients of
the LLVM dialect before the transition. Implement the MLIR counterpart.
Reviewed By: schweitz
Differential Revision: https://reviews.llvm.org/D85847
- Add variants of getAnalysis() and friends that operate on specific derived
operation types.
- Add OpPassManager::getAnalysis() to always call the base getAnalysis() with OpT.
- With this, an OperationPass can call getAnalysis<> using an analysis type that
is generic (works on Operation *) or specific to the OpT for the pass. Anything
else will fail to compile.
- Extend AnalysisManager unit test to test this, and add a new PassManager unit
test to test this functionality in the context of an OperationPass.
Differential Revision: https://reviews.llvm.org/D84897
According to the LLVM Language Reference, 'cmpxchg' accepts integer or pointer
types. Several MLIR tests were using it with floats as it appears possible to
programmatically construct and print such an instruction, but it cannot be
parsed back. Use integers instead.
Depends On D85899
Reviewed By: flaub, rriddle
Differential Revision: https://reviews.llvm.org/D85900
Legacy implementation of the LLVM dialect in MLIR contained an instance of
llvm::Module as it was required to parse LLVM IR types. The access to the data
layout of this module was exposed to the users for convenience, but in practice
this layout has always been the default one obtained by parsing an empty layout
description string. Current implementation of the dialect no longer relies on
wrapping LLVM IR types, but it kept an instance of DataLayout for
compatibility. This effectively forces a single data layout to be used across
all modules in a given MLIR context, which is not desirable. Remove DataLayout
from the LLVM dialect and attach it as a module attribute instead. Since MLIR
does not yet have support for data layouts, use the LLVM DataLayout in string
form with verification inside MLIR. Introduce the layout when converting a
module to the LLVM dialect and keep the default "" description for
compatibility.
This approach should be replaced with a proper MLIR-based data layout when it
becomes available, but provides an immediate solution to compiling modules with
different layouts, e.g. for GPUs.
This removes the need for LLVMDialectImpl, which is also removed.
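A sketch of the resulting form, assuming the string attribute introduced here
(the layout string is an arbitrary example):

```mlir
module attributes {llvm.data_layout = "e-m:e-i64:64-n8:16:32:64-S128"} {
  // Module contents are compiled against this layout; the default stays "".
}
```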
Depends On D85650
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D85652
This will help refactor some of the tools to prepare for the explicit registration of
Dialects.
Differential Revision: https://reviews.llvm.org/D86023
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser lazily loads Dialects in the context as it encounters them during parsing. This is the only purpose of registering dialects without loading them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager loads Dialects into the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only needs to load the dialects for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example, in the Toy tutorial the compiler only needs to load the Toy dialect in the Context; all the others (linalg, affine, std, LLVM, ...) are loaded automatically depending on the optimization pipeline enabled.
Differential Revision: https://reviews.llvm.org/D85622
Explicitly declare ReductionTreeBase base class in ReductionTreePass copy constructor.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D85983
It appears in this case that an implicit cast from StringRef to std::string
doesn't happen. Fixed with an explicit cast.
Differential Revision: https://reviews.llvm.org/D85986
This changes mlir_check_link_libraries() to work with interface libraries.
These don't have the LINK_LIBRARIES property.
Differential Revision: https://reviews.llvm.org/D85957
This library does not depend on all the dialects, conceptually. This is
changing the recently introduced `mlirContextLoadAllDialects()` function
to not call `registerAllDialects()` itself, which aligns it better with
the C++ code anyway (and this is deprecated and will be removed soon).
The conversion of memref cast operations from the Standard dialect to the LLVM
dialect has been emitting bitcasts from a struct type to itself. Beyond being
useless, such casts are invalid as bitcast does not operate on aggregate types.
This kept working by accident because the LLVM IR bitcast construction API
skips the construction if the types are equal, before it verifies that the
types are acceptable in a bitcast. Do not emit such bitcasts; the memref cast
that only adds/erases size information is in fact a noop on the current
descriptor, as it always contains dynamic values for all sizes.
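For example, a cast that only erases static size information (a sketch):

```mlir
func @erase_sizes(%m: memref<4x4xf32>) -> memref<?x?xf32> {
  // No bitcast is needed: the target descriptor stores all sizes as
  // dynamic values, so the cast is a noop on the descriptor.
  %c = memref_cast %m : memref<4x4xf32> to memref<?x?xf32>
  return %c : memref<?x?xf32>
}
```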
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D85899
We have been asking for this systematically, mention it in the documentation.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D85902
Masked loading/storing in various forms can be optimized
into simpler memory operations when the mask is all true
or all false. Note that the backend does similar optimizations
but doing this early may expose more opportunities for further
optimizations. This further prepares the progressive lowering of
transfer read and write into 1-D memory operations.
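For instance, with a constant all-true mask (a sketch; the exact replacement
op depends on the pattern):

```mlir
func @all_true(%base: memref<?xf32>, %pass: vector<16xf32>) -> vector<16xf32> {
  %mask = constant dense<true> : vector<16xi1>
  // All lanes are active, so this can be rewritten into an unmasked
  // 16-element load; %pass becomes dead.
  %ld = vector.maskedload %base, %mask, %pass
      : memref<?xf32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
  return %ld : vector<16xf32>
}
```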
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D85769
This infrastructure has evolved a lot over the course of MLIR's lifetime, and has never truly been documented outside of rationale or proposals. This revision aims to document the infrastructure and user-facing API, with the rationale-specific portions moved to the Rationale folder and updated.
Differential Revision: https://reviews.llvm.org/D85260
This revision updates the documentation for dialect conversion, as many concepts have changed/evolved over time.
Differential Revision: https://reviews.llvm.org/D85167
This exercises the corner case that was fixed in
https://reviews.llvm.org/rG8979a9cdf226066196f1710903d13492e6929563.
The bug can be reproduced when there is a @callee with a custom type argument and @caller has a producer of this argument passed to the @callee.
Example:
```mlir
func @callee(!test.test_type) -> i32

func @caller() -> i32 {
  %arg = "test.type_producer"() : () -> !test.test_type
  %out = call @callee(%arg) : (!test.test_type) -> i32
  return %out : i32
}
```
Even though there is a type conversion for !test.test_type, the output IR (before the fix) contained a DialectCastOp:
```mlir
module {
  llvm.func @callee(!llvm.ptr<i8>) -> !llvm.i32
  llvm.func @caller() -> !llvm.i32 {
    %0 = llvm.mlir.null : !llvm.ptr<i8>
    %1 = llvm.mlir.cast %0 : !llvm.ptr<i8> to !test.test_type
    %2 = llvm.call @callee(%1) : (!test.test_type) -> !llvm.i32
    llvm.return %2 : !llvm.i32
  }
}
```
instead of
```mlir
module {
  llvm.func @callee(!llvm.ptr<i8>) -> !llvm.i32
  llvm.func @caller() -> !llvm.i32 {
    %0 = llvm.mlir.null : !llvm.ptr<i8>
    %1 = llvm.call @callee(%0) : (!llvm.ptr<i8>) -> !llvm.i32
    llvm.return %1 : !llvm.i32
  }
}
```
Differential Revision: https://reviews.llvm.org/D85914
-- This commit handles the returnOp in memref map layout normalization.
-- An initial filter is applied on FuncOps, which helps us know which functions are
suitable candidates for memref normalization without leading to invalid IR.
-- Handles memref map normalization for external functions, assuming the external function
is normalizable.
Differential Revision: https://reviews.llvm.org/D85226
This revision removes all of the lingering usages of Type::getKind. A consequence of this is that FloatType is now split into 4 derived types that represent each of the possible float types (BFloat16Type, Float16Type, Float32Type, and Float64Type). Other than this split, this revision is NFC.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D85566
These hooks were introduced before the Interfaces mechanism was available.
DialectExtractElementHook is unused and entirely removed. The
DialectConstantFoldHook is used as a fallback in the
operation fold() method, and is replaced by a DialectInterface.
The DialectConstantDecodeHook is used for interpreting OpaqueAttribute
and should be revamped, but is replaced with an interface in 1:1 fashion
for now.
Differential Revision: https://reviews.llvm.org/D85595
- Add "using namespace mlir::tblgen" in several of the TableGen/*.cpp files and
eliminate the tblgen:: prefix to reduce code clutter.
Differential Revision: https://reviews.llvm.org/D85800
Provide printing functions for most IR objects in C API (except Region that
does not have a `print` function, and Module that is expected to be printed as
Operation instead). The printing is based on a callback that is called with
chunks of the string representation and forwarded user-defined data.
Reviewed By: stellaraccident, Jing, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85748
Using intptr_t is a consensus for MLIR C API, but the change was missing
from 75f239e975 (that was using unsigned initially) due to a
misrebase.
Reviewed By: stellaraccident, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85751
This patch adds the translation of the proc_bind clause in a
parallel operation.
The values that can be specified for the proc_bind clause are
specified in the OMP.td tablegen file in the llvm/Frontend/OpenMP
directory. From this single source of truth, the enumeration for
proc_bind is generated in llvm and mlir (used in the specification of
the parallel Operation in the OpenMP dialect). A function to return
the enum value from the string representation is also generated.
A new header file (DirectiveEmitter.h) containing definitions of
the directive, clause, clauseval, etc. classes is created so that it can
be used in mlir as well.
Reviewers: clementval, jdoerfert, DavidTruby
Differential Revision: https://reviews.llvm.org/D84347
Initial conversion of `spv._address_of` and `spv.globalVariable`.
In SPIR-V, the global returns a pointer, whereas in LLVM dialect
the global holds an actual value. This difference is handled by
the `spv._address_of` and `llvm.mlir.addressof` ops that both return
a pointer. Moreover, only the current invocation is in the conversion's
scope.
Reviewed By: antiagainst, mravishankar
Differential Revision: https://reviews.llvm.org/D84626
Now that LLVM dialect types are implemented directly in the dialect, we can use
MLIR hooks for verifying type construction invariants. Implement the verifiers
and use them in the parser.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85663
This change adds infrastructure to distribute the loops generated in
Linalg to processors at the time of generation. This addresses the use
case where loops are instantiated just to distribute
them. The option to distribute is added to TilingOptions for now and
will allow specifying the distribution as a transformation option,
just like tiling and promotion are specified as options.
Differential Revision: https://reviews.llvm.org/D85147
- Fix the ODS framework to suppress build methods that infer result types and are
ambiguous with collective variants. This applies to operations with a single variadic
input whose result types can be inferred.
- Extended OpBuildGenTest to test these kinds of ops.
Differential Revision: https://reviews.llvm.org/D85060
Previously, the memory leaked on the heap. Since the MlirOperationState is not intended to be used again after mlirOperationCreate, the patch simply frees the memory in mlirOperationCreate instead of creating any new API.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D85629
This diff attempts to resolve the TODO in `getOpIndexSet` (formerly
known as `getInstIndexSet`), which states "Add support to handle IfInsts
surrounding `op`".
Major changes in this diff:
1. Overload `getIndexSet`. The overloaded version considers both
`AffineForOp` and `AffineIfOp`.
2. The `getInstIndexSet` is updated accordingly: its name is changed to
`getOpIndexSet` and its implementation is based on a new API `getIVs`
instead of `getLoopIVs`.
3. Add `addAffineIfOpDomain` to `FlatAffineConstraints`, which extracts
new constraints from the integer set of `AffineIfOp` and merges them into
the current constraint system.
4. Update how a `Value` is determined as dim or symbol for
`ValuePositionMap` in `buildDimAndSymbolPositionMaps`.
Differential Revision: https://reviews.llvm.org/D84698
This patch also fixes a minor issue: shape.rank should allow
returning !shape.size. The dialect doc has such an example for
shape.rank.
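A sketch of rank returning the size type (assembly approximated):

```mlir
%rank = shape.rank %s : !shape.shape -> !shape.size
```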
Differential Revision: https://reviews.llvm.org/D85556
This reverts commit 9f24640b7e.
We hit some dead-locks on thread exit in some configurations: the TLS exit handler is taking a lock.
Temporarily reverting this change as we're debugging what is going on.
This revision aims to provide a new API, `checkTilingLegality`, to
verify that the loop tiling result still satisfies the dependence
constraints of the original loop nest.
Previously, there was no check for the validity of tiling. For instance:
```mlir
func @diagonal_dependence() {
  %A = alloc() : memref<64x64xf32>
  affine.for %i = 0 to 64 {
    affine.for %j = 0 to 64 {
      %0 = affine.load %A[%j, %i] : memref<64x64xf32>
      %1 = affine.load %A[%i, %j - 1] : memref<64x64xf32>
      %2 = addf %0, %1 : f32
      affine.store %2, %A[%i, %j] : memref<64x64xf32>
    }
  }
  return
}
```
You can find more information about this example in Section 3.11
of [1].
In general, there are three dependences here: two flow
dependences, one in direction `(i, j) = (0, 1)` (notation that depicts a
vector in the 2D iteration space) and one in `(i, j) = (1, -1)`; and one
anti dependence in the direction `(-1, 1)`.
Since two of them are along the diagonal in opposite directions, the
default tiling method in `affine`, which tiles the iteration space into
rectangles, will violate the legality condition proposed by Irigoin and
Triolet [2]. [2] implies two tiles cannot depend on each other, while in
the `affine` tiling case, two rectangles along the same diagonal are
indeed dependent, which simply violates the rule.
This diff attempts to put together a validator that checks whether the
rule from [2] is violated or not when applying the default tiling method
in `affine`.
The canonical way to perform such validation is by examining the effect
from adding the constraint from Irigoin and Triolet to the existing
dependence constraints.
Since we already have the prior knowledge that `affine` tiles in a
hyper-rectangular way, and the resulting tiles will be scheduled in the
same order as their respective loop indices, we can simplify the
solution to just checking whether all dependence components are
non-negative along the tiling dimensions.
We put this algorithm into a new API called `checkTilingLegality` under
`LoopTiling.cpp`. This function iterates over every `load`/`store` pair, and
if there is any dependence between them, we get the dependence component
and check whether it has any negative component. This function returns
`failure` if the legality condition is violated.
[1]. Bondhugula, Uday. Effective Automatic parallelization and locality optimization using the Polyhedral model. https://dl.acm.org/doi/book/10.5555/1559029
[2]. Irigoin, F. and Triolet, R. Supernode Partitioning. https://dl.acm.org/doi/10.1145/73560.73588
Differential Revision: https://reviews.llvm.org/D84882
Implement the Reduction Tree Pass framework as part of the MLIR Reduce tool. This is a parameterizable pass that allows for the implementation of custom reduction passes in the tool.
Implement the FunctionReducer class as an example of a Reducer class parameter for the instantiation of a Reduction Tree Pass.
Create a pass pipeline with a Reduction Tree Pass with the FunctionReducer class specified as parameter.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D83969
This also beefs up the test coverage:
- Make unranked memref testing consistent with ranked memrefs.
- Add testing for the invalid element type cases.
This is not quite NFC: index types are now allowed in unranked memrefs.
Differential Revision: https://reviews.llvm.org/D85541
This simple patch translates the num_threads and if clauses of the parallel
operation. Also includes test cases.
A minor change was made to the parsing of the if clause to parse AnyType and
return the parsed type. Test cases were updated accordingly.
Reviewed by: SouraVX
Differential Revision: https://reviews.llvm.org/D84798
This revision refactors the default definition of the attribute and type `classof` methods to use the TypeID of the concrete class instead of invoking the `kindof` method. The TypeID is already used as part of uniquing, and this allows for removing the need for users to define any of the type casting utilities themselves.
Differential Revision: https://reviews.llvm.org/D85356
Subclass data is useful when a certain amount of memory is allocated, but not all of it is used. In the case of Type, that hasn't been the case for a while and the subclass data is just taking up a full `unsigned`. Removing this frees up ~8 bytes for almost every type instance.
Differential Revision: https://reviews.llvm.org/D85348
This class allows for defining thread local objects that have a set non-static lifetime. The internals of the cache use a static thread_local map between the various different non-static objects and the desired value type. When a non-static object destructs, it simply nulls out the entry in the static map. This will leave an entry in the map, but erase any of the data for the associated value. The current use cases for this are in the MLIRContext, meaning that the number of items in the static map is ~1-2, which isn't costly enough to warrant the complexity of pruning. If a use case arises that requires pruning of the map, the functionality can be added.
This is especially useful in the context of MLIR for implementing thread-local caching of context level objects that would otherwise have very high lock contention. This revision adds a thread local cache in the MLIRContext for attributes, identifiers, and types to reduce some of the locking burden. This led to a speedup of several hundred milliseconds when compiling a conversion pass on a very large mlir module (>300K operations).
Differential Revision: https://reviews.llvm.org/D82597
This allows for bucketing the different possible storage types, with each bucket having its own allocator/mutex/instance map. This greatly reduces the amount of lock contention when multi-threading is enabled. On some non-trivial .mlir modules (>300K operations), this led to a compile-time decrease of a single conversion pass by around half a second (>25%).
Differential Revision: https://reviews.llvm.org/D82596
This change adds the initial support needed to generate OpenCL-compliant SPIR-V.
If the Kernel capability is declared, the memory model becomes OpenCL.
If the Addresses capability is declared, the addressing model becomes Physical64.
Additionally, for the Kernel capability, interface variable ABI attributes are
not generated, as the entry point function is expected to take normal arguments.
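For example, a module declaring these capabilities would be targeted roughly
as follows (illustrative):
```
spv.module Physical64 OpenCL requires #spv.vce<v1.0, [Kernel, Addresses], []> {
}
```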
Differential Revision: https://reviews.llvm.org/D85196
This revision adds a folding pattern that replaces affine.min ops by the actual min value when it can be determined statically from the strides and bounds of the enclosing scf loop.
This matches the type of expressions that Linalg produces during tiling and simplifies boundary checks. For now Linalg depends on both Affine and SCF, but they do not depend on each other, so the pattern is added to Linalg.
In the future this will move to a more appropriate place once that is determined.
The canonicalization of AffineMinOp operations in the context of enclosing scf.for and scf.parallel proceeds by:
1. building an affine map where uses of the induction variable of a loop
are replaced by `%lb + %step * floordiv(%iv - %lb, %step)` expressions.
2. checking if any of the results of this affine map divides all the other
results (in which case it is also guaranteed to be the min).
3. replacing the AffineMinOp by the result of (2).
The algorithm is functional in simple parametric tiling cases by using semi-affine maps. However, simplifications of such semi-affine maps are not yet available, so the canonicalization does not yet succeed in those cases.
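For intuition, a sketch of the kind of fold this enables (numbers are
illustrative): since 192 is a multiple of 32, `192 - %i` is at least 32 for
every value the induction variable takes, so the affine.min can be replaced by
the constant 32:
```
%c0 = constant 0 : index
%c32 = constant 32 : index
%c192 = constant 192 : index
scf.for %i = %c0 to %c192 step %c32 {
  %0 = affine.min affine_map<(d0) -> (32, 192 - d0)>(%i)
}
```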
Differential Revision: https://reviews.llvm.org/D82009
Using a shuffle for the last recursive step in progressive lowering not only
results in much more compact IR, but also more efficient code (since the
backend is no longer confused about subvector aliasing for longer vectors).
E.g. the following
```
%f = vector.shape_cast %v0 : vector<1024xf32> to vector<32x32xf32>
```
yields much better x86-64 code that runs 3x faster than the original.
Reviewed By: bkramer, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D85482
This patch moves the registration to a method in the MLIRContext: getOrCreateDialect<ConcreteDialect>()
This method requires the dialect to provide a static getDialectNamespace()
and store a TypeID on the Dialect itself, which allows lazily creating a
dialect when it is not yet loaded in the context.
As a side effect, it means that duplicated registration of the same
dialect is not an issue anymore.
To limit the boilerplate, TableGen dialect generation is modified to emit the
constructor entirely and to separately invoke an `init()` method that the user
implements.
Differential Revision: https://reviews.llvm.org/D85495
Original modeling of LLVM IR types in the MLIR LLVM dialect had been wrapping
LLVM IR types and therefore required the LLVMContext in which they were created
to outlive them, which was solved by placing the LLVMContext inside the dialect,
thus tying its lifetime to that of the MLIRContext. This has led to numerous issues
caused by the lack of thread-safety of LLVMContext and the need to re-create
LLVM IR modules, obtained by translating from MLIR, in different LLVM contexts
to enable parallel compilation. Similarly, llvm::Module had been introduced to
keep track of identified structure types that could not be modeled properly.
A recent series of commits changed the modeling of LLVM IR types in the MLIR
LLVM dialect so that it no longer wraps LLVM IR types and has no dependence on
LLVMContext and changed the ownership model of the translated LLVM IR modules.
Remove LLVMContext and LLVM modules from the implementation of MLIR LLVM
dialect and clean up the remaining uses.
The only part of LLVM IR that remains necessary for the LLVM dialect is the
data layout. It should be moved from the dialect level to the module level and
replaced with an MLIR-based representation to remove the dependency of the
LLVMDialect on LLVM IR library.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85445
Historically, LLVMDialect has been required in the conversion from LLVM IR in
order to be able to construct types. This is no longer necessary with the new
type model and the dialect can be replaced with a local LLVM context.
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85444
Due to the original type system implementation, LLVMDialect in MLIR contains an
LLVMContext in which the relevant objects (types, metadata) are created. When
an MLIR module using the LLVM dialect (and related intrinsic-based dialects
NVVM, ROCDL, AVX512) is converted to LLVM IR, it could only live in the
LLVMContext owned by the dialect. The type system no longer relies on the
LLVMContext, so this limitation can be removed. Instead, translation functions
now take a reference to an LLVMContext in which the LLVM IR module should be
constructed. The caller of the translation functions is responsible for
ensuring the same LLVMContext is not used concurrently as the translation no
longer uses a dialect-wide context lock.
As an additional bonus, this change removes the need to recreate the LLVM IR
module in a different LLVMContext through printing and parsing back, decreasing
the compilation overhead in JIT and GPU-kernel-to-blob passes.
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85443
This new pattern mixes vector.transpose and direct lowering to vector.reduce.
This allows more progressive lowering than immediately going to insert/extract and
composes more nicely with other canonicalizations.
This has 2 use cases:
1. for very wide vectors the generated IR may be much smaller
2. when we have a custom lowering for transpose ops, we can target it directly
rather than relying on LLVM
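A rough sketch of the kind of IR this lowering produces (shapes illustrative,
not the exact pattern output): transpose so the reduced dimension becomes the
innermost one, then reduce each row directly:
```
%t = vector.transpose %v, [1, 0] : vector<4x8xf32> to vector<8x4xf32>
%row = vector.extract %t[0] : vector<8x4xf32>
%r = vector.reduction "add", %row : vector<4xf32> into f32
```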
Differential Revision: https://reviews.llvm.org/D85428
When any of the memrefs in a structured linalg op has a zero dimension, the op becomes dead.
This is consistent with the fact that linalg ops deduce their loop bounds from their operands.
Note, however, that this is not the case for `tensor<0xelt_type>`, which is a
special convention that must be lowered away into either `memref<elt_type>` or
just `elt_type` before this canonicalization can kick in.
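As a minimal (hypothetical) illustration, a copy between zero-sized buffers
moves no data, so the op can simply be erased:
```
linalg.copy(%in, %out) : memref<0xf32>, memref<0xf32>
```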
Differential Revision: https://reviews.llvm.org/D85413
The RewritePattern will become one of several, and will be part of the LLVM conversion pass (instead of a separate pass following LLVM conversion).
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D84946
Historical modeling of the LLVM dialect types had been wrapping LLVM IR types
and therefore needed access to the instance of LLVMContext stored in the
LLVMDialect. The new modeling does not rely on that and only needs the
MLIRContext that is used for uniquing, similarly to other MLIR types. Change
LLVMType::get<Kind>Ty functions to take `MLIRContext *` instead of
`LLVMDialect *` as first argument. This brings the code base closer to
completely removing the dependence on LLVMContext from the LLVMDialect,
together with additional support for thread-safety of its use.
Depends On D85371
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85372
This prepares for the removal of llvm::Module and LLVMContext from the
mlir::LLVMDialect.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85371
The intrinsics were already supported, and vector.transfer_read/write lowered
directly into these operations. By providing them as individual ops, however,
clients can use them directly, and it opens up progressively lowering transfer
operations at higher levels (rather than the direct lowering to LLVM IR done now).
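The new ops read roughly as follows (shapes illustrative; see the revision for
the exact syntax):
```
%0 = vector.maskedload %base, %mask, %pass_thru
    : memref<?xf32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
vector.maskedstore %base, %mask, %0
    : memref<?xf32>, vector<16xi1>, vector<16xf32>
```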
Reviewed By: bkramer
Differential Revision: https://reviews.llvm.org/D85357
Previous type model in the LLVM dialect did not support identified structure
types properly and therefore could use stateless translations implemented as
free functions. The new model supports identified structs and must keep track
of the identified structure types present in the target context (LLVMContext or
MLIRContext) to avoid creating duplicate structs due to LLVM's type
auto-renaming. Expose the stateful type translation classes and use them during
translation, storing the state as part of ModuleTranslation.
Drop the test type translation mechanism that is no longer necessary and update
the tests to exercise type translation as part of the main translation flow.
Update the code in vector-to-LLVM dialect conversion that relied on stateless
translation to use the new class in a stateless manner.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85297
Per Vulkan's SPIR-V environment spec: "While the OpSRem and OpSMod
instructions are supported by the Vulkan environment, they require
non-negative values and thus do not enable additional functionality
beyond what OpUMod provides."
The `getOffsetForBitwidth` function is used for lowering std.load
and std.store, whose indices are of `index` type and cannot be
negative, so we should be okay using spv.UMod directly here.
Also made the comment explicit about this assumption.
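For example (illustrative), an offset computation now emits an unsigned mod:
```
%offset = spv.UMod %index, %c4 : i32
```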
Differential Revision: https://reviews.llvm.org/D83714
If Int16 is not available, 16-bit integers inside Workgroup storage
class should be emulated via 32-bit integers. This was previously
broken because the capability querying logic was incorrectly intercepting
all storage classes when it was meant to handle only interface storage
classes. Adjusted where we return to fix this.
Differential Revision: https://reviews.llvm.org/D85308
`promoteMemRefDescriptors` also converts the types of every operand, not only
memref-typed ones; the name `promoteMemRefDescriptors` does not imply that.
Differential Revision: https://reviews.llvm.org/D85325
Previously, `LinalgOperand` was defined with `Type<Or<...>>`, which produces
rather unreadable error messages when it is not matched, e.g.,
```
'linalg.generic' op operand #0 must be anonymous_326, but got ....
```
It is simply because the `description` property is not properly set.
This diff switches to use `AnyTypeOf` for `LinalgOperand`, which automatically
generates a description based on the allowed types provided.
As a result, the error message now becomes:
```
'linalg.generic' op operand #0 must be ranked tensor of any type values or strided memref of any type values, but got ...
```
This is clearer and more informative.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D84428
Introduce an initial version of C API for MLIR core IR components: Value, Type,
Attribute, Operation, Region, Block, Location. These APIs allow for both
inspection and creation of the IR in the generic form and are intended for wrapping
in high-level library- and language-specific constructs. At this point, there
is no stability guarantee provided for the API.
Reviewed By: stellaraccident, lattner
Differential Revision: https://reviews.llvm.org/D83310
This dialect was introduced during the bring-up of the new LLVM dialect type
system for testing purposes. The main LLVM dialect now uses the new type system
and the test dialect is no longer necessary, so remove it.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85224
This quietly disabled use of zlib on Windows even when building with
-DLLVM_ENABLE_ZLIB=FORCE_ON.
> Rather than handling zlib handling manually, use find_package from CMake
> to find zlib properly. Use this to normalize the LLVM_ENABLE_ZLIB,
> HAVE_ZLIB, HAVE_ZLIB_H. Furthermore, require zlib if LLVM_ENABLE_ZLIB is
> set to YES, which requires the distributor to explicitly select whether
> zlib is enabled or not. This simplifies the CMake handling and usage in
> the rest of the tooling.
>
> This is a reland of abb0075 with all followup changes and fixes that
> should address issues that were reported in PR44780.
>
> Differential Revision: https://reviews.llvm.org/D79219
This reverts commit 10b1b4a231 and follow-ups
64d99cc6ab and
f9fec0447e.
Handle the case where the ViewOp takes in a memref that has
a memory space.
Reviewed By: ftynse, bondhugula, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D85048
The extent tensor type is a `tensor<?xindex>` that is used in the shape dialect.
To facilitate the use of this type when working with the shape dialect, we
expose the helper function for its construction.
Differential Revision: https://reviews.llvm.org/D85121
Updated the documentation with new MLIR LLVM types for
vectors, pointers, arrays and structs. Also, changed remaining
tabs to spaces.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D85277
This patch introduces a conversion of `spv.loop` to LLVM dialect.
Similarly to `spv.selection`, the op's control attributes are not mapped to
LLVM yet, and therefore the conversion fails if the loop control is not
`None`. Also, all blocks within the loop should be reachable for the
conversion to succeed.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D84245
Updated the documentation for SPIR-V to LLVM conversion, particularly:
- Added a section on control flow
- Added a section on memory ops
- Added a section on GLSL ops
Also, moved `spv.FunctionCall` to control flow section. Added a new section
that will be used to describe the modelling of runtime-related ops.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D84734
- Moved TypeRange into its own header/cpp file, and added hashing support.
- Changed FunctionType::get() and TupleType::get() to use TypeRange
Differential Revision: https://reviews.llvm.org/D85075
Always define a remapping for the memref replacement (`indexRemap`)
with the proper number of inputs, including all the `outerIVs`, so that
the number of inputs and the operands provided for the map match.
Reviewed By: bondhugula, andydavis1
Differential Revision: https://reviews.llvm.org/D85177
Introduces the expand and compress operations to the Vector dialect
(important memory operations for sparse computations), together
with a first reference implementation that lowers to the LLVM IR
dialect to enable running on CPU (and other targets that support
the corresponding LLVM IR intrinsics).
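Their usage is roughly of the following form (shapes illustrative; only lanes
whose mask bit is set touch memory, which is read from / written to
consecutive locations):
```
%0 = vector.expandload %base, %mask, %pass_thru
    : memref<?xf32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
vector.compressstore %base, %mask, %0
    : memref<?xf32>, vector<16xi1>, vector<16xf32>
```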
Reviewed By: reidtatge
Differential Revision: https://reviews.llvm.org/D84888
Simplify semi-affine expressions for operations like ceildiv,
floordiv and modulo by any given symbol by checking divisibility by that
symbol.
Some properties used in the simplification are:
1) Commutative property of floordiv and ceildiv:
((expr1 floordiv expr2) floordiv expr3) = ((expr1 floordiv expr3) floordiv expr2)
((expr1 ceildiv expr2) ceildiv expr3) = ((expr1 ceildiv expr3) ceildiv expr2)
If the two operations differ, no simplification is possible, as there is no
property that simplifies expressions like ((expr1 ceildiv expr2) floordiv expr3)
or ((expr1 floordiv expr2) ceildiv expr3).
2) If both expr1 and expr2 are divisible by expr3, then:
(expr1 % expr2) / expr3 = ((expr1 / expr3) % (expr2 / expr3))
where / denotes exact division.
3) If expr1 is divisible by expr2, then expr1 % expr2 = 0.
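For instance (an illustrative application of property 3), `(d0 * s0) mod s0`
is always divisible by `s0` and therefore simplifies to zero:
```
// Before simplification:
#map = affine_map<(d0)[s0] -> ((d0 * s0) mod s0)>
// After simplification:
#map = affine_map<(d0)[s0] -> (0)>
```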
Signed-off-by: Yash Jain <yash.jain@polymagelabs.com>
Differential Revision: https://reviews.llvm.org/D84920
The `splitFullAndPartialTransferPrecondition` has a restrictive condition that
prevents the pattern from being applied recursively if it is nested under an
scf.IfOp. Relaxing the condition so that only the immediate parent op must not
be an scf.IfOp lets the pattern apply more generally while still preventing
recursion.
Differential Revision: https://reviews.llvm.org/D85209
This revision adds a transformation and a pattern that rewrites a "maybe masked" `vector.transfer_read %view[...], %pad` into a pattern resembling:
```
%1:3 = scf.if (%inBounds) {
  scf.yield %view : memref<A...>, index, index
} else {
  linalg.fill(%extra_alloc, %pad)
  %2 = subview %view [...][...][...]
  linalg.copy(%2, %extra_alloc)
  %3 = memref_cast %extra_alloc : memref<B...> to memref<A...>
  scf.yield %3 : memref<A...>, index, index
}
%res = vector.transfer_read %1#0[%1#1, %1#2] {masked = [false ... false]}
```
where `%extra_alloc` is a buffer of one vector, alloca'ed at the top of the function.
This rewrite makes it possible to realize the "always full tile" abstraction where vector.transfer_read operations are guaranteed to read from a padded full buffer.
The extra work only occurs on the boundary tiles.
A new first-party modeling for LLVM IR types in the LLVM dialect has been
developed in parallel to the existing modeling based on wrapping LLVM `Type *`
instances. It resolves the long-standing problem of modeling identified
structure types, including recursive structures, and enables future removal of
LLVMContext and related locking mechanisms from LLVMDialect.
This commit only switches the modeling by (a) renaming LLVMTypeNew to LLVMType,
(b) removing the old implementation of LLVMType, and (c) updating the tests. It
is intentionally minimal. Separate commits will remove the infrastructure built
for the transition and update API uses where appropriate.
Depends On D85020
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85021
These are intended to smoothen the transition and may be removed in the future
in favor of more MLIR-compatible APIs. They intentionally have the same
semantics as the existing functions, which must remain stable until the
transition is complete.
Depends On D85019
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D85020
With new LLVM dialect type modeling, the dialect types no longer wrap LLVM IR
types. Therefore, they need to be translated to and from LLVM IR during export
and import. Introduce the relevant functionality for translating types. It is
currently exercised by an ad-hoc type translation roundtripping test that will
be subsumed by the actual translation test when the type system transition is
complete.
Depends On D84339
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D85019
The bug was not noticed because we didn't have many custom type conversions
directly to the LLVM dialect.
Differential Revision: https://reviews.llvm.org/D85192
This is a first patch that sweeps over tests to fix
indentation (tabs to spaces). It also adds label checks and
removes redundant matching of `%{{.*}} = `.
The following tests have been fixed:
- arithmetic-ops-to-llvm
- bitwise-ops-to-llvm
- cast-ops-to-llvm
- comparison-ops-to-llvm
- logical-ops-to-llvm (renamed to match the rest)
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D85181
Added a "clone" of the 1D vector's test_transfer_read and added a second dimensionality. The test is not as generic as I would like it to be, because more generic versions appear to break the compiler or the runtime at this stage. As bug are fixed, I will be happy to add another more complete test.
Differential Revision: https://reviews.llvm.org/D83096
Lowering of the newly defined Conv ops in TC syntax to the standard dialect
was not supported; this commit adds that support.
Differential Revision: https://reviews.llvm.org/D84840
Unit attributes are given meaning by their existence, and thus have no meaningful value beyond "is it present". As such, in the format of an operation, unit attributes are generally used to guard the printing of other elements and aren't generally printed themselves, as the presence of the group when parsing means that the unit attribute should be added. This revision adds support to the declarative format for eliding unit attributes in situations where they anchor an optional group but aren't the first element.
For example,
```
let assemblyFormat = "(`is_optional` $unit_attr^)? attr-dict";
```
would print `foo.op is_optional` when $unit_attr is present, instead of the current `foo.op is_optional unit`.
Differential Revision: https://reviews.llvm.org/D84577
Remove use of iterator::difference_type to know where to insert a
moved or erased block during undo actions.
Differential Revision: https://reviews.llvm.org/D85066
This revision adds a transformation and a pattern that rewrites a "maybe masked" `vector.transfer_read %view[...], %pad` into a pattern resembling:
```
%1:3 = scf.if (%inBounds) {
  scf.yield %view : memref<A...>, index, index
} else {
  %2 = vector.transfer_read %view[...], %pad : memref<A...>, vector<...>
  %3 = vector.type_cast %extra_alloc : memref<...> to memref<vector<...>>
  store %2, %3[] : memref<vector<...>>
  %4 = memref_cast %extra_alloc : memref<B...> to memref<A...>
  scf.yield %4 : memref<A...>, index, index
}
%res = vector.transfer_read %1#0[%1#1, %1#2] {masked = [false ... false]}
```
where `%extra_alloc` is a buffer of one vector, alloca'ed at the top of the function.
This rewrite makes it possible to realize the "always full tile" abstraction where vector.transfer_read operations are guaranteed to read from a padded full buffer.
The extra work only occurs on the boundary tiles.
Differential Revision: https://reviews.llvm.org/D84631
This reverts commit 35b65be041.
Build is broken with -DBUILD_SHARED_LIBS=ON with some undefined
references like:
VectorTransforms.cpp:(.text._ZN4llvm12function_refIFvllEE11callback_fnIZL24createScopedInBoundsCondN4mlir25VectorTransferOpInterfaceEE3$_8EEvlll+0xa5): undefined reference to `mlir::edsc::op::operator+(mlir::Value, mlir::Value)'
The current modeling of LLVM IR types in MLIR is based on the LLVMType class
that wraps a raw `llvm::Type *` and delegates uniquing, printing and parsing to
LLVM itself. This model makes thread-safe type manipulation hard and is being
progressively replaced with a cleaner MLIR model that replicates the type
system. Introduce a set of classes reflecting the LLVM IR type system in MLIR
instead of wrapping the existing types. These are currently introduced as
separate classes without affecting the dialect flow, and are exercised through
a test dialect. Once feature parity is reached, the old implementation will be
gradually substituted with the new one.
Depends On D84171
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D84339