This dependency already existed indirectly, but is now more direct
since the registration relies on an inline function. This fixes
linking the tools with BFD.
Historically, the custom builder specification in OpBuilder has accepted the
formal parameter list for the builder method as a raw string containing C++.
While this worked well to connect the signature and the body, this became
problematic when ODS needs to manipulate the parameter list, e.g. to inject
OpBuilder or to trim default values when generating the definition. This has
also become inconsistent with other method declarations, in particular in
interface definitions.
Introduce the possibility to define OpBuilder formal parameters using a
TableGen dag similarly to other methods. Additionally, introduce a mechanism to
declare parameters with default values using an additional class. This
mechanism can be reused in other methods. The string-based builder signature
declaration is deprecated and will be removed after a transition period.
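As a rough sketch of the dag-based form (the exact class spellings below, e.g. an `OpBuilderDAG`-style wrapper and a `CArg`-style default-value holder, are assumptions for illustration, not the authoritative syntax):

```tablegen
// Hypothetical op: one builder declared as a TableGen dag instead of a raw
// C++ string. CArg<type, default> stands in for the default-value mechanism
// described above.
def MyOp : MyDialect_Op<"my_op"> {
  let arguments = (ins I32Attr:$value);
  let builders = [
    OpBuilderDAG<(ins "int":$value, CArg<"bool", "true">:$flag)>
  ];
}
```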
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D89470
Have the ODS TypeDef generator write the getChecked() definition.
Also add to TypeParamCommaFormatter a `JustParams` format and
refactor around that.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D89438
Added an underlying matcher for generic constant ops. This
included a rewrite of RewriterGen to make variable use
clearer.
Differential Revision: https://reviews.llvm.org/D89161
This CL allows the user to specify the same name for the operands in the source pattern, which implicitly enforces equality on operands with the same name.
E.g., Pat<(OpA $a, $b, $a) ... > would create a matching rule that checks equality of the first and the last operands. Equality of the operands is enforced at any depth, e.g., OpA ($a, $b, OpB($a, $c, OpC ($a))).
Example usage: Pat<(Reshape $arg0, (Shape $arg0)), (replaceWithValue $arg0)>
Note, this feature only covers operands, not attributes.
Current use cases are based on operand equality and explicitly add the constraint into the pattern. Attribute equality will be worked out in a different CL.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D89254
The buffers are used as source or destination of transfer commands
so always add VK_BUFFER_USAGE_TRANSFER_{DST,SRC}_BIT to their usage
flags.
Signed-off-by: Kevin Petit <kevin.petit@arm.com>
This revision adds a programmable codegen strategy from linalg based on staged rewrite patterns. Testing is exercised on a simple linalg.matmul op.
Differential Revision: https://reviews.llvm.org/D89374
This reverts commit 7271c1bcb9.
This broke the gcc-5 build:
/usr/include/c++/5/ext/new_allocator.h:120:4: error: no matching function for call to 'std::pair<const std::__cxx11::basic_string<char>, mlir::tblgen::SymbolInfoMap::SymbolInfo>::pair(llvm::StringRef&, mlir::tblgen::SymbolInfoMap::SymbolInfo)'
{ ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
^
In file included from /usr/include/c++/5/utility:70:0,
from llvm/include/llvm/Support/type_traits.h:18,
from llvm/include/llvm/Support/Casting.h:18,
from mlir/include/mlir/Support/LLVM.h:24,
from mlir/include/mlir/TableGen/Pattern.h:17,
from mlir/lib/TableGen/Pattern.cpp:14:
/usr/include/c++/5/bits/stl_pair.h:206:9: note: candidate: template<class ... _Args1, long unsigned int ..._Indexes1, class ... _Args2, long unsigned int ..._Indexes2> std::pair<_T1, _T2>::pair(std::tuple<_Args1 ...>&, std::tuple<_Args2 ...>&, std::_Index_tuple<_Indexes1 ...>, std::_Index_tuple<_Indexes2 ...>)
pair(tuple<_Args1...>&, tuple<_Args2...>&,
^
Adds a TypeDef class to OpBase and backing generation code. Allows one
to define the Type, its parameters, and printer/parser methods in ODS.
Can generate the Type C++ class, accessors, storage class, per-parameter
custom allocators (for the storage constructor), and documentation.
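A minimal sketch of such a definition (dialect, type, and parameter names are hypothetical):

```tablegen
// One type parameter; ODS generates the C++ type class, a storage class
// holding `width`, and an accessor for it.
def MyDialect_IntegerType : TypeDef<MyDialect_Dialect, "Integer"> {
  let mnemonic = "int";
  let parameters = (ins "unsigned":$width);
}
```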
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D86904
This CL allows the user to specify the same name for the operands in the source pattern, which implicitly enforces equality on operands with the same name.
E.g., Pat<(OpA $a, $b, $a) ... > would create a matching rule that checks equality of the first and the last operands. Equality of the operands is enforced at any depth, e.g., OpA ($a, $b, OpB($a, $c, OpC ($a))).
Example usage: Pat<(Reshape $arg0, (Shape $arg0)), (replaceWithValue $arg0)>
Note, this feature only covers operands, not attributes.
Current use cases are based on operand equality and explicitly add the constraint into the pattern. Attribute equality will be worked out in a different CL.
Differential Revision: https://reviews.llvm.org/D89254
This is the same diff as https://reviews.llvm.org/D88809/ except side effect
free check is removed for involution and a FIXME is added until the dependency
is resolved for shared builds. The old diff has more details on possible fixes.
Reviewed By: rriddle, andyly
Differential Revision: https://reviews.llvm.org/D89333
CMake Error at llvm/cmake/modules/AddLLVM.cmake:870 (add_dependencies):
The dependency target "Core" of target "mlir-cuda-runner" does not exist.
Call Stack (most recent call first):
llvm/cmake/modules/AddLLVM.cmake:1169 (add_llvm_executable)
mlir/tools/mlir-cuda-runner/CMakeLists.txt:69 (add_llvm_tool)
CMake Error at llvm/cmake/modules/AddLLVM.cmake:870 (add_dependencies):
The dependency target "LINK_COMPONENTS" of target "mlir-cuda-runner" does
not exist.
Call Stack (most recent call first):
llvm/cmake/modules/AddLLVM.cmake:1169 (add_llvm_executable)
mlir/tools/mlir-cuda-runner/CMakeLists.txt:69 (add_llvm_tool)
CMake Error at llvm/cmake/modules/AddLLVM.cmake:870 (add_dependencies):
The dependency target "Support" of target "mlir-cuda-runner" does not
exist.
Call Stack (most recent call first):
llvm/cmake/modules/AddLLVM.cmake:1169 (add_llvm_executable)
mlir/tools/mlir-cuda-runner/CMakeLists.txt:69 (add_llvm_tool)
This revision reduces the number of places that specific information needs to be modified when adding new named Linalg ops.
Differential Revision: https://reviews.llvm.org/D89223
This revision introduces support for buffer allocation for any named linalg op.
To avoid template instantiating many ops, a new ConversionPattern is created to capture the LinalgOp interface.
Some APIs are updated to remain consistent with MLIR style:
`OwningRewritePatternList * -> OwningRewritePatternList &`
`BufferAssignmentTypeConverter * -> BufferAssignmentTypeConverter &`
Differential revision: https://reviews.llvm.org/D89226
This reverts commit 1ceaffd95a.
The build is broken with -DBUILD_SHARED_LIBS=ON; seems like a possible
layering issue to investigate:
tools/mlir/lib/IR/CMakeFiles/obj.MLIRIR.dir/Operation.cpp.o: In function `mlir::MemoryEffectOpInterface::hasNoEffect(mlir::Operation*)':
Operation.cpp:(.text._ZN4mlir23MemoryEffectOpInterface11hasNoEffectEPNS_9OperationE[_ZN4mlir23MemoryEffectOpInterface11hasNoEffectEPNS_9OperationE]+0x9c): undefined reference to `mlir::MemoryEffectOpInterface::getEffects(llvm::SmallVectorImpl<mlir::SideEffects::EffectInstance<mlir::MemoryEffects::Effect> >&)'
mlir-tblgen was incompatible with libLLVM, due to explicit linkage with
libLLVMSupport etc.
As it cannot link with libLLVM, make sure none of the libraries it uses link
against libLLVM either.
As a side effect, also remove some explicit references to LLVM libs and use
components instead.
Differential Revision: https://reviews.llvm.org/D88846
This change allows folds to be done on a newly introduced involution trait rather than having to manually rewrite this optimization for every instance of an involution
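For illustration only (hypothetical op; the trait spelling is assumed to match the newly introduced involution trait):

```tablegen
// not(not(x)) folds to x through the trait's generic fold; no per-op
// fold code is written.
def LogicalNotOp : MyDialect_Op<"logical_not", [Involution]> {
  let arguments = (ins I1:$input);
  let results = (outs I1:$result);
}
```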
Reviewed By: rriddle, andyly, stephenneuendorffer
Differential Revision: https://reviews.llvm.org/D88809
This change replaces the container used for storing temporary
strings for generated code with std::list.
SmallVector may reallocate internal data, which will invalidate
references when more than one extended instruction set is
generated.
Reviewed By: mravishankar, antiagainst
Differential Revision: https://reviews.llvm.org/D88626
This reverts commit e9b87f43bd.
There are issues with macros generating macros without an obvious simple fix
so I'm going to revert this and try something different.
New projects (particularly out of tree) have a tendency to hijack the existing
llvm configuration options and build targets (add_llvm_library,
add_llvm_tool). This can lead to some confusion.
1) When querying a configuration variable, do we care about how LLVM was
configured, or how these options were configured for the out of tree project?
2) LLVM has lots of defaults, which are easy to miss
(e.g. LLVM_BUILD_TOOLS=ON). These options all need to be duplicated in the
CMakeLists.txt for the project.
In addition, with LLVM Incubators coming online, we need better ways for these
incubators to do things the "LLVM way" without a lot of futzing. Ideally, this
would happen in a way that eases importing into the LLVM monorepo when
projects mature.
This patch creates some generic infrastructure in llvm/cmake/modules and
refactors MLIR to use this infrastructure. This should expand to include
add_xxx_library, which is by far the most complicated bit of building a
project correctly, since it has to deal with lots of shared library
configuration bits. (MLIR currently hijacks the LLVM infrastructure for
building libMLIR.so, so this needs to get refactored anyway.)
Differential Revision: https://reviews.llvm.org/D85140
This class simplifies keeping track of the indentation while emitting. For every new line the current indentation is simply prefixed (if not at the start of a line, it just emits as normal). Add a simple Region helper that makes it easy to have the C++ scope match the emitted scope.
Use this in op doc generator and rewrite generator.
This reverts the revert commit be185b6a73 and addresses the shared-lib failure by fixing up cmake files.
Differential Revision: https://reviews.llvm.org/D84107
This class simplifies keeping track of the indentation while emitting. For every new line the current indentation is simply prefixed (if not at the start of a line, it just emits as normal). Add a simple Region helper that makes it easy to have the C++ scope match the emitted scope.
Use this in op doc generator and rewrite generator.
Differential Revision: https://reviews.llvm.org/D84107
The pattern is structured similarly to other patterns like
LinalgTilingPattern. The fusion pattern takes options that allow you
to fuse with producers of multiple operands at once.
- The pattern fuses only at the level that is known to be legal, i.e.,
if a reduction loop in the consumer is tiled, then fusion should
happen "before" this loop. Some refactoring of the fusion code is
needed to fuse only where it is legal.
- Since fusion on buffers uses the LinalgDependenceGraph, which is
not mutable in place, the fusion pattern keeps the original
operations in the IR but tags them with a marker that can later be
used to find the original operations.
This change also fixes an issue with tiling and
distribution/interchange where a tile size of 0 for a loop wasn't
accounted for.
Differential Revision: https://reviews.llvm.org/D88435
This tweaks the generated code for parsing attributes with a custom
directive to call `addAttribute` on the `OperationState` directly,
and adds a newline after this call. Previously, the generated code
would call `addAttribute` on the `OperationState` field `attributes`,
which has no such method and fails to compile. Furthermore, the lack
of newline would generate code with incorrectly formatted single line
`if` statements. Added tests for parsing and printing attributes with
a custom directive.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D87860
- Change the default builders to use TypeRange instead of ArrayRef<Type>
- Custom builders defined in LinalgStructuredOps now conflict with the default
separate param ones, but the default collective params one is still needed. Resolve
this by replicating the collective param builder as a custom builder and skipping
the generation of default builders for these ops.
Differential Revision: https://reviews.llvm.org/D87926
Instead of performing a transformation, such a pass yields a new pass pipeline
to run on the currently visited operation.
This feature can be used for example to implement a sub-pipeline that
would run only on an operation with specific attributes. Another example
would be to compute a cost model and dynamically schedule a pipeline based
on the result of this analysis.
Discussion: https://llvm.discourse.group/t/rfc-dynamic-pass-pipeline/1637
Recommit after fixing an ASAN issue: the callback lambda needs to be
allocated to a temporary to have its lifetime extended to the end of the
current block instead of just the current call expression.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D86392
The OpBuilder signature is required to start with OpBuilder and OperationState, so remove
the need for the user to specify them. To make it simpler to update callers,
retain the legacy behavior for now and skip injecting OpBuilder/OperationState
when the params start with OpBuilder.
Related to bug 47442.
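For example (hypothetical op, using the string-based builder form this change applies to), the injected parameters no longer need to be spelled out:

```tablegen
def MyOp : MyDialect_Op<"my_op"> {
  // ODS prepends `OpBuilder &` and `OperationState &` automatically; a
  // signature that already starts with OpBuilder keeps the legacy behavior.
  let builders = [OpBuilder<"int value, bool flag = true">];
}
```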
Differential Revision: https://reviews.llvm.org/D88050
This reverts commit 385c3f43fc.
Test mlir/test/Pass:dynamic-pipeline-fail-on-parent.mlir.test fails
when run with ASAN:
ERROR: AddressSanitizer: stack-use-after-scope on address ...
Reviewed By: bkramer, pifon2a
Differential Revision: https://reviews.llvm.org/D88079
Instead of performing a transformation, such a pass yields a new pass pipeline
to run on the currently visited operation.
This feature can be used for example to implement a sub-pipeline that
would run only on an operation with specific attributes. Another example
would be to compute a cost model and dynamically schedule a pipeline based
on the result of this analysis.
Discussion: https://llvm.discourse.group/t/rfc-dynamic-pass-pipeline/1637
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D86392
The check-mlir target runs tests simultaneously with multiple threads. This caused multiple threads to invoke the `lld::elf::link()` interface at the same time. Since the interface does not have a thread-safe implementation, add a mutex to prevent multi-threaded access.
I discovered this by looking at the failure stack trace. lld/ELF/SymbolTable.cpp, SymbolTable::insert() hit an assert related to Epoch Trackers. The root cause is that there is no protection around the symMap update, which is implemented with a non-thread-safe data structure: DenseMap.
Differential Revision: https://reviews.llvm.org/D88038
This revision allows representing a reduction at the level of linalg on tensors for named ops. When a structured op has a reduction and returns tensor(s), new conventions are added and documented.
As an illustration, the syntax for a `linalg.matmul` writing into a buffer is:
```
linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
              outs(%c : memref<?x?xf32>)
```
whereas the syntax for a `linalg.matmul` returning a new tensor is:
```
%d = linalg.matmul ins(%a, %b : tensor<?x?xf32>, memref<?x?xf32>)
                   init(%c : memref<?x?xf32>)
                   -> tensor<?x?xf32>
```
Other parts of linalg will be extended accordingly to allow mixed buffer/tensor semantics in the presence of reductions.
- Change OpClass new method addition to find and eliminate any existing methods that
are made redundant by the newly added method, as well as detect if the newly added
method will be redundant and return nullptr in that case.
- To facilitate that, add the notion of resolved and unresolved parameters, where resolved
parameters have each parameter type known, so that redundancy checks on methods
with same name but different parameter types can be done.
- Eliminate existing code to avoid adding conflicting/redundant build methods and rely
on this new mechanism to eliminate conflicting build methods.
Fixes https://bugs.llvm.org/show_bug.cgi?id=47095
Differential Revision: https://reviews.llvm.org/D87059
Add support to tile affine.for ops with parametric sizes (i.e., SSA
values). Currently supports hyper-rectangular loop nests with constant
lower bounds only. Move methods
- moveLoopBody(*)
- getTileableBands(*)
- checkTilingLegality(*)
- tilePerfectlyNested(*)
- constructTiledIndexSetHyperRect(*)
to allow reuse with the constant tile size API. Add a test pass, -test-affine-parametric-tile,
to test parametric tiling.
Differential Revision: https://reviews.llvm.org/D87353
Now backends spell out which namespace they want to be in, instead of relying on
clients #including them inside already-opened namespaces. This also means that
cppNamespaces should be fully qualified, and there's no implicit "::mlir::"
prepended to them anymore.
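For example, a backend-generated interface now spells its namespace in full (names below are hypothetical):

```tablegen
def MyOpInterface : OpInterface<"MyOpInterface"> {
  // Fully qualified; "::mlir::" is no longer implicitly prepended.
  let cppNamespace = "::mlir::my_dialect";
}
```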
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D86811
This revision refactors and cleans up a bunch of things to simplify StructuredOpInterface
before work can proceed on Linalg on tensors:
- break out pieces of the StructuredOps trait that are part of the StructuredOpInterface,
- drop referenceIterators and referenceIndexingMaps that end up being more confusing than useful,
- drop NamedStructuredOpTrait
This commit introduces a new way of lowering convolution ops.
The conv op vectorization pass lowers linalg convolution ops
into vector contractions. This lowering is possible when the conv op
is first tiled by 1 along specific dimensions, which transforms
it into a dot product between input and kernel subview memory buffers.
This pass converts such a conv op into a vector contraction and performs
all the necessary vector transfers that make it work.
Differential Revision: https://reviews.llvm.org/D86619
Drops the include on InitAllDialects.h, as dialects are now initialized in the translation passes.
Differential Revision: https://reviews.llvm.org/D87129
Its handling is similar to optional attributes, except for the
getter method.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D87055
Clients who rely on the Context loading dialects from the global
registry can call `mlir::enableGlobalDialectRegistry(true);` before
creating an MLIRContext.
Differential Revision: https://reviews.llvm.org/D86897
This adds some initial support for regions and does not support formatting the specific arguments of a region. For now this can be achieved by using a custom directive that formats the arguments and then parses the region.
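A minimal sketch of an op whose single region participates in the declarative format (op and dialect names are hypothetical):

```tablegen
def MyScopeOp : MyDialect_Op<"scope"> {
  let regions = (region AnyRegion:$body);
  // The region is referenced by name; its arguments are not formatted here.
  let assemblyFormat = "attr-dict-with-keyword $body";
}
```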
Differential Revision: https://reviews.llvm.org/D86760
Symbol names are a special form of StringAttr that get treated specially in certain areas, such as formatting. This revision adds a special derived attr for them in ODS and adds support in the assemblyFormat for formatting them properly.
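For example (hypothetical op), a symbol-defining op can refer to its symbol name attribute directly in the format:

```tablegen
def MyFuncOp : MyDialect_Op<"func"> {
  let arguments = (ins SymbolNameAttr:$sym_name);
  // $sym_name is printed/parsed as @name rather than as a quoted string.
  let assemblyFormat = "$sym_name attr-dict";
}
```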
Differential Revision: https://reviews.llvm.org/D86759
This revision adds support for custom directives to the declarative assembly format. This allows for users to use C++ for printing and parsing subsections of an otherwise declaratively specified format. The custom directive is structured as follows:
```
custom-directive ::= `custom` `<` UserDirective `>` `(` Params `)`
```
`user-directive` is used as a suffix when this directive is used during printing and parsing. When parsing, `parseUserDirective` will be invoked. When printing, `printUserDirective` will be invoked. The first parameter to these methods must be a reference to either the OpAsmParser, or OpAsmPrinter. The type of the rest of the parameters depends on the `Params` specified in the assembly format.
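As a sketch (hypothetical op and directive name):

```tablegen
def MyOp : MyDialect_Op<"my_op"> {
  let arguments = (ins AnyType:$operand);
  // parseMyOperand(OpAsmParser &, ...) and printMyOperand(OpAsmPrinter &, ...)
  // handle the $operand piece; the rest of the format stays declarative.
  let assemblyFormat = "custom<MyOperand>($operand) `:` type($operand) attr-dict";
}
```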
Differential Revision: https://reviews.llvm.org/D84719
Full diagnostic was:
warning: base class ‘class mlir::OptReductionBase<mlir::OptReductionPass>’ should be explicitly initialized in the copy constructor [-Wextra]
The PDL Interpreter dialect provides a lower level abstraction compared to the PDL dialect, and is targeted towards low level optimization and interpreter code generation. The dialect operations encapsulate low-level pattern match and rewrite "primitives", such as navigating the IR (Operation::getOperand), creating new operations (OpBuilder::create), etc. Many of the operations within this dialect also fuse branching control flow with some form of a predicate comparison operation. This type of fusion reduces the amount of work that an interpreter must do when executing.
An example of this representation is shown below:
```mlir
// The following high level PDL pattern:
pdl.pattern : benefit(1) {
  %resultType = pdl.type
  %inputOperand = pdl.input
  %root, %results = pdl.operation "foo.op"(%inputOperand) -> %resultType
  pdl.rewrite %root {
    pdl.replace %root with (%inputOperand)
  }
}
// May be represented in the interpreter dialect as follows:
module {
  func @matcher(%arg0: !pdl.operation) {
    pdl_interp.check_operation_name of %arg0 is "foo.op" -> ^bb2, ^bb1
  ^bb1:
    pdl_interp.return
  ^bb2:
    pdl_interp.check_operand_count of %arg0 is 1 -> ^bb3, ^bb1
  ^bb3:
    pdl_interp.check_result_count of %arg0 is 1 -> ^bb4, ^bb1
  ^bb4:
    %0 = pdl_interp.get_operand 0 of %arg0
    pdl_interp.is_not_null %0 : !pdl.value -> ^bb5, ^bb1
  ^bb5:
    %1 = pdl_interp.get_result 0 of %arg0
    pdl_interp.is_not_null %1 : !pdl.value -> ^bb6, ^bb1
  ^bb6:
    pdl_interp.record_match @rewriters::@rewriter(%0, %arg0 : !pdl.value, !pdl.operation) : benefit(1), loc([%arg0]), root("foo.op") -> ^bb1
  }
  module @rewriters {
    func @rewriter(%arg0: !pdl.value, %arg1: !pdl.operation) {
      pdl_interp.replace %arg1 with(%arg0)
      pdl_interp.return
    }
  }
}
```
Differential Revision: https://reviews.llvm.org/D84579
This will allow out-of-tree translations to register the dialects they expect
to see in their input, on the model of getDependentDialects() for passes.
Differential Revision: https://reviews.llvm.org/D86409
Refactor the way the reduction tree pass works in the MLIR Reduce tool by introducing a set of utilities that facilitate the implementation of new Reducer classes to be used in the passes.
This will allow for the fast implementation of general transformations to operate on all mlir modules as well as custom transformations for different dialects.
These utilities allow for the implementation of Reducer classes by simply defining a method that indexes the operations/blocks/regions to be transformed and a method to perform the deletion or transformation based on the indexes.
Create the transformSpace class member in the ReductionNode class to keep track of the indexes that have already been transformed or deleted at a current level.
Delete the FunctionReducer class and replace it with the OpReducer class to reflect this new API while performing the same transformation and allowing the instantiation of a reduction pass for different types of operations at the module's highest hierarchical level.
Modify the SinglePath Traversal method to reflect the use of the new API.
Reviewed: jpienaar
Differential Revision: https://reviews.llvm.org/D85591
PDL presents a high level abstraction for the rewrite pattern infrastructure available in MLIR. This abstraction allows for representing patterns transforming MLIR, as MLIR. This allows for applying all of the benefits that the general MLIR infrastructure provides, to the infrastructure itself. This means that pattern matching can be more easily verified for correctness, targeted by frontends, and optimized.
PDL abstracts over various different aspects of patterns and core MLIR data structures. Patterns are specified via a `pdl.pattern` operation. These operations contain a region body for the "matcher" code, and terminate with a `pdl.rewrite` that either dispatches to an external rewriter or contains a region for the rewrite specified via `pdl`. The types of values in `pdl` are handle types to MLIR C++ types, with `!pdl.attribute`, `!pdl.operation`, and `!pdl.type` directly mapping to `mlir::Attribute`, `mlir::Operation*`, and `mlir::Type` respectively.
An example pattern is shown below:
```mlir
// pdl.pattern contains metadata similarly to a `RewritePattern`.
pdl.pattern : benefit(1) {
  // External input operand values are specified via `pdl.input` operations.
  // Result types are constrained via `pdl.type` operations.
  %resultType = pdl.type
  %inputOperand = pdl.input
  %root, %results = pdl.operation "foo.op"(%inputOperand) -> %resultType
  pdl.rewrite(%root) {
    pdl.replace %root with (%inputOperand)
  }
}
```
This is a culmination of the work originally discussed here: https://groups.google.com/a/tensorflow.org/g/mlir/c/j_bn74ByxlQ
Differential Revision: https://reviews.llvm.org/D84578
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser is lazily loading Dialects in the context as it encounters them
during parsing. This is the only purpose for registering dialects and not load
them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialect for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` is taking an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callback, as well as frontend, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Differential Revision: https://reviews.llvm.org/D85622
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser is lazily loading Dialects in the context as it encounters them
during parsing. This is the only purpose for registering dialects and not load
them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialect for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` is taking an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callback, as well as frontend, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Differential Revision: https://reviews.llvm.org/D85622
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser is lazily loading Dialects in the context as it encounters them
during parsing. This is the only purpose for registering dialects and not load
them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialect for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` is taking an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
mlir::registerDialect<mlir::standalone::StandaloneDialect>();
mlir::registerDialect<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callback, as well as frontend, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Create a reduction pass that accepts an optimization pass as argument
and only replaces the golden module in the pipeline if the output of the
optimization pass is smaller than the input and still exhibits the
interesting behavior.
Add a -test-pass option to test individual passes in the MLIR Reduce
tool.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D84783
This will help refactoring some of the tools to prepare for the explicit registration of
Dialects.
Differential Revision: https://reviews.llvm.org/D86023
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser is lazily loading Dialects in the context as it encounters them during parsing. This is the only purpose for registering dialects and not load them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager is loading Dialects into the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only needs to load the dialect for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example in the Toy tutorial, the compiler only needs to load the Toy dialect in the Context, all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled.
Differential Revision: https://reviews.llvm.org/D85622
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser is lazily loading Dialects in the context as it encounters them during parsing. This is the only purpose for registering dialects and not load them in the context.
- Passes are expected to declare the dialects they will create entity from (Operations, Attributes, or Types), and the PassManager is loading Dialects into the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only needs to load the dialect for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example in the Toy tutorial, the compiler only needs to load the Toy dialect in the Context, all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled.
This exercises the corner case that was fixed in
https://reviews.llvm.org/rG8979a9cdf226066196f1710903d13492e6929563.
The bug can be reproduced when there is a @callee with a custom type argument and @caller has a producer of this argument passed to the @callee.
Example:
func @callee(!test.test_type) -> i32
func @caller() -> i32 {
  %arg = "test.type_producer"() : () -> !test.test_type
  %out = call @callee(%arg) : (!test.test_type) -> i32
  return %out : i32
}
Even though there is a type conversion for !test.test_type, the output IR (before the fix) contained a DialectCastOp:
module {
  llvm.func @callee(!llvm.ptr<i8>) -> !llvm.i32
  llvm.func @caller() -> !llvm.i32 {
    %0 = llvm.mlir.null : !llvm.ptr<i8>
    %1 = llvm.mlir.cast %0 : !llvm.ptr<i8> to !test.test_type
    %2 = llvm.call @callee(%1) : (!test.test_type) -> !llvm.i32
    llvm.return %2 : !llvm.i32
  }
}
instead of
module {
  llvm.func @callee(!llvm.ptr<i8>) -> !llvm.i32
  llvm.func @caller() -> !llvm.i32 {
    %0 = llvm.mlir.null : !llvm.ptr<i8>
    %1 = llvm.call @callee(%0) : (!llvm.ptr<i8>) -> !llvm.i32
    llvm.return %1 : !llvm.i32
  }
}
Differential Revision: https://reviews.llvm.org/D85914
This patch adds the translation of the proc_bind clause in a
parallel operation.
The values that can be specified for the proc_bind clause are
specified in the OMP.td tablegen file in the llvm/Frontend/OpenMP
directory. From this single source of truth, the enumeration for
proc_bind is generated in llvm and mlir (used in the specification of
the parallel Operation in the OpenMP dialect). A function to return
the enum value from the string representation is also generated.
A new header file (DirectiveEmitter.h) containing definitions of
classes directive, clause, clauseval, etc. is created so that it can
be used in mlir as well.
Reviewers: clementval, jdoerfert, DavidTruby
Differential Revision: https://reviews.llvm.org/D84347
- Fix ODS framework to suppress build methods that infer result types and are
ambiguous with collective variants. This applies to operations with a single variadic
input whose result types can be inferred.
- Extended OpBuildGenTest to test these kinds of ops.
Differential Revision: https://reviews.llvm.org/D85060
Implement the Reduction Tree Pass framework as part of the MLIR Reduce tool. This is a parameterizable pass that allows for the implementation of custom reduction passes in the tool.
Implement the FunctionReducer class as an example of a Reducer class parameter for the instantiation of a Reduction Tree Pass.
Create a pass pipeline with a Reduction Tree Pass with the FunctionReducer class specified as parameter.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D83969
This patch moves the registration to a method in the MLIRContext: getOrCreateDialect<ConcreteDialect>()
This method requires dialect to provide a static getDialectNamespace()
and store a TypeID on the Dialect itself, which allows lazily
creating a dialect when it is not yet loaded in the context.
As a side effect, it means that duplicated registration of the same
dialect is not an issue anymore.
To limit the boilerplate, TableGen dialect generation is modified to
emit the constructor entirely and invoke separately an "init()" method
that the user implements.
Differential Revision: https://reviews.llvm.org/D85495
Due to the original type system implementation, LLVMDialect in MLIR contains an
LLVMContext in which the relevant objects (types, metadata) are created. When
an MLIR module using the LLVM dialect (and related intrinsic-based dialects
NVVM, ROCDL, AVX512) is converted to LLVM IR, it could only live in the
LLVMContext owned by the dialect. The type system no longer relies on the
LLVMContext, so this limitation can be removed. Instead, translation functions
now take a reference to an LLVMContext in which the LLVM IR module should be
constructed. The caller of the translation functions is responsible for
ensuring the same LLVMContext is not used concurrently as the translation no
longer uses a dialect-wide context lock.
As an additional bonus, this change removes the need to recreate the LLVM IR
module in a different LLVMContext through printing and parsing back, decreasing
the compilation overhead in JIT and GPU-kernel-to-blob passes.
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85443
The RewritePattern will become one of several, and will be part of the LLVM conversion pass (instead of a separate pass following LLVM conversion).
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D84946
Previous type model in the LLVM dialect did not support identified structure
types properly and therefore could use stateless translations implemented as
free functions. The new model supports identified structs and must keep track
of the identified structure types present in the target context (LLVMContext or
MLIRContext) to avoid creating duplicate structs due to LLVM's type
auto-renaming. Expose the stateful type translation classes and use them during
translation, storing the state as part of ModuleTranslation.
Drop the test type translation mechanism that is no longer necessary and update
the tests to exercise type translation as part of the main translation flow.
Update the code in vector-to-LLVM dialect conversion that relied on stateless
translation to use the new class in a stateless manner.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85297
This dialect was introduced during the bring-up of the new LLVM dialect type
system for testing purposes. The main LLVM dialect now uses the new type system
and the test dialect is no longer necessary, so remove it.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D85224
With new LLVM dialect type modeling, the dialect types no longer wrap LLVM IR
types. Therefore, they need to be translated to and from LLVM IR during export
and import. Introduce the relevant functionality for translating types. It is
currently exercised by an ad-hoc type translation roundtripping test that will
be subsumed by the actual translation test when the type system transition is
complete.
Depends On D84339
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D85019
Unit attributes are given meaning by their existence, and thus have no meaningful value beyond "is it present". As such, in the format of an operation unit attributes are generally used to guard the printing of other elements and aren't generally printed themselves; as the presence of the group when parsing means that the unit attribute should be added. This revision adds support to the declarative format for eliding unit attributes in situations where they anchor an optional group, but aren't the first element.
For example,
```
let assemblyFormat = "(`is_optional` $unit_attr^)? attr-dict";
```
would print `foo.op is_optional` when $unit_attr is present, instead of the current `foo.op is_optional unit`.
Differential Revision: https://reviews.llvm.org/D84577
The current modeling of LLVM IR types in MLIR is based on the LLVMType class
that wraps a raw `llvm::Type *` and delegates uniquing, printing and parsing to
LLVM itself. This model makes thread-safe type manipulation hard and is being
progressively replaced with a cleaner MLIR model that replicates the type
system. Introduce a set of classes reflecting the LLVM IR type system in MLIR
instead of wrapping the existing types. These are currently introduced as
separate classes without affecting the dialect flow, and are exercised through
a test dialect. Once feature parity is reached, the old implementation will be
gradually substituted with the new one.
Depends On D84171
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D84339
The current output is a bit clunky and requires including files+macros everywhere, or manually wrapping the file inclusion in a registration function. This revision refactors the pass backend to automatically generate `registerFooPass`/`registerFooPasses` functions that wrap the pass registration. `gen-pass-decls` now takes a `-name` input that specifies a tag name for the group of passes that are being generated. For each pass, the generator now produces a `registerFooPass` where `Foo` is the name of the definition specified in tablegen. It also generates a `registerGroupPasses`, where `Group` is the tag provided via the `-name` input parameter, that registers all of the passes present.
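For illustration (hypothetical pass definition), running `mlir-tblgen -gen-pass-decls -name Foo` over a file like the following would emit a per-definition registration helper plus a `registerFooPasses()` that registers the whole group:

```tablegen
def MyCanonicalize : Pass<"my-canonicalize"> {
  let summary = "Hypothetical pass, shown only to illustrate the generated registration";
  let constructor = "mlir::createMyCanonicalizePass()";
}
```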
Differential Revision: https://reviews.llvm.org/D84983
The patch fixes minor issues in the rocm runtime wrapper due to api differences between CUDA and HIP.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D84861
The current modeling of LLVM IR types in MLIR is based on the LLVMType class
that wraps a raw `llvm::Type *` and delegates uniquing, printing and parsing to
LLVM itself. This model makes thread-safe type manipulation hard and is
being progressively replaced with a cleaner MLIR model that replicates the type
system. In the new model, LLVMType will no longer have an underlying LLVM IR
type. Restrict access to this type in the current model in preparation for the
change.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D84389
functions.
This allows using command line flags to lower from GPU to SPIR-V. The
pass added is only for testing/example purposes. Most uses cases will
need more fine-grained control on setting workgroup sizes for kernel
functions.
Differential Revision: https://reviews.llvm.org/D84619
Do not return error codes; instead return created resource handles or void. Error reporting is done by the library function.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D84660
Introduce support for mutable storage in the StorageUniquer infrastructure.
This makes MLIR have key-value storage instead of just uniqued key storage. A
storage instance now contains a unique immutable key and a mutable value, both
stored in the arena allocator that belongs to the context. This is a
precondition for supporting recursive types that require delayed initialization,
in particular LLVM structure types. The functionality is exercised in the test
pass with a trivial self-recursive type. So far, recursive types can only be
printed and parsed in a closed type system. Removing this restriction is left
for future work.
Differential Revision: https://reviews.llvm.org/D84171
- Added more default values for `attributes` parameter for 2 more build methods
- Extend the op-decls.td unit test to test these build methods.
Differential Revision: https://reviews.llvm.org/D83839
This adds a `parseOptionalAttribute` method to the OpAsmParser that allows for parsing optional attributes, in a similar fashion to how optional types are parsed. This also enables the use of attribute values as the first element of an assembly format optional group.
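For example (hypothetical op), an optional attribute can now be the first element of an optional group:

```tablegen
def MyOp : MyDialect_Op<"my_op"> {
  let arguments = (ins OptionalAttr<I64Attr>:$alignment);
  // The attribute anchors the group; it is parsed with parseOptionalAttribute
  // and the whole group is elided when the attribute is absent.
  let assemblyFormat = "($alignment^)? attr-dict";
}
```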
Differential Revision: https://reviews.llvm.org/D83712
Summary: Currently forward decls are included with all the op classes. But there are cases (say when splitting up headers) where one wants the forward decls but not all the classes. Add an option to enable this. This does not change any current behavior (some further refactoring is probably due here).
Differential Revision: https://reviews.llvm.org/D83727
- Provide default value for `ArrayRef<NamedAttribute> attributes` parameter of
the collective params build method.
- Change the `genSeparateArgParamBuilder` function to not generate build methods
that may be ambiguous with the new collective params build method.
- This change should help eliminate passing an empty NamedAttribute ArrayRef when the
collective params build method is used
- Extend op-decl.td unit test to make sure the ambiguous build methods are not
generated.
Differential Revision: https://reviews.llvm.org/D83517
The namespace can be specified using the `cppNamespace` field. This matches the functionality already present on dialects, enums, etc. This fixes problems with using interfaces on operations in a different namespace than the interface was defined in.
Differential Revision: https://reviews.llvm.org/D83604
Introduce pass to convert parallel affine.for op into 1-D affine.parallel op.
Run using --affine-parallelize. Removes test-detect-parallel, a pass for checking
parallel affine.for ops.
Signed-off-by: Yash Jain <yash.jain@polymagelabs.com>
Differential Revision: https://reviews.llvm.org/D83193
- Create a pass that generates bugs based on trivially defined behavior for the purpose of testing the MLIR Reduce Tool.
- Implement the functionality inside the pass to crash mlir-opt in the presence of an operation with the name "crashOp".
- Register the pass as a test pass in the mlir-opt tool.
Reviewed by: jpienaar
Differential Revision: https://reviews.llvm.org/D83422
Create the framework and testing environment for MLIR Reduce - a tool
with the objective to reduce large test cases into smaller ones while
preserving their interesting behavior.
Implement the functionality to parse command line arguments, parse the
MLIR test cases into modules and run the interestingness tests on
the modules.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D82803
An operation can specify that an operand or result type matches the
type of another operation, result, or attribute via the `AllTypesMatch`
or `TypesMatchWith` constraints.
Use these constraints to also automatically resolve types in the
automatically generated assembly parser.
This way, only the attribute needs to be listed in `assemblyFormat`,
e.g. for constant operations.
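A sketch of the constant-op case (hypothetical dialect and op names):

```tablegen
// The result type is resolved from the attribute's type through AllTypesMatch,
// so the format only needs to list the attribute.
def MyConstantOp : MyDialect_Op<"constant", [AllTypesMatch<["value", "result"]>]> {
  let arguments = (ins AnyAttr:$value);
  let results = (outs AnyType:$result);
  let assemblyFormat = "$value attr-dict";
}
```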
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D78434
with the objective to reduce large test cases into smaller ones while
preserving their interesting behavior.
Implement the framework to parse the command line arguments, parse the
input MLIR test case into a module and call reduction passes on the MLIR module.
Implement the Tester class which allows the different reduction passes to test the
interesting behavior of the generated reduced variants of the test case and keep track
of the most reduced generated variant.
Introduce pass to convert parallel affine.for op into 1-D
affine.parallel op. Run using --affine-parallelize. Removes
test-detect-parallel, a pass for checking parallel affine.for ops.
Differential Revision: https://reviews.llvm.org/D82672
This enables better support for traits such as SameOperandsAndResultType, and other situations in which a variadic operand may be resolved from a non-variadic.
Differential Revision: https://reviews.llvm.org/D83011
This revision adds support to ODS for generating interfaces for attributes and types, in addition to operations. These interfaces can be specified using `AttrInterface` and `TypeInterface` in place of `OpInterface`. All of the features of `OpInterface` are supported except for the `verify` method, which does not have a matching representation in the Attribute/Type world. Generating these interface can be done using `gen-(attr|type)-interface-(defs|decls|docs)`.
Differential Revision: https://reviews.llvm.org/D81884
Also fixed a bug in the type interface generator to address a case where operands and
attributes are interleaved.
Differential Revision: https://reviews.llvm.org/D82819
To be able to get more meaningful performance out of workloads going through
the vulkan-runner, we need to use buffers from GPU device memory, as access to
system memory is significantly slower for GPUs with dedicated memory. This adds
code to do a copy through staging buffer as GPU memory cannot always be mapped
on the host.
Differential Revision: https://reviews.llvm.org/D82504
Using fully qualified names wherever possible avoids ambiguous class and function names. This is a follow-up to D82371.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D82471
Summary: The Pass class exists in both the mlir and the llvm namespaces. Use the fully qualified class name to avoid any ambiguities.
Reviewers: rriddle
Reviewed By: rriddle
Subscribers: mehdi_amini, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D82371
Add option to filter which op the OpDefinitionsGen run on. This enables having multiple ops together in the same TD file but generating different CC files for them (useful if one wants to use multiclasses or split out 1 dialect into multiple different libraries). There is probably more general query here (e.g., split out all ops that don't have a verify method, or that are commutative) but filtering based on op name (e.g., test.a_op) seemed a reasonable start and didn't require inventing a query specification mechanism here.
Differential Revision: https://reviews.llvm.org/D82319
Summary:
Currently, the TableGen rewrite generates redundant native calls in MLIR DRR files. This is a problem as some native calls may involve significant computations (e.g. when performing constant propagation where every value in a large tensor is touched).
The pattern was as follows:
```c++
if (native-call(args)) tblgen_attrs.emplace_back(rewriter, attribute, native-call(args))
```
The replacement pattern computes `native-call(args)` once and then uses it both in the `if` condition and the `emplace_back` call.
Differential Revision: https://reviews.llvm.org/D82101
Summary:
Fixed build of D81618
Add a pattern for expanding tanh op into exp form.
A `tanh` is expanded into:
1) (1 - exp(-2x)) / (1 + exp(-2x)), if x >= 0
2) (exp(2x) - 1) / (exp(2x) + 1), if x < 0.
Differential Revision: https://reviews.llvm.org/D82040
Summary:
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).
During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.
Deciding on a type-polymorphic behavior, and implementing it, is left for future work.
Reviewers: aartbik
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D81935
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).
During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.
Deciding on a type-polymorphic behavior, and implementing it, is left for future work.
Differential Revision: https://reviews.llvm.org/D79762
This reverts commit 32c757e4f8.
Broke the build bot:
******************** TEST 'MLIR :: Examples/standalone/test.toy' FAILED ********************
[...]
/tmp/ci-KIMiRFcVZt/lib/libMLIRLinalgToLLVM.a(LinalgToLLVM.cpp.o): In function `(anonymous namespace)::ConvertLinalgToLLVMPass::runOnOperation()':
LinalgToLLVM.cpp:(.text._ZN12_GLOBAL__N_123ConvertLinalgToLLVMPass14runOnOperationEv+0x100): undefined reference to `mlir::populateExpandTanhPattern(mlir::OwningRewritePatternList&, mlir::MLIRContext*)'
Summary:
Add a pattern for expanding tanh op into exp form.
A `tanh` is expanded into:
1) (1 - exp(-2x)) / (1 + exp(-2x)), if x >= 0
2) (exp(2x) - 1) / (exp(2x) + 1), if x < 0.
Differential Revision: https://reviews.llvm.org/D81618
Summary:
* extra ';' in the following files:
mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
mlir/lib/Dialect/Shape/IR/Shape.cpp
* base class ‘mlir::ConvertVectorToSCFBase<ConvertVectorToSCFPass>’
should be explicitly initialized in the copy constructor [-Wextra] in
mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
* warning: ‘bool Expression::operator==(const Expression&) const’
defined but not used [-Wunused-function] in
mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp
Differential Revision: https://reviews.llvm.org/D81673
This parameter gives the developers the freedom to choose their desired function
signature conversion for preparing their functions for buffer placement. It is
introduced for BufferAssignmentFuncOpConverter, and also for
BufferAssignmentReturnOpConverter, and BufferAssignmentCallOpConverter to adapt
the return and call operations with the selected function signature conversion.
If the parameter is set, buffer placement won't also deallocate the returned
buffers.
Differential Revision: https://reviews.llvm.org/D81137
This allows verifying op-independent attributes (e.g., attributes that do not require the op to have been created) before constructing an operation. These include checking whether required attributes are defined or constraints on attributes (such as I32 attribute). This is not perfect (e.g., if one had a disjunctive constraint where one part relied on the op and the other doesn't, then this would not try to extract the op-independent part from the op-dependent one).
The next step is to move these out to a trait that could be verified earlier than in the generated method. The first use case is for inferring the return type while constructing the op. At that point you don't have an Operation yet and that ends up in one having to duplicate the same checks, e.g., verify that attribute A is defined before querying A in shape function which requires that duplication. Instead this allows one to invoke a method to verify all the traits and, if this is checked first during verification, then all other traits could use attributes knowing they have been verified.
It is a little bit funny to have these on the adaptor, but I see the adaptor as a place to collect information about the op before the op is constructed (e.g., avoiding stringly typed accessors, verifying what is possible to verify before the op is constructed) while being cheap to use even with constructed op (so layer of indirection between the op constructed/being constructed). And from that point of view it made sense to me.
Differential Revision: https://reviews.llvm.org/D80842
Summary:
`mlir-rocm-runner` is introduced in this commit to execute GPU modules on ROCm
platform. A small wrapper to encapsulate ROCm's HIP runtime API is also inside
the commit.
Due to behavior of ROCm, raw pointers inside memrefs passed to `gpu.launch`
must be modified on the host side to properly capture the pointer values
addressable on the GPU.
LLVM MC is used to assemble AMD GCN ISA coming out from
`ConvertGPUKernelToBlobPass` to binary form, and LLD is used to produce a shared
ELF object which could be loaded by ROCm HIP runtime.
gfx900 is the default target used right now, although it could be altered via
an option in `mlir-rocm-runner`. Future revisions may consider using ROCm Agent
Enumerator to detect the right target on the system.
Notice AMDGPU Code Object V2 is used in this revision. Future enhancements may
upgrade to AMDGPU Code Object V3.
Bitcode libraries in ROCm-Device-Libs, which implement math routines exposed in
the `rocdl` dialect, are not yet linked; this is left as a TODO in the logic.
Reviewers: herhut
Subscribers: mgorny, tpr, dexonsmith, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits
Tags: #mlir, #llvm
Differential Revision: https://reviews.llvm.org/D80676
This revision adds a helper function to hoist alloc/dealloc pairs and
alloca ops out of the immediately enclosing scf::ForOp if both conditions are true:
1. all operands are defined outside the loop.
2. all uses are ViewLikeOp or DeallocOp.
This is now considered Linalg-specific and will be generalized on a per-need basis.
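A simplified sketch of the two conditions (illustrative only, not the actual helper; the interface and op spellings are assumed from the standard and SCF dialects of that time):

#include "mlir/Dialect/SCF/SCF.h"
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "mlir/Interfaces/ViewLikeInterface.h"
using namespace mlir;

// An alloc-like op may be hoisted out of the loop when its operands are
// loop-invariant and all of its users are view-like ops or deallocations.
static bool canHoistOutOf(scf::ForOp forOp, Operation *allocLikeOp) {
  for (Value operand : allocLikeOp->getOperands())
    if (!forOp.isDefinedOutsideOfLoop(operand))
      return false;
  for (Operation *user : allocLikeOp->getUsers())
    if (!isa<ViewLikeOpInterface>(user) && !isa<DeallocOp>(user))
      return false;
  return true;
}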
Differential Revision: https://reviews.llvm.org/D81152
This utility factors out the machinery required to add iterArgs and yield values to an scf.ForOp.
Differential Revision: https://reviews.llvm.org/D80656
https://reviews.llvm.org/D79246 introduces alignment propagation for vector transfer operations. Unfortunately, the alignment calculation is incorrect and can result in crashes.
This revision fixes the calculation by using the natural alignment of the memref elemental type, instead of the resulting vector type.
If more alignment is desired, it can be done in 2 ways:
1. use a proper vector.type_cast to transform a memref<axbxcxdxf32> into a memref<axbxvector<cxdxf32>> giving a natural alignment of vector<cxdxf32>
2. add an alignment attribute to vector transfer operations and propagate it.
With this change the alignment in the relevant tests goes down from 128 to 4.
Lastly, a few minor cleanups are performed and the custom `isMinorIdentityMap` is deprecated.
Differential Revision: https://reviews.llvm.org/D80734
This allows constructing operand adaptor from existing op (useful for commonalizing verification as I want to do in a follow up).
I also add the ability to use member initializers for the generated adaptor constructors for convenience.
Differential Revision: https://reviews.llvm.org/D80667
Make the ConvertKernelFuncToCubin pass generic:
- Rename to ConvertKernelFuncToBlob.
- Allow specifying triple, target chip, target features.
- Initializing the LLVM backend is supplied via a callback function.
- The lowering process from MLIR module to LLVM module is done via another callback.
- Change mlir-cuda-runner to adopt the revised pass.
- Add new tests for lowering to ROCm HSA code object (HSACO).
- Tests for CUDA and ROCm are kept in separate directories.
Differential Revision: https://reviews.llvm.org/D80142
Take advantage of equality constraints to generate the type inference interface.
This is used for equality and trivially built types. The type inference method
is only generated when no type inference trait is specified already.
This reorders verification that changes some test error messages.
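For illustration, the generated method for an op whose result type must equal its first operand's type would look roughly like this (a sketch of the InferTypeOpInterface hook, not the exact generated code):

static LogicalResult inferReturnTypes(
    MLIRContext *context, Optional<Location> location, ValueRange operands,
    DictionaryAttr attributes, RegionRange regions,
    SmallVectorImpl<Type> &inferredReturnTypes) {
  // The result type is trivially derived from the equality constraint.
  inferredReturnTypes.push_back(operands[0].getType());
  return success();
}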
Differential Revision: https://reviews.llvm.org/D80484
Summary:
Add DynamicMemRefType, which can reference either a statically ranked StridedMemRefType or an UnrankedMemRefType, so that runner utils only need to be implemented once.
There is definitely room for more clean up and unification, but I will keep that for follow-ups.
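A rough usage sketch (field names recalled from CRunnerUtils.h and possibly inexact; it ignores strides for brevity and assumes contiguous data):

#include "mlir/ExecutionEngine/CRunnerUtils.h"
#include <cstdint>
#include <iostream>

// One routine serving both ranked and unranked memref descriptors once they
// have been wrapped into a DynamicMemRefType.
template <typename T>
void printAllElements(const DynamicMemRefType<T> &m) {
  int64_t numElements = 1;
  for (int64_t dim = 0; dim < m.rank; ++dim)
    numElements *= m.sizes[dim];
  for (int64_t i = 0; i < numElements; ++i)
    std::cout << m.data[m.offset + i] << " ";
  std::cout << "\n";
}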
Reviewers: nicolasvasilache
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80513
* Enables use with more variadic-sized operands;
* Generates convenience accessors for attributes;
- The accessors are named the same as in ODS and return the attribute
type (not a convenience type); derived attributes are not covered.
This is the first step toward changing the adaptor to support verifying argument
constraints before the op is even created. To keep this change smaller, it does not
change the name of the adaptor, nor does it require the adaptor except for ops with
variadic operands.
I considered creating a separate adaptor but decided against that, given that operands also require attributes in general (and definitely for verification of operands and attributes).
Differential Revision: https://reviews.llvm.org/D80420
Adds cooperative matrix support for arithmetic and cast instructions. It also
adds the cooperative matrix store, muladd, and matrixlength instructions, which
are part of the extension.
Differential Revision: https://reviews.llvm.org/D80181
Due to similar APIs between CUDA and ROCm (HIP),
ConvertGpuLaunchFuncToCudaCalls pass could be used on both platforms with some
refactoring.
In this commit:
- Migrate ConvertLaunchFuncToCudaCalls from GPUToCUDA to GPUCommon, and rename.
- Rename runtime wrapper APIs to be platform-neutral.
- Let GPU binary annotation attribute be specifiable as a PassOption.
- Naming changes within the implementation and tests.
Subsequent patches will introduce ROCm-specific tests and runtime wrapper
APIs.
Differential Revision: https://reviews.llvm.org/D80167
This reverts commit cdb6f05e2d.
The build is broken with:
You have called ADD_LIBRARY for library obj.MLIRGPUtoCUDATransforms without any source files. This typically indicates a problem with your CMakeLists.txt file
Due to similar APIs between CUDA and ROCm (HIP),
ConvertGpuLaunchFuncToCudaCalls pass could be used on both platforms with some
refactoring.
In this commit:
- Migrate ConvertLaunchFuncToCudaCalls from GPUToCUDA to GPUCommon, and rename.
- Rename runtime wrapper APIs to be platform-neutral.
- Let GPU binary annotation attribute be specifiable as a PassOption.
- Naming changes within the implementation and tests.
Subsequent patches will introduce ROCm-specific tests and runtime wrapper
APIs.
Differential Revision: https://reviews.llvm.org/D80167
Enclose verifier code for AttrSizedOperandSegments and AttrSizedResultSegments
in a nested code block to avoid symbol collision.
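Conceptually the generated verifier now looks like the following simplified sketch (not the exact output; attribute names as used by these traits):

#include "mlir/IR/Attributes.h"
#include "mlir/IR/Operation.h"
using namespace mlir;

static LogicalResult verifySegmentSizes(Operation *op) {
  // Each trait's checks live in their own scope, so locals such as `sizeAttr`
  // can reuse the same name without colliding.
  {
    auto sizeAttr =
        op->getAttrOfType<DenseIntElementsAttr>("operand_segment_sizes");
    if (!sizeAttr)
      return op->emitOpError("requires 'operand_segment_sizes' attribute");
    // ... check the segment sizes against the actual operand count ...
  }
  {
    auto sizeAttr =
        op->getAttrOfType<DenseIntElementsAttr>("result_segment_sizes");
    if (!sizeAttr)
      return op->emitOpError("requires 'result_segment_sizes' attribute");
    // ... check the segment sizes against the actual result count ...
  }
  return success();
}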
Differential Revision: https://reviews.llvm.org/D80250
Summary: This revision adds support for assembly formats with optional attributes. It elides optional attributes that are part of the syntax from the attribute dictionary.
Reviewers: ftynse, Kayjukh
Reviewed By: ftynse, Kayjukh
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, jurahul, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80113
The JitRunner library is logically very close to the execution engine,
and shares similar dependencies.
find -name "*.cpp" -exec sed -i "s/Support\/JitRunner/ExecutionEngine\/JitRunner/" "{}" \;
Differential Revision: https://reviews.llvm.org/D79899
The following Conversions are affected: LoopToStandard -> SCFToStandard,
LoopsToGPU -> SCFToGPU, VectorToLoops -> VectorToSCF. Full file paths are
affected. Additionally, drop the 'Convert' prefix from filenames living under
lib/Conversion where applicable.
API names and CLI options for pass testing are also renamed when applicable. In
particular, LoopsToGPU contains several passes that apply to different kinds of
loops (`for` or `parallel`), for which the original names are preserved.
Differential Revision: https://reviews.llvm.org/D79940
The Vulkan runtime wrapper will be compiled to a shared library
that is loaded by the JIT runner. Depending on LLVM libraries
means that LLVM symbols will be compiled into the shared library.
That can cause problems if we are using it with other shared
libraries depending on LLVM, notably Mesa, the open-source graphics
driver framework. The Vulkan API wrappers invoked by the JIT runner
link to the system libvulkan.so. If it's Mesa providing the
implementation, Mesa will normally try to load the system libLLVM.so
for its shader compilation. That causes issues because the JIT runner
has already loaded the Vulkan runtime wrapper, which has LLVM symbols
compiled in, so the system linker will instruct Mesa to use those
symbols instead.
Differential Revision: https://reviews.llvm.org/D79860
This normalizes the name of the tablegen file with the name of the generated
files (SideEffectInterfaces.h.inc) and the other interface tablegen files,
which all end in Interface(s).td.
Differential Revision: https://reviews.llvm.org/D79517
This is a wrapper around a vector of NamedAttributes that keeps track of whether it is sorted and makes a minimal effort to remain sorted (doing more, e.g., appending attributes in sorted order, could be done in a follow-up). If a DictionaryAttr is queried, it caches the returned DictionaryAttr along with the sortedness state.
Change MutableDictionaryAttr to always return a non-null Attribute even when empty (reserving null for error cases). To this end, change the getter to take a context as input so that the empty DictionaryAttr can be queried. Also create one instance of the empty dictionary attribute that can be reused without needing to lock the context, etc.
Update infer type op interface to use DictionaryAttr and use NamedAttrList to avoid incurring multiple conversion costs.
Fix bug in sorting helper function.
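A small usage sketch of the intended flow (attribute names are made up and the method names are recalled from memory, so treat this as an approximation):

#include "mlir/IR/Builders.h"
using namespace mlir;

static DictionaryAttr buildAttrs(OpBuilder &builder) {
  NamedAttrList attrs;
  attrs.push_back(builder.getNamedAttr("alignment", builder.getI64IntegerAttr(16)));
  attrs.push_back(builder.getNamedAttr("fast", builder.getUnitAttr()));
  // Querying the DictionaryAttr sorts if necessary and caches the result.
  return attrs.getDictionary(builder.getContext());
}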
Differential Revision: https://reviews.llvm.org/D79463
vulkan-runtime-wrappers does not need MLIRSPIRVSerialization,
which is used by the ConvertGpuLaunchFuncToVulkanLaunchFunc pass
under the hood.
Differential Revision: https://reviews.llvm.org/D79577
SPIR-V ops can mix operands and attributes in the definition. These
operands and attributes are serialized in the exact order of the definition
to match SPIR-V binary format requirements. It can cause excessive
generated code bloat because we are emitting code to handle each
operand/attribute separately. So here we probe first to check whether all
the operands are ahead of attributes. Then we can serialize all operands
together.
This removes ~1000 lines of code from the generated inc file.
Differential Revision: https://reviews.llvm.org/D79446
These template functions are used in the serializer, where we can
actually directly query the opcode from the op's definition and
use that in the auto-generated serialization logic.
This removes a set of templates accounting for 319 lines from
the auto-generated inc file.
Differential Revision: https://reviews.llvm.org/D79444
We see intermittent build errors on the Windows buildbot because
mlir-opt is including Linalg headers which haven't been built yet.
This dependence should be resolved by declaring a PUBLIC dependence
on the Linalg library when building MLIROptMain.
Summary:
Adds the loop unroll transformation for loop::ForOp.
Adds support for promoting the body of single-iteration loop::ForOps into its containing block.
Adds check tests for loop::ForOps with dynamic and static lower/upper bounds and step.
Care was taken to share code (where possible) with the AffineForOp unroll transformation to ease maintenance and a potential future transition to a LoopLike construct on which loop transformations for different loop types can be implemented.
Reviewers: ftynse, nicolasvasilache
Reviewed By: ftynse
Subscribers: bondhugula, mgorny, zzheng, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79184
- Exports MLIR targets to be used out-of-tree.
- mimics `add_clang_library` and `add_flang_library`.
- Fixes libMLIR.so
After https://reviews.llvm.org/D77515, libMLIR.so no longer contained
any object files. We originally had a kludge there that made it work with
the static initializers; when switching away from that to the way the
clang shlib does it, I noticed that MLIR doesn't create an `obj.{name}` target,
and doesn't export its targets to `lib/cmake/mlir`.
This is due to MLIR using `add_llvm_library` under the hood, which adds
the target to `llvmexports`.
Differential Revision: https://reviews.llvm.org/D78773
[MLIR] Fix libMLIR.so and LLVM_LINK_LLVM_DYLIB
Primarily, this patch moves all mlir references to LLVM libraries into
either LLVM_LINK_COMPONENTS or LINK_COMPONENTS. This enables magic in
the llvm cmake files to automatically replace reference to LLVM components
with references to libLLVM.so when necessary. Among other things, this
completes fixing libMLIR.so, which has been broken for some configurations
since D77515.
The pattern is now that mlir libraries should almost
always use add_mlir_library. Previously, some libraries still used
add_llvm_library. However, this confuses the export of targets for use
out of tree, because libraries specified with add_llvm_library are exported
by LLVM. Instead, libraries which don't need to be or can't be linked into
libMLIR.so can specify EXCLUDE_FROM_LIBMLIR.
A common error mode is linking with LLVM libraries outside of LINK_COMPONENTS.
This almost always results in symbol confusion or multiply defined options
in LLVM when the same object file is included as a static library and
as part of libLLVM.so. To catch these errors more directly, there's now
mlir_check_all_link_libraries.
To simplify usage of add_mlir_library, we assume that all mlir
libraries depend on LLVMSupport, so it's not necessary to separately specify
it.
tested with:
BUILD_SHARED_LIBS=on,
BUILD_SHARED_LIBS=off + LLVM_BUILD_LLVM_DYLIB,
BUILD_SHARED_LIBS=off + LLVM_BUILD_LLVM_DYLIB + LLVM_LINK_LLVM_DYLIB.
By: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Differential Revision: https://reviews.llvm.org/D79067
[MLIR] Move from using target_link_libraries to LINK_LIBS
This allows us to correctly generate dependencies for derived targets,
such as targets which are created for object libraries.
By: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Differential Revision: https://reviews.llvm.org/D79243
Three commits have been squashed to avoid intermediate build breakage.
Summary:
This is an initial version; it currently supports OpString and OpLine
for autogenerated operations during (de)serialization.
Differential Revision: https://reviews.llvm.org/D79091
Summary:
This revision cleans up a layer of complexity in ScopedContext and uses InsertGuard instead of the previous manual bookkeeping.
The method `getBuilder` is renamed to `getBuilderRef` and spurious copies of OpBuilder are tracked.
This results in some canonicalizations not happening anymore in the Linalg matmul to vector test. This test is retired because relying on DRRs for this has been shaky at best. The solution will be better support to write fused passes in C++ with more idiomatic pattern composition and application.
Differential Revision: https://reviews.llvm.org/D79208
This revision adds support to allow named ops to lower to loops.
Linalg.batch_matmul successfully lowers to loops and to LLVM.
In the process, this test also activates lowering of linalg to affine loops.
However, padded convolutions do not lower to affine.load at the moment, so this revision overrides the type of the underlying load/store operation.
Differential Revision: https://reviews.llvm.org/D79135
This range allows for performing many different operations on successor operands, including erasing/adding/setting. This removes the need for the explicit canEraseSuccessorOperand and eraseSuccessorOperand methods.
Differential Revision: https://reviews.llvm.org/D79077
Currently a declaration won't be generated if the method has a default implementation. This means that operations that want to override the default have to explicitly declare the method in the extraClassDeclarations. This revision adds an optional list parameter to DeclareOpInterfaceMethods to allow for specifying a set of methods that should always have their declarations generated, even if there is a default.
Differential Revision: https://reviews.llvm.org/D79030
This class allows for mutating an operand range in place, and provides a vector-like API for adding/erasing/setting. ODS now uses this class to generate mutable wrappers for named operands, with the name `MutableOperandRange <operand-name>Mutable()`.
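For a variadic operand named, say, `inputs`, the generated wrapper could be used roughly like this (the op name, operand name, and accessor are hypothetical):

// 'MyOp' and its generated 'inputsMutable()' accessor are illustrative names.
static void updateInputs(MyOp op, ValueRange extraValues) {
  MutableOperandRange inputs = op.inputsMutable();
  inputs.append(extraValues); // add new operands at the end
  inputs.erase(0);            // drop the first operand
}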
Differential Revision: https://reviews.llvm.org/D78892
Summary:
When creating an operation with
* `AttrSizedOperandSegments` trait
* Variadic operands of only non-buildable types
* assemblyFormat to automatically generate the parser
the `builder` local variable is used, but never declared.
This adds a fix as well as a test for this case as existing ones use buildable types only.
Reviewers: rriddle, Kayjukh, grosser
Reviewed By: Kayjukh
Subscribers: mehdi_amini, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, frgossen, llvm-commits
Tags: #mlir, #llvm
Differential Revision: https://reviews.llvm.org/D79004