- Fixes bug 48242 point 3 crash.
- Makes the improvements from points 1 & 2.
https://bugs.llvm.org/show_bug.cgi?id=48262
```
def RTLValueType : Type<CPred<"isRTLValueType($_self)">, "Type"> {
  string cppType = "::mlir::Type";
}
```
This now works, but merely by happenstance: `parameters` expects a `TypeParameter` class def or a string representing a C++ type, but does not enforce it.
Reviewed By: lattner
Differential Revision: https://reviews.llvm.org/D91939
This reverts commit f8284d21a8.
Revert "[mlir][Linalg] NFC: Expose some utility functions used for promotion."
This reverts commit 0c59f51592.
Revert "Remove unused isZero function"
This reverts commit 0f9f0a4046.
Commit f8284d21 led to multiple failures in IREE compilation.
This commit starts a new pass and patterns for converting Linalg
named ops to generic ops. This enables us to leverage the flexibility
of generic ops during transformations. Right now only linalg.conv
is supported; others will be added when useful.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D91357
For intrinsics with multiple returns where one or more operands are overloaded, the overloaded type is inferred from the corresponding field of the resulting struct, instead of accessing the result directly.
As such, the hasResult parameter of LLVM_IntrOpBase (and derived classes) is replaced with numResults. The TableGen backend for intrinsics is also updated to populate this field with the total number of results.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D91680
This allows for operations that exclusively affect symbol operations to better describe their side effects.
Differential Revision: https://reviews.llvm.org/D91581
As discussed in https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020
this CL is the start of sparse tensor compiler support in MLIR. Starting with a
"dense" kernel expressed in the Linalg dialect together with per-dimension
sparsity annotations on the tensors, the compiler automatically lowers the
kernel to sparse code using the methods described in Fredrik Kjolstad's thesis.
Many details are still TBD. For example, the sparse "bufferization" is purely
done locally since we don't have a global solution for propagating sparsity
yet. Furthermore, code to input and output the sparse tensors is missing.
Nevertheless, with some hand modifications, the generated MLIR can be
easily converted into runnable code already.
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D90994
This utility function is helpful for dialect-specific builders that need
to access the context through the location, where the location itself may
either be provided as an argument or recovered from the implicit location
stack.
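Below is a minimal plain-Python sketch of the lookup pattern described above; it does not use the actual MLIR bindings API, and all names in it are hypothetical.
```
# Illustrative sketch only; not the actual MLIR API. All names are hypothetical.
_implicit_location_stack = []  # stands in for the implicit location stack

def resolve_location(loc=None):
    """Return the explicitly provided location, or fall back to the innermost
    entry of the implicit location stack."""
    if loc is not None:
        return loc
    if not _implicit_location_stack:
        raise RuntimeError("no location given and the implicit stack is empty")
    return _implicit_location_stack[-1]

# A dialect-specific builder can accept `loc=None` and use resolve_location(loc)
# to obtain the location (and, through it, the context).
```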
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D91623
It may be necessary for interface methods to process or return variables with
the interface class type, in particular for attribute and type interfaces that
can return modified attributes and types that implement the same interface.
However, the code generated by ODS in this case would not compile because the
signature (and the body, if provided) appears in the definition of the Model
class, before the interface class, which derives from the Model. Change the ODS
interface method generator to emit only method declarations in the Model class
itself, and to emit method definitions after the interface class. Mark the
definitions as "inline" since they are still emitted in the header and are no longer
implicitly inline. Add a forward declaration of the interface class before the
Concept+Model classes to make the class name usable in declarations.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D91499
In ODS, attributes of an operation can be provided as a part of the "arguments"
field, together with operands. Such attributes are accepted by the op builder
and have accessors generated.
Implement similar functionality for ODS-generated op-specific Python bindings:
the `__init__` method now accepts attributes together with operands, in the same
order as in the ODS `arguments` field; instance properties are added to the
OpView classes to access the attributes.
This initial implementation accepts and returns instances of the corresponding
attribute class, and not the underlying values since the mapping scheme of the
value types between C++, C and Python is not yet clear. Default-valued
attributes are not supported as that would require Python to be able to parse
C++ literals.
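As a rough illustration of the behavior described above, here is a plain-Python mock of the generated pattern; it is not the actual generated code, and the op and attribute names are hypothetical.
```
# Plain-Python mock of the generated pattern described above; not actual
# generated code. The op and attribute names are hypothetical.
class DummyOpView:
    """Mimics a generated op class: __init__ takes operands and attributes in
    ODS `arguments` order, and each attribute is exposed as a property that
    returns the attribute object itself rather than an unwrapped value."""

    def __init__(self, operand, my_attr):
        self.operands = [operand]
        self._attributes = {"my_attr": my_attr}

    @property
    def my_attr(self):
        # Returns the stored attribute instance, not a converted Python value.
        return self._attributes["my_attr"]

op = DummyOpView(operand="%value", my_attr="42 : i32")
assert op.my_attr == "42 : i32"
```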
Since attributes in ODS are tightly related to the actual C++ type system,
provide a separate Tablegen file with the mapping between ODS storage type for
attributes (typically, the underlying C++ attribute class), and the
corresponding class name. So far, this might look unnecessary since all names
match exactly, but this is not necessarily the case for non-standard,
out-of-tree attributes, which may also be placed in non-default namespaces or
Python modules. This also allows out-of-tree users to generate Python bindings
without having to modify the bindings generator itself. Storage type was
preferred over the Tablegen "def" of the attribute class because ODS
essentially encodes attribute _constraints_ rather than classes, e.g. there may
be many Tablegen "def"s in the ODS that correspond to the same attribute type
with additional constraints.
The presence of the explicit mapping requires a change in the .td file
structure: instead of just calling the bindings generator directly on the main
ODS file of the dialect, it becomes necessary to create a new file that
includes the main ODS file of the dialect and provides the mapping for
attribute types. Arguably, this approach offers better separability of the
Python bindings in the build system as the main dialect no longer needs to know
that it is being processed by the bindings generator.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D91542
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.
Differential Revision: https://reviews.llvm.org/D91572
This replaces the old type decomposition logic that was previously mixed
into bufferization, and makes it easily accessible.
This also deletes TestFinalizingBufferize, because after we remove the type
decomposition, it doesn't do anything that is not already provided by
func-bufferize.
Differential Revision: https://reviews.llvm.org/D90899
The tokens are already handled by the lexer. This revision exposes them
through the parser interface.
This revision also adds missing functions for question mark parsing and
completes the list of valid punctuation tokens in the documentation.
Differential Revision: https://reviews.llvm.org/D90907
Add an ODS-backed generator of default builders. This currently does not
support operations with attribute arguments, for which the builder is
just ignored. Attribute support will be introduced separately for
builders and accessors.
Default builders are always generated with the same number of result and
operand groups as the ODS specification, i.e. one group per operand
or result. Optional elements accept None but cannot be omitted. Variadic
groups accept iterable objects and cannot be replaced with a single
object.
For some operations, it is possible to infer the result type given the
traits, but most traits rely on inline pieces of C++ that we cannot
(yet) forward to Python bindings. Since the ops where the inference is
possible (those having the `SameOperandsAndResultType` trait or
`TypesMatchWith` without a transform field) are a small minority, they
also require the result type, to make the builder syntax more consistent.
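The calling convention can be summarized with the following plain-Python sketch; it is not the generated code itself, and the names are hypothetical.
```
# Plain-Python sketch of the builder calling convention described above; not
# the generated code itself, and the names are hypothetical.
def build_dummy_op(result_type, operand, optional_operand, variadic_operands):
    """One argument per ODS result/operand group: the result type is always
    explicit, an optional group is passed as None when absent (it cannot be
    omitted), and a variadic group is passed as an iterable."""
    operands = [operand]
    if optional_operand is not None:
        operands.append(optional_operand)
    operands.extend(variadic_operands)  # any iterable, possibly empty
    return {"result_types": [result_type], "operands": operands}

# Optional group explicitly passed as None; variadic group as a list.
op = build_dummy_op("i32", "%a", None, ["%b", "%c"])
```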
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D91190
I would like to use this for D90589 to switch std.alloc to assemblyFormat.
Hopefully it will be useful in other places as well.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D91068
This only exposes the ability to round-trip a textual pipeline at the
moment.
To exercise it, we also bind libTransforms in a new Python extension. This
does not include any interesting bindings, but it includes all the
machinery needed to add separate native extensions and load them dynamically.
As such, passes in libTransforms are only registered after `import
mlir.transforms`.
To support this global registration, the TableGen backend is also
extended to bind to the C API the group registration for passes.
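A hedged usage sketch of what this could look like from Python follows; the module, class, and method names are assumptions based on the description above and should be checked against the actual bindings.
```
# Hedged sketch; the module/class/method names below are assumptions based on
# the description above, not verified against the actual bindings.
import mlir.ir as ir
import mlir.passmanager as passmanager
import mlir.transforms  # importing registers the libTransforms pass group

with ir.Context():
    # Round-trip a textual pipeline: parse it, then print it back.
    pm = passmanager.PassManager.parse("canonicalize")
    print(str(pm))
```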
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D90819
Introduce an ODS/Tablegen backend producing Op wrappers for Python bindings
based on the ODS operation definition. Usage:
  mlir-tblgen -gen-python-op-bindings -Iinclude <path/to/Ops.td> \
      -bind-dialect=<dialect-name>
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D90960
The pass combines the patterns of the ExpandAtomic, ExpandMemRefReshape, and
StdExpandDivs passes. The pass is meant to legalize the Standard dialect for conversion to LLVM.
Differential Revision: https://reviews.llvm.org/D91082
* Wires them in the same way that peer-dialect test passes are registered.
* Fixes the build for -DLLVM_INCLUDE_TESTS=OFF.
Differential Revision: https://reviews.llvm.org/D91022
We were discussing on Discord the need for extension-based systems like Python to dynamically link against MLIR (or else you can only have one extension that depends on it). Currently, when I set that up, I piggy-backed off of the flag that enables building libLLVM.so and libMLIR.so, and depended on libMLIR.so from the Python extension if shared library building was enabled. However, this is less than ideal.
In the current setup, libMLIR.so exports all symbols from both the C++ API and the C-API. The former is a kitchen sink and the latter is curated. We should split them: things that are properly factored to depend on the C-API should have the option to *only* depend on the C-API, and we should build that shared library no matter what. Its presence isn't just an optimization: it is a key part of the system.
To do this right, I needed to:
* Introduce visibility macros into mlir-c/Support.h. These should work on both *nix and windows as-is.
* Create a new libMLIRPublicAPI.so with just the mlir-c object files.
* Compile the C-API with -fvisibility=hidden.
* Conditionally depend on the libMLIR.so from libMLIRPublicAPI.so if building libMLIR.so (otherwise, also links against the static libs and will produce a mondo libMLIRPublicAPI.so).
* Disable re-exporting of static library symbols that come in as transitive deps.
This gives us a dynamic linked C-API layer that is minimal and should work as-is on all platforms. Since we don't support libMLIR.so building on Windows yet (and it is not very DLL friendly), this will fall back to a mondo build of libMLIRPublicAPI.so, which has its uses (it is also the most size conscious way to go if you happen to know exactly what you need).
Sizes (release/stripped, Ubuntu 20.04):
Shared library build:
  libMLIRPublicAPI.so: 121Kb
  _mlir.cpython-38-x86_64-linux-gnu.so: 1.4Mb
  mlir-capi-ir-test: 135Kb
  libMLIR.so: 21Mb
Static build:
  libMLIRPublicAPI.so: 5.5Mb (since this is a "static" build, this includes the MLIR implementation as non-exported code).
  _mlir.cpython-38-x86_64-linux-gnu.so: 1.4Mb
  mlir-capi-ir-test: 44Kb
Things like npcomp and circt, which bring their own dialects/transforms/etc., would still need the shared library build and code that links against libMLIR.so (since it is all C++ interop stuff), but hopefully things that only depend on the public C-API can just have the one narrow dep.
I spot checked everything with nm, and it looks good in terms of what is exporting/importing from each layer.
I'm not in a hurry to land this, but if it is controversial, I'll probably split off the Support.h and API visibility macro changes, since we should set that pattern regardless.
Reviewed By: mehdi_amini, benvanik
Differential Revision: https://reviews.llvm.org/D90824
This functionality is superseded by the BufferResultsToOutParams pass (see
https://reviews.llvm.org/D90071) for users that require buffers to be
out-params. That pass should be run immediately after all tensors are gone from
the program (before buffer optimizations and deallocation insertion), such as
immediately after a "finalizing" bufferize pass.
The -test-finalizing-bufferize pass now defaults to what used to be the
`allowMemrefFunctionResults=true` flag, and the
finalizing-bufferize-allowed-memref-results.mlir file is moved
to test/Transforms/finalizing-bufferize.mlir.
Differential Revision: https://reviews.llvm.org/D90778
TestDialect has many operations and they all live in the ::mlir namespace.
Sometimes it is not clear whether the ops used in the code for the test passes
belong to the Standard or the Test dialect.
Also, with this change it is easier to see which passes registered
in mlir-opt are actually test passes living in mlir/test.
Differential Revision: https://reviews.llvm.org/D90794
The LinalgDependenceGraph and alias analysis provide the necessary analysis for the Linalg fusion on buffers case.
However this is not enough for linalg on tensors which require proper memory effects to play nicely with DCE and other transformations.
This revision adds side effects to Linalg ops that were previously missing and has 2 consequences:
1. One example in the copy removal pass now fails since the linalg.generic op has side effects and the pass does not perform alias analysis / distinguish between reads and writes.
2. A few examples in fusion-tensor.mlir need to return the resulting tensor, otherwise DCE automatically kicks in as part of greedy pattern application.
Differential Revision: https://reviews.llvm.org/D90762
This is exposing the basic functionalities (create, nest, addPass, run) of
the PassManager through the C API in the new header: `include/mlir-c/Pass.h`.
In order to exercise it in the unit-test, a basic TableGen backend is
also provided to generate a simple C wrapper around the pass
constructor. It is used to expose the libTransforms passes to the C API.
Reviewed By: stellaraccident, ftynse
Differential Revision: https://reviews.llvm.org/D90667
BufferPlacement is no longer part of bufferization. However, this test
is an important test of "finalizing" bufferize passes.
A "finalizing" bufferize conversion is one that performs a "full"
conversion and expects all tensors to be gone from the program. This in
particular involves rewriting funcs (including block arguments of the
contained region), calls, and returns. The unique property of finalizing
bufferization passes is that they cannot be done via a local
transformation with suitable materializations to ensure composability
(as other bufferization passes do). For example, if a call is
rewritten, the callee needs to be rewritten as well, otherwise the IR will end up
invalid. Thus, finalizing bufferization passes require an atomic change
to the entire program (e.g. the whole module).
This new designation also makes it clear that it shouldn't be testing
bufferization of linalg ops, so the tests have been updated to not use
linalg.generic ops. (linalg.copy is still used as the "copy" op for
copying into out-params.)
Differential Revision: https://reviews.llvm.org/D89979
This commit adds a new library that merges/combines a number of spv
modules into a combined one. The library has a single entry point:
combine(...).
To combine a number of MLIR spv modules, we move all the module-level ops
from all the input modules into one big combined module. To that end, the
combination process can proceed in 2 phases:
(1) resolving conflicts between pairs of ops from different modules, and
(2) deduplicating equivalent ops/sub-ops in the merged module (TODO).
This patch implements only the first phase.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D90477
This commit adds a new library that merges/combines a number of spv
modules into a combined one. The library has a single entry point:
combine(...).
To combine a number of MLIR spv modules, we move all the module-level ops
from all the input modules into one big combined module. To that end, the
combination process can proceed in 2 phases:
(1) resolving conflicts between pairs of ops from different modules, and
(2) deduplicating equivalent ops/sub-ops in the merged module (TODO).
This patch implements only the first phase.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D90022
Linalg "tile-and-fuse" is currently exposed as a Linalg pass "-linalg-fusion" but only the mechanics of the transformation are currently relevant.
Instead, turn it into a "-test-linalg-greedy-fusion" pass which performs canonicalizations to enable more fusions to compose.
This allows dropping the OperationFolder which is not meant to be used with the pattern rewrite infrastructure.
Differential Revision: https://reviews.llvm.org/D90394
When compiling for code size, the use of a vtable causes a destructor (and constructor, in certain cases) to be generated for the class. Interface models don't need a complex constructor or a destructor, so this can lead to many megabytes of code size increase (even in opt). This revision switches to a simpler struct-of-function-pointers approach that accomplishes the same API requirements as before. This change requires no updates to user code, or any other code aside from the generator, as the user-facing API is still exactly the same.
Differential Revision: https://reviews.llvm.org/D90085
A recent commit introduced a new syntax for specifying builder arguments in
ODS, which is more amenable to automated processing, and deprecated the old
form. Transition all dialects as well as Linalg ODS generator to use the new
syntax.
Add a deprecation notice to the ODS generator.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D90038
This patch introduces a SPIR-V runner. The aim is to run a gpu
kernel on a CPU via GPU -> SPIRV -> LLVM conversions. This is a first
prototype, so more features will be added in due time.
- Overview
The runner follows a similar flow to the other runners in-tree. However,
having converted the kernel to SPIR-V, we encode the bind attributes of
global variables that represent kernel arguments. Then the SPIR-V module is
converted to LLVM. On the host side, we emulate passing the data to the device
by creating, in the main module, globals with the same symbolic names as in the
kernel module. These global variables are later linked with the ones from the
nested module. We copy data from the kernel arguments to the globals, call the
kernel function from the nested module, and then copy the data back.
- Current state
At the moment, the runner is capable of running 2 modules, one nested in the
other. The kernel module must contain exactly one kernel function. Also,
the runner supports rank-1 integer memref types as arguments (to be scaled).
- Enhancement of JitRunner and ExecutionEngine
To translate nested modules to LLVM IR, JitRunner and ExecutionEngine were
altered to take an optional (defaulting to `nullptr`) function reference that
serves as a custom LLVM IR module builder. This allows customizing LLVM IR
module creation from MLIR modules.
Reviewed By: ftynse, mravishankar
Differential Revision: https://reviews.llvm.org/D86108
This dependency was already existing indirectly, but is now more direct
since the registration relies on an inline function. This fixes the
link of the tools with BFD.