The support for functions taking and returning memrefs of floats was introduced
in the first version of the runner, created before MLIR had reliable lowering
of allocation/deallocation to library calls. It forcibly runs MLIR
transformations converting the affine, loop and standard dialects into the LLVM
dialect, unlike the other runner flows that accept the LLVM dialect directly.
Memref support leads to more complex layering and is generally fragile. Drop
it in favor of functions returning a scalar, or library-based function calls to
print memrefs and other data structures.
PiperOrigin-RevId: 271330839
- the list of passes run by mlir-cpu-runner included -lower-affine and
-lower-to-llvm but was missing -lower-to-cfg (because -lower-affine at
some point used to lower straight to CFG); add -lower-to-cfg in
between (see the pipeline sketch after this list). IR with affine ops can
now be run by mlir-cpu-runner.
- update -lower-to-cfg to be consistent with other passes (create*Pass methods
were changed to return unique ptrs, but -lower-to-cfg appears to have been
missed).
- mlir-cpu-runner was unable to parse the custom form of affine ops - fix
the link options
- drop unnecessary run options from test/mlir-cpu-runner/simple.mlir
(none of the test cases had loops)
- -convert-to-llvmir was changed to -lower-to-llvm at some point, but the
create pass method name wasn't updated (this pass converts/lowers to the LLVM
dialect as opposed to LLVM IR). Fix this.
(If we prefer "convert", the cmd-line options could be changed to
"-convert-to-llvm/cfg" then.)
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#115
PiperOrigin-RevId: 266666909
This CL splits the lowering of affine to LLVM into 2 parts:
1. affine -> std
2. std -> LLVM
The conversions mostly consist of splitting the concerns of the affine and non-affine worlds out of the existing conversions.
Short-circuiting of affine `if` conditions was never tested or exercised and is removed in the process; it can be reintroduced later if needed.
LoopParametricTiling.cpp is updated to reflect the newly added ForOp::build.
PiperOrigin-RevId: 257794436
This tool allows executing MLIR IR snippets written in the GPU dialect
on a CUDA-capable GPU. For this to work, a working CUDA install is required
and the build has to be configured with MLIR_CUDA_RUNNER_ENABLED set to 1.
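For example, on top of an otherwise usual MLIR CMake configuration (other
options elided and only shown as a placeholder):

cmake <usual MLIR options> -DMLIR_CUDA_RUNNER_ENABLED=1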
PiperOrigin-RevId: 256551415
As with Functions, Module will soon become an operation, which is value-typed. This eases the transition from Module to ModuleOp. A new class, OwningModuleRef, is provided to allow for owning a reference to a Module, and will auto-delete the held module on destruction.
PiperOrigin-RevId: 256196193
Move the data members out of Function and into a new impl storage class 'FunctionStorage'. This allows Function to become value typed, which will greatly simplify the transition of Function to FuncOp (given that FuncOp is also value typed).
PiperOrigin-RevId: 255983022
Conversions from dialect A to dialect B depend on both A and B. Therefore, it
is reasonable for them to live in a separate library that depends on both the
DialectA and DialectB libraries, and does not force dependents of DialectA or
DialectB to also link in the conversion. Create the directory layout for the
conversions and move the Standard to LLVM dialect conversion as the first
example.
PiperOrigin-RevId: 253312252
Originally, ExecutionEngine was created before MLIR had a proper pass
management infrastructure or an LLVM IR dialect (using the LLVM target
directly). It has been running a bunch of lowering passes to convert the input
IR from Standard+Affine dialects to LLVM IR and, later, to the LLVM IR dialect.
This is no longer necessary and is even undesirable for compilation flows that
perform their own conversion to the LLVM IR dialect. Drop this integration and
make ExecutionEngine accept only the LLVM IR dialect. Users of the
ExecutionEngine can call the relevant passes themselves.
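A client-side sketch of the new contract is shown below; it uses approximate
API names (the value-typed module form and the createLowerToLLVMPass entry
point mentioned elsewhere in this log are assumptions for this revision):

  #include "llvm/Support/Error.h"
  #include "mlir/ExecutionEngine/ExecutionEngine.h"
  #include "mlir/Pass/PassManager.h"

  // Sketch: the caller lowers to the LLVM IR dialect first and only then
  // hands the module to the ExecutionEngine.
  void lowerAndJit(mlir::MLIRContext &context, mlir::ModuleOp module) {
    mlir::PassManager pm(&context);
    pm.addPass(mlir::createLowerToLLVMPass());  // produce LLVM dialect ops
    if (mlir::failed(pm.run(module)))
      return;                                   // lowering failed
    auto expectedEngine = mlir::ExecutionEngine::create(module);
    if (!expectedEngine) {
      llvm::consumeError(expectedEngine.takeError());  // JIT setup failed
      return;
    }
    // ... look up and invoke JIT-compiled functions through *expectedEngine.
  }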
PiperOrigin-RevId: 249004676
This CL performs post-commit cleanups.
It adds the ability to specify which shared libraries to load dynamically in ExecutionEngine. The linalg integration test is updated to use a shared library.
Additional minor cleanups related to LLVM lowering of Linalg are also included.
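For example, the runner can be pointed at a support library along these lines
(the -shared-libs option name follows later revisions of the tool and the
library name is purely illustrative):

mlir-cpu-runner -shared-libs=libsupport.so input.mlir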
PiperOrigin-RevId: 248346589
This CL extends the execution engine to allow the additional resolution of symbol names
that have been registered explicitly. This allows linking static library symbols that have not been explicitly exported with the -rdynamic linker flag (which is deemed too intrusive).
PiperOrigin-RevId: 247969504
This CL adds support for running functions written in the Linalg dialect with mlir-cpu-runner.
For this purpose, this CL adds BufferAllocOp, BufferDeallocOp, LoadOp and StoreOp to the Linalg dialect as well as their lowering to LLVM. To avoid collisions with mlir::LoadOp/StoreOp (which should really become mlir::affine::LoadOp/StoreOp), the mlir::linalg namespace is added.
The execution uses a dummy linalg_dot function that just returns for now. In the future a proper library call will be used.
PiperOrigin-RevId: 247476061
This makes up for some differences in standard library and linker flags.
It also gets rid of the requirement to build with RTTI.
PiperOrigin-RevId: 241348845
The original implementation of OptUtils provided two different LLVM IR module
transformers to be used with the MLIR ExecutionEngine: OptimizingTransformer,
parameterized by the optimization levels (similar to -O3 flags), and
LLVMPassesTransformer, parameterized by a string formatted similarly to the
command line options of LLVM's "opt" tool, without support for -O* flags.
Introduce such support by declaring the flags inside the parser and by
populating the pass managers similarly to what "opt" does. Remove the
additional flags from mlir-cpu-runner as they can now be wrapped into
`-llvm-opts` together with other LLVM-related flags.
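After this change, the optimization flags can be passed together with the other
LLVM flags, for instance (illustrative):

mlir-cpu-runner -llvm-opts="-O2 -some-llvm-pass" input.mlir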
PiperOrigin-RevId: 236107292
A recent change introduced the possibility to run LLVM IR transformations during
JIT compilation in the ExecutionEngine. Provide helper functions that
construct IR transformers given either clang-style optimization levels or a
list of passes to run. The latter wraps the LLVM command line option parser to
parse strings rather than actual command line arguments. As a result, we can
run either of
mlir-cpu-runner -O3 input.mlir
mlir-cpu-runner -some-mlir-pass -llvm-opts="-llvm-pass -other-llvm-pass"
to combine different transformations. The transformer builder functions are
provided as a separate library that, unlike the main execution engine library,
depends on the LLVM pass libraries. The library can be used for integrating the MLIR
execution engine into external frameworks.
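A sketch of using the builders from C++ follows; the function and parameter
names mirror the OptUtils helpers described above, but the exact signatures
vary across revisions and should be treated as assumptions:

  #include "llvm/Support/Error.h"
  #include "mlir/ExecutionEngine/ExecutionEngine.h"
  #include "mlir/ExecutionEngine/OptUtils.h"

  // Sketch: build a clang-style -O3 transformer and attach it to the JIT.
  void jitWithO3(mlir::ModuleOp module) {
    // Returns a std::function<llvm::Error(llvm::Module *)> that runs an
    // -O3-equivalent LLVM pass pipeline on the JIT-ed module.
    auto transformer = mlir::makeOptimizingTransformer(
        /*optLevel=*/3, /*sizeLevel=*/0, /*targetMachine=*/nullptr);
    auto expectedEngine = mlir::ExecutionEngine::create(module, transformer);
    if (!expectedEngine)
      llvm::consumeError(expectedEngine.takeError());
  }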
PiperOrigin-RevId: 234173493
The original implementation of the translation from MLIR to LLVM IR operated on the
Standard+BuiltIn dialect, with a later addition of the SuperVector dialect.
This required the translation to be aware of a potentially large number of other
dialects as the infrastructure grew. With the recent introduction of the
LLVM IR dialect into MLIR, the translation can be switched to only translate
the LLVM IR dialect, and the translation of the operations becomes largely
mechanical.
The reimplementation of the translator follows the lines of the original
translator in function and basic block conversion. In particular, block
arguments are converted to LLVM IR PHI nodes, which are connected to their
sources after all blocks of a function have been converted. Thanks to LLVM IR
types being wrapped in the MLIR LLVM dialect type, type conversion is
simplified to only convert function types; all other types are simply
unwrapped. Individual instructions are constructed using the LLVM IRBuilder,
which has a great potential for being table-generated from the LLVM IR dialect
operation definitions.
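As a self-contained illustration of that scheme using plain LLVM IRBuilder
calls (not code from the translator itself):

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Module.h"

  // Block arguments become PHI nodes that are created up front and connected
  // to their incoming values only after the predecessor blocks exist.
  void buildPhiExample(llvm::LLVMContext &ctx, llvm::Module &m) {
    auto *i64 = llvm::Type::getInt64Ty(ctx);
    auto *fnTy = llvm::FunctionType::get(i64, {i64}, /*isVarArg=*/false);
    auto *fn = llvm::Function::Create(fnTy, llvm::Function::ExternalLinkage,
                                      "example", m);
    auto *entry = llvm::BasicBlock::Create(ctx, "entry", fn);
    auto *merge = llvm::BasicBlock::Create(ctx, "merge", fn);

    llvm::IRBuilder<> b(merge);
    // One PHI per block argument of the successor, created first...
    llvm::PHINode *phi = b.CreatePHI(i64, /*NumReservedValues=*/1);
    b.CreateRet(phi);

    // ...then wired to its source once the predecessor block is built.
    b.SetInsertPoint(entry);
    b.CreateBr(merge);
    phi->addIncoming(&*fn->arg_begin(), entry);
  }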
The input of test/Target/llvmir.mlir is updated to use the MLIR LLVM IR
dialect. While it is now redundant with the dialect conversion test, the point
of the exercise is to guarantee exactly the same LLVM IR is emitted. (Only the
name of the allocation function is changed from `__mlir_alloc` to `alloc` in
the CHECK lines.) It will be simplified in a follow-up commit.
PiperOrigin-RevId: 233842306
This implements a simple CPU runner based on LLVM Orc JIT. The base
functionality is provided by the ExecutionEngine class that compiles and links
the module, and provides an interface for obtaining function pointers to the
JIT-compiled MLIR functions and for invoking those functions directly. Since
function pointers need to be cast to the correct pointer type, the
ExecutionEngine wraps LLVM IR functions obtained from MLIR into a helper
function with the common signature `void (void **)` where the single argument
is interpreted as a list of pointers to the actual arguments passed to the
function, followed at the end by a pointer to the result of the function.
Additionally, the ExecutionEngine is set up to resolve library functions to
those available in the current process, enabling support for, e.g., simple C
library calls.
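In C terms, calling through such a wrapper looks roughly like this
(illustrative only; the names are not part of the actual API):

  // The packed wrapper has signature `void (void **)`; the array holds
  // pointers to each argument, followed by a pointer to the result slot.
  void callThroughPackedWrapper(void (*wrapper)(void **)) {
    float lhs = 1.0f, rhs = 2.0f, result = 0.0f;
    void *args[] = {&lhs, &rhs, &result};
    wrapper(args);  // invokes the JIT-compiled two-argument float function
  }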
For integration purposes, this also provides a simplistic runtime for memref
descriptors as expected by the LLVM IR code produced by MLIR translation. In
particular, memrefs are transformed into LLVM structs (which can be mapped to C
structs) with a pointer to the data, followed by the dynamic sizes. This
implementation only supports statically-shaped memrefs of type float, but can
be extended if necessary.
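For illustration, the descriptor of a 2-D memref with dynamic sizes maps onto a
C struct along these lines (a sketch mirroring the description above, not a
header shipped with the runner):

  #include <stdint.h>

  // Pointer to the data followed by one size field per dynamic dimension.
  // For the statically-shaped memrefs supported here, the size fields are
  // absent and the descriptor degenerates to just the data pointer.
  struct MemRefDescriptor2D {
    float *data;
    int64_t size0;
    int64_t size1;
  };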
Provide a binary for the runner and a test that exercises it.
PiperOrigin-RevId: 230876363