Add missing mlir-capi-*-test tool substitutions in order to fix CAPI
test failures when mlir is not installed yet.
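For reference, a minimal sketch of the kind of registration involved, as it could appear
in mlir/test/lit.cfg.py (the two tool names below are illustrative; the actual change
covers all mlir-capi-*-test binaries):
```
# Register the CAPI test tools as lit substitutions so RUN lines resolve them from the
# build tree instead of an installed copy (illustrative sketch).
from lit.llvm import llvm_config

capi_test_tools = ["mlir-capi-ir-test", "mlir-capi-pass-test"]
llvm_config.add_tool_substitutions(capi_test_tools, [config.mlir_tools_dir])
```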
Differential Revision: https://reviews.llvm.org/D110991
Include mlir_tools_dir in the PATH used in the test environment,
as otherwise mlir-reduce is unable to find mlir-opt when building
standalone (and hence mlir_tools_dir != llvm_tools_dir).
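A minimal sketch of the corresponding lit.cfg.py change, using the `with_environment`
helper from lit.llvm:
```
# Put both tool directories on PATH for spawned processes, so that mlir-reduce can find
# mlir-opt even when mlir_tools_dir != llvm_tools_dir (standalone builds).
from lit.llvm import llvm_config

llvm_config.with_environment("PATH", config.mlir_tools_dir, append_path=True)
llvm_config.with_environment("PATH", config.llvm_tools_dir, append_path=True)
```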
Differential Revision: https://reviews.llvm.org/D110992
First the leak sanitizer has to be disabled, as even an empty script
leads to leak detection with Python.
Then we need to preload the ASAN runtime, as the main binary (python)
won't be linked against it. This will only work on Linux right now.
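A rough sketch of how the two tweaks can be expressed in a lit config; the runtime path
below is a placeholder (in practice it would be queried from the compiler), and the
LD_PRELOAD approach is what limits this to Linux:
```
import platform

# Disable LeakSanitizer: even an empty Python script reports leaks.
config.environment["ASAN_OPTIONS"] = "detect_leaks=0"

# Preload the ASAN runtime, since the python binary itself is not linked against it.
if platform.system() == "Linux":
    # Placeholder path; the real config would derive it from the host compiler.
    config.environment["LD_PRELOAD"] = "/path/to/libclang_rt.asan-x86_64.so"
```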
Differential Revision: https://reviews.llvm.org/D111004
* This could have been removed some time ago as it only had one op left in it, which is redundant with the new approach.
* `matmul_i8_i8_i32` (the remaining op) can be trivially replaced by `matmul`, which natively supports mixed precision.
Differential Revision: https://reviews.llvm.org/D110792
* Implements all of the discussed features:
- Links against common CAPI libraries that are self contained.
- Stops using the 'python/' directory at the root for everything, opening the namespace up for multiple projects to embed the MLIR python API.
- Separates declaration of sources (py and C++) needed to build the extension from building, allowing external projects to build custom assemblies from core parts of the API.
- Makes the core python API relocatable (i.e. it could be embedded as something like 'npcomp.ir', 'npcomp.dialects', etc). Still a bit more to do to make it truly isolated but the main structural reset is done.
  - When building statically, installed python packages are completely self-contained, suitable for direct setup and upload to PyPI, et al.
- Lets external projects assemble their own CAPI common runtime library that all extensions use. No more possibilities for TypeID issues.
- Begins modularizing the API so that external projects that just include a piece pay only for what they use.
* I also rolled in a re-organization of the native libraries that matches how I was packaging these out of tree and is a better layering (i.e. all libraries go into a nested _mlir_libs package); see the import sketch after this list. There is some further cleanup that I resisted, since it would have required source changes that I'd rather do in a followup once everything stabilizes.
* Note that I made a somewhat odd choice in choosing to recompile all extensions for each project they are included in (as opposed to compiling once and just linking). While not leveraged yet, this will let us set preprocessor definitions controlling the namespacing of the extensions so that they can be made to not conflict across projects.
* This will be a relatively substantial breaking change for downstreams. I will handle the npcomp migration and will coordinate with the circt folks before landing. We should stage this and make sure it isn't causing problems.
* Fixed a couple of absolute imports that were causing issues.
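A rough illustration of the resulting layout from the user's point of view (the `npcomp.*` spellings are the hypothetical downstream embedding mentioned above):
```
# Upstream packaging: core API plus a nested package holding all native libraries.
import mlir.ir
from mlir.dialects import std      # dialect bindings live under mlir.dialects
from mlir import _mlir_libs        # common CAPI/native libraries are nested here

# A downstream project embedding the same sources can relocate the whole tree, e.g.:
#   import npcomp.ir
#   import npcomp.dialects
```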
Differential Revision: https://reviews.llvm.org/D106520
The patch extends the YAML code generation to support the following new OpDSL constructs:
- captures
- constants
- iteration index accesses
- predefined types
These constructs were introduced by revision
https://reviews.llvm.org/D101364.
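As a rough sketch, an OpDSL definition that the extended YAML emitter can handle might
look like the following (the spellings follow the OpDSL of that era and are illustrative
rather than exact):
```
from mlir.dialects.linalg.opdsl.lang import *

# Polymorphic matmul using predefined element types (T1, T2, U) and casts; the same
# emitter also handles captured scalars, `const(...)` constants, and iteration index
# accesses such as `index(D.m)` appearing in op bodies.
@linalg_structured_op
def matmul_poly(A=TensorDef(T1, S.M, S.K),
                B=TensorDef(T2, S.K, S.N),
                C=TensorDef(U, S.M, S.N, output=True)):
    C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```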
Differential Revision: https://reviews.llvm.org/D102075
This commit adds a basic LSP server for MLIR that supports resolving references and definitions. Several components of the setup are simplified to keep the size of this commit down, and will be built out in later commits. A followup commit will add a vscode language client that communicates with this server, paving the way for a better IDE experience when interfacing with MLIR files.
The structure of this tool is similar to mlir-opt and mlir-translate, i.e. the implementation is structured as a library that users can call into to implement entry points that contain the dialects/passes that they are interested in.
Note: This commit contains several files, namely those in `mlir-lsp-server/lsp`, that have been copied from the LSP code in clangd and adapted for use in MLIR. This copying was decided to be the best initial path forward (discussed offline by several stakeholders in MLIR and clangd) given the differing needs of the MLIR server and the clangd one. If a strong desire/need for unification arises in the future, the existence of these files in mlir-lsp-server can be reconsidered.
Differential Revision: https://reviews.llvm.org/D100439
This change combines for ROCm what was done for CUDA in D97463, D98203, D98360, and D98396.
I did not try to compile SerializeToHsaco.cpp or test mlir/test/Integration/GPU/ROCM because I don't have an AMD card. I fixed the things that had obvious bit-rot though.
Reviewed By: whchung
Differential Revision: https://reviews.llvm.org/D98447
This does not change the behavior directly: the tests only run when
`-DMLIR_INCLUDE_INTEGRATION_TESTS=ON` is configured. However, running
`ninja check-mlir` will now run all the tests within a single
lit invocation. The previous behavior would wait for all the integration
tests to complete before starting to run the first regular test, and the
test results were also reported separately. This change unifies all
of this and allows concurrent execution of the integration tests with the
regular lit tests and unit tests.
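A minimal sketch of how this can look in mlir/test/lit.cfg.py; the flag name is a
stand-in for the lit.site.cfg.py value derived from `-DMLIR_INCLUDE_INTEGRATION_TESTS`:
```
# Integration tests live in a subdirectory of the regular test tree; exclude them from
# the single lit invocation unless they were enabled at configure time.
if not config.mlir_include_integration_tests:
    config.excludes.append("Integration")
```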
Differential Revision: https://reviews.llvm.org/D97241
Add the necessary bits to CMakeLists to make it possible to configure
MLIR against an installed LLVM, and to build it with minimal need for the
LLVM source tree. The latter is only necessary to run unittests; if it
is missing, the unittests are skipped with a warning.
This also includes the necessary changes to tests, in particular
adding some missing substitutions and defining missing variables
for lit.site.cfg.py substitution.
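The substitution plumbing follows the usual lit pattern; the variable and substitution
names below are illustrative:
```
# lit.site.cfg.py.in: CMake fills in the concrete path at configure time.
#   config.mlir_obj_root = "@MLIR_BINARY_DIR@"

# lit.cfg.py: the configured value is then exposed to RUN lines as a substitution.
config.substitutions.append(("%mlir_obj_root", config.mlir_obj_root))
```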
Reviewed By: stephenneuendorffer
Differential Revision: https://reviews.llvm.org/D85464
Co-authored-by: Isuru Fernando <isuruf@gmail.com>
This exposes the basic functionality (create, nest, addPass, run) of
the PassManager through the C API in the new header `include/mlir-c/Pass.h`.
To exercise it in the unit test, a basic TableGen backend is
also provided to generate a simple C wrapper around the pass
constructor. It is used to expose the libTransforms passes to the C API.
Reviewed By: stellaraccident, ftynse
Differential Revision: https://reviews.llvm.org/D90667
This patch introduces a SPIR-V runner. The aim is to run a gpu
kernel on a CPU via GPU -> SPIRV -> LLVM conversions. This is a first
prototype, so more features will be added in due time.
- Overview
The runner follows a similar flow to the other in-tree runners. However,
having converted the kernel to SPIR-V, we encode the bind attributes of the
global variables that represent kernel arguments. Then the SPIR-V module is
converted to LLVM. On the host side, we emulate passing the data to the device
by creating, in the main module, globals with the same symbolic names as in the
kernel module. These global variables are later linked with the ones from the
nested module. We copy data from the kernel arguments to the globals, call the
kernel function from the nested module, and then copy the data back.
- Current state
At the moment, the runner is capable of running two modules, one nested in the
other. The kernel module must contain exactly one kernel function. Also, the
runner only supports rank-1 integer memref types as arguments (to be extended).
- Enhancement of JitRunner and ExecutionEngine
To translate nested modules to LLVM IR, JitRunner and ExecutionEngine were
altered to take an optional function reference (defaulting to `nullptr`) that
acts as a custom LLVM IR module builder. This allows customizing LLVM IR module
creation from MLIR modules.
Reviewed By: ftynse, mravishankar
Differential Revision: https://reviews.llvm.org/D86108
* Links against libMLIR.so if the project is built for DYLIBs.
* Puts things in the right place in build and install time python/ trees so that RPaths line up.
* Adds install actions to install both the extension and sources.
* Copies py source files to the build directory to match (consistent layout between build/install time and one place to point a PYTHONPATH for tests and interactive use).
* Finally, "import mlir" from an installed LLVM just works.
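A quick smoke test against the installed tree, assuming PYTHONPATH points at the installed python/ directory (the path below is illustrative):
```
# export PYTHONPATH=/path/to/llvm-install/python
import mlir                 # trampoline package; the native extension is found via RPaths
print(mlir.__file__)        # should point into the installed tree
```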
Differential Revision: https://reviews.llvm.org/D89167
Introduce an initial version of the C API for MLIR core IR components: Value, Type,
Attribute, Operation, Region, Block, Location. These APIs allow for both
inspection and creation of the IR in the generic form and are intended for wrapping
in high-level library- and language-specific constructs. At this point, there
is no stability guarantee provided for the API.
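The in-tree Python bindings are one example of such a wrapping; a small sketch of
inspection and creation through them (assuming the `mlir` package is importable)
looks like:
```
from mlir import ir

with ir.Context(), ir.Location.unknown():
    # Inspection: parse a module from its textual form and look at the wrapped operation.
    module = ir.Module.parse("module {}")
    print(module.operation.name)      # prints the module op's name

    # Creation: build a fresh module; ops would be appended at the insertion point.
    new_module = ir.Module.create()
    with ir.InsertionPoint(new_module.body):
        pass
    print(new_module)
```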
Reviewed By: stellaraccident, lattner
Differential Revision: https://reviews.llvm.org/D83310
Summary:
* Native '_mlir' extension module.
* Python mlir/__init__.py trampoline module.
* Lit test that checks a message (a minimal sketch follows this list).
* Uses some CMake configuration that has worked for me in the past but likely needs further elaboration.
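Such a test can be as small as the following sketch (the message is illustrative; `%PYTHON` is the interpreter substitution from the lit config):
```
# RUN: %PYTHON %s | FileCheck %s
import mlir
print("mlir python bindings loaded")
# CHECK: mlir python bindings loaded
```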
Subscribers: mgorny, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D83279
This patch adds the `default_triple` feature to the MLIR test suite.
This feature was added to LLVM in d178f4fc8 in order to make it possible
to run the LLVM tests without the host target configured in.
With this change, `ninja check-mlir` passes without the host
target, i.e. with this config:
cmake ../llvm -DLLVM_TARGETS_TO_BUILD="" -DLLVM_DEFAULT_TARGET_TRIPLE="" -DLLVM_ENABLE_PROJECTS=mlir -GNinja
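For reference, the lit side of the feature follows the LLVM change referenced above; the exact placement in MLIR's lit.cfg.py may differ slightly:
```
# Only tests marked `REQUIRES: default_triple` need a configured host backend.
if config.target_triple:
    config.available_features.add("default_triple")
```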
Differential Revision: https://reviews.llvm.org/D82142
This option avoids accidentally reusing variables across -LABEL matches;
reuse can be explicitly opted into by prefixing the variable name with $.
Differential Revision: https://reviews.llvm.org/D81531
Summary:
`mlir-rocm-runner` is introduced in this commit to execute GPU modules on the ROCm
platform. A small wrapper to encapsulate ROCm's HIP runtime API is also part of
the commit.
Due to the behavior of ROCm, raw pointers inside memrefs passed to `gpu.launch`
must be modified on the host side to properly capture the pointer values
addressable on the GPU.
LLVM MC is used to assemble the AMD GCN ISA coming out of
`ConvertGPUKernelToBlobPass` into binary form, and LLD is used to produce a shared
ELF object which can be loaded by the ROCm HIP runtime.
gfx900 is the default target used right now, although it can be altered via
an option in `mlir-rocm-runner`. Future revisions may consider using the ROCm Agent
Enumerator to detect the right target on the system.
Note that AMDGPU Code Object V2 is used in this revision. Future enhancements may
upgrade to AMDGPU Code Object V3.
Bitcode libraries in ROCm-Device-Libs, which implement the math routines exposed in
the `rocdl` dialect, are not yet linked; this is left as a TODO in the logic.
Reviewers: herhut
Subscribers: mgorny, tpr, dexonsmith, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits
Tags: #mlir, #llvm
Differential Revision: https://reviews.llvm.org/D80676
Summary:
This revision adds a tool that generates the ODS and C++ implementation for "named" Linalg ops according to the [RFC discussion](https://llvm.discourse.group/t/rfc-declarative-named-ops-in-the-linalg-dialect/745).
While the mechanisms and language aspects are by no means set in stone, this revision allows connecting the pieces end-to-end from a mathematical-like specification.
Some implementation details and short-term decisions, taken for the purpose of bootstrapping and not set in stone, include:
1. using a "[Tensor Comprehension](https://arxiv.org/abs/1802.04730)-inspired" syntax
2. implicit and eager discovery of dims and symbols when parsing
3. using EDSC ops to specify the computation (e.g. std_addf, std_mulf, ...)
A followup revision will connect this tool to tablegen mechanisms and allow the emission of named Linalg ops that automatically lower to various loop forms and run end to end.
For the following "Tensor Comprehension-inspired" string:
```
def batch_matmul(A: f32(Batch, M, K), B: f32(K, N)) -> (C: f32(Batch, M, N)) {
  C(b, m, n) = std_addf<k>(std_mulf(A(b, m, k), B(k, n)));
}
```
With -gen-ods-decl=1, this emits (modulo formatting):
```
def batch_matmulOp : LinalgNamedStructured_Op<"batch_matmul", [
    NInputs<2>,
    NOutputs<1>,
    NamedStructuredOpTraits]> {
  let arguments = (ins Variadic<LinalgOperand>:$views);
  let results = (outs Variadic<AnyRankedTensor>:$output_tensors);
  let extraClassDeclaration = [{
    llvm::Optional<SmallVector<StringRef, 8>> referenceIterators();
    llvm::Optional<SmallVector<AffineMap, 8>> referenceIndexingMaps();
    void regionBuilder(ArrayRef<BlockArgument> args);
  }];
  let hasFolder = 1;
}
```
With -gen-ods-impl, this emits (modulo formatting):
```
llvm::Optional<SmallVector<StringRef, 8>> batch_matmul::referenceIterators() {
  return SmallVector<StringRef, 8>{ getParallelIteratorTypeName(),
                                    getParallelIteratorTypeName(),
                                    getParallelIteratorTypeName(),
                                    getReductionIteratorTypeName() };
}
llvm::Optional<SmallVector<AffineMap, 8>> batch_matmul::referenceIndexingMaps()
{
  MLIRContext *context = getContext();
  AffineExpr d0, d1, d2, d3;
  bindDims(context, d0, d1, d2, d3);
  return SmallVector<AffineMap, 8>{
      AffineMap::get(4, 0, {d0, d1, d3}),
      AffineMap::get(4, 0, {d3, d2}),
      AffineMap::get(4, 0, {d0, d1, d2}) };
}
void batch_matmul::regionBuilder(ArrayRef<BlockArgument> args) {
  using namespace edsc;
  using namespace intrinsics;
  ValueHandle _0(args[0]), _1(args[1]), _2(args[2]);
  ValueHandle _4 = std_mulf(_0, _1);
  ValueHandle _5 = std_addf(_2, _4);
  (linalg_yield(ValueRange{ _5 }));
}
```
Differential Revision: https://reviews.llvm.org/D77067
Add an initial version of the mlir-vulkan-runner execution driver:
a command-line utility that executes an MLIR file on Vulkan by
translating the MLIR GPU module to SPIR-V and the host part to LLVM IR before
JIT-compiling and executing the latter.
Differential Revision: https://reviews.llvm.org/D72696
Move cuda-runtime-wrappers.so into a subdirectory to match libmlir_runner_utils.so.
Provide the parent directory when running tests and load the .so from the subdirectory.
PiperOrigin-RevId: 282410749
This adds an importer from LLVM IR or bitcode to the LLVM dialect. The importer is registered with mlir-translate.
Known issues exposed by this patch but not yet fixed:
* Globals' initializers are attributes, which makes it impossible to represent a ConstantExpr. This will be fixed in a followup.
* icmp returns i32 rather than i1.
* select and a couple of other instructions aren't implemented.
* llvm.cond_br takes its successors in a weird order.
The testing here is known to be non-exhaustive.
I'd appreciate feedback on where this functionality should live. It looks like the translator *from MLIR to LLVM* lives in Target/, but the SPIR-V deserializer lives in Dialect/ which is why I've put this here too.
PiperOrigin-RevId: 278711683
This tool allows executing MLIR IR snippets written in the GPU dialect
on a CUDA-capable GPU. For this to work, a working CUDA install is required
and the build has to be configured with MLIR_CUDA_RUNNER_ENABLED set to 1.
PiperOrigin-RevId: 256551415
The EDSC subsystem contains an API test, which is a .cpp file calling the API in
question and producing IR. This IR is further checked using FileCheck and
should plug into lit. Provide a CMakeLists.txt to build the test and modify
the lit configuration to process the source file.
--
PiperOrigin-RevId: 248794443
This CL performs post-commit cleanups.
It adds the ability to specify which shared libraries to load dynamically in ExecutionEngine. The linalg integration test is updated to use a shared library.
Additional minor cleanups related to LLVM lowering of Linalg are also included.
--
PiperOrigin-RevId: 248346589
Mainly, a missing dependency caused the tests to pass if one had already built
the repo, but not from a clean (or incremental) build.
--
PiperOrigin-RevId: 241852313