Commit Graph

37 Commits

Author SHA1 Message Date
Valentin Clement 56aba9699d [MLIR] Fix wrong header for mlir-cuda-runner
The header was wrong, probably copied over from mlir-cpu-runner; this just updates it.

Differential Revision: https://reviews.llvm.org/D74497
2020-02-12 22:35:46 +01:00
Stephan Herhut 864110b5b4 [MLIR][CUDA] Fix build file for mlir-cuda-runner
Summary:
This was broken recently when moving from dialect registration via
static initializers to explicit initialization.

Differential Revision: https://reviews.llvm.org/D74480
2020-02-12 15:10:51 +01:00
Alex Zinenko 5a1778057f [mlir] use unpacked memref descriptors at function boundaries
The existing (default) calling convention for memrefs in standard-to-LLVM
conversion was motivated by interfacing with LLVM IR produced from C sources.
In particular, it passes a pointer to the memref descriptor structure when
calling the function. Therefore, the descriptor is allocated on stack before
the call. This convention leads to several problems. PR44644 indicates a
problem with stack exhaustion when calling functions with memref-typed
arguments in a loop. Allocating outside of the loop may lead to concurrent
access problems in case the loop is parallel. When targeting GPUs, the contents
of the stack-allocated memory for the descriptor (passed by pointer) need to
be explicitly copied to the device. Using an aggregate type makes it impossible
to attach pointer-specific argument attributes pertaining to alignment and
aliasing in the LLVM dialect.

Change the default calling convention for memrefs in standard-to-LLVM
conversion to transform a memref into a list of arguments, each of primitive
type, corresponding to the fields of the memref descriptor. This avoids stack
allocation for ranked memrefs (and thus stack exhaustion and potential
concurrent access problems) and simplifies device function invocation on GPUs.

Provide an option in the standard-to-LLVM conversion to generate auxiliary
wrapper functions with the same interface as the previous calling convention,
compatible with LLVM IR produced from C sources. These auxiliary functions
pack the individual values into a descriptor structure or unpack it. They also
handle descriptor stack allocation if necessary, serving as an allocation
scope: the memory reserved by `alloca` will be freed on exiting the auxiliary
function.

The effect of this change on LLVM IR generated purely from MLIR is minimal.
When interfacing MLIR-generated LLVM IR with C-generated LLVM IR, the
integration only needs to enable the auxiliary functions and call the wrapper
function instead of the original function.

This also opens the door to forwarding aliasing and alignment information from
memrefs to LLVM IR pointers in the standard-to-LLVM conversion.
2020-02-10 15:03:43 +01:00
River Riddle c33d6970e0 [mlir] Add support for basic location translation to LLVM.
Summary:
This revision adds basic support for emitting line table information when exporting to LLVM IR. We don't yet have a story for supporting all of the LLVM debug metadata, so this revision stubs out some features (like subprograms) to enable emitting line tables.

Differential Revision: https://reviews.llvm.org/D73934
2020-02-05 17:41:51 -08:00
Kern Handa b8004b7308 [mlir] Mark the MLIR tools for installation in CMake
This binplaces `mlir-translate`, `mlir-cuda-runner`, and `mlir-cpu-runner` when building the CMake install target.

Differential Revision: https://reviews.llvm.org/D73986
2020-02-05 03:42:57 +00:00
Mehdi Amini 308571074c Mass update the MLIR license header to mention "Part of the LLVM project"
This is an artifact from merging MLIR into LLVM; the file headers are
now aligned with the rest of the project.
2020-01-26 03:58:30 +00:00
Tres Popp 9a52ea5cf9 Create a gpu.module operation for the GPU Dialect.
Summary:
Previously, code constantly checked for an attribute on a module to identify
the distinct operation; this change represents it with a dedicated op
instead, which can also be used to provide better filtering.

Reverts "Revert "[mlir] Create a gpu.module operation for the GPU Dialect.""

This reverts commit ac446302ca4145cdc89f377c0c364c29ee303be5 after
fixing internal Google issues.

This additionally updates ROCDL lowering to use the new gpu.module.

Reviewers: herhut, mravishankar, antiagainst, nicolasvasilache

Subscribers: jholewinski, mgorny, mehdi_amini, jpienaar, burmako, shauheen, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits, mravishankar, rriddle, antiagainst, bkramer

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72921
2020-01-21 14:05:03 +01:00
Benjamin Kramer 0133cc60e4 Revert "[mlir] Create a gpu.module operation for the GPU Dialect."
This reverts commit 4624a1e8ac, which was causing
problems downstream.
2020-01-15 17:52:17 +01:00
Tres Popp 4624a1e8ac [mlir] Create a gpu.module operation for the GPU Dialect.
Summary:
Previously, code constantly checked for an attribute on a module to identify
the distinct operation; this change represents it with a dedicated op
instead, which can also be used to provide better filtering.

Reviewers: herhut, mravishankar, antiagainst, rriddle

Reviewed By: herhut, antiagainst, rriddle

Subscribers: liufengdb, aartbik, jholewinski, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72336
2020-01-14 12:05:47 +01:00
Mehdi Amini 56222a0694 Adjust License.txt file to use the LLVM license
PiperOrigin-RevId: 286906740
2019-12-23 15:33:37 -08:00
Christian Sigg d7c17195a4 Change CUDA tests to use print_memref.
Swap dimensions in all-reduce-op test.

PiperOrigin-RevId: 281791744
2019-11-21 11:26:36 -08:00
Christian Sigg f868adafee Make type and rank explicit in mcuMemHostRegister function.
Fix registered size of indirect MemRefType kernel arguments.

PiperOrigin-RevId: 281362940
2019-11-19 13:13:02 -08:00
Nicolas Vasilache f51a155337 Add support for alignment attribute in std.alloc.
This CL adds an extra pointer to the memref descriptor to allow specifying alignment.

In a previous implementation, we used 2 types: `linalg.buffer` and `view` where the buffer type was the unit of allocation/deallocation/alignment and `view` was the unit of indexing.

After multiple discussions it was decided to use a single type, which conflates both, so the memref descriptor now needs to carry both pointers.

This is consistent with the [RFC-Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ).
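
A minimal sketch of the resulting descriptor shape, assuming illustrative field names rather than the exact LLVM struct: the first pointer is the unit of deallocation, while the second, aligned pointer is the unit of indexing.

```
// Hedged sketch of the descriptor carrying both pointers; field names are
// illustrative, not the exact struct produced by the lowering.
#include <cstddef>
#include <cstdint>

template <typename Elem, size_t Rank>
struct AlignedMemRefDescriptor {
  Elem *allocated;        // as returned by the allocator; used for free
  Elem *aligned;          // allocated, rounded up to the requested alignment
  int64_t offset;         // offset into the aligned buffer, in elements
  int64_t sizes[Rank];
  int64_t strides[Rank];
};
```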

PiperOrigin-RevId: 279959463
2019-11-12 07:06:54 -08:00
Alex Zinenko 11d12670da GPUToCUDA: attach CUBIN to the nested module rather than to the function
Originally, we were attaching attributes containing CUBIN blobs to the kernel
function called by `gpu.launch_func`. This kernel is now contained in a nested
module that is used as a compilation unit. Attach compiled CUBIN blobs to the
module rather than to the function since we were compiling the module. This
also avoids duplication of the attribute on multiple kernels within the same
module.

PiperOrigin-RevId: 273497303
2019-10-08 05:11:26 -07:00
Alex Zinenko 16af5924cb Fuse GenerateCubinAccessors pass into LaunchFunctToCuda
Now that the accessor function is a trivial getter of the global variable, it
makes less sense to have the getter generation as a separate pass. Move the
getter generation into the lowering of `gpu.launch_func` to CUDA calls. This
change is mostly code motion, but the process can be simplified further by
generating the addressof in place instead of using a call. This will be done
in a follow-up.

PiperOrigin-RevId: 273492517
2019-10-08 04:35:33 -07:00
Alex Zinenko e0d78eac23 NFC: rename Conversion/ControlFlowToCFG to Conversion/LoopToStandard
This makes the name of the conversion pass more consistent with the naming
scheme, since it actually converts from the Loop dialect to the Standard
dialect rather than working with arbitrary control flow operations.

PiperOrigin-RevId: 272612112
2019-10-03 01:35:03 -07:00
Nicolas Vasilache 98594d4dd5 Replace spurious `long` stride type by int64_t - NFC
PiperOrigin-RevId: 272425434
2019-10-02 06:37:40 -07:00
Nicolas Vasilache 923b33ea16 Normalize MemRefType lowering to LLVM as strided MemRef descriptor
This CL finishes the implementation of the lowering part of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).

Strided memrefs correspond conceptually to the following templated C++ struct:
```
template <typename Elem, size_t Rank>
struct {
  Elem *ptr;
  int64_t offset;
  int64_t sizes[Rank];
  int64_t strides[Rank];
};
```
The linearization procedure for address calculation for strided memrefs is the same as for linalg views:
`base_offset + SUM_i index_i * stride_i`.
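
For concreteness, a hedged C++ sketch of this address calculation (an illustrative helper built on the struct above, not the generated lowering code):

```
// Computes the address of descriptor[indices...] following
// base_offset + SUM_i index_i * stride_i; illustrative only.
#include <cstddef>
#include <cstdint>

template <typename Elem, size_t Rank>
struct StridedMemRef {
  Elem *ptr;
  int64_t offset;
  int64_t sizes[Rank];
  int64_t strides[Rank];
};

template <typename Elem, size_t Rank>
Elem *elementAddress(const StridedMemRef<Elem, Rank> &m,
                     const int64_t (&indices)[Rank]) {
  int64_t linear = m.offset;
  for (size_t i = 0; i < Rank; ++i)
    linear += indices[i] * m.strides[i];
  return m.ptr + linear;
}
```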

The following CL will unify Linalg and Standard by removing !linalg.view in favor of strided memrefs.

PiperOrigin-RevId: 272033399
2019-09-30 11:58:54 -07:00
Nicolas Vasilache ddf737c5da Promote MemRefDescriptor to a pointer to struct when crossing function boundaries in LLVMLowering.
The strided MemRef RFC discusses a normalized descriptor and interaction with library calls (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Lowering of nested LLVM structs as value types does not play nicely with externally compiled C/C++ functions due to ABI issues.
Solving the ABI problem in general is very complex and most likely involves
taking a dependence on clang, which we do not want at the moment.

A simple workaround is to pass pointers to memref descriptors at function boundaries, which this CL implements.
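
As an illustration of the workaround, an externally compiled function then receives the descriptor by pointer; the struct layout mirrors the strided descriptor and the names below are hypothetical.

```
// Hypothetical external C++ function under the pointer-to-descriptor
// convention; names are invented for exposition.
#include <cstdint>

struct MemRef2D {
  float *ptr;
  int64_t offset;
  int64_t sizes[2];
  int64_t strides[2];
};

// Passing the descriptor by pointer sidesteps the ABI issues of passing
// nested structs by value across the C/C++ boundary.
extern "C" void external_fill(MemRef2D *memref, float value);
```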

PiperOrigin-RevId: 271591708
2019-09-27 09:57:36 -07:00
Christian Sigg b8676da1fc Outline GPU kernel function into a nested module.
Roll forward of commit 5684a12.

When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.

PiperOrigin-RevId: 270639748
2019-09-23 03:17:01 -07:00
George Karpenkov 2df646bef6 Automated rollback of commit 5684a12434
PiperOrigin-RevId: 270126672
2019-09-19 14:34:30 -07:00
MLIR Team 5684a12434 Outline GPU kernel function into a nested module.
When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.

PiperOrigin-RevId: 269987720
2019-09-19 01:51:28 -07:00
River Riddle f1b100c77b NFC: Finish replacing FunctionPassBase/ModulePassBase with OpPassBase.
These directives were temporary during the generalization of FunctionPass/ModulePass to OpPass.

PiperOrigin-RevId: 268970259
2019-09-13 13:34:27 -07:00
River Riddle 5c036e682d Refactor the pass manager to support operations other than FuncOp/ModuleOp.
This change generalizes the structure of the pass manager to allow arbitrary nesting of pass managers for other operations, at any level. The only user-visible change to existing code is the fact that a PassManager must now provide an MLIRContext on construction. A new class `OpPassManager` has been added that represents a pass manager on a specific operation type. `PassManager` will remain the top-level entry point into the pipeline, with OpPassManagers being nested underneath. OpPassManagers will still be implicitly nested if the operation type on the pass differs from that of the pass manager. To explicitly build a pipeline, the 'nest' methods on OpPassManager may be used:

```
// Pass manager for the top-level module.
PassManager pm(ctx);

// Nest a pipeline operating on FuncOp.
OpPassManager &fpm = pm.nest<FuncOp>();
fpm.addPass(...);

// Nest a pipeline under the FuncOp pipeline that operates on spirv::ModuleOp.
OpPassManager &spvModulePM = pm.nest<spirv::ModuleOp>();

// Nest a pipeline on FuncOps inside of the spirv::ModuleOp.
OpPassManager &spvFuncPM = spvModulePM.nest<FuncOp>();
```

To help accomplish this, a new general OperationPass is added that operates on opaque Operations. This pass can be inserted into a pass manager of any type to operate on any operation opaquely. An example of such an opaque OperationPass is a VerifierPass, which simply runs the verifier opaquely on the current operation.

```
/// Pass to verify an operation and signal failure if necessary.
class VerifierPass : public OperationPass<VerifierPass> {
  void runOnOperation() override {
    Operation *op = getOperation();
    if (failed(verify(op)))
      signalPassFailure();
    markAllAnalysesPreserved();
  }
};
```

PiperOrigin-RevId: 266840344
2019-09-02 19:25:26 -07:00
Stephan Herhut 545c3e489f Port mlir-cuda-runner to use dialect conversion framework.
Instead of lowering the program in two steps (Standard->LLVM followed
by GPU->NVVM), which produced invalid IR in between, the runner now uses
one pattern-based rewrite step to go directly from Standard+GPU to
LLVM+NVVM.

PiperOrigin-RevId: 265861934
2019-08-28 01:50:57 -07:00
River Riddle ba0fa92524 NFC: Move LLVMIR, SDBM, and StandardOps to the Dialect/ directory.
PiperOrigin-RevId: 264193915
2019-08-19 11:01:25 -07:00
Jacques Pienaar 79f53b0cf1 Change from llvm::make_unique to std::make_unique
Switch to the C++14 standard method, as llvm::make_unique has been removed
(https://reviews.llvm.org/D66259). Also mark some targets as c++14 to ease
upcoming integrates.

PiperOrigin-RevId: 263953918
2019-08-17 11:06:03 -07:00
River Riddle a0df3ebd15 NFC: Implement OwningRewritePatternList as a class instead of a using directive.
This allows for proper forward declaration, as opposed to leaking the internal implementation via a using directive. This also allows for all pattern building to go through 'insert' methods on the OwningRewritePatternList, replacing uses of 'push_back' and 'RewriteListBuilder'.

PiperOrigin-RevId: 261816316
2019-08-05 18:38:22 -07:00
Alex Zinenko 60965b4612 Move GPU dialect to {lib,include/mlir}/Dialect
Per tacit agreement, individual dialects should now live in lib/Dialect/Name
with headers in include/mlir/Dialect/Name and tests in test/Dialect/Name.

PiperOrigin-RevId: 259896851
2019-07-25 00:41:17 -07:00
River Riddle 8b447b6cad NFC: Expose a ConversionPatternRewriter for use with ConversionPatterns.
This specific PatternRewriter will allow for exposing hooks in the future that are only useful for the conversion framework, e.g. type conversions.

PiperOrigin-RevId: 258818122
2019-07-19 11:40:00 -07:00
Stephan Herhut 6760ea5338 Move shared cpu runner library to Support/JitRunner.
PiperOrigin-RevId: 258347825
2019-07-16 13:45:16 -07:00
Nicolas Vasilache cab671d166 Lower affine control flow to std control flow to LLVM dialect
This CL splits the lowering of affine to LLVM into 2 parts:
1. affine -> std
2. std -> LLVM

The conversions mostly consist of splitting the affine and non-affine
concerns out of the existing conversions. Short-circuiting of affine `if`
conditions was never tested or exercised and is removed in the process; it can
be reintroduced later if needed.

LoopParametricTiling.cpp is updated to reflect the newly added ForOp::build.

PiperOrigin-RevId: 257794436
2019-07-12 08:44:28 -07:00
Alex Zinenko 7ef559e0f2 mcuMemHostRegister: take into account sizeof(float)
cuMemHostRegister expects the size of the registered memory in bytes, whereas
the memref descriptor in memref_t contains the number of elements. Get the
actual size in bytes instead.
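
A minimal sketch of the corrected computation, assuming a float element type; the type and function names are illustrative, not the actual runner code.

```
// Register elements * sizeof(float) bytes, not the raw element count.
#include <cstdint>

struct memref_1d_t {
  float *data;
  int64_t num_elements;  // the descriptor stores element counts, not bytes
};

int64_t registeredSizeInBytes(const memref_1d_t &m) {
  return m.num_elements * static_cast<int64_t>(sizeof(float));
}
```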

PiperOrigin-RevId: 257589116
2019-07-12 08:43:15 -07:00
River Riddle fec20e590f NFC: Rename Module to ModuleOp.
Module is a legacy name that only exists as a typedef of ModuleOp.

PiperOrigin-RevId: 257427248
2019-07-10 10:11:21 -07:00
River Riddle 8c44367891 NFC: Rename Function to FuncOp.
PiperOrigin-RevId: 257293379
2019-07-10 10:10:53 -07:00
Alex Zinenko 471da08e4e Simplify launch_func rewrite pattern in mlir-cuda-runner
Use the bulk setOperands on the cloned operation; there is no need to cast or
iterate over the operand list.

PiperOrigin-RevId: 256643496
2019-07-05 02:51:53 -07:00
Stephan Herhut e8b21a75f8 Add an mlir-cuda-runner tool.
This tool makes it possible to execute MLIR IR snippets written in the GPU
dialect on a CUDA-capable GPU. For this to work, a working CUDA installation
is required and the build has to be configured with MLIR_CUDA_RUNNER_ENABLED
set to 1.

PiperOrigin-RevId: 256551415
2019-07-04 07:53:54 -07:00