Commit Graph

38 Commits

Author SHA1 Message Date
River Riddle 9a277af2d4 [mlir][Pass] Add support for generating pass utilities via tablegen
This revision adds support for generating utilities for passes such as options/statistics/etc. that can be inferred from the tablegen definition. This removes additional boilerplate from the pass, and also makes it easier to remove the reliance on the pass registry to provide certain things (e.g. the pass argument).

Differential Revision: https://reviews.llvm.org/D76659
2020-04-01 02:10:46 -07:00
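A hedged sketch of the kind of hand-written boilerplate this generation replaces: pass options and statistics declared as C++ members. The pass, option, and statistic names here are hypothetical, and the scaffolding (PassWrapper) follows a recent MLIR tree rather than the exact API at the time of this commit.

    #include "mlir/IR/BuiltinOps.h"
    #include "mlir/Pass/Pass.h"

    namespace {
    // Hypothetical pass whose option/statistic members could instead be
    // inferred from a tablegen pass definition after this change.
    struct ExamplePass
        : public mlir::PassWrapper<ExamplePass,
                                   mlir::OperationPass<mlir::ModuleOp>> {
      Option<bool> enableFeature{*this, "enable-feature",
                                 llvm::cl::desc("Enable the hypothetical feature"),
                                 llvm::cl::init(false)};
      Statistic numVisited{this, "num-visited", "Number of operations visited"};

      void runOnOperation() override {
        getOperation().walk([&](mlir::Operation *) { ++numVisited; });
      }
    };
    } // namespace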
River Riddle 3dddd8969f [mlir][Pass] Move the registration of conversion passes to tablegen
This removes the need to statically register conversion passes, and also puts all of the conversions within one centralized file.

Differential Revision: https://reviews.llvm.org/D76658
2020-04-01 02:10:46 -07:00
Alex Zinenko 5a1778057f [mlir] use unpacked memref descriptors at function boundaries
The existing (default) calling convention for memrefs in standard-to-LLVM
conversion was motivated by interfacing with LLVM IR produced from C sources.
In particular, it passes a pointer to the memref descriptor structure when
calling the function. Therefore, the descriptor is allocated on stack before
the call. This convention leads to several problems. PR44644 indicates a
problem with stack exhaustion when calling functions with memref-typed
arguments in a loop. Allocating outside of the loop may lead to concurrent
access problems in case the loop is parallel. When targeting GPUs, the contents
of the stack-allocated memory for the descriptor (passed by pointer) needs to
be explicitly copied to the device. Using an aggregate type makes it impossible
to attach pointer-specific argument attributes pertaining to alignment and
aliasing in the LLVM dialect.

Change the default calling convention for memrefs in standard-to-LLVM
conversion to transform a memref into a list of arguments, each of primitive
type, that together make up the memref descriptor. This avoids stack allocation
for ranked memrefs (and thus stack exhaustion and potential concurrent access
problems) and simplifies the device function invocation on GPUs.

Provide an option in the standard-to-LLVM conversion to generate auxiliary
wrapper functions with the same interface as the previous calling convention,
compatible with LLVM IR produced from C sources. These auxiliary functions
pack the individual values into a descriptor structure or unpack it. They also
handle descriptor stack allocation if necessary, serving as an allocation
scope: the memory reserved by `alloca` will be freed on exiting the auxiliary
function.

The effect of this change on LLVM IR generated purely from MLIR is minimal. When
interfacing MLIR-generated LLVM IR with C-generated LLVM IR, the integration
only needs to enable generation of the auxiliary functions and call the wrapper
function instead of the original function.

This also opens the door to forwarding aliasing and alignment information from
memrefs to LLVM IR pointers in the standard-to-LLVM conversion.
2020-02-10 15:03:43 +01:00
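For reference, a C-level sketch of roughly what the two conventions amount to for a rank-1 memref<?xf32> argument (struct and function names hypothetical); the descriptor fields follow the allocated/aligned pointer, offset, sizes, strides layout described above:

    #include <cstdint>

    // Old (packed) convention: the caller stack-allocates the descriptor and
    // passes a pointer to it.
    struct MemRefDescriptor1D {
      float *allocated;   // pointer returned by the allocation
      float *aligned;     // aligned pointer actually used for accesses
      int64_t offset;     // offset into the aligned pointer, in elements
      int64_t sizes[1];   // size per dimension
      int64_t strides[1]; // stride per dimension, in elements
    };
    void kernel_packed(MemRefDescriptor1D *desc);

    // New (unpacked) convention: the same fields are passed as a flat list of
    // primitive arguments, so no stack allocation is needed at the call site
    // and alignment/aliasing attributes can be attached to each pointer.
    void kernel_unpacked(float *allocated, float *aligned, int64_t offset,
                         int64_t size0, int64_t stride0);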
Benjamin Kramer adcd026838 Make llvm::StringRef to std::string conversions explicit.
This is how it should've been and brings it more in line with
std::string_view. There should be no functional change here.

This is mostly mechanical from a custom clang-tidy check, with a lot of
manual fixups. It uncovers a lot of minor inefficiencies.

This doesn't actually modify StringRef yet, I'll do that in a follow-up.
2020-01-28 23:25:25 +01:00
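A minimal sketch of the pattern this change enforces (function name hypothetical):

    #include "llvm/ADT/StringRef.h"
    #include <string>

    std::string copyName(llvm::StringRef name) {
      // Before: `std::string s = name;` allocated a copy implicitly.
      // After: the copy (and its cost) has to be spelled out.
      return name.str(); // or: std::string(name)
    }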
Mehdi Amini 308571074c Mass update the MLIR license header to mention "Part of the LLVM project"
This is an artifact from merging MLIR into LLVM, the file headers are
now aligned with the rest of the project.
2020-01-26 03:58:30 +00:00
Tres Popp 9a52ea5cf9 Create a gpu.module operation for the GPU Dialect.
Summary:
This replaces code that constantly checked for an attribute on a module by
representing the construct as a distinct operation. This op can then be used
to provide better filtering.

Reverts "Revert "[mlir] Create a gpu.module operation for the GPU Dialect.""

This reverts commit ac446302ca4145cdc89f377c0c364c29ee303be5 after
fixing internal Google issues.

This additionally updates ROCDL lowering to use the new gpu.module.

Reviewers: herhut, mravishankar, antiagainst, nicolasvasilache

Subscribers: jholewinski, mgorny, mehdi_amini, jpienaar, burmako, shauheen, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits, mravishankar, rriddle, antiagainst, bkramer

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72921
2020-01-21 14:05:03 +01:00
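A hedged sketch of the filtering this enables, using header paths from a recent MLIR tree (the pre-change marker-attribute check is paraphrased from the description above):

    #include "mlir/Dialect/GPU/IR/GPUDialect.h"
    #include "mlir/IR/BuiltinOps.h"

    // Visit every GPU kernel module nested under `top`. Before this change,
    // this meant walking builtin modules and testing a marker attribute; now
    // the distinct op type does the filtering.
    void forEachKernelModule(mlir::ModuleOp top) {
      top.walk([](mlir::gpu::GPUModuleOp kernelModule) {
        (void)kernelModule; // process the kernel module here
      });
    }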
Benjamin Kramer 0133cc60e4 Revert "[mlir] Create a gpu.module operation for the GPU Dialect."
This reverts commit 4624a1e8ac, which was causing
problems downstream.
2020-01-15 17:52:17 +01:00
Tres Popp 4624a1e8ac [mlir] Create a gpu.module operation for the GPU Dialect.
Summary:
This replaces code that constantly checked for an attribute on a module by
representing the construct as a distinct operation. This op can then be used
to provide better filtering.

Reviewers: herhut, mravishankar, antiagainst, rriddle

Reviewed By: herhut, antiagainst, rriddle

Subscribers: liufengdb, aartbik, jholewinski, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72336
2020-01-14 12:05:47 +01:00
River Riddle 2bdf33cc4c [mlir] NFC: Remove Value::operator* and Value::operator-> now that Value is properly value-typed.
Summary: These were temporary methods used to simplify the transition.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D72548
2020-01-11 08:54:39 -08:00
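A minimal illustration of the change (function name hypothetical):

    #include "mlir/IR/Value.h"

    // Before: pointer-like syntax such as value->getType() or
    // (*value).getType(). After: members are accessed directly on the value.
    mlir::Type typeOf(mlir::Value value) { return value.getType(); }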
River Riddle e62a69561f NFC: Replace ValuePtr with Value and remove it now that Value is value-typed.
ValuePtr was a temporary typedef during the transition to a value-typed Value.

PiperOrigin-RevId: 286945714
2019-12-23 16:36:53 -08:00
Mehdi Amini 56222a0694 Adjust License.txt file to use the LLVM license
PiperOrigin-RevId: 286906740
2019-12-23 15:33:37 -08:00
River Riddle 35807bc4c5 NFC: Introduce new ValuePtr/ValueRef typedefs to simplify the transition to Value being value-typed.
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ

This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.

PiperOrigin-RevId: 286844725
2019-12-22 22:00:23 -08:00
Alex Zinenko d5e627f84b Introduce Linkage attribute to the LLVM dialect
LLVM IR supports linkage on global objects such as global variables and
functions. Introduce the Linkage attribute into the LLVM dialect, backed by
integer storage. Use this attribute on LLVM::GlobalOp and make it mandatory.
Implement parsing/printing of the attribute and conversion to LLVM IR.

See tensorflow/mlir#277.

PiperOrigin-RevId: 283309328
2019-12-02 03:28:10 -08:00
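For context, a minimal sketch of the LLVM IR notion being modeled here, written against the LLVM C++ API (not the MLIR conversion code); names are illustrative:

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/GlobalVariable.h"
    #include "llvm/IR/Module.h"
    #include "llvm/IR/Type.h"

    // Create an internal-linkage constant global; this linkage kind is now
    // also expressible on LLVM::GlobalOp and translated during export.
    llvm::GlobalVariable *makeInternalGlobal(llvm::Module &m) {
      llvm::Type *ty = llvm::Type::getInt32Ty(m.getContext());
      return new llvm::GlobalVariable(
          m, ty, /*isConstant=*/true, llvm::GlobalValue::InternalLinkage,
          llvm::ConstantInt::get(ty, 42), "example_global");
    }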
Christian Sigg f868adafee Make type and rank explicit in mcuMemHostRegister function.
Fix registered size of indirect MemRefType kernel arguments.

PiperOrigin-RevId: 281362940
2019-11-19 13:13:02 -08:00
Alex Zinenko 5e7959a353 Use llvm.func to define functions with wrapped LLVM IR function type
This function-like operation allows one to define functions that have wrapped
LLVM IR function type, in particular variadic functions. The operation was
added in parallel to the existing lowering flow; this commit only switches the
flow to use it.

Using a custom function type makes the LLVM IR dialect type system more
consistent and avoids complex conversion rules for functions that previously
had to use the built-in function type instead of a wrapped LLVM IR dialect type
and perform conversions during the analysis.

PiperOrigin-RevId: 273910855
2019-10-10 01:34:06 -07:00
Alex Zinenko 11d12670da GPUToCUDA: attach CUBIN to the nested module rather than to the function
Originally, we were attaching attributes containing CUBIN blobs to the kernel
function called by `gpu.launch_func`. This kernel is now contained in a nested
module that is used as a compilation unit. Attach compiled CUBIN blobs to the
module rather than to the function since we were compiling the module. This
also avoids duplication of the attribute on multiple kernels within the same
module.

PiperOrigin-RevId: 273497303
2019-10-08 05:11:26 -07:00
Alex Zinenko 52e082b6ed GPUToCUDA: emit addressof directly instead of wrapping it into a getter function
Originally, the CUBIN getter function was introduced as a mechanism to
circumvent the absence of globals in the LLVM dialect. It would allocate memory
and populate it with the CUBIN data. LLVM dialect now supports globals and they
are already used to store CUBIN data, making the getter function a trivial
address computation of a global. Emit the address computation directly at the
place of `gpu.launch_func` instead of putting it in a function and calling it.
This simplifies the conversion flow and prepares it for using the
DialectConversion infrastructure.

PiperOrigin-RevId: 273496221
2019-10-08 05:03:42 -07:00
Alex Zinenko 16af5924cb Fuse GenerateCubinAccessors pass into LaunchFuncToCuda
Now that the accessor function is a trivial getter of the global variable, it
makes less sense to have the getter generation as a separate pass. Move the
getter generation into the lowering of `gpu.launch_func` to CUDA calls. This
change is mostly code motion, but the process can be simplified further by
generating the addressof in place instead of using a call. This will be done
in a follow-up.

PiperOrigin-RevId: 273492517
2019-10-08 04:35:33 -07:00
Alex Zinenko 90d65d32d6 Use named modules for gpu.launch_func
The kernel function called by gpu.launch_func is now placed into an isolated
nested module during the outlining stage to simplify separate compilation.
Until recently, modules did not have names and could not be referenced. This
limitation was circumvented by introducing a stub kernel with the same name at
the same nesting level as the module containing the actual kernel. This
relation is only effective in one direction: from actual kernel function to its
launch_func "caller".

Leverage the recently introduced symbol name attributes on modules to refer to
a specific nested module from `gpu.launch_func`. This removes the implicit
connection between the identically named stub and kernel functions. It also
enables support for `gpu.launch_func`s to call different kernels located in the
same module.

PiperOrigin-RevId: 273491891
2019-10-08 04:30:32 -07:00
Nicolas Vasilache ddf737c5da Promote MemRefDescriptor to a pointer to struct when passing function boundaries in LLVMLowering.
The strided MemRef RFC discusses a normalized descriptor and interaction with library calls (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Lowering of nested LLVM structs as value types does not play nicely with externally compiled C/C++ functions due to ABI issues.
Solving the ABI problem generally is a very complex problem and most likely involves taking
a dependence on clang that we do not want atm.

A simple workaround is to pass pointers to memref descriptors at function boundaries, which this CL implements.

PiperOrigin-RevId: 271591708
2019-09-27 09:57:36 -07:00
MLIR Team 1c73be76d8 Unify error messages to start with lower-case.
PiperOrigin-RevId: 269803466
2019-09-18 07:45:17 -07:00
MLIR Team 1da0290c4b Error out when kernel function is not found while translating GPU calls.
PiperOrigin-RevId: 269327909
2019-09-16 07:19:36 -07:00
River Riddle f1b100c77b NFC: Finish replacing FunctionPassBase/ModulePassBase with OpPassBase.
These directives were temporary during the generalization of FunctionPass/ModulePass to OpPass.

PiperOrigin-RevId: 268970259
2019-09-13 13:34:27 -07:00
River Riddle 4bfae66d70 Refactor the 'walk' methods for operations.
This change refactors and cleans up the implementation of the operation walk methods. After this refactoring, the explicit template parameter for the operation type is no longer needed for op-specific walks. For example:

    op->walk<AffineForOp>([](AffineForOp op) { ... });

is now accomplished via:

    op->walk([](AffineForOp op) { ... });

PiperOrigin-RevId: 266209552
2019-08-29 13:04:50 -07:00
Alex Zinenko 006fcce44a ConvertLaunchFuncToCudaCalls: use LLVM dialect globals
This conversion has been using a stack-allocated array of i8 to store the
null-terminated kernel name in order to pass it to the CUDA wrappers expecting
a C string because the LLVM dialect was missing support for globals. Now that
the support is introduced, use a global instead.

Refactor global string construction from GenerateCubinAccessors into a common
utility function living in the LLVM namespace.

PiperOrigin-RevId: 264382489
2019-08-20 07:52:01 -07:00
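A conceptual sketch of the difference at the LLVM C++ API level (not the MLIR utility mentioned above, whose exact signature is not shown here): emit the kernel name once as a global constant string instead of filling a stack-allocated i8 array at every call site. Depending on the LLVM version this builder helper may be spelled CreateGlobalString.

    #include "llvm/ADT/StringRef.h"
    #include "llvm/IR/IRBuilder.h"

    // Returns a pointer to the first character of a private, null-terminated
    // global constant holding `kernelName`.
    llvm::Value *kernelNameCString(llvm::IRBuilder<> &builder,
                                   llvm::StringRef kernelName) {
      return builder.CreateGlobalStringPtr(kernelName);
    }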
River Riddle ba0fa92524 NFC: Move LLVMIR, SDBM, and StandardOps to the Dialect/ directory.
PiperOrigin-RevId: 264193915
2019-08-19 11:01:25 -07:00
Nicolas Vasilache 9bf69e6a2e Refactor linalg lowering to LLVM
The linalg.view type used to be lowered to a struct containing a data pointer, offset, sizes/strides information. This was problematic when passing to external functions due to ABI, struct padding and alignment issues.

The linalg.view type is now lowered to LLVMIR as a *pointer* to a struct containing the data pointer, offset and sizes/strides. This simplifies the interfacing with external library functions and makes it trivial to add new functions without creating a shim that would go from a value type struct to a pointer type.

The consequences are that:
1. lowering explicitly uses llvm.alloca in lieu of llvm.undef and performs the proper llvm.load/llvm.store where relevant.
2. the shim creation function `getLLVMLibraryCallDefinition` disappears.
3. views are passed by pointer, scalars are passed by value. In the future, other structs will be passed by pointer (on a per-need basis).

PiperOrigin-RevId: 264183671
2019-08-19 10:21:40 -07:00
Jacques Pienaar 79f53b0cf1 Change from llvm::make_unique to std::make_unique
Switch to C++14 standard method as llvm::make_unique has been removed (
https://reviews.llvm.org/D66259). Also mark some targets as c++14 to ease next
integrates.

PiperOrigin-RevId: 263953918
2019-08-17 11:06:03 -07:00
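The replacement is mechanical; a minimal illustration with a hypothetical type:

    #include <memory>

    struct Widget {
      explicit Widget(int n) : n(n) {}
      int n;
    };

    // Before: auto w = llvm::make_unique<Widget>(42);
    // After:  the C++14 standard facility is used directly.
    std::unique_ptr<Widget> makeWidget() { return std::make_unique<Widget>(42); }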
Mehdi Amini 926fb685de Express ownership transfer in PassManager API through std::unique_ptr (NFC)
Since raw pointers are always passed around for IR constructs without
implying any ownership transfer, it can be error-prone to have implicit
ownership transferred the same way.
For example this code can seem harmless:

  Pass *pass = ....
  pm.addPass(pass);
  pm.addPass(pass);
  pm.run(module);

PiperOrigin-RevId: 263053082
2019-08-12 19:13:12 -07:00
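A short sketch of the API after this change (pipeline and pass names hypothetical): ownership is handed to the pass manager exactly once via std::unique_ptr, so reusing a pass instance is visible at the call site instead of being an implicit double transfer.

    #include "mlir/Pass/Pass.h"
    #include "mlir/Pass/PassManager.h"
    #include <memory>
    #include <utility>

    void buildPipeline(mlir::PassManager &pm,
                       std::unique_ptr<mlir::Pass> myPass /*hypothetical*/) {
      pm.addPass(std::move(myPass)); // ownership transferred explicitly
    }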
Alex Zinenko 60965b4612 Move GPU dialect to {lib,include/mlir}/Dialect
Per tacit agreement, individual dialects should now live in lib/Dialect/Name
with headers in include/mlir/Dialect/Name and tests in test/Dialect/Name.

PiperOrigin-RevId: 259896851
2019-07-25 00:41:17 -07:00
River Riddle 9dbef0bf96 Rename FunctionAttr to SymbolRefAttr.
This allows the attribute to hold symbolic references to operations other than FuncOp. This also allows for removing the dependence on FuncOp from the base Builder.

PiperOrigin-RevId: 257650017
2019-07-12 08:43:42 -07:00
River Riddle 6da343ecfc NFC: Replace Module::getNamedFunction with lookupSymbol<FuncOp>.
This allows for removing the last direct reference to FuncOp from ModuleOp.

PiperOrigin-RevId: 257498296
2019-07-12 08:43:03 -07:00
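A minimal sketch of the replacement call, spelled with today's headers (FuncOp has since moved into the func dialect); the symbol name is hypothetical:

    #include "mlir/Dialect/Func/IR/FuncOps.h"
    #include "mlir/IR/BuiltinOps.h"

    // Before: module.getNamedFunction("kernel");
    mlir::func::FuncOp findKernel(mlir::ModuleOp module) {
      return module.lookupSymbol<mlir::func::FuncOp>("kernel");
    }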
River Riddle fec20e590f NFC: Rename Module to ModuleOp.
Module is a legacy name that only exists as a typedef of ModuleOp.

PiperOrigin-RevId: 257427248
2019-07-10 10:11:21 -07:00
River Riddle 8c44367891 NFC: Rename Function to FuncOp.
PiperOrigin-RevId: 257293379
2019-07-10 10:10:53 -07:00
River Riddle 626b8b6a5d NFC: Remove `Module::getFunctions` in favor of a general `getOps<T>`.
Modules can now contain more than just Functions; this just updates the iteration API to reflect that. The 'begin'/'end' methods have also been updated to iterate over opaque Operations.

PiperOrigin-RevId: 257099084
2019-07-08 18:28:17 -07:00
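The general iteration that replaces the dedicated accessor, again spelled with today's func-dialect FuncOp:

    #include "mlir/Dialect/Func/IR/FuncOps.h"
    #include "mlir/IR/BuiltinOps.h"

    // Before: for (Function &f : module.getFunctions()) ...
    unsigned countFunctions(mlir::ModuleOp module) {
      unsigned n = 0;
      for (mlir::func::FuncOp fn : module.getOps<mlir::func::FuncOp>()) {
        (void)fn;
        ++n;
      }
      return n;
    }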
River Riddle 206e55cc16 NFC: Refactor Module to be value typed.
As with Functions, Module will soon become an operation, and operations are value-typed. This eases the transition from Module to ModuleOp. A new class, OwningModuleRef, is provided to allow for owning a reference to a Module, and will auto-delete the held module on destruction.

PiperOrigin-RevId: 256196193
2019-07-02 16:43:36 -07:00
River Riddle 54cd6a7e97 NFC: Refactor Function to be value typed.
Move the data members out of Function and into a new impl storage class 'FunctionStorage'. This allows for Function to become value typed, which will greatly simplify the transition of Function to FuncOp (given that FuncOp is also value typed).

PiperOrigin-RevId: 255983022
2019-07-01 11:39:00 -07:00
Stephan Herhut 9d81081d90 Add a pass that translates GPU.launch_func into a series of runtime calls.
This does not map the calls to the CUDA library directly but uses a slim wrapper
ABI on top that has more convenient types for code generation and is stable. Such
an ABI is expected to be provided by the actual runner.

PiperOrigin-RevId: 253983833
2019-06-19 23:07:43 -07:00
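A heavily hedged sketch of what such a slim wrapper ABI could look like as C declarations. Apart from mcuMemHostRegister, which appears elsewhere in this log, every name and signature below is hypothetical; the real ABI is whatever the runner provides.

    #include <cstdint>

    extern "C" {
    // Load a compiled blob (e.g. a CUBIN) and return an opaque module handle.
    void *mcuModuleLoad(void *blob);                            // hypothetical
    // Look up a kernel function by name inside a loaded module.
    void *mcuModuleGetFunction(void *module, const char *name); // hypothetical
    // Launch with grid/block sizes and a packed argument array.
    int32_t mcuLaunchKernel(void *function, intptr_t gridX, intptr_t gridY,
                            intptr_t gridZ, intptr_t blockX, intptr_t blockY,
                            intptr_t blockZ, void **args);      // hypothetical
    }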