Commit Graph

113 Commits

Alex Zinenko 11d12670da GPUToCUDA: attach CUBIN to the nested module rather than to the function
Originally, we were attaching attributes containing CUBIN blobs to the kernel
function called by `gpu.launch_func`. This kernel is now contained in a nested
module that is used as a compilation unit. Attach compiled CUBIN blobs to the
module rather than to the function since we were compiling the module. This
also avoids duplication of the attribute on multiple kernels within the same
module.

PiperOrigin-RevId: 273497303
2019-10-08 05:11:26 -07:00
Alex Zinenko 52e082b6ed GPUToCUDA: emit addressof directly instead of wrapping it into a getter function
Originally, the CUBIN getter function was introduced as a mechanism to
circumvent the absence of globals in the LLVM dialect. It would allocate memory
and populate it with the CUBIN data. LLVM dialect now supports globals and they
are already used to store CUBIN data, making the getter function a trivial
address computation of a global. Emit the address computation directly at the
place of `gpu.launch_func` instead of putting it in a function and calling it.
This simplifies the conversion flow and prepares it for using the
DialectConversion infrastructure.

PiperOrigin-RevId: 273496221
2019-10-08 05:03:42 -07:00
Alex Zinenko 16af5924cb Fuse GenerateCubinAccessors pass into LaunchFunctToCuda
Now that the accessor function is a trivial getter of the global variable, it
makes less sense to have the getter generation as a separate pass. Move the
getter generation into the lowering of `gpu.launch_func` to CUDA calls. This
change is mostly code motion, but the process can be simplified further by
generating the addressof in place instead of using a call. This will be done
in a follow-up.

PiperOrigin-RevId: 273492517
2019-10-08 04:35:33 -07:00
Alex Zinenko 90d65d32d6 Use named modules for gpu.launch_func
The kernel function called by gpu.launch_func is now placed into an isolated
nested module during the outlining stage to simplify separate compilation.
Until recently, modules did not have names and could not be referenced. This
limitation was circumvented by introducing a stub kernel with the same name at
the same nesting level as the module containing the actual kernel. This
relation is only effective in one direction: from the actual kernel function to
its launch_func "caller".

Leverage the recently introduced symbol name attributes on modules to refer to
a specific nested module from `gpu.launch_func`. This removes the implicit
connection between the identically named stub and kernel functions. It also
enables support for `gpu.launch_func`s to call different kernels located in the
same module.

PiperOrigin-RevId: 273491891
2019-10-08 04:30:32 -07:00
Christian Sigg 7c765d97f9 Support reduction of partial warps.
gpu.all_reduce now supports block sizes that are not a multiple of 32.

PiperOrigin-RevId: 273255204
2019-10-07 03:31:00 -07:00
Nicolas Vasilache 754ea72794 Replace constexpr MemRefType::kDynamicStrideOrOffset by a MemRefType::getDynamicStrideOrOffset() method - NFC
This fixes global ODR-use issues, some of which manifest in Parser.cpp.

Fixes tensorflow/mlir#167.

PiperOrigin-RevId: 272886347
2019-10-04 08:58:09 -07:00
Deven Desai d064469f6f Moving the GPUIndexIntrinsicOpLowering template to a common location
The GPUIndexIntrinsicOpLowering template is currently used by the code in both the GPUToNVVM and GPUToROCDL dirs.
Moving it to a common location to remove code duplication.

Closes tensorflow/mlir#163

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/163 from deven-amd:deven-refactor-gpu-index-ops-lowering b8dc2a5f5353df196039b6ff2ad42106028693ed
PiperOrigin-RevId: 272863297
2019-10-04 06:20:05 -07:00
Christian Sigg 85dcaf19c7 Fix typos, NFC.
PiperOrigin-RevId: 272851237
2019-10-04 04:37:53 -07:00
MLIR Team 0dfa7fc908 Add fpext and fptrunc to the Standard dialect and includes conversion to LLVM
PiperOrigin-RevId: 272768027
2019-10-03 16:37:24 -07:00
Alex Zinenko bd4762502c Add parentheses around boolean operators in assert
This removes a warning and is generally a good practice.

PiperOrigin-RevId: 272613597
2019-10-03 01:47:14 -07:00
Alex Zinenko e0d78eac23 NFC: rename Conversion/ControlFlowToCFG to Conversion/LoopToStandard
This makes the name of the conversion pass more consistent with the naming
scheme, since it actually converts from the Loop dialect to the Standard
dialect rather than working with arbitrary control flow operations.

PiperOrigin-RevId: 272612112
2019-10-03 01:35:03 -07:00
Nicolas Vasilache 9604bb6269 Extract MemRefType::getStridesAndOffset as a free function and fix dynamic offset determination.
This also adds a previously missing test, which uncovered a bug in the conditional for testing whether an offset is dynamic or not.

PiperOrigin-RevId: 272505798
2019-10-02 13:25:05 -07:00
Deven Desai e81b3129b4 [ROCm] Adding pass to lower GPU Dialect to ROCDL Dialect.
This is a follow-up to PR tensorflow/mlir#146, which introduced the ROCDL Dialect. This PR introduces a pass to lower the GPU Dialect to the ROCDL Dialect. As with the previous PR, this one builds on the work done by @whchung, and addresses most of the review comments in the original PR.

Closes tensorflow/mlir#154

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/154 from deven-amd:deven-lower-gpu-to-rocdl 809893e08236da5ab6a38e3459692fa04247773d
PiperOrigin-RevId: 272390729
2019-10-02 01:50:30 -07:00
Alex Zinenko c760f233b3 Fix and simplify CallOp/CallIndirectOp to LLVM::CallOp conversion
A recent ABI compatibility change affected the conversion from standard
CallOp/CallIndirectOp to LLVM::CallOp by changing its signature. In order to
analyze the signature, the code was looking up the callee symbol in the module.
This is incorrect since, during the conversion, the module may contain both the
original and the converted function op that have the same symbol name. There is
no strict guarantee on which of the two symbols will be found by the lookup.
The conversion was not failing because the type legalizer converts the LLVM
types to themselves making the original and the converted function signatures
ultimately produce the same type.

Instead of looking up the function signature to get the list of result types,
use the types of the CallOp/CallIndirectOp results which must match those of
the function in valid IR. These types are guaranteed to be the original,
unconverted types when converting the operation. Furthermore, this avoids the
need to perform a lookup of a symbol name in the module which may be expensive.

Finally, propagate attributes as-is from the original op to the converted op
since they share the attribute name for the callee of direct calls and the rest
of attributes are not affected by the conversion. This removes the need for
additional contortions between direct and indirect calls to extract the name of
the optional callee attribute only to insert it back. This also prevents the
conversion from unintentionally dropping the other attributes of the op.

PiperOrigin-RevId: 272218871
2019-10-01 08:41:50 -07:00
Nicolas Vasilache e36337a998 Unify Linalg types by using strided memrefs
This CL finishes the implementation of the Linalg + Affine type unification of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
As a consequence, the !linalg.view type, linalg::DimOp, linalg::LoadOp and linalg::StoreOp can now disappear and Linalg can use standard types everywhere.

PiperOrigin-RevId: 272187165
2019-10-01 05:23:21 -07:00
Christian Sigg 1129931a62 Change all_reduce lowering to support 2D and 3D blocks.
Perform the second reduce only with the first warp. This requires an additional __syncthreads(), but doesn't need special handling when the last warp is small. This simplifies support for block sizes that are not a multiple of 32.

Supporting partial warp reduce will be done in a separate CL.

PiperOrigin-RevId: 272168917
2019-10-01 02:51:15 -07:00
Nicolas Vasilache 923b33ea16 Normalize MemRefType lowering to LLVM as strided MemRef descriptor
This CL finishes the implementation of the lowering part of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).

Strided memrefs correspond conceptually to the following templated C++ struct:
```
template <typename Elem, size_t Rank>
struct {
  Elem *ptr;
  int64_t offset;
  int64_t sizes[Rank];
  int64_t strides[Rank];
};
```
The linearization procedure for address calculation for strided memrefs is the same as for linalg views:
`base_offset + SUM_i index_i * stride_i`.
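
As a minimal C++ sketch of this address computation (illustrative only; the names are hypothetical and not part of the lowering code):
```
// Illustration of the strided linearization described above.
#include <cstdint>
#include <vector>

int64_t linearize(int64_t base_offset, const std::vector<int64_t> &strides,
                  const std::vector<int64_t> &indices) {
  int64_t addr = base_offset;
  for (size_t i = 0; i < indices.size(); ++i)
    addr += indices[i] * strides[i]; // SUM_i index_i * stride_i
  return addr; // element offset into the data pointer
}
```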

The following CL will unify Linalg and Standard by removing !linalg.view in favor of strided memrefs.

PiperOrigin-RevId: 272033399
2019-09-30 11:58:54 -07:00
Christian Sigg 3d9679bde4 Switch comments from GPU dialect terms to CUDA terms (NFC).
local workgroup -> block, subgroup -> warp, invocation -> thread.

PiperOrigin-RevId: 271946342
2019-09-30 03:19:45 -07:00
Nicolas Vasilache bc4984e4f7 Add TODO to revisit coupling of CallOp to MemRefType lowering
PiperOrigin-RevId: 271619132
2019-09-27 12:03:00 -07:00
Nicolas Vasilache ddf737c5da Promote MemRefDescriptor to a pointer to struct when passing function boundaries in LLVMLowering.
The strided MemRef RFC discusses a normalized descriptor and interaction with library calls (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Lowering of nested LLVM structs as value types does not play nicely with externally compiled C/C++ functions due to ABI issues.
Solving the ABI problem in general is very complex and most likely involves taking
a dependency on clang that we do not want at the moment.

A simple workaround is to pass pointers to memref descriptors at function boundaries, which this CL implements.
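
As a rough C++ illustration of this boundary ABI (the struct and function names here are hypothetical, not the generated code), a callee receives the descriptor by pointer rather than by value:
```
// Hypothetical C++ view of the strided memref descriptor, passed by pointer.
#include <cstddef>
#include <cstdint>

template <typename Elem, size_t Rank>
struct MemRefDescriptor {
  Elem *ptr;
  int64_t offset;
  int64_t sizes[Rank];
  int64_t strides[Rank];
};

// External functions see a pointer, avoiding struct-by-value ABI pitfalls.
extern "C" void external_fn(MemRefDescriptor<float, 2> *arg);
```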

PiperOrigin-RevId: 271591708
2019-09-27 09:57:36 -07:00
Christian Sigg 116dac00ba Add AllReduceOp to GPU dialect with lowering to NVVM.
The reduction operation is currently fixed to "add", and the scope is fixed to "workgroup".

The implementation is currently limited to sizes that are multiples of 32 (the warp size) and no larger than 1024.

PiperOrigin-RevId: 271290265
2019-09-26 00:17:50 -07:00
Uday Bondhugula 458ede8775 Introduce splat op + provide its LLVM lowering
- introduce splat op in standard dialect (currently for int/float/index input
  type, output type can be vector or statically shaped tensor)
- implement LLVM lowering (when result type is 1-d vector)
- add constant folding hook for it
- while in Ops.cpp, fix some stale names

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#141

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/141 from bondhugula:splat 48976a6aa0a75be6d91187db6418de989e03eb51
PiperOrigin-RevId: 270965304
2019-09-24 12:44:58 -07:00
Nicolas Vasilache 42d8fa667b Normalize lowering of MemRef types
The RFC for unifying Linalg and Affine compilation passes into an end-to-end flow with a predictable ABI and linkage to external function calls raised the question of why we have variable-sized descriptors for memrefs depending on whether they have static or dynamic dimensions (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).

This CL standardizes the ABI on the rank of the memrefs.
The LLVM struct for a memref becomes equivalent to:
```
template <typename Elem, size_t Rank>
struct {
  Elem *ptr;
  int64_t sizes[Rank];
};
```
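
For example, a rank-2 memref of f32 would map to the equivalent of the following (an illustrative C++ instantiation, not the generated LLVM IR):
```
// Illustrative instantiation of the rank-based descriptor for a 2-D f32 memref;
// the descriptor layout depends only on the rank, not on which sizes are static.
#include <cstdint>

struct MemRef2DF32Descriptor {
  float *ptr;
  int64_t sizes[2];
};
```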

PiperOrigin-RevId: 270947276
2019-09-24 11:21:49 -07:00
Mehdi Amini 5583252173 Add convenience methods to set an OpBuilder insertion point after an Operation (NFC)
PiperOrigin-RevId: 270727180
2019-09-23 11:54:55 -07:00
Christian Sigg b8676da1fc Outline GPU kernel function into a nested module.
Roll forward of commit 5684a12.

When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.

PiperOrigin-RevId: 270639748
2019-09-23 03:17:01 -07:00
Christian Sigg c900d4994e Fix a number of Clang-Tidy warnings.
PiperOrigin-RevId: 270632324
2019-09-23 02:34:27 -07:00
Manuel Freiberger 2c11997d48 Add integer sign- and zero-extension and truncation to standard.
This adds sign- and zero-extension and truncation of integer types to the
standard dialect. This allows integer type conversions to be performed without
having to go to the LLVM dialect and introduce custom type casts (between
standard and LLVM integer types).
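
As a rough analogy (illustrative C++ only, not the dialect implementation), the semantics of these ops mirror the usual integer conversions:
```
// Assumed mapping, for illustration: sign-extension, zero-extension, and
// truncation expressed as plain C++ integer conversions.
#include <cstdint>

int64_t sext_i32_to_i64(int32_t x) { return static_cast<int64_t>(x); }    // replicate the sign bit
uint64_t zext_i32_to_i64(uint32_t x) { return static_cast<uint64_t>(x); } // fill upper bits with zero
uint8_t trunc_i32_to_i8(uint32_t x) { return static_cast<uint8_t>(x); }   // drop the upper bits
```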

Closes tensorflow/mlir#134

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/134 from ombre5733:sext-zext-trunc-in-std c7657bc84c0ca66b304e53ec03797e09152e4d31
PiperOrigin-RevId: 270479722
2019-09-21 16:14:56 -07:00
George Karpenkov 2df646bef6 Automated rollback of commit 5684a12434
PiperOrigin-RevId: 270126672
2019-09-19 14:34:30 -07:00
MLIR Team 5684a12434 Outline GPU kernel function into a nested module.
When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.

PiperOrigin-RevId: 269987720
2019-09-19 01:51:28 -07:00
MLIR Team 1c73be76d8 Unify error messages to start with lower-case.
PiperOrigin-RevId: 269803466
2019-09-18 07:45:17 -07:00
MLIR Team 1da0290c4b Error out when kernel function is not found while translating GPU calls.
PiperOrigin-RevId: 269327909
2019-09-16 07:19:36 -07:00
Alex Zinenko 6755dfdec9 Drop makePositionAttr and the like in favor of Builder::getI64ArrayAttr
The helper functions makePositionAttr() and positionAttr() were originally
introduced in the lowering-to-LLVM-dialect pass to construct integer array
attributes that are used for static positions in extract/insertelement.
Constructing an integer array attribute being fairly common, a utility function
Builder::getI64ArrayAttr was later introduced into the Builder API.  Drop
makePositionAttr and similar homegrown functions and use that API instead.
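
A minimal sketch of the replacement, assuming a hypothetical helper that previously assembled the attribute by hand:
```
// Sketch only: build the i64 array attribute via the Builder API rather than
// a homegrown helper such as makePositionAttr.
#include "mlir/IR/Builders.h"

static mlir::ArrayAttr getPositionAttr(mlir::Builder &builder,
                                       llvm::ArrayRef<int64_t> position) {
  return builder.getI64ArrayAttr(position);
}
```
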
PiperOrigin-RevId: 269295836
2019-09-16 03:31:09 -07:00
Mahesh Ravishankar 9814b3fa0d Add mechanism to specify extended instruction sets in SPIR-V.
Add support for specifying extended instruction sets. The operations
in the SPIR-V dialect are named 'spv.<extension-name>.<op-name>'. Use
this mechanism to define an 'Exp' operation from the GLSL(450)
instructions.
Later CLs will add support for (de)serialization of these operations
and update the dialect generation scripts to auto-generate the
specification directly from the spec.

Additional changes:
Add a Type Constraint to OpBase.td to check for vectors of specified
lengths. This is used to check that the vector types used in the SPIR-V
dialect are of length 2, 3, or 4.
Update SPIRVBase.td to use this Type constraint for vectors.

PiperOrigin-RevId: 269234377
2019-09-15 19:40:07 -07:00
Lei Zhang 113aadddf9 Update SPIR-V symbols and use GLSL450 instead of VulkanKHR
SPIR-V recently published v1.5, which brings a bunch of symbols
into core. So the suffix "KHR"/"EXT"/etc. is removed from the
symbols. We use a script to pull information from the spec
directly.

Also changed conversion and tests to use GLSL450 instead of
VulkanKHR memory model. GLSL450 is still the main memory model
supported by Vulkan shaders and it does not require an extra
capability to enable.

PiperOrigin-RevId: 268992661
2019-09-13 15:26:32 -07:00
River Riddle f1b100c77b NFC: Finish replacing FunctionPassBase/ModulePassBase with OpPassBase.
These directives were temporary during the generalization of FunctionPass/ModulePass to OpPass.

PiperOrigin-RevId: 268970259
2019-09-13 13:34:27 -07:00
MLIR Team 36508528c7 Overload LLVM::TerminatorOp::build() for empty operands list.
PiperOrigin-RevId: 268041584
2019-09-09 11:39:03 -07:00
MLIR Team b5652720c1 Retain address space during MLIR > LLVM conversion.
PiperOrigin-RevId: 267206460
2019-09-04 12:26:52 -07:00
Lei Zhang 5593e005c6 Add folding rule and dialect materialization hook for spv.constant
This will allow us to use MLIR's folding infrastructure to deduplicate
SPIR-V constants.

This CL also changed isValidSPIRVType in SPIRVDialect to a static method.

PiperOrigin-RevId: 266984403
2019-09-03 12:09:58 -07:00
Mehdi Amini 765d60fd4d Add missing lowering to CFG in mlir-cpu-runner + related cleanup
- the list of passes run by mlir-cpu-runner included -lower-affine and
  -lower-to-llvm but was missing -lower-to-cfg (because -lower-affine at
  some point used to lower straight to CFG); add -lower-to-cfg in
  between. IR with affine ops can now be run by mlir-cpu-runner.

- update -lower-to-cfg to be consistent with other passes (create*Pass methods
  were changed to return unique ptrs, but -lower-to-cfg appears to have been
  missed).

- mlir-cpu-runner was unable to parse the custom form of affine ops - fix
  link options

- drop unnecessary run options from test/mlir-cpu-runner/simple.mlir
  (none of the test cases had loops)

- -convert-to-llvmir was changed to -lower-to-llvm at some point, but the
  create pass method name wasn't updated (this pass converts/lowers to LLVM
  dialect as opposed to LLVM IR). Fix this.

(If we prefer "convert", the cmd-line options could be changed to
"-convert-to-llvm/cfg" then.)

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#115

PiperOrigin-RevId: 266666909
2019-09-01 11:33:22 -07:00
River Riddle 4bfae66d70 Refactor the 'walk' methods for operations.
This change refactors and cleans up the implementation of the operation walk methods. After this refactoring, the explicit template parameter for the operation type is no longer needed for the explicit op walks. For example:

    op->walk<AffineForOp>([](AffineForOp op) { ... });

is now accomplished via:

    op->walk([](AffineForOp op) { ... });

PiperOrigin-RevId: 266209552
2019-08-29 13:04:50 -07:00
Stephan Herhut 545c3e489f Port mlir-cuda-runner to use dialect conversion framework.
Instead of lowering the program in two steps (Standard->LLVM followed
by GPU->NVVM), leading to invalid IR in between, the runner now uses
one pattern-based rewrite step to go directly from Standard+GPU to
LLVM+NVVM.

PiperOrigin-RevId: 265861934
2019-08-28 01:50:57 -07:00
Mahesh Ravishankar 4ced99c085 Enhance GPU To SPIR-V conversion to support builtins and load/store ops.
To support a conversion of a simple load-compute-store kernel from the GPU
dialect to the SPIR-V dialect, the conversion of operations like
"gpu.block_dim" and "gpu.thread_id", which allow threads to query the launch
dimensions, is needed. In SPIR-V these are specified as global
variables with builtin attributes. This CL adds support for specifying
builtin variables in the SPIR-V conversion framework. This is used to
convert the relevant operations from the GPU dialect to the SPIR-V dialect.
Also add support for conversion of load/store operations in the Standard
dialect to the SPIR-V dialect.
To simplify the conversion add a method to build a spv.AccessChain
operation that automatically determines the return type based on the
base pointer type and the indices provided.

PiperOrigin-RevId: 265718525
2019-08-27 10:50:23 -07:00
Nicolas Vasilache fa592908af Let LLVMOpLowering specify a PatternBenefit - NFC
Currently the benefit is always set to 1, which limits the ability to do A->B->C lowering.

PiperOrigin-RevId: 264854146
2019-08-22 09:38:42 -07:00
River Riddle ffde975e21 NFC: Move AffineOps dialect to the Dialect sub-directory.
PiperOrigin-RevId: 264482571
2019-08-20 15:36:39 -07:00
Alex Zinenko 006fcce44a ConvertLaunchFuncToCudaCalls: use LLVM dialect globals
This conversion has been using a stack-allocated array of i8 to store the
null-terminated kernel name in order to pass it to the CUDA wrappers expecting
a C string because the LLVM dialect was missing support for globals. Now that
this support has been introduced, use a global instead.

Refactor global string construction from GenerateCubinAccessors into a common
utility function living in the LLVM namespace.

PiperOrigin-RevId: 264382489
2019-08-20 07:52:01 -07:00
Nicolas Vasilache f55ac5c076 Add support for LLVM lowering of binary ops on n-D vector types
This CL allows binary operations on n-D vector types to be lowered to LLVMIR by performing an (n-1)-D extractvalue, 1-D vector operation and an (n-1)-D insertvalue.
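
Conceptually (a plain C++ analogy, not the actual lowering code), a 2-D addition decomposes into 1-D operations over extracted rows:
```
// Illustrative analogue of the (n-1)-D extractvalue / 1-D vector op /
// (n-1)-D insertvalue scheme, shown for the 2-D case.
#include <array>
#include <cstddef>

template <std::size_t M, std::size_t N>
std::array<std::array<float, N>, M>
add2d(const std::array<std::array<float, N>, M> &a,
      const std::array<std::array<float, N>, M> &b) {
  std::array<std::array<float, N>, M> result{};
  for (std::size_t i = 0; i < M; ++i) {   // extract row i from a and b
    for (std::size_t j = 0; j < N; ++j)   // 1-D addition of the two rows
      result[i][j] = a[i][j] + b[i][j];   // insert the summed row into result
  }
  return result;
}
```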

PiperOrigin-RevId: 264339118
2019-08-20 02:00:22 -07:00
Nicolas Vasilache b628194013 Move Linalg and VectorOps dialects to the Dialect subdir - NFC
PiperOrigin-RevId: 264277760
2019-08-19 17:11:38 -07:00
River Riddle ba0fa92524 NFC: Move LLVMIR, SDBM, and StandardOps to the Dialect/ directory.
PiperOrigin-RevId: 264193915
2019-08-19 11:01:25 -07:00
Nicolas Vasilache 9bf69e6a2e Refactor linalg lowering to LLVM
The linalg.view type used to be lowered to a struct containing a data pointer, offset, sizes/strides information. This was problematic when passing to external functions due to ABI, struct padding and alignment issues.

The linalg.view type is now lowered to LLVMIR as a *pointer* to a struct containing the data pointer, offset and sizes/strides. This simplifies the interfacing with external library functions and makes it trivial to add new functions without creating a shim that would go from a value type struct to a pointer type.

The consequences are that:
1. lowering explicitly uses llvm.alloca in lieu of llvm.undef and performs the proper llvm.load/llvm.store where relevant.
2. the shim creation function `getLLVMLibraryCallDefinition` disappears.
3. views are passed by pointer, scalars are passed by value. In the future, other structs will be passed by pointer (on a per-need basis).

PiperOrigin-RevId: 264183671
2019-08-19 10:21:40 -07:00
Jacques Pienaar 79f53b0cf1 Change from llvm::make_unique to std::make_unique
Switch to the C++14 standard method, as llvm::make_unique has been removed
(https://reviews.llvm.org/D66259). Also mark some targets as C++14 to ease
upcoming integrates.

PiperOrigin-RevId: 263953918
2019-08-17 11:06:03 -07:00