Commit Graph

52 Commits

Author SHA1 Message Date
Alex Zinenko ec1f4e7c3b [mlir] switch the modeling of LLVM types to use the new mechanism
A new first-party modeling for LLVM IR types in the LLVM dialect has been
developed in parallel to the existing modeling based on wrapping LLVM `Type *`
instances. It resolves the long-standing problem of modeling identified
structure types, including recursive structures, and enables future removal of
LLVMContext and related locking mechanisms from LLVMDialect.
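
A minimal sketch of what the new modeling can express directly, e.g. an identified struct that refers to itself through a pointer (a hypothetical type; syntax as of this commit):

```mlir
// A hypothetical linked-list node: the nested struct<"node"> refers back
// to the enclosing identified type, which the old Type *-wrapping model
// could not represent without extra machinery.
llvm.func @use_node(%arg0: !llvm.ptr<struct<"node", (i32, ptr<struct<"node">>)>>) {
  llvm.return
}
```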

This commit only switches the modeling by (a) renaming LLVMTypeNew to LLVMType,
(b) removing the old implementation of LLVMType, and (c) updating the tests. It
is intentionally minimal. Separate commits will remove the infrastructure built
for the transition and update API uses where appropriate.

Depends On D85020

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85021
2020-08-04 14:29:25 +02:00
Frederik Gossen 904f91db5f [MLIR][Standard] Make the `dim` operation index an operand.
Allow for dynamic indices in the `dim` operation.
Rather than an attribute, the index is now an operand of type `index`.
This allows the operation to be applied to dynamically ranked tensors.
The correct lowering of dynamic indices remains to be implemented.
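
A minimal sketch of the new form (function and value names hypothetical):

```mlir
func @dim_example(%t: tensor<?x?xf32>, %idx: index) -> index {
  %c1 = constant 1 : index
  %d0 = dim %t, %c1 : tensor<?x?xf32>   // constant index, now as an operand
  %d1 = dim %t, %idx : tensor<?x?xf32>  // dynamic index, previously impossible
  return %d1 : index
}
```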

Differential Revision: https://reviews.llvm.org/D81551
2020-06-10 13:54:47 +00:00
Mehdi Amini d31c9e5a46 Change filecheck default to dump input on failure
Having the input dumped on failure seems like a better
default: I debugged FileCheck tests for a while without knowing
about this option, which really helps in understanding failures.

Remove `-dump-input-on-failure` and the environment variable
FILECHECK_DUMP_INPUT_ON_FAILURE which are now obsolete.

Differential Revision: https://reviews.llvm.org/D81422
2020-06-09 18:57:46 +00:00
Wen-Heng (Jack) Chung 603b974cf7 [mlir][gpu] Fix logic error in D79508 computing number of private attributions.
Fix logic error in D79508. The old logic would make the first check in
`GPUFuncOp::verifyBody` always pass.
2020-06-08 07:40:34 -05:00
Jacques Pienaar b0921f68e1 [mlir] Add verify method to adaptor
This allows verifying op-independent attributes (e.g., attributes that do not require the op to have been created) before constructing an operation. These include checking whether required attributes are defined, or constraints on attributes (such as an I32 attribute). This is not perfect (e.g., given a disjunctive constraint where one part relies on the op and the other does not, this would not try to extract the op-independent part from the op-dependent one).

The next step is to move these out to a trait that could be verified earlier than in the generated method. The first use case is inferring the return type while constructing the op. At that point you don't have an Operation yet, which forces one to duplicate the same checks, e.g., verify that attribute A is defined before querying A in the shape function. Instead, this allows one to invoke a method to verify all the traits and, if this is checked first during verification, all other traits could use attributes knowing they have been verified.

It is a little bit funny to have these on the adaptor, but I see the adaptor as a place to collect information about the op before the op is constructed (e.g., avoiding stringly typed accessors, verifying what is possible to verify before the op is constructed) while being cheap to use even with a constructed op (a layer of indirection between the constructed op and the op being constructed). From that point of view it made sense to me.

Differential Revision: https://reviews.llvm.org/D80842
2020-06-05 09:47:37 -07:00
Thomas Raoux 661235e126 [mlir][gpu] Add subgroup Id/Size/Num to GPU dialect
Add SubgroupId, SubgroupSize and NumSubgroups to GPU dialect ops and add the
lowering of those ops to SPIRV.
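
A minimal sketch of the three new ops in use (kernel name hypothetical):

```mlir
gpu.module @kernels {
  gpu.func @subgroup_info() kernel {
    %id  = gpu.subgroup_id : index    // id of the subgroup within the workgroup
    %sz  = gpu.subgroup_size : index  // number of threads per subgroup
    %num = gpu.num_subgroups : index  // number of subgroups per workgroup
    gpu.return
  }
}
```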

Differential Revision: https://reviews.llvm.org/D81042
2020-06-04 10:52:40 -07:00
Alex Zinenko 60f443bb3b [mlir] Change dialect namespace loop->scf
All ops of the SCF dialect now use the `scf.` prefix instead of `loop.`. This
is part of the dialect renaming.
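
For example (a trivial loop; names hypothetical):

```mlir
func @loops(%lb: index, %ub: index) {
  %c1 = constant 1 : index
  // Formerly spelled loop.for; only the dialect prefix changes.
  scf.for %i = %lb to %ub step %c1 {
    scf.yield
  }
  return
}
```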

Differential Revision: https://reviews.llvm.org/D79844
2020-05-13 19:20:21 +02:00
Tres Popp 2d2d696137 [MLIR] Propagate input side effect information
Summary:
Previously operations like std.load created methods for obtaining their
effects but did not inherit from the SideEffect interfaces when their
parameters were decorated with the information. The resulting situation
was that passes had no information on the SideEffects of std.load/store
and had to treat them more cautiously. This adds the inheritance
information when creating the methods.

As a side effect, many tests are modified, as they were using std.load
for testing and this operation would be folded away as part of pattern
rewriting. Tests are modified to use store or to return the result of
the std.load.

Reviewers: mravishankar, antiagainst, nicolasvasilache, herhut, aartbik, ftynse!

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, bader, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78802
2020-04-27 11:35:52 +02:00
Frederik Gossen 7e4b139a04 [MLIR] Ensure `gpu.func` must be inside a `gpu.module`.
Ensure that `gpu.func` is only used within the dedicated `gpu.module`.
Implement the constraint in the GPU dialect and adapt the test cases.
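
A minimal sketch of the nesting the verifier now enforces (names hypothetical):

```mlir
// Valid: gpu.func nested inside a gpu.module.
gpu.module @kernels {
  gpu.func @kernel() kernel {
    gpu.return
  }
}
// A gpu.func placed directly at module scope is now rejected.
```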

Differential Revision: https://reviews.llvm.org/D78541
2020-04-24 07:17:48 +00:00
Frederik Gossen 0372db05bb [MLIR] Use nested symbol to identify kernel in `LaunchFuncOp`.
Summary:
Use a nested symbol to identify the kernel to be invoked by a `LaunchFuncOp` in the GPU dialect.
This replaces the two attributes that were previously used to identify the kernel module and the kernel within it separately.
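
A sketch in the generic syntax of the time (kernel and value names hypothetical):

```mlir
func @launch(%c1: index) {
  // A single nested symbol identifies both the kernel module and the kernel.
  "gpu.launch_func"(%c1, %c1, %c1, %c1, %c1, %c1) {kernel = @kernels::@kernel}
      : (index, index, index, index, index, index) -> ()
  return
}
```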

Differential Revision: https://reviews.llvm.org/D78551
2020-04-22 07:44:29 +00:00
Frederik Gossen 648fc95083 [MLIR] Use `kernel` as a short hand for `gpu.kernel` attribute.
Summary:
Use the shortcut `kernel` for the `gpu.kernel` attribute of `gpu.func`.
The parser already supports this, and it makes test cases easier to read.
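
For example:

```mlir
gpu.module @kernels {
  // `kernel` is shorthand for the `gpu.kernel` unit attribute.
  gpu.func @kernel() kernel {
    gpu.return
  }
}
```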

Differential Revision: https://reviews.llvm.org/D78542
2020-04-22 07:38:30 +00:00
Frederik Gossen 2813802746 [MLIR] Fix test case for kernel attribute.
Summary:
Fix a broken test case in the `invalid.mlir` lit test case.
`expect` was missing its `e`.

Differential Revision: https://reviews.llvm.org/D78540
2020-04-22 07:27:39 +00:00
Mehdi Amini bab5bcf8fd Add a flag on the context to protect against creation of operations in unregistered dialects
Differential Revision: https://reviews.llvm.org/D76903
2020-03-30 19:37:31 +00:00
Valentin Clement d4d62fcab6 [MLIR] Add test for multiple gpu.all_reduce in the same kernel when lowering to NVVM
Summary: This patch adds tests for lowering multiple `gpu.all_reduce` operations in the same kernel. This was previously failing.

Differential Revision: https://reviews.llvm.org/D75930
2020-03-19 16:36:38 +01:00
Valentin Clement c7380995f8 [MLIR] Add `and`, `or`, `xor`, `min`, `max` to gpu.all_reduce and the nvvm lowering
Summary:
This patch adds some built-in operations for the gpu.all_reduce ops.
- for Integer only: `and`, `or`, `xor`
- for Float and Integer: `min`, `max`

This is useful for higher-level dialects like OpenACC or OpenMP that can lower to the GPU dialect.
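
A sketch in the generic syntax of the time (value names hypothetical; the empty region means the `op` attribute selects the accumulator):

```mlir
gpu.module @kernels {
  gpu.func @reduce(%val: f32, %bits: i32) {
    %max = "gpu.all_reduce"(%val)  ({}) {op = "max"} : (f32) -> (f32)
    %and = "gpu.all_reduce"(%bits) ({}) {op = "and"} : (i32) -> (i32)
    gpu.return
  }
}
```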

Differential Revision: https://reviews.llvm.org/D75766
2020-03-11 14:07:04 +01:00
Stephan Herhut f6790a1c63 Revert "[MLIR] Add `and`, `or`, `xor`, `min`, `max` to gpu.all_reduce and the nvvm lowering"
Attribution to original author got lost.
2020-03-11 14:07:04 +01:00
Stephan Herhut 2eff566b07 [MLIR] Add `and`, `or`, `xor`, `min`, `max` to gpu.all_reduce and the nvvm lowering
Summary:
This patch adds some built-in operations for the gpu.all_reduce ops.
- for Integer only: `and`, `or`, `xor`
- for Float and Integer: `min`, `max`

This is useful for higher-level dialects like OpenACC or OpenMP that can lower to the GPU dialect.

Differential Revision: https://reviews.llvm.org/D75766
2020-03-10 21:09:06 +01:00
MaheshRavishankar 3f44495dfd [mlir][GPU] Expose the functionality to create a GPUFuncOp from a LaunchOp
The current setup of the GPU dialect is to model both the host- and
device-side codegen. For some users (like IREE) the host-side modeling
might not directly fit their use case, but device-side codegen is still
valuable. The first step in accessing just the device-side functionality
of the GPU dialect is to allow creating a gpu.func operation from a
gpu.launch operation. In addition, this change also "inlines" operations
into the gpu.func op at the time of creation instead of this being a
later step.

Differential Revision: https://reviews.llvm.org/D75287
2020-03-05 11:03:51 -08:00
Stephan Herhut 7a7eacc797 [MLIR][GPU] Implement a simple greedy loop mapper.
Summary:
The mapper assigns annotations to loop.parallel operations that
are compatible with the loop-to-GPU mapping pass. The outermost
loop uses the grid dimensions, followed by block dimensions. All
remaining loops are mapped to sequential loops.
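
A sketch of the resulting annotation (attribute layout and processor numbering assumed from the mapping pass of that era; loop bounds hypothetical):

```mlir
func @map_example(%n: index, %m: index) {
  %c0 = constant 0 : index
  %c1 = constant 1 : index
  loop.parallel (%i, %j) = (%c0, %c0) to (%n, %m) step (%c1, %c1) {
    // body elided
  } {mapping = [
      // outer loop -> grid dimension x, inner loop -> block dimension x
      {processor = 0, map = affine_map<(d0) -> (d0)>, bound = affine_map<(d0) -> (d0)>},
      {processor = 3, map = affine_map<(d0) -> (d0)>, bound = affine_map<(d0) -> (d0)>}]}
  return
}
```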

Differential Revision: https://reviews.llvm.org/D74963
2020-02-25 11:42:42 +01:00
Alex Zinenko 5a1778057f [mlir] use unpacked memref descriptors at function boundaries
The existing (default) calling convention for memrefs in standard-to-LLVM
conversion was motivated by interfacing with LLVM IR produced from C sources.
In particular, it passes a pointer to the memref descriptor structure when
calling the function. Therefore, the descriptor is allocated on stack before
the call. This convention leads to several problems. PR44644 indicates a
problem with stack exhaustion when calling functions with memref-typed
arguments in a loop. Allocating outside of the loop may lead to concurrent
access problems in case the loop is parallel. When targeting GPUs, the contents
of the stack-allocated memory for the descriptor (passed by pointer) need to
be explicitly copied to the device. Using an aggregate type makes it impossible
to attach pointer-specific argument attributes pertaining to alignment and
aliasing in the LLVM dialect.

Change the default calling convention for memrefs in standard-to-LLVM
conversion to transform a memref into a list of arguments, each of primitive
type, that are comprised in the memref descriptor. This avoids stack allocation
for ranked memrefs (and thus stack exhaustion and potential concurrent access
problems) and simplifies the device function invocation on GPUs.

Provide an option in the standard-to-LLVM conversion to generate auxiliary
wrapper functions with the same interface as the previous calling convention,
compatible with LLVM IR produced from C sources. These auxiliary functions
pack the individual values into a descriptor structure or unpack it. They also
handle descriptor stack allocation if necessary, serving as an allocation
scope: the memory reserved by `alloca` will be freed on exiting the auxiliary
function.

The effect of this change on LLVM IR generated purely from MLIR is minimal.
When interfacing MLIR-generated LLVM IR with C-generated LLVM IR, the
integration only needs to enable the auxiliary functions and change the callee
to the wrapper function instead of the original function.

This also opens the door to forwarding aliasing and alignment information from
memrefs to LLVM IR pointers in the standard-to-LLVM conversion.
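
A sketch of the new boundary convention (function name hypothetical; LLVM dialect types as spelled in that era):

```mlir
// Before, at the standard level:  func @sum(memref<?xf32>) -> f32
// After conversion, the descriptor's fields become individual scalar
// arguments: allocated pointer, aligned pointer, offset, then one size
// and one stride per dimension.
llvm.func @sum(!llvm<"float*">, !llvm<"float*">, !llvm.i64, !llvm.i64,
               !llvm.i64) -> !llvm.float
```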
2020-02-10 15:03:43 +01:00
Stephan Herhut 283b5e733d [MLIR] Make gpu.launch implicitly capture uses of values defined above.
Summary:
In the original design, gpu.launch required explicit capture of uses
and passing them as operands to the gpu.launch operation. This was
motivated by infrastructure restrictions rather than design. This
change lifts the requirement and removes the concept of kernel
arguments from gpu.launch. Instead, the kernel outlining
transformation now does the explicit capturing.

This is a breaking change for users of gpu.launch.
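
A minimal sketch of the new form (value names hypothetical):

```mlir
func @launch(%value: f32, %c1: index) {
  gpu.launch blocks(%bx, %by, %bz) in (%gx = %c1, %gy = %c1, %gz = %c1)
             threads(%tx, %ty, %tz) in (%sx = %c1, %sy = %c1, %sz = %c1) {
    // %value is implicitly captured from above; it is no longer passed
    // as an explicit kernel argument of gpu.launch.
    %twice = addf %value, %value : f32
    gpu.terminator
  }
  return
}
```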

Differential Revision: https://reviews.llvm.org/D73769
2020-02-03 10:08:48 +01:00
Stephan Herhut 2692751895 Add 'gpu.terminator' operation.
Summary:
The 'gpu.terminator' operation is used as the terminator for the
regions of gpu.launch. This is to disambiguate them from the
return operation on 'gpu.func' functions.

This is a breaking change, and users of the gpu dialect will need
to adapt their code when producing 'gpu.launch' operations.

Reviewers: nicolasvasilache

Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, csigg, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73620
2020-01-30 12:41:41 +01:00
Tres Popp 9a52ea5cf9 Create a gpu.module operation for the GPU Dialect.
Summary:
Previously, code was constantly checking for an attribute on a module to
single out GPU kernel modules. This change instead represents the distinct
operation with a dedicated op, which can be used to provide better filtering.

Reverts "Revert "[mlir] Create a gpu.module operation for the GPU Dialect.""

This reverts commit ac446302ca4145cdc89f377c0c364c29ee303be5 after
fixing internal Google issues.

This additionally updates ROCDL lowering to use the new gpu.module.

Reviewers: herhut, mravishankar, antiagainst, nicolasvasilache

Subscribers: jholewinski, mgorny, mehdi_amini, jpienaar, burmako, shauheen, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits, mravishankar, rriddle, antiagainst, bkramer

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72921
2020-01-21 14:05:03 +01:00
Christian Sigg 8b2eb7c494 [mlir] Add in-dialect lowering of gpu.all_reduce.
Reviewers: ftynse, nicolasvasilache, herhut

Reviewed By: ftynse, herhut

Subscribers: liufengdb, aartbik, herhut, merge_guards_bot, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72129
2020-01-20 13:43:43 +01:00
Benjamin Kramer 0133cc60e4 Revert "[mlir] Create a gpu.module operation for the GPU Dialect."
This reverts commit 4624a1e8ac, which was causing
problems downstream.
2020-01-15 17:52:17 +01:00
Tres Popp 4624a1e8ac [mlir] Create a gpu.module operation for the GPU Dialect.
Summary:
Previously, code was constantly checking for an attribute on a module to
single out GPU kernel modules. This change instead represents the distinct
operation with a dedicated op, which can be used to provide better filtering.

Reviewers: herhut, mravishankar, antiagainst, rriddle

Reviewed By: herhut, antiagainst, rriddle

Subscribers: liufengdb, aartbik, jholewinski, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72336
2020-01-14 12:05:47 +01:00
Alex Zinenko 08778d8c4f [mlir][GPU] introduce utilities for promotion to workgroup memory
Introduce a set of functions that promote a memref argument of a `gpu.func` to
workgroup memory using memory attribution. The promotion boils down to
additional loops performing the copy from the original argument to the
attributed memory at the beginning of the function, and back at the end of the
function using all available threads. The loop bounds are specified so as to
adapt to any size of the workgroup. These utilities are intended to compose
with other existing utilities (loop coalescing and tiling) in cases where the
distribution of work across threads is uneven, e.g. copying a 2D memref with
only the threads along the "x" dimension. Similarly, specialization of the
kernel to specific launch sizes should be implemented as a separate pass
combining constant propagation and canonicalization.

Introduce a simple attribute-driven pass to test the promotion transformation
since we don't have a heuristic at the moment.
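
A sketch of the memory attributions involved (names hypothetical):

```mlir
// The promotion utilities copy %arg0 into the workgroup attribution
// (memory space 3) at function entry and copy it back on exit, using
// all available threads.
gpu.func @kernel(%arg0: memref<32xf32>)
    workgroup(%promoted: memref<32xf32, 3>)
    private(%scratch: memref<4xf32, 5>) {
  gpu.return
}
```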

Differential revision: https://reviews.llvm.org/D71904
2020-01-09 10:06:00 +01:00
Christian Sigg 42d46b4efa Add gpu.shuffle op.
This will allow us to lower most of gpu.all_reduce (when all_reduce
doesn't exist in the target dialect) within the GPU dialect, and only do
target-specific lowering for the shuffle op.
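
A minimal sketch of the new op (value names hypothetical):

```mlir
func @shuffle_example(%val: f32, %offset: i32, %width: i32) -> f32 {
  // xor mode implements a butterfly exchange; %valid indicates whether
  // the source lane was within %width.
  %shfl, %valid = gpu.shuffle %val, %offset, %width xor : f32
  return %shfl : f32
}
```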

PiperOrigin-RevId: 286548256
2019-12-20 02:52:52 -08:00
Alex Zinenko 40ef46fba4 Harden the requirements to memory attribution types in gpu.func
When memory attributions are present in `gpu.func`, require that they are of
memref type and live in memory spaces 3 and 5 for workgroup and private memory
attributions, respectively. Adapt the conversion from the GPU dialect to the
NVVM dialect to drop the private memory space from attributions as NVVM is able
to model them as local `llvm.alloca`s in the default memory space.

PiperOrigin-RevId: 286161763
2019-12-18 03:38:55 -08:00
Alex Zinenko 6273fa0c6a Plug gpu.func into the GPU lowering pipelines
This updates the lowering pipelines from the GPU dialect to lower-level
dialects (NVVM, SPIRV) to use the recently introduced gpu.func operation
instead of a standard function annotated with an attribute. In particular, the
kernel outlining is updated to produce gpu.func instead of std.func and the
individual conversions are updated to consume gpu.funcs and disallow standard
funcs after legalization, if necessary. The attribute "gpu.kernel" is preserved
in the generic syntax, but can also be used with the custom syntax on
gpu.funcs. The special kind of function for GPU allows one to use additional
features such as memory attribution.

PiperOrigin-RevId: 285822272
2019-12-16 12:12:48 -08:00
Alex Zinenko d5e627f84b Introduce Linkage attribute to the LLVM dialect
LLVM IR supports linkage on global objects such as global variables and
functions. Introduce the Linkage attribute into the LLVM dialect, backed by an
integer storage. Use this attribute on LLVM::GlobalOp and make it mandatory.
Implement parsing/printing of the attribute and conversion to LLVM IR.
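
For example (global names hypothetical; types in the era's LLVM dialect syntax):

```mlir
// Linkage is now mandatory and printed as a leading keyword.
llvm.mlir.global internal @counter(0 : i32) : !llvm.i32
llvm.mlir.global internal constant @magic(42 : i32) : !llvm.i32
```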

See tensorflow/mlir#277.

PiperOrigin-RevId: 283309328
2019-12-02 03:28:10 -08:00
Alex Zinenko bf4692dc49 Introduce gpu.func
Introduce a new function-like operation to the GPU dialect to provide a
placeholder for the execution semantic description and to add support for GPU
memory hierarchy.  This aligns with the overall goal of the dialect to expose
the common abstraction layer for GPU devices, in particular by providing an
MLIR unit of semantics (i.e. an operation) for memory modeling.

This proposal has been discussed in the mailing list:
https://groups.google.com/a/tensorflow.org/d/msg/mlir/RfXNP7Hklsc/MBNN7KhjAgAJ
As decided, the "convergence" aspect of the execution model will be factored
out into a new discussion and therefore is not included in this commit. This
commit only introduces the operation but does not hook it up with the remaining
flow. The intention is to develop the new flow while keeping the old flow
operational and do the switch in a simple, separately reversible commit.

PiperOrigin-RevId: 282357599
2019-11-25 08:10:37 -08:00
Alex Zinenko b5af3784a6 Don't force newline before function attributes
Due to legacy reasons, a newline character followed by two spaces was always
inserted before the attributes of the function Op in pretty form. This breaks
formatting when functions are nested in some other operations. Don't print the
newline and just put the attributes on the same line, which is also more
consistent with module Op. Line breaking aware of indentation can be introduced
separately into the parser if deemed useful.
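
For example, the attributes now stay on the signature line:

```mlir
func @example() attributes {foo = "bar"} {
  return
}
```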

PiperOrigin-RevId: 281721793
2019-11-21 05:08:19 -08:00
Stephan Herhut abb626686d Extend kernel outlining to also consider dim worth inlining.
PiperOrigin-RevId: 281483447
2019-11-20 02:59:35 -08:00
MLIR Team 9fbf52e330 Look for SymbolRefAttr in KernelOutlining instead of hard-coding CallOp
This code should be exercised using the existing kernel outlining unit test, but
let me know if I should add a dedicated unit test using a fake call instruction
as well.

PiperOrigin-RevId: 279436321
2019-11-08 19:13:13 -08:00
River Riddle 2b61b7979e Convert the Canonicalize and CSE passes to generic Operation Passes.
This allows them to be used on other non-function, or even other function-like, operations. The algorithms are already generic, so this simply changes the derived pass type. The majority of this change is just ensuring that the nesting of these passes remains the same, as the pass manager won't auto-nest them anymore.

PiperOrigin-RevId: 276573038
2019-10-24 15:01:09 -07:00
Kazuaki Ishizaki f28c5aca17 Fix minor spelling tweaks (NFC)
Closes tensorflow/mlir#175

PiperOrigin-RevId: 275726876
2019-10-20 09:44:36 -07:00
Stephan Herhut 3622e1833f Use StrEnumAttr for gpu.allreduce op instead of StringAttr to better encode constraints.
PiperOrigin-RevId: 275448372
2019-10-18 04:44:48 -07:00
Christian Sigg fe0ee32da5 Add gpu.barrier op to synchronize invocations of a local workgroup.
Adding gen table for rewrite patterns from GPU to NVVM dialect.

Copy missing op documentation from GPUOps.td to GPU.md.

PiperOrigin-RevId: 275419588
2019-10-18 00:30:44 -07:00
Christian Sigg d2f0f847af Support custom accumulator provided as region to gpu.all_reduce.
In addition to specifying the type of accumulation through the 'op' attribute, the accumulation can now also be specified as an arbitrary code region.

Adds a gpu.yield op to specify the result of the accumulation.

Also support more types (integers) and accumulations (mul).
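
A sketch of a custom accumulator (value names hypothetical):

```mlir
func @reduce_mul(%value: f32) -> f32 {
  %product = "gpu.all_reduce"(%value) ({
  ^bb0(%lhs: f32, %rhs: f32):
    %mul = mulf %lhs, %rhs : f32
    "gpu.yield"(%mul) : (f32) -> ()
  }) : (f32) -> (f32)
  return %product : f32
}
```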

PiperOrigin-RevId: 275065447
2019-10-16 10:43:44 -07:00
Alex Zinenko 90d65d32d6 Use named modules for gpu.launch_func
The kernel function called by gpu.launch_func is now placed into an isolated
nested module during the outlining stage to simplify separate compilation.
Until recently, modules did not have names and could not be referenced. This
limitation was circumvented by introducing a stub kernel with the same name at
the same nesting level as the module containing the actual kernel. This
relation is only effective in one direction: from actual kernel function to its
launch_func "caller".

Leverage the recently introduced symbol name attributes on modules to refer to
a specific nested module from `gpu.launch_func`. This removes the implicit
connection between the identically named stub and kernel functions. It also
enables support for `gpu.launch_func`s to call different kernels located in the
same module.

PiperOrigin-RevId: 273491891
2019-10-08 04:30:32 -07:00
Nicolas Vasilache ddf737c5da Promote MemRefDescriptor to a pointer to struct when passing function boundaries in LLVMLowering.
The strided MemRef RFC discusses a normalized descriptor and interaction with library calls (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Lowering of nested LLVM structs as value types does not play nicely with externally compiled C/C++ functions due to ABI issues.
Solving the ABI problem in general is very complex and most likely involves taking
a dependence on clang that we do not want at the moment.

A simple workaround is to pass pointers to memref descriptors at function boundaries, which this CL implements.
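
A sketch of the boundary type (descriptor field layout assumed from the strided memref RFC; the exact fields evolved over time):

```mlir
// A memref<?xf32> argument is now passed as a pointer to its descriptor
// struct { float* base, i64 offset, [1 x i64] sizes, [1 x i64] strides }
// rather than as the struct by value.
!memref_1d_desc_ptr = type !llvm<"{ float*, i64, [1 x i64], [1 x i64] }*">
```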

PiperOrigin-RevId: 271591708
2019-09-27 09:57:36 -07:00
Christian Sigg 116dac00ba Add AllReduceOp to GPU dialect with lowering to NVVM.
The reduction operation is currently fixed to "add", and the scope is fixed to "workgroup".

The implementation is currently limited to sizes that are multiples of 32 (the warp size) and no larger than 1024.

PiperOrigin-RevId: 271290265
2019-09-26 00:17:50 -07:00
Christian Sigg 74cdbf5909 Clone called functions into nested GPU module.
PiperOrigin-RevId: 270891190
2019-09-24 06:29:54 -07:00
Christian Sigg b8676da1fc Outline GPU kernel function into a nested module.
Roll forward of commit 5684a12.

When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.

PiperOrigin-RevId: 270639748
2019-09-23 03:17:01 -07:00
George Karpenkov 2df646bef6 Automated rollback of commit 5684a12434
PiperOrigin-RevId: 270126672
2019-09-19 14:34:30 -07:00
MLIR Team 5684a12434 Outline GPU kernel function into a nested module.
When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.

PiperOrigin-RevId: 269987720
2019-09-19 01:51:28 -07:00
Stephan Herhut 318ff019cf Addressing some late review comments on kernel inlining.
Just formatting and better lit tests, no functional change.

PiperOrigin-RevId: 267942907
2019-09-09 01:15:47 -07:00
Stephan Herhut 7eb25cd367 Make GPU kernel outlining test independent of value names.
PiperOrigin-RevId: 267323604
2019-09-05 01:46:26 -07:00
Stephan Herhut dfd06af562 Make GPU kernel outlining inline constants.
It is generally beneficial to pass fewer arguments to a kernel, so cloning
constants into the kernel is worthwhile.

PiperOrigin-RevId: 267139084
2019-09-04 06:16:07 -07:00