Commit Graph

3785 Commits

Author SHA1 Message Date
Nicolas Vasilache 3f906c54a2 [mlir][Vector] Add 2-D vector contract lowering to ReduceOp
This new pattern mixes vector.transpose and direct lowering to vector.reduce.
This allows more progressive lowering than immediately going to insert/extract and
composes more nicely with other canonicalizations.
This has two use cases:
1. For very wide vectors, the generated IR may be much smaller.
2. When we have a custom lowering for transpose ops, we can target it directly
rather than relying on LLVM (a hedged sketch follows below).
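
A hedged sketch of the kind of IR this path targets (the shapes, the reduction kind, and the op forms below are illustrative assumptions, not taken from the patch):

```
// Illustrative only: one row of a 2-D contraction expressed as a transpose
// followed by a 1-D reduction, instead of full insert/extract unrolling.
%t = vector.transpose %a, [1, 0] : vector<4x8xf32> to vector<8x4xf32>
%row = vector.extract %t[0] : vector<8x4xf32>
%red = vector.reduction "add", %row : vector<4xf32> into f32
```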

Differential Revision: https://reviews.llvm.org/D85428
2020-08-07 06:17:48 -04:00
Nicolas Vasilache 1353cbc257 [mlir][Vector] NFC - Use matchAndRewrite in ContractionOp lowering patterns
Replace the use of separate match and rewrite methods, which unnecessarily duplicates logic.

Differential Revision: https://reviews.llvm.org/D85421
2020-08-06 09:02:25 -04:00
Nicolas Vasilache 54fafd17a7 [mlir][Linalg] Introduce canonicalization to remove dead LinalgOps
When any of the memrefs in a structured linalg op has a dimension of size zero, the op becomes dead.
This is consistent with the fact that linalg ops deduce their loop bounds from their operands.

Note, however, that this is not the case for `tensor<0xelt_type>`, which is a special convention
that must be lowered away into either `memref<elt_type>` or just `elt_type` before this
canonicalization can kick in.
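
For illustration, a hedged sketch of an op this canonicalization could erase (example made up, not from the patch):

```
// A zero-sized dimension in an operand makes the deduced loop nest empty,
// so the op performs no work and is dead.
linalg.copy(%src, %dst) : memref<0x16xf32>, memref<0x16xf32>
```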

Differential Revision: https://reviews.llvm.org/D85413
2020-08-06 06:08:46 -04:00
Christian Sigg 45676a8936 [MLIR] Change GpuLaunchFuncToGpuRuntimeCallsPass to wrap a RewritePattern with the same functionality.
The RewritePattern will become one of several, and will be part of the LLVM conversion pass (instead of a separate pass following LLVM conversion).

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D84946
2020-08-06 11:55:46 +02:00
Alexander Belyaev 3effc35015 [mlir] Lower DimOp to LLVM for unranked memrefs.
Differential Revision: https://reviews.llvm.org/D85361
2020-08-06 11:46:11 +02:00
Alex Zinenko 5446ec8507 [mlir] take MLIRContext instead of LLVMDialect in getters of LLVMType's
Historical modeling of the LLVM dialect types had been wrapping LLVM IR types
and therefore needed access to the instance of LLVMContext stored in the
LLVMDialect. The new modeling does not rely on that and only needs the
MLIRContext that is used for uniquing, similarly to other MLIR types. Change
LLVMType::get<Kind>Ty functions to take `MLIRContext *` instead of
`LLVMDialect *` as first argument. This brings the code base closer to
completely removing the dependence on LLVMContext from the LLVMDialect,
together with additional support for thread-safety of its use.

Depends On D85371

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85372
2020-08-06 11:05:40 +02:00
Alex Zinenko d3a9807674 [mlir] Remove most uses of LLVMDialect::getModule
This prepares for the removal of llvm::Module and LLVMContext from the
mlir::LLVMDialect.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85371
2020-08-06 10:54:30 +02:00
aartbik 39379916a7 [mlir] [VectorOps] Add masked load/store operations to Vector dialect
The intrinsics were already supported and vector.transfer_read/write lowered
directly into these operations. By providing them as individual ops, however,
clients can use them directly, and it opens up progressive lowering of transfer
operations at higher levels (rather than the direct lowering to LLVM IR done now).
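
A rough, hedged sketch of how the new ops might look in IR (the op names come from the patch title; the assembly format below is an approximation, not taken from the patch; the ops map to the llvm.masked.load / llvm.masked.store intrinsics):

```
// Hedged sketch only: syntax approximated for illustration.
%l = vector.maskedload %base, %mask, %pass_thru
    : memref<?xf32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
vector.maskedstore %base, %mask, %l : memref<?xf32>, vector<16xi1>, vector<16xf32>
```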

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D85357
2020-08-05 16:45:24 -07:00
Alex Zinenko b2ab375d1f [mlir] use the new stateful LLVM type translator by default
Previous type model in the LLVM dialect did not support identified structure
types properly and therefore could use stateless translations implemented as
free functions. The new model supports identified structs and must keep track
of the identified structure types present in the target context (LLVMContext or
MLIRContext) to avoid creating duplicate structs due to LLVM's type
auto-renaming. Expose the stateful type translation classes and use them during
translation, storing the state as part of ModuleTranslation.

Drop the test type translation mechanism that is no longer necessary and update
the tests to exercise type translation as part of the main translation flow.

Update the code in vector-to-LLVM dialect conversion that relied on stateless
translation to use the new class in a stateless manner.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85297
2020-08-06 00:36:33 +02:00
Lei Zhang 0d03b3901d [mlir][StandardToSPIRV] Use spv.UMod for index re-calculation
Per Vulkan's SPIR-V environment spec: "While the OpSRem and OpSMod
instructions are supported by the Vulkan environment, they require
non-negative values and thus do not enable additional functionality
beyond what OpUMod provides."

The `getOffsetForBitwidth` function is used for lowering std.load
and std.store, whose indices are of `index` type and cannot be
negative. So it is okay to use spv.UMod directly here and still be
exact. Also made the comment explicit about this assumption.
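
A minimal hedged sketch of the index re-calculation with spv.UMod (operand names and the constant are assumptions for illustration, not from the patch):

```
// Unsigned modulo is exact here because `index`-typed load/store indices are
// never negative.
%four = spv.constant 4 : i32
%rem = spv.UMod %flatIndex, %four : i32
```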

Differential Revision: https://reviews.llvm.org/D83714
2020-08-05 14:52:04 -04:00
Lei Zhang 48378a32af [spirv] Fix bitwidth emulation for Workgroup storage class
If Int16 is not available, 16-bit integers inside Workgroup storage
class should be emulated via 32-bit integers. This was previously
broken because the capability querying logic was incorrectly
intercepting all storage classes where it meant to only handle
interface storage classes. Adjusted where we return to fix this.

Differential Revision: https://reviews.llvm.org/D85308
2020-08-05 14:44:03 -04:00
Alexander Belyaev 9fdd0df949 [mlir][nfc] Rename `promoteMemRefDescriptors` to `promoteOperands`.
`promoteMemRefDescriptors` also converts the types of every operand, not only
memref-typed ones, which the `promoteMemRefDescriptors` name does not imply.

Differential Revision: https://reviews.llvm.org/D85325
2020-08-05 20:24:48 +02:00
Uday Bondhugula 1d75f004ab [MLIR][NFC] Fix clang-tidy warnings in std to llvm conversion
Fix clang-tidy warnings in std to llvm conversion.
2020-08-05 22:12:05 +05:30
Alexander Belyaev bc7456fd8a [mlir] Fix rank bitwidth in UnrankedMemRefType conversion.
Differential Revision: https://reviews.llvm.org/D85300
2020-08-05 18:35:23 +02:00
Alex Zinenko 75f239e975 [mlir] Initial version of C APIs
Introduce an initial version of a C API for MLIR core IR components: Value, Type,
Attribute, Operation, Region, Block, Location. These APIs allow for both
inspection and creation of the IR in the generic form and are intended for wrapping
in high-level library- and language-specific constructs. At this point, there
is no stability guarantee provided for the API.

Reviewed By: stellaraccident, lattner

Differential Revision: https://reviews.llvm.org/D83310
2020-08-05 15:04:08 +02:00
Arpith C. Jacob fab4b59961 [mlir] Conversion of ViewOp with memory space to LLVM.
Handle the case where the ViewOp takes in a memref that has
a memory space.

Reviewed By: ftynse, bondhugula, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D85048
2020-08-05 12:19:52 +02:00
Alexander Belyaev a3d427d30c [mlir] Lower RankOp to LLVM for unranked memrefs.
Differential Revision: https://reviews.llvm.org/D85273
2020-08-05 12:13:43 +02:00
Frederik Gossen 4cd923784e [MLIR][Shape] Expose extent tensor type builder
The extent tensor type is a `tensor<?xindex>` that is used in the shape dialect.
To facilitate the use of this type when working with the shape dialect, we
expose the helper function for its construction.

Differential Revision: https://reviews.llvm.org/D85121
2020-08-05 09:42:57 +00:00
George Mitenkov e739648cfa [MLIR][SPIRVToLLVM] Conversion pattern for loop op
This patch introduces a conversion of `spv.loop` to LLVM dialect.
Similarly to `spv.selection`, op's control attributes are not mapped
to LLVM yet and therefore the conversion fails if the loop control is
not `None`. Also, all blocks within the loop should be reachable in
order for conversion to succeed.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D84245
2020-08-05 10:33:54 +03:00
Rahul Joshi 1d6a724aa1 [MLIR] Change FunctionType::get() and TupleType::get() to use TypeRange
- Moved TypeRange into its own header/cpp file and added hashing support.
- Changed FunctionType::get() and TupleType::get() to use TypeRange.

Differential Revision: https://reviews.llvm.org/D85075
2020-08-04 12:43:40 -07:00
Diego Caballero 3bfbc5df87 [MLIR][Affine] Fix createPrivateMemRef in affine fusion
Always define a remapping for the memref replacement (`indexRemap`)
with the proper number of inputs, including all the `outerIVs`, so that
the number of inputs and the number of operands provided for the map match.

Reviewed By: bondhugula, andydavis1

Differential Revision: https://reviews.llvm.org/D85177
2020-08-04 12:17:48 -07:00
aartbik e8dcf5f87d [mlir] [VectorOps] Add expand/compress operations to Vector dialect
Introduces the expand and compress operations to the Vector dialect
(important memory operations for sparse computations), together
with a first reference implementation that lowers to the LLVM IR
dialect to enable running on CPU (and other targets that support
the corresponding LLVM IR intrinsics).
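
A rough, hedged sketch of the new ops in IR (assembly format approximated, not taken from the patch; the ops map to the llvm.masked.expandload / llvm.masked.compressstore intrinsics):

```
// Hedged sketch only: syntax approximated for illustration.
%e = vector.expandload %base, %mask, %pass_thru
    : memref<?xf32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
vector.compressstore %base, %mask, %e : memref<?xf32>, vector<16xi1>, vector<16xf32>
```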

Reviewed By: reidtatge

Differential Revision: https://reviews.llvm.org/D84888
2020-08-04 12:00:42 -07:00
Yash Jain 56593fa370 [MLIR] Simplify semi-affine expressions
Simplify semi-affine expressions for operations like ceildiv,
floordiv, and modulo by a given symbol by checking divisibility by that
symbol.

Some properties used in simplification are:

1) Commutative property of the floordiv and ceildiv:
((expr1 floordiv expr2) floordiv expr3 ) = ((expr1 floordiv expr3) floordiv expr2)
((expr1 ceildiv expr2) ceildiv expr3 ) = ((expr1 ceildiv expr3) ceildiv expr2)

If the operations are different, no simplification is possible, as there is
no property that simplifies expressions like
((expr1 ceildiv expr2) floordiv expr3) or ((expr1 floordiv expr2)
ceildiv expr3).

2) If both expr1 and expr2 are divisible by expr3, then:
(expr1 % expr2) / expr3 = ((expr1 / expr3) % (expr2 / expr3))
where / denotes division.

3) If expr1 is divisible by expr2, then expr1 % expr2 = 0 (see the illustration below).
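
A hedged illustration of property 3) on affine maps (maps made up for this example):

```
// s0 divides d0 * s0, so the modulo term folds away.
#before = affine_map<(d0)[s0] -> ((d0 * s0) mod s0)>
#after  = affine_map<(d0)[s0] -> (0)>
```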

Signed-off-by: Yash Jain <yash.jain@polymagelabs.com>

Differential Revision: https://reviews.llvm.org/D84920
2020-08-04 22:07:18 +05:30
Nicolas Vasilache 2d0b05969b [mlir][Vector] Relax condition for `splitFullAndPartialTransferPrecondition`
The `splitFullAndPartialTransferPrecondition` has a restrictive condition to
prevent the pattern from being applied recursively if it is nested under an scf.IfOp.
Relaxing the condition to only require that the immediate parent op not be an
scf.IfOp lets the pattern apply more generally while still preventing recursion.

Differential Revision: https://reviews.llvm.org/D85209
2020-08-04 10:06:21 -04:00
Nicolas Vasilache 1a4263d394 [mlir][Vector] Add linalg.copy-based pattern for splitting vector.transfer_read into full and partial copies.
This revision adds a transformation and a pattern that rewrites a "maybe masked" `vector.transfer_read %view[...], %pad` into a pattern resembling:

```
   %1:3 = scf.if (%inBounds) {
      scf.yield %view : memref<A...>, index, index
    } else {
      %2 = linalg.fill(%extra_alloc, %pad)
      %3 = subview %view [...][...][...]
      linalg.copy(%3, %extra_alloc)
      %4 = memref_cast %extra_alloc : memref<B...> to memref<A...>
      scf.yield %4 : memref<A...>, index, index
   }
   %res = vector.transfer_read %1#0[%1#1, %1#2] {masked = [false ... false]}
```
where `extra_alloc` is a buffer of one vector, alloca'ed at the top of the function.

This rewrite makes it possible to realize the "always full tile" abstraction where vector.transfer_read operations are guaranteed to read from a padded full buffer.
The extra work only occurs on the boundary tiles.
2020-08-04 08:46:08 -04:00
Alex Zinenko cb9f9df5f8 [mlir] Fix GCC5 compilation problem in MLIR->LLVM type translation
GCC5 seems to dislike generic lambdas calling a method of the class
containing the lambda without explicit `this`.
2020-08-04 14:42:17 +02:00
Alex Zinenko ec1f4e7c3b [mlir] switch the modeling of LLVM types to use the new mechanism
A new first-party modeling for LLVM IR types in the LLVM dialect has been
developed in parallel to the existing modeling based on wrapping LLVM `Type *`
instances. It resolves the long-standing problem of modeling identified
structure types, including recursive structures, and enables future removal of
LLVMContext and related locking mechanisms from LLVMDialect.

This commit only switches the modeling by (a) renaming LLVMTypeNew to LLVMType,
(b) removing the old implementation of LLVMType, and (c) updating the tests. It
is intentionally minimal. Separate commits will remove the infrastructure built
for the transition and update API uses where appropriate.

Depends On D85020

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85021
2020-08-04 14:29:25 +02:00
Alex Zinenko 6abd7e2e62 [mlir] provide same APIs as existing LLVMType in the new LLVM type modeling
These are intended to smoothen the transition and may be removed in the future
in favor of more MLIR-compatible APIs. They intentionally have the same
semantics as the existing functions, which must remain stable until the
transition is complete.

Depends On D85019

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D85020
2020-08-04 13:49:14 +02:00
Alex Zinenko d4fbbab2e4 [mlir] translate types between MLIR LLVM dialect and LLVM IR
With new LLVM dialect type modeling, the dialect types no longer wrap LLVM IR
types. Therefore, they need to be translated to and from LLVM IR during export
and import. Introduce the relevant functionality for translating types. It is
currently exercised by an ad-hoc type translation roundtripping test that will
be subsumed by the actual translation test when the type system transition is
complete.

Depends On D84339

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D85019
2020-08-04 13:42:43 +02:00
Alexander Belyaev 8979a9cdf2 [mlir] Fix adding wrong operand value in `promoteMemRefDescriptors`.
The bug was not noticed because we didn't have a lot of custom type conversions
directly to LLVM dialect.

Differential Revision: https://reviews.llvm.org/D85192
2020-08-04 13:39:56 +02:00
Jakub Lichman 689096965d [mlir][Linalg] Conv ops lowering to std calls added.
Lowering of the newly defined Conv ops in TC syntax to the standard
dialect was not supported; this commit adds support for it.

Differential Revision: https://reviews.llvm.org/D84840
2020-08-04 07:12:58 +00:00
MaheshRavishankar 32f3a9a9d6 [mlir][DialectConversion] Remove usage of std::distance to track position.
Remove use of iterator::difference_type to know where to insert a
moved or erased block during undo actions.

Differential Revision: https://reviews.llvm.org/D85066
2020-08-03 10:06:05 -07:00
MaheshRavishankar e888886cc3 [mlir][DialectConversion] Add support for mergeBlocks in ConversionPatternRewriter.
Differential Revision: https://reviews.llvm.org/D84795
2020-08-03 10:06:04 -07:00
Nicolas Vasilache d313e9c12e [mlir][Vector] Add transformation + pattern to split vector.transfer_read into full and partial copies.
This revision adds a transformation and a pattern that rewrites a "maybe masked" `vector.transfer_read %view[...], %pad` into a pattern resembling:

```
   %1:3 = scf.if (%inBounds) {
      scf.yield %view : memref<A...>, index, index
    } else {
      %2 = vector.transfer_read %view[...], %pad : memref<A...>, vector<...>
      %3 = vector.type_cast %extra_alloc : memref<...> to memref<vector<...>>
      store %2, %3[] : memref<vector<...>>
      %4 = memref_cast %extra_alloc : memref<B...> to memref<A...>
      scf.yield %4 : memref<A...>, index, index
   }
   %res = vector.transfer_read %1#0[%1#1, %1#2] {masked = [false ... false]}
```
where `extra_alloc` is a buffer of one vector, alloca'ed at the top of the function.

This rewrite makes it possible to realize the "always full tile" abstraction where vector.transfer_read operations are guaranteed to read from a padded full buffer.
The extra work only occurs on the boundary tiles.

Differential Revision: https://reviews.llvm.org/D84631
2020-08-03 12:58:18 -04:00
Mehdi Amini 7ba82a7320 Revert "[mlir][Vector] Add transformation + pattern to split vector.transfer_read into full and partial copies."
This reverts commit 35b65be041.

Build is broken with -DBUILD_SHARED_LIBS=ON with some undefined
references like:

VectorTransforms.cpp:(.text._ZN4llvm12function_refIFvllEE11callback_fnIZL24createScopedInBoundsCondN4mlir25VectorTransferOpInterfaceEE3$_8EEvlll+0xa5): undefined reference to `mlir::edsc::op::operator+(mlir::Value, mlir::Value)'
2020-08-03 16:16:47 +00:00
Alex Zinenko 0c40af6b59 [mlir] First-party modeling of LLVM types
The current modeling of LLVM IR types in MLIR is based on the LLVMType class
that wraps a raw `llvm::Type *` and delegates uniquing, printing and parsing to
LLVM itself. This model makes thread-safe type manipulation hard and is being
progressively replaced with a cleaner MLIR model that replicates the type
system.  Introduce a set of classes reflecting the LLVM IR type system in MLIR
instead of wrapping the existing types. These are currently introduced as
separate classes without affecting the dialect flow, and are exercised through
a test dialect. Once feature parity is reached, the old implementation will be
gradually substituted with the new one.

Depends On D84171

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D84339
2020-08-03 15:45:29 +02:00
Julian Gross 6d47431d7e [mlir] Extended Buffer Assignment to support AllocaOps.
Added support for AllocaOps in Buffer Assignment.

Differential Revision: https://reviews.llvm.org/D85017
2020-08-03 11:20:30 +02:00
Nicolas Vasilache 35b65be041 [mlir][Vector] Add transformation + pattern to split vector.transfer_read into full and partial copies.
This revision adds a transformation and a pattern that rewrites a "maybe masked" `vector.transfer_read %view[...], %pad` into a pattern resembling:

```
   %1:3 = scf.if (%inBounds) {
      scf.yield %view : memref<A...>, index, index
    } else {
      %2 = vector.transfer_read %view[...], %pad : memref<A...>, vector<...>
      %3 = vector.type_cast %extra_alloc : memref<...> to memref<vector<...>>
      store %2, %3[] : memref<vector<...>>
      %4 = memref_cast %extra_alloc : memref<B...> to memref<A...>
      scf.yield %4 : memref<A...>, index, index
   }
   %res = vector.transfer_read %1#0[%1#1, %1#2] {masked = [false ... false]}
```
where `extra_alloc` is a buffer of one vector, alloca'ed at the top of the function.

This rewrite makes it possible to realize the "always full tile" abstraction where vector.transfer_read operations are guaranteed to read from a padded full buffer.
The extra work only occurs on the boundary tiles.

Differential Revision: https://reviews.llvm.org/D84631
2020-08-03 04:53:43 -04:00
Frederik Gossen 11492be9d7 [MLIR][Shape] Lower `shape.broadcast` to `scf`
Differential Revision: https://reviews.llvm.org/D85027
2020-08-03 08:20:14 +00:00
George Mitenkov 91f6a5f785 [MLIR][SPIRV] Control attributes support for loop and selection
This patch handles loopControl and selectionControl in parsing and
printing. In order to reuse the functionality, and to avoid handling cases when
the `{` of the region is parsed as a dictionary attribute, a `control` keyword was
introduced. `None` is the default control attribute. This functionality can
later be extended to `spv.func`.
Also, loopControl and selectionControl can now be (de)serialized.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84175
2020-08-03 09:31:37 +03:00
Benjamin Kramer eb41f9edde [mlir][Vector] Simplify code a bit. NFCI. 2020-08-01 14:49:19 +02:00
Thomas Raoux cfb955ac37 [mlir][spirv] Relax restriction on pointer type for CooperativeMatrix load/store
This change allows CooperativeMatrix Load/Store operations to use a pointer type
that may not match the matrix element type. This allows us to declare buffers
with a larger type size than the matrix element type. This follows the SPIR-V spec
and is needed to use cooperative matrices in combination with shared local
memory efficiently.

Differential Revision: https://reviews.llvm.org/D84993
2020-07-31 08:02:21 -07:00
Sourabh Singh Tomar 793c29a267 [MLIR,OpenMP][NFCI] Removed loop for accessing regions of ParallelOp
`ParallelOp` has only one region associated with it.

Reviewed By: kiranchandramohan, ftynse

Differential Revision: https://reviews.llvm.org/D85008
2020-07-31 18:52:44 +05:30
Jakub Lichman eef1bfb2d2 [mlir][Linalg] Conv {1,2,3}D ops defined with TC syntax
Replaced the definition of named ND ConvOps with tensor comprehension
syntax, which reduces boilerplate code significantly. Furthermore,
new ops were added to support TF convolutions (without strides and dilations).

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84628
2020-07-31 13:20:17 +02:00
Thomas Raoux 59156bad03 [mlir][spirv] Add support for converting memref of vector to SPIR-V
This allows declaring buffers and allocs of vectors so that we can support vector
load/store.

Differential Revision: https://reviews.llvm.org/D84982
2020-07-30 15:05:40 -07:00
Alexander Belyaev 6b8c641d8e [mlir] NFC: Expose `getElementPtrType` and `getSizes` methods of AllocOpLowering.
Differential Revision: https://reviews.llvm.org/D84917
2020-07-30 20:18:29 +02:00
Johannes Doerfert 4d83aa4771 [MLIR][OpenMP] Fix OpenMPIRBuilder usage after D82470 2020-07-30 11:54:57 -05:00
Stephan Herhut 85defd23aa [mlir][shape] Use memref of index in shape lowering
Now that we can have a memref of index type, we no longer need to materialize shapes in i64 and then index_cast.
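
A hedged sketch of what the lowering can now emit (ops and names are illustrative assumptions, not taken from the patch):

```
// Extents are stored directly as `index` values; no i64 buffer and no index_cast.
%shape = alloca(%rank) : memref<?xindex>
store %extent, %shape[%dim] : memref<?xindex>
```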

Differential Revision: https://reviews.llvm.org/D84938
2020-07-30 15:12:43 +02:00
Christian Sigg 13a3d88666 [MLIR] Don't pass separate LowerToLLVMOptions when we already pass a LLVMTypeConverter which contains those options already.
This also prevents passing inconsistent options.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D84915
2020-07-30 14:55:23 +02:00
Abhishek Varma 76d07503f0 [MLIR] Introduce inter-procedural memref layout normalization
-- Introduces a pass that normalizes affine layout maps to the identity layout map, both within and across functions, by rewriting function arguments and call operands where necessary (a hedged sketch of the effect is shown below).
-- Memref normalization is now implemented entirely in the module pass '-normalize-memrefs'; the limited intra-procedural version has been removed from '-simplify-affine-structures'.
-- Run using -normalize-memrefs.
-- Return ops are not handled yet and will be handled in subsequent revisions.
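
A hedged sketch of the intended effect on a function argument (example assumed, not taken from the patch):

```
// Before: a non-identity (tiled) layout map on the argument type.
#tile = affine_map<(d0) -> (d0 floordiv 4, d0 mod 4)>
// func @use(%arg0: memref<16xf32, #tile>)
// After -normalize-memrefs: identity layout; callers are rewritten to match.
// func @use(%arg0: memref<4x4xf32>)
```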

Signed-off-by: Abhishek Varma <abhishek.varma@polymagelabs.com>

Differential Revision: https://reviews.llvm.org/D84490
2020-07-30 18:12:56 +05:30