Commit Graph

5163 Commits

River Riddle 8c39e70679 [mlir][OpFormatGen] Add support for eliding UnitAttr when used to anchor an optional group
Unit attributes are given meaning by their existence, and thus have no meaningful value beyond "is it present". As such, in the format of an operation, unit attributes are generally used to guard the printing of other elements and aren't printed themselves; the presence of the group when parsing means the unit attribute should be added. This revision adds support to the declarative format for eliding unit attributes in situations where they anchor an optional group but aren't the first element.

For example,
```
let assemblyFormat = "(`is_optional` $unit_attr^)? attr-dict";
```

would print `foo.op is_optional` when $unit_attr is present, instead of the current `foo.op is_optional unit`.

Differential Revision: https://reviews.llvm.org/D84577
2020-08-03 14:31:41 -07:00
MaheshRavishankar 32f3a9a9d6 [mlir][DialectConversion] Remove usage of std::distance to track position.
Remove use of iterator::difference_type to know where to insert a
moved or erased block during undo actions.

Differential Revision: https://reviews.llvm.org/D85066
2020-08-03 10:06:05 -07:00
MaheshRavishankar e888886cc3 [mlir][DialectConversion] Add support for mergeBlocks in ConversionPatternRewriter.
Differential Revision: https://reviews.llvm.org/D84795
2020-08-03 10:06:04 -07:00
Nicolas Vasilache d313e9c12e [mlir][Vector] Add transformation + pattern to split vector.transfer_read into full and partial copies.
This revision adds a transformation and a pattern that rewrites a "maybe masked" `vector.transfer_read %view[...], %pad` into a pattern resembling:

```
   %1:3 = scf.if (%inBounds) {
      scf.yield %view : memref<A...>, index, index
    } else {
      %2 = vector.transfer_read %view[...], %pad : memref<A...>, vector<...>
      %3 = vector.type_cast %extra_alloc : memref<...> to memref<vector<...>>
      store %2, %3[] : memref<vector<...>>
      %4 = memref_cast %extra_alloc : memref<B...> to memref<A...>
      scf.yield %4 : memref<A...>, index, index
   }
   %res = vector.transfer_read %1#0[%1#1, %1#2] {masked = [false ... false]}
```
where `extra_alloc` is a buffer of one vector, alloca'ed at the top of the function.

This rewrite makes it possible to realize the "always full tile" abstraction where vector.transfer_read operations are guaranteed to read from a padded full buffer.
The extra work only occurs on the boundary tiles.

Differential Revision: https://reviews.llvm.org/D84631
2020-08-03 12:58:18 -04:00
Mehdi Amini 7ba82a7320 Revert "[mlir][Vector] Add transformation + pattern to split vector.transfer_read into full and partial copies."
This reverts commit 35b65be041.

Build is broken with -DBUILD_SHARED_LIBS=ON with some undefined
references like:

VectorTransforms.cpp:(.text._ZN4llvm12function_refIFvllEE11callback_fnIZL24createScopedInBoundsCondN4mlir25VectorTransferOpInterfaceEE3$_8EEvlll+0xa5): undefined reference to `mlir::edsc::op::operator+(mlir::Value, mlir::Value)'
2020-08-03 16:16:47 +00:00
Alex Zinenko 0c40af6b59 [mlir] First-party modeling of LLVM types
The current modeling of LLVM IR types in MLIR is based on the LLVMType class
that wraps a raw `llvm::Type *` and delegates uniquing, printing and parsing to
LLVM itself. This model makes thread-safe type manipulation hard and is being
progressively replaced with a cleaner MLIR model that replicates the type
system.  Introduce a set of classes reflecting the LLVM IR type system in MLIR
instead of wrapping the existing types. These are currently introduced as
separate classes without affecting the dialect flow, and are exercised through
a test dialect. Once feature parity is reached, the old implementation will be
gradually substituted with the new one.

Depends On D84171

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D84339
2020-08-03 15:45:29 +02:00
Julian Gross 6d47431d7e [mlir] Extended Buffer Assignment to support AllocaOps.
Added support for AllocaOps in Buffer Assignment.

Differential Revision: https://reviews.llvm.org/D85017
2020-08-03 11:20:30 +02:00
Nicolas Vasilache 35b65be041 [mlir][Vector] Add transformation + pattern to split vector.transfer_read into full and partial copies.
This revision adds a transformation and a pattern that rewrites a "maybe masked" `vector.transfer_read %view[...], %pad` into a pattern resembling:

```
   %1:3 = scf.if (%inBounds) {
      scf.yield %view : memref<A...>, index, index
    } else {
      %2 = vector.transfer_read %view[...], %pad : memref<A...>, vector<...>
      %3 = vector.type_cast %extra_alloc : memref<...> to memref<vector<...>>
      store %2, %3[] : memref<vector<...>>
      %4 = memref_cast %extra_alloc : memref<B...> to memref<A...>
      scf.yield %4 : memref<A...>, index, index
   }
   %res = vector.transfer_read %1#0[%1#1, %1#2] {masked = [false ... false]}
```
where `extra_alloc` is a buffer of one vector, alloca'ed at the top of the function.

This rewrite makes it possible to realize the "always full tile" abstraction where vector.transfer_read operations are guaranteed to read from a padded full buffer.
The extra work only occurs on the boundary tiles.

Differential Revision: https://reviews.llvm.org/D84631
2020-08-03 04:53:43 -04:00
Frederik Gossen 11492be9d7 [MLIR][Shape] Lower `shape.broadcast` to `scf`
Differential Revision: https://reviews.llvm.org/D85027
2020-08-03 08:20:14 +00:00
George Mitenkov 91f6a5f785 [MLIR][SPIRV] Control attributes support for loop and selection
This patch handles loopControl and selectionControl in parsing and
printing. In order to reuse the functionality, and to avoid cases where
the `{` of the region is parsed as a dictionary attribute, a `control` keyword was
introduced. `None` is the default control attribute. This functionality can
later be extended to `spv.func`.
Also, loopControl and selectionControl can now be (de)serialized.
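For reference, a minimal sketch of the surface syntax this enables (region contents are elided and the required block structure of `spv.loop` is not shown):

```
// Explicit loop control; when the control is None it is simply omitted
// from the printed form.
spv.loop control(Unroll) {
  // entry, header, continue and merge blocks elided
}

// Selection with an explicit control.
spv.selection control(Flatten) {
  // selection blocks elided
}
```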

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84175
2020-08-03 09:31:37 +03:00
Mehdi Amini 4091413c00 Remove debug flags from test (NFC) 2020-08-02 16:59:20 +00:00
Benjamin Kramer eb41f9edde [mlir][Vector] Simplify code a bit. NFCI. 2020-08-01 14:49:19 +02:00
Jacques Pienaar 86a78546b9 [mlir] Add shape.with_shape op
This is an operation that returns a new ValueShape with a different shape. It is useful for composing shape function calls and reusing existing shape transfer functions.

Just adding the op in this change.

Differential Revision: https://reviews.llvm.org/D84217
2020-07-31 14:46:48 -07:00
River Riddle 2a6c8b2e95 [mlir][PassIncGen] Refactor how pass registration is generated
The current output is a bit clunky and requires including files+macros everywhere, or manually wrapping the file inclusion in a registration function. This revision refactors the pass backend to automatically generate `registerFooPass`/`registerFooPasses` functions that wrap the pass registration. `gen-pass-decls` now takes a `-name` input that specifies a tag name for the group of passes that are being generated. For each pass, the generator now produces a `registerFooPass` where `Foo` is the name of the definition specified in tablegen. It also generates a `registerGroupPasses`, where `Group` is the tag provided via the `-name` input parameter, that registers all of the passes present.

Differential Revision: https://reviews.llvm.org/D84983
2020-07-31 13:20:37 -07:00
Rahul Joshi eb8c72cb0d [MLIR][NFC] Add FuncOp::getArgumentTypes()
Differential Revision: https://reviews.llvm.org/D85038
2020-07-31 13:18:03 -07:00
Thomas Raoux cfb955ac37 [mlir][spirv] Relax restriction on pointer type for CooperativeMatrix load/store
This change allows CooperativeMatrix load/store operations to use a pointer type
that may not match the matrix element type. This allows us to declare buffers
with a larger type size than the matrix element type. This follows the SPIR-V spec
and is needed to be able to use cooperative matrices in combination with
shared local memory efficiently.

Differential Revision: https://reviews.llvm.org/D84993
2020-07-31 08:02:21 -07:00
Frederik Gossen 6983cf3a57 [MLIR][Shape] Allow unsafe `shape.broadcast`
In a context in which `shape.broadcast` is known not to produce an error value,
we want it to operate solely on extent tensors. The operation's behavior is
then undefined in the error case as the result type cannot hold this value.
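A hedged sketch of the extent-tensor form (the type list follows the style of the other shape-dialect ops and the exact printed format may differ):

```
// Operates purely on extent tensors; if the operand shapes are not
// broadcastable the behavior is undefined, since tensor<?xindex> cannot
// represent the error value.
%result = shape.broadcast %lhs, %rhs
    : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>
```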

Differential Revision: https://reviews.llvm.org/D84933
2020-07-31 14:18:06 +00:00
Sourabh Singh Tomar 793c29a267 [MLIR,OpenMP][NFCI] Removed loop for accessing regions of ParallelOp
`ParallelOp` has only one region associated with it.

Reviewed By: kiranchandramohan, ftynse

Differential Revision: https://reviews.llvm.org/D85008
2020-07-31 18:52:44 +05:30
Jakub Lichman eef1bfb2d2 [mlir][Linalg] Conv {1,2,3}D ops defined with TC syntax
Replaced the definitions of the named ND ConvOps with tensor comprehension
syntax, which reduces boilerplate code significantly. Furthermore,
new ops to support TF convolutions were added (without strides and dilations).

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84628
2020-07-31 13:20:17 +02:00
Alexander Belyaev 4d6eec8e70 [mlir] Add TFFramework dialect to DialectSymbolRegistry.
Differential Revision: https://reviews.llvm.org/D84918
2020-07-31 11:00:54 +02:00
Rahul Joshi a34a8d5260 [MLIR][NFC] Add SymbolUse::UseRange::empty()
Differential Revision: https://reviews.llvm.org/D84984
2020-07-30 15:18:58 -07:00
Thomas Raoux 59156bad03 [mlir][spirv] Add support for converting memref of vector to SPIR-V
This allows declaring buffers and allocs of vectors so that we can support vector
load/store.
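A minimal sketch of the kind of IR the conversion now handles, using the standard-dialect syntax of the time:

```
%i = constant 0 : index
// Buffer whose element type is a vector; loads and stores move whole vectors.
%buf = alloc() : memref<16xvector<4xf32>>
%v = load %buf[%i] : memref<16xvector<4xf32>>
store %v, %buf[%i] : memref<16xvector<4xf32>>
```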

Differential Revision: https://reviews.llvm.org/D84982
2020-07-30 15:05:40 -07:00
Alexander Belyaev 6b8c641d8e [mlir] NFC: Expose `getElementPtrType` and `getSizes` methods of AllocOpLowering.
Differential Revision: https://reviews.llvm.org/D84917
2020-07-30 20:18:29 +02:00
Johannes Doerfert 4d83aa4771 [MLIR][OpenMP] Fix OpenMPIRBuilder usage after D82470 2020-07-30 11:54:57 -05:00
Stephan Herhut 85defd23aa [mlir][shape] Use memref of index in shape lowering
Now that we can have a memref of index type, we no longer need to materialize shapes in i64 and then index_cast.
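For illustration, a shape extent can now be staged directly in an index-typed memref (a minimal sketch; `%extent` stands for any index value produced elsewhere):

```
%c0 = constant 0 : index
// No i64 buffer plus index_cast round-trip needed anymore.
%shape = alloca() : memref<1xindex>
store %extent, %shape[%c0] : memref<1xindex>
```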

Differential Revision: https://reviews.llvm.org/D84938
2020-07-30 15:12:43 +02:00
Christian Sigg 13a3d88666 [MLIR] Don't pass separate LowerToLLVMOptions when we already pass a LLVMTypeConverter which contains those options already.
This also prevents passing inconsistent options.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D84915
2020-07-30 14:55:23 +02:00
Abhishek Varma 76d07503f0 [MLIR] Introduce inter-procedural memref layout normalization
-- Introduces a pass that normalizes the affine layout maps to the identity layout map both within and across functions by rewriting function arguments and call operands where necessary (see the sketch after this list).
-- Memref normalization is now implemented entirely in the module pass '-normalize-memrefs' and the limited intra-procedural version has been removed from '-simplify-affine-structures'.
-- Run using -normalize-memrefs.
-- Return ops are not handled yet; they will be handled in subsequent revisions.
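A hedged sketch of the effect on a function signature (the layout map is an illustrative tiling, not taken from the patch):

```
// Before -normalize-memrefs: the argument carries a non-identity layout map.
func @use(%arg0: memref<64xf32, affine_map<(d0) -> (d0 floordiv 4, d0 mod 4)>>) {
  return
}

// After -normalize-memrefs: the layout is folded into the shape, and callers
// passing such memrefs are rewritten to match.
func @use(%arg0: memref<16x4xf32>) {
  return
}
```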

Signed-off-by: Abhishek Varma <abhishek.varma@polymagelabs.com>

Differential Revision: https://reviews.llvm.org/D84490
2020-07-30 18:12:56 +05:30
Stephan Herhut e12db3ed99 [mlir] Allow index as element type of memref
Differential Revision: https://reviews.llvm.org/D84934
2020-07-30 14:35:22 +02:00
Frederik Gossen a97940d4e0 [MLIR][Shape] Limit `shape.rank` lowering to its extent tensor variant
When lowering to the standard dialect, we currently support only the extent
tensor variant of the shape.rank operation. This change lets the conversion
pattern fail in a well-defined manner.

Differential Revision: https://reviews.llvm.org/D84852
2020-07-30 11:43:08 +00:00
George Mitenkov 1880532036 [MLIR][SPIRVToLLVM] Conversion of GLSL ops to LLVM intrinsics
This patch introduces new intrinsics in LLVM dialect:
-  `llvm.intr.floor`
-  `llvm.intr.maxnum`
-  `llvm.intr.minnum`
-  `llvm.intr.smax`
-  `llvm.intr.smin`
These intrinsics correspond to SPIR-V ops from the GLSL
extended instruction set (`spv.GLSL.Floor`, `spv.GLSL.FMax`,
`spv.GLSL.FMin`, `spv.GLSL.SMax` and `spv.GLSL.SMin`
respectively). Conversion patterns for them were also added.
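For example, a floor conversion looks roughly like this (the LLVM-dialect side is shown in the generic op form; the exact custom syntax may differ):

```
// SPIR-V GLSL extended-instruction op ...
%0 = spv.GLSL.Floor %arg : f32
// ... converts to the corresponding LLVM dialect intrinsic.
%1 = "llvm.intr.floor"(%arg_llvm) : (!llvm.float) -> !llvm.float
```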

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84661
2020-07-30 11:22:44 +03:00
George Mitenkov 3aab320557 [MLIR][SPIRVToLLVM] Conversion for inverse sqrt and tanh
This is a second patch on conversion of GLSL ops to LLVM dialect.
It introduces patterns to convert `spv.InverseSqrt` and `spv.Tanh`.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84633
2020-07-30 10:50:48 +03:00
George Mitenkov 647e9a54c7 [MLIR][SPIRVToLLVM] Conversion patterns for GLSL ops
This is the first patch that adds support for GLSL extended
instruction set ops. These are direct conversions, apart from `spv.Tan`,
which is lowered to `sin() / cos()`.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84627
2020-07-30 10:20:11 +03:00
aartbik 4f92ad508f [mlir] [VectorOps] [integration_test] Sparse matrix times vector (jagged SAXPY version)
Transposed jagged diagonal storage yields longer vector lengths. Also, in
contrast with naive SAXPY (one gather/scatter), this only performs one gather.

Reviewed By: reidtatge

Differential Revision: https://reviews.llvm.org/D84699
2020-07-29 13:25:56 -07:00
Tobias Gysi 77c3b016c4 [mlir] fix error handling in rocm runtime wrapper
The patch fixes minor issues in the rocm runtime wrapper due to api differences between CUDA and HIP.

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D84861
2020-07-29 22:08:42 +02:00
Tres Popp f05308a277 [MLIR][NFC] Move Shape::WitnessType Declaration.
This moves it from ShapeOps.td to ShapeBase.td

Differential Revision: https://reviews.llvm.org/D84845
2020-07-29 20:00:28 +02:00
Frederik Gossen 5fc34fafa7 [MLIR][Shape] Limit shape to SCF lowering patterns to their supported types
Differential Revision: https://reviews.llvm.org/D84444
2020-07-29 14:54:09 +00:00
Jakub Lichman 1aaf8aa53d [mlir][Linalg] Conv1D, Conv2D and Conv3D added as named ops
This commit is part of a greater project which aims to add
full end-to-end support for convolutions inside MLIR. The
reason for having conv ops for each rank, rather than
one generic ConvOp, is to enable better optimizations
for every N-D case, which reflects the memory layout of input/kernel
buffers better and simplifies code as well. We expect plain linalg.conv
to be progressively retired.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D83879
2020-07-29 16:39:56 +02:00
Frederik Gossen 6673c6cd82 [MLIR][Shape] Limit shape to standard lowerings to their supported types
The lowering does not support all types for its source operations. This change
makes the patterns fail in a well-defined manner.

Differential Revision: https://reviews.llvm.org/D84443
2020-07-29 13:56:52 +00:00
Tres Popp ad793ed903 Forward extent tensors through shape.broadcast.
Differential Revision: https://reviews.llvm.org/D84832
2020-07-29 15:49:10 +02:00
Stephan Herhut 823ffef009 [mlir][Standard] Allow unranked memrefs as operands to dim and rank
`std.dim` currently only accepts ranked memrefs and `std.rank` is limited to
tensors.
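After this change, something along these lines is accepted (a sketch; `%buf` stands for an unranked memref value and `dim` is assumed to take its index as an SSA operand):

```
%c0 = constant 0 : index
// Both ops now also accept an unranked memref operand.
%r = rank %buf : memref<*xf32>
%d = dim %buf, %c0 : memref<*xf32>
```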

Differential Revision: https://reviews.llvm.org/D84790
2020-07-29 14:42:58 +02:00
Alex Zinenko aec38c619d [mlir] LLVMType: make getUnderlyingType private
The current modeling of LLVM IR types in MLIR is based on the LLVMType class
that wraps a raw `llvm::Type *` and delegates uniquing, printing and parsing to
LLVM itself. This model makes thread-safe type manipulation hard and is
being progressively replaced with a cleaner MLIR model that replicates the type
system. In the new model, LLVMType will no longer have an underlying LLVM IR
type. Restrict access to this type in the current model in preparation for the
change.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84389
2020-07-29 13:43:38 +02:00
Frederik Gossen b6b9d3ea85 [MLIR][Shape] Remove type conversion from lowering to standard
Operating on indices and extent tensors directly, the type conversion is no
longer needed for the supported cases.

Differential Revision: https://reviews.llvm.org/D84442
2020-07-29 10:48:05 +00:00
Stephan Herhut 5d9f33aaa0 [MLIR][Shape] Add conversion for missing ops to standard
This adds conversions for const_size and to_extent_tensor. Also, cast-like operations are now folded away if the source and target types are the same.

Differential Revision: https://reviews.llvm.org/D84745
2020-07-29 12:46:18 +02:00
Frederik Gossen 2e7baf6197 [MLIR][Shape] Allow `shape.add` to operate on indices
Differential Revision: https://reviews.llvm.org/D84441
2020-07-29 10:23:37 +00:00
George Mitenkov 1f4aa30a4f [MLIR][SPIRVToLLVM] Branch weights support for BranchConditional conversion
Conversion of `spv.BranchConditional` now supports branch weights,
which are mapped to the weights vector of `llvm.cond_br`.
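On the SPIR-V side the weights appear as the bracketed pair on the terminator (a sketch; the attribute form on the converted `llvm.cond_br` is not shown):

```
// 5 : 10 odds of taking the true vs. false successor.
spv.BranchConditional %cond [5, 10], ^true_target, ^false_target
```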

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84657
2020-07-29 10:11:10 +03:00
George Mitenkov 8a66bb7a75 [MLIR][SPIRV] Added storage class constraint on global variable
Added a check for the 'Function' storage class in the `spv.globalVariable`
verifier, since that storage class can only be used with `spv.Variable`.
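An illustrative sketch of the constraint, assuming the usual SPIR-V dialect syntax:

```
// OK: a function-level variable may use the Function storage class.
%0 = spv.Variable : !spv.ptr<f32, Function>

// OK: a module-level global variable with a non-Function storage class.
spv.globalVariable @gv : !spv.ptr<f32, Private>

// Now rejected by the verifier: Function storage class on a global variable.
// spv.globalVariable @bad : !spv.ptr<f32, Function>
```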

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84731
2020-07-29 09:15:00 +03:00
George Mitenkov b1e398920f [MLIR][SPIRVToLLVM] Support of volatile/nontemporal memory access in load/store
This patch adds support for Volatile and Nontemporal
memory accesses to `spv.Load` and `spv.Store`. These attributes are
modelled with `volatile` and `nontemporal` flags.
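A short sketch of the new flags (memory-access syntax as assumed from the SPIR-V dialect's bracketed form):

```
%val = spv.Load "StorageBuffer" %ptr ["Volatile"] : f32
spv.Store "StorageBuffer" %ptr, %val ["Nontemporal"] : f32
```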

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84739
2020-07-29 08:45:40 +03:00
Rahul Joshi 706d992ced [NFC] Add getArgumentTypes() to Region
- Add getArgumentTypes() to Region (missed from before)
- Adopt Region argument API in `hasMultiplyAddBody`
- Fix 2 typos in comments

Differential Revision: https://reviews.llvm.org/D84807
2020-07-28 18:27:42 -07:00
Zahira Ammarguellat 80bd6ae13e On Windows builds, make the /bigobj flag global, instead of passing it per file.
To avoid having this flag passed in a per-file manner, we are instead
passing it globally.

This fixes this bug: https://bugs.llvm.org/show_bug.cgi?id=46733

Reviewed-by: aaron.ballman, beanz, meinersbur

Differential Revision: https://reviews.llvm.org/D84038
2020-07-28 18:04:36 -05:00
Rahul Joshi 0b161def6c [MLIR] Add unit test for tblgen Op build methods
- Initiate the unit test with a test that exercises the variants of build() methods
  generated for ops with variadic operands and results.
- The intent is to migrate the unit .td tests in mlir/test/mlir-tblgen that check for
  generated C++ code to these unit tests, which verify both that the generated code
  compiles and that it is functionally correct.

Differential Revision: https://reviews.llvm.org/D84074
2020-07-28 15:43:37 -07:00