Commit Graph

3785 Commits

Author SHA1 Message Date
Stephan Herhut e12db3ed99 [mlir] Allow index as element type of memref
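
A minimal hypothetical sketch of what this enables (value names invented; 2020-era standard dialect syntax):
```
// An index-typed memref can now be allocated and accessed like any other.
%buf = alloc() : memref<4xindex>
%c0 = constant 0 : index
%c42 = constant 42 : index
store %c42, %buf[%c0] : memref<4xindex>
%v = load %buf[%c0] : memref<4xindex>
```
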
Differential Revision: https://reviews.llvm.org/D84934
2020-07-30 14:35:22 +02:00
Frederik Gossen a97940d4e0 [MLIR][Shape] Limit `shape.rank` lowering to its extent tensor variant
When lowering to the standard dialect, we currently support only the extent
tensor variant of the shape.rank operation. This change lets the conversion
pattern fail in a well-defined manner.

Differential Revision: https://reviews.llvm.org/D84852
2020-07-30 11:43:08 +00:00
George Mitenkov 1880532036 [MLIR][SPIRVToLLVM] Conversion of GLSL ops to LLVM intrinsics
This patch introduces new intrinsics in LLVM dialect:
-  `llvm.intr.floor`
-  `llvm.intr.maxnum`
-  `llvm.intr.minnum`
-  `llvm.intr.smax`
-  `llvm.intr.smin`
These intrinsics correspond to SPIR-V ops from the GLSL
extended instruction set (`spv.GLSL.Floor`, `spv.GLSL.FMax`,
`spv.GLSL.FMin`, `spv.GLSL.SMax` and `spv.GLSL.SMin`
respectively). Conversion patterns for them were also added.
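
As a rough illustration of the mapping (hypothetical IR; the generic op form and the LLVM dialect type spellings of the time are approximations):
```
// A GLSL floor op in the SPIR-V dialect ...
%0 = spv.GLSL.Floor %arg : f32
// ... maps to the newly introduced LLVM intrinsic op after conversion.
%1 = "llvm.intr.floor"(%converted_arg) : (!llvm.float) -> !llvm.float
```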

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84661
2020-07-30 11:22:44 +03:00
George Mitenkov 3aab320557 [MLIR][SPIRVToLLVM] Conversion for inverse sqrt and tanh
This is a second patch on conversion of GLSL ops to LLVM dialect.
It introduces patterns to convert `spv.InverseSqrt` and `spv.Tanh`.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84633
2020-07-30 10:50:48 +03:00
George Mitenkov 647e9a54c7 [MLIR][SPIRVToLLVM] Conversion patterns for GLSL ops
This is the first patch that adds support for GLSL extended
instruction set ops. These are direct conversions, apart from `spv.Tan`,
which is lowered to `sin() / cos()`.
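
For illustration, the tangent case roughly expands as follows (hypothetical IR; value names and LLVM dialect type spellings are approximate):
```
// A tangent op such as `spv.GLSL.Tan %x : f32` becomes roughly:
%sin = "llvm.intr.sin"(%x) : (!llvm.float) -> !llvm.float
%cos = "llvm.intr.cos"(%x) : (!llvm.float) -> !llvm.float
%tan = llvm.fdiv %sin, %cos : !llvm.float
```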

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84627
2020-07-30 10:20:11 +03:00
Frederik Gossen 5fc34fafa7 [MLIR][Shape] Limit shape to SCF lowering patterns to their supported types
Differential Revision: https://reviews.llvm.org/D84444
2020-07-29 14:54:09 +00:00
Jakub Lichman 1aaf8aa53d [mlir][Linalg] Conv1D, Conv2D and Conv3D added as named ops
This commit is part of a greater project which aims to add
full end-to-end support for convolutions inside mlir. The
reason for having conv ops for each rank, rather than
one generic ConvOp, is to enable better optimizations for
every N-D case; this reflects the memory layout of input/kernel
buffers better and simplifies the code as well. We expect plain linalg.conv
to be progressively retired.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D83879
2020-07-29 16:39:56 +02:00
Frederik Gossen 6673c6cd82 [MLIR][Shape] Limit shape to standard lowerings to their supported types
The lowering does not support all types for its source operations. This change
makes the patterns fail in a well-defined manner.

Differential Revision: https://reviews.llvm.org/D84443
2020-07-29 13:56:52 +00:00
Stephan Herhut 823ffef009 [mlir][Standard] Allow unranked memrefs as operands to dim and rank
`std.dim` currently only accepts ranked memrefs and `std.rank` is limited to
tensors.

Differential Revision: https://reviews.llvm.org/D84790
2020-07-29 14:42:58 +02:00
Alex Zinenko aec38c619d [mlir] LLVMType: make getUnderlyingType private
The current modeling of LLVM IR types in MLIR is based on the LLVMType class
that wraps a raw `llvm::Type *` and delegates uniquing, printing and parsing to
LLVM itself. This model makes thread-safe type manipulation hard and is
being progressively replaced with a cleaner MLIR model that replicates the type
system. In the new model, LLVMType will no longer have an underlying LLVM IR
type. Restrict access to this type in the current model in preparation for the
change.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84389
2020-07-29 13:43:38 +02:00
Frederik Gossen b6b9d3ea85 [MLIR][Shape] Remove type conversion from lowering to standard
Operating on indices and extent tensors directly, the type conversion is no
longer needed for the supported cases.

Differential Revision: https://reviews.llvm.org/D84442
2020-07-29 10:48:05 +00:00
Stephan Herhut 5d9f33aaa0 [MLIR][Shape] Add conversion for missing ops to standard
This adds conversions for const_size and to_extent_tensor. Also, cast-like operations are now folded away if the source and target types are the same.
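
For instance (hypothetical IR; the exact assembly of the shape ops at this point may differ slightly):
```
// A constant size ...
%s = shape.const_size 4
// ... lowers to a standard index constant.
%c4 = constant 4 : index
```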

Differential Revision: https://reviews.llvm.org/D84745
2020-07-29 12:46:18 +02:00
George Mitenkov 1f4aa30a4f [MLIR][SPIRVToLLVM] Branch weights support for BranchConditional conversion
Conversion of `spv.BranchConditional` now supports branch weights
that are mapped to the weights vector in `llvm.cond_br`.
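
A hypothetical before/after sketch (operand and block names invented; exact assembly approximate):
```
// SPIR-V input carrying branch weights ...
spv.BranchConditional %cond [3, 7], ^true_block, ^false_block
// ... is converted to an llvm.cond_br carrying the same weights.
llvm.cond_br %cond weights(dense<[3, 7]> : vector<2xi32>), ^true_block, ^false_block
```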

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84657
2020-07-29 10:11:10 +03:00
George Mitenkov 8a66bb7a75 [MLIR][SPIRV] Added storage class constraint on global variable
Added a check in the `spv.globalVariable` verifier rejecting the 'Function'
storage class, since it can only be used with `spv.Variable`.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84731
2020-07-29 09:15:00 +03:00
George Mitenkov b1e398920f [MLIR][SPIRVToLLVM] Support of volatile/nontemporal memory access in load/store
This patch adds support for the Volatile and Nontemporal
memory accesses to `spv.Load` and `spv.Store`. These attributes are
modelled with `volatile` and `nontemporal` flags.
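
A sketch of the SPIR-V side (hypothetical IR; the resulting LLVM dialect assembly is described only in comments):
```
// A volatile load in the SPIR-V dialect ...
%0 = spv.Load "Function" %ptr ["Volatile"] : f32
// ... converts to an `llvm.load` carrying the `volatile` flag; a
// "NonTemporal" memory access maps to the `nontemporal` flag analogously.
```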

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84739
2020-07-29 08:45:40 +03:00
Rahul Joshi 706d992ced [NFC] Add getArgumentTypes() to Region
- Add getArgumentTypes() to Region (missed from before)
- Adopt Region argument API in `hasMultiplyAddBody`
- Fix 2 typos in comments

Differential Revision: https://reviews.llvm.org/D84807
2020-07-28 18:27:42 -07:00
Zahira Ammarguellat 80bd6ae13e On Windows builds, make the /bigobj flag global instead of passing it per file.
To avoid having this flag passed in a per-file manner, we now
pass it globally.

This fixes this bug: https://bugs.llvm.org/show_bug.cgi?id=46733

Reviewed-by: aaron.ballman, beanz, meinersbur

Differential Revision: https://reviews.llvm.org/D84038
2020-07-28 18:04:36 -05:00
Anand Kodnani 834133c950 [MLIR] Vector store to load forwarding
The MemRefDataFlow pass does store to load forwarding
only for affine store/loads. This patch updates the pass
to use the affine read/write interfaces, which enables vector
forwarding.
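
For example, a pattern like the following (hypothetical IR) can now be forwarded, since the affine vector ops implement the read/write interfaces:
```
affine.vector_store %v, %A[%i] : memref<128xf32>, vector<8xf32>
// The load reads back the value just stored and can be replaced by %v.
%w = affine.vector_load %A[%i] : memref<128xf32>, vector<8xf32>
```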

Reviewed By: dcaballe, bondhugula, ftynse

Differential Revision: https://reviews.llvm.org/D84302
2020-07-28 11:30:54 -07:00
Nicolas Vasilache 64cdd5b3da [mlir][Vector] Drop declarative transforms
For the purpose of vector transforms, the Tablegen-based infra is subsumed by simple C++ pattern application. Deprecate declarative transforms whose complexity does not pay for itself.

Differential Revision: https://reviews.llvm.org/D84753
2020-07-28 13:11:16 -04:00
Frederik Gossen dfcc09890a [MLIR][Shape] Lower `shape.const_shape` to `tensor_from_elements`
Differential Revision: https://reviews.llvm.org/D82848
2020-07-28 15:40:55 +00:00
Christian Sigg c64c04bbaa Clean up cuda-runtime-wrappers API.
Do not return error code, instead return created resource handles or void. Error reporting is done by the library function.

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D84660
2020-07-28 16:34:08 +02:00
Stephan Herhut 6d10d317d8 [MLIR][Shape] Support transforming shape.num_elements on tensors
The current transformation to shape.reduce does not support tensor values.
This adds the required changes to make that work, including fixing the builder
for shape.reduce.

Differential Revision: https://reviews.llvm.org/D84744
2020-07-28 14:13:06 +02:00
lorenzo chelini 946be75b9e [MLIR][Linalg] Retire C++ DotOp in favor of a linalg-ods-gen'd op
- replace DotOp, now that DRR rules have been dropped.

- Capture arguments mismatch in the parser. The number of parsed arguments must
  equal the number of expected arguments.

Reviewed By: ftynse, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D82952
2020-07-28 12:34:19 +02:00
Ehsan Toosi 486d2750c7 [mlir][NFC] Polish copy removal transform
Address a few remaining comments in copy removal transform.

Differential Revision: https://reviews.llvm.org/D84529
2020-07-28 08:34:44 +02:00
MaheshRavishankar fbe911ee75 [mlir][AffineToStandard] Make LowerAffine pass Op-agnostic.
The LowerAffine pass was a FunctionPass only for legacy
reasons. Making this Op-agnostic allows it to be used from command
line when affine expressions are within operations other than
`std.func`.

Differential Revision: https://reviews.llvm.org/D84590
2020-07-27 12:14:17 -07:00
MaheshRavishankar 8f6e84ba7b [mlir][Linalg] Enable fusion of std.constant (producer) with
linalg.indexed_generic (consumer) with tensor arguments.

The implementation of fusing a std.constant producer with a
linalg.indexed_generic consumer was already in place. It is exposed
with this change. Also cleaning up some of the patterns that implement
the fusion to not be templated, thereby avoiding a lot of conditional
checks for calling the right instantiation.

Differential Revision: https://reviews.llvm.org/D84566
2020-07-27 09:51:20 -07:00
Alex Zinenko a51829913d [mlir] Support for mutable types
Introduce support for mutable storage in the StorageUniquer infrastructure.
This makes MLIR have key-value storage instead of just uniqued key storage. A
storage instance now contains a unique immutable key and a mutable value, both
stored in the arena allocator that belongs to the context. This is a
precondition for supporting recursive types that require delayed initialization,
in particular LLVM structure types. The functionality is exercised in the test
pass with a trivial self-recursive type. So far, recursive types can only be
printed and parsed in a closed type system. Removing this restriction is left
for future work.

Differential Revision: https://reviews.llvm.org/D84171
2020-07-27 13:07:44 +02:00
George Mitenkov 36618274f3 [MLIR][LLVMDialect] Added volatile and nontemporal attributes to load/store
This patch introduces 2 new optional attributes to `llvm.load`
and `llvm.store` ops: `volatile` and `nontemporal`. These attributes
are translated into proper LLVM as a `volatile` marker and a metadata node
respectively. They are also helpful with SPIR-V to LLVM dialect conversion
since they are the mappings for `Volatile` and `NonTemporal` Memory Operands.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D84396
2020-07-27 10:55:56 +03:00
Vincent Zhao d135744c34 [MLIR][Affine] Add test for non-hyperrectangular loop tiling
This diff provides a concrete test case for the error that will be raised when the iteration space is non-hyperrectangular.

The corresponding emission method for this error message has been changed as well.

Differential Revision: https://reviews.llvm.org/D84531
2020-07-26 20:17:23 +05:30
Jacques Pienaar 595d214f47 [mlir][shape] Further operand and result type generalization
Previous changes generalized some of the operands and results. Complete
a larger group of those to simplify progressive lowering. Also update
some of the declarative asm forms due to generalization. Tried to keep it
mostly mechanical.
2020-07-25 21:41:31 -07:00
Jacques Pienaar 5142448a5e [MLIR][Shape] Refactor verification
Based on https://reviews.llvm.org/D84439 but less restrictive; otherwise
shape_of would not be able to produce a ranked output, and iterative
refinement would not be possible here. We can consider making it more
restrictive later.
2020-07-25 14:55:19 -07:00
Jacques Pienaar dfa267a61c [mlir][shape] Fix missing dependency 2020-07-24 13:16:16 -07:00
George Mitenkov 8be0371eb7 [MLIR][SPIRVToLLVM] Conversion of load and store SPIR-V ops
This patch introduces conversion patterns for `spv.Store` and `spv.Load`.
Only ops with the `Function` storage class are supported at the moment
because `spv.GlobalVariable` has not been introduced yet. If the op
has a memory access attribute, there are the following cases.
If the access is `Aligned`, the alignment is added to the op builder. Otherwise
the conversion fails, as the other cases are not supported yet because they
require additional attributes for `llvm.store`/`llvm.load` ops, e.g.
`volatile` and `nontemporal`.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D84236
2020-07-24 16:31:45 +03:00
Frederik Gossen 670ae4b6da [MLIR][Shape] Fold `shape.mul`
Implement constant folding for `shape.mul`.

Differential Revision: https://reviews.llvm.org/D84438
2020-07-24 13:30:45 +00:00
Frederik Gossen 783a351785 [MLIR][Shape] Allow `shape.mul` to operate on indices
Differential Revision: https://reviews.llvm.org/D84437
2020-07-24 13:25:40 +00:00
George Mitenkov 5c98631391 [MLIR][SPIRVToLLVM] Conversion of SPIR-V variable op
The patch introduces the conversion pattern for function-level
`spv.Variable`. It is modelled as an `llvm.alloca` op. If initialized, an
additional store instruction is used. Note that there is no initialization
for arrays and structs since constants of these types are not supported in
LLVM dialect yet. Also, at the moment initialisation is only possible via
`spv.constant` (since `spv.GlobalVariable` conversion is not implemented
yet).

Some scoping in the input code is not taken into account yet and will be
addressed in a different patch.
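
A rough sketch of the conversion (hypothetical IR; constant creation and LLVM dialect type spellings of the time are approximate):
```
// A function-level SPIR-V variable ...
%0 = spv.Variable : !spv.ptr<f32, Function>
// ... is modelled as a single-element alloca after conversion.
%one = llvm.mlir.constant(1 : i32) : !llvm.i32
%1 = llvm.alloca %one x !llvm.float : (!llvm.i32) -> !llvm<"float*">
```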

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D84224
2020-07-24 15:49:55 +03:00
Frederik Gossen bb442bb51a [MLIR][Shape] Remove deprecated and unused lowerings
This concerns `from/to_extent_tensor`, `size_to_index`, `index_to_size`, and
`const_size` conversion patterns. The new lowering will work directly on indices
and extent tensors. The shape and size values will allow for error values but
are not yet supported by the dialect conversion.

Differential Revision: https://reviews.llvm.org/D84436
2020-07-24 11:19:36 +00:00
Frederik Gossen 5984d74139 [MLIR][Shape] Allow `get_extent` to operate on extent tensors and indices
Differential Revision: https://reviews.llvm.org/D84435
2020-07-24 11:13:17 +00:00
Frederik Gossen 274db1d21a [MLIR][Shape] Pass Ops instead of Operations in shape lowering
Shorten builder invocations by using Ops directly instead of `op.getOperation`.

Differential Revision: https://reviews.llvm.org/D84430
2020-07-24 10:47:23 +00:00
Frederik Gossen 23a65648c0 [MLIR][Shape] Allow `shape.rank` to operate on extent tensors
Differential Revision: https://reviews.llvm.org/D84429
2020-07-24 10:43:39 +00:00
Frederik Gossen 4baf18dba2 [MLIR][Shape] Clean up shape to standard lowering
Put only class declarations in anonymous namespaces.

Differential Revision: https://reviews.llvm.org/D84424
2020-07-24 08:55:50 +00:00
Frederik Gossen a85ca6be2a [MLIR][Shape] Simplify shape lowering
Differential Revision: https://reviews.llvm.org/D84161
2020-07-24 08:44:13 +00:00
Frederik Gossen d4e4d5d780 [MLIR][Shape] Allow for `shape_of` to return extent tensors
The operation `shape.shape_of` now returns an extent tensor `tensor<?xindex>` in
cases when no errors are possible. All consuming operations will eventually accept
both shapes and extent tensors.

Differential Revision: https://reviews.llvm.org/D84160
2020-07-24 08:40:40 +00:00
Frederik Gossen fb1e571687 [MLIR][Standard] Add default lowering for `assert`
The default lowering of `assert` calls `abort` in case the assertion is
violated. The failure message is ignored but should be used by custom lowerings
that can assume more about their environment.
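
For reference, a minimal use of the op being lowered (hypothetical IR; names invented):
```
// With the default lowering, a violated assertion branches to a block
// that calls abort(); the message itself is ignored.
assert %is_valid, "expected a non-empty shape"
```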

Differential Revision: https://reviews.llvm.org/D83886
2020-07-24 08:31:12 +00:00
Frederik Gossen 14d3cef012 [MLIR][Shape] Generalize `shape.const_shape` to extent tensors
The operation `shape.const_shape` was used for constants of type shape only.
We can now also use it to create constant extent tensors.

Differential Revision: https://reviews.llvm.org/D84157
2020-07-24 08:06:24 +00:00
George Mitenkov 99d03f0391 [MLIR][LLVMDialect] Added branch weights attribute to CondBrOp
This patch introduces branch weights metadata to `llvm.cond_br` op in
LLVM Dialect. It is modelled as an optional `ElementsAttr`, for example:
```
llvm.cond_br %cond weights(dense<[1, 3]> : vector<2xi32>), ^bb1, ^bb2
```
When exporting to proper LLVM, this attribute is transformed into a metadata
node. The test for metadata creation is added to `../Target/llvmir.mlir`.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D83658
2020-07-24 10:11:13 +03:00
River Riddle 4589dd924d [mlir][DialectConversion] Enable deeper integration of type conversions
This revision adds support for much deeper type conversion integration into the conversion process, and enables auto-generating cast operations when necessary. Type conversions are now largely automatically managed by the conversion infra when using a ConversionPattern with a provided TypeConverter. This removes the need for patterns to do type cast wrapping themselves and moves the burden to the infra. This makes it much easier to perform partial lowerings when type conversions are involved, as any lingering type conversions will be automatically resolved/legalized by the conversion infra.

To support this new integration, a few changes have been made to the type materialization API on TypeConverter. Materialization has been split into three separate categories:
* Argument Materialization: This type of materialization is used when converting the type of block arguments when calling `convertRegionTypes`. This is useful for contextually inserting additional conversion operations when converting a block argument type, such as when converting the types of a function signature.
* Source Materialization: This type of materialization is used to convert a legal type of the converter into a non-legal type, generally a source type. This may be called when uses of a non-legal type persist after the conversion process has finished.
* Target Materialization: This type of materialization is used to convert a non-legal, or source, type into a legal, or target, type. This type of materialization is used when applying a pattern on an operation, but the types of the operands have not yet been converted.

Differential Revision: https://reviews.llvm.org/D82831
2020-07-23 19:40:31 -07:00
MaheshRavishankar 4ff48db68d [mlir][Linalg] Fixing bug in subview size computation in Linalg tiling.
`makeTiledViews` did not use the sizes of the tiled views based on
the result of the loop bound inference computation. This manifested as
an error when computing tile sizes for convolution, where not all the
result expressions of the concatenated affine maps are simple
AffineDimExprs.

Differential Revision: https://reviews.llvm.org/D84366
2020-07-23 11:09:55 -07:00
Kazuaki Ishizaki 06b90586a4 [mlir]: NFC: Fix trivial typo in documents and comments
Differential Revision: https://reviews.llvm.org/D84400
2020-07-23 23:40:57 +09:00
Jakub Lichman 20c3386f4a [mlir][Linalg] emitLoopRanges and emitLoopRangesWithSymbols merged into one
Right now there is branching between two functions based on whether the target map
has symbols or not. In this commit these functions are merged into one.
Furthermore, emitting no longer requires inverting and applying the map, as it
computes the correct Range in a single step and thus reduces unnecessary overhead.

Differential Revision: https://reviews.llvm.org/D83756
2020-07-23 12:33:46 +02:00
Jakub Lichman 919922b0c2 [mlir] Added verification check for linalg.conv to ensure memrefs are of rank > 2
linalg.conv does not support memrefs with rank smaller than 3 as stated here:
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/convolution

However, it does not verify this and thus crashes with an "LLVM ERROR: out of memory"
error in the 1D case and the `nWin > 0 && "expected at least one window dimension"` assertion
in the 2D case. This commit adds a check for that in the verification method.

Differential Revision: https://reviews.llvm.org/D84317
2020-07-23 12:27:05 +02:00
Jakub Lichman e4dd964df0 [mlir] Loop bounds inference in linalg.generic op improved to support bounds for convolution
Loop bound inference is right now very limited as it supports only permutation maps and thus
it is impossible to implement convolution with linalg.generic as it requires more advanced
loop bound inference. This commit solves it for the convolution case.

Depends On D83158

Differential Revision: https://reviews.llvm.org/D83191
2020-07-23 11:01:54 +02:00
aartbik 1485fd295b [mlir] [VectorOps] Improve scatter/gather CPU performance
Replaced the linearized address with the proper LLVM way of
defining a vector of base + indices in SIMD style. This yields
much better code. Some prototype results from microbenchmarking
sparse matrix x vector with 50% sparsity (about 2-3x faster):

         LINEARIZED     IMPROVED
GFLOPS  sdot  saxpy     sdot saxpy
16x16    1.6   1.4       4.4  2.1
32x32    1.7   1.6       5.8  5.9
64x64    1.7   1.7       6.4  6.4
128x128  1.7   1.7       5.9  5.9
256x256  1.6   1.6       6.1  6.0
512x512  1.4   1.4       4.9  4.7

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84368
2020-07-22 23:47:36 -07:00
Diego Caballero 3fff5acd8f [mlir][VectorOps] Expose SuperVectorizer as a utility
This patch refactors a small part of the Super Vectorizer code to
a utility so that it can be used independently from the pass. This
aligns vectorization with other utilities that we already have for loop
transformations, such as fusion, interchange, tiling, etc.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84289
2020-07-22 14:22:15 -07:00
Thomas Raoux a1b9fb220f [mlir][linalg] Add vectorization transform for CopyOp
CopyOp gets vectorized to a vector.transfer_read followed by a vector.transfer_write.
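
A hypothetical sketch of the result for a static 2-D copy (names and padding value invented; permutation maps omitted):
```
// linalg.copy(%src, %dst) : memref<4x8xf32>, memref<4x8xf32> becomes roughly:
%c0 = constant 0 : index
%pad = constant 0.0 : f32
%v = vector.transfer_read %src[%c0, %c0], %pad : memref<4x8xf32>, vector<4x8xf32>
vector.transfer_write %v, %dst[%c0, %c0] : vector<4x8xf32>, memref<4x8xf32>
```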

Differential Revision: https://reviews.llvm.org/D83739
2020-07-22 12:40:42 -07:00
Benjamin Kramer bf561dd2eb [mlir][Vector] Vectorize integer matmuls
The underlying infrastructure supports this already, just add the
pattern matching for linalg.generic.

Differential Revision: https://reviews.llvm.org/D84335
2020-07-22 19:39:56 +02:00
Haruki Imai 7f44a7130b [MLIR] Set alignment in AllocOp of normalizeMemref()
AllocOp is updated in normalizeMemref(AllocOp allocOp), but when the
AllocOp had an `alignment` attribute, it was ignored and the updated AllocOp
did not carry the `alignment` attribute. This patch fixes that.
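
For illustration, an allocation whose alignment should survive normalization (hypothetical IR; layout map and attribute spelling are approximate):
```
#tile = affine_map<(d0, d1) -> (d0 floordiv 32, d1, d0 mod 32)>
// The alignment attribute below is now preserved on the normalized alloc.
%0 = alloc() {alignment = 32 : i64} : memref<64x128xf32, #tile>
```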

Differential Revision: https://reviews.llvm.org/D83656
2020-07-22 12:34:35 +05:30
aartbik 19dbb230a2 [mlir] [VectorOps] Add scatter/gather operations to Vector dialect
Introduces the scatter/gather operations to the Vector dialect
(important memory operations for sparse computations), together
with a first reference implementation that lowers to the LLVM IR
dialect to enable running on CPU (and other targets that support
the corresponding LLVM IR intrinsics).

The operations can be used directly where applicable, or can be used
during progressive lowering to bring other memory operations closer to
hardware ISA support for a gather/scatter. The semantics of the operation
closely correspond to those of the corresponding llvm intrinsics.

Note that the operation allows for a dynamic index vector (which is
important for sparse computations). However, this first reference
lowering implementation "serializes" the address computation when
base + index_vector is converted to a vector of pointers. Exploring
how to use SIMD properly during these steps is TBD. More general
memrefs and idiomatic versions of striding are also TBD.
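
A hypothetical sketch in the generic op form (operand order follows the description above: base, index vector, mask, pass-through; the custom assembly format may differ):
```
%g = "vector.gather"(%base, %indices, %mask, %pass_thru)
    : (memref<?xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>) -> vector<16xf32>
"vector.scatter"(%base, %indices, %mask, %g)
    : (memref<?xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>) -> ()
```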

Reviewed By: arpith-jacob

Differential Revision: https://reviews.llvm.org/D84039
2020-07-21 10:57:40 -07:00
George Mitenkov 61dd481f11 [MLIR][LLVMDialect] SelectionOp conversion pattern
This patch introduces a conversion pattern for the `spv.selection` op.
The conversion can only be applied to selection with all blocks being
reachable. Moreover, selection with control attributes "Flatten" and
"DontFlatten" is not supported.
Since the `PatternRewriter` hook for block merging has not been implemented
for `ConversionPatternRewriter`, merge and continue blocks are kept
separately.

Reviewed By: antiagainst, ftynse

Differential Revision: https://reviews.llvm.org/D83860
2020-07-21 17:11:46 +03:00
George Mitenkov 05d3160c9c [MLIR][SPIRVToLLVM] Conversion of SPIR-V branch ops
This patch introduces conversion patterns for `spv.Branch` and `spv.BranchConditional`
ops. Branch weights for `spv.BranchConditional` are not supported at the
moment, and the conversion in this case fails.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D83784
2020-07-21 10:52:20 +03:00
Jakub Lichman f9c8febc52 [mlir] Added support for symbols inside linalg.generic and map concatenation
This commit adds functionality needed for implementation of convolutions with
linalg.generic op. Since linalg.generic right now expects indexing maps to be
just permutations, offset indexing needed in convolutions is not possible.
Therefore in this commit we address the issue by adding support for symbols inside
indexing maps which enables more advanced indexing. The upcoming commit will
solve the problem of computing loop bounds from such maps.
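
For illustration only, an indexing map that combines loop dimensions with a symbol, as is now allowed inside linalg.generic indexing maps (hypothetical):
```
// d0/d1 are loop dimensions; s0 is a symbol that can encode an offset,
// e.g. a convolution window position.
#offset_access = affine_map<(d0, d1)[s0] -> (d0 + s0, d1)>
```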

Differential Revision: https://reviews.llvm.org/D83158
2020-07-20 19:20:47 +02:00
Frederik Gossen f9595857b9 [MLIR][Shape] Fold `shape.shape_eq`
Fold `shape.shape_eq`.

Differential Revision: https://reviews.llvm.org/D82533
2020-07-20 12:25:53 +00:00
Nicolas Vasilache 47cbd9f922 [mlir][Vector] NFC - Improve VectorInterfaces
This revision improves and makes better use of OpInterfaces for the Vector dialect.

Differential Revision: https://reviews.llvm.org/D84053
2020-07-20 08:24:22 -04:00
Alex Zinenko ebbdecdd57 [mlir] Support translating function linkage between MLIR and LLVM IR
Linkage support is already present in the LLVM dialect, and is being translated
for globals other than functions. Translation support has been missing for
functions because their conversion goes through a different code path than
other globals.

Differential Revision: https://reviews.llvm.org/D84149
2020-07-20 14:04:31 +02:00
Yash Jain 3382b7177f [MLIR] Add lowering for affine.parallel to scf.parallel
Add lowering conversion from affine.parallel to scf.parallel.

Differential Revision: https://reviews.llvm.org/D83239
2020-07-18 13:13:49 +05:30
Nicolas Vasilache cc0a58d7cd [mlir][Vector] Fix masking logic in VectorToSCF
Summary: The logic was conservative but inverted: cases that should remain unmasked became 1-D masked.

Differential Revision: https://reviews.llvm.org/D84051
2020-07-17 13:24:07 -04:00
Pierre Oechsel ec62e37c86 [mlir] [vector] Add an optional filter to vector contract lowering patterns.
Summary: Vector contract patterns were only parameterized by a `vectorTransformsOptions`. As a result, even if an mlir file contained several occurrences of `vector.contract`, all of them would be lowered in the same way. More granularity might be required. This diff adds a `constraint` argument to each of these patterns, which allows the user to specify with more precision to which `vector.contract` each of the lowerings should apply.

Differential Revision: https://reviews.llvm.org/D83960
2020-07-17 12:03:13 -04:00
Nicolas Vasilache 08521abb3a [mlir][EDSC] Allow conditionBuilder to capture the IfOp
When the IfOp returns values, it can easily be obtained from one of those values.
However, when no values are returned, that information is lost.
This revision lets the caller specify a capture IfOp* to return the produced
IfOp.

Differential Revision: https://reviews.llvm.org/D84025
2020-07-17 11:16:26 -04:00
Lei Zhang 2dd9e43579 [spirv] Use owning module ref to avoid leaks and fix ASAN tests
Differential Revision: https://reviews.llvm.org/D83982
2020-07-16 17:30:59 -04:00
Rahul Joshi 86ae0dd7f7 [MLIR] Add OpPrintingFlags to IRPrinterConfig.
- This will enable tweaking IR printing options when enabling printing (for example,
  tweaking elideLargeElementsAttrs to create smaller IR logs)

Differential Revision: https://reviews.llvm.org/D83930
2020-07-16 08:05:33 -07:00
Frederik Gossen aca7b8dd63 [MLIR][Shape] Lower `shape.shape_eq` to `scf`
Lower `shape.shape_eq` to the `scf` (and `std`) dialect. For now, this lowering
is limited to extent tensor operands.

Differential Revision: https://reviews.llvm.org/D82530
2020-07-16 14:44:29 +00:00
Frederik Gossen c430c21202 [MLIR][Shape] Use callback builder again
The issue that callback builders caused during rollback of conversion patterns
has been resolved. We can use them again.
See https://bugs.llvm.org/show_bug.cgi?id=46731

Differential Revision: https://reviews.llvm.org/D83932
2020-07-16 13:58:38 +00:00
Frederik Gossen 67391a7045 [MLIR] Lower `shape.reduce` to `scf.for` only when argument is `tensor<?xindex>`
To make it clear when shape error values cannot occur, the shape operations can
operate on extent tensors. This change updates the lowering for `shape.reduce`
accordingly.

Differential Revision: https://reviews.llvm.org/D83944
2020-07-16 13:55:48 +00:00
Frederik Gossen 0eb50e614c [MLIR][Shape] Allow `shape.reduce` to operate on extent tensors
Allow `shape.reduce` to take both `shape.shape` and `tensor<?xindex>` as an
argument.

Differential Revision: https://reviews.llvm.org/D83943
2020-07-16 13:53:37 +00:00
Aden Grue 941fecc536 Standardize `linalg.generic` on `args_in`/`args_out` instead of `inputCount`/`outputCount`
This also fixes the outdated use of `n_views` in the documentation.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D83795
2020-07-16 03:46:08 +00:00
Stephen Neuendorffer 628288658c [MLIR] Add RegionKindInterface
Some dialects have semantics which are not well represented by common
SSA structures with dominance constraints. This patch allows
operations to declare the 'kind' of their contained regions.
Currently, two kinds are allowed: "SSACFG" and "Graph".  The only
difference between them at the moment is that SSACFG regions are
required to have dominance, while Graph regions are not required to
have dominance.  The intention is that this Interface would be
generated by ODS for existing operations, although this has not yet
been implemented. Presumably, if someone were interested in code
generation, we might also have a "CFG" dialect, which defines control
flow, but does not require SSA.

The new behavior is mostly identical to the previous behavior, since
registered operations without a RegionKindInterface are assumed to
contain SSACFG regions.  However, the behavior has changed for
unregistered operations.  Previously, these were checked for
dominance; however, the new behavior allows dominance violations, in
order to allow the processing of unregistered dialects with Graph
regions.  One implication of this is that regions in unregistered
operations with more than one op are no longer CSE'd (since it
requires dominance info).

I've also reorganized the LangRef documentation to remove assertions
about "sequential execution", "SSA Values", and "Dominance".  Instead,
the core IR is simply "ordered" (i.e. totally ordered) and consists of
"Values".  I've also clarified some things about how control flow
passes between blocks in an SSACFG region. Control flow must enter a
region at the entry block and follow terminator operation successors
or be returned to the containing op.  Graph regions do not define a
notion of control flow.

see discussion here:
https://llvm.discourse.group/t/rfc-allowing-dialects-to-relax-the-ssa-dominance-condition/833/53

Differential Revision: https://reviews.llvm.org/D80358
2020-07-15 14:27:05 -07:00
Uday Bondhugula ec85d7c8f3 [MLIR][NFC] Fix clang tidy warnings in misc utilities
Fix clang tidy warnings in misc utilities - missing const or a star in
declaration.

Differential Revision: https://reviews.llvm.org/D83861
2020-07-16 00:27:30 +05:30
Rahul Joshi a3ad8f92b4 [MLIR] Add type checking capability to RegionBranchOpInterface
- Add a function `verifyTypes` that ops can call to do type checking verification
  along the control flow edges described by the Op's RegionBranchOpInterface.
- We cannot rely on the verify methods on the OpInterface because the interface
  functions assume valid Ops, so they may crash if invoked on unverified Ops.
  (For example, scf.for getSuccessorRegions() calls getRegionIterArgs(), which
  dereferences getBody() block. If the scf.for is invalid with no body, this
  can lead to a segfault). `verifyTypes` can be called post op-verification to
  avoid this.

Differential Revision: https://reviews.llvm.org/D82829
2020-07-15 11:14:07 -07:00
Stephan Herhut 8ef47244b9 [mlir][shape] Fold shape.broadcast with one scalar operand
This folds shape.broadcast where at least one operand is a scalar to the
other operand.

Also add an assemblyFormat for shape.broadcast and shape.concat.

Differential Revision: https://reviews.llvm.org/D83854
2020-07-15 18:49:12 +02:00
Frederik Gossen ad49330032 [MLIR][Shape] Fix `shape_of` lowering to `scf`
The use of the `scf.for` callback builder does not allow for a rollback of the
emitted conversions. Instead, we populate the loop body through the conversion
rewriter directly.

Differential Revision: https://reviews.llvm.org/D83873
2020-07-15 15:38:09 +00:00
George Mitenkov d431951343 [MLIR][SPIRVToLLVM] SPIRV function fix and nits
This patch addresses the comments from https://reviews.llvm.org/D83030 and
https://reviews.llvm.org/D82639. `this->` is removed when not inside the
template. Also, type conversion for `spv.func` takes `convertRegionTypes()`
in order to apply type conversion on all blocks within the function.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D83786
2020-07-15 10:29:46 +03:00
Stephan Herhut 1919c8bfe8 Make linalg::ReshapeOp implement ViewLikeOpInterface
Summary: A reshape aliases its input memref, so it acts like a view.

Differential Revision: https://reviews.llvm.org/D83773
2020-07-15 09:24:15 +02:00
Nicolas Vasilache 512da70be7 [mlir][Vector] Degrade masking information when forwarding linalg.copy to vector.transfer
Summary:
linalg.copy + linalg.fill can be used to create a padded local buffer.
The `masked` attribute is only valid on this padded buffer.
When forwarding to vector.transfer ops, the attribute must be reset
conservatively.

Differential Revision: https://reviews.llvm.org/D83782
2020-07-15 02:32:45 -04:00
River Riddle 6b476e2426 [mlir] Add support for parsing optional Attribute values.
This adds a `parseOptionalAttribute` method to the OpAsmParser that allows for parsing optional attributes, in a similar fashion to how optional types are parsed. This also enables the use of attribute values as the first element of an assembly format optional group.

Differential Revision: https://reviews.llvm.org/D83712
2020-07-14 13:14:59 -07:00
River Riddle b98f414a04 [mlir][DialectConversion] Emit an error if an operation marked as erased has live users after conversion
Up until now, there has been an implicit agreement that when an operation is marked as
"erased", all uses of that operation's results are guaranteed to be removed during conversion. How this works in practice is that there is either an assert/crash/asan failure/etc. This revision adds support for properly detecting when an erased operation has dangling users; in that case an error is emitted and the conversion fails.

Differential Revision: https://reviews.llvm.org/D82830
2020-07-14 13:06:08 -07:00
Uday Bondhugula 9b974dfa72 [MLIR] [NFC] Buffer placement pass - clang tidy warnings
Add missing const - addresses clang tidy warnings.

Differential Revision: https://reviews.llvm.org/D83794
2020-07-14 23:49:24 +05:30
Rahul Joshi e2b716105b [MLIR] Add argument related API to Region
- Arguments of the first block of a region are considered region arguments.
- Add API on Region class to deal with these arguments directly instead of
  using the front() block.
- Changed several instances of existing code that can use this API
- Fixes https://bugs.llvm.org/show_bug.cgi?id=46535

Differential Revision: https://reviews.llvm.org/D83599
2020-07-14 09:28:29 -07:00
Frederik Gossen 1ee0d22f26 [MLIR][Standard] Erase redundant assertions `std.assert`
Differential Revision: https://reviews.llvm.org/D83118
2020-07-14 10:09:39 +00:00
Kiran Chandramohan d9067dca7b Lowering of OpenMP Parallel operation to LLVM IR 1/n
This patch introduces lowering of the OpenMP parallel operation to LLVM
IR using the OpenMPIRBuilder.

Functions topologicalSort and connectPhiNodes are generalised so that
they work with operations also. connectPhiNodes is also made static.

Lowering works for a parallel region with multiple blocks. Clauses and
arguments of the OpenMP operation are not handled.

Reviewed By: rriddle, anchu-rajendran

Differential Revision: https://reviews.llvm.org/D81660
2020-07-13 23:55:45 +01:00
Nicolas Vasilache affbc0cd1c [mlir] Add alignment attribute to LLVM memory ops and use in vector.transfer
Summary: The native alignment may generally not be used when lowering a vector.transfer to the underlying load/store operation. This revision fixes the unmasked load/store alignment to match that of the masked path.

Differential Revision: https://reviews.llvm.org/D83684
2020-07-13 17:35:20 -04:00
Rahul Joshi 0d988da6d1 [MLIR] Change ODS collective params build method to provide an empty default value for named attributes
- Provide default value for `ArrayRef<NamedAttribute> attributes` parameter of
  the collective params build method.
- Change the `genSeparateArgParamBuilder` function to not generate build methods
  that may be ambiguous with the new collective params build method.
- This change should help eliminate passing an empty NamedAttribute ArrayRef when the
  collective params build method is used
- Extend op-decl.td unit test to make sure the ambiguous build methods are not
  generated.

Differential Revision: https://reviews.llvm.org/D83517
2020-07-13 13:35:44 -07:00
Thomas Raoux 2f23270af9 [mlir] Support operations with multiple results in slicing
Right now slicing would assert if an operation with multiple results is in the
slice.

Differential Revision: https://reviews.llvm.org/D83627
2020-07-13 13:24:27 -07:00
Lei Zhang 4ba45a778a [mlir][StandardToSPIRV] Fix conversion for signed remainder
Per Vulkan's SPIR-V environment spec, "for the OpSRem and OpSMod
instructions, if either operand is negative the result is undefined."
So we cannot directly use spv.SRem/spv.SMod if either operand can be
negative. Emulate it via spv.UMod.

Because the emulation uses spv.SNegate, this commit also defines
spv.SNegate.

Differential Revision: https://reviews.llvm.org/D83679
2020-07-13 16:15:31 -04:00
Benjamin Kramer 3bffe6022c [mlir][VectorOps] Lower vector.fma to llvm.fmuladd instead of llvm.fma
Summary:
These are semantically equivalent, but fmuladd allows decaying the op
into fmul+fadd if there is no fma instruction available. llvm.fma lowers
to scalar calls to libm fmaf, which is a lot slower.
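
A rough before/after sketch (hypothetical IR; LLVM dialect type spellings of the time are approximate):
```
// The vector dialect op ...
%0 = vector.fma %a, %b, %c : vector<8xf32>
// ... now lowers to the fmuladd intrinsic rather than llvm.intr.fma.
%1 = "llvm.intr.fmuladd"(%la, %lb, %lc)
    : (!llvm<"<8 x float>">, !llvm<"<8 x float>">, !llvm<"<8 x float>">) -> !llvm<"<8 x float>">
```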

Reviewers: nicolasvasilache, aartbik, ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, Kayjukh, jurahul, msifontes

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D83666
2020-07-13 12:26:03 +02:00
Frederik Gossen 9df6afbb5c [MLIR][Shape] Lower `shape.any`
Lower `shape.any` to its first operand.

Differential Revision: https://reviews.llvm.org/D83123
2020-07-13 08:30:05 +00:00
River Riddle 572c2905ae [mlir][ODS] Add support for specifying the namespace of an interface.
The namespace can be specified using the `cppNamespace` field. This matches the functionality already present on dialects, enums, etc. This fixes problems with using interfaces on operations in a different namespace than the interface was defined in.

Differential Revision: https://reviews.llvm.org/D83604
2020-07-12 14:18:32 -07:00
Mehdi Amini 44b0b7cf66 Fix one memory leak in the MLIRParser by using std::unique_ptr to hold the new block pointer
This is NFC when there is no parsing error.

Differential Revision: https://reviews.llvm.org/D83619
2020-07-11 20:05:37 +00:00
Mehdi Amini 3b04af4d84 Fix some memory leak in MLIRContext with respect to registered types/attributes interfaces
Differential Revision: https://reviews.llvm.org/D83618
2020-07-11 20:05:29 +00:00
Yash Jain 102828249c [MLIR] Parallelize affine.for op to 1-D affine.parallel op
Introduce a pass to convert parallel affine.for ops into 1-D affine.parallel ops.
Run it using --affine-parallelize. Removes test-detect-parallel, the pass for checking
parallel affine.for ops.
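
A minimal sketch of the rewrite (hypothetical IR; memref names and bounds invented):
```
// A loop detected as parallel ...
affine.for %i = 0 to 128 {
  %v = affine.load %A[%i] : memref<128xf32>
  affine.store %v, %B[%i] : memref<128xf32>
}
// ... is rewritten into a 1-D affine.parallel op.
affine.parallel (%i) = (0) to (128) {
  %v = affine.load %A[%i] : memref<128xf32>
  affine.store %v, %B[%i] : memref<128xf32>
}
```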

Signed-off-by: Yash Jain <yash.jain@polymagelabs.com>

Differential Revision: https://reviews.llvm.org/D83193
2020-07-11 21:33:25 +05:30
Thomas Raoux 6d5aeb0dce [mlir][linalg] Improve aliasing approximation for hoisting transfer read/write
Improve the logic deciding if it is safe to hoist vector transfer read/write
out of the loop. Change the logic to prevent hoisting operations if there is
any unknown access to the memref in the loop, no matter where the operation is.
For other transfer reads/writes in the loop, check whether we can prove that they
access disjoint memory and ignore them in that case.

Differential Revision: https://reviews.llvm.org/D83538
2020-07-10 14:55:04 -07:00