Commit Graph

2668 Commits

Author SHA1 Message Date
Eugene Burmako 5638df1950 Introduce linalg.vecmat
This patch adds a new named structured op to accompany linalg.matmul and
linalg.matvec. We needed it for our codegen, so I figured it would be useful
to add it to Linalg.
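
As a rough illustration (plain C++, not the Linalg op or its syntax), the computation a vector-matrix multiply expresses is y[j] = sum_i x[i] * A[i][j]:

```
#include <cstddef>
#include <vector>

// Conceptual sketch only, not the Linalg named op: y[j] = sum_i x[i] * A[i][j].
std::vector<float> vecmat(const std::vector<float> &x,
                          const std::vector<std::vector<float>> &A) {
  std::vector<float> y(A.empty() ? 0 : A[0].size(), 0.0f);
  for (std::size_t i = 0; i < A.size(); ++i)
    for (std::size_t j = 0; j < A[i].size(); ++j)
      y[j] += x[i] * A[i][j];
  return y;
}
```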

Reviewed By: nicolasvasilache, mravishankar

Differential Revision: https://reviews.llvm.org/D87292
2020-09-10 18:48:14 +02:00
Frederik Gossen 018f6936db [MLIR][Standard] Simplify `tensor_from_elements`
Define assembly format and add required traits.

Differential Revision: https://reviews.llvm.org/D87366
2020-09-10 14:42:51 +00:00
aartbik 3c42c0dcf6 [mlir] [VectorOps] Enable 32-bit index optimizations
Rationale:
After some discussion we decided that it is safe to assume 32-bit
indices for all subscripting in the vector dialect (it is unlikely
the dialect will be used, or would even work, for such long vectors).
So rather than detecting specific situations that can exploit
32-bit indices with higher SIMD parallelism, we just optimize this way
by default and let users that don't want it opt out.

Reviewed By: nicolasvasilache, bkramer

Differential Revision: https://reviews.llvm.org/D87404
2020-09-10 00:26:27 -07:00
Sean Silva be35264ab5 Wordsmith RegionBranchOpInterface verification errors
I was having a lot of trouble parsing the messages, in particular
messages like:

```
<stdin>:3:8: error: 'scf.if' op  along control flow edge from Region #0 to scf.if source #1 type '!npcomprt.tensor' should match input #1 type 'tensor<?xindex>'
```

One thing that kept catching me was parsing "to scf.if
source #1 type" as a single unit, when really it means
"to parent results: source type #1".

Differential Revision: https://reviews.llvm.org/D87334
2020-09-09 12:50:23 -07:00
Jakub Lichman 53ffeea6d5 [mlir][Linalg] Reduction dimensions specified in TC definition of ConvOps.
This commit specifies reduction dimensions for ConvOps. This prevents
reduction loops from being run in parallel and makes it easier to detect kernel
dimensions, which we will need later on.

Differential Revision: https://reviews.llvm.org/D87288
2020-09-09 15:17:07 +00:00
Marcel Koester feb0b9c3bb [mlir] Added support for loops to BufferPlacement transformation.
The current BufferPlacement transformation cannot handle loops properly. Buffers
passed via backedges will not be freed automatically, which introduces memory leaks.
This CL adds support for loops to overcome these limitations.
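
A minimal sketch of the leak pattern in question, written as plain C++ rather than MLIR: a buffer flows around a loop backedge and is replaced by a fresh allocation each iteration, so the old one must be freed inside the loop.

```
#include <cstdlib>

// Conceptual sketch, not MLIR code: without the free inside the loop,
// every iteration leaks the buffer carried in from the previous one.
void loopWithBackedgeBuffer(int n) {
  float *buf = static_cast<float *>(std::malloc(16 * sizeof(float)));
  for (int i = 0; i < n; ++i) {
    float *next = static_cast<float *>(std::malloc(16 * sizeof(float)));
    std::free(buf);  // the deallocation the transformation has to insert
    buf = next;      // the value carried around the backedge
  }
  std::free(buf);
}
```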

Differential Revision: https://reviews.llvm.org/D85513
2020-09-09 10:53:35 +02:00
Frederik Gossen 5106a8b8f8 [MLIR][Shape] Lower `shape_of` to `dynamic_tensor_from_elements`
Take advantage of the new `dynamic_tensor_from_elements` operation in `std`.
Instead of stack-allocated memory, we can now lower directly to a single `std`
operation.

Differential Revision: https://reviews.llvm.org/D86935
2020-09-09 07:55:13 +00:00
Frederik Gossen 133322d2e3 [MLIR][Standard] Update `tensor_from_elements` assembly format
Remove the redundant parentheses, which are not used in any of the other operation
formats.

Differential Revision: https://reviews.llvm.org/D86287
2020-09-09 07:45:46 +00:00
Lubomir Litchev e2394245eb Add an option for unrolling loops up to a factor.
Currently, there is no option to unroll a loop up to a factor specified by the user.
The code for doing so is already there, and there are benefits when unrolling is applied to smaller loops (loops with trip counts smaller than the specified factor).
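
A minimal sketch of the policy, assuming the intent is to fully unroll loops whose trip count is below the requested factor and to unroll by the requested factor otherwise (illustration only, not the pass's code):

```
#include <algorithm>

// Conceptual sketch of "unroll up to a factor": small loops are fully
// unrolled instead of being left untouched.
unsigned chooseUnrollFactor(unsigned requestedFactor, unsigned tripCount) {
  return std::min(requestedFactor, tripCount);
}
```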

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D87111
2020-09-08 09:23:38 -07:00
Ehsan Toosi 4e9f4d0b9d [mlir] Fix bug in copy removal
A crash could happen due to copy removal. The bug is fixed and two more
test cases are added.

Differential Revision: https://reviews.llvm.org/D87128
2020-09-08 14:17:13 +02:00
Ehsan Toosi 847299d3f0 [mlir] remove BufferAssignmentPlacer from BufferAssignmentOpConversionPattern
BufferPlacement has been removed, as allocations are no longer placed during the conversion.

Differential Revision: https://reviews.llvm.org/D87079
2020-09-08 13:04:22 +02:00
Benjamin Kramer 239eff502b [mlir][VectorOps] Redo the scalar loop emission in VectorToSCF to pad instead of clipping
This replaces the select chain for edge-padding with an scf.if that
performs the memory operation when the index is in bounds and uses the
pad value when it's not. For transfer_write the same mechanism is used,
skipping the store when the index is out of bounds.

The integration test has a bunch of cases of how I believe this should
work.
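
A minimal sketch contrasting the two edge-handling strategies, written as plain C++ rather than the actual lowering:

```
// Conceptual sketch only: reading element i of a buffer with n elements.
float readClipped(const float *buf, int i, int n) {
  int clamped = i < 0 ? 0 : (i >= n ? n - 1 : i);  // old select-chain clipping
  return buf[clamped];
}

float readPadded(const float *buf, int i, int n, float pad) {
  return (0 <= i && i < n) ? buf[i] : pad;  // new scf.if-style padding
}
```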

Differential Revision: https://reviews.llvm.org/D87241
2020-09-08 11:15:25 +02:00
Jakub Lichman 67b37f571c [mlir] Conv ops vectorization pass
This commit introduces a new way of lowering convolution ops.
The conv op vectorization pass lowers linalg convolution ops
into vector contractions. This lowering is possible when the conv op
is first tiled by 1 along specific dimensions, which transforms
it into a dot product between the input and kernel subview memory buffers.
The pass converts such a conv op into a vector contraction and inserts
all the vector transfers necessary to make it work.
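
A minimal sketch of the observation behind the pass, for a 1-D convolution with no batches or channels (plain C++, not the pass itself): once the output is tiled by 1, each output element is a dot product between an input slice and the kernel.

```
// Conceptual sketch only: one output element of a 1-D convolution is
// dot(input[outIdx .. outIdx + K), kernel), the shape a vector contraction
// can take over.
float convOutputAt(const float *input, const float *kernel, int kernelSize,
                   int outIdx) {
  float acc = 0.0f;
  for (int k = 0; k < kernelSize; ++k)
    acc += input[outIdx + k] * kernel[k];
  return acc;
}
```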

Differential Revision: https://reviews.llvm.org/D86619
2020-09-08 08:47:42 +00:00
Nicolas Vasilache 9be6178449 [mlir][Vector] Make VectorToSCF deterministic
Differential Revision: https://reviews.llvm.org/D87273
2020-09-08 04:18:22 -04:00
Mehdi Amini 63d1dc6665 Add a doc/tutorial on traversing the IR
Reviewed By: stephenneuendorffer

Differential Revision: https://reviews.llvm.org/D87221
2020-09-08 00:07:03 +00:00
Frederik Gossen a70f2eb3e3 [MLIR][Shape] Merge `shape` to `std`/`scf` lowerings.
Merge the two lowering passes because they are not useful by themselves. The new
pass lowers to `std`, and `scf` is considered an auxiliary dialect.

See also
https://llvm.discourse.group/t/conversions-with-multiple-target-dialects/1541/12

Differential Revision: https://reviews.llvm.org/D86779
2020-09-07 14:39:37 +00:00
David Truby 973800dc7c Revert "[MLIR][Shape] Merge `shape` to `std`/`scf` lowerings."
This reverts commit 15acdd7543.
2020-09-07 13:37:32 +01:00
Nicolas Vasilache 1c849ec40a [MLIR] Fix Win test due to partial order of CHECK directives
Differential Revision: https://reviews.llvm.org/D87230
2020-09-07 08:14:35 -04:00
Frederik Gossen 15acdd7543 [MLIR][Shape] Merge `shape` to `std`/`scf` lowerings.
Merge the two lowering passes because they are not useful by themselves. The new
pass lowers to `std`, and `scf` is considered an auxiliary dialect.

See also
https://llvm.discourse.group/t/conversions-with-multiple-target-dialects/1541/12

Differential Revision: https://reviews.llvm.org/D86779
2020-09-07 12:12:36 +00:00
Frederik Gossen 136eb79a88 [MLIR][Standard] Add `dynamic_tensor_from_elements` operation
With `dynamic_tensor_from_elements`, tensor values of dynamic size can be
created. The body of the operation essentially maps the index space to tensor
elements.
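
A minimal sketch of the op's semantics for the 1-D case, written as plain C++ rather than MLIR syntax:

```
#include <cstddef>
#include <functional>
#include <vector>

// Conceptual sketch only: build a dynamically sized tensor by evaluating a
// body once per index; the body plays the role of the op's region.
std::vector<float> dynamicTensorFromElements(
    std::size_t size, const std::function<float(std::size_t)> &body) {
  std::vector<float> result(size);
  for (std::size_t i = 0; i < size; ++i)
    result[i] = body(i);  // the region yields the element for index i
  return result;
}
```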

Declare SCF operations in the `scf` namespace to avoid name clash with the new
`std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield`
operations.

Differential Revision: https://reviews.llvm.org/D86276
2020-09-07 11:44:43 +00:00
Nicolas Vasilache 8d64df9f13 [mlir][Vector] Revisit VectorToSCF.
Vector to SCF conversion still had issues due to the interaction with the natural alignment derived from the LLVM data layout. One traditional workaround is to allocate aligned. However, this does not always work for vector sizes that are not powers of 2.

This revision implements a more portable mechanism where the intermediate allocation is always a memref of elemental vector type. AllocOp is extended to use the natural LLVM DataLayout alignment for non-scalar types when the alignment is not specified in the first place.

An integration test is added that exercises the transfer to scf.for + scalar lowering with a 5x5 transposition.

Differential Revision: https://reviews.llvm.org/D87150
2020-09-07 05:19:43 -04:00
Stella Laurenzo 7403e3ee32 Extend PyConcreteType to support intermediate base classes.
* Resolves todos from D87091.
* Also modifies PyConcreteAttribute to follow suit (should be useful for ElementsAttr and friends).
* Adds a test to ensure that the ShapedType base class functions as expected.

Differential Revision: https://reviews.llvm.org/D87208
2020-09-06 23:39:47 -07:00
zhanghb97 54d432aa6b [mlir] Add Shaped Type, Tensor Type and MemRef Type to python bindings.
Based on the PyType and PyConcreteType classes, this patch implements the bindings of Shaped Type, Tensor Type and MemRef Type subclasses.
The Tensor Type and MemRef Type are bound in their ranked and unranked forms separately.
This patch adds the ***GetChecked C API to make sure the Python side can get a valid type or a nullptr.
Shaped type is not a kind of standard type; it is the base class for vectors, memrefs and tensors. This patch binds the PyShapedType class as the base class of the Vector Type, Tensor Type and MemRef Type subclasses.

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D87091
2020-09-06 11:45:54 -07:00
Alex Zinenko aec9e20a3e [mlir] introduce type constraints for operands of LLVM dialect operations
Historically, the operations in the MLIR's LLVM dialect only checked that the
operands are of LLVM dialect type without more detailed constraints. This was
due to LLVM dialect types wrapping LLVM IR types and having clunky verification
methods. With the new first-class modeling, it is possible to define type
constraints similarly to other dialects and use them to enforce some
correctness rules in verifiers instead of having LLVM assert during translation
to LLVM IR. This hardening discovered several issues where MLIR was producing
LLVM dialect operations that cannot exist in LLVM IR.

Depends On D85900

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85901
2020-09-04 10:01:59 +02:00
aartbik 060c9dd1cc [mlir] [VectorOps] Improve SIMD compares with narrower indices
When allowed, use 32-bit indices rather than 64-bit indices in the
SIMD computation of masks. This runs up to 2x and 4x faster on
a number of AVX2 and AVX512 microbenchmarks.
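
A minimal sketch of the mask computation being narrowed (plain C++, not the actual lowering): with 32-bit indices, the vectors being compared are half as wide, so each SIMD compare covers more lanes.

```
#include <cstdint>
#include <vector>

// Conceptual sketch only: mask[i] = (i < bound). Narrower index elements
// mean more lanes fit in one SIMD compare.
std::vector<bool> makeMask(int32_t bound, int32_t numLanes) {
  std::vector<bool> mask(numLanes);
  for (int32_t i = 0; i < numLanes; ++i)
    mask[i] = i < bound;
  return mask;
}
```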

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D87116
2020-09-03 21:43:38 -07:00
Lei Zhang 8d420fb3a0 [spirv][nfc] Simplify resource limit with default values
These default values are taken from the Vulkan required limits.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D87090
2020-09-03 13:29:26 -04:00
Benjamin Kramer dfb7b3fe02 [mlir][VectorOps] Fall back to a loop when accessing a vector from a strided memref
The scalar loop is slow but correct.

Differential Revision: https://reviews.llvm.org/D87082
2020-09-03 16:05:38 +02:00
Zhibin Li 1e21ca4d25 [spirv] Add SPIR-V GLSL extended Round op
Reviewed By: mravishankar, antiagainst

Differential Revision: https://reviews.llvm.org/D86914
2020-09-03 09:42:35 -04:00
Ling, Liyang 2860b2c14b [mlir] Add Acos, Asin, Atan, Sinh, Cosh, Pow to SPIRVGLSLOps
Reviewed By: mravishankar, antiagainst

Differential Revision: https://reviews.llvm.org/D86929
2020-09-03 09:28:34 -04:00
Jakub Lichman 8d35080ebb [mlir][Linalg] Wrong tile size for convolutions fixed
Sizes of tiles (subviews) are bigger by 1 than they should be. Let's consider
a 1D convolution without batches or channels. Furthermore, let m iterate over
the output and n over the kernel; then the input is accessed with m + n. In tiling,
subview sizes for convolutions are computed by applying the requested tile size
together with the kernel size to the above-mentioned expression. Thus, for
a tile size of 2, the subview size is 2 + size(n), which is bigger by one
than it should be, since we move the kernel only once. The problem behind it is that
the range is not turned into a closed interval before the composition. This commit
fixes the problem by first turning ranges into closed intervals by subtracting
1 and, after the composition, turning them back to half-open intervals by adding 1.
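
A minimal sketch of the arithmetic: for an output tile of size T and a kernel of size K, the accessed input subview should span T + K - 1 elements, not T + K.

```
// Conceptual sketch only: compose the ranges as closed intervals, then go
// back to a half-open size.
int convInputSubviewSize(int tileSize, int kernelSize) {
  int lastOutputIdx = tileSize - 1;    // closed interval on the output tile
  int lastKernelIdx = kernelSize - 1;  // closed interval on the kernel
  return (lastOutputIdx + lastKernelIdx) + 1;  // T + K - 1
}
```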

Differential Revision: https://reviews.llvm.org/D86638
2020-09-03 06:01:21 +00:00
Artur Bialas d9b4245f56 [mlir][spirv] Add block read and write from SPV_INTEL_subgroups
Added support for OpSubgroupBlockReadINTEL and OpSubgroupBlockWriteINTEL.

Differential Revision: https://reviews.llvm.org/D86876
2020-09-02 20:06:59 -07:00
Diego Caballero 46781630a3 [MLIR][Affine][VectorOps] Vectorize uniform values in SuperVectorizer
This patch adds basic support for vectorization of uniform values to SuperVectorizer.
For now, only values invariant to the target vector loops are considered uniform. This
enables the vectorization of loops that use function arguments and definitions external
to the vector loops. We could extend uniform support in the future if we implement some
kind of divergence analysis algorithm.
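
A minimal sketch of what vectorizing a uniform value amounts to, assuming the usual strategy of materializing it as a single broadcast across the vector lanes (plain C++, not the SuperVectorizer code):

```
#include <array>

// Conceptual sketch only: a value invariant to the vector loop (e.g. a
// function argument) is splat once into every lane rather than treated as
// a per-iteration value.
std::array<float, 8> broadcastUniform(float a) {
  std::array<float, 8> lanes;
  lanes.fill(a);  // one broadcast covers every lane of the vector loop
  return lanes;
}
```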

Reviewed By: nicolasvasilache, aartbik

Differential Revision: https://reviews.llvm.org/D86756
2020-09-03 01:17:06 +03:00
Diego Caballero 553bfc8fa1 [mlir][Affine] Support affine vector loads/stores in LICM
Make use of affine memory op interfaces in AffineLoopInvariantCodeMotion so
that it can also work on affine.vector_load and affine.vector_store ops.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D86986
2020-09-03 00:43:24 +03:00
Diego Caballero 65f20ea113 [mlir][Affine] Fix AffineLoopInvariantCodeMotion
Make sure that memory ops that are defined inside the loop are registered
as such in 'defineOp'. In the test provided, the 'mulf' op was hoisted
outside the loop nest even when its 'affine.load' operand was not.
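
A minimal sketch of the legality rule the fix restores, written as plain C++ rather than the actual pass:

```
#include <set>
#include <vector>

struct Op {
  std::vector<const Op *> operands;
};

// Conceptual sketch only: an op may be hoisted out of a loop only if none
// of its operands are defined inside that loop.
bool isHoistable(const Op &op, const std::set<const Op *> &definedInLoop) {
  for (const Op *operand : op.operands)
    if (definedInLoop.count(operand))
      return false;
  return true;
}
```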

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D86982
2020-09-03 00:06:41 +03:00
Ehsan Toosi 39cf83cc78 [mlir] Extend BufferAssignmentTypeConverter with result conversion callbacks
In this PR, the users of BufferPlacement can configure
BufferAssignmentTypeConverter. These new configurations give the user more
freedom in the conversion of function signatures and of return and call
operations.

These are the new features:
    - Accepting callback functions for decomposing types (i.e. 1 to N type
    conversion such as unpacking tuple types).
    - Defining ResultConversionKind for specifying whether a function result
    with a certain type should be appended to the function arguments list or
    should be kept as function result. (Usage:
    converter.setResultConversionKind<MemRefType>(AppendToArgumentList))
    - Accepting callback functions for composing or decomposing values (i.e. N
    to 1 and 1 to N value conversion).

Differential Revision: https://reviews.llvm.org/D85133
2020-09-02 17:53:42 +02:00
Lei Zhang 1b88bbf5eb Revert "[mlir] Extend BufferAssignmentTypeConverter with result conversion callbacks"
This reverts commit 94f5d24877 because
it caused the following tests to fail:

MLIR :: Dialect/Linalg/tensors-to-buffers.mlir
MLIR :: Transforms/buffer-placement-preparation-allowed-memref-results.mlir
MLIR :: Transforms/buffer-placement-preparation.mlir
2020-09-02 09:24:36 -04:00
Jakub Lichman f5ed22f09d [mlir][VectorToSCF] 128 byte alignment of alloc ops
Added 128-byte alignment to alloc ops created in the VectorToSCF pass.
128b alignment was already introduced to this pass but not to all alloc
ops. This commit changes that by adding 128b alignment to the remaining ops.
The point of specifying alignment is to prevent possible memory alignment errors
on weakly tested architectures.
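
A minimal sketch of requesting an explicit alignment for a temporary buffer in plain C++; the pass itself attaches an alignment attribute to the alloc ops instead:

```
#include <cstdlib>

// Conceptual sketch only: allocate a scratch buffer at a 128-byte boundary.
float *allocAligned128(std::size_t numElems) {
  // std::aligned_alloc requires the size to be a multiple of the alignment.
  std::size_t bytes = ((numElems * sizeof(float) + 127) / 128) * 128;
  return static_cast<float *>(std::aligned_alloc(128, bytes));
}
```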

Differential Revision: https://reviews.llvm.org/D86454
2020-09-02 12:37:35 +00:00
Ehsan Toosi 94f5d24877 [mlir] Extend BufferAssignmentTypeConverter with result conversion callbacks
In this PR, the users of BufferPlacement can configure
BufferAssignmentTypeConverter. These new configurations give the user more
freedom in the conversion of function signatures and of return and call
operations.

These are the new features:
    - Accepting callback functions for decomposing types (i.e. 1 to N type
    conversion such as unpacking tuple types).
    - Defining ResultConversionKind for specifying whether a function result
    with a certain type should be appended to the function arguments list or
    should be kept as function result. (Usage:
    converter.setResultConversionKind<MemRefType>(AppendToArgumentList))
    - Accepting callback functions for composing or decomposing values (i.e. N
    to 1 and 1 to N value conversion).

Differential Revision: https://reviews.llvm.org/D85133
2020-09-02 13:26:55 +02:00
ZHANG Hongbin 1d99472875 [mlir] Add Complex Type, Vector Type and Tuple Type subclasses to python bindings
Based on the PyType and PyConcreteType classes, this patch implements the bindings of Complex Type, Vector Type and Tuple Type subclasses.
For the convenience of type checking, this patch defines a `mlirTypeIsAIntegerOrFloat` function to check whether the given type is an integer or float type.
The three subclasses in this patch follow a similar binding strategy:
- The function pointer `isaFunction` points to `mlirTypeIsA***`.
- The `mlir***TypeGet` C API is bound with the `get_***` method in the python side.
- The Complex Type and Vector Type check whether the given type is an integer or float type.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D86785
2020-09-02 05:46:00 +00:00
River Riddle 431bb8b318 [mlir][ODS] Use c++ types for integer attributes of fixed width when possible.
Unsigned and Signless attributes use uintN_t and signed attributes use intN_t, where N is the fixed width. The 1-bit variants use bool.

Differential Revision: https://reviews.llvm.org/D86739
2020-09-01 13:43:32 -07:00
Valentin Clement 2bbbcae782 [mlir][openacc] Add missing attributes and operands for acc.loop
This patch adds the missing operands to the acc.loop operation. Only the device_type
information is not yet part of the operation.

Reviewed By: rriddle, kiranchandramohan

Differential Revision: https://reviews.llvm.org/D86753
2020-08-31 19:50:05 -04:00
River Riddle eaeadce9bd [mlir][OpFormatGen] Add initial support for regions in the custom op assembly format
This adds some initial support for regions; it does not yet support formatting the specific arguments of a region. For now, this can be achieved by using a custom directive that formats the arguments and then parses the region.

Differential Revision: https://reviews.llvm.org/D86760
2020-08-31 13:26:24 -07:00
River Riddle 24b88920fe [mlir][ODS] Add new SymbolNameAttr and add support for it in assemblyFormat
Symbol names are a special form of StringAttr that get treated specially in certain areas, such as formatting. This revision adds a special derived attr for them in ODS and adds support in the assemblyFormat for formatting them properly.

Differential Revision: https://reviews.llvm.org/D86759
2020-08-31 13:26:23 -07:00
River Riddle 88c6e25e4f [mlir][OpFormatGen] Add support for specifying "custom" directives.
This revision adds support for custom directives to the declarative assembly format. This allows users to use C++ for printing and parsing subsections of an otherwise declaratively specified format. The custom directive is structured as follows:

```
custom-directive ::= `custom` `<` UserDirective `>` `(` Params `)`
```

`user-directive` is used as a suffix when this directive is used during printing and parsing. When parsing, `parseUserDirective` will be invoked; when printing, `printUserDirective` will be invoked. The first parameter to these methods must be a reference to either the OpAsmParser or the OpAsmPrinter. The types of the remaining parameters depend on the `Params` specified in the assembly format.

Differential Revision: https://reviews.llvm.org/D84719
2020-08-31 13:26:23 -07:00
Stella Laurenzo 2d1362e09a Add Location, Region and Block to MLIR Python bindings.
* This is just enough to create regions/blocks and iterate over them.
* Does not yet implement the preferred iteration strategy (python pseudo containers).
* Refinements need to come after doing basic mappings of operations and values so that the whole hierarchy can be used.

Differential Revision: https://reviews.llvm.org/D86683
2020-08-28 15:26:05 -07:00
Hanhan Wang eb4efa8832 [mlir][Linalg] Enhance Linalg fusion on generic op and tensor_reshape op.
The tensor_reshape op was fusible only in the collapsing case. Now we
propagate the op to all the operands, so there is a further chance to fuse it
with a generic op. The pre-conditions are:

1) The producer is not an indexed_generic op.
2) All the shapes of the operands are the same.
3) All the indexing maps are identity.
4) All the loops are parallel loops.
5) The producer has a single user.

It is possible to fuse the ops if the producer is an indexed_generic op. We
can still compute the original indices. E.g., if the reshape op collapses d0
and d1, we can use DimOp to get the width of d1, and calculate the index
`d0 * width + d1`. Then replace all the uses with it. However, this pattern is
not implemented in this patch.
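
A minimal sketch of that index recomputation in plain C++ (illustration only):

```
// Conceptual sketch only: when a reshape collapses d0 and d1 (with d1 of
// extent `width`), the original indices map to the collapsed index below.
int collapsedIndex(int d0, int d1, int width) {
  return d0 * width + d1;
}
```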

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D86314
2020-08-28 01:55:49 -07:00
Vincent Zhao 28a7dfa33d [MLIR] Fixed missing constraint append when adding an AffineIfOp domain
The prior diff that introduced `addAffineIfOpDomain` missed appending
constraints from the ifOp domain. This revision fixes this problem.

Differential Revision: https://reviews.llvm.org/D86421
2020-08-28 00:34:23 +05:30
Kiran Chandramohan 875074c8a9 [OpenMP][MLIR] Conversion pattern for OpenMP to LLVM
This adds a conversion pattern for the parallel operation. It helps
convert a parallel operation using the standard dialect into a
parallel operation using the LLVM dialect. The type conversion of the block
arguments in a parallel region is controlled by the pattern for the
parallel operation. Without this pattern, a parallel operation with
block arguments cannot be converted from the standard to the LLVM dialect.
Other OpenMP operations without regions are marked as legal. When
translation of OpenMP operations with regions is added, patterns
for those operations can also be added.
This also uses all the standard-to-LLVM patterns. Patterns for other dialects
can be added later if needed.
can be added later if needed.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D86273
2020-08-27 19:32:15 +01:00
Alexandre E. Eichenberger a14a2805b0 [MLIR] MemRef Normalization for Dialects
When dealing with dialects that will result in function calls to
external libraries, it is important to be able to handle maps, as some
dialects may require mapped data. Before this patch, to detect whether
normalization can apply or not, operations were compared against an
explicit list of operations (`alloc`, `dealloc`, `return`) or checked for the
presence of specific operation interfaces (`AffineReadOpInterface`,
`AffineWriteOpInterface`, `AffineDMAStartOp`, or `AffineDMAWaitOp`).

This patch adds a trait, `MemRefsNormalizable`, to determine whether an
operation can have its `memrefs` normalized.

This trait can be used in turn by dialects to assert that such
operations are compatible with normalization of `memrefs` with
a nontrivial memory layout specification. An example is given in the
lit tests.

Differential Revision: https://reviews.llvm.org/D86236
2020-08-27 20:26:59 +05:30
Frederik Gossen 3cb63073ea [MLIR][Shape] Fix typo
Differential Revision: https://reviews.llvm.org/D86606
2020-08-27 08:19:13 +00:00