Commit Graph

1023 Commits

Author SHA1 Message Date
Lei Zhang 5b7619c90b [mlir] Fix scf.for single iteration canonicalization check
We should check whether lb + step >= ub to determine
whether this is a single iteration. Previously we were
checking lb + lb >= ub.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D95440
2021-02-02 18:30:50 -05:00
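For context, a minimal sketch of the kind of loop this canonicalization recognizes (the constants and the `test.use` op below are made up for illustration, not taken from the patch):

```mlir
%c0 = constant 0 : index
%c4 = constant 4 : index
// lb + step >= ub here (0 + 4 >= 4), so the loop executes exactly once and can
// be replaced by its body with the induction variable substituted by %c0.
scf.for %iv = %c0 to %c4 step %c4 {
  "test.use"(%iv) : (index) -> ()
}
```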
Lei Zhang a2e791e396 Revert "[mlir] Fix scf.for single iteration canonicalization check"
This reverts commit b2b35697dc.
It got accidentally landed before LGTM.
2021-02-02 11:13:39 -05:00
Lei Zhang e901188cf9 [mlir][spirv] Define spv.VectorShuffle
This patch adds basic op definition, parser/printer, and verifier.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D95825
2021-02-02 11:08:56 -05:00
Lei Zhang b2b35697dc [mlir] Fix scf.for single iteration canonicalization check
We should check whether lb + step >= ub to determine
whether this is a single iteration. Previously we were
checking lb + lb >= ub.

Differential Revision: https://reviews.llvm.org/D95440
2021-02-02 11:08:56 -05:00
Nicolas Vasilache 8fce22888b [mlir][Linalg] Fix and properly test CodegenStrategy API
Fix a bug that was introduced where calling the codegen strategy with actual concrete C++ Op types did not trigger the expected behavior.
Also introduce a test for this behavior, which was missing.

Differential Revision: https://reviews.llvm.org/D95863
2021-02-02 13:01:12 +00:00
Nicolas Vasilache 0a2a260aab [mlir][Linalg] Refactor Linalg vectorization for better reuse and extensibility.
This revision unifies Linalg vectorization and paves the way for vectorization of Linalg ops with mixed-precision operations.
The new algorithm traverses the ops in the linalg block in order and avoids recursion.
It uses a BlockAndValueMapping to keep track of vectorized operations.

The revision makes the following modifications but is otherwise NFC:
1. vector.transfer_read ops are created eagerly and may appear in a different order than in the original code.
2. A more progressive vectorization to vector.contract results in only the multiply operation being converted to `vector.contract %a, %b, %zero`, where `%zero` is a
constant of the proper type. Later vector canonicalizations are assumed to rewrite `vector.contract %a, %b, %zero` + add to a proper accumulate form (see the sketch below).

Differential revision: https://reviews.llvm.org/D95797
2021-02-02 11:31:09 +00:00
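As a rough illustration of item 2 above, the sketch below (with made-up values %a, %b, and %acc) shows the multiply-only contraction followed by a separate add, which later canonicalizations are expected to fold into an accumulating contraction:

```mlir
#matA = affine_map<(m, n, k) -> (m, k)>
#matB = affine_map<(m, n, k) -> (k, n)>
#matC = affine_map<(m, n, k) -> (m, n)>
// %zero is a constant of the proper type; the contract only performs the multiply.
%zero = constant dense<0.0> : vector<4x4xf32>
%mul = vector.contract {indexing_maps = [#matA, #matB, #matC],
                        iterator_types = ["parallel", "parallel", "reduction"]}
    %a, %b, %zero : vector<4x4xf32>, vector<4x4xf32> into vector<4x4xf32>
// The accumulation stays as a separate add until canonicalization folds it in.
%acc_next = addf %mul, %acc : vector<4x4xf32>
```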
MaheshRavishankar 342d4662e1 [mlir] Add custom directive hooks for printing mixed integer or value operands.
Add printer and parser hooks for a custom directive that allows
parsing and printing of idioms that can represent a list of values
each of which is either an integer or an SSA value. For example in

`subview %source[%offset_0, 1] [4, %size_1] [%stride_0, 3]`

each list (which represents offsets, sizes, and strides) is a mix
of statically known integer values and dynamically computed SSA
values. Since this idiom is used in many places, adding a custom
directive to parse/print it allows using the assembly format on
operations that use it.

Differential Revision: https://reviews.llvm.org/D95773
2021-02-01 19:03:49 -08:00
Hanhan Wang b3f611bfe7 [mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding
This is the last revision in migrating from SimplePadOp to PadTensorOp; the
SimplePadOp is removed in this patch. Update SliceAnalysis slightly because
PadTensorOp takes a region, unlike SimplePadOp, and is not covered by
LinalgOp because it is not a structured op.

Also, remove a duplicated comment from the .cpp file that is already present in a
header file, and update the pseudo-MLIR in the comment.

This is the same as D95615 but fixes one dependency in CMakeLists.txt.

Different from D95671, the fix was applied to the run target.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D95785
2021-02-01 11:38:43 -08:00
Tres Popp 2790cbedd0 Revert "[mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding"
This reverts commit d9b953d84b.

This commit resulted in build bot failures and the author is away from a
computer, so I am reverting on their behalf until they have a chance to
look into this.
2021-02-01 09:43:55 +01:00
Hanhan Wang d9b953d84b [mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding
This is the last revision in migrating from SimplePadOp to PadTensorOp; the
SimplePadOp is removed in this patch. Update SliceAnalysis slightly because
PadTensorOp takes a region, unlike SimplePadOp, and is not covered by
LinalgOp because it is not a structured op.

Also, remove a duplicated comment from the .cpp file that is already present in a
header file, and update the pseudo-MLIR in the comment.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D95671
2021-02-01 00:02:37 -08:00
Matthias Springer 5ec59f021c [mlir][AVX512] Fix result type of vp2intersect
The result values of vp2intersect are vectors of bits, i.e.,
vector<8xi1> or vector<16xi1> (instead of i8 or i16).

Differential Revision: https://reviews.llvm.org/D95678
2021-01-31 12:03:46 +09:00
Tres Popp 0c5e4a25ee [mlir] Prevent segfault in Tensor canonicalization
This segfault could occur from out of bounds accesses when simplifying
tensor.extract with a constant index and a tensor created by
tensor.from_elements.

This IR is not necessarily invalid, as it might conditionally never
be executed.

Differential Revision: https://reviews.llvm.org/D95535
2021-01-29 10:57:58 +01:00
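A minimal sketch of the problematic pattern (values and types made up here): the constant index is out of bounds for the two-element tensor, so the folder must not blindly index into the element list.

```mlir
%c5 = constant 5 : index
%t = tensor.from_elements %a, %b : tensor<2xf32>
// Out-of-bounds constant index: folding this extract must not segfault, since
// the IR may sit in a branch that never executes at runtime.
%e = tensor.extract %t[%c5] : tensor<2xf32>
```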
MaheshRavishankar 98835e3d98 [mlir][Linalg] Enable TileAndFusePattern to work with tensors.
Differential Revision: https://reviews.llvm.org/D94531
2021-01-28 14:13:01 -08:00
Hanhan Wang 2c7cc5fd20 Revert "[mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding"
This reverts commit 1e790b745d.

Differential Revision: https://reviews.llvm.org/D95636
2021-01-28 11:25:02 -08:00
Hanhan Wang 1e790b745d [mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding
This is the last revision in migrating from SimplePadOp to PadTensorOp; the
SimplePadOp is removed in this patch. Update SliceAnalysis slightly because
PadTensorOp takes a region, unlike SimplePadOp, and is not covered by
LinalgOp because it is not a structured op.

Also, remove a duplicated comment from the .cpp file that is already present in a
header file, and update the pseudo-MLIR in the comment.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D95615
2021-01-28 11:09:57 -08:00
Aart Bik 8af0ccf5a4 [sparse][mlir] give all sparse kernels an explicit "output" tensor
Rationale:
Providing an output tensor, even if it is not used as an input to
the kernel, provides the right pattern for using linalg sparse
kernels (in contrast with reusing a tensor just to provide the shape).
This prepares for the proper bufferization that will follow.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D95587
2021-01-28 10:41:17 -08:00
Alex Zinenko d6be277347 [mlir] turn complex-to-llvm into a partial conversion
It is no longer necessary to also convert other "standard" ops along with the
complex dialect: the element types are now built-in integers or floating point
types, and the top-level cast between complex and struct is automatically
inserted and removed in progressive lowering.

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D95625
2021-01-28 19:14:01 +01:00
Hanhan Wang 469096d18e [mlir][Linalg] Fix tests in tile-and-pad
The check match in D95555 was wrong; this patch fixes it.

Differential Revision: https://reviews.llvm.org/D95618
2021-01-28 07:59:33 -08:00
Hanhan Wang c818fa6729 [mlir][Linalg] Replace SimplePad with PadTensor in tile-and-pad
This revision creates a build method for PadTensorOp that can be mapped to
SimplePadOp. The verifier is updated to accept a static custom result type,
which has the same semantics as SimplePadOp.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D95555
2021-01-28 06:50:26 -08:00
Nicolas Vasilache 8900acc796 [mlir][Linalg] Reenable test that was mistakenly disabled 2021-01-28 13:25:59 +00:00
Nicolas Vasilache d0c9fb1b8e [mlir][Linalg] Improve codegen strategy
This revision improves the usability of the codegen strategy by adding a few flags that
make it easier to control from the CLI.
Usage of ModuleOp is replaced by FuncOp, as the former created issues in multi-threaded mode.

A simple benchmarking capability is added for linalg.matmul as well as linalg.matmul_column_major.
This latter op is also added to linalg.

Now-obsolete linalg integration tests that also take too long are deleted.

Correctness checks are still missing at this point.

Differential revision: https://reviews.llvm.org/D95531
2021-01-28 10:59:16 +00:00
Tres Popp bc8d8e69a6 [mlir] Fold shape.eq %a, %a to true
Differential Revision: https://reviews.llvm.org/D95430
2021-01-27 16:22:15 +01:00
Nicolas Vasilache 5133673df4 [mlir] Extend semantic of OffsetSizeAndStrideOpInterface.
OffsetSizeAndStrideOpInterface ops now have the ability to specify only a leading subset of
the offset, size, and stride operands/attributes.
The size of that leading subset must be limited by the corresponding entry in `getArrayAttrMaxRanks` to avoid overflows.
Missing trailing dimensions are assumed to span the whole range (i.e. [0 .. dim)).
This brings more natural semantics to slice-like ops on top of subview and simplifies removing all uses of SliceOp in dependent projects.

Differential revision: https://reviews.llvm.org/D95441
2021-01-27 09:02:35 +00:00
MaheshRavishankar 7c15e0f64c [mlir][Linalg] Add canonicalization for init_tensor -> subtensor op.
Differential Revision: https://reviews.llvm.org/D95305
2021-01-26 23:22:28 -08:00
Eugene Zhulenev 25f80e16d1 [mlir] Async: add a separate pass to lower from async to async.coro and async.runtime
Depends On D95000

Move async.execute outlining and the async -> async.runtime lowering into a separate Async transformation pass

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D95311
2021-01-26 03:33:20 -08:00
Eugene Zhulenev 2f7baffdc1 [mlir:async] Use ODS to define async types
Depends On D94923

Migrate Async dialect to ODS `TypeDef`

Reviewed By: ftynse, rriddle

Differential Revision: https://reviews.llvm.org/D95000
2021-01-26 02:37:50 -08:00
Matthias Springer 90ebc489de Add vp2intersect to AVX512 dialect.
Adds vp2intersect to the AVX512 dialect and defines a lowering to the
LLVM dialect.

Author: Matthias Springer <springerm@google.com>

Differential Revision: https://reviews.llvm.org/D95301
2021-01-26 07:32:26 +00:00
Eugene Zhulenev 9c53b8e52e [mlir:Async] Add intermediate async.coro and async.runtime operations to simplify Async to LLVM lowering
[NFC] No new functionality, mostly a cleanup and one more abstraction level between Async and LLVM IR.

Instead of lowering from Async to LLVM coroutines and Async Runtime API in one shot, do it progressively via async.coro and async.runtime operations.

1. Lower from async to async.runtime/coro (e.g. async.execute to function with coro setup and runtime calls)
2. Lower from async.runtime/coro to LLVM intrinsics and runtime API calls

Intermediate coro/runtime operations will allow running transformations on higher-level IR without trying to match IR based on LLVM::CallOp properties.

Although async.coro is very close to LLVM coroutines, it is not exactly the same API; instead, it is optimized for usability in async lowering and omits many details that are present in the @llvm.coro intrinsics.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D94923
2021-01-25 14:04:33 -08:00
Nicolas Vasilache 05d5125d8a [mlir] Generalize OpFoldResult usage in ops with offsets, sizes and strides.
This revision starts evolving the APIs to manipulate ops with offsets, sizes and strides towards a ValueOrAttr abstraction that is already used in folding under the name OpFoldResult.

The objective, in the future, is to allow such manipulations all the way to the level of ODS to avoid all the genuflexions involved in distinguishing between values and attributes for generic constant foldings.

Once this evolution is accepted, the next step will be a mechanical OpFoldResult -> ValueOrAttr.

Differential Revision: https://reviews.llvm.org/D95310
2021-01-25 14:17:03 +00:00
Nicolas Vasilache 68eee55ce6 [mlir][Linalg] Address missed review item
This revision addresses a remaining comment that was overlooked in https://reviews.llvm.org/D95243:
the pad hoisting transformation is made to additionally bail out on side effecting ops other than LoopLikeOps.
2021-01-25 13:47:44 +00:00
Nicolas Vasilache dbf9bedf40 [mlir][Linalg] Add a hoistPaddingOnTensors transformation
This transformation anchors on a padding op whose result is only used as an input
to a Linalg op and pulls it out of a given number of loops.
The result is a packing of padded tiles of ops that is amortized just before
the outermost loop from which the pad operation is hoisted.

Differential revision: https://reviews.llvm.org/D95243
2021-01-25 12:41:18 +00:00
Nicolas Vasilache 3747eb9c85 [mlir][Linalg] Add a padding option to Linalg tiling
This revision allows the base Linalg tiling pattern to optionally require padding to
a constant bounding shape.
When requested, a simple analysis is performed, similar to buffer promotion.
A temporary `linalg.simple_pad` op is added to model padding for the purpose of
connecting the dots. This will be replaced by a more fleshed out `linalg.pad_tensor`
op when it is available.
In the meantime, this temporary op serves the purpose of exhibiting the
properties required from a more fleshed-out pad op in order to compose with
transformations properly.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D95149
2021-01-25 09:17:30 +00:00
MaheshRavishankar 6e8ef3b76a [mlir][Linalg] Make Fill operation work on tensors.
Depends on D95109
2021-01-22 14:39:27 -08:00
MaheshRavishankar 430d43e010 [mlir][Linalg] Disable fusion of tensor_reshape op by expansion when unit-dims are involved
Fusion of generic/indexed_generic operations with tensor_reshape by
expansion when the latter just adds/removes unit-dimensions is
disabled since it just adds unit-trip count loops.

Differential Revision: https://reviews.llvm.org/D94626
2021-01-22 12:55:25 -08:00
MaheshRavishankar 01defcc8d7 [mlir][Linalg] Extend tile+fuse to work on Linalg operation on tensors.
Differential Revision: https://reviews.llvm.org/D93086
2021-01-22 11:33:35 -08:00
MaheshRavishankar bce318f58d [mlir][Linalg] NFC: Refactor LinalgDependenceGraphElem to allow
representing dependence from producer result to consumer.

With Linalg on tensors the dependence between operations can be from
the result of the producer to the consumer. This change just does an
NFC refactoring of the LinalgDependenceGraphElem to allow representing
both OpResult and OpOperand*.

Differential Revision: https://reviews.llvm.org/D95208
2021-01-22 11:19:59 -08:00
Lei Zhang e27197f360 [mlir][spirv] Define spv.IsNan/spv.IsInf and add lowerings
spv.Ordered/spv.Unordered are meant for OpenCL Kernel capability.
For Vulkan Shader capability, we should use spv.IsNan to check
whether a number is NaN.

Add a new pattern for converting `std.cmpf ord|uno` to spv.IsNan
and bump the pattern converting to spv.Ordered/spv.Unordered
to a higher benefit. The SPIR-V target environment will properly
select between these two patterns.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D95237
2021-01-22 13:09:33 -05:00
Hanhan Wang 16d4bbef30 [mlir][Linalg] Introduce linalg.pad_tensor op.
`linalg.pad_tensor` is an operation that pads the `source` tensor
with given `low` and `high` padding config.

Example 1:

```mlir
  %pad_value = ... : f32
  %1 = linalg.pad_tensor %0 low[1, 2] high[2, 3] {
  ^bb0(%arg0 : index, %arg1 : index):
    linalg.yield %pad_value : f32
  } : tensor<?x?xf32> to tensor<?x?xf32>
```

Example 2:
```mlir
  %pad_value = ... : f32
  %1 = linalg.pad_tensor %arg0 low[2, %arg1, 3, 3] high[3, 3, %arg1, 2] {
  ^bb0(%arg2: index, %arg3: index, %arg4: index, %arg5: index):
    linalg.yield %pad_value : f32
  } : tensor<1x2x2x?xf32> to tensor<6x?x?x?xf32>
```

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D93704
2021-01-21 22:09:28 -08:00
River Riddle c78219f644 [mlir] Add a new builtin `unrealized_conversion_cast` operation
An `unrealized_conversion_cast` operation represents an unrealized conversion
from one set of types to another, and is used to enable the intermixing of
different type systems. This operation should not be attributed any special
representational or execution semantics, and is generally only intended to be
used to satisfy the temporary intermixing of type systems during the conversion
of one type system to another.

This operation was discussed in the following RFC(and ODM):

https://llvm.discourse.group/t/open-meeting-1-14-dialect-conversion-and-type-conversion-the-question-of-cast-operations/

Differential Revision: https://reviews.llvm.org/D94832
2021-01-20 16:28:18 -08:00
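A minimal sketch of the op in use (the dialect types below are placeholders, not real dialects): the cast temporarily bridges two type systems during conversion and is expected to cancel out and disappear once both sides have been converted.

```mlir
// %0 has a type from the not-yet-converted system; the cast exposes it to
// consumers that already expect the converted type.
%1 = unrealized_conversion_cast %0 : !dialect_a.my_type to !dialect_b.my_type
```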
Aart Bik b5c542d64b [mlir][sparse] add narrower choices for pointers/indices
Use cases with 16- or even 8-bit pointer/index structures have been identified.

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D95015
2021-01-19 20:20:38 -08:00
Sean Silva be7352c00d [mlir][splitting std] move 2 more ops to `tensor`
- DynamicTensorFromElementsOp
- TensorFromElements

Differential Revision: https://reviews.llvm.org/D94994
2021-01-19 13:49:25 -08:00
Lei Zhang 3a56a96664 [mlir][spirv] Define spv.GLSL.Fma and add lowerings
Also changes some rewriter.create + rewriter.replaceOp calls
into rewriter.replaceOpWithNewOp calls.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D94965
2021-01-19 09:14:21 -05:00
Nicolas Vasilache 93a873dfc9 [mlir][Affine] Revisit and simplify composeAffineMapAndOperands.
In prehistorical times, AffineApplyOp was allowed to produce multiple values.
This allowed the creation of intricate SSA use-def chains.
AffineApplyNormalizer was originally introduced as a means of reusing the AffineMap::compose method to write SSA use-def chains.
Unfortunately, symbols that were produced by an AffineApplyOp needed to be promoted to dims and reordered for the mathematical composition to be valid.

Since then, the single-result AffineApplyOp became the law of the land, but the original assumptions were not revisited.

This revision revisits these assumptions and retires AffineApplyNormalizer.

Differential Revision: https://reviews.llvm.org/D94920
2021-01-19 13:52:07 +00:00
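A small sketch of the kind of chain such composition collapses (made-up maps and operand %i):

```mlir
// Two chained single-result affine.apply ops ...
%a = affine.apply affine_map<(d0) -> (d0 + 1)>(%i)
%b = affine.apply affine_map<(d0) -> (d0 * 2)>(%a)
// ... compose into a single application on the original operand:
//   %b = affine.apply affine_map<(d0) -> (d0 * 2 + 2)>(%i)
```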
Alexander Belyaev 11f4c58c15 [mlir] Add `complex.abs`, `complex.div` and `complex.mul` to ComplexOps.
Differential Revision: https://reviews.llvm.org/D94911
2021-01-19 12:09:59 +01:00
MaheshRavishankar d7bc3b7ce2 [mlir][Linalg] Add missing check to canonicalization of GenericOp that are identity ops.
The operation is an identity if the values yielded by the operation
are the arguments of its basic block. Add this missing check.

Differential Revision: https://reviews.llvm.org/D94819
2021-01-15 13:55:35 -08:00
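A sketch of what such an identity op looks like (shapes and values are made up): the indexing maps are identities and the region simply yields its input block argument, and only when both conditions hold can the result fold to %arg0.

```mlir
#id = affine_map<(d0, d1) -> (d0, d1)>
%0 = linalg.generic {indexing_maps = [#id, #id],
                     iterator_types = ["parallel", "parallel"]}
    ins(%arg0 : tensor<?x?xf32>) outs(%init : tensor<?x?xf32>) {
  ^bb0(%in: f32, %out: f32):
    // Yields the input block argument: together with the identity maps this
    // makes the whole op an identity that can be replaced by %arg0.
    linalg.yield %in : f32
} -> tensor<?x?xf32>
```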
Alexander Belyaev d0cb0d30a4 [mlir] Add Complex dialect.
Differential Revision: https://reviews.llvm.org/D94764
2021-01-15 19:58:10 +01:00
Valentin Clement cf0173de69 [mlir] Add better support for f80 and f128
Add builtin f80 and f128 types following @schweitz's proposal:
https://llvm.discourse.group/t/rfc-adding-better-support-for-higher-precision-floating-point/2526/5

Reviewed By: ftynse, rriddle

Differential Revision: https://reviews.llvm.org/D94737
2021-01-15 10:29:48 -05:00
Aart Bik 5508516b06 [mlir][sparse] retry sparse-only for cyclic iteration graphs
This is a very minor improvement during iteration graph construction.
If the first attempt considering the dimension order of all tensors fails,
a second attempt is made using the constraints of sparse tensors only.
Dense tensors prefer dimension order (locality) but provide random access
if needed, enabling the compilation of more sparse kernels.

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D94709
2021-01-14 22:39:29 -08:00
MaheshRavishankar 42444d0cf0 [mlir][Linalg] NFC: Verify tiling on linalg.generic operation on tensors.
With the recent changes to linalg-on-tensors semantics, the tiling
transformation works out of the box for generic operations. Add a test to
verify that, along with some minor refactoring.

Differential Revision: https://reviews.llvm.org/D93077
2021-01-14 16:17:08 -08:00
MaheshRavishankar 774c9c6ef3 [mlir][Linalg] Add canonicalization of linalg op -> dim op.
Add a canonicalization to replace the use of the result of a linalg
operation on tensors in a dim operation with one of the operands of
the linalg operation instead. This allows the linalg op itself to be
deleted when all its non-dim uses are removed (say, through tiling, etc.).

Differential Revision: https://reviews.llvm.org/D93076
2021-01-14 16:17:08 -08:00
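A small sketch of the pattern (ops and shapes made up for illustration), assuming a linalg named op on tensors:

```mlir
%c0 = constant 0 : index
%0 = linalg.matmul ins(%a, %b : tensor<?x?xf32>, tensor<?x?xf32>)
                  outs(%init : tensor<?x?xf32>) -> tensor<?x?xf32>
// dim of the matmul result ...
%d = dim %0, %c0 : tensor<?x?xf32>
// ... canonicalizes to a dim of an operand with the same extent along that
// dimension, e.g. %d = dim %a, %c0 : tensor<?x?xf32>, so the matmul can be
// removed once its remaining uses disappear.
```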