Commit Graph

2456 Commits

Alex Zinenko 355216380b [mlir] Remove SDBM
This data structure and algorithm collection is no longer in use.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D105102
2021-06-29 14:46:26 +02:00
Butygin 293064222a [mlir] Add MemoryEffects::Allocate to memref::CloneOp
Without it, BufferDeallocationPass processes only CloneOps created during the pass itself and ignores all CloneOps that were already present in the IR.

For our specific usecase:

```
func @dealloc_existing_clones(%arg0: memref<?x?xf64>, %arg1: memref<?x?xf64>) -> memref<?x?xf64> {
  return %arg0 : memref<?x?xf64>
}
```

Input arguments will be freed immediately after returning from the function, and we want to prolong the lifetime of the returned argument.
To achieve this we explicitly add clones of all input memrefs and expect that BufferDeallocationPass will add the correct deallocs for them (unnecessary clone+dealloc pairs will be canonicalized away later).
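
A minimal sketch of the intended IR after the pass, assuming illustrative clone and dealloc placement (not taken verbatim from the patch):

```
func @dealloc_existing_clones(%arg0: memref<?x?xf64>, %arg1: memref<?x?xf64>) -> memref<?x?xf64> {
  %0 = memref.clone %arg0 : memref<?x?xf64> to memref<?x?xf64>
  %1 = memref.clone %arg1 : memref<?x?xf64> to memref<?x?xf64>
  // BufferDeallocationPass is expected to free the clone that is not returned.
  memref.dealloc %1 : memref<?x?xf64>
  return %0 : memref<?x?xf64>
}
```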

Differential Revision: https://reviews.llvm.org/D104973
2021-06-29 13:37:32 +03:00
Tobias Gysi a2a4bc561d [mlir][linalg] All StructuredOp parameters are inputs or outputs.
Adapt the StructuredOp verifier to ensure all operands are either in the input or the output group. The change is possible after adding support for scalar input operands (https://reviews.llvm.org/D104220).

Differential Revision: https://reviews.llvm.org/D104783
2021-06-29 07:45:50 +00:00
Alexander Belyaev d15663710c Revert "[mlir] Skip scalar operands when tiling to linalg.tiled_loop."
This reverts commit 69046b4a79. It did not
really break anything, but it was decided to allow scalars and other
non-shaped operands for tiled_loop.
2021-06-29 08:55:25 +02:00
harsh-nod 0d6e4199e3 [mlir][vector] Order parallel indices before transposing the input in multireductions
The current code does not preserve the order of the parallel
dimensions when doing multi-reductions and thus we can end
up in scenarios where the result shape does not match the
desired shape after reduction.

This patch fixes that by ensuring that the parallel indices
are in order and then concatenates them to the reduction dimensions
so that the reduction dimensions are innermost.

Differential Revision: https://reviews.llvm.org/D104884
2021-06-28 18:47:16 -07:00
Alexander Belyaev 69046b4a79 [mlir] Skip scalar operands when tiling to linalg.tiled_loop.
We are interested only in tensors/memrefs when creating a TiledLoopOp.

Differential Revision: https://reviews.llvm.org/D105059
2021-06-28 23:01:17 +02:00
William S. Moses 2ab27758d5 Revert "[MLIR][SCF] Inline ExecuteRegion if parent can contain multiple blocks"
This reverts commit 5d6240b77e.

The commit was mistakenly landed without a PR approval, this will be
reverted now and resubmitted.
2021-06-28 13:52:30 -04:00
Rob Suderman c7676d9993 [mlir][tosa] Update Tosa conv verifier to handle IntegerType input
Input/output types can be integers, which represent a quantized convolution.
Update verifier to expect this behavior.

Reviewed By: sjarus

Differential Revision: https://reviews.llvm.org/D104949
2021-06-28 10:18:45 -07:00
William S. Moses 5d6240b77e [MLIR][SCF] Inline ExecuteRegion if parent can contain multiple blocks
The execute_region op is used to allow multiple blocks within SCF constructs. If the container allows multiple blocks, inline the region.

Differential Revision: https://reviews.llvm.org/D104960
2021-06-28 13:09:22 -04:00
Stephan Herhut 88d5eba139 Revert "Revert "[mlir][memref] Implement lowering of memref.copy to llvm""
This reverts commit 7d6e589fc8.

Windows build was unbroken.
2021-06-28 18:48:00 +02:00
William S. Moses 44826ecd92 [MLIR] Correct memrefdataflow behavior in the presence of cast and other operations
MemRefDataFlow performs mem2reg style operations for affine load/stores. Unfortunately, it is not presently correct in the presence of external operations such as memref.cast, or function calls. This diff extends the functionality of the pass to remain correct in the presence of such ops.

Differential Revision: https://reviews.llvm.org/D104053
2021-06-28 12:23:29 -04:00
William S. Moses cccc7e5aa8 [MLIR] Don't remove memref allocation if stored into another allocation
A canonicalization will accidentally remove a memref allocation if it is only stored into. However, this is incorrect if the allocation is the value being stored, not the allocation being stored into.

Differential Revision: https://reviews.llvm.org/D104947
2021-06-28 12:05:59 -04:00
William S. Moses 35c0ab72fc [MLIR] Simplify select to a not
Given a select that returns the logical negation of the condition, replace it with a not of the condition.
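
A hedged sketch of the rewrite, assuming the standard-dialect select/xor/constant ops of the time (value names are illustrative):

```
// A select that yields the logical negation of its condition ...
%false = constant 0 : i1
%true = constant 1 : i1
%not = select %cond, %false, %true : i1
// ... can be replaced with a "not" of the condition, expressed as xor with true:
%not2 = xor %cond, %true : i1
```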

Differential Revision: https://reviews.llvm.org/D104966
2021-06-28 11:00:02 -04:00
Jacques Pienaar 7d6e589fc8 Revert "[mlir][memref] Implement lowering of memref.copy to llvm"
This reverts commit e939644977.

Breaks Windows build.
2021-06-28 07:50:11 -07:00
Stephan Herhut e939644977 [mlir][memref] Implement lowering of memref.copy to llvm
This lowering uses a library call to implement copying in the general case, i.e.,
supporting arbitrary rank and strided layouts.
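
A minimal usage sketch, assuming the memref.copy form of the time (the emitted library call is not shown):

```
// Copies between memrefs of arbitrary rank and strided layout; the general
// case lowers to a call into the runtime support library.
memref.copy %src, %dst : memref<?x?xf32> to memref<?x?xf32>
```
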
2021-06-28 14:52:07 +02:00
Tobias Gysi bbf4436a82 [mlir][linalg] Remove the StructuredOp capture mechanism.
After https://reviews.llvm.org/D104109, structured ops support scalar inputs. As a result, the capture mechanism meant to pass non-shaped parameters got redundant. The patch removes the capture semantics after the FillOp migrated to use scalar operands https://reviews.llvm.org/D104121.

Differential Revision: https://reviews.llvm.org/D104785
2021-06-28 07:57:40 +00:00
Matthias Springer 0813700de1 [mlir][NFC] Cleanup: Move helper functions to StaticValueUtils
Reduce code duplication: Move various helper functions, that are duplicated in TensorDialect, MemRefDialect, LinalgDialect, StandardDialect, into a new StaticValueUtils.cpp.

Differential Revision: https://reviews.llvm.org/D104687
2021-06-27 15:56:48 +09:00
Gus Smith 043ce4e6bd [MLIR][Sparse] Move `buildLattices` into Merger
This allows us to use `buildLattices` in the `Merger` unittests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D104879
2021-06-26 05:05:05 +00:00
Aart Bik 557b101ce7 [mlir][sparse] add print methods to Merger (for debugging)
Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D104939
2021-06-25 15:10:06 -07:00
Eugene Zhulenev 34a164c938 [mlir:Async] Submit accidentally omitted changes
Accidentally pushed old branches that did not include all the changes discussed in the PRs.

https://reviews.llvm.org/rGd43b23608ad664f02f56e965ca78916bde220950
https://reviews.llvm.org/rG86ad0af87054c3cccd68d32e103a6f1f6c6194c7

Differential Revision: https://reviews.llvm.org/D104943
2021-06-25 12:23:02 -07:00
Eugene Zhulenev 86ad0af870 [mlir:Async] Implement recursive async work splitting for scf.parallel operation (async-parallel-for pass)
Depends On D104780

Recursive work splitting instead of sequential async tasks submission gives ~20%-30% speedup in microbenchmarks.

Algorithm outline:
1. Collapse scf.parallel dimensions into a single dimension
2. Compute the block size for the parallel operations from the 1d problem size
3. Launch parallel tasks
4. Each parallel task reconstructs its own bounds in the original multi-dimensional iteration space
5. Each parallel task computes the original parallel operation body using scf.for loop nest

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D104850
2021-06-25 10:34:39 -07:00
Eugene Zhulenev d43b23608a [mlir:Async] Add the size parameter to the async.group
Specify the `!async.group` size (the number of tokens that will be added to it) at construction time. The `async.await_all` operation can potentially race with `async.execute` operations that keep updating the group; for this reason it is required to know upfront how many tokens will be added to the group.

Reviewed By: ftynse, herhut

Differential Revision: https://reviews.llvm.org/D104780
2021-06-25 10:26:50 -07:00
Gus Smith 744146f60b [MLIR][Sparse] Refactor lattice code into its own file
Moves iteration lattice/merger code into new SparseTensor/Utils directory. A follow-up CL will add lattice/merger unit tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D104757
2021-06-24 23:03:44 +00:00
William S. Moses 44985872b8 [MLIR][SCF] Inline single block ExecuteRegionOp
This commit adds a canonicalization pass which inlines any single-block execute_region op.

Differential Revision: https://reviews.llvm.org/D104865
2021-06-24 13:15:26 -04:00
Nicolas Vasilache 57fe7fd37d [mlir][Linalg] Add support for scf::ForOp in comprehensive bufferization (7/n)
scf::ForOp bufferization analysis proceeds just like for any other op (including FuncOp) at its boundaries; i.e. if:

1. The tensor operand is inplaceable.
2. The matching result has no subsequent read (i.e. all reads dominate the scf::ForOp).
3. Bufferizing in place does not create a RAW interference.

then it can bufferize inplace.

Still there are a few differences:

1. bbArgs for an scf::ForOp are always considered inplaceable when seen from ops inside the body. This is because a) either the matching tensor operand is not inplaceable and an alloc will be inserted (which makes bbArg itself inplaceable); or b) the tensor operand and bbArg are both already inplaceable.
2. Bufferization within the scf::ForOp body has implications to the outside world : the scf.yield terminator may well ping-pong values of the same type. This muddies the water for alias analysis and is not supported atm. Such cases result in a pass failure.

Differential revision: https://reviews.llvm.org/D104490
2021-06-24 15:03:28 +00:00
Anthony Canino 3f429e82d3 Implement an scf.for range folding optimization pass.
In cases where arithmetic (addi/muli) ops are performed on an scf.for loop's induction variable that has a single use, we can fold those ops directly into the scf.for loop.

For example, in the following code:

```
scf.for %i = %c0 to %arg1 step %c1 {
  %0 = addi %arg2, %i : index
  %1 = muli %0, %c4 : index
  %2 = memref.load %arg0[%1] : memref<?xi32>
  %3 = muli %2, %2 : i32
  memref.store %3, %arg0[%1] : memref<?xi32>
}
```

we can lift `%0` up into the scf.for loop range, as it is the only user of %i:

```
%lb = addi %arg2, %c0 : index
%ub = addi %arg2, %arg1 : index
scf.for %i = %lb to %ub step %c1 {
  %1 = muli %i, %c4 : index
  %2 = memref.load %arg0[%1] : memref<?xi32>
  %3 = muli %2, %2 : i32
  memref.store %3, %arg0[%1] : memref<?xi32>
}
```

Reviewed By: mehdi_amini, ftynse, Anthony

Differential Revision: https://reviews.llvm.org/D104289
2021-06-24 01:07:28 +00:00
Nicolas Vasilache f0d43a29e3 [mlir][LLVMIR] Fold ExtractValueOp coming from InsertValueOp
Differential Revision: https://reviews.llvm.org/D104769
2021-06-23 10:04:24 +00:00
Tobias Gysi 7cef24ee83 [mlir][linalg] Adapt the FillOp builder signature.
Change the build operand order from output, value to value, output. The patch makes the argument order consistent with the pretty printed order updated by https://reviews.llvm.org/D104356.

Differential Revision: https://reviews.llvm.org/D104359
2021-06-23 08:06:43 +00:00
Tobias Gysi a21a6f51bc [mlir][linalg] Change the pretty printed FillOp operand order.
The patch changes the pretty printed FillOp operand order from output, value to value, output. The change is a follow up to https://reviews.llvm.org/D104121 that passes the fill value using a scalar input instead of the former capture semantics.
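
A hedged sketch of the updated pretty-printed form (types and names are illustrative):

```
// Fill value first, output second.
%0 = linalg.fill(%cst, %init) : f32, tensor<4x8xf32> -> tensor<4x8xf32>
```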

Differential Revision: https://reviews.llvm.org/D104356
2021-06-23 07:03:00 +00:00
Aart Bik 36b66ab9ed [mlir][sparse] add support for "simply dynamic" sparse tensor expressions
Slowly we are moving toward full support of sparse tensor *outputs*. First
step was support for all-dense annotated "sparse" tensors. This step adds
support for truly sparse tensors, but only for operations in which the values
of a tensor change, but not the nonzero structure (this was referred to as
"simply dynamic" in the [Bik96] thesis).

Some background text was posted on discourse:
https://llvm.discourse.group/t/sparse-tensors-in-mlir/3389/25

Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D104577
2021-06-22 13:37:32 -07:00
Butygin 82c1fb5750 [mlir] Fix invalid handling of AllocOp symbolOperands by SimplifyAllocConst.
symbolOperands were completely ignored by SimplifyAllocConst. Also, slightly improved diagnostic message for verifyAllocLikeOp.

Differential Revision: https://reviews.llvm.org/D104260
2021-06-22 15:39:53 +03:00
Matthias Springer 060208b4c8 [mlir][NFC] Move SubTensorOp and SubTensorInsertOp to TensorDialect
The main goal of this commit is to remove the dependency of Standard dialect on the Tensor dialect.

* Rename SubTensorOp -> tensor.extract_slice, SubTensorInsertOp -> tensor.insert_slice.
* Some helper functions are (already) duplicated between the Tensor dialect and the MemRef dialect. To keep this commit smaller, this will be cleaned up in a separate commit.
* Additional dialect dependencies: Shape --> Tensor, Tensor --> Standard
* Remove dialect dependencies: Standard --> Tensor
* Move canonicalization test cases to correct dialect (Tensor/MemRef).

Note: This is a fixed version of https://reviews.llvm.org/D104499, which was reverted due to a missing update to two CMakeFile.txt.

Differential Revision: https://reviews.llvm.org/D104676
2021-06-22 17:55:53 +09:00
Tobias Gysi 4882cacf12 [mlir][linalg] Adapt FillOp to use a scalar operand.
Adapt the FillOp definition to use a scalar operand instead of a capture. This patch is a follow up to https://reviews.llvm.org/D104109. As the input operands are in front of the output operands the patch changes the internal operand order of the FillOp. The pretty printed version of the operation remains unchanged though. The patch also adapts the linalg to standard lowering to ensure the c signature of the FillOp remains unchanged as well.

Differential Revision: https://reviews.llvm.org/D104121
2021-06-22 06:44:52 +00:00
Matthias Springer 2ba387a316 [mlir][linalg] Fusion of PadTensorOp
Note: This commit (and previous ones) implements the same functionality as https://reviews.llvm.org/D103243 (which is abandoned).

Differential Revision: https://reviews.llvm.org/D104683
2021-06-22 11:48:49 +09:00
Rob Suderman ad1a9d629b [mlir][tosa] Enable tosa.div for TosaMakeBroadcastable
TosaMakeBroadcastable needs to include tosa.div, which was added later in the
specification.

Reviewed By: sjarus, NatashaKnk

Differential Revision: https://reviews.llvm.org/D104157
2021-06-21 16:12:11 -07:00
Ahmed S. Taei 7e2d672a67 Add polynomial approximation for trigonometric sine and cosine functions
The approximation relies on a range-reduced version y \in [0, pi/2]. An input x will have
the property that sin(x) = sin(y), -sin(y), cos(y), or -cos(y), depending on which quadrant x
is in, where sin(y) and cos(y) are approximated with a 5th-degree polynomial (of x^2).
As a result, a single pattern can be used to compute the approximation for both sine and cosine.
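
The underlying range-reduction identity (a standard trigonometric fact, not quoted from the patch) is:

```
// Write x = k * (pi/2) + y with y in [0, pi/2]. Then, depending on k mod 4:
//   k mod 4 == 0:  sin(x) =  sin(y),  cos(x) =  cos(y)
//   k mod 4 == 1:  sin(x) =  cos(y),  cos(x) = -sin(y)
//   k mod 4 == 2:  sin(x) = -sin(y),  cos(x) = -cos(y)
//   k mod 4 == 3:  sin(x) = -cos(y),  cos(x) =  sin(y)
```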

Reviewed By: ezhulenev

Differential Revision: https://reviews.llvm.org/D104582
2021-06-21 13:00:33 -07:00
thomasraoux 1244bca53f [mlir][vector] Support distributing transfer op with permutation map
Differential Revision: https://reviews.llvm.org/D104263
2021-06-21 12:56:08 -07:00
Mehdi Amini 60d97fb4cf Revert "[mlir][NFC] Move SubTensorOp and SubTensorInsertOp to TensorDialect"
This reverts commit 83bf801f5f.

This breaks the build with -DBUILD_SHARED_LIBS=ON
2021-06-21 16:39:24 +00:00
Matthias Springer 83bf801f5f [mlir][NFC] Move SubTensorOp and SubTensorInsertOp to TensorDialect
The main goal of this commit is to remove the dependency of Standard dialect on the Tensor dialect.

* Rename ops: SubTensorOp --> ExtractTensorOp, SubTensorInsertOp --> InsertTensorOp
* Some helper functions are (already) duplicated between the Tensor dialect and the MemRef dialect. To keep this commit smaller, this will be cleaned up in a separate commit.
* Additional dialect dependencies: Shape --> Tensor, Tensor --> Standard
* Remove dialect dependencies: Standard --> Tensor
* Move canonicalization test cases to correct dialect (Tensor/MemRef).

Differential Revision: https://reviews.llvm.org/D104499
2021-06-22 00:11:21 +09:00
Alexander Belyaev 2e972e366a [mlir] Remove "getNumPayloadInductionVariables".
This method always returns 0 after
https://reviews.llvm.org/rG7cddf56d608f07b8e49f7e2eeb4a20082611adb6

Differential Revision: https://reviews.llvm.org/D104645
2021-06-21 16:38:47 +02:00
Benjamin Kramer 596989da65 [mlir][Linalg] Silence warnings in Release builds. NFC.
mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferize.cpp:940:8: warning: unused variable 'opProducesRootRead' [-Wunused-variable]
  bool opProducesRootRead =
       ^
mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferize.cpp:942:8: warning: unused variable 'opProducesRootWrite' [-Wunused-variable]
  bool opProducesRootWrite =
       ^
mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferize.cpp:1498:11: warning: unused variable 'resultNumber' [-Wunused-variable]
  int64_t resultNumber = result.getResultNumber();
          ^
mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferize.cpp:1497:11: warning: unused variable 'operandNumber' [-Wunused-variable]
  int64_t operandNumber = operand.getOperandNumber();
          ^
mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferize.cpp:267:20: warning: unused function 'getInPlace' [-Wunused-function]
static InPlaceSpec getInPlace(Value v) {
                   ^
2021-06-21 12:56:41 +02:00
Matthias Springer 66f878cee9 [mlir][NFC] Remove Standard dialect dependency on MemRef dialect
* Remove dependency: Standard --> MemRef
* Add dependencies: GPUToNVVMTransforms --> MemRef, Linalg --> MemRef, MemRef --> Tensor
* Note: The `subtensor_insert_propagate_dest_cast` test case in MemRef/canonicalize.mlir will be moved to Tensor/canonicalize.mlir in a subsequent commit, which moves over the remaining Tensor ops from the Standard dialect to the Tensor dialect.

Differential Revision: https://reviews.llvm.org/D104506
2021-06-21 17:55:23 +09:00
Matthias Springer 225b960cfc [mlir][linalg] Support low padding in subtensor(pad_tensor) lowering
Differential Revision: https://reviews.llvm.org/D104591
2021-06-21 16:34:26 +09:00
Nicolas Vasilache 11e9a72dfc [mlir][Linalg] NFC - Drop unused variable definition. 2021-06-21 07:08:02 +00:00
Nicolas Vasilache e04533d38a [mlir][Linalg] Introduce a BufferizationAliasInfo (6/n)
This revision adds a BufferizationAliasInfo which maintains and updates information about which tensors will alias once bufferized, which bufferized tensors are equivalent to others and how to handle clobbers.

Bufferization greedily tries to bufferize inplace by:

1. first trying to bufferize SubTensorInsertOp inplace, in reverse order (these are deemed the most expensive).
2. then trying to bufferize all non SubTensorOp / SubTensorInsertOp, in reverse order.
3. lastly trying to bufferize all SubTensorOp in reverse order.

Reverse order is a heuristic that seems to work nicely because structured tensor codegen very often proceeds by:

1. take a subset of a tensor
2. compute on that subset
3. insert the result subset into the full tensor and yield a new tensor.

BufferizationAliasInfo + equivalence sets + clobber analysis allows bufferizing nested
subtensor/compute/subtensor_insert sequences inplace to a certain extent.
To fully realize inplace bufferization, additional container-containee analysis will be necessary and is left for a subsequent commit.

Differential revision: https://reviews.llvm.org/D104110
2021-06-21 06:59:42 +00:00
Fangrui Song 558ee5843f [mlir] Fix -Wunused-but-set-variable in -DLLVM_ENABLE_ASSERTIONS=off build. NFC 2021-06-20 11:55:00 -07:00
Marius Brehler 876de062f9 [mlir] Add EmitC dialect
This upstreams the EmitC dialect and the corresponding Cpp target, both
initially presented with [1], from [2] to MLIR core. For the related
discussion, see [3].

[1] https://reviews.llvm.org/D76571
[2] https://github.com/iml130/mlir-emitc
[3] https://llvm.discourse.group/t/emitc-generating-c-c-from-mlir/3388

Co-authored-by: Jacques Pienaar <jpienaar@google.com>
Co-authored-by: Simon Camphausen <simon.camphausen@iml.fraunhofer.de>
Co-authored-by: Oliver Scherf <oliver.scherf@iml.fraunhofer.de>

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D103969
2021-06-19 09:51:17 +02:00
Matthias Springer 24199f534f [mlir][linalg] Lower subtensor(pad_tensor) to pad_tensor(subtensor)
Only high padding is supported at the moment. Low padding will be added in a separate commit.

Differential Revision: https://reviews.llvm.org/D104357
2021-06-19 13:44:47 +09:00
Uday Bondhugula 18c8c934d8 [MLIR] Introduce scf.execute_region op
Introduce the execute_region op that is able to hold a region which it
executes exactly once. The op encapsulates a CFG within itself while
isolating it from the surrounding control flow. Proposal discussed here:
https://llvm.discourse.group/t/introduce-std-inlined-call-op-proposal/282

execute_region enables one to inline a function without lowering out all
other higher level control flow constructs (affine.for/if, scf.for/if)
to the flat list of blocks / CFG form. It thus allows the benefit of
transforms on higher level control flow ops available in the presence of
the inlined calls. The inlined calls continue to benefit from
propagation of SSA values across their top boundary. Functions won’t
have to remain outlined until later than desired.  Abstractions like
affine execute_regions, lambdas with implicit captures could be lowered
to this without first lowering out structured loops/ifs or outlining.
But two potential early use cases are: (1) an early inliner (which
can inline functions by introducing execute_region ops), (2) lowering of
an affine.execute_region, which cleanly maps to an scf.execute_region
when going from the affine dialect to the scf dialect.
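
A minimal sketch of the op (the values and the small CFG are illustrative, not taken from the patch):

```
%r = scf.execute_region -> i64 {
  // A CFG encapsulated within the op, isolated from the surrounding control flow.
  cond_br %cond, ^bb1, ^bb2
^bb1:
  scf.yield %a : i64
^bb2:
  scf.yield %b : i64
}
```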

Differential Revision: https://reviews.llvm.org/D75837
2021-06-18 15:22:33 +05:30
Matthias Springer 6f665cd53d [mlir][linalg] Fix PadTensorOp constructor
Differential Revision: https://reviews.llvm.org/D104510
2021-06-18 17:35:08 +09:00
Alexander Belyaev 5b3cb31edb [mlir][linalg] Purge linalg.indexed_generic.
Differential Revision: https://reviews.llvm.org/D104449
2021-06-17 14:45:37 +02:00
MaheshRavishankar 3ed3e438a7 [mlir] Move `memref.dim` canonicalization using `InferShapedTypeOpInterface` to a separate pass.
Based on dicussion in
[this](https://llvm.discourse.group/t/remove-canonicalizer-for-memref-dim-via-shapedtypeopinterface/3641)
thread the pattern to resolve the `memref.dim` of a value that is a
result of an operation that implements the
`InferShapedTypeOpInterface` is moved to a separate pass instead of
running it as a canonicalization pass. This allows shape resolution to
happen when explicitly required, instead of automatically through a
canonicalization.

Differential Revision: https://reviews.llvm.org/D104321
2021-06-16 22:13:11 -07:00
Mehdi Amini b5e22e6d42 Migrate MLIR test passes to the new registration API
Make sure they all define getArgument()/getDescription().

Depends On D104421

Differential Revision: https://reviews.llvm.org/D104426
2021-06-16 23:42:17 +00:00
Uday Bondhugula 54384d1723 [MLIR] Make store to load fwd condition less conservative
Make the store-to-load forwarding condition for -memref-dataflow-opt less
conservative. Post-dominance info is not really needed. Add an additional
check for common cases.
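
A hedged sketch of the kind of forwarding this enables (names are illustrative):

```
affine.store %v, %m[%i] : memref<10xf32>
%0 = affine.load %m[%i] : memref<10xf32>
// When the store is the unique reaching definition for the load,
// %0 can be replaced with %v and the load removed.
```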

Differential Revision: https://reviews.llvm.org/D104174
2021-06-17 01:26:38 +05:30
Prashant Kumar 51d43bbc46 [MLIR] Fix affine parallelize pass.
To control the number of outer parallel loops, we need to process the
outer loops first, and hence a pre-order walk fixes the issue.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D104361
2021-06-17 01:25:24 +05:30
Aart Bik 619bfe8bd2 [mlir][sparse] support new kind of scalar in sparse linalg generic op
We have several ways of introducing a scalar invariant value into
linalg generic ops (should we limit this somewhat?). This revision
makes sure we handle all of them correctly in the sparse compiler.

Reviewed By: gysit

Differential Revision: https://reviews.llvm.org/D104335
2021-06-16 11:00:49 -07:00
MaheshRavishankar 621d93d263 [mlir][SCF] Remove empty else blocks of `scf.if` operations.
Differential Revision: https://reviews.llvm.org/D104273
2021-06-15 15:07:20 -07:00
Aart Bik 727a63e0d9 [mlir][sparse] allow all-dense annotated "sparse" tensor output
This is a very careful start with allowing sparse tensors on the
left-hand side of tensor index expressions (viz. sparse output).
Note that there is a subtle difference between non-annotated tensors
(dense, remain n-dim, handled by classic bufferization) and all-dense
annotated "sparse" tensors (linearized to 1-dim without overhead
storage, bufferized by sparse compiler, backed by runtime support library).
This revision gently introduces some new IR to facilitate annotated outputs,
to be generalized to truly sparse tensors in the future.

Reviewed By: gussmith23, bixia

Differential Revision: https://reviews.llvm.org/D104074
2021-06-15 14:55:07 -07:00
Benjamin Kramer cd93935146 [mlir][MemRef] Make sure types match when folding dim(reshape)
Reshape can take integer types in addition to index, but dim always
returns index.

Differential Revision: https://reviews.llvm.org/D104287
2021-06-15 12:33:44 +02:00
Matthias Springer b6ab4f1a8b [mlir][linalg] Fold linalg.pad_tensor if src type == result type
Fold PadTensorOp to source if source type and result type have static shape and are equal.
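
A hedged sketch of a pad that folds, assuming the linalg.pad_tensor syntax of the time (names are illustrative):

```
// Identical static source and result types (zero low/high padding);
// the op folds to %src.
%0 = linalg.pad_tensor %src low[0, 0] high[0, 0] {
^bb0(%i: index, %j: index):
  linalg.yield %pad : f32
} : tensor<4x4xf32> to tensor<4x4xf32>
```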

Differential Revision: https://reviews.llvm.org/D103778
2021-06-15 17:25:12 +09:00
Tres Popp 6c7be41767 Support buffers in LinalgFoldUnitExtentDims
This doesn't add any canonicalizations, but executes the same
simplification on bufferSemantic linalg.generic ops by using
linalg::ReshapeOp instead of linalg::TensorReshapeOp.

Differential Revision: https://reviews.llvm.org/D103513
2021-06-15 08:22:22 +02:00
Hanhan Wang e3bc4dbe8e [mlir][Linalg] Make printer/parser have the same behavior.
The parser of the generic op did not recognize the output of mlir-opt when there
are multiple outputs: one would wrap the result types with braces, and one would
not. The patch makes the behavior consistent.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D104256
2021-06-14 13:38:30 -07:00
River Riddle 66e2708205 [mlir:Linalg] Populate LinalgOp patterns on LinalgDialect as opposed to each op
Interface patterns are unique in that they get added to every operation that also implements that interface, given that they aren't tied to individual operations. When the same interface pattern gets added to multiple operations (such as the current behavior with Linalg), a reference to each of these patterns is added to every op (meaning that an operation will now have N references to effectively the same pattern). This revision fixes this problematic behavior in Linalg, and can bring upwards of a 25% reduction in compile time in Linalg based workloads.

Differential Revision: https://reviews.llvm.org/D104160
2021-06-14 11:20:15 -07:00
Uday Bondhugula 88e4aae57d [MLIR][NFC] Rename MemRefDataFlow -> AffineScalarReplacement
NFC. Rename MemRefDataFlow -> AffineScalarReplacement and move to
AffineTransforms library. Pass command line rename: -memref-dataflow-opt
-> affine-scalrep. Update outdated pass documentation.

Rationale:
https://llvm.discourse.group/t/move-and-rename-memref-dataflow-opt-lib-transforms-lib-affine-dialect-transforms/3640

Differential Revision: https://reviews.llvm.org/D104190
2021-06-14 17:52:53 +05:30
Guillaume Chatelet 1d49e5352f [llvm] remove Sequence::asSmallVector()
There's no need for `toSmallVector()` as `SmallVector.h` already provides a `to_vector` free function that takes a range.

Reviewed By: Quuxplusone

Differential Revision: https://reviews.llvm.org/D104024
2021-06-14 08:28:05 +00:00
Tobias Gysi 046922e100 [mlir][linalg] Add support for scalar input operands.
Up to now all structured op operands are assumed to be shaped. The patch relaxes this assumption and allows scalar input operands. In contrast to shaped operands scalar operands are not indexed and directly forwarded to the body of the operation. As all other operands, scalar operands are associated to an indexing map that in case of a scalar or a 0D-operand has an empty range.

We will use scalar operands as a replacement for the capture mechanism. In contrast to captures, the approach ensures we can generate the function signature from the operand list and it prevents outdated capture values in case a transformation updates only the capture operand but not the hidden body of a named operation.

Removing captures and updating existing operations such as linalg.fill is left for a later patch.

The patch depends on https://reviews.llvm.org/D103891 and https://reviews.llvm.org/D103890.

Differential Revision: https://reviews.llvm.org/D104109
2021-06-14 06:27:16 +00:00
Matthias Springer ddda52ce3c [mlir][linalg] Lower PadTensorOps with non-constant pad value
The padding of such ops is not generated in a vectorized way. Instead, emit a tensor::GenerateOp.

We may vectorize GenerateOps in the future.

Differential Revision: https://reviews.llvm.org/D103879
2021-06-14 15:11:13 +09:00
Matthias Springer 01e3b34469 [mlir][linalg] Vectorize linalg.pad_op source copying (improved)
Vectorize linalg.pad_op source copying if source or result shape are static.

Differential Revision: https://reviews.llvm.org/D103791
2021-06-14 14:43:56 +09:00
Matthias Springer 4c2f3d810b [mlir][linalg] Vectorize linalg.pad_op source copying (static source shape)
If the source operand of a linalg.pad_op operation has static shape, vectorize the copying of the source.

Differential Revision: https://reviews.llvm.org/D103747
2021-06-14 14:31:34 +09:00
Matthias Springer 98fff5153a [mlir][linalg] Lower PadTensorOp to InitTensorOp + FillOp + SubTensorInitOp
Currently limited to constant pad values. Any combination of dynamic/static tensor sizes and padding sizes is supported.

Differential Revision: https://reviews.llvm.org/D103679
2021-06-14 14:21:08 +09:00
Matthias Springer fdb21f0c5e [mlir][linalg] Remove generic PadTensorOp vectorization pattern
The generic vectorization pattern handles only those cases where
low and high padding are zero. This is already handled by a
canonicalization pattern.

Also add a new canonicalization test case to ensure that tensor cast ops
are properly inserted.

A more general vectorization pattern will be added in a subsequent commit.

Differential Revision: https://reviews.llvm.org/D103590
2021-06-14 10:53:50 +09:00
Matthias Springer 562f9e995d [mlir] Vectorize linalg.pad_tensor consumed by transfer_write
Vectorize linalg.pad_tensor without generating a linalg.init_tensor when consumed by a transfer_write.

Differential Revision: https://reviews.llvm.org/D103137
2021-06-14 10:17:23 +09:00
Matthias Springer b1fd8a13cc [mlir] Vectorize linalg.pad_tensor consumed by subtensor_insert
Vectorize linalg.pad_tensor without generating a linalg.init_tensor when consumed by a subtensor_insert.

Differential Revision: https://reviews.llvm.org/D103780
2021-06-14 09:59:38 +09:00
Matthias Springer b1b822714d [mlir] Vectorize linalg.pad_tensor consumed by transfer_read
Vectorize linalg.pad_tensor without generating a linalg.init_tensor when consumed by a transfer_read.

Differential Revision: https://reviews.llvm.org/D103735
2021-06-14 09:52:25 +09:00
Matthias Springer bf5d3092f8 [mlir][linalg] Add constant padding helper to PadTensorOp
* Add a helper function that returns the constant padding value (if applicable).
* Remove existing getConstantYieldValueFromBlock function, which does almost the same.
* Adapted from D103243.

Differential Revision: https://reviews.llvm.org/D104004
2021-06-14 09:44:39 +09:00
Hanhan Wang b4baccc2a7 Introduce tensor.insert op to Tensor dialect.
Add the `tensor.insert` op to make `tensor.extract`/`tensor.insert` work in pairs
in the `scalar` domain, like `subtensor`/`subtensor_insert` work in pairs in the
`tensor` domain, and `vector.transfer_read`/`vector.transfer_write` work in
pairs in the `vector` domain.
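
A hedged usage sketch (values and types are illustrative):

```
// Read a scalar element, then write one back, producing a new tensor.
%e = tensor.extract %t[%i, %j] : tensor<4x4xf32>
%u = tensor.insert %f into %t[%i, %j] : tensor<4x4xf32>
```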

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D104139
2021-06-13 13:45:40 -07:00
Shashij gupta 466e5aba64 [MLIR] Simplify affine.if ops with trivial conditions
The commit simplifies affine.if ops:
the affine.if operation is removed if the condition is universally true or false, and the then/else block is merged with the parent block.
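
A hedged sketch of a trivially true condition (illustrative; the actual test cases are in the patch):

```
// The constraint 1 >= 0 always holds, so the affine.if is removed and the
// contents of its then-block are merged into the parent block.
affine.if affine_set<(d0) : (1 >= 0)>(%i) {
  // ... then-block ops ...
}
```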

Signed-off-by: Shashij Gupta shashij.gupta@polymagelabs.com

Reviewed By: bondhugula, pr4tgpt

Differential Revision: https://reviews.llvm.org/D104015
2021-06-12 19:29:10 +05:30
Stephen Neuendorffer 984e270a9a [mlir] make normalizeAffineFor public
Previously this was just a static method.
2021-06-11 20:12:37 -07:00
Denys Shabalin fdc0d4360b Introduce alloca_scope op
## Introduction

This proposal describes a new op called `alloca_scope`, to be added to the `std`
dialect (and later moved to the `memref` dialect).

## Motivation

Alloca operations are easy to misuse, especially if one relies on them while doing
rewriting/conversion passes. For example, consider two independent dialects:
one defines an op that wants to allocate on the stack, and
another defines a construct that corresponds to some form of looping:

```
dialect1.looping_op {
  %x = dialect2.stack_allocating_op
}
```

Since the dialects might not know about each other they are going to define a
lowering to std/scf/etc independently:

```
scf.for … {
   %x_temp = std.alloca …
   … // do some domain-specific work using %x_temp buffer
   … // and store the result into %result
   %x = %result
}
```

Later on, the scf ops and `std.alloca` are going to be lowered to LLVM using a
combination of `llvm.alloca` and unstructured control flow.

At this point the use of `%x_temp` is bound to either be optimized by
LLVM (for example using mem2reg) or, in the worst case, perform an independent
stack allocation on each iteration of the loop. While the LLVM optimizations are
likely to succeed, they are not guaranteed to do so, and they provide
opportunities for surprising issues with unexpected stack usage.

## Proposal

We propose a new operation that defines a finer-grain allocation scope for the
alloca-allocated memory called `alloca_scope`:

```
alloca_scope {
   %x_temp = alloca …
   ...
}
```

Here the lifetime of `%x_temp` is going to be bound to the narrow annotated
region within `alloca_scope`. Moreover, one can also return values out of the
alloca_scope with an accompanying `alloca_scope.return` op (that behaves
similarly to `scf.yield`):

```
%result = alloca_scope {
   %x_temp = alloca …
   …
   alloca_scope.return %myvalue
}
```

Under the hood the `alloca_scope` is going to be lowered to a combination of
`llvm.intr.stacksave` and `llvm.intr.stackrestore` that are going to be invoked
automatically as control flow enters and leaves the body of the `alloca_scope`.

The key value of the new op is to allow deterministic guaranteed stack use
through an explicit annotation in the code which is finer-grain than the
function-level scope of `AutomaticAllocationScope` interface. `alloca_scope`
can be inserted at arbitrary locations and doesn’t require non-trivial
transformations such as outlining.

## Which dialect

Before memref dialect is split, `alloca_scope` can temporarily reside in `std`
dialect, and later on be moved to `memref` together with the rest of
memory-related operations.

## Implementation

An implementation of the op is available [here](https://reviews.llvm.org/D97768).

Original commits:

* Add initial scaffolding for alloca_scope op
* Add alloca_scope.return op
* Add no region arguments and variadic results
* Add op descriptions
* Add failing test case
* Add another failing test
* Initial implementation of lowering for std.alloca_scope
* Fix backticks
* Fix getSuccessorRegions implementation

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D97768
2021-06-11 19:28:41 +02:00
Tobias Gysi d2661c6c51 [mlir][linalg] Prepare pad to static bounding box for scalar operands.
Adapt pad to static bounding box to support structured ops taking scalar operands.

Differential Revision: https://reviews.llvm.org/D103891
2021-06-11 13:51:29 +00:00
Tobias Gysi f6b4e081dc [mlir][linalg] Prepare drop unit dims for scalar operands.
Adapt drop unit dims for structured ops taking scalar operands.

Differential Revision: https://reviews.llvm.org/D103890
2021-06-11 13:18:06 +00:00
Guillaume Chatelet e0569033e2 [llvm] Make Sequence reverse-iterable
This is a roll forward of D102679.
This patch simplifies the implementation of Sequence and makes it compatible with llvm::reverse.
It exposes the reverse iterators through rbegin/rend which prevents a dangling reference in std::reverse_iterator::operator++().

Note: Compared to D102679, this patch introduces a `asSmallVector()` member function and fixes compilation issue with GCC 5.

Differential Revision: https://reviews.llvm.org/D103948
2021-06-10 11:15:28 +00:00
Alex Zinenko 7325aaefa5 [mlir] make LLVMPointerType implement the data layout type interface
This brings us closer to replacing the LLVM data layout string with a
first-class layout modeling in MLIR.

Depends On D103945

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D103946
2021-06-10 11:24:16 +02:00
Christian Sigg 0b21371e12 [mlir] Support pre-existing tokens in 'gpu-async-region'
Allow gpu ops implementing the async interface to already be async when running the GpuAsyncRegionPass.
That pass threads a 'current token' through a block with ops implementing the gpu async interface.

After this change, existing async ops (returning a !gpu.async.token) set the current token.
Existing synchronous `gpu.wait` ops reset the current token.

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D103396
2021-06-10 08:43:45 +02:00
Ahmed Taei b9d7ffd9cf Folds linalg.pad_tensor with zero padding
Differential Revision: https://reviews.llvm.org/D103984
2021-06-09 15:39:40 -07:00
Lei Zhang 56f60a1ce7 [mlir][spirv] Use SingleBlock + NoTerminator for spv.module
This allows us to remove the `spv.mlir.endmodule` op and
all the code associated with it.

Along the way, tightened the APIs for `spv.module` a bit
by removing some aliases. Now we use `getRegion` to get
the only region, and `getBody` to get the region's only
block.

Reviewed By: mravishankar, hanchung

Differential Revision: https://reviews.llvm.org/D103265
2021-06-09 14:00:06 -04:00
Javier Setoain 96ca2d92b5 [mlir][ArmSVE] Add basic load/store operations
ArmSVE-specific memory operations are needed to generate end-to-end
code for as long as MLIR core doesn't support scalable vectors. These
instructions will eventually be unnecessary; for now they're required
for more complex testing.

Differential Revision: https://reviews.llvm.org/D103535
2021-06-09 15:53:40 +01:00
Benjamin Kramer c0db8d50ca [mlir] Expose a function to populate tensor constant bufferization patterns
This makes it easier to use it from other bufferization passes.

Differential Revision: https://reviews.llvm.org/D103838
2021-06-09 13:47:33 +02:00
Javier Setoain f880bd261f [mlir][ArmSVE] Add basic mask generation operations
These `arm_sve.cmp` functions are needed to generate scalable vector
masks as long as scalable vectors are not part of the standard types.
Once in standard, these can be removed and `std.cmp` can be used
instead.

Differential Revision: https://reviews.llvm.org/D103473
2021-06-09 09:56:53 +01:00
Tobias Gysi 9c27fa3821 [mlir][linalg] Prepare fusion on tensors for scalar operands.
Adapt fusion on tensors to support structured ops taking scalar operands.

Differential Revision: https://reviews.llvm.org/D103889
2021-06-09 07:09:46 +00:00
Christian Sigg 674dd9d08e [mlir] Fix body-less async.execute printing
Reviewed By: ezhulenev

Differential Revision: https://reviews.llvm.org/D103686
2021-06-09 08:07:11 +02:00
Mehdi Amini a4e2cf712a Revert "[llvm] Make Sequence reverse-iterable"
This reverts commit e772216e70
(and fixup 7f6c878a2c).

The build is broken with gcc5 host compiler:

In file included from
                 from mlir/lib/Dialect/Utils/StructuredOpsUtils.cpp:9:
tools/mlir/include/mlir/IR/BuiltinAttributes.h.inc:424:57: error: type/value mismatch at argument 1 in template parameter list for 'template<class ItTy, class FuncTy, class FuncReturnTy> class llvm::mapped_iterator'
                               std::function<T(ptrdiff_t)>>;
                                                         ^
tools/mlir/include/mlir/IR/BuiltinAttributes.h.inc:424:57: note:   expected a type, got 'decltype (seq<ptrdiff_t>(0, 0))::const_iterator'
2021-06-08 17:03:10 +00:00
Chris Lattner 92a79dbe91 [Core] Add Twine support for StringAttr and Identifier. NFC.
This is both more efficient and more ergonomic than going
through an std::string, e.g. when using llvm::utostr and
in string concat cases.

Unfortunately we can't just overload ::get().  This causes an
ambiguity because both twine and stringref implicitly convert
from std::string.

Differential Revision: https://reviews.llvm.org/D103754
2021-06-08 09:47:07 -07:00
William S. Moses 965ad79ea7 [MLIR][MemRef] Only allow fold of cast for the pointer operand, not the value
Currently canonicalizations of a store and a cast try to fold all casts into the store.

In the case where the operand being stored is itself a cast, this is illegal as the type of the value being stored
will change. This PR fixes this by not checking the value for folding with a cast.

Depends on https://reviews.llvm.org/D103828

Differential Revision: https://reviews.llvm.org/D103829
2021-06-08 11:43:09 -04:00
Guillaume Chatelet e772216e70 [llvm] Make Sequence reverse-iterable
This patch simplifies the implementation of Sequence and makes it compatible with llvm::reverse.
It exposes the reverse iterators through rbegin/rend which prevents a dangling reference in std::reverse_iterator::operator++().

Differential Revision: https://reviews.llvm.org/D102679
2021-06-08 13:18:57 +00:00
Javier Setoain 57546f5b22 Revert "[mlir][ArmSVE] Add basic mask generation operations"
This reverts commit 392af6a78b
2021-06-08 10:02:19 +01:00
Javier Setoain 392af6a78b [mlir][ArmSVE] Add basic mask generation operations
These `arm_sve.cmp` functions are needed to generate scalable vector
masks as long as scalable vectors are not part of the standard types.
Once in standard, these can be removed and `std.cmp` can be used
instead.

Differential Revision: https://reviews.llvm.org/D103473
2021-06-08 08:56:31 +01:00
William S. Moses 00b6463b26 [MLIR][GPU] Simplify memcpy of cast
Introduce a simplification that allows memcpy of a cast to simply use the underlying op

Differential Revision: https://reviews.llvm.org/D103830
2021-06-07 14:00:13 -04:00
William S. Moses 854d0edce6 [MLIR] Conditional Branch Argument Propagation
Within an operation in the true/false destination of a conditional branch,
one can assume that the branch condition was true/false respectively, if
only that edge can reach the operation.

Differential Revision: https://reviews.llvm.org/D101709
2021-06-07 13:33:10 -04:00
Valentin Clement aa4e6a609a [mlir][openacc] Add canonicalization for standalone data operations for if condition
This patch adds canonicalization for the standalone data operations with a constant if condition.
It is extracted from patch D103325.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D103712
2021-06-07 11:40:59 -04:00
Valentin Clement cfcdebaf32 [mlir][openacc] Conversion of data operands in acc.parallel to LLVM IR dialect
Convert data operands from the acc.parallel operation using the same conversion pattern as D102170.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D103337
2021-06-07 11:22:20 -04:00
KareemErgawy 2def12ebc6 [MLIR][SPIRV] Use getAsmResultName(...) hook for AddressOfOp.
Implements better naming for results of spv.mlir.addressof ops by making it
inherit from OpAsmOpInterface and implementing the associated
getAsmResultName(...) hook.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D103594
2021-06-07 13:58:26 +02:00
Matthias Springer 6e7bbdd6e7 [mlir] Add offset/stride helper functions to OffsetSizeAndStrideOpInterface
* Add hasUnitStride and hasZeroOffset to OffsetSizeAndStrideOpInterface. These functions are useful for various patterns. E.g., some vectorization patterns apply only for tensor ops with zero offsets and/or unit stride.
* Add getConstantIntValue and isEqualConstantInt helper functions, which are useful for implementing the two above functions, as well as various patterns.

Differential Revision: https://reviews.llvm.org/D103763
2021-06-07 20:11:41 +09:00
Tobias Gysi caf26612dd [mlir][linalg] Cleanup LinalgOp usage in comprehensive bufferization.
Replace the uses of deprecated Structured Op Interface methods in ComprehensiveBufferize.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103520
2021-06-07 09:08:13 +00:00
Aart Bik 86e9bc1a34 [mlir][sparse] add option for 32-bit indices in scatter/gather
Controlled by a compiler option: if 32-bit indices can be handled
with zero/sign-extension alike (viz. no concerns about non-negative
indices), scatter/gather operations can use the more efficient
32-bit SIMD version.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D103632
2021-06-04 16:57:12 -07:00
Ahmed Taei bba8d8c186 Revert "Add memref.dim canonicalization patterns to TilingCanonicalizationPatterns"
This reverts commit a52959401d.

Differential Revision: https://reviews.llvm.org/D103724
2021-06-04 15:41:43 -07:00
Ahmed Taei a52959401d Add memref.dim canonicalization patterns to TilingCanonicalizationPatterns
Otherwise the tiled and padded linalg op will remain live (after distribution).

Differential Revision: https://reviews.llvm.org/D103715
2021-06-04 13:40:36 -07:00
Matthias Springer e789efc92a [mlir][linalg] Refactor PadTensorOpVectorizationPattern (NFC)
* Rename PadTensorOpVectorizationPattern to GenericPadTensorOpVectorizationPattern.
* Make GenericPadTensorOpVectorizationPattern a private pattern, to be instantiated via populatePadTensorOpVectorizationPatterns.
* Factor out parts of PadTensorOpVectorizationPattern into helper functions.

This commit prepares PadTensorOpVectorizationPattern for a series of subsequent commits that add more specialized PadTensorOp vectorization patterns.

Differential Revision: https://reviews.llvm.org/D103681
2021-06-04 23:45:08 +09:00
Valentin Clement fcb1547229 [mlir][openacc] Conversion of data operands in acc.data to LLVM IR dialect
Convert data operands from the acc.data operation using the same conversion pattern as D102170.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D103332
2021-06-04 10:26:22 -04:00
Tobias Gysi 67b1c37d9f [mlir][linalg] Cleanup left over uses of deprecated LinalgOp methods.
Replace all remaining uses of deprecated Structured Op Interface methods. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103673
2021-06-04 08:48:02 +00:00
Alexander Belyaev 89df483d30 [mlir] Fix warnings. 2021-06-03 17:09:09 +02:00
Tobias Gysi f44e90b93a [mlir][linalg] Cleanup LinalgOp usage in scalar inlining.
Replace the uses of deprecated Structured Op Interface methods in InlineScalarOperands.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103518
2021-06-03 14:45:14 +00:00
Tobias Gysi 8fb6c31cbb [mlir][linalg] Cleanup LinalgOp usage in op declarations.
Replace the uses of deprecated Structured Op Interface methods in LinalgOps.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103506
2021-06-03 14:04:44 +00:00
Tobias Gysi 6b265f949f [mlir][linalg] Cleanup LinalgOp usage in loop lowering.
Replace the uses of deprecated Structured Op Interface methods in Loops.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103453
2021-06-03 13:29:52 +00:00
Nicolas Agostini 0804a88e48 [mlir][linalg] Transform PadTensorOp into InitOp, FillOp, GenericOp
Introduces a test pass that rewrites PadTensorOps with static shapes as a sequence of:

```
linalg.init_tensor // to create output
linalg.fill        // to initialize with padding value
linalg.generic     // to copy the original contents to the padded tensor
```

The pass can be triggered with:

- `--test-linalg-transform-patterns="test-transform-pad-tensor"`

Differential Revision: https://reviews.llvm.org/D102804
2021-06-03 22:09:09 +09:00
Tobias Gysi c698505257 [mlir][linalg] Cleanup LinalgOp usage in drop unit dims.
Replace the uses of deprecated Structured Op Interface methods in DropUnitDims.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103448
2021-06-03 12:27:05 +00:00
Tobias Gysi 7c234ae549 [mlir][linalg] Cleanup LinalgOp usage in bufferize, detensorize, and interchange.
Replace the uses of deprecated Structured Op Interface methods in Bufferize.cpp, Detensorize.cpp, and Interchange.cpp. The patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103530
2021-06-03 12:07:29 +00:00
Tobias Gysi 9f815cb578 [mlir][linalg] Cleanup LinalgOp usage in test passes.
Replace the uses of deprecated Structured Op Interface methods in TestLinalgElementwiseFusion.cpp, TestLinalgFusionTransforms.cpp, and Transforms.cpp. The patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103528
2021-06-03 12:07:29 +00:00
Tobias Gysi e70d2c8e6f [mlir][linalg] Cleanup LinalgOp usage in promotion.
Replace the uses of deprecated Structured Op Interface methods in Promotion.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103450
2021-06-03 11:01:02 +00:00
Tobias Gysi ad10d965c8 [mlir][linalg] Cleanup LinalgOp usage in generalization.
Replace the uses of deprecated Structured Op Interface methods in Generalization.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103531
2021-06-03 09:45:02 +00:00
Alexander Belyaev 485c21be8a [mlir] Split linalg reshape ops into expand/collapse.
Differential Revision: https://reviews.llvm.org/D103548
2021-06-03 11:40:22 +02:00
Mehdi Amini 8c948b18e9 Fix -Wsign-compare warning (NFC) 2021-06-02 17:28:57 +00:00
Tobias Gysi f84b908f89 [mlir][linalg] Cleanup LinalgOp usage in fusion on tensors (NFC).
Replace the uses of deprecated Structured Op Interface methods in FusionOnTensors.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103471
2021-06-02 12:20:45 +00:00
Tobias Gysi 2f2b5b7d28 [mlir][linalg] Cleanup LinalgOp usage in sparse compiler (NFC).
Replace the uses of deprecated Structured Op Interface methods in Sparsification.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103436
2021-06-02 06:21:56 +00:00
Tobias Gysi 07576cc4dc [mlir][linalg] Fix signed/unsigned comparison warnings (NFC).
Fix signedness warnings in Utils.cpp and LinalgInterfaces.cpp.
2021-06-01 10:56:43 +00:00
Tobias Gysi 94643fda13 [mlir][linalg] Cleanup LinalgOp usage in dependence analysis (NFC).
Replace the uses of deprecated Structured Op Interface methods in DependenceAnalysis.cpp and DependenceAnalysis.h. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103411
2021-06-01 08:44:15 +00:00
Tobias Gysi 7594f5028a [mlir][linalg] Cleanup LinalgOp usage in fusion (NFC).
Replace the uses of deprecated Structured Op Interface methods in Fusion.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103437
2021-06-01 08:21:30 +00:00
Tobias Gysi c2e5226a85 [mlir][linalg] Cleanup LinalgOp usage in tiling (NFC).
Replace the uses of deprecated Structured Op Interface methods in Tiling.cpp and Utils.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103438
2021-06-01 08:17:38 +00:00
Tobias Gysi 912ebf60b1 [mlir][linalg] Cleanup LinalgOp usage in vectorization (NFC).
Replace the uses of deprecated Structured Op Interface methods in Vectorization.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103410
2021-06-01 08:08:40 +00:00
Tobias Gysi f4f7bc1737 [mlir][linalg] Cleanup LinalgOp usage in verification (NFC).
Replace the uses of deprecated Structured Op Interface methods in LinalgInterfaces.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103404
2021-05-31 14:25:45 +00:00
Tobias Gysi 0a52d9006c [mlir][linalg] Update Structured Op Interface (NFC).
Adding methods to access operand properties via OpOperands and marking outdated methods as deprecated.

Differential Revision: https://reviews.llvm.org/D103394
2021-05-31 13:20:48 +00:00
Frederik Gossen 1288adaa73 [MLIR][Shape] Remove duplicate operands of `shape.assuming_all` op
Differential Revision: https://reviews.llvm.org/D103403
2021-05-31 14:37:55 +02:00
Uday Bondhugula 18c2106e28 [MLIR] Fix warnings in AffineOps.cpp
Fix warnings in AffineOps.cpp.

Differential Revision: https://reviews.llvm.org/D103374
2021-05-31 17:58:02 +05:30
Matthias Springer 2bc8ffa8af [mlir] Support permutation maps in vector transfer op folder
Fold away in_bounds attribute even if the transfer op has a non-identity permutation map.

Differential Revision: https://reviews.llvm.org/D103133
2021-05-31 17:22:46 +09:00
KareemErgawy e493abcf55 [MLIR][SPIRV] Use getAsmResultName(...) hook for ConstantOp.
Implements better naming for results of `spv.Constant` ops by making it
inherit from OpAsmOpInterface and implementing the associated
getAsmResultName(...) hook.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D103152
2021-05-28 09:28:02 +02:00
Eugene Zhulenev 8f23fac4da [mlir:Async] Convert assertions to async errors only inside async functions
Differential Revision: https://reviews.llvm.org/D103278
2021-05-27 12:49:00 -07:00
Eugene Zhulenev 9136b7d075 [mlir] AsyncRefCounting: check that LivenessBlockInfo is not nullptr
Differential Revision: https://reviews.llvm.org/D103270
2021-05-27 10:54:21 -07:00
Eugene Zhulenev d8c84d2a4e [mlir] Async: Add error propagation support to async groups
Depends On D103109

If any of the tokens/values added to the `!async.group` switches to the error state, then the group itself switches to the error state.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D103203
2021-05-27 09:35:11 -07:00
Eugene Zhulenev 39957aa424 [mlir] Add error state and error propagation to async runtime values
Depends On D103102

Not yet implemented:
1. Error handling after synchronous await
2. Error handling for async groups

Will be addressed in the followup PRs

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D103109
2021-05-27 09:28:47 -07:00
Eugene Zhulenev c412979cde [mlir] Async reference counting for block successors with divergent reference counted liveness
Support reference counted values implicitly passed (live) only to some of the successors.

Example: if we branch to ^bb2, the token will leak unless a `drop_ref` operation is properly created:

```
^entry:
  %token = async.runtime.create : !async.token
   cond_br %cond, ^bb1, ^bb2
^bb1:
  async.runtime.await %token
  async.runtime.drop_ref %token
  br ^bb2
^bb2:
  return
```
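
A hedged sketch of the corrected IR, in the same notation as above (reference-count bookkeeping simplified):

```
^entry:
  %token = async.runtime.create : !async.token
  cond_br %cond, ^bb1, ^bb2
^bb1:
  async.runtime.await %token
  async.runtime.drop_ref %token
  br ^bb3
^bb2:
  // Drop the reference on the path that never awaits the token.
  async.runtime.drop_ref %token
  br ^bb3
^bb3:
  return
```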

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D103102
2021-05-27 09:21:59 -07:00
thomasraoux b44007bec2 [mlir][gpu] Relax restriction on MMA store op to allow chain of mma ops.
In order to allow large matmul operations using the MMA ops we need to chain
operations. This is not possible unless the "DOp" and "COp" types have matching
layouts, so remove the "DOp" layout and force the accumulator and result types to
match.
Added a test for the case where the MMA value is accumulated.

Differential Revision: https://reviews.llvm.org/D103023
2021-05-27 09:13:51 -07:00
Nicolas Vasilache ce4f99e7f2 [mlir][Linalg] Add comprehensive bufferization support for subtensor (5/n)
This revision refactors and simplifies the pattern detection logic: thanks to SSA value properties, we can actually look at all the uses of a given value and avoid having to pattern-match specific chains of operations.

A bufferization pattern for subtensor is added and specific inplaceability analysis is implemented for the simple case of subtensor. More advanced use cases will follow.

Differential Revision: https://reviews.llvm.org/D102512
2021-05-27 12:48:08 +00:00
Alexander Belyaev 281ee42911 [mlir] Add a pass to distribute linalg::TiledLoopOp.
Differential Revision: https://reviews.llvm.org/D103194
2021-05-27 08:45:20 +02:00
Frank Laub b5c3f17e70 [MLIR] Add support for empty IVs to affine.parallel
Allow specifying empty IVs in an `affine.parallel`.

For example:

```
affine.parallel () = () to () {
  affine.yield
}
```

Reviewed By: bondhugula, jbruestle

Differential Revision: https://reviews.llvm.org/D102895
2021-05-26 23:45:11 +00:00
Alexander Belyaev 74a89cba8c [mlir] Add `distributionTypes` to LinalgTilingOptions.
Differential Revision: https://reviews.llvm.org/D103161
2021-05-26 17:51:38 +02:00
Adrian Kuegel dee46d0829 [mlir] Fold complex.create(complex.re(op), complex.im(op))
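As a rough sketch of the fold (hypothetical values, not from the patch), re-creating a complex value from its own real and imaginary parts folds away:

```
%re = complex.re %c : complex<f32>
%im = complex.im %c : complex<f32>
%0 = complex.create %re, %im : complex<f32>
// %0 folds to %c
```
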
Differential Revision: https://reviews.llvm.org/D103148
2021-05-26 14:02:53 +02:00
Adrian Kuegel cb65419b1a [mlir] Simplify folding code (NFC) 2021-05-26 11:00:07 +02:00
Adrian Kuegel b99f892b02 [mlir] Fold complex.re(complex.create) and complex.im(complex.create)
This extends the folding we already have. A test needs to be adjusted.

Differential Revision: https://reviews.llvm.org/D103141
2021-05-26 10:53:05 +02:00
Alexander Belyaev 2ea6e13bf8 [mlir] Add an optional distributionTypes attribute to TiledLoopOp.
Differential Revision: https://reviews.llvm.org/D103104
2021-05-25 20:04:41 +02:00
Vinayaka Bandishti eff269fc9f [MLIR][Affine][LICM] Mark users of `iter_args` variant
Prevent users of `iter_args` of an affine for loop from being hoisted
out of it. Otherwise, LICM leads to a violation of SSA dominance
(as demonstrated in the added test case).
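A minimal sketch of the problematic shape (hypothetical constants, not the added test case):

```
%init = constant 0.0 : f32
%cst = constant 1.0 : f32
%r = affine.for %i = 0 to 8 iter_args(%acc = %init) -> (f32) {
  // %0 uses the iter_args value %acc; hoisting it out of the loop
  // would break SSA dominance.
  %0 = addf %acc, %cst : f32
  affine.yield %0 : f32
}
```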

Fixes: https://bugs.llvm.org/show_bug.cgi?id=50103

Reviewed By: bondhugula, ayzhuang

Differential Revision: https://reviews.llvm.org/D102984
2021-05-25 15:56:52 +05:30
Tres Popp 9ccdc2e23b [mlir] Fold memref.dim of OffsetSizeAndStrideOpInterface outputs
This previously handled only memref::SubviewOp; it is now extended to
all ops implementing the interface.

Differential Revision: https://reviews.llvm.org/D103076
2021-05-25 12:16:10 +02:00
Uday Bondhugula 9c21ddb70a [MLIR] Make MLIR cmake variable names consistent
Fix inconsistent MLIR CMake variable names. Consistently name them as
MLIR_ENABLE_<feature>.

Eg: MLIR_CUDA_RUNNER_ENABLED -> MLIR_ENABLE_CUDA_RUNNER

MLIR follows (or has mostly followed) the convention of naming
cmake enabling variables in the form MLIR_ENABLE_..., etc. Using a
consistent convention here is easy and also convenient. A counter
pattern was started with variables named MLIR_..._ENABLED. This led to a
sequence of related counter patterns: MLIR_CUDA_RUNNER_ENABLED,
MLIR_ROCM_RUNNER_ENABLED, etc. From a naming standpoint, the imperative
form is more meaningful. Additional discussion at:
https://llvm.discourse.group/t/mlir-cmake-enable-variable-naming-convention/3520

Switch all inconsistent ones to the ENABLE form. Keep the couple of old
mappings needed until buildbot config is migrated.

Differential Revision: https://reviews.llvm.org/D102976
2021-05-24 08:43:10 +05:30
Philipp Krones c2f819af73 [MC] Refactor MCObjectFileInfo initialization and allow targets to create MCObjectFileInfo
This makes it possible for targets to define their own MCObjectFileInfo.
This MCObjectFileInfo is then used to determine things like section alignment.

This is a follow up to D101462 and prepares for the RISCV backend defining the
text section alignment depending on the enabled extensions.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D101921
2021-05-23 14:15:23 -07:00
Butygin 4184018253 [mlir][SCF] Canonicalize nested ParallelOp's
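A hypothetical sketch of the effect (op and value names are illustrative; %c0, %c1, %c4, %c8 are assumed index constants):

```
scf.parallel (%i) = (%c0) to (%c4) step (%c1) {
  scf.parallel (%j) = (%c0) to (%c8) step (%c1) {
    "some.op"(%i, %j) : (index, index) -> ()
  }
}
// may be merged into a single loop nest:
scf.parallel (%i, %j) = (%c0, %c0) to (%c4, %c8) step (%c1, %c1) {
  "some.op"(%i, %j) : (index, index) -> ()
}
```
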
Differential Revision: https://reviews.llvm.org/D102799
2021-05-22 14:00:00 +03:00
Aart Bik c194b49c9c [mlir][sparse] add full dimension ordering support
This revision completes the "dimension ordering" feature
of sparse tensor types that enables the programmer to
define a preferred order on dimension access (other than
the default left-to-right order). This enables e.g. selection
of column-major over row-major storage for sparse matrices,
but generalized to any rank, as in:

dimOrdering = affine_map<(i,j,k,l,m,n,o,p) -> (p,o,j,k,i,l,m,n)>

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102856
2021-05-21 12:35:13 -07:00
Alexander Belyaev 335fa18028 [mlir] NFC: Expose tiled_loop->scf pattern.
Differential Revision: https://reviews.llvm.org/D102921
2021-05-21 18:19:00 +02:00
Alexander Belyaev 9ecc8178d7 [mlir] Add support for fusion into TiledLoopOp.
Differential Revision: https://reviews.llvm.org/D102722
2021-05-21 18:13:45 +02:00
Stephan Herhut 90e55dfcf4 [mlir][memref] Improve canonicalization of memref.clone
The previous implementation did not handle casting behavior properly and
did not consider aliases.

Differential Revision: https://reviews.llvm.org/D102785
2021-05-21 16:34:50 +02:00
Stephan Herhut 884a6291f0 [mlir][linalg] Add scalar operands inlining pattern
This pattern inlines operands to a linalg.generic operation that use a constant
index and hence are loop-invariant scalars. This reduces the number of
linalg.generic operands and unlocks some canonicalizations that rely on seeing
an explicit tensor.extract.

Differential Revision: https://reviews.llvm.org/D102682
2021-05-21 15:23:28 +02:00
Nicolas Vasilache 8eb18a0f3e [mlir][Standard] NFC - Drop remaining EDSC usage
Drop the remaining EDSC subdirectories and update all uses.

Differential Revision: https://reviews.llvm.org/D102911
2021-05-21 10:40:39 +00:00
Nicolas Vasilache e84a9b9bb3 [mlir][Affine] NFC - Drop Affine EDSC usage
Drop the Affine dialect EDSC subdirectory and update all uses.

Differential Revision: https://reviews.llvm.org/D102878
2021-05-20 21:45:45 +00:00
Nicolas Vasilache e3cf7c88c4 [mlir][MemRef] NFC - Drop MemRef EDSC usage
Drop the MemRef dialect EDSC subdirectory and update all uses.

Differential Revision: https://reviews.llvm.org/D102868
2021-05-20 20:13:58 +00:00
Nicolas Vasilache 4519ca3d2e [mlir][Linalg] NFC - Drop Linalg EDSC usage
Drop the Linalg dialect EDSC subdirectory and update all uses.

Differential Revision: https://reviews.llvm.org/D102848
2021-05-20 15:33:56 +00:00
Adrian Kuegel a28fe17d73 [mlir] Add EqualOp and NotEqualOp to complex dialect. 2021-05-20 13:25:07 +02:00
Nicolas Vasilache ef33c6e3ce [mlir][Linalg] Drop spurious usage of OperationFolder
Instead, use createOrFold builders, which make more static information available.

Differential Revision: https://reviews.llvm.org/D102832
2021-05-20 09:17:58 +00:00
Aart Bik bf9ef3efaa [mlir][sparse] skip sparsification for unannotated (or unhandled) cases
Skip the sparsification pass for Linalg ops without annotated tensors
(or cases that are not properly handled yet).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102787
2021-05-19 13:49:28 -07:00
Nicolas Vasilache 84a880e1e2 [mlir][SCF] NFC - Drop SCF EDSC usage
Drop the SCF dialect EDSC subdirectory and update all uses.

Differential Revision: https://reviews.llvm.org/D102780
2021-05-19 15:52:14 +00:00
Tobias Gysi 9a2769db80 [mlir][Python][linalg] Support OpDSL extensions in C++.
The patch extends the yaml code generation to support the following new OpDSL constructs:
- captures
- constants
- iteration index accesses
- predefined types
These changes have been introduced by revision
https://reviews.llvm.org/D101364.

Differential Revision: https://reviews.llvm.org/D102075
2021-05-19 13:36:56 +00:00
Nicolas Vasilache 6825bfe23e [mlir][Vector] NFC - Drop vector EDSC usage
Drop the vector dialect EDSC subdirectory and update all uses.
2021-05-19 12:44:38 +00:00
Matthias Springer fb7ec1f187 [mlir] Use VectorTransferPermutationMapLoweringPatterns in VectorToSCF
VectorTransferPermutationMapLoweringPatterns can be enabled via a pass option. These additional patterns lower permutation maps to minor identity maps with broadcasting, if possible, allowing for more efficient vector load/stores. The option is deactivated by default.

Differential Revision: https://reviews.llvm.org/D102593
2021-05-19 14:46:19 +09:00
MaheshRavishankar e2b365948b [mlir][Linalg] Break unnecessary dependency through unused `outs` tensor.
LinalgOps that are all parallel do not use the value of the `outs`
tensor. The semantics is that the `outs` tensor is fully
overwritten. Using anything other than `init_tensor` can add false
dependencies between operations when the use is just for the shape of
the tensor. Adding a canonicalization to always use `init_tensor` in
such cases breaks this dependence.

Differential Revision: https://reviews.llvm.org/D102561
2021-05-18 22:31:42 -07:00
Wenyi Zhao 851d02f61e Enhance InferShapedTypeOpInterface to make it accessible during dialect conversion
The original interfaces are not safe to call during dialect conversion.
This is because some ops (e.g. `dynamic_reshape(input, target_shape)`)
depend on the values of their operands to calculate the output shape.
However, the operands may be out of reach during dialect conversion (e.g.
when converting from the tensor world to the buffer world). This patch
provides a new kind of interface which accepts user-provided operands to
solve this problem.

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D102317
2021-05-19 02:51:14 +00:00
Adrian Kuegel fa765a0944 [mlir] Add folder for complex.ReOp and complex.ImOp.
Now that complex constants are supported, we can also fold.

Differential Revision: https://reviews.llvm.org/D102616
2021-05-18 11:27:23 +02:00
Jacques Pienaar 24bf554b10 Add type function for ConstShape op.
- Enables inferring the return type for ConstShape, taking valid return types into account;
- The compatible return type function could be reused; that is left for a later refactoring;

Differential Revision: https://reviews.llvm.org/D102182
2021-05-17 11:47:19 -07:00
Aart Bik 5879da496c [mlir][sparse] replace experimental flag with inplace attribute
The experimental flag for "inplace" bufferization in the sparse
compiler can be replaced with the new inplace attribute. This gives
a uniform way of expressing this more efficient form of bufferization.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102538
2021-05-17 11:43:44 -07:00
Matthias Springer 2c9688d201 [mlir] Improve TransferOp verifier: broadcasts are in_bounds
Broadcast dimensions of vector transfer ops are always in-bounds. This is consistent with the fact that the starting position of a transfer is always in-bounds.

Differential Revision: https://reviews.llvm.org/D102566
2021-05-17 22:35:44 +09:00
Adrian Kuegel 967f07f547 Revert "[mlir] Add folder for complex.ReOp and complex.ImOp."
This reverts commit 6b49834d65.

Some tests fail.
2021-05-17 13:49:42 +02:00
Adrian Kuegel 6b49834d65 [mlir] Add folder for complex.ReOp and complex.ImOp.
Now that complex constants are supported, we can also fold.

Differential Revision: https://reviews.llvm.org/D102609
2021-05-17 13:35:51 +02:00
Julian Gross 1fbb484ea4 [WIP][mlir] Resolve memref dependency in canonicalize pass.
Splitting the memref dialect led to the introduction of several dependencies
to avoid compilation issues. The canonicalize pass also depends on the
memref dialect, but it shouldn't. This patch resolves the dependencies
and the unintuitive includes are removed. However, the dependency moves
to the constructor of the std dialect.

Differential Revision: https://reviews.llvm.org/D102060
2021-05-17 11:33:38 +02:00
Tobias Gysi 7c16f93c44 [mlir][linalg] Remove template parameter from loop lowering.
Replace the templated linalgLowerOpToLoops method with three specialized methods: linalgOpToLoops, LinalgOpToParallelLoops, and linalgOpToAffineLoops.

Differential Revision: https://reviews.llvm.org/D102324
2021-05-17 09:31:53 +00:00
Adrian Kuegel 5ef21506b9 Add support for complex constants to MLIR core.
Differential Revision: https://reviews.llvm.org/D101908
2021-05-17 09:12:39 +02:00
Matthias Springer 7ddeffee55 [mlir] Lower permutation maps on TransferWriteOps
Add TransferWritePermutationLowering, which replaces permutation maps of TransferWriteOps with vector.transpose.

Differential Revision: https://reviews.llvm.org/D102548
2021-05-17 15:30:46 +09:00
Matthias Springer 6774e5a995 [mlir] Fix in_bounds attr handling in TransferReadPermutationLowering
The in_bounds attribute should also be transposed.

Differential Revision: https://reviews.llvm.org/D102572
2021-05-17 15:28:16 +09:00
Aart Bik 56fd4c1cf8 [mlir][sparse] prepare runtime support lib for multiple dim level types
We are moving from just dense/compressed to more general dim level
types, so we need more than just an "i1" array for annotations.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102520
2021-05-14 19:12:07 -07:00
Nicolas Vasilache dd65f420cd [mlir][Linalg] NFC - More gracefully degrade lookup into failure during comprehensive bufferization (4/n)
Differential Revision: https://reviews.llvm.org/D102420
2021-05-14 22:12:23 +00:00
Nicolas Vasilache 6f90955f69 [mlir][Linalg] Add support for subtensor_insert comprehensive bufferization (3/n)
Differential Revision: https://reviews.llvm.org/D102417
2021-05-14 21:51:00 +00:00
Ian Bearman 0816b96a10 Allow same memory space for SRC and DST of dma_start operations
This change allows the SRC and DST of dma_start operations to be located in the
same memory space. This applies to both the Affine dialect and Memref dialect
versions of these Ops. The documentation has been updated to reflect this by
explicitly stating that overlapping memory locations are not supported (undefined
behavior).

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D102274
2021-05-14 10:40:15 -07:00
Rahul Joshi 23a84e1c60 [MLIR] Fix build failures due to unused variables in non-debug builds.
Differential Revision: https://reviews.llvm.org/D102458
2021-05-13 18:42:48 -07:00
Nicolas Vasilache bebf5d56bf [mlir][Linalg] Add support for vector.transfer ops to comprehensive bufferization (2/n).
Differential Revision: https://reviews.llvm.org/D102395
2021-05-13 22:26:28 +00:00
Nicolas Vasilache 1e01a8919f [mlir][Linalg] Add ComprehensiveBufferize for functions(step 1/n)
This is the first step towards upstreaming comprehensive bufferization following the
discourse post: https://llvm.discourse.group/t/rfc-linalg-on-tensors-update-and-comprehensive-bufferization-rfc/3373/6.

This first commit introduces a basic pass for bufferizing within function boundaries,
assuming that the inplaceable function boundaries have been marked as such.

Differential Revision: https://reviews.llvm.org/D101693
2021-05-13 22:24:40 +00:00
Sean Silva 12874e93a1 [mlir][NFC] Add helper for common pattern of replaceAllUsesExcept
This covers the extremely common case of replacing all uses of a Value
with a new op that is itself a user of the original Value.

This should also be a little bit more efficient than the
`SmallPtrSet<Operation *, 1>{op}` idiom that was being used before.

Differential Revision: https://reviews.llvm.org/D102373
2021-05-13 12:42:10 -07:00
Weiwei Li cd0eeb52ad [mlir][spirv] Define spv.ImageQuerySize operation
Support OpImageQuerySize in spirv dialect

co-authored-by: Alan Liu <alanliu.yf@gmail.com>

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D102029
2021-05-13 13:17:08 -04:00
Tobias Gysi cf194da1bb [mlir][linalg] Remove IndexedGenericOp support from FusionOnTensors...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102163
2021-05-13 14:57:16 +00:00
Tobias Gysi f358c37209 [mlir][linalg] Remove IndexedGenericOp support from DropUnitDims...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102235
2021-05-13 14:18:59 +00:00
Matthias Springer 60da33c2d4 [mlir] Support masks in TransferOpReduceRank and TransferReadPermutationLowering
These two patterns allow for more efficient codegen in VectorToSCF.

Differential Revision: https://reviews.llvm.org/D102222
2021-05-13 15:08:08 +09:00
Matthias Springer 864adf399e [mlir] Allow empty position in vector.insert and vector.extract
Such ops are no-ops and are folded to their respective `source`/`vector` operand.

Differential Revision: https://reviews.llvm.org/D101879
2021-05-13 12:54:18 +09:00
Matthias Springer c52cbe63e4 [mlir] Fix masked vector transfer ops with broadcasts
Broadcast dimensions of a vector transfer op have no corresponding dimension in the mask vector. E.g., a 2-D TransferReadOp, where one dimension is a broadcast, can have a 1-D `mask` attribute.

This commit also adds a few additional transfer op integration tests for various combinations of broadcasts, masking, dim transposes, etc.

Differential Revision: https://reviews.llvm.org/D101745
2021-05-13 12:46:03 +09:00
Matthias Springer 6555e53ab0 Revert "[mlir] Fix masked vector transfer ops with broadcasts"
This reverts commit c9087788f7.

Accidentally pushed old version of the commit.
2021-05-13 11:55:00 +09:00
Matthias Springer c9087788f7 [mlir] Fix masked vector transfer ops with broadcasts
Broadcast dimensions of a vector transfer op have no corresponding dimension in the mask vector. E.g., a 2-D TransferReadOp, where one dimension is a broadcast, can have a 1-D `mask` attribute.

This commit also adds a few additional transfer op integration tests for various combinations of broadcasts, masking, dim transposes, etc.

Differential Revision: https://reviews.llvm.org/D101745
2021-05-13 11:37:36 +09:00
Rob Suderman 7b57517507 [mlir][linalg] Fixed issue generating reassociation map with Rank-0 types
The rank-0 case causes a crash during the linalg reshape operation.

Differential Revision: https://reviews.llvm.org/D102282
2021-05-12 11:00:51 -07:00
Inho Seo 5480ea6c84 Update static bound checker for Linalg to cover decreasing cases
The current static checker for linalg does not handle decreasing
index cases well. Update the current static bound checker
for linalg to cover decreasing index cases.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D102302
2021-05-12 10:29:19 -07:00
Aart Bik ca5d0a7310 [mlir][sparse] keep runtime support library signature consistent
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102285
2021-05-12 09:59:46 -07:00
Valentin Clement 6110b667b0 [mlir][openacc] Conversion of data operand to LLVM IR dialect
Add a conversion pass to convert higher-level types before translation.
This conversion extracts meaningful information and packs it into a struct that
the translation (D101504) will be able to understand.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D102170
2021-05-12 11:34:15 -04:00
Tobias Gysi 06bb9cf30d [mlir][linalg] Remove IndexedGenericOp support from LinalgInterchangePattern...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102245
2021-05-12 13:01:37 +00:00
Tobias Gysi c6b96ae06f [mlir][linalg] Remove IndexedGenericOp support from LinalgBufferize...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102308
2021-05-12 12:15:05 +00:00
Dumitru Potop 9a0ea5994b [mlir] Support alignment in LLVM dialect GlobalOp
First step in adding alignment as an attribute to MLIR global definitions. Alignment can be specified for global objects in LLVM IR. It can also be specified as a named attribute in the LLVMIR dialect of MLIR. However, this attribute has no standing and is discarded during translation from MLIR to LLVM IR. This patch does two things: First, it adds the attribute to the syntax of the llvm.mlir.global operation, and by doing this it also adds accessors and verifications. The syntax is "align=XX" (with XX being an integer), placed right after the value of the operation. Second, it allows transforming this operation to and from LLVM IR. It is checked whether the value is an integer power of 2.

Reviewed By: ftynse, mehdi_amini

Differential Revision: https://reviews.llvm.org/D101492
2021-05-12 09:07:20 +02:00
Benjamin Kramer b20e150c9b [mlir] Use static shape knowledge when lowering memref.reshape
This is actually necessary for correctness, as memref.reinterpret_cast
doesn't verify that the output shape matches the static sizes.

Differential Revision: https://reviews.llvm.org/D102232
2021-05-11 18:21:09 +02:00
Uday Bondhugula 1c777ab459 [MLIR] Switch llvm.noalias to a unit attribute
Switch llvm.noalias attribute from a boolean attribute to a unit
attribute.

Differential Revision: https://reviews.llvm.org/D102225
2021-05-11 15:41:09 +05:30
Tres Popp 88a48999d2 Support VectorTransfer splitting on writes also.
VectorTransfer split previously only split read xfer ops. This adds
the same logic to write ops. The resulting code involves 2
conditionals for write ops while read ops only needed 1, but the created
ops are built upon the same patterns, so pattern matching/expectations
are all consistent other than with regard to the if/else ops.

Differential Revision: https://reviews.llvm.org/D102157
2021-05-11 10:33:27 +02:00
Tobias Gysi 7bc6df2528 [mlir][linalg] Remove IndexedGenericOp support from LinalgToLoops...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102187
2021-05-11 06:53:47 +00:00
Tobias Gysi 6676e09b22 [mlir][linalg] Remove IndexedGenericOp support from Fusion...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102174
2021-05-11 06:49:25 +00:00
Tobias Gysi d69bccf1ed [mlir][linalg] Remove IndexedGenericOp support from Tiling...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102176
2021-05-11 05:53:58 +00:00
Aart Bik bf812ea484 [mlir][linalg] remove the -now- obsolete sparse support in linalg
All glue and clutter in the linalg ops has been replaced by proper
sparse tensor type encoding. This code is no longer needed. Thanks
to ntv@ for giving us a temporary home in linalg.

So long, and thanks for all the fish.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102098
2021-05-10 16:49:33 -07:00
Benjamin Kramer 7b52aeadfa [mlir][Tensor] Add folding for tensor.from_elements
This trivially folds into a constant when all operands are constant.
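A minimal sketch of the fold (hypothetical values, not from the patch):

```
%c0 = constant 0 : index
%c1 = constant 1 : index
%t = tensor.from_elements %c0, %c1 : tensor<2xindex>
// folds to:
%t = constant dense<[0, 1]> : tensor<2xindex>
```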

Differential Revision: https://reviews.llvm.org/D102199
2021-05-11 00:42:45 +02:00
Aart Bik 96a23911f6 [mlir][sparse] complete migration to sparse tensor type
A very elaborate, but also very fun revision because all
puzzle pieces are finally "falling in place".

1. replaces linalg annotations + flags with proper sparse tensor types
2. adds rigorous verification on sparse tensor types and sparse primitives
3. removes glue and clutter on opaque pointers in favor of sparse tensor types
4. migrates all tests to use sparse tensor types

NOTE: next CL will remove *all* obsoleted sparse code in Linalg

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102095
2021-05-10 12:55:22 -07:00
Lei Zhang 7e71823f1d [mlir][linalg] Restrict distribution to parallel dims
According to the API contract, LinalgLoopDistributionOptions
expects to work on parallel iterators. When getting processor
information, only loop ranges for parallel dimensions should
be fed in. But right now after generating scf.for loop nests,
we feed in *all* loops, including the ones materialized for
reduction iterators. This can cause unexpected distribution
of reduction dimensions. This commit fixes it.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D102079
2021-05-10 15:23:00 -04:00
Julian Gross fc253e69f9 Fixed bug in buffer deallocation pass using unranked memref types.
In the buffer deallocation pass, unranked memref types are not properly supported.
After investigating this issue, it turns out that the Clone and Dealloc operations
do not support unranked memref types in the current implementation.
This patch adds the missing feature and enables the transformation of any memref
type.

This patch solves this bug: https://bugs.llvm.org/show_bug.cgi?id=48385

Differential Revision: https://reviews.llvm.org/D101760
2021-05-10 10:50:29 +02:00
Frederik Gossen a81e45b8bc [MLIR][Shape] Concretize broadcast result type if possible
As a canonicalization, infer the resulting shape rank if possible.

Differential Revision: https://reviews.llvm.org/D102068
2021-05-10 10:24:08 +02:00
River Riddle 53b946aa63 [mlir] Refactor the representation of function-like argument/result attributes.
The current design uses a unique entry for each argument/result attribute, with the name of the entry being something like "arg0". This provides for a somewhat sparse design, but ends up being much more expensive (from a runtime perspective) in practice. The design requires building a string every time we look up the dictionary for a specific arg/result, and also requires N attribute lookups when collecting all of the arg/result attribute dictionaries.

This revision restructures the design to instead have an ArrayAttr that contains all of the attribute dictionaries for arguments and another for results. This design reduces the number of attribute name lookups to 1, and allows for O(1) lookup for individual element dictionaries. The major downside is that we can end up with larger memory usage, as the ArrayAttr contains an entry for each element even if that element has no attributes. If the memory usage becomes too problematic, we can experiment with a more sparse structure that still provides a lot of the wins in this revision.

This dropped the compilation time of a somewhat large TensorFlow model from ~650 seconds to ~400 seconds.

Differential Revision: https://reviews.llvm.org/D102035
2021-05-07 19:32:31 -07:00
thomasraoux 6aaf06f929 [mlir][vector] Fix warning
The previous change caused another warning in some build configurations:
"default label in switch which covers all enumeration values"
2021-05-07 17:12:47 -07:00
thomasraoux b90b66bcbe [mlir] Missed clang-format 2021-05-07 13:57:34 -07:00
thomasraoux d0453a8933 [mlir][vector] Extend pattern to trim lead unit dimension to Splat Op
Differential Revision: https://reviews.llvm.org/D102091
2021-05-07 13:54:41 -07:00
Alexander Belyaev 3444996b4c [mlir] Add a pattern to bufferize std.index_cast.
Differential Revision: https://reviews.llvm.org/D102088
2021-05-07 21:32:02 +02:00
Alexander Belyaev a3f22d020b [mlir] Add a pattern to bufferize linalg.tensor_reshape.
Differential Revision: https://reviews.llvm.org/D102089
2021-05-07 21:31:17 +02:00
thomasraoux a970e69d6b [mlir][vector] add pattern to cast away leading unit dim for elementwise op
Differential Revision: https://reviews.llvm.org/D102034
2021-05-07 07:54:09 -07:00
Tobias Gysi f31531a30b [mlir][linalg] Remove redundant indexOp builder.
Remove the builder signature taking a signed dimension identifier.

Reviewed By: ergawy

Differential Revision: https://reviews.llvm.org/D102055
2021-05-07 14:22:12 +00:00
Tobias Gysi 26e916334e [mlir][linalg] Add IndexedGenericOp to GenericOp canonicalization.
Replace all `linalg.indexed_generic` ops by `linalg.generic` ops that access the iteration indices using the `linalg.index` op.

Differential Revision: https://reviews.llvm.org/D101612
2021-05-07 06:00:16 +00:00
MaheshRavishankar 05a89312d8 [mlir][Linalg] Allow folding to rank-zero tensor when using rank-reducing subtensors.
The pattern to convert subtensor ops to their rank-reduced versions
(by dropping unit-dims in the result) can also convert to a zero-rank
tensor. Handle that case.
This also fixes an OOB access bug in the existing pattern for such
cases.

Differential Revision: https://reviews.llvm.org/D101949
2021-05-06 19:03:55 -07:00
Lei Zhang 41bc54cc56 [mlir][spirv] NFC: Replace OwningSPIRVModuleRef with OwningOpRef
Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D102009
2021-05-06 17:17:44 -04:00
thomasraoux 71eb32d97e [mlir][vector] Fix typo 2021-05-06 10:12:31 -07:00
thomasraoux 52525cb20f [mlir][linalg][NFC] Make reshape folding control more fine grain
This exposes a lambda control instead of just a boolean to control unit
dimension folding.
This, however, gives the user more control to pick a good heuristic.
Folding reshapes helps fusion opportunities but may generate sub-optimal
generic ops.

Differential Revision: https://reviews.llvm.org/D101917
2021-05-06 10:11:39 -07:00
thomasraoux 933551eaeb [mlir][NFC] Fix warning in VectorTransforms.cpp 2021-05-06 08:11:42 -07:00
thomasraoux 0b303da6f8 [mlir][vector] add pattern to cast away lead unit dimension for broadcast op
Differential Revision: https://reviews.llvm.org/D101955
2021-05-06 08:02:17 -07:00
Christian Sigg a0d019fc89 [mlir] Add support for ops with regions in 'gpu-async-region' rewriter.
Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D101757
2021-05-06 13:21:28 +02:00
Navdeep Kumar 875eb523c1 [MLIR][GPU][NVVM] Add warp synchronous matrix-multiply accumulate ops
Add warp synchronous matrix-multiply accumulate ops to the GPU and NVVM
dialects. Add the following three ops to the GPU dialect:
  1.) subgroup_mma_load_matrix
  2.) subgroup_mma_store_matrix
  3.) subgroup_mma_compute
Add the following three ops to the NVVM dialect:
  1.) wmma.m16n16k16.load.[a,b,c].[f16,f32].row.stride
  2.) wmma.m16n16k16.store.d.[f16,f32].row.stride
  3.) wmma.m16n16k16.mma.row.row.[f16,f32].[f16,f32]

Reviewed By: bondhugula, ftynse, ThomasRaoux

Differential Revision: https://reviews.llvm.org/D95330
2021-05-06 12:06:25 +05:30
MaheshRavishankar b6060b7673 [mlir][Linalg] Fix element type of results when folding reshapes.
Fixing a minor bug which led to the element type of the output being
modified when folding reshapes with a generic op.

Differential Revision: https://reviews.llvm.org/D101942
2021-05-05 15:40:41 -07:00
Emilio Cota 0edc4bc84a [mlir] Add polynomial approximation for math::ExpM1
This approximation matches the one in Eigen.

```
name                      old cpu/op  new cpu/op  delta
BM_mlir_Expm1_f32/10      90.9ns ± 4%  52.2ns ± 4%  -42.60%    (p=0.000 n=74+87)
BM_mlir_Expm1_f32/100      837ns ± 3%   231ns ± 4%  -72.43%    (p=0.000 n=79+69)
BM_mlir_Expm1_f32/1k      8.43µs ± 3%  1.58µs ± 5%  -81.30%    (p=0.000 n=77+83)
BM_mlir_Expm1_f32/10k     83.8µs ± 3%  15.4µs ± 5%  -81.65%    (p=0.000 n=83+69)
BM_eigen_s_Expm1_f32/10   68.8ns ±17%  72.5ns ±14%   +5.40%  (p=0.000 n=118+115)
BM_eigen_s_Expm1_f32/100   694ns ±11%   717ns ± 2%   +3.34%   (p=0.000 n=120+75)
BM_eigen_s_Expm1_f32/1k   7.69µs ± 2%  7.97µs ±11%   +3.56%   (p=0.000 n=95+117)
BM_eigen_s_Expm1_f32/10k  88.0µs ± 1%  89.3µs ± 6%   +1.45%   (p=0.000 n=74+106)
BM_eigen_v_Expm1_f32/10   44.3ns ± 6%  45.0ns ± 8%   +1.45%   (p=0.018 n=81+111)
BM_eigen_v_Expm1_f32/100   351ns ± 1%   360ns ± 9%   +2.58%    (p=0.000 n=73+99)
BM_eigen_v_Expm1_f32/1k   3.31µs ± 1%  3.42µs ± 9%   +3.37%   (p=0.000 n=71+100)
BM_eigen_v_Expm1_f32/10k  33.7µs ± 8%  34.1µs ± 9%   +1.04%    (p=0.007 n=99+98)
```

Reviewed By: ezhulenev

Differential Revision: https://reviews.llvm.org/D101852
2021-05-05 14:31:34 -07:00
Philipp Krones 632ebc4ab4 [MC] Untangle MCContext and MCObjectFileInfo
This untangles the MCContext and the MCObjectFileInfo. There is a circular
dependency between MCContext and MCObjectFileInfo. Currently this dependency
also exists during construction: you can't construct a MOFI without an MCContext
without constructing the MCContext with a dummy version of that MOFI first.
This removes this dependency during construction. In a perfect world,
MCObjectFileInfo wouldn't depend on MCContext at all, but only be stored in the
MCContext, like other MC information. This is future work.

This also shifts/adds more information to the MCContext making it more
available to the different targets. Namely:

- TargetTriple
- ObjectFileType
- SubtargetInfo

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D101462
2021-05-05 10:03:02 -07:00
Javier Setoain 95861216ac [mlir][ArmSVE] Add masked arithmetic operations
These instructions map to SVE-specific intrinsics that accept a
predicate operand to support control flow in vector code.

Differential Revision: https://reviews.llvm.org/D100982
2021-05-05 17:41:58 +01:00
Sergei Grechanik d80b04ab00 [mlir][Affine][Vector] Support vectorizing reduction loops
This patch adds support for vectorizing loops with 'iter_args'
implementing known reductions along the vector dimension. Compared to
the non-vector-dimension case, two additional things are done during
vectorization of such loops:
- The resulting vector returned from the loop is reduced to a scalar
  using `vector.reduce`.
- In some cases a mask is applied to the vector yielded at the end of
  the loop to prevent garbage values from being written to the
  accumulator.

Vectorization of reduction loops is disabled by default. To enable it, a
map from loops to array of reduction descriptors should be explicitly passed to
`vectorizeAffineLoops`, or `vectorize-reductions=true` should be passed
to the SuperVectorize pass.

Current limitations:
- Loops with a non-unit step size are not supported.
- n-D vectorization with n > 1 is not supported.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D100694
2021-05-05 09:03:59 -07:00
Tobias Gysi 4a6ee23d83 [mlir][linalg] Fix bug in the fusion on tensors index op handling.
The old index op handling let the new index operations point back to the
producer block. As a result, after fusion some index operations in the
fused block had back references to the old producer block resulting in
illegal IR. The patch now relies on a block and value mapping to avoid
such back references.

Differential Revision: https://reviews.llvm.org/D101887
2021-05-05 14:46:08 +00:00
Alexander Belyaev 2865d114f9 [mlir] Use ReassociationIndices instead of affine maps in linalg.reshape.
Differential Revision: https://reviews.llvm.org/D101861
2021-05-05 12:59:57 +02:00
Javier Setoain 001d601ac4 [mlir][ArmSVE] Add basic arithmetic operations
While we figure out how to best add Standard support for scalable
vectors, these instructions provide a workaround for basic arithmetic
between scalable vectors.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D100837
2021-05-05 09:50:18 +02:00
William S. Moses f4a2dbfe29 [MLIR][SCF] Combine adjacent scf.if with same condition
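A rough sketch of the intent (the ops are illustrative placeholders; assumes the same %cond and no interfering ops in between):

```
scf.if %cond {
  "some.op"() : () -> ()
}
scf.if %cond {
  "other.op"() : () -> ()
}
// may be combined into:
scf.if %cond {
  "some.op"() : () -> ()
  "other.op"() : () -> ()
}
```
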
Differential Revision: https://reviews.llvm.org/D101798
2021-05-05 00:39:58 -04:00
Aart Bik a2c9d4bb04 [mlir][sparse] Introduce proper sparsification passes
This revision migrates more code from Linalg into the new permanent home of
SparseTensor. It replaces the test passes with proper compiler passes.

NOTE: the actual removal of the last glue and clutter in Linalg will follow

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D101811
2021-05-04 17:10:09 -07:00
William S. Moses cb395b84b0 [MLIR] Add not icmp canonicalization documentation
See: https://reviews.llvm.org/D101710
2021-05-04 11:44:25 -04:00
William S. Moses 8e211bf1c8 [MLIR][SCF] Assume uses of condition in the body of scf.while is true
Differential Revision: https://reviews.llvm.org/D101801
2021-05-04 11:39:07 -04:00
William S. Moses 93297e4bac [MLIR] Replace a not of a comparison with appropriate comparison
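A minimal hypothetical sketch of the canonicalization (values are made up, not from the patch):

```
%true = constant true
%eq = cmpi eq, %a, %b : i32
%not = xor %eq, %true : i1
// canonicalizes to:
%not = cmpi ne, %a, %b : i32
```
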
Differential Revision: https://reviews.llvm.org/D101710
2021-05-04 11:23:29 -04:00
Tobias Gysi 05d2297b86 [mlir][linalg] Always lower index operations during loop lowering.
Ensure the index operations are lowered on all linalg loop lowering paths.

Differential Revision: https://reviews.llvm.org/D101827
2021-05-04 14:30:59 +00:00
Matthias Springer aa58281979 [mlir] Fix bug in TransferOpReduceRank when all dims are broadcasts
TransferReadOps that are a scalar read + broadcast are handled by TransferReadToVectorLoadLowering.

Differential Revision: https://reviews.llvm.org/D101808
2021-05-04 11:21:44 +09:00