Commit Graph

48 Commits

Author SHA1 Message Date
Sean Silva caf4f2e372 [mlir] Handle unknown ops in dynamic_tensor_from_elements bufferization
Due to how the conversion infra works, the "clone" call that this
pattern was using required all the cloned ops to be immediately
legalized as part of this dialect conversion invocation.

That was previously working due to a couple factors:

- In the test case, there was an scf.if op, which we happen to mark as legal
  as part of marking the entire SCF dialect legal for the scf.parallel
  we generate here.

- Originally, this test case had std.extract_element in the body, which
  we happened to have a pattern for in this pass. After I migrated that to
  `tensor.extract` (which removed the tensor.extract bufferization from
  here), I hacked this up to use `std.dim`, which we still have patterns
  for in this pass.

This patch updates the test case to use a truly opaque op `test.source`
that properly stresses this aspect of the pattern.

(this also removes a stray dependency on the `tensor` dialect that I
must have left behind as part of my hacking this pass up when migrating
to `tensor.extract`)

Differential Revision: https://reviews.llvm.org/D93262
2020-12-15 12:50:56 -08:00
Sean Silva 444822d77a Revert "Revert "[mlir] Start splitting the `tensor` dialect out of `std`.""
This reverts commit 0d48d265db.

This reapplies the following commit, with a fix for CAPI/ir.c:

[mlir] Start splitting the `tensor` dialect out of `std`.

This starts by moving `std.extract_element` to `tensor.extract` (this
mirrors the naming of `vector.extract`).
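
For illustration, the rename amounts to the following (a sketch, not taken from the patch itself):

```
// Before the split:
%v = extract_element %t[%i] : tensor<?xf32>

// After the split:
%v = tensor.extract %t[%i] : tensor<?xf32>
```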

Curiously, `std.extract_element` supposedly works on vectors as well,
and this patch removes that functionality. I would tend to do that in a
separate patch, but I couldn't find any downstream users relying on
this, and the fact that we have `vector.extract` made it seem safe
enough to lump in here.

This also sets up the `tensor` dialect as a dependency of the `std`
dialect, as some ops that currently live in `std` depend on
`tensor.extract` via their canonicalization patterns.

Part of RFC: https://llvm.discourse.group/t/rfc-split-the-tensor-dialect-from-std/2347/2

Differential Revision: https://reviews.llvm.org/D92991
2020-12-11 14:30:50 -08:00
Sean Silva 0d48d265db Revert "[mlir] Start splitting the `tensor` dialect out of `std`."
This reverts commit cab8dda90f.

I mistakenly thought that CAPI/ir.c failure was unrelated to this
change. Need to debug it.
2020-12-11 14:15:41 -08:00
Sean Silva cab8dda90f [mlir] Start splitting the `tensor` dialect out of `std`.
This starts by moving `std.extract_element` to `tensor.extract` (this
mirrors the naming of `vector.extract`).

Curiously, `std.extract_element` supposedly works on vectors as well,
and this patch removes that functionality. I would tend to do that in a
separate patch, but I couldn't find any downstream users relying on
this, and the fact that we have `vector.extract` made it seem safe
enough to lump in here.

This also sets up the `tensor` dialect as a dependency of the `std`
dialect, as some ops that currently live in `std` depend on
`tensor.extract` via their canonicalization patterns.

Part of RFC: https://llvm.discourse.group/t/rfc-split-the-tensor-dialect-from-std/2347/2

Differential Revision: https://reviews.llvm.org/D92991
2020-12-11 13:50:55 -08:00
River Riddle 1f5f006d9d [mlir][StandardOps] Verify that the result of an integer constant is signless
This was missed when support for unsigned/signed integer types was first added, and results in crashes if a user tries to create/print a constant with the incorrect integer type.

Fixes PR#46222

Differential Revision: https://reviews.llvm.org/D92981
2020-12-10 12:40:10 -08:00
Sean Silva 774f1d3ffd [mlir] Small cleanups to func-bufferize/finalizing-bufferize
- Address TODO in scf-bufferize: the argument materialization issue is
  now fixed and the code is now in Transforms/Bufferize.cpp
- Tighten up finalizing-bufferize to avoid creating invalid IR when
  operand types potentially change
- Tidy up the testing of func-bufferize, and move appropriate tests
  to a new finalizing-bufferize.mlir
- The new stricter checking in finalizing-bufferize revealed that we
  needed a DimOp conversion pattern (found when integrating into npcomp).
  Previously, the conversion infrastructure was blindly changing the
  operand type during finalization, which happened to work due to
  DimOp's tensor/memref polymorphism, but is generally not encouraged
  (the new pattern is the way to tell the conversion infrastructure that
  it is legal to change that type).
2020-11-30 17:04:14 -08:00
Benjamin Kramer 9549abcbb8 Remove stray debug-only from test 2020-11-26 15:37:18 +01:00
Stephan Herhut 4dd5f79f07 [mlir][bufferize] Add argument materialization for bufferization
This enables partial bufferization that includes function signatures. To test this, this
change also makes func-bufferize a partial bufferization pass and adds a dedicated finalizing-bufferize pass.

Differential Revision: https://reviews.llvm.org/D92032
2020-11-26 13:43:44 +01:00
Stephan Herhut a89e55ca57 [mlir][std] Canonicalize a dim(memref_reshape) into a load from the shape operand
This canonicalization helps propagate shape information through the program.
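
For illustration, a rough sketch of the rewrite (not from the patch; op syntax approximates the std dialect of this period):

```
%reshaped = memref_reshape %src(%shape)
    : (memref<*xf32>, memref<2xindex>) -> memref<?x?xf32>
%c0 = constant 0 : index

// Before:
%d = dim %reshaped, %c0 : memref<?x?xf32>

// After canonicalization: read the size directly from the shape operand.
%d = load %shape[%c0] : memref<2xindex>
```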

Differential Revision: https://reviews.llvm.org/D91854
2020-11-20 14:03:02 +01:00
Stephan Herhut 6af81ea1d6 [mlir][std] Fold load(tensor_to_memref) into extract_element
This canonicalization is useful to resolve loads into scalar values when
doing partial bufferization.
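
A minimal sketch of the fold (illustrative, with made-up value names):

```
%m = tensor_to_memref %t : memref<?xf32>

// Before:
%v = load %m[%i] : memref<?xf32>

// After folding: read the element directly from the tensor.
%v = extract_element %t[%i] : tensor<?xf32>
```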

Differential Revision: https://reviews.llvm.org/D91855
2020-11-20 13:42:11 +01:00
Stephan Herhut cb778c3423 [mlir][std] Fold comparisons when the operands are equal
For equal operands, comparisons can be decided statically.
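
For example (a sketch, using the std dialect syntax of the time):

```
// Both operands are the same SSA value, so the result is known statically:
%0 = cmpi "eq", %a, %a : i32    // folds to a constant true (1 : i1)
%1 = cmpi "slt", %a, %a : i32   // folds to a constant false (0 : i1)
```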

Differential Revision: https://reviews.llvm.org/D91856
2020-11-20 13:26:41 +01:00
Stephan Herhut 3598605c0b [mlir][std] Fold dim(dynamic_tensor_from_elements, %cst)
The shape of the result of a dynamic_tensor_from_elements is defined via its
result type and operands. We already fold dim operations when they reference
one of the statically sized dimensions. Now, also fold dim on the dynamically
sized dimensions by picking the corresponding operand.
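
A rough sketch of the fold (approximate syntax; the op body is elided):

```
%t = dynamic_tensor_from_elements %d0, %d2 {
  ...   // body elided
} : tensor<?x4x?xf32>
%c2 = constant 2 : index
%dim = dim %t, %c2 : tensor<?x4x?xf32>
// %dim now folds to %d2, the operand supplying the second dynamic extent.
```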

Differential Revision: https://reviews.llvm.org/D91616
2020-11-17 14:39:59 +01:00
Rahul Joshi b7382ed3fe [MLIR] Extend Symbol verification to reject public symbol declarations.
- Extend the Symbol interface with `isDeclaration` to identify operations that declare
  a symbol as opposed to define it.
- Extend verification to disallow public declarations as per the discussion in
   https://llvm.discourse.group/t/rfc-symbol-definition-declaration-x-visibility-checks/2140
- Adopt the new interface for `FuncOp` and fix test and code to not have/create public
  function declarations.

Differential Revision: https://reviews.llvm.org/D91456
2020-11-16 16:05:32 -08:00
Sean Silva faa66b1b2c [mlir] Bufferize tensor constant ops
We lower them to a std.global_memref (uniqued by constant value) + a
std.get_global_memref to produce the corresponding memref value.
This allows removing Linalg's somewhat hacky lowering of tensor
constants, now that std properly supports this.
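
For illustration, a rough before/after sketch (the global's name is made up here; syntax approximate):

```
// Before:
%t = constant dense<[1.0, 2.0]> : tensor<2xf32>

// After bufferization: a uniqued global plus a lookup of its memref.
global_memref "private" constant @__constant_2xf32 : memref<2xf32> = dense<[1.0, 2.0]>
...
%m = get_global_memref @__constant_2xf32 : memref<2xf32>
```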

Differential Revision: https://reviews.llvm.org/D91306
2020-11-12 14:56:10 -08:00
Sean Silva b4fa28b408 [mlir] Add ElementwiseMappable trait and apply it to std elementwise ops.
This patch adds an `ElementwiseMappable` trait as discussed in the RFC
here:
https://llvm.discourse.group/t/rfc-std-elementwise-ops-on-tensors/2113/23

This trait can power a number of transformations and analyses.
A subsequent patch adds a convert-elementwise-to-linalg pass that exhibits
how this trait allows writing generic transformations.
See https://reviews.llvm.org/D90354 for that patch.
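
As a rough illustration, the same elementwise op applies uniformly to scalars, vectors, and tensors:

```
%0 = addf %a, %b : f32
%1 = addf %v0, %v1 : vector<4xf32>
%2 = addf %t0, %t1 : tensor<4x?xf32>
```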

This trait slightly changes some verifier messages, but the diagnostics
are usually about as good. I fiddled with the ordering of the trait in
the .td file trait lists to minimize the changes here.

Differential Revision: https://reviews.llvm.org/D90731
2020-11-10 13:44:44 -08:00
Alexander Belyaev 9d02e0e38d [mlir][std] Add ExpandOps pass.
The pass combines patterns of ExpandAtomic, ExpandMemRefReshape,
StdExpandDivs passes. The pass is meant to legalize STD for conversion to LLVM.

Differential Revision: https://reviews.llvm.org/D91082
2020-11-09 21:58:28 +01:00
Alexandre Eichenberger 0795715616 [mlir][std] Add SignedCeilDivIOp and SignedFloorDivIOp with std-to-std lowering triggered by the -std-expand-divs option. The new operations support positive/negative numerator/denominator numbers.
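
For illustration (op mnemonics as assumed here; see the revision for the exact spelling):

```
// Signed ceiling/floor division, e.g. ceil(7/2) = 4 and floor(-7/2) = -4.
%0 = ceildivi_signed %a, %b : i32
%1 = floordivi_signed %a, %b : i32
```
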
Differential Revision: https://reviews.llvm.org/D89726

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-11-04 14:16:23 -05:00
Sean Silva f556af965f [mlir] Fix materializations for unranked tensors.
Differential Revision: https://reviews.llvm.org/D90656
2020-11-04 10:16:55 -08:00
Nicolas Vasilache 85ff2705cd [mlir][std] Add DimOp folding for dim(tensor_load(m)) -> dim(m).
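
A minimal sketch of the fold (illustrative, with made-up value names):

```
%t = tensor_load %m : memref<?xf32>

// Before:
%d = dim %t, %c0 : tensor<?xf32>

// After folding: ask the memref directly.
%d = dim %m, %c0 : memref<?xf32>
```
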
Differential Revision: https://reviews.llvm.org/D90755
2020-11-04 13:06:22 +00:00
Rahul Joshi 549eac9d87 [MLIR] Remove unnecessary CHECK's from tests for which we do not run FileCheck.
Differential Revision: https://reviews.llvm.org/D90651
2020-11-02 15:21:33 -08:00
Rahul Joshi c254b0bb69 [MLIR] Introduce std.global_memref and std.get_global_memref operations.
- Add standard dialect operations to define global variables with memref types and to
  retrieve the memref for a named global variable.
- Extend unit tests to test verification for these operations.

Differential Revision: https://reviews.llvm.org/D90337
2020-11-02 13:43:04 -08:00
Sean Silva 52b0fe6404 [mlir] Add func-bufferize pass.
This is the most basic possible finalizing bufferization pass, which I
also think is sufficient for most new use cases. The more concentrated
nature of this pass also greatly clarifies the invariants that it
requires on its input to safely transform the program (see the
pass description in Passes.td).
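
For illustration, a minimal sketch of what the pass does at a function boundary (using a made-up identity function):

```
// Before:
func @id(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  return %arg0 : tensor<4xf32>
}

// After func-bufferize: only the signature and terminator operands change;
// ops in the body are left to the other bufferization passes.
func @id(%arg0: memref<4xf32>) -> memref<4xf32> {
  return %arg0 : memref<4xf32>
}
```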

With this pass, I have now upstreamed practically all of the
bufferizations from npcomp (the exception being std.constant, which can
be upstreamed when std.global_memref lands:
https://llvm.discourse.group/t/rfc-global-variables-in-mlir/2076/16 )

Differential Revision: https://reviews.llvm.org/D90205
2020-11-02 12:42:32 -08:00
Alexander Belyaev 7a996027b9 [mlir] Convert memref_reshape to memref_reinterpret_cast.
Differential Revision: https://reviews.llvm.org/D90235
2020-10-28 21:15:32 +01:00
Sean Silva 83154c5418 [mlir] Add bufferization for std.select op.
Differential Revision: https://reviews.llvm.org/D90204
2020-10-27 11:46:33 -07:00
Alexander Belyaev 461605c418 [mlir] Add MemRefReinterpretCastOp definition to Standard.
Reuse most code for printing/parsing/verification from SubViewOp.

https://llvm.discourse.group/t/rfc-standard-memref-cast-ops/1454/15

Differential Revision: https://reviews.llvm.org/D89720
2020-10-22 15:17:22 +02:00
Alexander Belyaev d2ed2f16b8 [mlir] Add MemRefReshapeOp definition to Standard.
https://llvm.discourse.group/t/rfc-standard-memref-cast-ops/1454/15

Differential Revision: https://reviews.llvm.org/D89784
2020-10-22 13:29:13 +02:00
Sean Silva 7885bf8b78 [mlir][DialectConversion] Fix recursive `clone` calls.
The framework was not tracking ops created in any regions of the cloned
op.

Differential Revision: https://reviews.llvm.org/D89668
2020-10-19 15:51:46 -07:00
Sean Silva f4abd3ed6d [mlir] Add std.dynamic_tensor_from_elements bufferization.
It's unfortunate that this requires adding a dependency on the scf dialect
to std bufferization (and hence all of std transforms). This is a bit
perilous. We might want a lib/Transforms/Bufferize/ with a separate
bufferization library per dialect?

Differential Revision: https://reviews.llvm.org/D89667
2020-10-19 15:51:45 -07:00
Sean Silva e3f5073a96 [mlir] Add some more std bufferize patterns.
Add bufferizations for extract_element and tensor_from_elements.
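
For illustration, a rough sketch of the tensor_from_elements bufferization (syntax approximate to the std dialect of the time):

```
// Before:
%t = tensor_from_elements %a, %b : tensor<2xf32>

// After bufferization: allocate a buffer and store each element.
%m = alloc() : memref<2xf32>
%c0 = constant 0 : index
%c1 = constant 1 : index
store %a, %m[%c0] : memref<2xf32>
store %b, %m[%c1] : memref<2xf32>
%t2 = tensor_load %m : memref<2xf32>
```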

Differential Revision: https://reviews.llvm.org/D89594
2020-10-19 15:51:45 -07:00
River Riddle a8feeee15f [mlir] Add canonicalization for cond_br that feed into a cond_br on the same condition
```
   ...
   cond_br %cond, ^bb1(...), ^bb2(...)
 ...
 ^bb1: // has single predecessor
   ...
   cond_br %cond, ^bb3(...), ^bb4(...)
```

 ->

```
   ...
   cond_br %cond, ^bb1(...), ^bb2(...)
 ...
 ^bb1: // has single predecessor
   ...
   br ^bb3(...)
```

Differential Revision: https://reviews.llvm.org/D89604
2020-10-18 13:51:02 -07:00
Sean Silva ee491ac91e [mlir] Add std.tensor_to_memref op and teach the infra about it
The opposite of tensor_to_memref is tensor_load.

- Add some basic tensor_load/tensor_to_memref folding (a sketch follows this list).
- Add source/target materializations to BufferizeTypeConverter.
- Add an example std bufferization pattern/pass that shows how the
  materializations work together (more std bufferization patterns to come
  in subsequent commits).
  - In coming commits, I'll document how to write composable
  bufferization passes/patterns and update the other in-tree
  bufferization passes to match this convention. The populate* functions
  will of course continue to be exposed for power users.
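
For illustration, the basic folding mentioned in the first bullet (a sketch):

```
%m = tensor_to_memref %t : memref<4xf32>
%t2 = tensor_load %m : memref<4xf32>
// %t2 folds to %t; the symmetric tensor_to_memref(tensor_load(%m)) folds to %m.
```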

The names tensor_load/tensor_to_memref and their pretty forms are
not very intuitive. I'm open to any suggestions here. One key
observation is that the memref type must always be the one specified in
the pretty form, since the tensor type can be inferred from the memref
type but not vice-versa.

With this, I've been able to replace all my custom bufferization type
converters in npcomp with BufferizeTypeConverter!

Part of the plan discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17

Differential Revision: https://reviews.llvm.org/D89437
2020-10-15 12:19:20 -07:00
Benjamin Kramer 6e2b267d1c Promote transpose from linalg to standard dialect
While affine maps are part of the builtin memref type, there is very
limited support for manipulating them in the standard dialect. Add
transpose to the set of ops to complement the existing view/subview ops.
This is a metadata transformation that encodes the transpose into the
strides of a memref.

I'm planning to use this when lowering operations on strided memrefs,
using the transpose to remove the stride without adding a dependency on
linalg dialect.

Differential Revision: https://reviews.llvm.org/D88651
2020-10-05 10:58:20 +02:00
Frederik Gossen cdda7822d6 [MLIR][Standard] Add `atan2` to standard dialect
Differential Revision: https://reviews.llvm.org/D88168
2020-09-30 08:38:45 +00:00
Frederik Gossen 136eb79a88 [MLIR][Standard] Add `dynamic_tensor_from_elements` operation
With `dynamic_tensor_from_elements`, tensor values of dynamic size can be
created. The body of the operation essentially maps the index space to tensor
elements.
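
A rough sketch of the op (approximate syntax; %m and %n supply the two dynamic extents):

```
%t = dynamic_tensor_from_elements %m, %n {
^bb0(%i : index, %j : index, %k : index):
  %elem = ...      // compute the element at index (%i, %j, %k)
  yield %elem : f32
} : tensor<?x3x?xf32>
```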

Declare SCF operations in the `scf` namespace to avoid name clash with the new
`std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield`
operations.

Differential Revision: https://reviews.llvm.org/D86276
2020-09-07 11:44:43 +00:00
Frederik Gossen 1ee0d22f26 [MLIR][Standard] Erase redundant assertions `std.assert`
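
For example (a sketch):

```
%true = constant 1 : i1

// An assertion whose condition is statically known to hold is erased:
assert %true, "this can never fail"
```
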
Differential Revision: https://reviews.llvm.org/D83118
2020-07-14 10:09:39 +00:00
Frederik Gossen bcedc4fa0a [MLIR][Standard] Add `assert` operation to the standard dialect
Differential Revision: https://reviews.llvm.org/D83117
2020-07-14 10:00:54 +00:00
Hanhan Wang 9cb10296ec [mlir] Add support for lowering tanh to LLVMIR.
Summary:
Fixed build of D81618

Add a pattern for expanding tanh op into exp form.
A `tanh` is expanded into:
   1) (1 - exp(-2x)) / (1 + exp(-2x)), if x >= 0
   2) (exp(2x) - 1) / (exp(2x) + 1),   if x < 0.

Differential Revision: https://reviews.llvm.org/D82040
2020-06-18 10:42:13 -07:00
Mehdi Amini a9a21bb4b6 Revert "[mlir] Add support for lowering tanh to LLVMIR."
This reverts commit 32c757e4f8.

Broke the build bot:

******************** TEST 'MLIR :: Examples/standalone/test.toy' FAILED ********************
[...]
/tmp/ci-KIMiRFcVZt/lib/libMLIRLinalgToLLVM.a(LinalgToLLVM.cpp.o): In function `(anonymous namespace)::ConvertLinalgToLLVMPass::runOnOperation()':
LinalgToLLVM.cpp:(.text._ZN12_GLOBAL__N_123ConvertLinalgToLLVMPass14runOnOperationEv+0x100): undefined reference to `mlir::populateExpandTanhPattern(mlir::OwningRewritePatternList&, mlir::MLIRContext*)'
2020-06-15 18:46:57 +00:00
Hanhan Wang 32c757e4f8 [mlir] Add support for lowering tanh to LLVMIR.
Summary:
Add a pattern for expanding tanh op into exp form.
A `tanh` is expanded into:
   1) (1 - exp(-2x)) / (1 + exp(-2x)), if x >= 0
   2) (exp(2x) - 1) / (exp(2x) + 1),   if x < 0.

Differential Revision: https://reviews.llvm.org/D81618
2020-06-15 10:29:31 -07:00
Jacques Pienaar 3dbb6678a5 [mlir] Mark CastOp class's shape constraint
These ops have the same operands and result shapes.

Differential Revision: https://reviews.llvm.org/D81664
2020-06-12 06:50:07 -07:00
Rob Suderman 3d56f166bd [mlir][StandardOps] Updated IndexCastOp to support tensor<index> cast
Summary:
We now support index casting from tensor<index> to tensor<int>. This
better supports compatibility with the Shape dialect.
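
For example (a sketch):

```
// index_cast now also applies elementwise to tensors of index:
%0 = index_cast %arg0 : tensor<4xindex> to tensor<4xi64>
```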

Differential Revision: https://reviews.llvm.org/D81611
2020-06-10 17:19:08 -07:00
Mehdi Amini d31c9e5a46 Change filecheck default to dump input on failure
Having the input dumped on failure seems like a better
default: I debugged FileCheck tests for a while without knowing
about this option, which really helps to understand failures.

Remove `-dump-input-on-failure` and the environment variable
FILECHECK_DUMP_INPUT_ON_FAILURE which are now obsolete.

Differential Revision: https://reviews.llvm.org/D81422
2020-06-09 18:57:46 +00:00
River Riddle c0cd1f1c5c [mlir] Refactor BoolAttr to be a special case of IntegerAttr
This simplifies a lot of handling of BoolAttr/IntegerAttr. For example, a lot of places currently have to handle both IntegerAttr and BoolAttr. In other places, a decision is made to pick one which can lead to surprising results for users. For example, DenseElementsAttr currently uses BoolAttr for i1 even if the user initialized it with an Array of i1 IntegerAttrs.

Differential Revision: https://reviews.llvm.org/D81047
2020-06-04 16:41:24 -07:00
Alexander Belyaev b79751e83d [MLIR] Add conversion from AtomicRMWOp -> GenericAtomicRMWOp.
Adding this pattern reduces code duplication. There is no need to have a
custom implementation for lowering to llvm.cmpxchg.
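
For illustration, a rough sketch of the conversion (value names made up; syntax approximate):

```
// Before: a fixed-function read-modify-write.
%x = atomic_rmw "addf" %value, %buf[%i] : (f32, memref<10xf32>) -> f32

// After conversion: the same update expressed with a region.
%x = generic_atomic_rmw %buf[%i] : memref<10xf32> {
^bb0(%current : f32):
  %new = addf %current, %value : f32
  atomic_yield %new : f32
}
```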

Differential Revision: https://reviews.llvm.org/D78753
2020-05-05 10:32:13 +02:00
River Riddle 7f85adb54d [mlir][Standard] Allow select to use an i1 for vector and tensor values
It currently requires that the condition match the shape of the selected value, but this is only really useful for things like masks. This revision allows for the use of i1 to mean that all of the vector/tensor is selected. This also matches the behavior of LLVM select. A benefit of this change is that transformations that want to generate selects, like those on the CFG, don't have to special-case vector/tensor. Previously the only way to generate a select from an i1 was to use a splat, but that doesn't support dynamically shaped/unranked tensors.
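
For example (a sketch):

```
// A single i1 now selects between whole vector/tensor values:
%0 = select %cond, %t0, %t1 : tensor<?x4xf32>
```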

Differential Revision: https://reviews.llvm.org/D78690
2020-04-23 04:50:09 -07:00
River Riddle 2fafe7ff59 [mlir][Standard] Add support for canonicalizing branches to passthrough blocks
This revision adds support for canonicalizing the following:

```
   br ^bb1
 ^bb1:
   br ^bbN(...)

 br ^bbN(...)
```

Differential Revision: https://reviews.llvm.org/D78683
2020-04-23 04:42:02 -07:00
River Riddle af331bc52d [mlir][Standard] Add a canonicalization to simplify cond_br when the successors are identical
This revision adds support for canonicalizing the following:

```
cond_br %cond, ^bb1(A, ..., N), ^bb1(A, ..., N)

br ^bb1(A, ..., N)
```

 If the operands to the successor are different and the cond_br is the only predecessor, we emit selects for the branch operands.

```
cond_br %cond, ^bb1(A), ^bb1(B)

%select = select %cond, A, B
br ^bb1(%select)
```

Differential Revision: https://reviews.llvm.org/D78682
2020-04-23 04:42:02 -07:00
River Riddle 2f4b303d68 [mlir][Standard] Add canonicalization for collapsing pass through cond_br successors.
This revision adds support for the following canonicalization:

```
   cond_br %cond, ^bb1, ^bb2
 ^bb1:
   br ^bbN(...)
 ^bb2:
   br ^bbK(...)

   cond_br %cond, ^bbN(...), ^bbK(...)
```

Differential Revision: https://reviews.llvm.org/D78681
2020-04-23 04:42:01 -07:00