Commit Graph

4146 Commits

Author SHA1 Message Date
River Riddle d1baf28954 [mlir] Add support to SourceMgrDiagnosticHandler for filtering FileLineColLocs
This revision adds support for passing a functor to SourceMgrDiagnosticHandler for filtering out FileLineColLocs when emitting a diagnostic. More specifically, this can be useful in situations where there may be large CallSiteLocs with locations that aren't necessarily important/useful for users.

For now the filtering support is limited to FileLineColLocs, but conceptually we could allow filtering for all location types if a need arises in the future.

Differential Revision: https://reviews.llvm.org/D103649
2021-06-18 21:12:28 +00:00
Uday Bondhugula 18c8c934d8 [MLIR] Introduce scf.execute_region op
Introduce the execute_region op that is able to hold a region which it
executes exactly once. The op encapsulates a CFG within itself while
isolating it from the surrounding control flow. Proposal discussed here:
https://llvm.discourse.group/t/introduce-std-inlined-call-op-proposal/282

execute_region enables one to inline a function without lowering out all
other higher level control flow constructs (affine.for/if, scf.for/if)
to the flat list of blocks / CFG form. It thus makes the benefits of
transforms on higher-level control flow ops available in the presence of
the inlined calls. The inlined calls continue to benefit from
propagation of SSA values across their top boundary, and functions won't
have to remain outlined longer than desired. Abstractions like
affine execute_regions and lambdas with implicit captures could be lowered
to this op without first lowering out structured loops/ifs or outlining.
Two potential early use cases are: (1) an early inliner (which
can inline functions by introducing execute_region ops), and (2) lowering of
an affine.execute_region, which cleanly maps to an scf.execute_region
when going from the affine dialect to the scf dialect.
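
A minimal sketch of how such an op could be used (illustrative IR, not taken
from the patch): a small CFG selects one of two values inside an
scf.execute_region without breaking the enclosing region into blocks.

```
func @select_value(%cond: i1, %a: index, %b: index) -> index {
  %r = scf.execute_region -> index {
    cond_br %cond, ^bb1, ^bb2
  ^bb1:
    scf.yield %a : index
  ^bb2:
    scf.yield %b : index
  }
  return %r : index
}
```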

Differential Revision: https://reviews.llvm.org/D75837
2021-06-18 15:22:33 +05:30
Gus Smith 22911585bb [mlir][sparse] Add Matricized Tensor Times Khatri-Rao Product (MTTKRP) integration test
See this documentation from taco:
http://tensor-compiler.org/docs/data_analytics/index.html

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D104417
2021-06-17 16:53:12 +00:00
Alexander Belyaev 5b3cb31edb [mlir][linalg] Purge linalg.indexed_generic.
Differential Revision: https://reviews.llvm.org/D104449
2021-06-17 14:45:37 +02:00
Alex Zinenko 23cdf7b6ed [mlir] separable registration of operation interfaces
This is similar to attribute and type interfaces and uses mostly the same
mechanism (FallbackModel / ExternalModel, ODS generation). There are minor
differences in how the concept-based polymorphism is implemented for
operations, which are accounted for by the ODS backends; this patch
essentially adds a test and exposes the API.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D104294
2021-06-17 12:00:31 +02:00
MaheshRavishankar 3ed3e438a7 [mlir] Move `memref.dim` canonicalization using `InferShapedTypeOpInterface` to a separate pass.
Based on the discussion in
[this](https://llvm.discourse.group/t/remove-canonicalizer-for-memref-dim-via-shapedtypeopinterface/3641)
thread, the pattern that resolves the `memref.dim` of a value that is the
result of an operation implementing the
`InferShapedTypeOpInterface` is moved to a separate pass instead of
being run as a canonicalization. This allows shape resolution to
happen when explicitly required, instead of automatically through
canonicalization.

Differential Revision: https://reviews.llvm.org/D104321
2021-06-16 22:13:11 -07:00
Mehdi Amini b5e22e6d42 Migrate MLIR test passes to the new registration API
Make sure they all define getArgument()/getDescription().

Depends On D104421

Differential Revision: https://reviews.llvm.org/D104426
2021-06-16 23:42:17 +00:00
Mehdi Amini c8a3f561eb Decouple registering passes from specifying argument/description
This patch changes the (not recommended) static registration API from:

 static PassRegistration<MyPass> reg("my-pass", "My Pass Description.");

to:

 static PassRegistration<MyPass> reg;

And the explicit registration from:

  void registerPass("my-pass", "My Pass Description.",
                    [] { return createMyPass(); });

To:

  void registerPass([] { return createMyPass(); });

It is expected that Pass implementations override the getArgument() method
instead. This ensures that pipeline descriptions can be printed and parsed
back.

Differential Revision: https://reviews.llvm.org/D104421
2021-06-16 23:41:50 +00:00
Gus Smith f9a6d47c36 Add sparse matrix multiplication integration test
Adds an integration test for the SPMM (sparse matrix multiplication) kernel, which multiplies a sparse matrix by a dense matrix, resulting in a dense matrix. This is just a simple modification of the existing matrix-vector multiplication kernel.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D104334
2021-06-16 13:20:20 -07:00
Uday Bondhugula 54384d1723 [MLIR] Make store to load fwd condition less conservative
Make the store-to-load forwarding condition for -memref-dataflow-opt less
conservative: post-dominance info is not really needed. Add an additional
check for common cases.

Differential Revision: https://reviews.llvm.org/D104174
2021-06-17 01:26:38 +05:30
Prashant Kumar 51d43bbc46 [MLIR] Fix affine parallelize pass.
To control the number of outer parallel loops, we need to process the
outer loops first, and hence a pre-order walk fixes the issue.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D104361
2021-06-17 01:25:24 +05:30
Jacques Pienaar 0e760a0870 Add hook for dialect specializing processing blocks post inlining calls
This allows dialects to do different post-processing with the inliner depending on the operation (my use case requires different attribute propagation rules depending on the call op). This hook runs before the regular processInlinedBlocks method.

Differential Revision: https://reviews.llvm.org/D104399
2021-06-16 12:53:21 -07:00
Mehdi Amini a6559b42ce Fix verifier crashing on some invalid IR
In a region with multiple blocks, the verifier will try to check
dominance and may query the successor list of a block, even though the
block may be empty or may not end with a terminator.

Differential Revision: https://reviews.llvm.org/D104411
2021-06-16 19:36:28 +00:00
Aart Bik 619bfe8bd2 [mlir][sparse] support new kind of scalar in sparse linalg generic op
We have several ways of introducing a scalar invariant value into
linalg generic ops (should we limit this somewhat?). This revision
makes sure we handle all of them correctly in the sparse compiler.

Reviewed By: gysit

Differential Revision: https://reviews.llvm.org/D104335
2021-06-16 11:00:49 -07:00
Aart Bik ec8910c4ad [mlir][sparse] integration test for all-dense annotated "sparse" output
Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D104277
2021-06-15 15:44:11 -07:00
MaheshRavishankar 621d93d263 [mlir][SCF] Remove empty else blocks of `scf.if` operations.
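
A sketch of the kind of IR this cleans up (illustrative; the op inside the then-block is made up):

```
// Before: the else region only holds the implicit terminator.
scf.if %cond {
  "some.dialect.op"() : () -> ()
} else {
}
// After the canonicalization, the empty else block is dropped:
scf.if %cond {
  "some.dialect.op"() : () -> ()
}
```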
Differential Revision: https://reviews.llvm.org/D104273
2021-06-15 15:07:20 -07:00
Aart Bik 727a63e0d9 [mlir][sparse] allow all-dense annotated "sparse" tensor output
This is a very careful start with allowing sparse tensors on the
left-hand side of tensor index expressions (viz. sparse output).
Note that there is a subtle difference between non-annotated tensors
(dense, remain n-dim, handled by classic bufferization) and all-dense
annotated "sparse" tensors (linearized to 1-dim without overhead
storage, bufferized by sparse compiler, backed by runtime support library).
This revision gently introduces some new IR to facilitate annotated outputs,
to be generalized to truly sparse tensors in the future.

Reviewed By: gussmith23, bixia

Differential Revision: https://reviews.llvm.org/D104074
2021-06-15 14:55:07 -07:00
Arpith C. Jacob dd1992efd3 Support lowering of index-cast on vector types.
The index cast operation accepts vector types. Implement its lowering in this patch.
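
For illustration (assumed shapes and element types), a cast between vectors of index and integer elements that can now be lowered:

```
%0 = index_cast %v : vector<4xindex> to vector<4xi64>
```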

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D104280
2021-06-15 12:51:30 -07:00
Tobias Gysi ff2ef4d684 [mlir][linalg] Adapt yaml codegen to support scalar parameters.
The patch updates the C++ yaml code generation to support scalar operands as added in https://reviews.llvm.org/D104220.

Differential Revision: https://reviews.llvm.org/D104224
2021-06-15 15:20:48 +00:00
Adrian Kuegel f112bd61eb [mlir] Add SignOp to complex dialect.
Also add a conversion pattern from Complex Dialect to Standard/Math Dialect.
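
A sketch of the intended usage (illustrative operand name):

```
%s = complex.sign %z : complex<f32>
```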

Differential Revision: https://reviews.llvm.org/D104292
2021-06-15 15:22:31 +02:00
Alex Zinenko 9b2a1bcf6f [mlir] separable registration of attribute and type interfaces
It may be desirable to provide an interface implementation for an attribute or
a type without modifying the definition of said attribute or type. Notably,
this allows implementing interfaces for attributes and types outside of the
dialect that defines them and, in particular, providing interfaces for built-in
types. Provide the mechanism to do so.

Currently, separable registration requires the attribute or type to have been
registered with the context, i.e. for the dialect containing the attribute or
type to be loaded. This can be relaxed in the future using a mechanism similar
to delayed dialect interface registration.

See https://llvm.discourse.group/t/rfc-separable-attribute-type-interfaces/3637

Depends On D104233

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D104234
2021-06-15 15:20:27 +02:00
Tobias Gysi 662f9bff33 [mlir][linalg][python] Adapt the OpDSL to use scalars.
The patch replaces the existing capture functionality with the scalar operands introduced by https://reviews.llvm.org/D104109. Scalar operands behave like tensor operands except that they are not indexed. As a result, ScalarDefs can be accessed directly, as no indexing expression is needed.

The patch only updates the OpDSL. The C++ side is updated by a follow up patch.

Differential Revision: https://reviews.llvm.org/D104220
2021-06-15 12:54:00 +00:00
Benjamin Kramer cd93935146 [mlir][MemRef] Make sure types match when folding dim(reshape)
Reshape can take integer types in addition to index, but dim always
returns index.

Differential Revision: https://reviews.llvm.org/D104287
2021-06-15 12:33:44 +02:00
Adrian Kuegel 662e074d90 [mlir] Add NegOp to complex dialect.
Also add a lowering pattern from complex dialect to standard dialect.
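
For illustration (assumed operand name):

```
%n = complex.neg %z : complex<f32>
```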

Differential Revision: https://reviews.llvm.org/D104284
2021-06-15 12:16:22 +02:00
Matthias Springer b6ab4f1a8b [mlir][linalg] Fold linalg.pad_tensor if src type == result type
Fold PadTensorOp to source if source type and result type have static shape and are equal.
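
A sketch of a foldable case (illustrative shapes and names): zero low/high padding with matching static types, so the op folds to its source %src.

```
%0 = linalg.pad_tensor %src low[0, 0] high[0, 0] {
^bb0(%i: index, %j: index):
  linalg.yield %cst : f32
} : tensor<4x4xf32> to tensor<4x4xf32>
```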

Differential Revision: https://reviews.llvm.org/D103778
2021-06-15 17:25:12 +09:00
Tres Popp 6c7be41767 Support buffers in LinalgFoldUnitExtentDims
This doesn't add any canonicalizations, but performs the same
simplification on buffer-semantics linalg.generic ops by using
linalg::ReshapeOp instead of linalg::TensorReshapeOp.

Differential Revision: https://reviews.llvm.org/D103513
2021-06-15 08:22:22 +02:00
Sean Silva 853a614864 [mlir:OpFormatGen] Add Support for `$_ctxt` in the transformer.
This is useful for "build tuple" type ops. In my case, in npcomp, I have
an op:

```
// Result type is `!torch.tuple<!torch.tensor, !torch.tensor>`.
torch.prim.TupleConstruct %0, %1 : !torch.tensor, !torch.tensor
```

and the context is required for the `Torch::TupleType::get` call (for
the case of an empty tuple).

The handling of these FmtContexts in the code is pretty ad-hoc -- I didn't
attempt to rationalize it and just made a targeted fix. As someone
unfamiliar with the code, I had a hard time seeing how to fix the situation
more broadly.

Differential Revision: https://reviews.llvm.org/D104274
2021-06-14 18:02:55 -07:00
Hanhan Wang e3bc4dbe8e [mlir][Linalg] Make printer/parser have the same behavior.
The parser of the generic op did not recognize the output of mlir-opt when
there are multiple outputs: one would wrap the result types with braces, and
the other would not. The patch makes the behavior consistent.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D104256
2021-06-14 13:38:30 -07:00
Uday Bondhugula 88e4aae57d [MLIR][NFC] Rename MemRefDataFlow -> AffineScalarReplacement
NFC. Rename MemRefDataFlow -> AffineScalarReplacement and move to
AffineTransforms library. Pass command line rename: -memref-dataflow-opt
-> affine-scalrep. Update outdated pass documentation.

Rationale:
https://llvm.discourse.group/t/move-and-rename-memref-dataflow-opt-lib-transforms-lib-affine-dialect-transforms/3640

Differential Revision: https://reviews.llvm.org/D104190
2021-06-14 17:52:53 +05:30
Tobias Gysi 046922e100 [mlir][linalg] Add support for scalar input operands.
Up to now, all structured op operands have been assumed to be shaped. The patch relaxes this assumption and allows scalar input operands. In contrast to shaped operands, scalar operands are not indexed and are directly forwarded to the body of the operation. Like all other operands, scalar operands are associated with an indexing map that, in the case of a scalar or a 0D-operand, has an empty range.

We will use scalar operands as a replacement for the capture mechanism. In contrast to captures, the approach ensures we can generate the function signature from the operand list and it prevents outdated capture values in case a transformation updates only the capture operand but not the hidden body of a named operation.

Removing captures and updating existing operations such as linalg.fill is left for a later patch.

The patch depends on https://reviews.llvm.org/D103891 and https://reviews.llvm.org/D103890.

Differential Revision: https://reviews.llvm.org/D104109
2021-06-14 06:27:16 +00:00
Matthias Springer ddda52ce3c [mlir][linalg] Lower PadTensorOps with non-constant pad value
The padding of such ops is not generated in a vectorized way. Instead, emit a tensor::GenerateOp.

We may vectorize GenerateOps in the future.

Differential Revision: https://reviews.llvm.org/D103879
2021-06-14 15:11:13 +09:00
Adrian Kuegel 73cbc91c93 [mlir] Add ExpOp to Complex dialect.
Also add a conversion pattern from Complex to Standard/Math dialect.

Differential Revision: https://reviews.llvm.org/D104108
2021-06-14 08:08:53 +02:00
Matthias Springer 01e3b34469 [mlir][linalg] Vectorize linalg.pad_op source copying (improved)
Vectorize linalg.pad_op source copying if the source or result shape is static.

Differential Revision: https://reviews.llvm.org/D103791
2021-06-14 14:43:56 +09:00
Matthias Springer 4c2f3d810b [mlir][linalg] Vectorize linalg.pad_op source copying (static source shape)
If the source operand of a linalg.pad_op operation has static shape, vectorize the copying of the source.

Differential Revision: https://reviews.llvm.org/D103747
2021-06-14 14:31:34 +09:00
Matthias Springer 98fff5153a [mlir][linalg] Lower PadTensorOp to InitTensorOp + FillOp + SubTensorInitOp
Currently limited to constant pad values. Any combination of dynamic/static tensor sizes and padding sizes is supported.

Differential Revision: https://reviews.llvm.org/D103679
2021-06-14 14:21:08 +09:00
Chris Lattner 0dd4c4b5ae [Testsuite] Change these tests to only have a single verification error, NFC.
These tests check for various verification failures, but were missing returns
at the end of their functions. Add the returns to better focus the tests.
2021-06-13 21:36:31 -07:00
Matthias Springer fdb21f0c5e [mlir][linalg] Remove generic PadTensorOp vectorization pattern
The generic vectorization pattern handles only those cases where
low and high padding are zero. These cases are already handled by a
canonicalization pattern.

Also add a new canonicalization test case to ensure that tensor cast ops
are properly inserted.

A more general vectorization pattern will be added in a subsequent commit.

Differential Revision: https://reviews.llvm.org/D103590
2021-06-14 10:53:50 +09:00
Matthias Springer 562f9e995d [mlir] Vectorize linalg.pad_tensor consumed by transfer_write
Vectorize linalg.pad_tensor without generating a linalg.init_tensor when consumed by a transfer_write.

Differential Revision: https://reviews.llvm.org/D103137
2021-06-14 10:17:23 +09:00
Matthias Springer b1fd8a13cc [mlir] Vectorize linalg.pad_tensor consumed by subtensor_insert
Vectorize linalg.pad_tensor without generating a linalg.init_tensor when consumed by a subtensor_insert.

Differential Revision: https://reviews.llvm.org/D103780
2021-06-14 09:59:38 +09:00
Matthias Springer b1b822714d [mlir] Vectorize linalg.pad_tensor consumed by transfer_read
Vectorize linalg.pad_tensor without generating a linalg.init_tensor when consumed by a transfer_read.

Differential Revision: https://reviews.llvm.org/D103735
2021-06-14 09:52:25 +09:00
Hanhan Wang b4baccc2a7 Introduce tensor.insert op to Tensor dialect.
Add the `tensor.insert` op so that `tensor.extract`/`tensor.insert` work in
pairs for the `scalar` domain, just like `subtensor`/`subtensor_insert` work in
pairs in the `tensor` domain and `vector.transfer_read`/`vector.transfer_write`
work in pairs in the `vector` domain.
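
For illustration, an assumed round trip in the scalar domain (names and shapes are made up):

```
// Read a scalar out of the tensor, then write an updated value back in.
%e = tensor.extract %t[%i, %j] : tensor<4x4xf32>
%u = tensor.insert %v into %t[%i, %j] : tensor<4x4xf32>
```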

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D104139
2021-06-13 13:45:40 -07:00
Shashij gupta 466e5aba64 [MLIR] Simplify affine.if ops with trivial conditions
The commit simplifies affine.if ops:
the affine.if operation gets removed if its condition is universally true or false, and the then/else block is merged into the parent block.
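
A sketch of a trivially-true case (illustrative; the integer set and surrounding ops are made up):

```
affine.for %i = 0 to 10 {
  // The condition (1 >= 0) always holds, so the affine.if is removed and
  // its then-block is merged into the parent block.
  affine.if affine_set<(d0) : (1 >= 0)>(%i) {
    affine.store %v, %buf[%i] : memref<10xf32>
  }
}
```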

Signed-off-by: Shashij Gupta shashij.gupta@polymagelabs.com

Reviewed By: bondhugula, pr4tgpt

Differential Revision: https://reviews.llvm.org/D104015
2021-06-12 19:29:10 +05:30
Uday Bondhugula c8b8e8e022 [MLIR] Execution engine python binding support for shared libraries
Add support to the Python bindings for the MLIR execution engine to load a
specified list of shared libraries, e.g. to use the MLIR runtime
utility libraries.

Differential Revision: https://reviews.llvm.org/D104009
2021-06-12 05:46:38 +05:30
Denys Shabalin fdc0d4360b Introduce alloca_scope op
## Introduction

This proposal describes a new op called `alloca_scope`, to be added to the
`std` dialect (and later moved to the `memref` dialect).

## Motivation

Alloca operations are easy to misuse, especially if one relies on them while
doing rewriting/conversion passes. For example, consider two independent
dialects: one defines an op that wants to allocate on the stack, and another
defines a construct that corresponds to some form of looping:

```
dialect1.looping_op {
  %x = dialect2.stack_allocating_op
}
```

Since the dialects might not know about each other, they are going to define a
lowering to std/scf/etc. independently:

```
scf.for … {
   %x_temp = std.alloca …
   … // do some domain-specific work using %x_temp buffer
   … // and store the result into %result
   %x = %result
}
```

Later on, the scf ops and `std.alloca` are going to be lowered to LLVM using a
combination of `llvm.alloca` and unstructured control flow.

At this point the use of `%x_temp` is bound either to be optimized by
LLVM (for example using mem2reg) or, in the worst case, to perform an independent
stack allocation on each iteration of the loop. While the LLVM optimizations are
likely to succeed, they are not guaranteed to do so, and they leave room for
surprising issues with unexpected stack usage.

## Proposal

We propose a new operation, `alloca_scope`, that defines a finer-grained
allocation scope for the alloca-allocated memory:

```
alloca_scope {
   %x_temp = alloca …
   ...
}
```

Here the lifetime of `%x_temp` is going to be bound to the narrow annotated
region within `alloca_scope`. Moreover, one can also return values out of the
alloca_scope with an accompanying `alloca_scope.return` op (that behaves
similarly to `scf.yield`):

```
%result = alloca_scope {
   %x_temp = alloca …
   …
   alloca_scope.return %myvalue
}
```

Under the hood, the `alloca_scope` is going to be lowered to a combination of
`llvm.intr.stacksave` and `llvm.intr.stackrestore` that are going to be invoked
automatically as control flow enters and leaves the body of the `alloca_scope`.

The key value of the new op is to allow deterministic, guaranteed stack use
through an explicit annotation in the code that is finer-grained than the
function-level scope of the `AutomaticAllocationScope` interface. `alloca_scope`
can be inserted at arbitrary locations and doesn't require non-trivial
transformations such as outlining.

## Which dialect

Before the memref dialect is split out, `alloca_scope` can temporarily reside in
the `std` dialect, and later be moved to `memref` together with the rest of the
memory-related operations.

## Implementation

An implementation of the op is available [here](https://reviews.llvm.org/D97768).

Original commits:

* Add initial scaffolding for alloca_scope op
* Add alloca_scope.return op
* Add no region arguments and variadic results
* Add op descriptions
* Add failing test case
* Add another failing test
* Initial implementation of lowering for std.alloca_scope
* Fix backticks
* Fix getSuccessorRegions implementation

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D97768
2021-06-11 19:28:41 +02:00
thomasraoux edd9515bd1 [mlir][VectorToGPU] First step to convert vector ops to GPU MMA ops
This is the first step to convert vector ops to MMA operations in order to
target GPU tensor core ops. This currently only supports simple cases;
transpose and element-wise operations will be added later.

Differential Revision: https://reviews.llvm.org/D102962
2021-06-11 07:52:32 -07:00
Alex Zinenko ad381e39a5 [mlir] Provide minimal Python bindings for the math dialect
Reviewed By: ulysseB

Differential Revision: https://reviews.llvm.org/D104045
2021-06-11 13:21:26 +02:00
River Riddle 8800047707 [mlir-ir-printing] Prefix the dump message with the split marker(// -----)
This allows for better interaction with tools (such as mlir-lsp-server), as it separates the IR into separate modules for consecutive dumps.

Differential Revision: https://reviews.llvm.org/D104073
2021-06-10 17:34:50 -07:00
Benoit Jacob 20daedacca 2d Arm Neon sdot op, and lowering to the intrinsic.
This adds the Sdot2d op, which is similar to the usual Neon
intrinsic except that it takes 2d vector operands, reflecting the
structure of the arithmetic it performs: 4 separate
4-dimensional dot products, whence the vector<4x4xi8> shape.

This also adds a new pass, arm-neon-2d-to-intr, lowering
this new 2d op to the 1d intrinsic.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D102504
2021-06-10 14:36:39 -07:00
River Riddle ff81a2c95d [mlir-lsp-server] Add support for textDocument/documentSymbols
This allows for building an outline of the symbols and symbol tables within the IR, enabling easy navigation to functions/modules and other symbol/symbol table operations within the IR.

Differential Revision: https://reviews.llvm.org/D103729
2021-06-10 10:58:39 -07:00
thomasraoux 428a62f65f [mlir][gpu] Add op to create MMA constant matrix
This allows creating a matrix with all elements set to a given value. This is
needed to implement a simple dot op.
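
A sketch of the intended usage (op name, operand, and type assumed for illustration):

```
// Splat a constant value into an MMA accumulator matrix.
%mat = gpu.subgroup_mma_constant_matrix %cst : !gpu.mma_matrix<16x16xf16, "COp">
```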

Differential Revision: https://reviews.llvm.org/D103870
2021-06-10 08:34:04 -07:00