Commit Graph

6446 Commits

Author SHA1 Message Date
Arjun P 2b44a7325c [MLIR] Simplex: support adding new variables dynamically
Reviewed By: Groverkss

Differential Revision: https://reviews.llvm.org/D109962
2021-09-18 21:32:17 +05:30
Jacques Pienaar 0a1e569d37 [mlir-c] Add getting fused loc
For creating a fused loc using an array of locations and metadata.

Differential Revision: https://reviews.llvm.org/D110022
2021-09-18 06:57:51 -07:00
Uday Bondhugula 57eda9becc [MLIR][GPU] Add constant propagator for gpu.launch op
Add a constant propagator for gpu.launch op in cases where the
grid/thread IDs can be trivially determined to take a single constant
value of zero.
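
A minimal sketch of the kind of IR this targets (names and values below are illustrative, not from the patch):

```
// A launch whose grid/block sizes are all the constant 1: the corresponding
// block/thread IDs can only ever be 0, so their uses can be replaced by a
// constant 0 inside the body.
%c1 = constant 1 : index
gpu.launch blocks(%bx, %by, %bz) in (%gx = %c1, %gy = %c1, %gz = %c1)
           threads(%tx, %ty, %tz) in (%sx = %c1, %sy = %c1, %sz = %c1) {
  // ... uses of %bx, %by, %bz, %tx, %ty, %tz fold to 0 ...
  gpu.terminator
}
```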

Differential Revision: https://reviews.llvm.org/D109994
2021-09-18 12:02:46 +05:30
Aart Bik 46e77b5d10 [mlir][sparse] add a sparse quantized_matmul example to integration test
Note that this revision adds a very tiny bit of constant folding in the
sparse compiler lattice construction. Although I am generally trying to
avoid such canonicalizations (and rely on other passes to fix this instead),
the benefits of avoiding a very expensive disjunction lattice construction
justify having this special code (at least for now).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109939
2021-09-17 13:04:44 -07:00
Krzysztof Drewniak 121aab84d1 [MLIR][Affine] Simplify nested modulo operations when able
It is the case that, for all positive a and b such that b divides a,
(e mod a) mod b = e mod b. For example, ((d0 mod 35) mod 5) can
be simplified to (d0 mod 5), but ((d0 mod 35) mod 4) cannot be simplified
further (d0 = 36 is a counterexample).

This change enables more complex simplifications. For example,
((d0 * 72 + d1) mod 144) mod 9 can now simplify to (d0 * 72 + d1) mod 9
and thus to d1 mod 9. Expressions with chained modulus operators are
reasonably common in tensor applications, and this change _should_
improve code generation for such expressions.
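
For illustration, a hypothetical affine map before and after this simplification:

```
// before
#before = affine_map<(d0, d1) -> (((d0 * 72 + d1) mod 144) mod 9)>
// after: 9 divides 144 and d0 * 72 is a multiple of 9, so only d1 mod 9 remains
#after = affine_map<(d0, d1) -> (d1 mod 9)>
```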

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D109930
2021-09-17 19:06:00 +00:00
thomasraoux 08f0cb7719 [mlir] Prevent crash in DropUnitDim pattern due to tensor with encoding
Differential Revision: https://reviews.llvm.org/D109984
2021-09-17 12:03:16 -07:00
thomasraoux 36aac53b36 [mlir][linalg] Extend drop unit dim pattern to all cases of reduction
Even with all parallel loops, reading the output value is still allowed, so we
don't have to handle reduction loops differently.

Differential Revision: https://reviews.llvm.org/D109851
2021-09-17 10:09:57 -07:00
thomasraoux 416679615d [mlir] Linalg hoisting should ignore uses outside the loop
Differential Revision: https://reviews.llvm.org/D109859
2021-09-17 10:06:57 -07:00
thomasraoux a123e3c48b [mlir] Fix potential crash in hoistRedundantVectorTransfers
Differential Revision: https://reviews.llvm.org/D107856
2021-09-17 10:05:20 -07:00
Tobias Gysi 90b7817e03 [mlir][linalg] Add helper to update IndexOps after tiling (NFC).
Add the addTileLoopIvsToIndexOpResults method to shift the IndexOp results after tiling.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D109761
2021-09-17 15:17:33 +00:00
Arjun P 58719f6153 [MLIR] PresburgerSet: slightly expand documentation 2021-09-17 18:04:46 +05:30
Arjun P 44db07f11f [MLIR] AffineStructures: support removing a range of constraints at once
Reviewed By: Groverkss, grosser

Differential Revision: https://reviews.llvm.org/D109892
2021-09-17 16:27:48 +05:30
Arjun P 6607bd9fd8 [MLIR] AffineStructures::removeIdRange: support specifying a range within an IdKind
Reviewed By: Groverkss, grosser

Differential Revision: https://reviews.llvm.org/D109896
2021-09-17 16:25:26 +05:30
Arjun P f263ea1571 [MLIR] Matrix: support resizing horizontally
Reviewed By: Groverkss

Differential Revision: https://reviews.llvm.org/D109897
2021-09-17 16:22:31 +05:30
MaheshRavishankar 04a66f8d2b Fixing vector add pattern that incorrectly returns success.
The pattern was returning success even if it did no work, leading to pattern application running up to the max iteration count and failing.

Reviewed By: nicolasvasilache, mravishankar

Differential Revision: https://reviews.llvm.org/D109791
2021-09-16 14:48:09 -07:00
Rob Suderman 8662a2f208 [mlir][tosa] Relax ranked constraint on quantization builder
TosaOp definition had an artificial constraint that the input/output types
needed to be ranked to invoke the quantization builder. This is incorrect, as an
unranked tensor could still be quantized.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D109863
2021-09-16 11:43:47 -07:00
Aart Bik 860cbeb159 [mlir][sparse] add more asserts to sparse support lib
We are having issues running the integration test of the sparse compiler
on AArch64 (crashing in the lib). This revision adds more assertions.

Reviewed By: jsetoain

Differential Revision: https://reviews.llvm.org/D109861
2021-09-16 10:13:29 -07:00
Nicolas Vasilache ee2e414dde [mlir][Linalg] Cleanup doc and improve logging and readability in ComprehensiveBufferize.cpp - NFC 2021-09-16 16:41:47 +00:00
Aart Bik b1d44e5902 [mlir][sparse] add affine subscripts to sparse compilation pass
This enables the sparsification of more kernels, such as convolutions
where there is a x(i+j) subscript. It also enables more tensor invariants
such as x(1) or other affine subscripts such as x(i+1). Currently, we
reject sparsity altogether for such tensors. Despite this restriction,
however, we can already handle a lot more kernels with compound subscripts
for dense access (viz. convolution with dense input and sparse filter).
Some unit tests and an integration test demonstrate new capability.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109783
2021-09-15 20:28:04 -07:00
Mogball cb8c30d35d [DRR] Explicit Return Types in Rewrites
Adds a new rewrite directive returnType that can be added at the end of an op's
argument list to explicitly specify return types.

```
(OpX $v0, $v1, (returnType "$_builder.getI32Type()"))
```

Pass in a bound value to copy its return type, or pass a native code call to
dynamically create new types.

```
(OpX $v0, $v1, (returnType $v0, (NativeCodeCall<"..."> $v1)))
```

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D109472
2021-09-15 14:25:29 -07:00
Rob Suderman 1ac2d195ec [mlir][linalg] Add canonicalizers for depthwise conv
There are two main versions of depthwise conv depending on whether the multiplier
is 1 or not. In cases where m == 1 we should use the version without the
multiplier channel, as it can perform greater optimization.

Add lowering for the quantized/float versions to have a multiplier of one.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D108959
2021-09-15 14:09:15 -07:00
Simon Camphausen 1b79efdc72 [mlir] Fix printing of EmitC attrs/types with escape characters
Attributes and types were not escaped when printing.

Reviewed By: jpienaar, marbre

Differential Revision: https://reviews.llvm.org/D109143
2021-09-15 18:15:38 +00:00
Nicolas Vasilache 96ec0ff2b7 [mlir][Linalg] Revisit insertion points in comprehensive bufferization.
This revision fixes a corner case that could appear due to incorrect insertion point behavior in comprehensive bufferization.

Differential Revision: https://reviews.llvm.org/D109830
2021-09-15 18:11:38 +00:00
Mehdi Amini 13237c3b1e Add llvm_unreachable after fully covered switch (NFC)
This fixes a compiler warning for some version of GCC.
2021-09-15 17:53:05 +00:00
Uday Bondhugula f68939d3d9 [MLIR] Tighten type constraint on memref.global op def
Tighten the def of memref.global op to use the right kind of TypeAttr
(of MemRefType).

Differential Revision: https://reviews.llvm.org/D109822
2021-09-15 22:41:03 +05:30
Nicolas Vasilache 6fe77b1051 [mlir][Linalg] Fail comprehensive bufferization if a memref is returned.
Differential Revision: https://reviews.llvm.org/D109824
2021-09-15 15:11:17 +00:00
Nicolas Vasilache e3889b3059 [mlir][Linalg] Replace DenseSet by UnionFind in ComprehensiveBufferize - NFC
AliasInfo can now use union-find for a much more efficient implementation.
This brings no functional changes but large performance gains on more complex examples.

Differential Revision: https://reviews.llvm.org/D109819
2021-09-15 10:35:54 +00:00
Matthias Springer 934e2f695e [mlir][linalg] ComprehensiveBufferize: Do not copy InitTensorOp results
E.g.:

```
%2 = memref.alloc() {alignment = 128 : i64} : memref<256x256xf32>
%3 = memref.alloc() {alignment = 128 : i64} : memref<256x256xf32>

// ... (%3 is not written to)

linalg.copy(%3, %2) : memref<256x256xf32>, memref<256x256xf32>
vector.transfer_write %11, %2[%c0, %c0] {in_bounds = [true, true]} : vector<256x256xf32>, memref<256x256xf32>
```

Avoid copies of %3 if %3 came directly from an InitTensorOp.

Differential Revision: https://reviews.llvm.org/D109742
2021-09-15 17:28:04 +09:00
Mehdi Amini a32300a68f Make the --mlir-disable-threading command line option override the C++ API usage
This seems in line with the intent and how we build tools around it.
Update the description for the flag accordingly.
Also use an injected thread pool in MLIROptMain; now we create
threads up-front and reuse them across split buffers.

Differential Revision: https://reviews.llvm.org/D109802
2021-09-15 03:20:48 +00:00
cwz920716 500d4c45ba [MLIR] Use memref.copy ops in BufferResultsToOutParams pass.
Both copy/alloc ops are using memref dialect after this change.

Reviewed By: silvas, mehdi_amini

Differential Revision: https://reviews.llvm.org/D109480
2021-09-15 02:59:30 +00:00
Matthias Springer 9adc0114bf [mlir][linalg] PadTensorOp vectorization: Avoid redundant FillOps
Do not generate FillOps when these would be entirely overwritten.

Differential Revision: https://reviews.llvm.org/D109741
2021-09-15 09:28:37 +09:00
Mehdi Amini 1a406cd5f2 Remove unused llvm/Support/Parallel.h from MLIR (NFC)
This header isn't needed anymore: MLIR is using a thread pool
injected in the context instead of a global one.
2021-09-14 23:30:42 +00:00
Sean Silva 8dca953dd3 [mlir] Apply py::module_local() to a few more classes.
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D109776
2021-09-14 21:56:14 +00:00
Tobias Gysi 6091873651 [mlir][linalg] Reuse getValueOrCreateConstantIndexOp method (NFC).
Use getValueOrCreateConstantIndexOp introduced by https://reviews.llvm.org/D109601 in multiple places in LinalgOps.cpp.

Reviewed By: nicolasvasilache, springerm

Differential Revision: https://reviews.llvm.org/D109756
2021-09-14 15:32:29 +00:00
Tobias Gysi 44a889778c [mlir][linalg] Fold ExtractSliceOps during tiling.
Add the makeComposedExtractSliceOp method that creates an ExtractSliceOp and folds chains of ExtractSliceOps by computing the sum of their offsets and by multiplying their strides.
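
A hypothetical sketch of the composition (shapes and offsets are made up):

```
// a chain of two extract_slice ops ...
%0 = tensor.extract_slice %t[2, 4] [16, 16] [1, 1] : tensor<64x64xf32> to tensor<16x16xf32>
%1 = tensor.extract_slice %0[1, 1] [8, 8] [1, 1] : tensor<16x16xf32> to tensor<8x8xf32>
// ... is created as a single op with summed offsets (strides here are all 1)
%1 = tensor.extract_slice %t[3, 5] [8, 8] [1, 1] : tensor<64x64xf32> to tensor<8x8xf32>
```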

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D109601
2021-09-14 11:43:52 +00:00
Uday Bondhugula a91cfd1990 [MLIR] Improve op parse error message for AtLeastNOperands trait
Improve parse error message for "at least N operands" op trait.

Differential Revision: https://reviews.llvm.org/D109747
2021-09-14 15:01:51 +05:30
Matthias Springer 62883459cd [mlir][linalg] makeTiledShape: No affine.min if tile size == 1
This improves codegen (more static type information) with `scalarize-dynamic-dims`.

Differential Revision: https://reviews.llvm.org/D109415
2021-09-14 10:48:20 +09:00
Matthias Springer fb1def9c66 [mlir][linalg] New tiling option: Scalarize dynamic dims
This tiling option scalarizes all dynamic dimensions, i.e., it tiles all dynamic dimensions by 1.

This option is useful for linalg ops with partly dynamic tensor dimensions. E.g., such ops can appear in the partial iteration after loop peeling. After scalarizing dynamic dims, those ops can be vectorized.

Differential Revision: https://reviews.llvm.org/D109268
2021-09-14 10:40:50 +09:00
Matthias Springer 8faf35c0a5 [mlir][linalg] Add scf.for loop peeling to codegen strategy
Only scf.for loops are supported at the moment. linalg.tiled_loop support will be added in a subsequent commit.

Only static tensor sizes are supported. Loops for dynamic tensor sizes can be peeled, but the generated code is not optimal due to a missing canonicalization pattern.

Differential Revision: https://reviews.llvm.org/D109043
2021-09-14 10:35:01 +09:00
Nicolas Vasilache 181d18ef53 [mlir][Linalg] Insert static buffers as high as possible during ComprehensiveBufferization.
This revision allows hoisting static alloc/dealloc pairs as high as possible during ComprehensiveBufferization.
This also aligns such allocated buffers to 128B by default.

This change exhibited some issues wrt insertion points and a missing copy that are also fixed in this revision; tests are updated accordingly.

Differential Revision: https://reviews.llvm.org/D109684
2021-09-13 15:59:03 +00:00
Simon Camphausen ec92f788f3 [mlir][emitc] Print signed integers properly
Previously negative integers were printed as large unsigned values.

Reviewed By: marbre

Differential Revision: https://reviews.llvm.org/D109690
2021-09-13 15:29:30 +00:00
Matthias Springer 7c9b6a3355 [mlir][linalg] ComprehensiveBufferize: Do not copy InitTensorOps
Do not copy InitTensorOps or casts thereof.

Differential Revision: https://reviews.llvm.org/D109656
2021-09-13 22:31:54 +09:00
Nicolas Vasilache b01d223faf [mlir][Linalg] Use reify for padded op shape derivation.
Previously, we would insert a DimOp and rely on later canonicalizations.
Unfortunately, reifyShape kind of rewrites are not canonicalizations anymore.
This introduces undesirable pass dependencies.

Instead, immediately reify the result shape and avoid the DimOp altogether.
This is akin to a local folding, which avoids introducing more reliance on `-resolve-shaped-type-result-dims` (similar to compositions of `affine.apply` by construction to avoid chains of size > 1).

It does not completely get rid of the reliance on the pass as the process is merely local: calling the pass may still be necessary for global effects. Indeed, one of the tests still requires the pass.

Differential Revision: https://reviews.llvm.org/D109571
2021-09-13 11:54:59 +00:00
Valentin Clement 57bf856011 [mlir] Add missing namespace to createInlinerPass
One of the createInlinerPass overloads does not have the mlir:: namespace.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D109580
2021-09-13 11:58:27 +02:00
Mehdi Amini 7fb2394a4f Add sanity check in MLIR ODS to catch cases where argument/result/region/successor names overlap
This replaces a tablegen crash with a more friendly error.

Differential Revision: https://reviews.llvm.org/D109474
2021-09-13 06:21:25 +00:00
Kiran Chandramohan 187d9f8cd9 [OpenMP][MLIR] Add a conversion pattern for the master op
The conversion pattern is particularly useful for conversion of
block arguments in the master op.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D109610
2021-09-12 10:13:40 +00:00
Rob Suderman b0532286fe [mlir][tosa] Add shape inference for tosa.while
Tosa.while shape inference requires repeatedly running shape inference across
the body of the loop until the types become static, as we do not know the number
of iterations required by the loop body. Once the least specific arguments are
known, they are propagated to both regions.

To determine the final end type, the least restrictive types are determined
from all yields.

Differential Revision: https://reviews.llvm.org/D108801
2021-09-10 13:11:53 -07:00
Alex Zinenko 61bc6aa5a7 [mlir] spelling and style changes in ReconcileUnrealizedCasts.cpp. NFC. 2021-09-10 14:09:29 +02:00
Stephan Herhut 5e6c170b3f [mlir][linalg] Fix bufferize pattern to allow unknown operations in body of generic
The original version of the bufferization pattern for linalg.generic would
manually clone operations within the region to the bufferized clone of the
operation. This triggers legality requirements on those operations in the
conversion infra. Instead, this now uses the rewriter to inline the region,
avoiding those legality requirements.

Differential Revision: https://reviews.llvm.org/D109581
2021-09-10 13:37:42 +02:00
Matthias Springer 0f3544d185 [mlir][scf] Loop peeling: Use scf.for for partial iteration
Generate an scf.for instead of an scf.if for the partial iteration. This is for consistency reasons: The peeling of linalg.tiled_loop also uses another loop for the partial iteration.

Note: Canonicalization patterns may rewrite partial iterations to scf.if afterwards.
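
A sketch of the peeled form, with made-up bounds:

```
// original: scf.for %iv = %c0 to %c10 step %c4 { ... }
// main loop covering only full steps
scf.for %iv = %c0 to %c8 step %c4 {
  // ... full tile of 4 ...
}
// partial iteration, also emitted as an scf.for
scf.for %iv = %c8 to %c10 step %c4 {
  // ... remaining 2 elements ...
}
```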

Differential Revision: https://reviews.llvm.org/D109568
2021-09-10 19:07:09 +09:00
Tobias Gysi 16488dc300 [mlir][linalg] Pass all operands to tile to the tile loop region builder (NFC).
Extend the signature of the tile loop nest region builder to take all operand values to use and not just the scf::For iterArgs. This change allows us to pass in all block arguments of TiledLoop and use them directly instead of replacing them after the loop generation.

Reviewed By: pifon2a

Differential Revision: https://reviews.llvm.org/D109569
2021-09-10 08:35:11 +00:00
Nicolas Vasilache 5f1a1af4bf [mlir][Linalg] Properly order extract_slice traversal in comprehensive bufferization
This revision fixes the traversal order of extract_slice during the inplace analysis.
It was previously thought that such ops could be analyzed at the very end.
This is unfortunately not true as the AliasInfo for dependents of these ops need to be updated.

This change allows the aliases introduced by the bufferization of extract_slice to be properly propagated.

Differential Revision: https://reviews.llvm.org/D109519
2021-09-10 07:10:06 +00:00
natashaknk d4d50e4710 [mlir][tosa] Add lowering for tosa.clz using scf::whileOp
Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D109540
2021-09-09 15:57:35 -07:00
Aart Bik 066d786ce0 [mlir][sparse] add folding to sparse_tensor.convert
folds conversion between identical types (with tests)
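
Sketch of the fold (the #SV encoding alias is hypothetical):

```
// a conversion between identical source and destination types ...
%1 = sparse_tensor.convert %0 : tensor<64xf32, #SV> to tensor<64xf32, #SV>
// ... folds away; uses of %1 are replaced by %0
```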

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D109545
2021-09-09 15:45:19 -07:00
Alexander Slepko 89837a0e1b Adding min(f/s/u) and max(f/s/u) cases for vector reduction
This PR adds missing AtomicRMWKind::min/max cases which we would like to use for min/max reduction loop vectorizations.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D104881
2021-09-09 12:00:43 -07:00
Chris Lattner 735f46715d [APInt] Normalize naming on keep constructors / predicate methods.
This renames the primary methods for creating a zero value to `getZero`
instead of `getNullValue` and renames predicates like `isAllOnesValue`
to simply `isAllOnes`.  This achieves two things:

1) This starts standardizing predicates across the LLVM codebase,
   following (in this case) ConstantInt.  The word "Value" doesn't
   convey anything of merit, and is missing in some of the other things.

2) Calling an integer "null" doesn't make any sense.  The original sin
   here is mine and I've regretted it for years.  This moves us to calling
   it "zero" instead, which is correct!

APInt is widely used and I don't think anyone is keen to take massive source
breakage on anything so core, at least not all in one go.  As such, this
doesn't actually delete any entrypoints, it "soft deprecates" them with a
comment.

Included in this patch are changes to a bunch of the codebase, but there are
more.  We should normalize SelectionDAG and other APIs as well, which would
make the API change more mechanical.

Differential Revision: https://reviews.llvm.org/D109483
2021-09-09 09:50:24 -07:00
Aart Bik e2d3db42e5 [mlir][sparse] add casts to operations to lattice and exp builders
Further enhance the set of operations that can be handled by the sparse compiler

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109413
2021-09-09 08:49:50 -07:00
Alex Zinenko 8b58ab8ccd [mlir] Factor type reconciliation out of Standard-to-LLVM conversion
Conversion to the LLVM dialect is being refactored to be more progressive and
is now performed as a series of independent passes converting different
dialects. These passes may produce `unrealized_conversion_cast` operations that
represent pending conversions between built-in and LLVM dialect types.
Historically, a more monolithic Standard-to-LLVM conversion pass did not need
these casts as all operations were converted in one shot. Previous refactorings
have led to the requirement of running the Standard-to-LLVM conversion pass to
clean up `unrealized_conversion_cast`s even though the IR had no standard
operations in it. The pass must have been also run the last among all to-LLVM
passes, in contradiction with the partial conversion logic. Additionally, the
way it was set up could produce invalid operations by removing casts between
LLVM and built-in types even when the consumer did not accept the uncasted
type, or could lead to cryptic conversion errors (recursive application of the
rewrite pattern on `unrealized_conversion_cast` as a means to indicate failure
to eliminate casts).

In fact, the need to eliminate A->B->A `unrealized_conversion_cast`s is not
specific to to-LLVM conversions and can be factored out into a separate type
reconciliation pass, which is achieved in this commit. While the cast operation
itself has a folder pattern, it is insufficient in most conversion passes as
the folder only applies to the second cast. Without complex legality setup in
the conversion target, the conversion infra will either consider the cast
operations valid and not fold them (a separate canonicalization would be
necessary to trigger the folding), or consider the first cast invalid upon
generation and stop with error. The pattern provided by the reconciliation pass
applies to the first cast operation instead. Furthermore, having a separate
pass makes it clear when `unrealized_conversion_cast`s could not have been
eliminated since it is the only reason why this pass can fail.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D109507
2021-09-09 16:51:24 +02:00
Uday Bondhugula 524eafa5b2 [MLIR] Avoid double space print on llvm global op
Fix extra space print for llvm global op when the 'unnamed_addr'
attribute was empty. This led to two spaces being printed in the custom
form between non-whitespace chars. A round trip would add an extra space
to a typical spaced form. NFC.

Differential Revision: https://reviews.llvm.org/D109502
2021-09-09 19:52:38 +05:30
Alex Zinenko 1ce752b741 [mlir] support reductions in SCF to OpenMP conversion
OpenMP reductions need a neutral element, so we match some known reduction
kinds (integer add/mul/or/and/xor, float add/mul, integer and float min/max) to
define the neutral element and the atomic version when possible to express
using atomicrmw (everything except float mul). The SCF-to-OpenMP pass becomes a
module pass because it now needs to introduce new symbols for reduction
declarations in the module.

Reviewed By: chelini

Differential Revision: https://reviews.llvm.org/D107549
2021-09-09 13:04:27 +02:00
Matthias Springer c7d569b8f7 [mlir][scf] Fold dim(scf.for) to dim(iter_arg)
Fold dim ops of scf.for results to dim ops of the respective iter args if the loop is shape preserving.
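
A hypothetical sketch of the fold for a shape-preserving loop:

```
%res = scf.for %iv = %lb to %ub step %c1 iter_args(%arg = %t) -> (tensor<?xf32>) {
  // the body never changes the shape of %arg
  scf.yield %arg : tensor<?xf32>
}
%d = tensor.dim %res, %c0 : tensor<?xf32>
// folds to
%d = tensor.dim %t, %c0 : tensor<?xf32>
```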

Differential Revision: https://reviews.llvm.org/D109430
2021-09-09 13:47:13 +09:00
Matthias Springer e2c8fcb9d0 [mlir][linalg] Fold dim(linalg.tiled_loop) to dim(output_arg)
Fold dim ops of linalg.tiled_loop results to dim ops of the respective iter args if the loop is shape preserving.

Differential Revision: https://reviews.llvm.org/D109431
2021-09-09 13:37:28 +09:00
Matthias Springer f7137da174 [mlir][linalg] Fix dim(iter_arg) canonicalization
Run a small analysis to see if the runtime type of the iter_arg is changing. Fold only if the runtime type stays the same. (Same as `DimOfIterArgFolder` in SCF.)

Differential Revision: https://reviews.llvm.org/D109299
2021-09-09 12:13:05 +09:00
Matthias Springer c95a7246a3 [mlir][linalg] Tiling: Use loop ub in extract_slice size computation if possible
When tiling a LinalgOp, extract_slice/insert_slice pairs are inserted. To avoid going out-of-bounds when the tile size does not divide the shape size evenly (at the boundary), AffineMin ops are inserted. Some ops have assumptions regarding the dimensions of inputs/outputs. E.g., in a `A * B` matmul, `dim(A, 1) == dim(B, 0)`. However, loop bounds use either `dim(A, 1)` or `dim(B, 0)`.

With this change, AffineMin ops are expressed in terms of loop bounds instead of tensor sizes. (Both have the same runtime value.) This simplifies canonicalizations.
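
A hypothetical sketch of the difference for a tile size of 4 (names are made up):

```
// before: bound expressed via the tensor size, e.g. dim(A, 1)
%sz = affine.min affine_map<(d0)[s0] -> (4, s0 - d0)>(%iv)[%dimA1]
// after: bound expressed via the loop upper bound, which has the same runtime value
%sz = affine.min affine_map<(d0)[s0] -> (4, s0 - d0)>(%iv)[%ub]
```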

Differential Revision: https://reviews.llvm.org/D109267
2021-09-09 11:06:22 +09:00
Chris Lattner 40a89da65c [Canonicalize] Don't call isBeforeInBlock in OperationFolder::tryToFold.
This patch (e4635e6328) fixed a bug where a newly generated/reused
constant wouldn't dominate a folded operation.  It did so by calling
isBeforeInBlock to move the constant around on demand.  This introduced
a significant compile time regression, because "isBeforeInBlock" is
O(n) in the size of a block the first time it is called, and the cache
is invalidated any time canonicalize changes something big in the block.

This fixes LLVM PR51738 and this CIRCT issue:
https://github.com/llvm/circt/issues/1700

This does affect the order of constants left in the top of a block,
I staged the testsuite changes in rG42431b8207a5.

Differential Revision: https://reviews.llvm.org/D109454
2021-09-08 13:33:22 -07:00
Kunwar Shaanjeet Singh Grover dea76ccaf4 [MLIR] FlatAffineConstraints: Refactored computation of explicit representation for identifiers
This patch refactors the existing implementation of computing an explicit
representation of an identifier as a floordiv in terms of other identifiers and
exposes this computation as a public function.

The computation of this representation is required to support local identifiers
in PresburgerSet subtract, complement and isEqual.

Reviewed By: bondhugula, arjunp

Differential Revision: https://reviews.llvm.org/D106662
2021-09-08 20:24:46 +05:30
Arnab Dutta 1524b01541 [MLIR] Add loop coalesce utility for affine.for
Add loop coalesce utility for affine.for. This expects loops to have
been normalized a-priori. This works for both constant as well non
constant upper bounds having single/multiple result upper bound affine
map.

With contributions from Arnab Dutta and Uday Bondhugula.

Reviewed By: bondhugula, ayzhuang

Differential Revision: https://reviews.llvm.org/D108126
2021-09-08 18:02:23 +05:30
Aart Bik d02e12fadf [mlir][sparse] fix typos
Perhaps one of these days I will actually learn how to spell opaque....

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109391
2021-09-07 14:20:05 -07:00
Alex Zinenko b841ae55e5 [mlir] Fix SplatOp lowering to the LLVM dialect
The lowering has been incorrectly using the operands of the original op instead
of the rewritten operands provided to the matchAndRewrite call. This may lead to
spurious materializations and generally invalid IR.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D109355
2021-09-07 19:14:28 +02:00
Alex Zinenko 821262eef2 [mlir] Fix GPU LaunchFunc conversion to the LLVM dialect
The conversion has been incorrectly using the operands of the original
operation instead of the converted operands provided to the matchAndRewrite
call. This may lead to spurious materializations and generally invalid IR if
the producer of the original operands is deleted in the process of conversion.

Reviewed By: csigg

Differential Revision: https://reviews.llvm.org/D109356
2021-09-07 16:50:11 +02:00
Matthias Springer c57c4f888c [mlir][linalg] linalg.tiled_loop peeling
Differential Revision: https://reviews.llvm.org/D108270
2021-09-07 09:50:08 +09:00
Alexander Belyaev 58c188507f [mlir][linalg] Fix `FoldInitTensorWithDimOp` if dim(init_tensor) is static.
It looks like it was a typo. Instead of `*maybeConstantIndex`,
`initTensorOp.getStaticSize(*maybeConstantIndex)` should be used to access the
dim size of the tensor. There is a test for that in `canonicalize.mlir`, but it
was working correctly because `ReplaceStaticShapeDims` was canonicalizing DimOp
before `FoldInitTensorWithDimOp`. So, to make the patterns more "orthogonal",
this case is disabled.

Differential Revision: https://reviews.llvm.org/D109247
2021-09-06 10:47:26 +02:00
Eugene Zhulenev fd52b4357a [mlir] Async: check awaited operand error state after sync await
Previously only await inside the async function (coroutine after lowering to async runtime) would check the error state

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D109229
2021-09-04 05:00:17 -07:00
Loren Maggiore 361458b1ce [mlir] create gpu memset op
Create a gpu memset op and corresponding CUDA and ROCm wrappers.

Reviewed By: herhut, lorenrose1013

Differential Revision: https://reviews.llvm.org/D107548
2021-09-04 08:13:04 +02:00
William S. Moses 21d43daf8f [MLIR] Primitive linkage lowering of FuncOp
FuncOp always lowers to an LLVM external linkage presently. This makes it impossible to define functions in MLIR which are local to the current module. Until MLIR FuncOps have a more formal linkage specification, this commit allows FuncOps to have an optionally specified llvm.linkage attribute, whose value will be used as the linkage of the LLVM func op when lowered.

Differential Revision: https://reviews.llvm.org/D108524

2021-09-03 20:41:39 -04:00
Mehdi Amini 78accf9f35 Make LLVM Linkage a first class attribute instead of using an integer attribute
This makes the IR more readable, in particular when this will be used on
the builtin func outside of the LLVM dialect.

Reviewed By: wsmoses

Differential Revision: https://reviews.llvm.org/D109209
2021-09-03 21:21:46 +00:00
Aart Bik eee1f1c8fb [mlir][sparse] add convenience method for sparse tensor setup
This simplifies setting up sparse tensors through C-style data structures.
Useful for runtimes that want to interact with MLIR-generated code
without knowing about all bufferization details (viz. memrefs).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109251
2021-09-03 13:35:59 -07:00
Alexander Belyaev 5ee5bbd0ff [mlir][linalg] Extend tiled_loop to SCF conversion to generate scf.parallel.
Differential Revision: https://reviews.llvm.org/D109230
2021-09-03 18:05:54 +02:00
Aart Bik b6d1a31c1b [mlir][sparse] refine heuristic for iteration graph topsort
The sparse index order must always be satisfied, but this
may give a choice in topsorts for several cases. We broke
ties in favor of any dense index order, since this gives
good locality. However, breaking ties in favor of pushing
unrelated indices into sparse iteration spaces gives better
asymptotic complexity. This revision improves the heuristic.

Note that in the long run, we are really interested in using
ML for ML to find the best loop ordering as a replacement for
such heuristics.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109100
2021-09-03 08:37:15 -07:00
Jean Perier 49af2a6275 [mlir][flang] Do not prevent integer types from being parsed as MLIR keywords
DialectAsmParser::parseKeyword is rejecting `'i' digit+` while it is
a valid identifier according to mlir/docs/LangRef.md.

Integer types actually used to be TOK_KEYWORD a while back before the
change: 6af866c58d.

This patch modifies `isCurrentTokenAKeyword` to return true for tokens that
match integer types too.

The motivation for this change is the parsing of `!fir.type<{` `component-name: component-type,`+ `}>`
type in FIR that represents Fortran derived types. The component-names are
parsed as keywords, and can very well be i32 or any ixxx (which are
valid Fortran derived type component names).

The Quant dialect type parser had to be modified since it relied on `iw` not
being parsed as keywords.

Differential Revision: https://reviews.llvm.org/D108913
2021-09-03 08:20:49 +02:00
Matthias Springer 4fa6c2734c [mlir][scf] Allow runtime type of iter_args to change
The limitation on iter_args introduced with D108806 is too restricting. Changes of the runtime type should be allowed.

Extends the dim op canonicalization with a simple analysis to determine when it is safe to canonicalize.

Differential Revision: https://reviews.llvm.org/D109125
2021-09-03 10:03:05 +09:00
Stella Laurenzo cb7b03819a [mlir][python] Simplify python extension loading.
* Now that packaging has stabilized, removes old mechanisms for loading extensions, preferring direct importing.
* Removes _cext_loader.py, _dlloader.py as unnecessary.
* Fixes the path where the CAPI dll is written on Windows. This enables that path of least resistance loading behavior to work with no further drama (see: https://bugs.python.org/issue36085).
* With this patch, `ninja check-mlir` on Windows with Python bindings works for me, modulo some failures that are actually due to a couple of pre-existing Windows bugs. I think this is the first time the Windows Python bindings have worked upstream.
* Downstream changes needed:
  * If downstreams are using the now removed `load_extension`, `reexport_cext`, etc, then those should be replaced with normal import statements as done in this patch.

Reviewed By: jdd, aartbik

Differential Revision: https://reviews.llvm.org/D108489
2021-09-03 00:43:28 +00:00
Alex Zinenko f9be7a7afd [mlir] speed up construction of LLVM IR constants when possible
The translation to LLVM IR used to construct sequential constants by recurring
down to individual elements, creating constant values for them, and wrapping
them into aggregate constants in post-order. This is highly inefficient for
large constants with known data such as DenseElementsAttr. Use LLVM's
ConstantData for the innermost dimension instead. LLVM does not seem to support
data constants for nested sequential constants, so the outer dimensions are
still handled recursively. Nevertheless, this speeds up the translation of
large constants with equal dimensions by up to 30x.

Users are advised to rewrite large constants to use flat types before
translating to LLVM IR if more efficiency in translation is necessary. This is
not done automatically as the translation is not aware of the expectations of
the overall compilation flow about type changes and indexing, in particular for
global constants with external linkage.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D109152
2021-09-02 23:07:30 +02:00
Marius Brehler f6063fedb4 [mlir] Add missing dep on MLIRTranslation 2021-09-02 16:54:46 +00:00
Kiran Chandramohan 711aa35759 [MLIR][OpenMP] Add support for declaring critical construct names
Add an operation omp.critical.declare to declare names/symbols of
critical sections. Named omp.critical operations should use symbols
declared by omp.critical.declare. Having a declare operation ensures
that the names of critical sections are global and unique. In the
lowering flow to LLVM IR, the OpenMP IRBuilder creates unique names
for critical sections.

Reviewed By: ftynse, jeanPerier

Differential Revision: https://reviews.llvm.org/D108713
2021-09-02 14:31:19 +00:00
Marius Brehler 2f0750dd2e [mlir] Add Cpp emitter
This upstreams the Cpp emitter, initially presented with [1], from [2]
to MLIR core. Together with the previously upstreamed EmitC dialect [3],
the target allows translating MLIR to C/C++.

[1] https://reviews.llvm.org/D76571
[2] https://github.com/iml130/mlir-emitc
[3] https://reviews.llvm.org/D103969

Co-authored-by: Jacques Pienaar <jpienaar@google.com>
Co-authored-by: Simon Camphausen <simon.camphausen@iml.fraunhofer.de>
Co-authored-by: Oliver Scherf <oliver.scherf@iml.fraunhofer.de>

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D104632
2021-09-02 13:51:05 +00:00
Alex Zinenko 8647e4c3a0 [mlir] support translating OpenMP loops with reductions
Use the recently introduced OpenMPIRBuilder facility to transate OpenMP
workshare loops with reductions to LLVM IR calling OpenMP runtime. Most of the
heavy lifting is done at the OpenMPIRBuilder. When other OpenMP dialect
constructs grow support for reductions, the translation can be updated to
operate on, e.g., an operation interface for all reduction containers instead
of workshare loops specifically. Designing such a generic translation for the
single operation that currently supports reductions is premature since we don't
know how the reduction modeling itself will be generalized.

Reviewed By: kiranchandramohan

Differential Revision: https://reviews.llvm.org/D107343
2021-09-02 15:38:20 +02:00
Alexander Belyaev f68de11c10 [mlir][linalg] Expose function to create op on buffers during bufferization.
Differential Revision: https://reviews.llvm.org/D109140
2021-09-02 11:09:05 +02:00
Aart Bik 2754604e54 [mlir][sparse] sparse runtime support library improvements
(1) renamed SparseTensor to SparseTensorCOO; the other one remains SparseTensorStorage, to focus on the contrast

(2) documents the difference between the public API exclusively for compiler-generated code and methods that could be used by other runtimes (TBD) that want to interact with MLIR

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109039
2021-09-01 16:51:14 -07:00
Jacques Pienaar f7bf8a8658 [mlir][capi] Add NameLoc
Add method to get NameLoc. Treat null child location as unknown to avoid
needing to create UnknownLoc in C API where child loc is not needed.

Differential Revision: https://reviews.llvm.org/D108678
2021-09-01 16:16:35 -07:00
Weiwei Li a79d7c2c85 [mlir][SPIRV] Add Image Operands for Image Instructions
This patch adds Image Operands to the SPIR-V dialect and also lets ImageDrefGather use Image Operands.

Image Operands are used in many image instructions. "Image Operands encode what operands follow, as per Image Operands." Usually, they are optional to image instructions.

The format of image operands looks like:

    %0 = spv.ImageXXXX %1, ... %3 : f32 ["Bias|Lod"](%4, %5 : f32, f32) -> ...

This patch doesn’t implement all operands (see Section 3.14 in the SPIR-V Spec) but provides a skeleton of it. There is a TODO in the verifyImageOperands function.

Co-authored-by: Alan Liu <alanliu.yf@gmail.com>

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D108501
2021-09-02 04:14:17 +08:00
Mehdi Amini 43a894365e Remove deprecated registration APIs (NFC)
In D104421, we changed the API for pass registration.
Before you would write:

      void registerPass("my-pass", "My Pass Description.",
                        [] { return createMyPass(); });
while now you’d only write:

      void registerPass([] { return createMyPass(); });

If you’re using TableGen to define your pass registration, you shouldn’t need to do anything. If you’re using the C++ API directly, here are the required changes.
Your project may also be broken even if you use TableGen and call the
generated registration API, in case your pass implementation didn’t inherit from
the MyPassBase class generated by TableGen.

If you don't use TableGen, the "my-pass" and "My Pass Description." fields must
be provided by overriding methods on the pass itself:

  llvm::StringRef getArgument() const final { return "my-pass"; }
  llvm::StringRef getDescription() const final {
    return "My Pass Description.";
  }

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D104429
2021-09-01 18:53:30 +00:00
natashaknk f596acc74d [mlir][tosa] Small refactor to the functionality of Depthwise_Conv2D to add the bias at the end of the convolution
Follow-up to the Conv2d and fully_connected lowering adjustments

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D108949
2021-09-01 10:01:00 -07:00
wren romano b04b757a8e [mlir][sparse] Rename the public SparseTensorStorage::asCOO to toCOO
Trying to reduce confusion by having the name of the public method match that of the private method for handling the recursion.  Also adding some comments to SparseTensorStorage::fromCOO to help clarify what the recursive calls are doing in the dense case.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D108954
2021-08-31 15:44:34 -07:00
MaheshRavishankar b686fdbf92 [mlir][Linalg] Drop output tensor from `linalg.pad_tensor` op.
The output tensor was added for tiling purposes. With use of
`TilingInterface` for tiling pad operations, there is no need for an
explicit operand for the shape of result of `linalg.pad_tensor`
op. The interface allows the tiling pattern to query the value that
can be used for the "init" needed for tiling dynamically.
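
For reference, a sketch of the op without the explicit output operand (shapes and values are hypothetical):

```
%pad = linalg.pad_tensor %t low[0, 1] high[2, 3] {
^bb0(%i: index, %j: index):
  linalg.yield %cst : f32
} : tensor<4x4xf32> to tensor<6x8xf32>
```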

Differential Revision: https://reviews.llvm.org/D108613
2021-08-31 11:12:24 -07:00
Mehdi Amini 387f95541b Add a new interface allowing to set a default dialect to be used for printing/parsing regions
Currently the builtin dialect is the default namespace used for parsing
and printing. As such module and func don't need to be prefixed.
In the case of some dialects that define new regions for their own
purpose (like SpirV modules for example), it can be beneficial to
change the default dialect in order to improve readability.

Differential Revision: https://reviews.llvm.org/D107236
2021-08-31 17:52:40 +00:00
Mehdi Amini c41b16c26b Change ASM Op printer to print the operation name in the framework instead of leaving it up to each individual operation
This aligns the printer with the parser contract: the operation isn't part of the user-controllable part of the syntax.

Differential Revision: https://reviews.llvm.org/D108804
2021-08-31 17:52:40 +00:00
Mehdi Amini fd87963eee Change dialect `printOperation()` hook to `getOperationPrinter()`
This makes the hook return a printer if available, instead of using LogicalResult  to
indicate if a printer was available (and invoked). This allows the caller to detect that
the dialect has a printer for a given operation without actually invoking the printer.
It'll be leveraged in a future revision to move printing the op name itself under control
of the ASMPrinter.

Differential Revision: https://reviews.llvm.org/D108803
2021-08-31 17:52:39 +00:00
Tres Popp 44485fcd97 [mlir] Prevent assertion failure in DropUnitDims
Don't assert fail on strided memrefs when dropping unit dims.
Instead just leave them unchanged.

Differential Revision: https://reviews.llvm.org/D108205
2021-08-31 12:15:13 +02:00
marina kolpakova a.k.a. geexie 0080d2aa55 [mlir][gpu] folds memref.dim of gpu.alloc
implements canonicalization which folds memref.dim(gpu.alloc(%size), %idx) -> %size
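
Sketch of the fold:

```
%m = gpu.alloc(%size) : memref<?xf32>
%d = memref.dim %m, %c0 : memref<?xf32>
// after canonicalization, uses of %d are replaced directly by %size
```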

Differential Revision: https://reviews.llvm.org/D108892
2021-08-31 12:33:10 +03:00
Stella Laurenzo f05ff4f757 [mlir][python] Apply py::module_local() to all classes.
* This allows multiple MLIR-API embedding downstreams to co-exist in the same process.
* I believe this is the last thing needed to enable isolated embedding.

Differential Revision: https://reviews.llvm.org/D108605
2021-08-30 22:18:43 -07:00
MaheshRavishankar 2dfb66833f Fix unused variable in release build.
Differential Revision: https://reviews.llvm.org/D108963
2021-08-30 19:34:52 -07:00
MaheshRavishankar ba72cfe734 [mlir] Add an interface to allow operations to specify how they can be tiled.
An interface to allow for tiling of operations is introduced. The
tiling of the linalg.pad_tensor operation is modified to use this
interface.

Differential Revision: https://reviews.llvm.org/D108611
2021-08-30 16:31:18 -07:00
Chris Lattner faf1c22408 [Builder] Eliminate the StringRef/StringAttr forms of getSymbolRefAttr.
The StringAttr version doesn't need a context, so we can just use the
existing `SymbolRefAttr::get` form.  The StringRef version isn't preferred
so we want to encourage people to use StringAttr.

There is an additional form of getSymbolRefAttr that takes a (SymbolTrait
implementing) operation.  This should also be moved, but I'll do that as
a separate patch.

Differential Revision: https://reviews.llvm.org/D108922
2021-08-30 16:05:36 -07:00
natashaknk 203d38b234 [mlir][tosa] Small refactor to the functionality of Conv2D and Fully_connected to add the bias at the end of the convolution
Made to adjust for a modification to the tiling algorithm

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D108746
2021-08-30 13:18:43 -07:00
Stella Laurenzo 8e6c55c92c [mlir][python] Extend C/Python API to be usable for CFG construction.
* It is pretty clear that no one has tried this yet since it was both incomplete and broken.
* Fixes a symbol hiding issue keeping even the generic builder from constructing an operation with successors.
* Adds ODS support for successors.
* Adds CAPI `mlirBlockGetParentRegion`, `mlirRegionEqual` + tests (and missing test for `mlirBlockGetParentOperation`).
* Adds Python property: `Block.region`.
* Adds Python methods: `Block.create_before` and `Block.create_after`.
* Adds Python property: `InsertionPoint.block`.
* Adds new blocks.py test to verify a plausible CFG construction case.

Differential Revision: https://reviews.llvm.org/D108898
2021-08-30 08:28:00 -07:00
Chris Lattner 41d4aa7de6 [SymbolRefAttr] Revise SymbolRefAttr to hold a StringAttr.
SymbolRefAttr is fundamentally a base string plus a sequence
of nested references.  Instead of storing the string data as
a copied StringRef, store it as an already-uniqued StringAttr.

This makes a lot of things simpler and more efficient because:
1) references to the symbol are already stored as StringAttr's:
   there is no need to copy the string data into MLIRContext
   multiple times.
2) This allows pointer comparisons instead of string
   comparisons (or redundant uniquing) within SymbolTable.cpp.
3) This allows SymbolTable to hold a DenseMap instead of a
   StringMap (which again copies the string data and slows
   lookup).

This is a moderately invasive patch, so I kept a lot of
compatibility APIs around.  It would be nice to explore changing
getName() to return a StringAttr for example (right now you have
to use getNameAttr()), and eliminate things like the StringRef
version of getSymbol.

Differential Revision: https://reviews.llvm.org/D108899
2021-08-29 21:54:47 -07:00
Matthias Springer d18ffd61d4 [mlir][SCF] Canonicalize dim(x) where x is an iter_arg
* Add `DimOfIterArgFolder`.
* Move existing cross-dialect canonicalization patterns to `LoopCanonicalization.cpp`.
* Rename `SCFAffineOpCanonicalization` pass to `SCFForLoopCanonicalization`.
* Expand documentation of scf.for: The type of loop-carried variables may not change with iterations. (Not even the dynamic type.)

Differential Revision: https://reviews.llvm.org/D108806
2021-08-30 01:39:56 +00:00
Matthias Springer eedc997b7d [mlir][Analysis] Add batched version of FlatAffineConstraints::addId
* Add batched version of all `addId` variants, so that multiple IDs can be added at a time.
* Rename `addId` and variants to `insertId` and `appendId`. Most external users call `appendId`. Splitting `addId` into two functions also makes it possible to provide batched versions for both. (Otherwise, the overloads are ambiguous when calling `addId`.)

Differential Revision: https://reviews.llvm.org/D108532
2021-08-30 00:56:44 +00:00
Lei Zhang a5621e26db [mlir][spirv] Use type dyn_cast when scanning spv.GlobalVariable
This avoids crashes when there are spv.GlobalVariable without
pointer type.
2021-08-29 12:01:19 -04:00
Aart Bik b9f87e24f2 [mlir] add missing include, fix broken build
Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D108873
2021-08-28 09:36:38 -07:00
Markus Böck 0235e3c7a6 [mlir][NFC] Fully qualify default value of Attributes `getStorageType()` in files generated by mlir-tblgen 2021-08-28 15:37:56 +02:00
Uday Bondhugula 4edc9e2acf [MLIR][GPU] Drop mgpuMemHostRegisterMemRef's dependence on LLVM Support
Drop mgpuMemHostRegisterMemRef's dependence on LLVM Support. This
method is the only one in the CUDA runtime wrappers library that creates
a dependence on libLLVMSupport due to its use of SmallVector and
ArrayRef. The code can be written just as easily and compactly without those ADTs.
The dependence on LLVMSupport adds a significant amount of additional
complexity for external things that want to link this library in (both
statically or as a shared object) since libLLVMSupport includes numerous
other objects that are sensitive to C++ compiler version and ABI.

Differential Revision: https://reviews.llvm.org/D108684
2021-08-28 11:37:55 +05:30
Aart Bik 0a7b8cc5dd [mlir][sparse] fully implement sparse tensor to sparse tensor conversions
with rigorous integration test

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D108721
2021-08-27 15:08:18 -07:00
Butygin 1e35a7690d [mlir][spirv] Initial support for 64 bit index type and builtins
Differential Revision: https://reviews.llvm.org/D108516
2021-08-27 01:38:53 +03:00
Rob Suderman 90478251c7 [mlir][tosa] Tosa reverse to linalg supporting dynamic shapes
Needed to switch to extract to support tosa.reverse using dynamic shapes.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D108744
2021-08-26 13:23:59 -07:00
Rob Suderman 0600bb4d18 [mlir][tosa] Elementwise operation dynamic shape support
Added dynamic shape support for elementwise operations. This assumes equal
sizes (broadcasting 1-length dynamic is problematic).

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D108730
2021-08-26 11:18:58 -07:00
Aart Bik 6b26857dbf [mlir][sparse] add asCOO() functionality to sparse tensor object
This prepares general sparse to sparse conversions. The code that
needs to be generated using this new feature is now simply:

(1) coo = sparse_tensor_1->asCOO();          // source format1
(2) sparse_tensor_2 = newSparseTensor(coo);  // destination format2

By using COO as an intermediate, we can do *all* conversions without
having to implement the full O(N^2) conversion matrix. Note that we
can always improve particular conversions individually if a faster
solution is required.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D108681
2021-08-25 21:50:39 -07:00
River Riddle c8d9e1ce43 [mlir][AttrTypeGen] Add support for specifying an "accessor" type of a parameter
This allows for using a different type when accessing a parameter than the
one used for storage. This allows for returning parameters by reference,
enables using more optimized/convenient reference results, and more.

Differential Revision: https://reviews.llvm.org/D108593
2021-08-25 09:27:36 +00:00
River Riddle 9658b061dd [mlir] Update DialectAsmParser::parseString to use std::string instead of StringRef
This allows for parsing strings that have escape sequences, which require constructing
a string (as they can't be represented by looking at the Token contents directly).

Differential Revision: https://reviews.llvm.org/D108589
2021-08-25 09:27:35 +00:00
River Riddle aea3026ea7 [mlir] Move the Operation use iteration utilities to ResultRange
This allows for iterating and interacting with the uses of a specific subset of
results as opposed to just the full range.

Differential Revision: https://reviews.llvm.org/D108586
2021-08-25 09:27:35 +00:00
Tres Popp 868bd9938d [mlir] Add assertion in NamedAttrList to prevent adding null attributes
Differential Revision: https://reviews.llvm.org/D108570
2021-08-25 11:06:53 +02:00
Rob Suderman 5541a05d6a [mlir][tosa] Quantized tosa.avg_pool2d lowering to linalg
Includes the quantized version of average pool lowering to linalg dialect.
This includes a lit test for the transform. It is not 100% correct as the
multiplier / shift should be done in i64 however this is negligable rounding
difference.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D108676
2021-08-24 18:54:23 -07:00
Rob Suderman 4ef1770abd [mlir][tosa] Table did not apply offset before extract on i8 input
Lowering to table was incorrect, as it did not apply a 128 offset before
extracting the value from the table. Fixed this and corrected the tensor length
of the input table.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D108436
2021-08-24 18:52:33 -07:00
Matthias Springer a9cff97f94 [mlir][SCF] Generalize AffineMinSCFCanonicalization to min/max ops
* Add support for affine.max ops to SCF loop peeling pattern.
* Add support for affine.max ops to `AffineMinSCFCanonicalizationPattern`.
* Rename `AffineMinSCFCanonicalizationPattern` to `AffineOpSCFCanonicalizationPattern`.
* Rename `AffineMinSCFCanonicalization` pass to `SCFAffineOpCanonicalization`.

Differential Revision: https://reviews.llvm.org/D108009
2021-08-25 10:40:34 +09:00
wren romano 90e0c657b7 [mlir][sparse] Correcting the use of emplace_back
The emplace commands are variadic and should take all the constructor arguments directly, since they implicitly call the constructor themselves in order to avoid the cost of constructing and then moving/copying temporaries.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D108670
2021-08-24 18:32:13 -07:00
Rob Suderman a7bf93807b [mlir][tosa] Fix conv/depthwise conv padding for quantized values
When padding quantized operations, the padding needs to equal the zero point
of the input value. Corrected the pass to change the padding value if quantized.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D108440
2021-08-24 18:13:22 -07:00
Matthias Springer 2de2dbef2a [mlir][linalg] Replace AffineMinSCFCanonicalizationPattern with SCF reimplementation
Use the new canonicalization pattern in the SCF dialect.

Differential Revision: https://reviews.llvm.org/D107732
2021-08-25 08:52:56 +09:00
Matthias Springer 98aa694d0d [mlir][scf] Add general affine.min canonicalization pattern
This canonicalization simplifies affine.min operations inside "for loop"-like operations (e.g., scf.for and scf.parallel) based on two invariants:
* iv >= lb
* iv < lb + step * ((ub - lb - 1) floorDiv step) + 1

This commit adds a new pass `canonicalize-scf-affine-min` (instead of being a canonicalization pattern) to avoid dependencies between the Affine dialect and the SCF dialect.
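
A hypothetical sketch of what the new pass can simplify:

```
scf.for %iv = %c0 to %c12 step %c4 {
  // %iv is always in {0, 4, 8}, so min(4, 12 - %iv) always evaluates to 4
  %m = affine.min affine_map<(d0) -> (4, 12 - d0)>(%iv)
  // ... %m can be replaced by the constant 4 ...
}
```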

Differential Revision: https://reviews.llvm.org/D107731
2021-08-25 07:32:30 +09:00
Tyler Augustine d25e91d7f6 Support alias.scope and noalias metadata
Introduces new Ops to represent 1. alias.scope metadata in LLVM, and 2. domains for these scopes. These correspond to the metadata described in https://llvm.org/docs/LangRef.html#noalias-and-alias-scope-metadata. Lists of scopes are modeled the same way as access groups - as an ArrayAttr on the Op (added in https://reviews.llvm.org/D97944).

Lowering 'noalias' attributes on function parameters is already supported. However, lowering `noalias` metadata on individual Ops is not, which is added in this change. LLVM uses the same keyword for these, but this change introduces a separate attribute name 'noalias_scopes' to represent this distinct concept.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D107870
2021-08-24 20:42:59 +02:00
Aart Bik fda176892e [mlir][sparse] use new permutation utility to avoid codedup
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D108636
2021-08-24 08:48:17 -07:00
Aart Bik a643bd3189 [mlir] add permutation utility
I found myself typing this code several times at different places
by now, so time to make this a general utility instead. Given
a permutation, it returns the permuted position of the input,
for example (i,j,k) -> (k,i,j) yields position 1 for input 0.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D108347
2021-08-24 08:07:40 -07:00
Matthias Springer ebf35370ff [mlir][tensor] Insert explicit tensor.cast ops for insert_slice src
If additional static type information can be deduced from an insert_slice's size operands, insert an explicit cast of the op's source operand.

This enables other canonicalization patterns that match tensor_cast ops, such as `ForOpTensorCastFolder` in SCF.
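
A hypothetical sketch of the rewrite:

```
// before
%0 = tensor.insert_slice %src into %dst[0, 0] [4, 4] [1, 1] : tensor<?x?xf32> into tensor<8x8xf32>
// after: the static sizes let us cast the source to a static type first
%c = tensor.cast %src : tensor<?x?xf32> to tensor<4x4xf32>
%0 = tensor.insert_slice %c into %dst[0, 0] [4, 4] [1, 1] : tensor<4x4xf32> into tensor<8x8xf32>
```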

Differential Revision: https://reviews.llvm.org/D108617
2021-08-24 19:45:04 +09:00
Matthias Springer 0c36082963 [mlir][SCF] Use symbols in loop peeling rewrite
Use symbols in the affine map instead of dims. Dims should not be divided.

Differential Revision: https://reviews.llvm.org/D108431
2021-08-24 19:39:19 +09:00
MaheshRavishankar b546f4347b [mlir][Linalg] Allow controlling fusion of linalg.generic -> linalg.tensor_expand_shape.
Differential Revision: https://reviews.llvm.org/D108565
2021-08-23 16:28:10 -07:00
Aart Bik 236a90802d [mlir][sparse] replace support lib conversion with actual MLIR codegen
Rationale:
Passing in a pointer to the memref data in order to implement the
dense to sparse conversion was a bit too low-level. This revision
improves upon that approach with a cleaner solution of generating
a loop nest in MLIR code itself that prepares the COO object before
passing it to our "swiss army knife" setup.  This is much more
intuitive *and* now also allows for dynamic shapes.
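Schematically (a sketch only; the runtime interface and value names are elided), the generated code is a plain loop nest over the dense input, where %d0 and %d1 hold the (possibly dynamic) dimension sizes:
```
scf.for %i = %c0 to %d0 step %c1 {
  scf.for %j = %c0 to %d1 step %c1 {
    %v = tensor.extract %dense[%i, %j] : tensor<?x?xf64>
    // append (%i, %j, %v) to the COO object that is then handed to the
    // existing "swiss army knife" runtime entry point
  }
}
```
Because the loop bounds are ordinary SSA values, dynamic shapes fall out naturally.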

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D108491
2021-08-23 14:26:05 -07:00
River Riddle 4e103a12d9 [mlir] Add support for VariadicOfVariadic operands
This revision adds native ODS support for VariadicOfVariadic operand
groups. An example of this is the SwitchOp, which has a variadic number
of nested operand ranges for each of the case statements, where the
number of case statements is variadic. Builtin ODS support allows for
generating proper accessors for the nested operand ranges, builder
support, and declarative format support. VariadicOfVariadic operands
are supported by providing a segment attribute that stores the sizes of
the operand groups, similar to the AttrSizedOperandSegments trait
(but with a user-defined attribute name).

`build` methods for VariadicOfVariadic operands expect inputs of the
form `ArrayRef<ValueRange>`. Accessors for the variadic ranges
return a new `OperandRangeRange` type, which represents a
contiguous range of `OperandRange`. In the declarative assembly
format, VariadicOfVariadic operands and types are by default
formatted as a comma delimited list of value lists:
`(<value>, <value>), (), (<value>)`.

Differential Revision: https://reviews.llvm.org/D107774
2021-08-23 20:32:31 +00:00
MaheshRavishankar 4aeeb91a92 [mlir][Linalg] Allow all build methods of Structured ops to specify additional attributes.
Differential Revision: https://reviews.llvm.org/D108338
2021-08-23 13:06:34 -07:00
River Riddle da12d88b1c [mlir][NFC] Add inlineRegion overloads that take a block iterator insert position
This allows for inlining into an empty block or at the beginning of a block. NFC, as the existing implementations now forward to this overload.

Differential Revision: https://reviews.llvm.org/D108572
2021-08-23 19:49:53 +00:00
River Riddle e4635e6328 [mlir][FoldUtils] Ensure the created constant dominates the replaced op
This revision fixes a bug where an operation would get replaced with
a pre-existing constant that didn't dominate it. This can occur when
a pattern inserts operations to be folded at the beginning of the
block where constants are inserted. The fix moves the existing
constant before the replaced operation in such cases. This is fine
because, if the constant didn't already exist, a new one would have
been inserted before this operation anyway.

Differential Revision: https://reviews.llvm.org/D108498
2021-08-23 18:48:24 +00:00
Matthias Springer bc194a5bb5 [mlir][SCF] Do not peel loops inside partial iterations
Do not apply loop peeling to loops that are contained in the partial iteration of an already peeled loop. This is to avoid code explosion when dealing with large loop nests. Can be controlled with a new pass option `skip-partial`.

Differential Revision: https://reviews.llvm.org/D108542
2021-08-23 21:35:46 +09:00
William S. Moses 973cb2c326 [MLIR][OMP] Ensure nested scf.parallel execute all iterations
Presently, the lowering of nested scf.parallel loops to OpenMP creates one omp.parallel region with two (nested) OpenMP worksharing loops inside. When lowered to LLVM and executed, this produces incorrect results. The reason is as follows:

An OpenMP parallel region runs the code with whatever number of threads is available to OpenMP. Within a parallel region, a worksharing loop divides the total number of requested iterations by the available number of threads and distributes them accordingly. For a single ws loop in a parallel region, this works as intended.

Now consider nested ws loops as follows:

omp.parallel {
   A: omp.ws %i = 0...10 {
      B: omp.ws %j = 0...10 {
          code(%i, %j)
      }
   }
}

Suppose we ran this on two threads. The first workshare loop would decide to execute iterations 0, 1, 2, 3, 4 on thread 0, and iterations 5, 6, 7, 8, 9 on thread 1. The second workshare loop would decide the same for its iterations. This means thread 0 would execute i \in [0, 5) and j \in [0, 5), and thread 1 would execute i \in [5, 10) and j \in [5, 10). Consequently, the iterations with i \in [5, 10), j \in [0, 5) and those with i \in [0, 5), j \in [5, 10) never get executed, which is clearly wrong.

This permits two options for a remedy:
1) Change the semantics of omp.wsloop to be distinct from that of the OpenMP runtime call or, equivalently, #pragma omp for. This could then allow some lowering transformation to remedy the aforementioned issue. I don't think this is desirable from an abstraction standpoint.
2) When lowering an scf.parallel always surround the wsloop with a new parallel region (thereby causing the innermost wsloop to use the number of threads available only to it).

This PR implements the latter change.
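In the same schematic notation as above, after this change the lowering produces:

omp.parallel {
   A: omp.ws %i = 0...10 {
      omp.parallel {
         B: omp.ws %j = 0...10 {
             code(%i, %j)
         }
      }
   }
}

so the inner workshare loop distributes its full iteration range over the threads of its own (inner) parallel region, and every (i, j) pair is executed.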

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D108426
2021-08-20 19:06:28 -04:00
Rob Suderman 871c812483 [mlir][linalg] Finish refactor of TC ops to YAML
Multiple operations were still defined as TC ops that had equivalent versions
as YAML operations. Reducing to a single compilation path guarantees that
frontends can lower to their equivalent operations without missing the
optimized fastpath.

Some operations are maintained purely for testing purposes (mainly conv{1,2,3}D,
as they are included as the sole tests in the vectorization transforms).

Differential Revision: https://reviews.llvm.org/D108169
2021-08-20 12:35:04 -07:00
Vladislav Vinogradov 9775c0c9f0 [mlir] Fix ControlFlowInterfaces implementation for Async dialect
* Add `RegionBranchTerminatorOpInterface` to `YieldOp`.
* Implement `getSuccessorEntryOperands` in `ExecuteOp`.
* Fix `getSuccessorRegions` implementation in `ExecuteOp`.

Reviewed By: ezhulenev

Differential Revision: https://reviews.llvm.org/D108373
2021-08-20 12:14:45 +03:00
Rob Suderman 3205ee7e81 [mlir][tosa] Support UInt8 inputs and outputs for tosa.rescale
The tosa.rescale op can involve uint8 types. Added support for these types
using an unrealized conversion cast. Ideally a bitcast would be used
instead; however, bitcast does not support unsigned integers.
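A sketch of the idea (types are illustrative): the unsigned values are reinterpreted as signless ones around the rescale arithmetic.
```
%in  = builtin.unrealized_conversion_cast %arg0 : tensor<1x64x64x3xui8> to tensor<1x64x64x3xi8>
// ... rescale arithmetic on the signless values ...
%out = builtin.unrealized_conversion_cast %res : tensor<1x64x64x3xi8> to tensor<1x64x64x3xui8>
```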

Differential Revision: https://reviews.llvm.org/D108427
2021-08-19 18:58:44 -07:00
Morten Borup Petersen 6c1436a9b0 [MLIR][SCF] Parenthesize multiple return types in scf.execute_region asm op
Previously, ExecuteRegionOps with multiple return values would fail a round-trip test due to missing parentheses around the types.
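A sketch of the now round-trippable form (assuming %a : i32 and %b : i64 are defined in the enclosing region):
```
%res:2 = scf.execute_region -> (i32, i64) {
  scf.yield %a, %b : i32, i64
}
```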

Differential Revision: https://reviews.llvm.org/D108402
2021-08-19 21:31:51 +01:00
MaheshRavishankar 16ffb283c5 Revert "[mlir][Linalg] Allow all build methods of Structured ops to specify additional attributes."
This reverts commit 95ddc8341a.

Differential Revision: https://reviews.llvm.org/D108396
2021-08-19 11:53:41 -07:00
MaheshRavishankar 95ddc8341a [mlir][Linalg] Allow all build methods of Structured ops to specify additional attributes.
Differential Revision: https://reviews.llvm.org/D108338
2021-08-19 11:14:35 -07:00
Matthias Springer 76a1861816 [mlir][SparseTensor] Split scf.for loop into masked/unmasked parts
Apply the "for loop peeling" pattern from SCF dialect transforms. This pattern splits scf.for loops into full and partial iterations. In the full iteration, all masked loads/stores are canonicalized to unmasked loads/stores.

Differential Revision: https://reviews.llvm.org/D107733
2021-08-19 21:53:11 +09:00
Matthias Springer 8e8b70aa84 [mlir][scf] Simplify affine.min ops after loop peeling
Simplify affine.min ops, enabling various other canonicalizations inside the peeled loop body.

affine.min ops such as:
```
map = affine_map<(d0)[s0, s1] -> (s0, -d0 + s1)>
%r = affine.min #map(%iv)[%step, %ub]
```
are rewritten into (in the case of the peeled loop):
```
%r = %step
```

To determine how an affine.min op should be rewritten and to prove its correctness, FlatAffineConstraints is utilized.

Differential Revision: https://reviews.llvm.org/D107222
2021-08-19 17:24:53 +09:00