Commit Graph

8809 Commits

Author SHA1 Message Date
Mehdi Amini 5de44d2521 Disable leak check for the MLIR Sparse CPU integration tests (NFC)
See http://llvm.org/pr52046 for tracking.
2021-10-03 03:35:31 +00:00
Mehdi Amini 51b9f0b82a Fix memory leaks in MLIR integration tests for vector dialect (NFC) 2021-10-03 03:28:24 +00:00
Mehdi Amini 2da3facd86 Fix memory leak in MLIR SPIRV ModuleCombiner 2021-10-02 23:55:25 +00:00
Mehdi Amini bac4529b43 Fix/disable more MLIR tests exposing leaks in ASAN builds (NFC) 2021-10-02 23:53:02 +00:00
Mehdi Amini 4b28638bcc Fix multiple memory leaks in mlir-cpu-runner tests (NFC) 2021-10-02 23:16:35 +00:00
Mehdi Amini fe48ecb047 Fix memory leak in mlir-cpu-runner/sgemm_naive_codegen.mlir (NFC) 2021-10-02 23:07:49 +00:00
Mehdi Amini 9312cb6f20 Fix Undefined Behavior in MLIR Diagnostic: don't call memcpy with a nullptr source
This happens when streaming an empty Twine as part of a diagnostic.

Differential Revision: https://reviews.llvm.org/D111002
2021-10-02 21:32:20 +00:00
Mehdi Amini 57d9adefa0 Fix memory leaks in MLIR unit-tests (NFC) 2021-10-02 21:31:46 +00:00
Mehdi Amini 107198fe7d Fix memory leaks in mlir/unittests/MLIRTableGenTests
Trying to get MLIR ASAN-clean.
2021-10-02 21:06:02 +00:00
Mehdi Amini db79f4a2e9 Free memory leak on duplicate interface registration
I guess this is why we should use unique_ptr as much as possible.
Also fix the InterfaceAttachmentTest.cpp test.

Differential Revision: https://reviews.llvm.org/D110984
2021-10-02 16:41:28 +00:00
Mehdi Amini 237d18a61a Fix memory leaks in mlir/test/CAPI/ir.c 2021-10-02 04:45:40 +00:00
Mehdi Amini a1d1c31746 Add a `check-mlir-build-only` build target that only builds the dependencies of the `check-mlir` test target (NFC) 2021-10-02 04:06:17 +00:00
wren romano af7ac1d95b [mlir][sparse] Sharing calls to adaptor.getOperands()[0]
This is preliminary work towards D110790. Depends On D110883.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110884
2021-10-01 14:20:31 -07:00
wren romano 14fffda979 [mlir][sparse] Factoring out allocaIndices()
This is preliminary work towards D110790. Depends On D110882.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110883
2021-10-01 14:18:56 -07:00
wren romano ca01034714 [mlir][sparse] Factoring out getZero() and avoiding unnecessary Type params
This is preliminary work towards D110790

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110882
2021-10-01 14:17:53 -07:00
Daniel Resnick 782a97a977 [mlir][capi] Add TypeID to MLIR C-API
Exposes mlir::TypeID to the C API as MlirTypeID along with various accessors
and helper functions.

Differential Revision: https://reviews.llvm.org/D110897
2021-10-01 14:21:18 -06:00
Lei Zhang a3f425946d [mlir][linalg] Include InitTensorOp in tiling canonicalization
Tiling can create dim ops and those dim ops can take `InitTensorOp`
as input. Including it in the tiling canonicalization patterns
allows us to fold those dim ops away.

Also sorted the existing ops along the way.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D110876
2021-10-01 14:13:19 -04:00
Tobias Gysi bf28849745 [mlir][linalg] Retire PoolingMaxOp/PoolingMinOp/PoolingSumOp.
The pooling ops are among the last remaining hard-coded Linalg operations that have no region attached. They became obsolete due to the OpDSL pooling operations. Removing them allows us to delete specialized code and tests that are not needed for the OpDSL counterparts that rely on the standard code paths.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110909
2021-10-01 13:51:56 +00:00
Uday Bondhugula 08b63db8bb [MLIR][GPU] Add GPU launch op support for dynamic shared memory
Add support for dynamic shared memory for GPU launch ops: add an
optional operand to gpu.launch and gpu.launch_func ops to specify the
amount of "dynamic" shared memory to use. Update lowerings to connect
this operand to the GPU runtime.

Differential Revision: https://reviews.llvm.org/D110800
2021-10-01 16:46:07 +05:30
Alexander Belyaev 693c61b2e0 [mlir] Enable loop peeling for "reduction" dimensions of tiled_loop.
Differential Revision: https://reviews.llvm.org/D110919
2021-10-01 13:07:57 +02:00
Nicolas Vasilache b016bd1230 [mlir][Linalg] Refactor comprehensive bufferize for external uses - NFC
This revision exposes some minimal functionality to allow comprehensive
bufferization to interop with external projects.

Differential Revision: https://reviews.llvm.org/D110875
2021-09-30 20:21:08 +00:00
wren romano 218954865e [mlir][sparse] Correcting a few typos
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110773
2021-09-30 11:42:46 -07:00
Lei Zhang cb2e651800 [mlir][linalg] Fix incorrect bound calculation for tiling conv
For convolution, the input window dimension's access affine map
is of the form `(d0 * s0 + d1)`, where `d0`/`d1` is the output/
filter window dimension, and `s0` is the stride.

When tiling, https://reviews.llvm.org/D109267 changed how the
way dimensions are acquired. Instead of directly querying using
`*.dim` ops on the original convolution op, we now get it by
applying the access affine map to the loop upper bounds. This
is fine for dimensions having single-dimension affine maps,
like matmul, but not for convolution input. It will cause
incorrect computation and out-of-bound accesses. As a concrete example, say
we have a 1x225x225x3 (NHWC) input, a 3x3x3x32 (HWCF) filter, and
a 1x112x112x32 (NHWC) output with stride 2; then (112 * 2 + 3) would be
227, which is different from the correct input window dimension
size of 225.

Instead, we should first calculate the max indices for each loop,
apply the affine map to them, and then add one to get the
dimension size. Note this makes no difference for matmul-like
ops, given that they effectively compute `d0 - 1 + 1`.
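As a worked restatement of the numbers above (output window 112, filter window 3, stride 2), the difference between applying the map to the upper bounds versus the max indices is:

```latex
% old: apply the access map to the loop upper bounds
112 \cdot 2 + 3 = 227 \neq 225
% new: apply the access map to the max indices, then add one
(112 - 1) \cdot 2 + (3 - 1) + 1 = 224 + 1 = 225
```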

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110849
2021-09-30 13:50:57 -04:00
Stella Laurenzo 267bb194f3 [mlir] Remove old "tc" linalg ods generator.
* This could have been removed some time ago as it only had one op left in it, which is redundant with the new approach.
* `matmul_i8_i8_i32` (the remaining op) can be trivially replaced by `matmul`, which natively supports mixed precision.

Differential Revision: https://reviews.llvm.org/D110792
2021-09-30 16:30:06 +00:00
Alex Zinenko 93a6b49d38 [mlir][python] provide bindings for ops from the sparse_tensor dialect
Previously, the dialect was exposed for linking and pass management purposes,
but we did not generate op classes for it. Generate them.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110819
2021-09-30 15:53:16 +02:00
Adrian Kuegel 68e56bd320 [mlir] Remove unused namespace alias. 2021-09-30 13:50:31 +02:00
Alex Zinenko 8c1b785ce1 [mlir][python] provide bindings for the SCF dialect
This is an important core dialect that has not been exposed previously. Set up
the default bindings generation and provide a nicer wrapper for the `for` loop
with access to the loop configuration and body.

Depends On D110758

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D110759
2021-09-30 09:38:15 +02:00
Alex Zinenko afeda4b9ed [mlir][python] provide access to function argument/result attributes
Without this change, these attributes can only be accessed through the generic
operation attribute dictionary provided the caller knows the special operation
attribute names used for this purpose. Add some Python wrapping to support this
use case.

Also provide access to function arguments usable inside the function along with
a couple of quality-of-life improvements in using block arguments (function
arguments being the arguments of its entry block).

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D110758
2021-09-30 09:38:13 +02:00
Chris Lattner d104db531e AsmParser::getContext() - there can be only one. This should unbreak the build. 2021-09-29 22:23:03 -07:00
Chris Lattner fb093c8314 [ODS/AsmParser] Don't pass MLIRContext with DialectAsmParser.
The former is redundant because the latter carries it as part of
its builder.  Add a getContext() helper method to DialectAsmParser
to make this more convenient, and stop passing the context around
explicitly.  This simplifies ODS generated parser hooks for attrs
and types.

This resolves PR51985

Recommit 4b32f8bac4 after fixing a dependency.

Differential Revision: https://reviews.llvm.org/D110796
2021-09-30 05:10:28 +00:00
Chris Lattner 33f4315324 [AsmParser] move AsmParser::getContext to IR library.
This is (perhaps unintuitively) where the other AsmParser method
implementations are, which means that dialects don't generally need
to depend on MLIRParser directly.  This should fix a build failure
building .so files on the mlir-nvidia builder.
2021-09-29 22:07:00 -07:00
Mehdi Amini 3310e0020c Revert "[ODS/AsmParser] Don't pass MLIRContext with DialectAsmParser."
This reverts commit 4b32f8bac4.

Seems like the build is broken with -DBUILD_SHARED_LIBS=ON
2021-09-30 05:01:17 +00:00
Chris Lattner 4b32f8bac4 [ODS/AsmParser] Don't pass MLIRContext with DialectAsmParser.
The former is redundant because the latter carries it as part of
its builder.  Add a getContext() helper method to DialectAsmParser
to make this more convenient, and stop passing the context around
explicitly.  This simplifies ODS generated parser hooks for attrs
and types.

This resolves PR51985

Differential Revision: https://reviews.llvm.org/D110796
2021-09-29 21:36:05 -07:00
Matthias Springer 27451a05ed [mlir][vector] Fold transfer ops and tensor.extract/insert_slice.
* Fold vector.transfer_read and tensor.extract_slice.
* Fold vector.transfer_write and tensor.insert_slice.

Differential Revision: https://reviews.llvm.org/D110627
2021-09-30 09:28:00 +09:00
Rob Suderman 826d3eaae7 [mlir][tosa] Ranked check for transpose was wrong.
Should have verified the perm length and input rank were the same before
inferring shape. Caused a crash with invalid IR.

Differential Revision: https://reviews.llvm.org/D110674
2021-09-29 15:14:42 -07:00
Aart Bik 7f1cb43d60 [mlir][sparse] simplify negi code generation with subi
The lack of a negi op leaked details from the merger class into the codegen part.
Also, the special case for vector code was not needed; the type can be used directly!

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D110677
2021-09-29 10:00:06 -07:00
Marcel Koester 09cd4a71ed Introduced AllocationOpInterface to create deallocation operations on-the-fly that are compatible with the allocation operation implementing this interface.
Added interface implementations for AllocOp and CloneOp defined in the MemRef dialect.
Adapted the BufferDeallocation pass to be compatible with the interface introduced in this CL.

Differential Revision: https://reviews.llvm.org/D109350
2021-09-29 15:54:21 +02:00
Nicolas Vasilache 92ea624a13 [mlir][Linalg] Rewrite CodegenStrategy to populate a pass pipeline.
This revision retires a good portion of the complexity of the codegen strategy and puts the logic behind pass logic.

Differential revision: https://reviews.llvm.org/D110678
2021-09-29 13:35:45 +00:00
Sean Silva 204d301bb1 [mlir][Python] Fix lifetime of ExecutionEngine runtime functions.
We weren't retaining the ctypes closures that the ExecutionEngine was
calling back into, leading to mysterious errors.

Open to feedback about how to test this. And an extra pair of eyes to
make sure I caught all the places that need to be aware of this.

Differential Revision: https://reviews.llvm.org/D110661
2021-09-28 22:32:20 +00:00
Rob Suderman 4f38f0640d [mlir][tosa] Add i32 to supported quantized type
Quantized int type should include I32 types, as it's the output of a quantized
convolution or matmul operation.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D110651
2021-09-28 15:04:39 -07:00
bakhtiyar bdde959533 Remove unnecessary async group creates and awaits.
Reviewed By: ezhulenev

Differential Revision: https://reviews.llvm.org/D110605
2021-09-28 14:52:08 -07:00
bakhtiyar 55dfab39a2 Rename target block size to min task size for clarity.
Reviewed By: ezhulenev

Differential Revision: https://reviews.llvm.org/D110604
2021-09-28 14:51:55 -07:00
Amy Zhuang 7ab14b8886 [mlir] Unroll-and-jam loops with iter_args.
Unroll-and-jam currently doesn't work when the loop being unroll-and-jammed
or any of its inner loops has iter_args. This patch modifies the
unroll-and-jam utility to support loops with iter_args.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D110085
2021-09-28 14:13:27 -07:00
thomasraoux b12e4c17e0 [mlir] Fix bug in FoldSubview with rank reducing subview
Fix how we calculate the new permutation map of the transfer ops.

Differential Revision: https://reviews.llvm.org/D110638
2021-09-28 13:18:29 -07:00
Alexander Belyaev 9fb57c8c1d [mlir] Add min/max operations to Standard.
[RFC: Add min/max ops](https://llvm.discourse.group/t/rfc-add-min-max-operations/4353)

I was following the naming style for Arith dialect in
https://reviews.llvm.org/D110200,
i.e. similar to DivSIOp and DivUIOp I defined MaxSIOp, MaxUIOp.

When Arith PR is landed, I will migrate these ops as well.

Differential Revision: https://reviews.llvm.org/D110540
2021-09-28 09:40:22 +02:00
Tobias Gysi d20d0e145d [mlir][linalg] Finer-grained padding control.
Adapt the signature of the PaddingValueComputationFunction callback to either return the padding value or failure to signal padding is not desired.
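A possible shape of the updated callback type, inferred from the description above rather than copied from the patch:

```c++
// Assumed (illustrative) signature: returning failure() signals that padding
// of this operand is not desired; returning a Value provides the padding value.
using PaddingValueComputationFunction =
    std::function<FailureOr<Value>(OpBuilder &b, OpOperand &opOperand)>;
```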

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110572
2021-09-27 19:21:37 +00:00
Aart Bik 06e2a0684e [mlir][sparse] sampled matrix multiplication fusion test
This integration test runs a fused and non-fused version of
sampled matrix multiplication. Both should eventually have the
same performance!

NOTE: relies on pending tensor.init fix!

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D110444
2021-09-27 11:50:49 -07:00
Aart Bik ec97a205c3 [mlir][sparse] preserve zero-initialization for materializing buffers
This revision makes sure that when the output buffer materializes locally
(in contrast with the passing in of output tensors either in-place or not
in-place), the zero initialization assumption is preserved. This also adds
a bit more documentation on our sparse kernel assumption (viz. TACO
assumptions).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D110442
2021-09-27 11:22:05 -07:00
Sumesh Udayakumaran b2af2aeea6 [mlir] Mode for explicitly controlling the fusion kind
Add a new mode option that allows running either the default fusion kind that happens today or one of producer-consumer or sibling fusion. This will also help minimize the compile time of the fusion tests.

Reviewed By: bondhugula, dcaballe

Differential Revision: https://reviews.llvm.org/D110102
2021-09-27 20:37:42 +03:00
William S. Moses 6dd5b1e33e [MLIR][LLVM] Add error if using incorrect attribute type for specifying LLVM linkage
Address post-commit review in https://reviews.llvm.org/D108524 to add appropriate diagnostics.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D110566
2021-09-27 13:24:05 -04:00
Bixia Zheng fbd5821c6f Implement the conversion from sparse constant to sparse tensors.
The sparse constant provides a constant tensor in coordinate format. We first split the sparse constant into a constant tensor for indices and a constant tensor for values. We then generate a loop to fill a sparse tensor in coordinate format using the tensors for the indices and the values. Finally, we convert the sparse tensor in coordinate format to the destination sparse tensor format.

Add tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110373
2021-09-27 09:47:29 -07:00
Eugene Zhulenev 92db09cde0 [mlir] AsyncRuntime: use int64_t for ref counting operations
Workaround for SystemZ ABI problem: https://bugs.llvm.org/show_bug.cgi?id=51898

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D110550
2021-09-27 07:55:01 -07:00
Tobias Gysi e158b5634a [mlir][linalg] Make fusion on tensor rewriter friendly (NFC).
Let the calling pass or pattern replace the uses of the original root operation. Internally, the tileAndFuse still replaces uses and updates operands but only of newly created operations.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110169
2021-09-27 11:28:25 +00:00
Nicolas Vasilache 1b49a72de9 [mlir] Factor out constraint set creation from hoist padding.
This revision adds a

```
FlatAffineValueConstraints(ValueRange ivs, ValueRange lbs, ValueRange ubs)
```

method and use it in hoist padding.

Differential Revision: https://reviews.llvm.org/D110427
2021-09-27 10:11:35 +00:00
Nicolas Vasilache b74493ecea [mlir][Linalg] Refactor padding hoisting - NFC
This revision extracts padding hoisting in a new file and cleans it up in prevision of future improvements and extensions.

Differential Revision: https://reviews.llvm.org/D110414
2021-09-27 09:50:31 +00:00
Matthias Springer ffdf0a370d [mlir][vector] Fix bug in vector-transfer-full-partial-split
When splitting with linalg.copy, we cannot write into the destination alloc directly. Instead, write into a subview of the alloc.

Differential Revision: https://reviews.llvm.org/D110512
2021-09-27 18:12:17 +09:00
Mehdi Amini 9c2cd6e7c8 Fix clang-tidy warning "modernize-use-nullptr" in MLIR VulkanRuntime (NFC) 2021-09-26 22:06:00 +00:00
Mehdi Amini b3891f28a3 Fix ClangTidyLegacy warning: "'virtual' is redundant since the function is already declared 'final' " (NFC) 2021-09-26 22:02:23 +00:00
Mehdi Amini c3aed0d395 MLIR can't support -Bsymbolic link option, fail at CMake time with a helpful message instead of broken runtime
Differential Revision: https://reviews.llvm.org/D110483
2021-09-26 00:36:31 +00:00
Kunwar Shaanjeet Singh Grover 0f78ece169 [MLIR] Add functionality to remove redundant local variables
This patch adds functionality to FlatAffineConstraints to remove local
variables using equalities. This helps keep the output representation of
FlatAffineConstraints smaller.

This patch is part of a series of patches aimed at generalizing affine
dependence analysis.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D110056
2021-09-25 16:10:43 +05:30
River Riddle ef764eeeb9 [mlir:ElementsAttr] Avoid crash on empty contiguous ranges
We currently, incorrectly, assume that a range always has at least
one element when building a contiguous range. This commit adds
a proper empty check to avoid crashing.

Differential Revision: https://reviews.llvm.org/D110457
2021-09-24 23:48:51 +00:00
Lei Zhang b45476c94c [mlir][tosa] Do not fold transpose with quantized types
For such cases, the type of the constant DenseElementsAttr is
different from the transpose op return type.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D110446
2021-09-24 16:57:55 -04:00
Diego Caballero 2a876a711d [mlir] Create a generic reduction detection utility
This patch introduces a generic reduction detection utility that works
across different dialects. It is mostly a generalization of the reduction
detection algorithm in Affine. The reduction detection logic in Affine,
Linalg and SCFToOpenMP have been replaced with this new generic utility.

The utility takes some basic components of the potential reduction and
returns: 1) the reduced value, and 2) a list with the combiner operations.
The logic to match reductions involving multiple combiner operations is disabled
until we can properly test it.

Reviewed By: ftynse, bondhugula, nicolasvasilache, pifon2a

Differential Revision: https://reviews.llvm.org/D110303
2021-09-24 20:45:59 +00:00
River Riddle 531206310a [mlir:OpAsm] Factor out the common bits of (Op/Dialect)Asm(Parser/Printer)
This has a few benefits:
* It allows for defining parsers/printer code blocks that
  can be shared between operations and attribute/types.
* It removes the weird duplication of generic parser/printer hooks,
  which means that newly added hooks only require touching one class.

Differential Revision: https://reviews.llvm.org/D110375
2021-09-24 20:12:19 +00:00
River Riddle aca9bea199 [mlir:MemRef] Move DmaStartOp/DmaWaitOp to ODS
These are among the last operations still defined explicitly in C++. I've
tried to keep this commit as NFC as possible, but these ops
definitely need a non-NFC cleanup at some point.

Differential Revision: https://reviews.llvm.org/D110440
2021-09-24 19:35:28 +00:00
Lei Zhang e325ebb9c7 [mlir][tosa] Add some transpose folders
* If the input is a constant splat value, we just
  need to reshape it.
* If the input is a general constant with one user,
  we can also constant fold it, without bloating
  the IR.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D110439
2021-09-24 15:25:14 -04:00
River Riddle ef976337f5 [mlir:OpConversion] Remove the remaing usages of the deprecated matchAndRewrite methods
This commit updates the remaining usages of the ArrayRef<Value> based
matchAndRewrite/rewrite methods in favor of the new OpAdaptor
overload.

Differential Revision: https://reviews.llvm.org/D110360
2021-09-24 17:51:41 +00:00
River Riddle b54c724be0 [mlir:OpConversionPattern] Add overloads for taking an Adaptor instead of ArrayRef
This has been a TODO for a long time, and it brings about many advantages (namely nice accessors, and less fragile code). The existing overloads that accept ArrayRef are now treated as deprecated and will be removed in a followup (after a small grace period). Most of the upstream MLIR usages have been fixed by this commit, the rest will be handled in a followup.

Differential Revision: https://reviews.llvm.org/D110293
2021-09-24 17:51:41 +00:00
Alex Zinenko 5988a3b7a0 [mlir] Linalg: ensure tile-and-pad always creates padding as requested
Initially, the padding transformation and the related operation were only used
to guarantee static shapes of subtensors in tiled operations. The
transformation would not insert the padding operation if the shapes were
already static, and the overall code generation would actively remove such
"noop" pads. However, this transformation can be also used to pack data into
smaller tensors and marshall them into faster memory, regardless of the size
mismatches. In context of expert-driven transformation, we should assume that,
if padding is requested, a potentially padded tensor must be always created.
Update the transformation accordingly. To do this, introduce an optional
`packing` attribute to the `pad_tensor` op that serves as an indication that
the padding is an intentional choice (as opposed to side effect of type
normalization) and should be left alone by cleanups.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110425
2021-09-24 18:40:13 +02:00
Arjun P 4a57f5d1e1 [MLIR] PresburgerSet: support divisions in operations
Add support for intersecting, subtracting, complementing and checking equality of sets having divisions.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D110138
2021-09-24 15:36:47 +05:30
Alex Zinenko 3f89e339bb [mlir] add pad_tensor(tensor.cast) -> pad_tensor canonicalizer
This canonicalization pattern complements the tensor.cast(pad_tensor) one in
propagating constant type information when possible. It contributes to the
feasibility of pad hoisting.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110343
2021-09-24 12:03:47 +02:00
Matthias Springer f3f25ffc04 [mlir][linalg] Fix result type in FoldSourceTensorCast
* Do not discard static result type information that cannot be inferred from lower/upper padding.
* Add optional argument to `PadTensorOp::inferResultType` for specifying known result dimensions.

Differential Revision: https://reviews.llvm.org/D110380
2021-09-24 16:47:18 +09:00
Mehdi Amini 83f3c615dd Add missing storageType to AttrDef to ODS
This is only noticeable when using an attribute across dialects I think.
Previously the namespace would be omitted, but it wouldn't matter as
long as the generated code stays within a single namespace.

Differential Revision: https://reviews.llvm.org/D110367
2021-09-24 01:30:29 +00:00
Matthias Springer 2190f8a8b1 [mlir][linalg] Support tile+peel with TiledLoopOp
Only scf.for was supported until now.

Differential Revision: https://reviews.llvm.org/D110220
2021-09-24 10:23:31 +09:00
Matthias Springer 8dc16ba8d2 [mlir][linalg] Merge all tiling passes into a single one.
Passes such as `linalg-tile-to-tiled-loop` are merged into `linalg-tile`.

Differential Revision: https://reviews.llvm.org/D110214
2021-09-24 10:16:46 +09:00
wren romano 221856f5cd [mlir][sparse] Moved a conditional from the RT library to the generated MLIR.
When generating code to add an element to SparseTensorCOO (e.g., when doing dense=>sparse conversion), we used to check for nonzero values on the runtime side, whereas now we generate MLIR code to do that check.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110121
2021-09-23 12:44:17 -07:00
Tharindu Rusira 1f3f144446 [NFC] Wrap MLIR addAffineForOpDomain warning with LLVM_DEBUG
The current warning message in the `addAffineForOpDomain` method of mlir/lib/Analysis/AffineStructures.cpp is printed to stdout/stderr.
This patch redirects the warning through LLVM_DEBUG, following standard LLVM practice.
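Roughly, the change guards the diagnostic as in the sketch below; the DEBUG_TYPE tag and the message text are illustrative, not taken from the file:

```c++
#define DEBUG_TYPE "affine-structures"  // illustrative tag
#include "llvm/Support/Debug.h"

void warnUnhandledCase() {
  // Previously the message went unconditionally to stderr; now it is only
  // printed when debugging is enabled for DEBUG_TYPE.
  LLVM_DEBUG(llvm::dbgs() << "addAffineForOpDomain: unhandled case, skipping\n");
}
```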

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D108340
2021-09-23 13:20:16 +05:30
Diana Picus b7050c791d [mlir] Fix build on Windows on Arm
clang-cl errors out while handling the templated version of tgfmt. This
patch works around the issue by explicitly choosing the non-templated
version of tgfmt, which takes an ArrayRef<std::string>.

More details in this thread:
https://lists.llvm.org/pipermail/cfe-dev/2021-September/068936.html

Thanks @Mehdi Amini for suggesting the fix :)

Differential Revision: https://reviews.llvm.org/D110223
2021-09-23 09:04:28 +02:00
John Demme 47cc166bc0 [MLIR] [Python] Make Attribute and Type hashable
Enables putting types and attributes in sets and in dicts as keys.

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D110301
2021-09-22 19:59:03 -07:00
Aart Bik a924fcc7c3 [mlir][sparse] add sparse kernels test to sparse compiler test suite
This test makes sure kernels map to efficient sparse code, i.e. all
compressed for-loops, no co-iterating while loops.  In addition, this
revision removes the special constant folding inside the sparse
compiler in favor of Mahesh' new generic linalg folding. Thanks!

NOTE: relies on Mahesh fix, which needs to be rebased first

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D110001
2021-09-22 14:56:39 -07:00
Tyler Augustine cd36bab4ca Fix bug for Ops with default valued attributes and successors/variadic regions.
When both a DefaultValuedAttr and a successor or variadic region were specified, this would generate an invalid C++ declaration: the parameter with a default value would be followed by the successor/region parameters, which don't have defaults, which is invalid C++.
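A reduced illustration of the C++ rule involved (the names below are made up, not the actual generated builder):

```c++
// Once a parameter has a default argument, every later parameter must also
// have one, so the first declaration below is ill-formed.
void buildOp(int defaultValuedAttr = 0, void *successor);   // error
void buildOp(void *successor, int defaultValuedAttr = 0);   // OK
```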

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D110205
2021-09-22 21:22:31 +00:00
MaheshRavishankar a40a08ed98 [mlir][Linalg] Teach constant -> generic op fusion to handle scalar constants.
The current folder of constant -> generic op only handles splat
constants. The same logic holds for scalar constants. Teach the
pattern to handle such cases.

Differential Revision: https://reviews.llvm.org/D109982
2021-09-22 13:41:47 -07:00
River Riddle 6e60bb6883 [mlir:DataFlowAnalysis] Reprocess the arguments of already executable edges
This fixes a bug where we discover new information about the arguments of an
already executable edge, but don't visit the arguments. We only visit the arguments, and not the block itself, so this commit shouldn't really affect performance at all.

Fixes PR#51871

Differential Revision: https://reviews.llvm.org/D110197
2021-09-22 20:14:55 +00:00
Yi Zhang b2b63d1b91 Reset operation when canceling root update transaction
We should reset the operation to its original state when canceling the updates.

Reviewed By: rriddle, ftynse

Differential Revision: https://reviews.llvm.org/D110176
2021-09-22 16:05:08 -04:00
Aart Bik 5da21338bc [mlir][sparse] generalize reduction support in sparse compiler
Now not just SUM, but also PRODUCT, AND, OR, XOR. The reductions
MIN and MAX are still to be done (also depends on recognizing
these operations in cmp-select constructs).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D110203
2021-09-22 12:36:46 -07:00
Tobias Gysi e828655313 [mlir][linalg] Fix interchange initialization in fusion on tensors.
If no interchange vector is given initialize it with the identity permutation from 0 to number of loops.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D110249
2021-09-22 17:45:54 +00:00
Aart Bik 56bddf3b1c [mlir][sparse] replace ad-hoc MemRef struct with CRunnerUtils definition
This revision removes the ad-hoc MemRefs that were needed using the old
ABI (when we still passed by value) and replaces them with the shared
StridedMemRef definitions of CRunnerUtils (possible now that we pass by
pointer). This avoids code duplication and makes sure we have a consistent
view of strided memory references in all our support libraries.

Reviewed By: jsetoain

Differential Revision: https://reviews.llvm.org/D110221
2021-09-22 09:23:26 -07:00
Aart Bik 128a9e1cb4 [mlir][sparse] cleanup ABI issues in C interface with memrefs
This change adds automatic wrapper functions with emit_c_interface
to all methods in the sparse support library that deal with MEMREFs.
The wrappers will take care of passing MEMREFs by value internally
and by pointer externally, thereby avoiding ABI issues across platforms.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D110219
2021-09-21 21:58:12 -07:00
Chris Lattner da93829b44 [DialectAsmPrinter] Add missing 'printAttributeWithoutType' member.
DialectAsmParser has a `parseAttribute` member that takes a
contextual type, but DialectAsmPrinter doesn't have the corresponding
member to take advantage of it.  As such, custom attribute
implementations can't really use it.  This adds the obvious missing
method which fills this hole.
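A hypothetical use of the new member in a custom attribute printer; the dialect, attribute name, and accessor below are illustrative, and only `printAttributeWithoutType` comes from this change:

```c++
void MyDialect::printAttribute(Attribute attr, DialectAsmPrinter &printer) const {
  auto wrapper = attr.cast<MyWrapperAttr>();
  printer << "wrapper<";
  // The inner attribute's type is implied by the wrapper, so elide it here.
  printer.printAttributeWithoutType(wrapper.getInner());
  printer << ">";
}
```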

Differential Revision: https://reviews.llvm.org/D110211
2021-09-21 18:45:24 -07:00
Alex Zinenko bdaf038266 [mlir] Always create a list of alias scopes when emitting LLVM IR
Previously, the translation to LLVM IR would emit IR that directly uses
a scope metadata node in case only one scope was in use in alias.scopes
or noalias metadata. It should always be a list of scopes. The verifier
change in 8700f2bd36 enforced this and
broke the test. Fix the translation to always create a list of scopes
using a new metadata node, update and reenable the respective test.

Fixes PR51919.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D110140
2021-09-22 00:00:46 +02:00
Christian Sigg 9149ae09bd Support value-typed references in iterator facade's operator->()
Add a PointerProxy similar to the existing iterator_facade_base::ReferenceProxy and return it from the arrow operator. This prevents iterator facades with a reference type that is not a true reference to take the address of a temporary.

Forward the reference type of the mapped_iterator to the iterator adaptor which in turn forwards it to the iterator facade. This fixes mlir::op_iterator::operator->() to take the address of a temporary.

Make some polishing changes to op_iterator and op_filter_iterator.
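A generic sketch of the proxy idea (not the exact LLVM implementation): operator->() hands out a pointer into a stored copy of the value, so callers never take the address of a temporary.

```c++
#include <utility>

template <typename ValueT>
class PointerProxy {
  ValueT value;

public:
  explicit PointerProxy(ValueT v) : value(std::move(v)) {}
  const ValueT *operator->() const { return &value; }
};
```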

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D109490
2021-09-21 20:42:22 +02:00
Tobias Gysi 8b5236def5 [mlir][linalg] Simplify slice dim computation for fusion on tensors (NFC).
Compute the tiled producer slice dimensions directly starting from the consumer not using the producer at all.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110147
2021-09-21 15:09:46 +00:00
Tobias Gysi 9072f1b5f8 [mlir][linalg] Add isPermutation helper (NFC).
Add a helper method to check if an index vector contains a permutation of its indices. Additionally, refactor applyPermutationToVector to take int64_t.
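A rough sketch of what such a permutation check does (the exact signature is an assumption): every index in [0, size) must appear exactly once.

```c++
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DenseSet.h"

static bool isPermutation(llvm::ArrayRef<int64_t> interchange) {
  llvm::SmallDenseSet<int64_t> seen;
  for (int64_t v : interchange) {
    if (v < 0 || v >= static_cast<int64_t>(interchange.size()) ||
        !seen.insert(v).second)
      return false;  // out of range or duplicate index
  }
  return true;
}
```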

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110135
2021-09-21 15:07:39 +00:00
Nicolas Vasilache 101d017a64 [mlir][Linalg] Revisit heuristic ordering of tensor.insert_slice in comprehensive bufferize.
It was previously assumed that tensor.insert_slice should be bufferized first in a greedy fashion to avoid out-of-place bufferization of the large tensor. This heuristic does not hold upon further inspection.

This CL removes the special handling of such ops and adds a test that exhibits better behavior and appears in real use cases.

The only test adversely affected is an artificial test which results in a returned memref: this pattern is not allowed by comprehensive bufferization in real scenarios anyway and the offending test is deleted.

Differential Revision: https://reviews.llvm.org/D110072
2021-09-21 14:22:45 +00:00
Nicolas Vasilache 0d2c54e851 [mlir][Linalg] Revisit RAW dependence interference in comprehensive bufferize.
Previously, comprehensive bufferize would consider all aliasing reads and writes to
the result buffer and matching operand. This resulted in spurious dependences
being considered and resulted in too many unnecessary copies.

Instead, this revision revisits the gathering of read and write alias sets.
This results in fewer alloc and copies.
An exhaustive test case is added that considers all possible permutations of
`matmul(extract_slice(fill), extract_slice(fill), ...)`.
2021-09-21 14:22:22 +00:00
Tobias Gysi c8eed8f9a7 [mlir][linalg] Assert tile loop nest invariants in fusion.
Assert the tile loop nest invariants are satisfied instead of failing silently.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110137
2021-09-21 14:20:57 +00:00
Uday Bondhugula 5c77ed0330 [MLIR] NFC. gpu.launch op argument const folder cleanup
NFC updates to gpu.launch op argument const folder.

Differential Revision: https://reviews.llvm.org/D110136
2021-09-21 14:30:03 +05:30
Morten Borup Petersen 032cb1650f [MLIR][SCF] Add for-to-while loop transformation pass
This pass transforms SCF.ForOp operations to SCF.WhileOp. The For loop condition is placed in the 'before' region of the while operation, and induction variable incrementation + the loop body in the 'after' region. The loop-carried values of the while op are the induction variable (IV) of the for-loop + any iter_args specified for the for-loop.
Any 'yield' ops in the for-loop are rewritten to additionally yield the (incremented) induction variable.

This transformation is useful for passes where we want to consider structured control flow solely on the basis of a loop body and the computation of a loop condition. As an example, when doing high-level synthesis in CIRCT, the incrementation of an IV in a for-loop is "just another part" of a circuit datapath, and what we really care about is the distinction between our datapath and our control logic (the condition variable).

Differential Revision: https://reviews.llvm.org/D108454
2021-09-21 09:09:54 +01:00
Kunwar Shaanjeet Singh Grover 0d12c99191 [MLIR] Add mergeLocalIds and mergeSymbolIds
This patch adds mergeLocalIds and mergeSymbolIds as public functions
for FlatAffineConstraints and FlatAffineValueConstraints respectively.

mergeLocalIds is also required to support divisions in intersection,
subtraction, equality checks, and complement for PresburgerSet.

This patch is part of a series of patches aimed at generalizing affine
dependence analysis.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D110045
2021-09-21 13:02:23 +05:30
Chris Lattner 58abc8c34b [OpAsmParser] Add a parseCommaSeparatedList helper and beef up Delimiter.
Lots of custom ops have hand-rolled comma-delimited parsing loops, as does
the MLIR parser itself.  This provides a standard interface for doing so that
is less error-prone and requires less boilerplate.

While here, extend Delimiter to support <> and {} delimited sequences as
well (I have a use for <> in CIRCT specifically).
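A hedged sketch of using the helper in a custom parser; the exact signature and Delimiter enumerator names are assumptions based on the description above, not copied from the patch:

```c++
#include "mlir/IR/OpImplementation.h"

using namespace mlir;

// Parses `< i64 (, i64)* >` without a hand-rolled loop.
static ParseResult parseSizeList(OpAsmParser &parser,
                                 SmallVectorImpl<int64_t> &sizes) {
  auto parseOneSize = [&]() -> ParseResult {
    sizes.push_back(0);
    return parser.parseInteger(sizes.back());
  };
  return parser.parseCommaSeparatedList(OpAsmParser::Delimiter::LessGreater,
                                        parseOneSize);
}
```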

Differential Revision: https://reviews.llvm.org/D110122
2021-09-20 20:59:11 -07:00
River Riddle d80d3a358f [mlir] Refactor ElementsAttr into an AttrInterface
This revision refactors ElementsAttr into an Attribute Interface.
This enables a common interface with which to interact with
element attributes, without needing to modify the builtin
dialect. It also removes a majority (if not all?) of the need for
the current OpaqueElementsAttr, which was originally intended as
a way to opaquely represent data that was not representable by
the other builtin constructs.

The new ElementsAttr interface not only allows for users to
natively represent their data in the way that best suits them,
it also allows for efficient opaque access and iteration of the
underlying data. Attributes using the ElementsAttr interface
can directly expose support for interacting with the held
elements using any C++ data type they claim to support. For
example, DenseIntOrFpElementsAttr supports iteration using
various native C++ integer/float data types, as well as
APInt/APFloat, and more. ElementsAttr instances that refer to
DenseIntOrFpElementsAttr can use all of these data types for
iteration:

```c++
DenseIntOrFpElementsAttr intElementsAttr = ...;

ElementsAttr attr = intElementsAttr;
for (uint64_t value : attr.getValues<uint64_t>())
  ...;
for (APInt value : attr.getValues<APInt>())
  ...;
for (IntegerAttr value : attr.getValues<IntegerAttr>())
  ...;
```

ElementsAttr also supports failable range/iterator access,
allowing for selective code paths depending on data type
support:

```c++
ElementsAttr attr = ...;
if (auto range = attr.tryGetValues<uint64_t>()) {
  for (uint64_t value : *range)
    ...;
}
```

Differential Revision: https://reviews.llvm.org/D109190
2021-09-21 01:57:43 +00:00
River Riddle 0cb5d7fc7f [mlir] Add value_begin/value_end methods to DenseElementsAttr
Currently DenseElementsAttr only exposes the ability to get the full range of values for a given type T, but there are many situations where we just want the beginning/end iterator. This revision adds proper value_begin/value_end methods for all of the supported T types, and also cleans up a bit of the interface.

Differential Revision: https://reviews.llvm.org/D104173
2021-09-21 01:57:43 +00:00
River Riddle 4f21152af1 [mlir] Tighten verification of SparseElementsAttr
SparseElementsAttr currently does not perform any verification on construction, with the only verification existing within the parser. This revision moves the parser verification to SparseElementsAttr, and also adds additional verification for when a sparse index is not valid.

Differential Revision: https://reviews.llvm.org/D109189
2021-09-21 01:57:42 +00:00
Stella Laurenzo 1fb2e842a9 [mlir][python] Forward _OperationBase _CAPIPtr to the Operation.
* ODS generated operations extend _OperationBase and without this, cannot be marshalled to CAPI functions.
* No test case updates: this kind of interop is quite hard to verify with in-tree tests.

Differential Revision: https://reviews.llvm.org/D110030
2021-09-20 18:52:05 -07:00
Chia-hung Duan bb2506061b [mlir-tblgen] Add DagNode StaticMatcher.
Some patterns may share common DAG structures. Generate a static
function to do the match logic to reduce the binary size.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D105797
2021-09-20 23:37:42 +00:00
Mehdi Amini 4e7c0a37c9 Update MLIR generate-test-checks.py to add the notice from the source into the generated file
Folks may not read the source of the tool and miss these instructions.

Differential Revision: https://reviews.llvm.org/D110082
2021-09-20 23:19:40 +00:00
natashaknk 38ff7e11c0 [mlir][tosa] Add several binary elementwise to the list of broadcastable ops.
Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D110096
2021-09-20 16:07:35 -07:00
natashaknk 4edf46f72a [mlir][tosa] Remove the documentation requirement for elements of several binary elementwise ops to be of the same rank.
Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D110095
2021-09-20 15:00:38 -07:00
MaheshRavishankar 4cf9bf6c9f [mlir][MemRef] Compute unused dimensions of a rank-reducing subviews using strides as well.
For `memref.subview` operations, when there is more than one
unit dimension, the strides need to be used to figure out which of
the unit-dims are actually dropped.

Differential Revision: https://reviews.llvm.org/D109418
2021-09-20 11:05:30 -07:00
MaheshRavishankar 0b33890f45 [mlir][Linalg] Add ConvolutionOpInterface.
Add an interface that allows grouping together all convolution and
pooling ops within Linalg named ops. The interface currently verifies that:
- the indexing map used for input/image access is valid
- the filter and output are accessed using projected permutations
- all loops are characterizable as one iterating over
  - batch dimension,
  - output image dimensions,
  - filter convolved dimensions,
  - output channel dimensions,
  - input channel dimensions,
  - depth multiplier (for depthwise convolutions)

Differential Revision: https://reviews.llvm.org/D109793
2021-09-20 10:41:10 -07:00
Mehdi Amini 5edd79fc97 Revert "[MLIR][SCF] Add for-to-while loop transformation pass"
This reverts commit 644b55d57e.

The added test is failing the bots.
2021-09-20 17:21:59 +00:00
Mehdi Amini f18f1ab4fd Temporarily XFAIL MLIR test that fails the LLVM verifier after 8700f2bd3 2021-09-20 17:20:11 +00:00
Tobias Gysi 7be28d82b4 [mlir][linalg] Add IndexOp support to fusion on tensors.
This revision depends on https://reviews.llvm.org/D109761 and https://reviews.llvm.org/D109766.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D109774
2021-09-20 15:59:35 +00:00
Morten Borup Petersen 644b55d57e [MLIR][SCF] Add for-to-while loop transformation pass
This pass transforms SCF.ForOp operations to SCF.WhileOp. The For loop condition is placed in the 'before' region of the while operation, and induction variable incrementation + the loop body in the 'after' region. The loop-carried values of the while op are the induction variable (IV) of the for-loop + any iter_args specified for the for-loop.
Any 'yield' ops in the for-loop are rewritten to additionally yield the (incremented) induction variable.

This transformation is useful for passes where we want to consider structured control flow solely on the basis of a loop body and the computation of a loop condition. As an example, when doing high-level synthesis in CIRCT, the incrementation of an IV in a for-loop is "just another part" of a circuit datapath, and what we really care about is the distinction between our datapath and our control logic (the condition variable).

Differential Revision: https://reviews.llvm.org/D108454
2021-09-20 16:57:50 +01:00
Tobias Gysi 09100c75b5 [mlir][linalg] Fix typo (NFC). 2021-09-20 15:46:16 +00:00
Tobias Gysi 6db928b8f3 [mlir][linalg] Fusion on tensors.
Add a new version of fusion on tensors that supports the following scenarios:
- support input and output operand fusion
- fuse a producer result passed in via tile loop iteration arguments (update the tile loop iteration arguments)
- supports only linalg operations on tensors
- supports only scf::for
- cannot add an output to the tile loop nest

The LinalgTileAndFuseOnTensors pass tiles the root operation and fuses its producers.

Reviewed By: nicolasvasilache, mravishankar

Differential Revision: https://reviews.llvm.org/D109766
2021-09-20 14:45:34 +00:00
Valentin Clement d6929aaa67 [mlir][openacc] Make use of the second counter extension in DataOp translation
Make use of runtime extension for the second reference counter used in
structured data region. This extension is implemented in D106510 and D106509.

Differential Revision: https://reviews.llvm.org/D106517
2021-09-20 13:43:50 +02:00
Vladislav Vinogradov 798e4bfbed [mlir] Fix integration tests failures introduced in D108505 2021-09-20 11:48:24 +03:00
KareemErgawy-TomTom bdcf4b9b96 [MLIR][Linalg] Make detensoring cost-model more flexible.
So far, the CF cost model for detensoring was limited to discovering
pure CF structures. This means that if, while discovering the CF component,
the cost model found any op that is not detensorable, it gave up on
detensoring altogether. This patch makes it a bit more flexible by
cleaning the detensorable component of non-detensorable ops without
giving up entirely.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D109965
2021-09-20 10:21:31 +02:00
Arjun P 76cb876563 [MLIR] Simplex::appendVariable: early return if count == 0 2021-09-20 13:16:56 +05:30
Vladislav Vinogradov ec03bbe8a7 [mlir] Fix bug in partial dialect conversion
The discussion on forum:
https://llvm.discourse.group/t/bug-in-partial-dialect-conversion/4115

`applyPartialConversion` didn't handle operations that were
marked as illegal inside a dynamic legality callback.
Instead of reporting an error when such an operation was not converted to the legal set,
the method just added it to `unconvertedSet` in the same way as unknown operations.

This patch fixes that and handles dynamically illegal operations as well.

The patch includes 2 fixes for existing passes:

* `tensor-bufferize` - explicitly mark `std.return` as legal.
* `convert-parallel-loops-to-gpu` - ugly fix with marking visited operations
  to avoid recursive legality checks.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D108505
2021-09-20 10:39:10 +03:00
Vladislav Vinogradov 9a2255dfa0 [mlir][NFC] Add explicit "::mlir" namespace to tblgen generated code
Reviewed By: lattner, ftynse

Differential Revision: https://reviews.llvm.org/D109223
2021-09-20 10:37:50 +03:00
xndcn 9de88fc0ea [mlir][emitc] Fix indent in CondBranchOp and block label
1. Add missing indent in CondBranchOp
2. Remove indent in block label

Differential Revision: https://reviews.llvm.org/D109805
2021-09-19 20:03:42 +08:00
Arjun P 33afea5488 [MLIR] Simplex: rename num{Variables,Constraints} to getNum{Variables,Constraints}
As per the LLVM Coding Standards, function names should be verb phrases.
2021-09-18 22:39:35 +05:30
Arjun P 2b44a7325c [MLIR] Simplex: support adding new variables dynamically
Reviewed By: Groverkss

Differential Revision: https://reviews.llvm.org/D109962
2021-09-18 21:32:17 +05:30
Jacques Pienaar 0a1e569d37 [mlir-c] Add getting fused loc
For creating a fused loc from an array of locations and metadata.

Differential Revision: https://reviews.llvm.org/D110022
2021-09-18 06:57:51 -07:00
Uday Bondhugula 57eda9becc [MLIR][GPU] Add constant propagator for gpu.launch op
Add a constant propagator for gpu.launch op in cases where the
grid/thread IDs can be trivially determined to take a single constant
value of zero.

Differential Revision: https://reviews.llvm.org/D109994
2021-09-18 12:02:46 +05:30
Geoffrey Martin-Noble 2cda4f8ed7 [mlir] Fix syntax example for tensor.from_elements
Parens are not used here
2021-09-17 17:23:11 -07:00
Aart Bik 46e77b5d10 [mlir][sparse] add a sparse quantized_matmul example to integration test
Note that this revision adds a very tiny bit of constant folding in the
sparse compiler lattice construction. Although I am generally trying to
avoid such canonicalizations (and rely on other passes to fix this instead),
the benefits of avoiding a very expensive disjunction lattice construction
justify having this special code (at least for now).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109939
2021-09-17 13:04:44 -07:00
Aart Bik d4e16171e8 [mlir][sparse] add dce test for all sparse tensor ops
Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D109992
2021-09-17 13:03:42 -07:00
Krzysztof Drewniak 121aab84d1 [MLIR][Affine] Simplify nested modulo operations when able
It is the case that, for all positive a and b such that b divides a,
(e mod a) mod b = e mod b. For example, ((d0 mod 35) mod 5) can
be simplified to (d0 mod 5), but ((d0 mod 35) mod 4) cannot be simplified
further (x = 36 is a counterexample).

This change enables more complex simplifications. For example,
((d0 * 72 + d1) mod 144) mod 9 can now simplify to (d0 * 72 + d1) mod 9
and thus to d1 mod 9. Expressions with chained modulus operators are
reasonably common in tensor applications, and this change _should_
improve code generation for such expressions.
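A one-line justification of the identity, under the stated assumption that b divides a: writing e = qa + r with r = e mod a, the term qa is a multiple of b, so

```latex
(e \bmod a) \bmod b = r \bmod b = (e - qa) \bmod b = e \bmod b.
```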

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D109930
2021-09-17 19:06:00 +00:00
thomasraoux 08f0cb7719 [mlir] Prevent crash in DropUnitDim pattern due to tensor with encoding
Differential Revision: https://reviews.llvm.org/D109984
2021-09-17 12:03:16 -07:00
thomasraoux 36aac53b36 [mlir][linalg] Extend drop unit dim pattern to all cases of reduction
Even with all-parallel loops, reading the output value is still allowed, so we
don't have to handle reduction loops differently.

Differential Revision: https://reviews.llvm.org/D109851
2021-09-17 10:09:57 -07:00
thomasraoux 416679615d [mlir] Linalg hoisting should ignore uses outside the loop
Differential Revision: https://reviews.llvm.org/D109859
2021-09-17 10:06:57 -07:00
thomasraoux a123e3c48b [mlir] Fix potential crash in hoistRedundantVectorTransfers
Differential Revision: https://reviews.llvm.org/D107856
2021-09-17 10:05:20 -07:00
Tobias Gysi 90b7817e03 [mlir][linalg] Add helper to update IndexOps after tiling (NFC).
Add the addTileLoopIvsToIndexOpResults method to shift the IndexOp results after tiling.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D109761
2021-09-17 15:17:33 +00:00
Arjun P 58719f6153 [MLIR] PresburgerSet: slightly expand documentation 2021-09-17 18:04:46 +05:30
Arjun P 44db07f11f [MLIR] AffineStructures: support removing a range of constraints at once
Reviewed By: Groverkss, grosser

Differential Revision: https://reviews.llvm.org/D109892
2021-09-17 16:27:48 +05:30
Arjun P 6607bd9fd8 [MLIR] AffineStructures::removeIdRange: support specifying a range within an IdKind
Reviewed By: Groverkss, grosser

Differential Revision: https://reviews.llvm.org/D109896
2021-09-17 16:25:26 +05:30
Arjun P f263ea1571 [MLIR] Matrix: support resizing horizontally
Reviewed By: Groverkss

Differential Revision: https://reviews.llvm.org/D109897
2021-09-17 16:22:31 +05:30
MaheshRavishankar 04a66f8d2b Fixing vector add pattern that incorrectly returns success.
The pattern was returning success even if it did no work, leading to pattern application running up to the max iteration count and failing.

Reviewed By: nicolasvasilache, mravishankar

Differential Revision: https://reviews.llvm.org/D109791
2021-09-16 14:48:09 -07:00
Aart Bik 233b42a8bb [mlir][sparse] remove unused TENSOR environment
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109919
2021-09-16 14:32:09 -07:00
Rob Suderman 8662a2f208 [mlir][tosa] Relax ranked constraint on quantization builder
The TosaOp definition had an artificial constraint that the input/output types
needed to be ranked to invoke the quantization builder. Relaxing this is correct, as an
unranked tensor can still be quantized.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D109863
2021-09-16 11:43:47 -07:00
Aart Bik 860cbeb159 [mlir][sparse] add more asserts to sparse support lib
We are having issues running the integration test of the sparse compiler
on AArch64 (crashing in the lib). This revision adds more assertions.

Reviewed By: jsetoain

Differential Revision: https://reviews.llvm.org/D109861
2021-09-16 10:13:29 -07:00
Nicolas Vasilache ee2e414dde [mlir][Linalg] Cleanup doc and improve logging and readability in ComprehensiveBufferize.cpp - NFC 2021-09-16 16:41:47 +00:00
Tobias Gysi 8f2db36b01 [mlir][OpDSL] Update op definitions to make shapes more concise (NFC).
Express the input shape definitions of convolution and pooling operations in terms of the output shapes, filter shapes, strides, and dilations.

Reviewed By: shabalin, rsuderman, stellaraccident

Differential Revision: https://reviews.llvm.org/D109815
2021-09-16 06:02:00 +00:00
Aart Bik b1d44e5902 [mlir][sparse] add affine subscripts to sparse compilation pass
This enables the sparsification of more kernels, such as convolutions
where there is an x(i+j) subscript. It also enables more tensor invariants
such as x(1) or other affine subscripts such as x(i+1). Currently, we
reject sparsity altogether for such tensors. Despite this restriction,
however, we can already handle a lot more kernels with compound subscripts
for dense access (viz. convolution with dense input and sparse filter).
Some unit tests and an integration test demonstrate new capability.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109783
2021-09-15 20:28:04 -07:00
Mogball cb8c30d35d [DRR] Explicit Return Types in Rewrites
Adds a new rewrite directive returnType that can be added at the end of an op's
argument list to explicitly specify return types.

```
(OpX $v0, $v1, (returnType "$_builder.getI32Type()"))
```

Pass in a bound value to copy its return type, or pass a native code call to
dynamically create new types.

```
(OpX $v0, $v1, (returnType $v0, (NativeCodeCall<"..."> $v1)))
```

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D109472
2021-09-15 14:25:29 -07:00
Rob Suderman 1ac2d195ec [mlir][linalg] Add canonicalizers for depthwise conv
There are two main versions of depthwise conv, depending on whether the multiplier
is 1 or not. In cases where m == 1 we should use the version without the
multiplier channel as it can perform greater optimization.

Add lowering for the quantized/float versions to have a multiplier of one.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D108959
2021-09-15 14:09:15 -07:00
Simon Camphausen 1b79efdc72 [mlir] Fix printing of EmitC attrs/types with escape characters
Attributes and types were not escaped when printing.

Reviewed By: jpienaar, marbre

Differential Revision: https://reviews.llvm.org/D109143
2021-09-15 18:15:38 +00:00