Commit Graph

127 Commits

Author SHA1 Message Date
natashaknk bedd3ee881 [mlir][tosa] Change tosa.depthwise_conv2d's ending reshape to a collapse.
TOSA's depthwise_conv2d operation includes a reshape for the implicit x1 dimension.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D126212
2022-05-23 11:27:01 -07:00
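
As an editorial illustration of the change above, a rough sketch of the collapse that now ends the tosa.depthwise_conv2d lowering, with made-up shapes: the 5-D convolution result carrying the channel-multiplier dimension is collapsed into the 4-D output.
```
// Hypothetical shapes: collapse (N, H, W, C, M) into (N, H, W, C * M).
%out = tensor.collapse_shape %dw_result [[0], [1], [2], [3, 4]]
    : tensor<1x5x5x2x3xf32> into tensor<1x5x5x6xf32>
```
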
Robert Suderman cb4a5eae1e [mlir][tosa] Use math.ctlz intrinsic for tosa.clz
We were counting leading zeros bit by bit for the clz operation. The math dialect
now has an intrinsic to do this in one instruction. Migrated to this
intrinsic and fixed a minor bug in math-to-llvm for it.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D125592
2022-05-16 11:31:35 -07:00
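
A minimal sketch of the math dialect intrinsic the lowering above now emits (the operand name is illustrative):
```
// Count leading zeros in a single operation instead of a per-bit loop.
%leading_zeros = math.ctlz %value : i32
```
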
River Riddle 58ceae9561 [mlir:NFC] Remove the forward declaration of FuncOp in the mlir namespace
FuncOp has been in the `func` namespace for a little over a month, so the
using directive can be dropped now.
2022-04-18 12:01:55 -07:00
natashaknk fac9f45e05 [tosa][mlir] Add dynamic width/height support for depthwise convolution in tosa-to-linalg
In addition, fixed a small bug where padding incorrectly inferred the output shape for dynamic inputs in convolution.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D121872
2022-04-07 10:50:06 -07:00
gysit 7294be2b8e [mlir][linalg] Replace linalg.fill by OpDSL variant.
The revision removes the linalg.fill operation and renames the OpDSL-generated linalg.fill_tensor operation to replace it. After the change, all named structured operations are defined via OpDSL and there are no handwritten operations left.

A side effect of the change is that the pretty-printed form changes from:
```
%1 = linalg.fill(%cst, %0) : f32, tensor<?x?xf32> -> tensor<?x?xf32>
```
to
```
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x?xf32>) -> tensor<?x?xf32>
```
Additionally, the builder signature now takes input and output value ranges, as is the case for all other OpDSL operations:
```
rewriter.create<linalg::FillOp>(loc, val, output)
```
changes to
```
rewriter.create<linalg::FillOp>(loc, ValueRange{val}, ValueRange{output})
```
All other changes remain minimal. In particular, the canonicalization patterns are the same and the `value()`, `output()`, and `result()` methods are now implemented by the FillOpInterface.

Depends On D120726

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D120728
2022-03-14 10:51:08 +00:00
River Riddle 47f175b09b [mlir] Update FuncOp conversion passes to Pass/InterfacePass<FunctionOpInterface>
These passes generally don't rely on any special aspects of FuncOp, and the move allows
these passes to be used in many more situations. The passes that obviously weren't
relying on invariants guaranteed by a "function" were updated to be generic passes; the
rest were updated to be FunctionOpInterface InterfacePasses.

The test updates are NFC, switching from the implicit-nesting form (-pass -pass2) to
the -pass-pipeline form (generic passes do not implicitly nest the way op-specific passes do).

Differential Revision: https://reviews.llvm.org/D121190
2022-03-08 12:25:32 -08:00
natashaknk 8d7a833eed [tosa][mlir] Add support for dynamic width/height for Conv2D inputs in tosa-to-linalg
Infers output shape for dynamic width/height inputs.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D119977
2022-03-02 12:16:35 -08:00
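
A rough sketch of an input now accepted by the conversion above, with dynamic height/width dimensions; shapes and attribute values are made up, using the TOSA assembly of that period:
```
// Batch and channels static, height and width dynamic.
%0 = "tosa.conv2d"(%input, %weights, %bias)
       {dilation = [1, 1], pad = [0, 0, 0, 0], stride = [1, 1]}
     : (tensor<1x?x?x4xf32>, tensor<8x3x3x4xf32>, tensor<8xf32>)
     -> tensor<1x?x?x8xf32>
```
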
River Riddle 1f971e23f0 [mlir] Trim a huge number of unnecessary dependencies on the Func dialect
The Func dialect has a large number of legacy dependencies carried over from the old
Standard dialect, which was pervasive and contained a large number of varied
operations. With the split and demise of the Standard dialect, a lot of lingering
dead dependencies have survived into the Func dialect. This commit removes the
large majority of them, greatly reducing the dependency surface area of the
Func dialect.
2022-03-01 12:10:04 -08:00
River Riddle 23aa5a7446 [mlir] Rename the Standard dialect to the Func dialect
The last remaining operations in the standard dialect all revolve around
FuncOp/function-related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:

* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect

See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D120624
2022-03-01 12:10:04 -08:00
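
A small sketch of what the rename above means at the IR level, written with the spellings that eventually landed (function-related ops such as call and return now live in the func dialect):
```
func.func private @callee(i32) -> i32

func.func @caller(%arg0: i32) -> i32 {
  // Formerly std.call / std.return.
  %0 = func.call @callee(%arg0) : (i32) -> i32
  func.return %0 : i32
}
```
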
not-jenni fd2550d80c Adds a flag to optionally disable tosa decompositions
Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D120338
2022-02-28 15:41:13 -08:00
River Riddle dec8af701f [mlir] Move SelectOp from Standard to Arithmetic
This is part of splitting up the standard dialect. See https://llvm.discourse.group/t/standard-dialect-the-final-chapter/ for discussion.

Differential Revision: https://reviews.llvm.org/D118648
2022-02-02 14:45:12 -08:00
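
After the move above, the op is spelled arith.select; a minimal sketch with illustrative operands:
```
// Select between %a and %b based on an i1 condition.
%res = arith.select %cond, %a, %b : i32
```
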
Alexandre Ganea dc3b9365b6 [mlir] Silence warnings when building with MSVC
Differential Revision: https://reviews.llvm.org/D118536
2022-01-30 17:31:35 -05:00
natashaknk 024a1fab5c [tosa][mlir] Add dynamic shape support for remaining ops
Added support for concat, tile, pad, argmax and table ops

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D118397
2022-01-27 11:25:38 -08:00
Alexander Belyaev fd0c6f5391 [mlir] Move linalg::PadTensorOp to tensor::PadOp.
RFC: https://llvm.discourse.group/t/rfc-move-linalg-padtensorop-to-tensor-padop/5785

Differential Revision: https://reviews.llvm.org/D117892
2022-01-21 20:02:39 +01:00
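
A minimal sketch of the op under its new tensor.pad name (shapes and the pad value are illustrative):
```
// Pad a dynamically shaped tensor with a constant value.
%padded = tensor.pad %t low[1, 2] high[2, 3] {
^bb0(%i: index, %j: index):
  tensor.yield %pad_value : f32
} : tensor<?x?xf32> to tensor<?x?xf32>
```
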
River Riddle e084679f96 [mlir] Make locations required when adding/creating block arguments
BlockArguments gained the ability to have locations attached a while ago, but they
have always been optional. This goes against the core tenet of MLIR that location
information is a requirement, so this commit updates the API to require locations.

Fixes #53279

Differential Revision: https://reviews.llvm.org/D117633
2022-01-19 17:35:35 -08:00
natashaknk b9b10c0e61 [tosa][mlir] Lowering for dynamic shapes in the reduce_x ops in tosa-to-linalg
Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D117691
2022-01-19 11:15:14 -08:00
River Riddle 4157455425 [mlir][Pass] Deprecate FunctionPass in favor of OperationPass<FuncOp>
The only benefit of FunctionPass is that it filters out function
declarations. This isn't enough to justify carrying it around, as we can
simply filter out declarations when necessary within the pass. We can
also explore better scheduling primitives to filter out declarations
at the pipeline level in the future.

The definition of FunctionPass is left intact for now to allow time for downstream
users to migrate.

Differential Revision: https://reviews.llvm.org/D117182
2022-01-18 19:52:44 -08:00
Rob Suderman 173fce4205 [mlir][tosa] Update default tosa-to-linalg passes
The optional decompositions have been verified to improve memory
usage on common models. Added the decompositions to the default tosa-to-linalg
passes.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D117175
2022-01-13 10:17:44 -08:00
natashaknk 310e9636ca [tosa][mlir] Support dynamic batch dimension for ops where the batch dim is explicit
Dynamic batch for rescale, gather, max_pool, avg_pool, conv2D and depthwise_conv2D. Split helper functions into a separate header file.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D117031
2022-01-12 14:16:50 -08:00
Aaron DeBattista dfd070820c [mlir][tosa] Allow optional TOSA decompositions to be populated separately
Moved all TOSA decomposition patterns so that they can be optionally populated
and used by external rewrites. This avoids decomposing TOSA operations when
backends may benefit from the non-decomposed version.

Reviewed By: rsuderman, mehdi_amini

Differential Revision: https://reviews.llvm.org/D116526
2022-01-11 10:26:30 -08:00
Mehdi Amini 78389de4d3 Add back missing return to non-void function
It was accidentally removed in e4e463e747.
2022-01-03 06:18:25 +00:00
Mehdi Amini e4e463e747 Remove useless nesting block and dead return statement in TosaToLinalg.cpp (NFC)
Flagged by Coverity.
2022-01-03 06:02:21 +00:00
Mehdi Amini 4f415216ca Apply clang-tidy fixes for performance-unnecessary-value-param to MLIR (NFC) 2022-01-02 22:37:13 +00:00
Mehdi Amini e4853be2f1 Apply clang-tidy fixes for performance-for-range-copy to MLIR (NFC) 2022-01-02 22:19:56 +00:00
Mehdi Amini 6786d7e4f5 Apply clang-tidy fixes for readability-simplify-boolean-expr to MLIR (NFC)
Reviewed By: rriddle, Mogball

Differential Revision: https://reviews.llvm.org/D116253
2022-01-02 01:59:31 +00:00
Mehdi Amini f0fff1dfde Remove unused applyPad function from TosaToLinalg.cpp (NFC) 2022-01-02 01:53:18 +00:00
Mehdi Amini 1fc096af1e Apply clang-tidy fixes for performance-unnecessary-value-param to MLIR (NFC)
Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D116250
2022-01-02 01:45:18 +00:00
Rob Suderman f0cb77d7d5 [mlir][tosa] Resubmit split tosa-to-linalg named ops out of pass
Includes a dependency fix that previously resulted in the canonicalizer pass not linking in.

Linalg named-op lowerings are moved to a separate pass. This allows the TOSA
canonicalizers to run between the named-op lowerings and the general TOSA
lowerings.

Differential Revision: https://reviews.llvm.org/D116057
2021-12-28 11:22:58 -08:00
Mehdi Amini 735fe1da6b Revert "[mlir][tosa] Split tosa-to-linalg named ops out of pass"
This reverts commit 313de31fbb.

There is a missing CMake dependency, building with shared libraries is
broken:

55.509 [45/4/3061] Linking CXX shared library lib/libMLIRTosaToLinalg.so.14git
FAILED: lib/libMLIRTosaToLinalg.so.14git
...
TosaToLinalgPass.cpp: undefined reference to `mlir::createCanonicalizerPass()'
2021-12-24 00:09:15 +00:00
Rob Suderman 313de31fbb [mlir][tosa] Split tosa-to-linalg named ops out of pass
Linalg named-op lowerings are moved to a separate pass. This allows the TOSA
canonicalizers to run between the named-op lowerings and the general TOSA
lowerings.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D116057
2021-12-23 12:23:19 -08:00
Mehdi Amini 02b6fb218e Fix clang-tidy issues in mlir/ (NFC)
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D115956
2021-12-20 20:25:01 +00:00
Jacques Pienaar c0342a2de8 [mlir] Switching accessors to prefixed form (NFC)
Makes the eventual prefixing flag flip a smaller change.
2021-12-20 08:03:43 -08:00
Rob Suderman 0763f12213 [mlir][tosa] Handle rescale case where shift > 63
It is possible for the shift value to exceed the number of bits. In these
cases we can just multiply by zero. This is a relatively rare occurrence but
should be handled.

Reviewed By: not-jenni

Differential Revision: https://reviews.llvm.org/D115779
2021-12-16 15:30:48 -08:00
gysit b7f2c108eb [mlir][linalg] Replace LinalgOps.h and LinalgTypes.h by a single header.
After removing the range type, Linalg does not define any types. The revision thus consolidates LinalgOps.h and LinalgTypes.h into a single Linalg.h header. Additionally, LinalgTypes.cpp is renamed to LinalgDialect.cpp to follow the convention adopted by other dialects such as the tensor dialect.

Depends On D115727

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115728
2021-12-15 12:15:03 +00:00
Alexander Belyaev b618880e7b [mlir] Move `linalg.tensor_expand/collapse_shape` to TensorDialect.
RFC: https://llvm.discourse.group/t/rfc-reshape-ops-restructuring/3310

linalg.fill gets a canonicalizer because `FoldFillWithTensorReshape` cannot be moved to tensor ops (it uses linalg::FillOp inside). Before, it was listed as a canonicalization pattern for the reshape operations; now it is a canonicalization pattern for FillOp.

Differential Revision: https://reviews.llvm.org/D115502
2021-12-10 12:11:48 +01:00
Nicolas Vasilache a08b750ce9 [mlir][tensor] InsertSliceOp verification.
This revision reintroduces tensor.insert_slice verification which seems
to have vanished over time: a verifier was initially introduced in cf9503c1b7
but for some reason the invalid.mlir was not properly updated; as time passed the verifier was not called anymore and later the code was deleted.

As a consequence, a non-negligible portion of tests had gone astray, using invalid
tensor.insert_slice semantics, and needed to be fixed.

Also, extract isRankReducedType from TensorOps for better reuse.
Originally, this facility was used by both the tensor and memref forms, but
it got copied around as dialects were split.

Differential Revision: https://reviews.llvm.org/D114715
2021-11-30 20:37:06 +00:00
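
For reference, a minimal well-formed tensor.insert_slice of the kind the reintroduced verifier above checks (shapes are illustrative):
```
// Insert a 2x2 slice into a 4x4 destination at offset [0, 1] with unit strides.
%r = tensor.insert_slice %slice into %dest[0, 1] [2, 2] [1, 1]
    : tensor<2x2xf32> into tensor<4x4xf32>
```
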
Rob Suderman 54eec7cafc [mlir][tosa] Separate tosa.transpose_conv decomposition and added stride support
Transpose convolution decomposition is now performed in a separate pass. This
allows padding / constant propagation to be performed at the TOSA level. It
also adds support for striding when there is no dilation.

Differential Revision: https://reviews.llvm.org/D114409
2021-11-23 12:16:44 -08:00
natashaknk 381677dfbf [tosa][mlir] Refactor tosa.reshape lowering to linalg for dynamic cases.
Split tosa.reshape into three individual lowerings: collapse, expand and a
combination of both. Add simple dynamic shape support.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D113936
2021-11-15 15:31:37 -08:00
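
A rough sketch of the "combination of both" case described above, using the collapse/expand assembly of that period and made-up shapes: a reshape whose groupings do not line up is lowered by collapsing to 1-D and then expanding back out.
```
// tosa.reshape from 2x3 to 3x2: collapse to a 1-D tensor, then expand.
%flat = tensor.collapse_shape %src [[0, 1]] : tensor<2x3xf32> into tensor<6xf32>
%dst  = tensor.expand_shape %flat [[0, 1]] : tensor<6xf32> into tensor<3x2xf32>
```
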
Nicolas Vasilache f67171ac58 [mlir][Linalg] Make depthwise convolution naming scheme consistent.
Names should be consistent across all operations; otherwise painful bugs will surface.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D113762
2021-11-15 07:54:29 +00:00
Rob Suderman 860d3811a9 [mlir][tosa] Add lowering for tosa.pad with explicit value
The new TOSA pad operation supports explicitly specifying the pad value. Added a
lowering to linalg that uses the explicit value.

Differential Revision: https://reviews.llvm.org/D113515
2021-11-10 14:15:20 -08:00
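
A rough sketch of the explicit-pad-value form handled by the lowering above; shapes, values, and the padding element type are illustrative:
```
// The third operand supplies the pad value explicitly.
%padding = "tosa.const"() {value = dense<[[1, 1], [2, 2]]> : tensor<2x2xi64>}
    : () -> tensor<2x2xi64>
%pad_val = "tosa.const"() {value = dense<3.5> : tensor<f32>} : () -> tensor<f32>
%padded  = "tosa.pad"(%input, %padding, %pad_val)
    : (tensor<1x2xf32>, tensor<2x2xi64>, tensor<f32>) -> tensor<3x6xf32>
```
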
Robert Suderman 58901a5a29 [mlir][tosa] Correct tosa.avg_pool2d for specification error
The specification said the output type for quantized average pool should be
an i32. Only the accumulator should be an i32; the result type should match the
input type.

Caused in https://reviews.llvm.org/D111590

Reviewed By: sjarus, GMNGeoffrey

Differential Revision: https://reviews.llvm.org/D112484
2021-10-25 14:41:16 -07:00
Geoffrey Martin-Noble efc6fe963c [MLIR][TOSA] Drop "OnTensors" suffix
This is the only lowering to Linalg that TOSA has, so the suffix is needlessly
verbose. Likely this was a carryover from IREE's usage, where we
originally lowered to linalg on buffers (the only linalg that existed at
the time), so the everything-on-tensors path needed the suffix. We're dropping
it in IREE also, having transitioned entirely to using Linalg on
tensors.

Reviewed By: sjarus

Differential Revision: https://reviews.llvm.org/D111911
2021-10-15 16:01:19 -07:00
Rob Suderman 59dd418e89 [mlir][tosa] Fix tosa.cast UiToFp32 for tosa-to-linalg
Part of the arith update broke UiToFp32. Fixed the lowering and included a new
test to detect a regression.

Differential Revision: https://reviews.llvm.org/D111772
2021-10-14 11:34:10 -07:00
Mogball a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
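
For reference, a couple of ops affected by the replacement above, in their arith spellings (operands are illustrative):
```
// Formerly std.addi and std.cmpi.
%sum = arith.addi %a, %b : i32
%lt  = arith.cmpi slt, %a, %b : i32
```
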
Rob Suderman 95e4b71519 [mlir][tosa] Fix tosa average_pool2d to linalg type issue
Average pool assumed the same input/output type. The result type for integers
is always an i32, so it should be updated appropriately.

Reviewed By: GMNGeoffrey

Differential Revision: https://reviews.llvm.org/D111590
2021-10-12 13:09:21 -07:00
natashaknk 4c48f7e29b [mlir][tosa] Create basic dynamic shape support for several ops.
Transpose, Matmul and Fully-connected dynamic shape support

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D111167
2021-10-06 10:36:04 -07:00
Rob Suderman d5a4c86d14 [mlir][tosa] tosa.cast support for unsigned integers
Unsigned integers need to be handled when casting to floating point.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D111102
2021-10-05 10:57:16 -07:00
Alex Zinenko 01d696e563 [mlir] rename the "packing" flag of linalg.pad_tensor to "nofold"
The discussion in https://reviews.llvm.org/D110425 demonstrated that "packing"
may be a confusing term to define the behavior of this op in the presence of the
attribute. Instead, indicate the intended effect of preventing the folder from
being applied.

Reviewed By: nicolasvasilache, silvas

Differential Revision: https://reviews.llvm.org/D111046
2021-10-04 21:28:11 +02:00
River Riddle b54c724be0 [mlir:OpConversionPattern] Add overloads for taking an Adaptor instead of ArrayRef
This has been a TODO for a long time, and it brings many advantages (namely nicer accessors and less fragile code). The existing overloads that accept ArrayRef are now treated as deprecated and will be removed in a followup (after a small grace period). Most of the upstream MLIR usages have been fixed by this commit; the rest will be handled in a followup.

Differential Revision: https://reviews.llvm.org/D110293
2021-09-24 17:51:41 +00:00
Alex Zinenko 5988a3b7a0 [mlir] Linalg: ensure tile-and-pad always creates padding as requested
Initially, the padding transformation and the related operation were only used
to guarantee static shapes of subtensors in tiled operations. The
transformation would not insert the padding operation if the shapes were
already static, and the overall code generation would actively remove such
"noop" pads. However, this transformation can be also used to pack data into
smaller tensors and marshall them into faster memory, regardless of the size
mismatches. In context of expert-driven transformation, we should assume that,
if padding is requested, a potentially padded tensor must be always created.
Update the transformation accordingly. To do this, introduce an optional
`packing` attribute to the `pad_tensor` op that serves as an indication that
the padding is an intentional choice (as opposed to side effect of type
normalization) and should be left alone by cleanups.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D110425
2021-09-24 18:40:13 +02:00