Commit Graph

3035 Commits

bakhtiyar ec0e4545ca Make AsyncParallelForRewrite parameterizable with a cost model that drives the choice of parallelization granularity.
Reviewed By: ezhulenev, mehdi_amini

Differential Revision: https://reviews.llvm.org/D115423
2021-12-19 08:41:01 -08:00
Mehdi Amini cc4781464f Fix warning "comparison of integers of different signs" (NFC) 2021-12-18 20:38:32 +00:00
Aaron DeBattista 64f694acaf [mlir][tosa] Move tosa canonicalizers to optional optimization pass
TOSA's canonicalizers that change dense operations should be moved to a
separate optimization pass to avoid canonicalizing to operations not supported
by the relevant backends.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D115890
2021-12-16 23:33:54 -08:00
Rob Suderman 0763f12213 [mlir][tosa] Handle rescale case where shift > 63
It is possible for the shift value to exceed the number of bits. In these
cases we can just multiply by zero. This is a relatively rare occurrence but
should be handled.

Reviewed By: not-jenni

Differential Revision: https://reviews.llvm.org/D115779
2021-12-16 15:30:48 -08:00
not-jenni f9cefc7b90 [mlir][tosa] Add tosa.max_pool2d as no-op canonicalization
When the input and output of a pool2d op are both 1x1, it can be canonicalized to a no-op.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D115908
2021-12-16 15:27:26 -08:00
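
A rough sketch of the fold described above, assuming NHWC layout and hypothetical value names (the exact attribute spelling may differ slightly from the dialect at this revision):

    // Pooling over a 1x1 input with a 1x1 output is an identity.
    %0 = "tosa.max_pool2d"(%arg0) {kernel = [1, 1], stride = [1, 1], pad = [0, 0, 0, 0]}
        : (tensor<1x1x1x8xf32>) -> tensor<1x1x1x8xf32>
    // The canonicalization replaces all uses of %0 with %arg0.
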
Rob Suderman 9a2308e170 [mlir][tosa] Minor cleanup of tosa.conv2d canonicalizer
Slight rename and better variable type usage in the tosa.conv2d to
tosa.fully_connected lowering. Also included disabling the pattern for padded
convolutions.

Reviewed By: not-jenni

Differential Revision: https://reviews.llvm.org/D115776
2021-12-16 15:13:01 -08:00
Alexander Belyaev 88df30c8d8 [mlir] Add canonicalization for extract(tensor.from_elements) in 0d case.
Differential Revision: https://reviews.llvm.org/D115875
2021-12-16 15:46:57 +01:00
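
A minimal sketch of the folded pattern, assuming the 0-d spelling of `tensor.from_elements` and hypothetical value names:

    %t = tensor.from_elements %x : tensor<f32>
    %e = tensor.extract %t[] : tensor<f32>
    // The canonicalization folds %e directly to %x.
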
Alexander Belyaev f77e9f8768 [mlir] Extend `tensor.from_elements` to support N-D case.
RFC: https://llvm.discourse.group/t/rfc-extend-tensor-fromelementsop-to-n-d/4715

Differential Revision: https://reviews.llvm.org/D115821
2021-12-16 14:52:41 +01:00
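
For illustration, a hypothetical 2-D use of the extended op (elements listed in row-major order, value names assumed):

    %t = tensor.from_elements %a, %b, %c, %d, %e, %f : tensor<2x3xf32>
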
Diego Caballero 32fe1a8a25 [mlir][GPU] Extend GPU kernel outlining to generate DL specification
This patch extends the GPU kernel outlining pass so that it can take in
an optional data layout specification that will be attached to the GPU
module operation generated. If the data layout specification is not provided,
the default data layout is used instead.

Reviewed By: herhut, mehdi_amini

Differential Revision: https://reviews.llvm.org/D115722
2021-12-16 11:35:53 +00:00
Matthias Springer ec8628b1d6 [mlir][linalg][bufferize][NFC] Pass BufferizationState into all op interface methods
This allows op interface implementations to make decisions based on dialect-specific bufferization state.

This is in preparation of fixing conflict detection of CallOps in ModuleBufferization.

Differential Revision: https://reviews.llvm.org/D115705
2021-12-16 11:45:13 +09:00
Hanhan Wang 501674dc3b [mlir][Vector] Further fix to avoid infinite loop in InnerOuterDimReductionConversion
If all the dims are reduction dims, it is already in inner-most/outer-most
reduction form.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D115820
2021-12-15 13:54:15 -08:00
Mogball c7103810bd [mlir][scf] Add getNumRegionInvocations to IfOp
Implements the RegionBranchOpInterface method getNumRegionInvocations for `scf::IfOp` so that, when the condition is constant, the number of region executions can be analyzed by `NumberOfExecutions`.

Reviewed By: jpienaar, ftynse

Differential Revision: https://reviews.llvm.org/D115087
2021-12-15 14:56:20 +00:00
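
A small sketch of the analyzable case, assuming the `arith` dialect spelling of the constant:

    %cond = arith.constant true
    scf.if %cond {
      // Known to execute exactly once.
      scf.yield
    } else {
      // Known to execute zero times.
      scf.yield
    }
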
Matthias Springer 417014170b [mlir][linalg][bufferize] Replace remaining bvm usage with new API
* Call `replaceOp` instead of `mapBuffer`.
* Remove bvm and all helper functions around bvm.
* Simplify FuncOp bufferization and rely on existing functionality to generate ToMemrefOps for function BlockArguments.

Differential Revision: https://reviews.llvm.org/D115515
2021-12-15 23:21:39 +09:00
gysit b7f2c108eb [mlir][linalg] Replace LinalgOps.h and LinalgTypes.h by a single header.
After removing the range type, Linalg does not define any type. The revision thus consolidates the LinalgOps.h and LinalgTypes.h into a single Linalg.h header. Additionally, LinalgTypes.cpp is renamed to LinalgDialect.cpp to follow the convention adopted by other dialects such as the tensor dialect.

Depends On D115727

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115728
2021-12-15 12:15:03 +00:00
Tres Popp fc64a164ec [mlir] Use rewriter in linalg Detensorize
This allows rollbacks to succeed when dialect lowering fails.

Differential Revision: https://reviews.llvm.org/D115789
2021-12-15 11:28:18 +01:00
Matthias Springer 1652871473 [mlir][linalg][bufferize] Reimplementation of TiledLoopOp bufferization
Instead of modifying the existing linalg.tiled_loop op, create a new op with memref input/outputs and delete the old op.

Differential Revision: https://reviews.llvm.org/D115493
2021-12-15 18:45:29 +09:00
Matthias Springer a5927737da [mlir][linalg][bufferize] Reimplementation of scf.if bufferization
Instead of modifying the existing scf.if op, create a new op with memref OpOperands/OpResults and delete the old op.

New allocations / other memrefs can now be yielded from the op. This functionality is deactivated by default and guarded against by AssertDestinationPassingStyle.

Differential Revision: https://reviews.llvm.org/D115491
2021-12-15 18:40:54 +09:00
Javier Setoain a4830d14ed [mlir][RFC] Add scalable dimensions to VectorType
With VectorType supporting scalable dimensions, we don't need many of
the operations currently present in ArmSVE, like mask generation and
basic arithmetic instructions. Therefore, this patch also gets
rid of those.

Having built-in scalable vector support also simplifies the lowering of
scalable vector dialects down to LLVMIR.

Scalable dimensions are indicated by wrapping them in square brackets:

        vector<[4]xf32>

is a scalable vector of 4 single-precision floating-point elements.

More generally, a VectorType can have a set of fixed-length dimensions
followed by a set of scalable dimensions:

        vector<2x[4x4]xf32>

is a vector with 2 scalable 4x4 vectors of single-precision floating-point
elements.

The scale of the scalable dimensions can be obtained with the Vector
operation:

        %vs = vector.vscale

This change is being discussed in the discourse RFC:

https://llvm.discourse.group/t/rfc-add-built-in-support-for-scalable-vector-types/4484

Differential Revision: https://reviews.llvm.org/D111819
2021-12-15 09:31:37 +00:00
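
As a usage sketch, the runtime length of a `vector<[4]xf32>` can be computed by scaling the static size with `vector.vscale` (hypothetical value names):

    %vs = vector.vscale
    %c4 = arith.constant 4 : index
    // Runtime number of elements in vector<[4]xf32>.
    %n = arith.muli %c4, %vs : index
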
Matthias Springer 7161aa06ef [mlir][linalg][bufferize] Reimplementation of scf.for bufferization
Instead of modifying the existing scf.for op, create a new op with memref OpOperands/OpResults and delete the old op.

New allocations / other memrefs can now be yielded from the loop. This functionality is deactivated by default and guarded against by AssertDestinationPassingStyle.

This change also introduces `replaceOp`, which will be utilized by all other `bufferize` implementations in future commits. Bufferization will then no longer rely on old (pre-bufferization) ops being DCE'd away. Instead, old ops are deleted on the spot. This improves debuggability because there won't be any duplicate ops anymore (bufferized + not-yet-bufferized) when dumping IR during bufferization. It is also less fragile because unbufferized IR can no longer silently "hang around" due to an implementation bug.

Differential Revision: https://reviews.llvm.org/D114926
2021-12-15 18:29:22 +09:00
gysit 9912bed730 [mlir][linalg] Remove RangeOp and RangeType.
Remove the RangeOp and the RangeType that are not actively used anymore. After removing RangeType, the LinalgTypes header only includes the generated dialect header.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115727
2021-12-15 07:19:10 +00:00
Thomas Raoux 7d97678df7 [mlir][linalg] Break up linalg vectorization pre-condition
Break up the vectorization pre-condition into the part checking for
static shape and the rest checking if the linalg op is supported by
vectorization. This allows checking if an op could be vectorized if it
had static shapes.

Differential Revision: https://reviews.llvm.org/D115754
2021-12-14 13:38:14 -08:00
Alexander Belyaev a82a19c137 [mlir] Add a missing pattern to bufferize tensor.rank.
Differential Revision: https://reviews.llvm.org/D115745
2021-12-14 20:04:57 +01:00
Matthias Springer 81eece7f26 [mlir][linalg][bufferize] Debug output as IR attributes
Instead of printing analysis debug information to stderr, annotate the IR. This makes it easier to understand decisions made by the analysis, especially in larger input IR.

Differential Revision: https://reviews.llvm.org/D115575
2021-12-14 21:29:43 +09:00
Alexander Belyaev 15f8f3e20a [mlir] Split std.rank into tensor.rank and memref.rank.
Move `std.rank` similarly to how `std.dim` was moved to TensorOps and MemRefOps.

Differential Revision: https://reviews.llvm.org/D115665
2021-12-14 10:15:55 +01:00
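
After the split, rank queries are spelled in the respective dialects; a minimal sketch with hypothetical values:

    %0 = tensor.rank %t : tensor<*xf32>
    %1 = memref.rank %m : memref<*xf32>
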
Markus Böck ef5be2bb16 [mlir] Implement `DataLayoutTypeInterface` for `LLVMArrayType`
Implementing the interface allows querying the size and alignment of an LLVMArrayType, as well as querying the size and alignment of a struct containing an LLVMArrayType.
The implementation should yield the same results as llvm::DataLayout, including support for over-aligned element types.
There is no customization point for adjusting an array's alignment; it is simply taken from the element type.

Differential Revision: https://reviews.llvm.org/D115704
2021-12-14 09:35:45 +01:00
Benoit Jacob aba437ceb2 [mlir][Vector] Patterns flattening vector transfers to 1D
This is the second part of https://reviews.llvm.org/D114993 after it was split
into 2 independent commits.

This is needed at the moment to get good codegen from 2d vector.transfer
ops that aim to compile to SIMD load/store instructions but that can
only do so if the whole 2d transfer shape is handled in one piece, in
particular taking advantage of the memref being contiguous row-major.

For instance, if the target architecture has 128-bit SIMD then we would
expect that contiguous row-major transfers of <4x4xi8> map to one SIMD
load/store instruction each.

The current generic lowering of multi-dimensional vector.transfer ops
can't achieve that because it peels dimensions one by one, so a transfer
of <4x4xi8> becomes 4 transfers of <4xi8>.

The new patterns here are only enabled for now by
 -test-vector-transfer-flatten-patterns.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D114993
2021-12-13 22:39:41 +00:00
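
A rough before/after sketch of the flattening, assuming a contiguous row-major memref and hypothetical value names:

    // 2-D transfer that the generic lowering would peel into 4 transfers of vector<4xi8>.
    %v = vector.transfer_read %m[%c0, %c0], %pad : memref<4x4xi8>, vector<4x4xi8>
    // The flattening patterns instead rewrite it into a single 1-D transfer of
    // vector<16xi8> on a collapsed view of %m, which maps to one SIMD load.
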
Benoit Jacob 0aea49a730 [mlir][Vector] Patterns flattening vector transfers to 1D
This is the first part of https://reviews.llvm.org/D114993 which has been
split into small independent commits.

This is needed at the moment to get good codegen from 2d vector.transfer
ops that aim to compile to SIMD load/store instructions but that can
only do so if the whole 2d transfer shape is handled in one piece, in
particular taking advantage of the memref being contiguous row-major.

For instance, if the target architecture has 128-bit SIMD then we would
expect that contiguous row-major transfers of <4x4xi8> map to one SIMD
load/store instruction each.

The current generic lowering of multi-dimensional vector.transfer ops
can't achieve that because it peels dimensions one by one, so a transfer
of <4x4xi8> becomes 4 transfers of <4xi8>.

The new patterns here are only enabled for now by
 -test-vector-transfer-flatten-patterns.

Reviewed By: nicolasvasilache
2021-12-13 21:49:04 +00:00
Stella Laurenzo c10995a8ad Re-apply [NFC] Generalize a couple of passes so they can operate on any FunctionLike op.
* Generalizes passes linalg-detensorize, linalg-fold-unit-extent-dims, convert-elementwise-to-linalg.
* I feel that more work could be done in the future (i.e. make FunctionLike into a proper OpInterface and extend actions in dialect conversion to be trait based), and this patch would be a good record of why that is useful.
* Note for downstreams:
  * Since these passes are now generic, they do not automatically nest with pass managers set up for implicit nesting.
  * The Detensorize pass must run on a FunctionLike, and this requires explicit nesting.
* Addressed missed comments from the original review and, per suggestion, removed the assert on FunctionLike in ElementwiseToLinalg and DropUnitDims.cpp, which is also what was causing the integration test to fail.

This reverts commit aa8815e42e.

Differential Revision: https://reviews.llvm.org/D115671
2021-12-13 13:33:00 -08:00
Mehdi Amini aa8815e42e Revert "[NFC] Generalize a couple of passes so they can operate on any FunctionLike op."
This reverts commit 34696e6542.

A test is crashing on the mlir-nvidia bot.
2021-12-13 20:41:25 +00:00
Stella Laurenzo 34696e6542 [NFC] Generalize a couple of passes so they can operate on any FunctionLike op.
* Generalizes passes linalg-detensorize, linalg-fold-unit-extent-dims, convert-elementwise-to-linalg.
* I feel that more work could be done in the future (i.e. make FunctionLike into a proper OpInterface and extend actions in dialect conversion to be trait based), and this patch would be a good record of why that is useful.
* Note for downstreams:
  * Since these passes are now generic, they do not automatically nest with pass managers set up for implicit nesting.
  * If running them over nested functions, you must nest explicitly. Upstream has adopted this style but *-opt still has some uses of implicit pipelines via args. See tests for argument changes needed.

Differential Revision: https://reviews.llvm.org/D115645
2021-12-13 12:01:53 -08:00
gysit 8fc0525a15 [mlir][linalg] Stage application of pad tensor op vectorization.
Adapt the LinalgStrategyVectorizationPattern pass to apply the vectorization patterns in two stages. The change ensures the generic pad tensor op vectorization pattern does not run too early. Additionally, the revision adds the transfer op canonicalization patterns to the set of applied patterns, since they are needed to enable efficient vectorization for rank-reduced convolutions.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115627
2021-12-13 19:49:35 +00:00
Alexander Belyaev 206365bf8f [mlir] Update comments that mention `linalg.collapse/expand` shape. 2021-12-13 20:35:34 +01:00
gysit 6c85a49e22 [mlir][memref] Use current source type in getCanonicalSubViewResultType.
Use the current instead of the new source type to compute the rank-reduction map in getCanonicalSubViewResultType. Otherwise, the computation of the rank-reduction map fails when folding a cast into a subview since the strides of the new source type cannot be related to the strides of the current result type.

Depends On D115428

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115446
2021-12-13 14:50:41 +00:00
Markus Böck 664cc9312c [mlir] Implement `DataLayoutTypeInterface` for `LLVMStructType`
Using this implementation of the interface it is possible to query the size, ABI alignment as well as the preferred alignment of a struct. It should yield the same results as LLVMs `llvm::DataLayout` on an equivalent `llvm::StructType`, including for packed structs.

Additionally, it is also possible to increase the ABI and preferred alignment using a data layout entry with the type `llvm.struct<()>`, which serves the same functionality as the `a:` component in LLVM's data layout string.

Differential Revision: https://reviews.llvm.org/D115600
2021-12-13 15:09:16 +01:00
gysit db7a2e9176 [mlir][linalg] Only compose PadTensorOps if no ExtractSliceOp is rank-reducing.
Do not compose pad tensor operations if the extract slice of the outer pad tensor operation is rank-reducing. The inner extract slice op cannot be rank-reducing since its source type must match the desired type of the padding.

Depends On D115359

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115428
2021-12-13 13:01:30 +00:00
gysit 6859f8ed1e [mlir][linalg] Adapt the PadTensorOpVectorizationWithInsertSlicePattern matching.
Tighten the matcher of the PadTensorOpVectorizationWithInsertSlicePattern pattern. Only match if the PadOp result is used by the InsertSliceOp source. Fail if the result is used by the InsertSliceOp dest.

Depends On D115336

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115359
2021-12-13 12:55:07 +00:00
gysit f895e95138 [mlir][linalg] Make padding work for rank-reducing slice ops.
Adapt the computation of a static bounding box to take rank-reducing slice operations into account by filtering out reduced size-one dimensions. The revision is needed to make padding work for decomposed convolution operations. The decomposition introduces rank-reducing extract slice operations that previously caused padding to fail.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115336
2021-12-13 12:34:20 +00:00
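
For reference, a hypothetical rank-reducing extract slice of the kind introduced by the decomposition (the unit dimension is dropped from the result type):

    %s = tensor.extract_slice %t[0, 0, 0] [1, 4, 4] [1, 1, 1]
        : tensor<1x8x8xf32> to tensor<4x4xf32>
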
Nicolas Vasilache 408553dd96 [mlir][Vector] Support 0-D vectors in `CreateMaskOp`
The 0-D case gets lowered in almost the same way that the 1-D case does
in VectorCreateMaskOpConversion. I also had to slightly update the
verifier for the op to always require exactly 1 operand in the 0-D case.

Depends On D115220

Reviewed by: ftynse

Differential revision: https://reviews.llvm.org/D115221
2021-12-12 13:32:29 +00:00
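
A minimal sketch of the 0-D form, assuming the 0-D vector spelling `vector<i1>` and a hypothetical index value:

    // Exactly one operand is required in the 0-D case.
    %mask = vector.create_mask %dim : vector<i1>
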
Nicolas Vasilache f2e945a393 Revert "[mlir][tensor] Fix insert_slice + tensor cast overflow"
This reverts commit 5601821dae.

The prefix + canonical complete behavior is actually obsolete and should not be reintroduced.
Reverting.
2021-12-10 22:53:52 +00:00
Thomas Raoux f56933b263 [mlir][vector] NFC move vector unroll/distribute patterns to their own file
Differential Revision: https://reviews.llvm.org/D115548
2021-12-10 14:00:13 -08:00
Uday Bondhugula bc657b2eef [MLIR][NFC] Move out affine scalar replacement utility to affine utils
NFC. Move out and expose the affine scalar replacement utility through
affine utils. Rename the misleading forwardStoreToLoad ->
affineScalarReplace. Update a stale doc comment.

Differential Revision: https://reviews.llvm.org/D115495
2021-12-11 03:26:42 +05:30
Nicolas Vasilache 5601821dae [mlir][tensor] Fix insert_slice + tensor cast overflow
InsertSliceOp may have subprefix semantics where missing trailing dimensions
are automatically inferred directly from the operand shape.
This revision fixes an overflow that occurs in such cases when the implementation is based on the op rank.

Differential Revision: https://reviews.llvm.org/D115549
2021-12-10 21:41:26 +00:00
River Riddle 233e9476d8 [mlir:PDL] Allow non-bound pdl.attribute/pdl.type operations that create constants
This allows for passing in these attributes/types to constraints/rewrites as arguments.

Differential Revision: https://reviews.llvm.org/D114817
2021-12-10 19:38:43 +00:00
Alexander Belyaev b618880e7b [mlir] Move `linalg.tensor_expand/collapse_shape` to TensorDialect.
RFC: https://llvm.discourse.group/t/rfc-reshape-ops-restructuring/3310

linalg.fill gets a canonicalizer, because `FoldFillWithTensorReshape` cannot be moved to tensorops (it uses linalg::FillOp inside). Before, it was listed as a canonicalization pattern for the reshape operations; now it is a canonicalization for FillOp.

Differential Revision: https://reviews.llvm.org/D115502
2021-12-10 12:11:48 +01:00
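
After the move, the reshape ops are spelled in the tensor dialect; a small sketch with hypothetical values:

    %c = tensor.collapse_shape %t [[0, 1]] : tensor<2x3xf32> into tensor<6xf32>
    %e = tensor.expand_shape %c [[0, 1]] : tensor<6xf32> into tensor<2x3xf32>
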
Rob Suderman 46c96fca0e [mlir][tosa] Fix quantized type for tosa.conv2d canonicalization
The wrong type was used for the result type in the tosa.conv2d canonicalization.
The element type should match the result element type, not the input element
type.

Differential Revision: https://reviews.llvm.org/D115463
2021-12-09 12:39:23 -08:00
MaheshRavishankar 9cfd8d7c6c [mlir][Vector] Avoid infinite loop in InnerOuterDimReductionConversion.
This pattern tries to convert an inner (outer) dim reduction to an
outer (inner) dim reduction. Doing this on a 1D or 0D vector results
in an infinite loop since the converted op is the same as the original
operation. Returning failure when the source rank is <= 1 fixes the
issue.

Differential Revision: https://reviews.llvm.org/D115426
2021-12-09 09:30:05 -08:00
Krzysztof Drewniak e1da62910e [MLIR][GPU] Define gpu.printf op and its lowerings
- Define a gpu.printf op, which can be lowered to any GPU printf() support (which is present in CUDA, HIP, and OpenCL). This op only supports constant format strings and scalar arguments (see the sketch after this entry).
- Define the lowering of gpu.printf to a call to printf() (which is what is required for AMD GPUs when using OpenCL) as well as to the hostcall interface present in the AMD Open Compute device library, which is the interface present when kernels are running under HIP.
- Add a "runtime" enum that allows specifying which of the possible runtimes a ROCDL kernel will be executed under or that the runtime is unknown. This enum controls how gpu.printf is lowered

This change does not enable lowering for Nvidia GPUs, but such a lowering should be possible in principle.

And:
[MLIR][AMDGPU] Always set amdgpu-implicitarg-num-bytes=56 on kernels

This is something that Clang always sets on both OpenCL and HIP kernels, and failing to include it causes mysterious crashes with printf() support.

In addition, revert the max-flat-work-group-size to (1, 256) to avoid triggering bugs in the AMDGPU backend.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D110448
2021-12-09 15:54:31 +00:00
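
A rough sketch of a gpu.printf call in generic form, assuming the format string is carried as a `format` attribute (attribute name and assembly syntax may differ from the final op definition):

    // Prints a single scalar argument using a constant format string.
    "gpu.printf"(%val) {format = "val = %d"} : (i32) -> ()
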
Eugene Zhulenev 49ce40e9ab [mlir] AsyncParallelFor: align block size to be a multiple of inner loops iterations
Depends On D115263

By aligning the block size to the inner loop iterations of parallel_compute_fn, LLVM can later unroll and vectorize some of the inner loops with small trip counts. Up to 2x speedup in multiple benchmarks.

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D115436
2021-12-09 06:50:50 -08:00
Eugene Zhulenev 9f151b784b [mlir] AsyncParallelFor: sink constants into the parallel compute function
With the complex recursive structure of the async dispatch function, LLVM can't always propagate constants to the parallel_compute_fn, which often prevents optimizations like loop unrolling and vectorization. We help LLVM by pushing known constants into the parallel_compute_fn explicitly.

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D115263
2021-12-09 06:48:23 -08:00
Matthias Springer cc45a13422 [mlir][linalg][bufferize] LinalgOps can bufferize inplace with input args
LinalgOp results usually bufferize inplace with output args. With this change, they may bufferize inplace with input args if the value of the output arg is not used in the computation.

Differential Revision: https://reviews.llvm.org/D115022
2021-12-09 21:54:54 +09:00