Commit Graph

904 Commits

Author SHA1 Message Date
Lei Zhang 533ec929f6 [mlir][spirv] Add pattern to lower math.copysign
This follows the logic:
https://git.musl-libc.org/cgit/musl/tree/src/math/copysignf.c
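
A sketch of the bit-level logic being lowered, written with arith ops for readability (the actual pattern emits the corresponding SPIR-V ops; the function name is made up):
```
// Illustrative only: copysign(x, y) keeps the magnitude of %x and the sign of %y.
func.func @copysign_sketch(%x: f32, %y: f32) -> f32 {
  %abs_mask  = arith.constant 2147483647 : i32   // 0x7fffffff
  %sign_mask = arith.constant -2147483648 : i32  // 0x80000000
  %xi = arith.bitcast %x : f32 to i32
  %yi = arith.bitcast %y : f32 to i32
  %mag  = arith.andi %xi, %abs_mask : i32
  %sign = arith.andi %yi, %sign_mask : i32
  %combined = arith.ori %mag, %sign : i32
  %result = arith.bitcast %combined : i32 to f32
  return %result : f32
}
```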

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D122910
2022-04-01 12:06:47 -04:00
Mehdi Amini ba43d6f85c Revert "[GreedPatternRewriter] Preprocess constants while building worklist when not processing top down"
This reverts commit 59bbc7a085.

This exposes an issue breaking the contract of
`applyPatternsAndFoldGreedily` where we "converge" without applying
remaining patterns.
2022-04-01 06:16:55 +00:00
River Riddle 59bbc7a085 [GreedPatternRewriter] Preprocess constants while building worklist when not processing top down
This avoids accidentally reversing the order of constants during successive
application, e.g. when running the canonicalizer. This helps reduce the number
of iterations, and also avoids unnecessary changes to input IR.

Fixes #51892

Differential Revision: https://reviews.llvm.org/D122692
2022-03-31 12:08:55 -07:00
jacquesguan 01ad70fd1d [mlir][Vector] Fold ShuffleOp if result is identical to one of source vectors.
For example, we could do the following eliminations:
  fold vector.shuffle V1, V2, [0, 1, 2, 3] : <4xi32>, <2xi32> -> V1
  fold vector.shuffle V1, V2, [4, 5] : <4xi32>, <2xi32> -> V2

Differential Revision: https://reviews.llvm.org/D122706
2022-03-31 10:46:13 +08:00
Javier Setoain 7bc8ad5109 [mlir][vector][nfc] Rename index optimizations option
We are using "enable-index-optimizations" and "indexOptimizations" as
names for an optimization that consists of using i32 for indices within
a vector. For instance, when building a vector comparison for mask
generation. The name is confusing and suggests a scope beyond these
vector indices.  This change makes the function of the option explicit
in its name.

Differential Revision: https://reviews.llvm.org/D122415
2022-03-29 11:33:22 +01:00
xndcn c0ccb69228 [mlir][spirv] Convert func.call to spv.FunctionCall
Differential Revision: https://reviews.llvm.org/D122368
2022-03-26 19:21:23 +08:00
Javier Setoain a75a46db89 [mlir][Vector] Enable create_mask for scalable vectors
The way vector.create_mask is currently lowered is
vector-length-dependent, and therefore incompatible with scalable vector
types. This patch adds an alternative lowering path for create_mask
operations that return a scalable vector mask.
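
For example (illustrative sketch, made-up function name):
```
func.func @masks(%n: index) {
  // Fixed-length mask: the first %n of 4 lanes are set.
  %m0 = vector.create_mask %n : vector<4xi1>
  // Scalable mask: the first %n of (vscale x 4) lanes are set; this is the
  // case the new lowering path addresses.
  %m1 = vector.create_mask %n : vector<[4]xi1>
  return
}
```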

Differential Revision: https://reviews.llvm.org/D118248
2022-03-25 10:48:59 +00:00
Thomas Raoux d77f483640 [mlir][gpu] Relax restriction on mma load/store op
Those ops can support more complex layouts as long as the innermost
dimension is contiguous.

Differential Revision: https://reviews.llvm.org/D122452
2022-03-25 04:03:40 +00:00
Shraiysh Vaishay b244bba582 [mlir][OpenMP] Added assembly format for omp.wsloop and remove parseClauses
This patch
 - adds assembly format for `omp.wsloop` operation
 - removes `parseClauses` as it is no longer required

This is expected to be the final patch in a series of patches for replacing
parsers for clauses with `oilist`.

Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D121367
2022-03-23 10:02:02 +05:30
River Riddle 9595f3568a [mlir:PDL] Remove the ConstantParams support from native Constraints/Rewrites
This support has never really worked well, and is incredibly clunky to
use (it effectively creates two argument APIs), and clunky to generate (it isn't
clear how we should actually expose this from PDL frontends). Treating these
as just attribute arguments is much cleaner in every aspect of the stack.
If we need to optimize lots of constant parameters, it would be better to
investigate internal representation optimizations (e.g. batch attribute creation),
that do not affect the user (we want a clean external API).

Differential Revision: https://reviews.llvm.org/D121569
2022-03-19 13:28:24 -07:00
River Riddle 3655069234 [mlir] Move the Builtin FuncOp to the Func dialect
This commit moves FuncOp out of the builtin dialect, and into the Func
dialect. This move has been planned in some capacity from the moment
we made FuncOp an operation (years ago). This commit handles the
functional aspects of the move, but various aspects are left untouched
to ease migration: func::FuncOp is re-exported into mlir to reduce
the actual API churn, and the assembly format still accepts the unqualified
`func`. These temporary measures will remain for a little while to
simplify migration before being removed.
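
Illustrative sketch (made-up function names): both op spellings are accepted during the migration window:
```
// The qualified spelling and the temporarily accepted unqualified one.
func.func @qualified() {
  func.return
}
func @unqualified() {
  func.return
}
```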

Differential Revision: https://reviews.llvm.org/D121266
2022-03-16 17:07:03 -07:00
Matthias Springer 8dca38d534 [mlir][bufferize] Support layout maps in bufferization.clone lowering
Differential Revision: https://reviews.llvm.org/D121278
2022-03-16 22:29:22 +09:00
Sam Carroll 541d89b02c [mlir] Fix --convert-func-to-llvm=emit-c-wrappers argument and result attribute handling
When using `--convert-func-to-llvm=emit-c-wrappers` the attribute arguments of the wrapper would not be created correctly in some cases.
This patch fixes that and introduces a set of tests for (hopefully) all corner cases.

See https://github.com/llvm/llvm-project/issues/53503

Author: Sam Carroll <sam.carroll@lmns.com>
Co-Author: Laszlo Kindrat <laszlo.kindrat@lmns.com>

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D119895
2022-03-15 15:29:43 +01:00
Ivan Butygin 9f864a5447 [mlir][gpu] Introduce gpu.global_id op
Introduce OpenCL-style global_id op and corresponding spirv lowering.
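
A sketch of the semantics (not from the patch; shown with existing gpu/arith ops, made-up function name):
```
// gpu.global_id x is the linearized per-thread id along the x dimension,
// i.e. block_id * block_dim + thread_id.
func.func @global_id_equiv() -> index {
  %bid  = gpu.block_id x
  %bdim = gpu.block_dim x
  %tid  = gpu.thread_id x
  %mul  = arith.muli %bid, %bdim : index
  %gid  = arith.addi %mul, %tid : index
  return %gid : index
}
```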

Differential Revision: https://reviews.llvm.org/D121548
2022-03-15 13:25:50 +03:00
gysit 7294be2b8e [mlir][linalg] Replace linalg.fill by OpDSL variant.
The revision removes the linalg.fill operation and renames the OpDSL generated linalg.fill_tensor operation to replace it. After the change, all named structured operations are defined via OpDSL and there are no handwritten operations left.

A side-effect of the change is that the pretty printed form changes from:
```
%1 = linalg.fill(%cst, %0) : f32, tensor<?x?xf32> -> tensor<?x?xf32>
```
to:
```
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x?xf32>) -> tensor<?x?xf32>
```
Additionally, the builder signature now takes input and output value ranges, as is the case for all other OpDSL operations:
```
rewriter.create<linalg::FillOp>(loc, val, output)
```
changes to
```
rewriter.create<linalg::FillOp>(loc, ValueRange{val}, ValueRange{output})
```
All other changes remain minimal. In particular, the canonicalization patterns are the same and the `value()`, `output()`, and `result()` methods are now implemented by the FillOpInterface.

Depends On D120726

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D120728
2022-03-14 10:51:08 +00:00
Ivan Butygin 4df9544108 [mlir][spirv] Make EntryPointABIAttr.local_size optional
* It isn't required by OpenCL/Intel Level Zero and can be set programmatically.
* Add GPU-to-SPIR-V lowering for the case when the attribute is not present.
* Set a higher benefit for the WorkGroupSizeConversion pattern so it always tries to lower from the attribute first.

Differential Revision: https://reviews.llvm.org/D120399
2022-03-11 22:25:23 +03:00
River Riddle 47f175b09b [mlir] Update FuncOp conversion passes to Pass/InterfacePass<FunctionOpInterface>
These passes generally don't rely on any special aspects of FuncOp, and moving allows
for these passes to be used in many more situations. The passes that obviously weren't
relying on invariants guaranteed by a "function" were updated to be generic passes; the
rest were updated to be FunctionOpInterface InterfacePasses.

The test updates are NFC, switching from the implicit nesting (-pass -pass2) form to
the -pass-pipeline form (generic passes do not implicitly nest as op-specific passes do).

Differential Revision: https://reviews.llvm.org/D121190
2022-03-08 12:25:32 -08:00
Javier Setoain f2b89c7ae0 [mlir][Vector] Use create_mask in transfer mask materializations
Currently, the transfer mask is materialized by generating the vector
comparison: [offset + 0, .., offset + length - 1] < [dim, .., dim]

A better alternative is to materialize the transfer mask by using the
operation: `vector.create_mask (dim - offset)`, which will generate
simpler code and compose better with scalable vectors.
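
A sketch of the new materialization for a 1-D transfer (made-up function name; the actual pattern is driven by the transfer op's shapes):
```
// A single create_mask on (dim - offset) replaces the vector compare
// against a broadcasted dim.
func.func @transfer_mask(%dim: index, %offset: index) -> vector<8xi1> {
  %len  = arith.subi %dim, %offset : index
  %mask = vector.create_mask %len : vector<8xi1>
  return %mask : vector<8xi1>
}
```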

Differential Revision: https://reviews.llvm.org/D120487
2022-03-08 09:02:50 +00:00
River Riddle 5a7b919409 [mlir][NFC] Rename StandardToLLVM to FuncToLLVM
The current StandardToLLVM conversion patterns only really handle
the Func dialect. The pass itself adds patterns for Arithmetic/CFToLLVM, but
those should be/will be split out in a followup. This commit focuses solely
on being an NFC rename.

Aside from the directory change, the pattern and pass creation API have been renamed:
 * populateStdToLLVMFuncOpConversionPattern -> populateFuncToLLVMFuncOpConversionPattern
 * populateStdToLLVMConversionPatterns -> populateFuncToLLVMConversionPatterns
 * createLowerToLLVMPass -> createConvertFuncToLLVMPass

Differential Revision: https://reviews.llvm.org/D120778
2022-03-07 11:25:23 -08:00
River Riddle 3ba66435d9 [mlir][SPIRV] Split up StandardToSPIRV now that the Standard dialect is gone
StandardToSPIRV currently contains an assortment of patterns converting from
different dialects to SPIRV. This commit splits up StandardToSPIRV into separate
conversions for each of the dialects involved (some of which already exist).

Differential Revision: https://reviews.llvm.org/D120767
2022-03-02 13:14:36 -08:00
natashaknk 8d7a833eed [tosa][mlir] Add support for dynamic width/height for Conv2D inputs in tosa-to-linalg
Infers output shape for dynamic width/height inputs.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D119977
2022-03-02 12:16:35 -08:00
William S. Moses bf6477ebeb [MLIR][OpenMP] Place alloca scope within wsloop in scf.parallel to omp lowering
https://reviews.llvm.org/D120423 replaced the use of stacksave/restore with memref.alloca_scope, but kept the save/restore at the same location. This PR places the allocation scope within the wsloop, thus keeping the same allocation scope as the original scf.parallel (i.e. no longer over-allocating stack memory).

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D120772
2022-03-02 12:46:58 -05:00
River Riddle 23aa5a7446 [mlir] Rename the Standard dialect to the Func dialect
The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:

* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect

See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D120624
2022-03-01 12:10:04 -08:00
William S. Moses 78fb4f9d5d [SCF][MemRef] Enable SCF.Parallel Lowering to use Scope Op
As discussed in https://reviews.llvm.org/D119743, scf.parallel would continuously stack allocate since the alloca op was placed in the wsloop rather than the omp.parallel. This PR is the second stage of the fix for that problem. Specifically, we now introduce an alloca scope around the inlined body of the scf.parallel and enable a canonicalization to hoist the allocations to the surrounding allocation scope (e.g. omp.parallel).

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D120423
2022-03-01 13:25:09 -05:00
Lei Zhang c809c9bd3b [mlir][spirv] Convert gpu.barrier to spv.ControlBarrier
Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D120722
2022-03-01 12:04:00 -05:00
Shraiysh Vaishay 5ee500acbb [mlir][OpenMP] Remove clauses that are not being handled
This patch removes the following clauses from OpenMP Dialect:

 - private
 - firstprivate
 - lastprivate
 - shared
 - default
 - copyin
 - copyprivate

The privatization clauses are being handled in the flang frontend. The
data copying clauses are not being handled anywhere for now. Once
we have a better picture of how to handle these clauses in OpenMP
Dialect, we can add these. For the time being, removing unneeded
clauses.

For detailed discussion about this refer to [[ https://discourse.llvm.org/t/rfc-privatisation-in-openmp-dialect/3526 | Privatisation in OpenMP dialect ]]

Reviewed By: kiranchandramohan, clementval

Differential Revision: https://reviews.llvm.org/D120029
2022-02-19 01:13:05 +05:30
William S. Moses 670aeece51 [MLIR][OpenMP][SCF] Mark parallel regions as allocation scopes
MLIR has the notion of allocation scopes which specify that stack allocations (e.g. memref.alloca, llvm.alloca) should be freed or equivalently aren't available at the end of the corresponding region.
Currently neither OpenMP parallel nor SCF parallel regions have the notion of such a scope.

This clearly makes sense for an OpenMP parallel, as it is implemented with a new function that outlines the region, and any allocations in that newly outlined function have a lifetime that ends at the return of the function, by definition.

While SCF.parallel doesn't come with a guaranteed runtime implementation, this similarly makes sense for SCF.parallel since otherwise an allocation within an SCF.parallel will needlessly continue to consume stack memory that isn't cleaned up until the function (or other allocation-scope op) containing the SCF.parallel returns. This means that it is impossible to represent thread- or iteration-local memory without causing a stack blow-up. In the case that this stack blow-up behavior is intended, it can be equivalently represented with an allocation outside of the SCF.parallel with a size equal to the number of iterations.
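
As an illustrative sketch (not from the patch; made-up names), an iteration-local buffer like the one below stops accumulating stack memory once scf.parallel is an allocation scope:
```
func.func @iteration_local(%n: index) {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  scf.parallel (%i) = (%c0) to (%n) step (%c1) {
    // Iteration-local scratch; reclaimed per iteration once scf.parallel
    // is an AutomaticAllocationScope.
    %scratch = memref.alloca() : memref<16xf32>
    scf.yield
  }
  return
}
```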

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D119743
2022-02-18 11:06:32 -05:00
Benjamin Kramer 27cd2a6284 [mlir][MemRef] Lower memref.copy with an offset to memcpy
memcpy can handle them as long as they're contiguous.

Differential Revision: https://reviews.llvm.org/D119938
2022-02-16 17:18:31 +01:00
Ivan Butygin 32389d0c2e [mlir][spirv] Add OpenCL fma op and lowering
Also, it seems Khronos has changed the HTML spec format, so a small adjustment to the script was needed.
Base op parsing is also probably broken.

Differential Revision: https://reviews.llvm.org/D119678
2022-02-15 11:28:20 +03:00
Adrian Kuegel 2219f9f57c [mlir][MemRef] Fix MemRefCopyOpLowering to use correct number of bytes
When lowering to a memrefCopy call, the size for the i1 type was calculated as 0.
Instead of using getTypeSizeInBits() and dividing by 8 (for i1 this truncates 1 bit to 0 bytes), we should just use getTypeSize().

Differential Revision: https://reviews.llvm.org/D119540
2022-02-11 13:59:08 +01:00
Adrian Kuegel 5b02a48085 [mlir][MemRef] Fix MemRefCastOpLowering for 32 bit index type.
The lowering creates llvm.insertvalue with the rank value, so it needs to use
index type instead of 64 bit integer type. Otherwise, we get an error:

'llvm.insertvalue' op Type mismatch: cannot insert 'i64' into '!llvm.struct<(i32, ptr<i8>)>'

Differential Revision: https://reviews.llvm.org/D119534
2022-02-11 12:37:15 +01:00
Thomas Raoux 5ab04bc068 [mlir][gpu] Add device side async copy operations
Add new operations to the gpu dialect to represent device-side
asynchronous copies. This also adds the lowering of those operations to the
nvvm dialect.
Those ops are meant to be low level and map directly to llvm dialects
like nvvm or rocdl.

We can further add higher levels of abstraction by building on top of
those operations.
This has been discussed here:
https://discourse.llvm.org/t/modeling-gpu-async-copy-ampere-feature/4924

Differential Revision: https://reviews.llvm.org/D119191
2022-02-10 17:25:59 -08:00
Matthias Springer fe0bf7d469 [mlir][vector][NFC] Use CombiningKindAttr instead of StringAttr
This makes the op consistent with other ops in vector dialect.

Differential Revision: https://reviews.llvm.org/D119343
2022-02-10 19:13:29 +09:00
River Riddle ace01605e0 [mlir] Split out a new ControlFlow dialect from Standard
This dialect is intended to model lower-level, branch-based control-flow constructs. The initial set
of operations is: AssertOp, BranchOp, CondBranchOp, SwitchOp; all split out from the current
standard dialect.
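
For illustration (not from the patch; names made up), the split-out branch ops now carry the `cf.` prefix:
```
// Former std.cond_br / std.br, now spelled cf.cond_br / cf.br.
func.func @pick(%cond: i1, %a: i32, %b: i32) -> i32 {
  cf.cond_br %cond, ^bb1, ^bb2
^bb1:
  cf.br ^bb3(%a : i32)
^bb2:
  cf.br ^bb3(%b : i32)
^bb3(%res: i32):
  return %res : i32
}
```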

See https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D118966
2022-02-06 14:51:16 -08:00
Lei Zhang 36e68c11ad [mlir][spirv] Add support for converting vector.shuffle
Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D119030
2022-02-04 16:31:11 -05:00
River Riddle dec8af701f [mlir] Move SelectOp from Standard to Arithmetic
This is part of splitting up the standard dialect. See https://llvm.discourse.group/t/standard-dialect-the-final-chapter/ for discussion.
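
Illustrative sketch (not from the patch; made-up function name): the op keeps its syntax but now lives under `arith`:
```
// Formerly std.select.
func.func @clamp_to_zero(%x: i32) -> i32 {
  %c0 = arith.constant 0 : i32
  %is_neg = arith.cmpi slt, %x, %c0 : i32
  %res = arith.select %is_neg, %c0, %x : i32
  return %res : i32
}
```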

Differential Revision: https://reviews.llvm.org/D118648
2022-02-02 14:45:12 -08:00
River Riddle 6a8ba3186e [mlir] Split std.splat into tensor.splat and vector.splat
This is part of the larger effort to split the standard dialect. This will also allow for pruning some
additional dependencies on Standard (done in a followup).
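
Illustrative sketch (not from the patch; made-up function name): the former std.splat is now spelled per result type:
```
func.func @splats(%s: f32) -> (tensor<4xf32>, vector<4xf32>) {
  %t = tensor.splat %s : tensor<4xf32>
  %v = vector.splat %s : vector<4xf32>
  return %t, %v : tensor<4xf32>, vector<4xf32>
}
```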

Differential Revision: https://reviews.llvm.org/D118202
2022-02-02 14:45:12 -08:00
Nicolas Vasilache 3c3810e72e [mlir][vector] Avoid hoisting alloca'ed temporary buffers across AutomaticAllocationScope
This revision avoids incorrect hoisting of alloca'd buffers across an AutomaticAllocationScope boundary.
In the more general case, we will probably need a ParallelScope-like interface.

Differential Revision: https://reviews.llvm.org/D118768
2022-02-02 06:00:42 -05:00
Thomas Raoux a57ccad5a6 [VectorToGPU] Fix horizontal stride calculation for N-D memref
Fix a bug in how we calculate the stride of mma load/store ops for N-D
memrefs

Differential Revision: https://reviews.llvm.org/D118378
2022-01-27 13:35:56 -08:00
natashaknk 024a1fab5c [tosa][mlir] Add dynamic shape support for remaining ops
Added support for concat, tile, pad, argmax and table ops

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D118397
2022-01-27 11:25:38 -08:00
Benjamin Kramer 608cc6b163 [mlir][complex] Lower complex.constant to LLVM
This fixes a regression from 480cd4cb85

Differential Revision: https://reviews.llvm.org/D118347
2022-01-27 13:48:23 +01:00
River Riddle 632a4f8829 [mlir] Move std.generic_atomic_rmw to the memref dialect
This is part of splitting up the standard dialect. The move makes sense anyway,
given that the memref dialect already holds memref.atomic_rmw, which is the non-region
sibling operation of std.generic_atomic_rmw (the relationship is even more clear given
they have nearly the same description modulo how they represent the inner computation).
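
Illustrative sketch of the two sibling forms after the move (made-up function name):
```
func.func @atomics(%buf: memref<10xf32>, %i: index, %delta: f32) -> f32 {
  // Non-region sibling: a fixed set of combining kinds (here: addf).
  %r0 = memref.atomic_rmw addf %delta, %buf[%i] : (f32, memref<10xf32>) -> f32
  // Region form: the update is an arbitrary computation over the current value.
  %r1 = memref.generic_atomic_rmw %buf[%i] : memref<10xf32> {
  ^bb0(%current: f32):
    %updated = arith.addf %current, %delta : f32
    memref.atomic_yield %updated : f32
  }
  return %r1 : f32
}
```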

Differential Revision: https://reviews.llvm.org/D118209
2022-01-26 11:52:01 -08:00
Chuanqi Xu dbbe010908 [MLIR] [AsyncToLLVM] Use llvm.coro.align intrinsic
Use llvm.coro.align to align the coroutine frame properly.

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D117978
2022-01-25 19:28:25 +08:00
harsh e01e4c9115 Fix bugs in GPUToNVVM lowering
The current lowering from GPU to NVVM does
not correctly handle the following cases when
lowering the gpu shuffle op.

1. When the active width is set to 32 (all lanes),
the current approach computes (1 << 32) - 1, which
results in poison values in the LLVM IR. We fix this by
defining the active mask as (-1) >> (32 - width); see the sketch after this list.

2. In the case of shuffle up, the computation of the third
operand c has to be different from the other 3 modes due to
the op definition in the ISA reference.
(https://docs.nvidia.com/cuda/parallel-thread-execution/index.html)
Specifically, the predicate value is computed as j >= maxLane
for up and j <= maxLane for all other modes. We fix this by
computing maskAndClamp as 32 - width for this mode.
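
An illustrative sketch of the corrected active-mask computation from case 1, written with arith ops (the lowering itself emits the NVVM/LLVM dialect equivalents; the function name is made up):
```
// activeMask = (-1) >> (32 - width); for width == 32 this yields all ones
// instead of the poison produced by (1 << 32) - 1.
func.func @active_mask(%width: i32) -> i32 {
  %c32  = arith.constant 32 : i32
  %cm1  = arith.constant -1 : i32
  %diff = arith.subi %c32, %width : i32
  %mask = arith.shrui %cm1, %diff : i32
  return %mask : i32
}
```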

TEST: We modify the existing test and add more checks for the up mode.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D118086
2022-01-25 03:24:14 +00:00
Rob Suderman 3e746c6d9e [mlir] Add support for ExpM1 to GLSL/OpenCL SPIRV Backends
Adding a similar decomposition for exponential minus one to the SPIRV
backends along with the necessary tests.
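
The decomposition itself, sketched here with math/arith ops for illustration (the actual patterns emit the corresponding GLSL/OpenCL extended-instruction ops; the function name is made up):
```
// expm1(x) decomposes to exp(x) - 1.
func.func @expm1_decomp(%x: f32) -> f32 {
  %exp = math.exp %x : f32
  %one = arith.constant 1.0 : f32
  %res = arith.subf %exp, %one : f32
  return %res : f32
}
```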

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D118081
2022-01-24 15:38:34 -08:00
Alexander Belyaev fd0c6f5391 [mlir] Move linalg::PadTensorOp to tensor::PadOp.
RFC: https://llvm.discourse.group/t/rfc-move-linalg-padtensorop-to-tensor-padop/5785
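
A small illustrative example of the op in its new home (made-up function name and shapes):
```
// Pads a 4x4 tensor by 1/2 in dim 0 (low/high) and 2/3 in dim 1, yielding 7x9.
func.func @pad_example(%t: tensor<4x4xf32>, %pad_value: f32) -> tensor<7x9xf32> {
  %padded = tensor.pad %t low[1, 2] high[2, 3] {
  ^bb0(%i: index, %j: index):
    tensor.yield %pad_value : f32
  } : tensor<4x4xf32> to tensor<7x9xf32>
  return %padded : tensor<7x9xf32>
}
```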

Differential Revision: https://reviews.llvm.org/D117892
2022-01-21 20:02:39 +01:00
Lei Zhang 4710750854 [mlir][spirv] Support size-1 vector inserts during conversion
Differential Revision: https://reviews.llvm.org/D115517
2022-01-21 13:56:26 -05:00
Mogball e99835ffed [mlir][pdl] Make `pdl` the default dialect when parsing/printing
PDLDialect, being a somewhat user-facing dialect whose ops contain exclusively other PDL ops in their regions, can take advantage of `OpAsmOpInterface` to provide nicer IR.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D117828
2022-01-20 20:22:53 +00:00
Mogball 7c471b56f2 [mlir][pdl] OperationOp should not be side-effect free
An unbound OperationOp in the matcher (i.e. one with no uses) is already disallowed by the verifier. However, an OperationOp in the rewriter is not side-effect free -- it's creating an op!

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D117825
2022-01-20 20:22:01 +00:00
River Riddle d75c3e8396 [mlir] Don't print `// no predecessors` on entry blocks
Entry blocks can never have predecessors, so this is unnecessary.

Fixes #53287

Differential Revision: https://reviews.llvm.org/D117713
2022-01-19 15:57:58 -08:00