Commit Graph

3441 Commits

Author SHA1 Message Date
Alexander Belyaev 80be54c08f [mlir] Lower Shape binary ops (AddOp, MulOp) to Standard.
Differential Revision: https://reviews.llvm.org/D81344
2020-06-08 17:48:01 +02:00
Wen-Heng (Jack) Chung 603b974cf7 [mlir][gpu] Fix logic error in D79508 computing number of private attributions.
Fix logic error in D79508. The old logic would make the first check in
`GPUFuncOp::verifyBody` always pass.
2020-06-08 07:40:34 -05:00
Frederik Gossen 215914151e [MLIR][Shape] Add support for `OpAsmInterface` in `shape.const_size`
The SSA values created by `shape.const_size` are now named after their value.
A constant size of 3, for example, is automatically named `%c3`.

Differential Revision: https://reviews.llvm.org/D81249
2020-06-08 10:27:28 +00:00
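
As a rough illustration of how such result naming is typically hooked up via `OpAsmOpInterface` (a hedged sketch, not the exact code of this commit; the `value()` accessor and the buffer handling are assumptions):

```cpp
// Hypothetical sketch: name shape.const_size results after their value,
// e.g. a constant size of 3 becomes %c3. Assumes the op exposes its
// constant as an APInt via value().
void ConstSizeOp::getAsmResultNames(
    function_ref<void(Value, StringRef)> setNameFn) {
  llvm::SmallString<8> buffer;
  llvm::raw_svector_ostream os(buffer);
  os << "c" << value();
  setNameFn(getResult(), os.str());
}
```
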
Alexander Belyaev 250dcf61ae Revert "Revert "[MLIR] Lower shape.num_elements -> shape.reduce.""
This reverts commit a25f5cd70c.

Now the build with `-DBUILD_SHARED_LIBS=ON` is fixed.
2020-06-08 12:19:54 +02:00
Frederik Gossen 970bb4a291 [MLIR] Add `to/from_extent_tensor` lowering to the standard dialect
The operations `to_extent_tensor` and `from_extent_tensor` become no-ops when
lowered to the standard dialect.
This is possible with a lowering from `shape.shape` to `tensor<?xindex>`.

Differential Revision: https://reviews.llvm.org/D81162
2020-06-08 09:38:18 +00:00
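
A no-op lowering like this usually boils down to a conversion pattern that forwards the already type-converted operand. A minimal sketch, assuming the `ToExtentTensorOp` class name and the pre-adaptor `ArrayRef<Value>` pattern signature of that period (headers and pattern registration elided):

```cpp
// Minimal sketch of a "no-op" conversion: once shape.shape is converted to
// tensor<?xindex>, to_extent_tensor can simply forward its operand.
struct ToExtentTensorOpConversion
    : public OpConversionPattern<shape::ToExtentTensorOp> {
  using OpConversionPattern::OpConversionPattern;

  LogicalResult
  matchAndRewrite(shape::ToExtentTensorOp op, ArrayRef<Value> operands,
                  ConversionPatternRewriter &rewriter) const override {
    // The converted operand already has type tensor<?xindex>.
    rewriter.replaceOp(op, operands.front());
    return success();
  }
};
```

The `from_extent_tensor` side is symmetric.
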
Frederik Gossen 867bc41e85 [MLIR] Add type conversion for `shape.shape`
Convert `shape.shape` to `tensor<?xindex>` when lowering the `shape` to the
`std` dialect.

Differential Revision: https://reviews.llvm.org/D81161
2020-06-08 09:34:03 +00:00
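
For context, a hedged sketch of the type-conversion direction described above; the exact converter used by the pass may differ, and `shape::ShapeType` is the assumed C++ name for `!shape.shape`:

```cpp
// Sketch: map !shape.shape to tensor<?xindex>, leave other types untouched.
class ShapeToStandardTypeConverter : public TypeConverter {
public:
  ShapeToStandardTypeConverter(MLIRContext *ctx) {
    addConversion([](Type type) { return type; });
    addConversion([ctx](shape::ShapeType type) -> Type {
      return RankedTensorType::get({ShapedType::kDynamicSize},
                                   IndexType::get(ctx));
    });
  }
};
```
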
Frederik Gossen 24edbdf99b [MLIR] Clean up `shape` to `std` lowering
Apply post-commit suggestions to the new lowering.

Differential Revision: https://reviews.llvm.org/D81160
2020-06-08 08:59:53 +00:00
Tres Popp 68a8336bf2 Revert "Revert "[mlir] Folding and canonicalization of shape.cstr_eq""
This reverts commit 12e31f6e40.
2020-06-08 10:06:55 +02:00
Tres Popp d216f983e6 Revert "Revert "[mlir] Canonicalization and folding of shape.cstr_broadcastable""
This reverts commit 4261b026ad.
2020-06-08 10:06:55 +02:00
Tres Popp 5d77bd733e [mlir] Restructure Shape dialect's CMakeLists.
Now targets are only built with files in the same directory.

Differential Revision: https://reviews.llvm.org/D81328
2020-06-08 10:06:38 +02:00
Ehsan Toosi 4214031d43 [mlir] Introduce allowMemrefFunctionResults for the helper operation converters of buffer placement
This parameter gives developers the freedom to choose the function signature
conversion used to prepare their functions for buffer placement. It is
introduced for BufferAssignmentFuncOpConverter, as well as for
BufferAssignmentReturnOpConverter and BufferAssignmentCallOpConverter, which
adapt the return and call operations to the selected function signature
conversion. If the parameter is set, buffer placement will also not deallocate
the returned buffers.

Differential Revision: https://reviews.llvm.org/D81137
2020-06-08 09:25:41 +02:00
Mehdi Amini a25f5cd70c Revert "[MLIR] Lower shape.num_elements -> shape.reduce."
This reverts commit e80617df89.

This broke the build with `-DBUILD_SHARED_LIBS=ON`
2020-06-07 19:32:36 +00:00
Alexander Belyaev e80617df89 [MLIR] Lower shape.num_elements -> shape.reduce.
Differential Revision: https://reviews.llvm.org/D81279
2020-06-07 16:39:21 +02:00
Alexander Belyaev 50f68c1e33 [mlir] Add verifier for `shape.yield`.
Differential Revision: https://reviews.llvm.org/D81262
2020-06-07 15:40:11 +02:00
Jacques Pienaar 92cb0ce8f8 [mlir] Change to re-enable cuda-runner tests
The mlir-cuda-runner tests were failing after
https://reviews.llvm.org/D80676; this is a small change to get them passing
again. More cleanup may be needed afterwards.
2020-06-06 09:31:51 -07:00
Tres Popp 4261b026ad Revert "[mlir] Canonicalization and folding of shape.cstr_broadcastable"
This reverts commit 6aab709459.

Some users have failing builds with ShapeCanonicalization.td, so revert
for now.
2020-06-06 11:17:44 +02:00
Tres Popp 12e31f6e40 Revert "[mlir] Folding and canonicalization of shape.cstr_eq"
This reverts commit 0a554e607f.

Some users have build failures when building ShapeCanonicalization.td,
so revert changes that created and rely on it.
2020-06-06 11:08:41 +02:00
Diego Caballero 7d59f49bda [mlir] Fix representation of BF16 constants
This patch is a follow-up on https://reviews.llvm.org/D81127

BF16 constants were represented as 64-bit floating point values due to the lack
of support for BF16 in APFloat. APFloat was recently extended to support BF16,
so this patch fixes the BF16 constant representation to be 16-bit.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D81218
2020-06-05 17:43:06 -07:00
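
A small hedged sketch of what the 16-bit representation means at the API level, assuming the `APFloat::BFloat()` semantics the message refers to:

```cpp
// Sketch: build a bf16 FloatAttr whose payload is a genuine 16-bit bfloat
// value rather than a 64-bit double.
FloatAttr makeBF16Constant(MLIRContext *ctx) {
  Builder b(ctx);
  FloatType bf16 = b.getBF16Type();
  llvm::APFloat value(llvm::APFloat::BFloat(), "1.5");
  return FloatAttr::get(bf16, value);
}
```
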
Nicolas Vasilache b56bf30d3c [mlir][Vector] Add folding of memref_cast into vector_transfer ops
Summary:
This revision adds a common folding pattern that has started appearing on
vector_transfer ops.

Differential Revision: https://reviews.llvm.org/D81281
2020-06-05 13:27:00 -04:00
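
The usual shape of such a folding is to look through the cast on the memref operand; a simplified sketch, not the exact mechanism of this commit (the `memref()` accessor and operand position are assumptions):

```cpp
// Simplified sketch: if the memref operand of a vector.transfer_read comes
// from a memref_cast, read through the cast. The memref is assumed to be
// operand 0 of the transfer op.
struct FoldMemRefCastIntoTransferRead
    : public OpRewritePattern<vector::TransferReadOp> {
  using OpRewritePattern::OpRewritePattern;

  LogicalResult matchAndRewrite(vector::TransferReadOp op,
                                PatternRewriter &rewriter) const override {
    auto castOp =
        dyn_cast_or_null<MemRefCastOp>(op.memref().getDefiningOp());
    if (!castOp)
      return failure();
    rewriter.updateRootInPlace(op, [&] {
      op.getOperation()->setOperand(0, castOp.getOperation()->getOperand(0));
    });
    return success();
  }
};
```
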
Nicolas Vasilache 56ce65e2b6 [mlir][Linalg] NFC - Cleanup debug, address post-commit review. 2020-06-05 13:02:34 -04:00
Nicolas Vasilache 38c407bf00 [mlir][SCF] Add single iteration scf.for promotion to the FuncOp level helper.
Previously only the Affine version would be folded.

Differential Revision: https://reviews.llvm.org/D81261
2020-06-05 11:28:21 -04:00
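
The trip-count-one check behind such a promotion can be sketched as follows (a hedged sketch limited to constant bounds; the actual helper may be more general):

```cpp
// Sketch: an scf.for runs exactly once iff ceil((ub - lb) / step) == 1,
// which is only decidable here when bounds and step are constants.
static bool hasSingleIteration(scf::ForOp forOp) {
  auto lb = dyn_cast_or_null<ConstantIndexOp>(forOp.lowerBound().getDefiningOp());
  auto ub = dyn_cast_or_null<ConstantIndexOp>(forOp.upperBound().getDefiningOp());
  auto step = dyn_cast_or_null<ConstantIndexOp>(forOp.step().getDefiningOp());
  if (!lb || !ub || !step)
    return false;
  int64_t range = ub.getValue() - lb.getValue();
  int64_t stride = step.getValue();
  if (range <= 0 || stride <= 0)
    return false;
  return (range + stride - 1) / stride == 1;
}
```

When the check succeeds, the loop body can be inlined with the induction variable replaced by the lower bound.
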
Wen-Heng (Jack) Chung 2fd6403a6d [mlir][gpu] Introduce mlir-rocm-runner.
Summary:
`mlir-rocm-runner` is introduced in this commit to execute GPU modules on ROCm
platform. A small wrapper to encapsulate ROCm's HIP runtime API is also inside
the commit.

Due to behavior of ROCm, raw pointers inside memrefs passed to `gpu.launch`
must be modified on the host side to properly capture the pointer values
addressable on the GPU.

LLVM MC is used to assemble AMD GCN ISA coming out from
`ConvertGPUKernelToBlobPass` to binary form, and LLD is used to produce a shared
ELF object which could be loaded by ROCm HIP runtime.

gfx900 is the default target used right now, although it can be altered via
an option in `mlir-rocm-runner`. Future revisions may consider using the ROCm
Agent Enumerator to detect the right target on the system.

Note that AMDGPU Code Object V2 is used in this revision. Future enhancements
may upgrade to AMDGPU Code Object V3.

Bitcode libraries in ROCm-Device-Libs, which implement the math routines
exposed in the `rocdl` dialect, are not yet linked; this is left as a TODO in
the logic.

Reviewers: herhut

Subscribers: mgorny, tpr, dexonsmith, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits

Tags: #mlir, #llvm

Differential Revision: https://reviews.llvm.org/D80676
2020-06-05 09:46:39 -05:00
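
For context, a stripped-down sketch of what such a HIP runtime wrapper typically does: load the LLD-produced ELF blob and launch a kernel through HIP's module API. The wrapper names below are made up; only the `hipModule*` calls are the real HIP API, and error handling is elided:

```cpp
#include "hip/hip_runtime.h"

// Hypothetical wrapper functions; the real mlir-rocm-runner wrappers may
// differ in naming and signatures.
extern "C" hipModule_t mgpuModuleLoad(void *elfBlob) {
  hipModule_t module = nullptr;
  hipModuleLoadData(&module, elfBlob);  // load the LLD-produced ELF object
  return module;
}

extern "C" void mgpuLaunchKernel(hipModule_t module, const char *name,
                                 dim3 grid, dim3 block, void **args) {
  hipFunction_t function = nullptr;
  hipModuleGetFunction(&function, module, name);
  hipModuleLaunchKernel(function, grid.x, grid.y, grid.z, block.x, block.y,
                        block.z, /*sharedMemBytes=*/0, /*stream=*/nullptr,
                        args, /*extra=*/nullptr);
}
```
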
HazemAbdelhafez cc2349e3cf [MLIR][SPIRV] Support flat, location, and noperspective decorations
Add support for flat, location, and noperspective decorations in the
serializer and deserializer to be able to process basic shader files
for graphics applications.

Differential Revision: https://reviews.llvm.org/D80837
2020-06-05 08:55:22 -04:00
Nicolas Vasilache 247e185dd5 [mlir][Vector] Move temporary alloc to top of the function alloca when lowering vector_transfers
The recently introduced allocation hoisting is quite conservative about the
cases in which it triggers. This revision hoists the allocations for vector
transfer lowerings to the top of the function.
This should be revisited in the context of parallelism and is a temporary
workaround.

Differential Revision: https://reviews.llvm.org/D81253
2020-06-05 08:45:52 -04:00
Nicolas Vasilache 6b0dfd703a [mlir][Linalg] Add missing CMake dependency on SCFTransforms 2020-06-05 07:38:55 -04:00
Nicolas Vasilache 6953cf6502 [mlir][Linalg] Add a hoistRedundantVectorTransfers helper function
This revision adds a helper function to hoist vector.transfer_read /
vector.transfer_write pairs out of immediately enclosing scf::ForOp
iteratively, if the following conditions are true:
   1. The 2 ops access the same memref with the same indices.
   2. All operands are invariant under the enclosing scf::ForOp.
   3. No uses of the memref either dominate the transfer_read or are
   dominated by the transfer_write (i.e. no aliasing between the write and
   the read across the loop)

To improve hoisting opportunities, call the `moveLoopInvariantCode` helper
function on the candidate loop above which to hoist. Hoisting the transfers
results in scf::ForOp yielding the value that originally transited through
memory.

This revision additionally exposes `moveLoopInvariantCode` as a helper in
LoopUtils.h and updates SliceAnalysis to support scf::ForOp return values and
to allow hoisting across multiple scf::ForOps.

Differential Revision: https://reviews.llvm.org/D81199
2020-06-05 06:50:24 -04:00
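
Condition 1 (and part of condition 2) from the commit above can be sketched as a small predicate; the `memref()` and `indices()` accessors follow the vector dialect of that period, and the check is simplified:

```cpp
// Sketch of the "matching pair" check: a transfer_read and transfer_write
// that access the same memref with the same indices inside the loop.
static bool isHoistableTransferPair(vector::TransferReadOp read,
                                    vector::TransferWriteOp write,
                                    scf::ForOp loop) {
  if (read.memref() != write.memref())
    return false;
  if (!std::equal(read.indices().begin(), read.indices().end(),
                  write.indices().begin(), write.indices().end()))
    return false;
  // Condition 2 (shown for the read indices only): the operands must be
  // invariant under the enclosing loop.
  for (Value index : read.indices())
    if (!loop.isDefinedOutsideOfLoop(index))
      return false;
  return true;
}
```
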
Alexander Belyaev 04fb2b6123 [Mlir] Implement printer, parser, verifier and builder for shape.reduce.
Differential Revision: https://reviews.llvm.org/D81186
2020-06-05 11:25:32 +02:00
Tres Popp 4ffe6bd8a7 [mlir] NFC formatting cleanup. 2020-06-05 11:00:20 +02:00
Tres Popp 655e08ceeb [mlir] Canonicalization of shape.assuming
Summary:
This inlines the region of a shape.assuming when the input witness is found
to be statically true.

Differential Revision: https://reviews.llvm.org/D80302
2020-06-05 11:00:20 +02:00
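
A hedged sketch of what such a canonicalization pattern looks like; the `ConstWitnessOp` class name and its `passing()` accessor are assumptions, and the region-splicing mechanics are simplified:

```cpp
// Sketch: if the witness operand of shape.assuming is a statically true
// constant witness, splice the region's body in front of the op and
// forward the yielded values.
struct AssumingWithTrueWitness : public OpRewritePattern<shape::AssumingOp> {
  using OpRewritePattern::OpRewritePattern;

  LogicalResult matchAndRewrite(shape::AssumingOp op,
                                PatternRewriter &rewriter) const override {
    auto witness = dyn_cast_or_null<shape::ConstWitnessOp>(
        op.getOperation()->getOperand(0).getDefiningOp());
    if (!witness || !witness.passing())
      return failure();

    Block *body = &op.getOperation()->getRegion(0).front();
    Operation *yield = body->getTerminator();
    SmallVector<Value, 4> results(yield->getOperands().begin(),
                                  yield->getOperands().end());
    rewriter.eraseOp(yield);
    rewriter.mergeBlockBefore(body, op);
    rewriter.replaceOp(op, results);
    return success();
  }
};
```
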
Tres Popp 0a554e607f [mlir] Folding and canonicalization of shape.cstr_eq
In the case of all inputs being constant and equal, cstr_eq will be
replaced with a true_witness.

Differential Revision: https://reviews.llvm.org/D80303
2020-06-05 11:00:20 +02:00
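
The rewrite described above has roughly this shape; `ConstShapeOp`'s `shape()` accessor, `ConstWitnessOp`, and its builder signature are assumptions in this sketch:

```cpp
// Sketch: when every operand of shape.cstr_eq is a shape.const_shape and
// all constant shapes are identical, the constraint is statically true.
struct CstrEqStaticallyTrue : public OpRewritePattern<shape::CstrEqOp> {
  using OpRewritePattern::OpRewritePattern;

  LogicalResult matchAndRewrite(shape::CstrEqOp op,
                                PatternRewriter &rewriter) const override {
    Attribute firstShape;
    for (Value operand : op.getOperation()->getOperands()) {
      auto constShape =
          dyn_cast_or_null<shape::ConstShapeOp>(operand.getDefiningOp());
      if (!constShape)
        return failure();
      Attribute shapeAttr = constShape.shape();
      if (!firstShape)
        firstShape = shapeAttr;
      else if (shapeAttr != firstShape)
        return failure();
    }
    // All inputs are constant and equal: replace with a true witness.
    rewriter.replaceOpWithNewOp<shape::ConstWitnessOp>(
        op, shape::WitnessType::get(rewriter.getContext()),
        rewriter.getBoolAttr(true));
    return success();
  }
};
```
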
Tres Popp 6aab709459 [mlir] Canonicalization and folding of shape.cstr_broadcastable
This allows replacing this op with a true witness when both inputs are
const_shapes and are found to be broadcastable.

Differential Revision: https://reviews.llvm.org/D80304
2020-06-05 11:00:19 +02:00
Tres Popp 4a255bbd29 [mlir] Add folding for shape.any
If any input to shape.any is a const_shape, shape.any can be replaced
with that input.

Differential Revision: https://reviews.llvm.org/D80305
2020-06-05 11:00:19 +02:00
Tres Popp 6b3a5bff93 [mlir] Folding of shape.assuming_all
This allows assuming_all to be replaced when all inputs are known to be
statically passing witnesses.

Differential Revision: https://reviews.llvm.org/D80306
2020-06-05 11:00:19 +02:00
Tres Popp 1c3e38d98c [mlir] Add a shape op that returns a constant witness
This will later be used during canonicalization and folding steps to replace
statically known passing constraints.

Differential Revision: https://reviews.llvm.org/D80307
2020-06-05 11:00:19 +02:00
Uday Bondhugula 0f6999af88 [MLIR] Update linalg.conv lowering to use affine load in the absence of padding
Update the linalg-to-affine lowering for ConvOp to use affine.load for the
input whenever there is no padding. It had always been using std.load because
the max in index functions (needed for non-zero padding when not materializing
zeros) could not be represented in the non-zero padding cases.

In the future, the non-zero padding case could also be made to use
affine - either by materializing or using affine.execute_region. The
latter approach will not impact the scf/std output obtained after
lowering out affine.

Differential Revision: https://reviews.llvm.org/D81191
2020-06-05 12:28:30 +05:30
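
The gist of the change is the choice of load operation based on the presence of padding; a minimal hedged sketch (the padding query and the surrounding lowering are elided, and the identity-map `AffineLoadOp` builder is assumed):

```cpp
// Sketch: use affine.load for the input when there is no padding, since the
// access is then a plain affine function of the loop induction variables;
// fall back to std.load otherwise.
Value loadInput(OpBuilder &b, Location loc, Value input, ValueRange indices,
                bool hasPadding) {
  if (!hasPadding)
    return b.create<AffineLoadOp>(loc, input, indices);
  return b.create<LoadOp>(loc, input, indices);
}
```
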
River Riddle c0cd1f1c5c [mlir] Refactor BoolAttr to be a special case of IntegerAttr
This simplifies a lot of handling of BoolAttr/IntegerAttr. For example, a lot of places currently have to handle both IntegerAttr and BoolAttr. In other places, a decision is made to pick one which can lead to surprising results for users. For example, DenseElementsAttr currently uses BoolAttr for i1 even if the user initialized it with an Array of i1 IntegerAttrs.

Differential Revision: https://reviews.llvm.org/D81047
2020-06-04 16:41:24 -07:00
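
A hedged before/after of the kind of client code this unification simplifies (function names are illustrative):

```cpp
// Before: clients had to check both attribute kinds for i1 values.
bool getFlagOld(Attribute attr) {
  if (auto boolAttr = attr.dyn_cast<BoolAttr>())
    return boolAttr.getValue();
  if (auto intAttr = attr.dyn_cast<IntegerAttr>())
    return intAttr.getInt() != 0;
  return false;
}

// After: BoolAttr is a special case of IntegerAttr, so one check suffices.
bool getFlagNew(Attribute attr) {
  if (auto intAttr = attr.dyn_cast<IntegerAttr>())
    return intAttr.getInt() != 0;
  return false;
}
```
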
Nicolas Vasilache 3463d9835b [mlir][Linalg] Add a hoistViewAllocOps helper function
This revision adds a helper function to hoist alloc/dealloc pairs and alloca
ops out of an immediately enclosing scf::ForOp if both conditions are true:
   1. all operands are defined outside the loop.
   2. all uses are ViewLikeOp or DeallocOp.

This is now considered Linalg-specific and will be generalized on a per-need basis.

Differential Revision: https://reviews.llvm.org/D81152
2020-06-04 18:59:03 -04:00
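
The two conditions above map naturally onto a small predicate; a hedged sketch using the interface and op names of that period:

```cpp
// Sketch: an alloc can be hoisted out of the loop if its operands (dynamic
// sizes) are defined outside the loop and every use is a view-like op or a
// dealloc.
static bool canHoistAlloc(Operation *allocOp, scf::ForOp loop) {
  // Condition 1: all operands defined outside the loop.
  for (Value operand : allocOp->getOperands())
    if (!loop.isDefinedOutsideOfLoop(operand))
      return false;
  // Condition 2: all uses are ViewLikeOp or DeallocOp.
  for (Operation *user : allocOp->getUsers())
    if (!isa<ViewLikeOpInterface>(user) && !isa<DeallocOp>(user))
      return false;
  return true;
}
```
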
Diego Caballero 5c990d6994 [mlir] Add support for bf16 to StandardToLLVM conversion
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D81127
2020-06-04 14:36:36 -07:00
aartbik c19fae507e [mlir] [VectorOps] Add missing comments to CreateMaskOp lowering
Summary: Add missing comment to CreateMask. Fix typo in ConstantMask comment.

Reviewers: nicolasvasilache, rriddle, reidtatge, ftynse

Reviewed By: ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D81125
2020-06-04 12:50:47 -07:00
Thomas Raoux 661235e126 [mlir][gpu] Add subgroup Id/Size/Num to GPU dialect
Add SubgroupId, SubgroupSize and NumSubgroups to GPU dialect ops and add the
lowering of those ops to SPIRV.

Differential Revision: https://reviews.llvm.org/D81042
2020-06-04 10:52:40 -07:00
Hanhan Wang 0b025d2733 [mlir][StandardToSPIRV] Handle i1 case for lowering std.zexti to SPIR-V.
Differential Revision: https://reviews.llvm.org/D80965
2020-06-03 15:01:18 -07:00
Hanhan Wang 27fca57546 [mlir][Linalg] Add support for fusion between indexed_generic ops and tensor_reshape ops
Summary:
The fusion for tensor_reshape embeds the information into indexing maps, so
the existing pattern also works for indexed_generic ops.

Depends On D80347

Differential Revision: https://reviews.llvm.org/D80348
2020-06-03 14:59:47 -07:00
Hanhan Wang cc11ceda16 [mlir][Linalg] Add support for fusion between indexed_generic ops and generic ops on tensors.
Summary:
Unlike the fusion between generic ops, indices are involved here. In this
context, we need to re-map the indices for the producer since the fused op is
built from the consumer's perspective. This patch supports all combinations of
fusion between indexed_generic ops and generic ops, which includes the following test cases:
  1) generic op as producer and indexed_generic op as consumer.
  2) indexed_generic op as producer and generic op as consumer.
  3) indexed_generic op as producer and indexed_generic op as consumer.

Differential Revision: https://reviews.llvm.org/D80347
2020-06-03 14:58:43 -07:00
aartbik 6391da98f4 [mlir] [VectorOps] Use 'vector.flat_transpose' for 2-D 'vector.transpose'
Summary:
Progressive lowering of vector.transpose into an operation that
is closer to an intrinsic, and thus the hardware ISA. Currently
under the common vector transform testing flag, as we prepare
deploying this transformation in the LLVM lowering pipeline.

Reviewers: nicolasvasilache, reidtatge, andydavis1, ftynse

Reviewed By: nicolasvasilache, ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits

Tags: #llvm, #mlir

Differential Revision: https://reviews.llvm.org/D80772
2020-06-03 14:55:50 -07:00
Jacques Pienaar 5b454b98d6 [mlir] Remove unneeded inference trait/fns
These are all handled with the simple return type inference in ODS.
Also update some summaries to match what is recommended in the ODS doc.
2020-06-03 13:09:07 -07:00
Frederik Gossen 3713314bfa [MLIR] Shape to standard dialect lowering
Add a new pass to lower operations from the `shape` to the `std` dialect.
The conversion applies only to the `size_to_index` and `index_to_size`
operations and affected types.
Other patterns will be added as needed.

Differential Revision: https://reviews.llvm.org/D81091
2020-06-03 16:17:03 +00:00
Nicolas Vasilache e349fb70a2 [mlir][Linalg] NFC - Make markers use Identifier instead of StringRef
Summary: This removes string ownership worries by putting everything into the context and makes it easier to construct identifiers programmatically.

Reviewers: ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D81027
2020-06-03 05:52:32 -04:00
Mehdi Amini 48c800cc1b Fix build: TableGen uses `is<T>` instead of `isa<T>` as predicate 2020-06-03 04:06:19 +00:00
Mehdi Amini a09bb6d77b Replace dyn_cast<>() with isa<>() when the result isn't used (NFC)
Fixes a warning reported by some GCC versions.
2020-06-03 03:09:45 +00:00
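
An illustrative (not literal) example of the pattern being cleaned up:

```cpp
#include "mlir/IR/Attributes.h"
using namespace mlir;

// Before: the named result of dyn_cast is never used, which some GCC
// versions warn about.
bool hasIntAttrBefore(Attribute attr) {
  if (auto intAttr = attr.dyn_cast<IntegerAttr>())  // intAttr is unused
    return true;
  return false;
}

// After: isa<> expresses the same type check without the unused value.
bool hasIntAttrAfter(Attribute attr) {
  return attr.isa<IntegerAttr>();
}
```
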
Thomas Raoux 81dd3a4718 [mlir][spirv] Fix coop matrix getExtension
Stack variable was being used beyond its lifetime.

Differential Revision: https://reviews.llvm.org/D80948
2020-06-02 16:32:31 -07:00
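
The underlying bug class, shown in a generic hedged form rather than the actual SPIR-V code:

```cpp
#include "llvm/ADT/StringRef.h"
#include <string>

// Generic illustration of the bug class: returning a view of a local.
llvm::StringRef getExtensionNameBad() {
  std::string name = "SPV_NV_cooperative_matrix";
  return name;  // dangling: 'name' is destroyed when the function returns
}

// Fix: return owning storage (or reference data that outlives the call).
std::string getExtensionNameGood() {
  return "SPV_NV_cooperative_matrix";
}
```
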