Commit Graph

269 Commits

MaheshRavishankar 2bcd1927dd [mlir][SCFToGPU] Remove conversions from scf.for to gpu.launch.
The affine.for to gpu.launch conversions are kept; there should
probably be an affine.parallel to gpu.launch conversion as well.

Differential Revision: https://reviews.llvm.org/D80747
2020-06-01 23:06:20 -07:00
Nicolas Vasilache 5f9e0466f2 [mlir][Vector] Fix vector.transfer alignment calculation
https://reviews.llvm.org/D79246 introduces alignment propagation for vector transfer operations. Unfortunately, the alignment calculation is incorrect and can result in crashes.

This revision fixes the calculation by using the natural alignment of the memref elemental type, instead of the resulting vector type.

If more alignment is desired, it can be done in 2 ways:
1. use a proper vector.type_cast to transform a memref<axbxcxdxf32> into a memref<axbxvector<cxdxf32>>, giving a natural alignment of vector<cxdxf32> (see the sketch after this list)
2. add an alignment attribute to vector transfer operations and propagate it.
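A minimal sketch of option 1, assuming a hypothetical buffer %A (shapes are illustrative, not taken from the patch):
```
// Cast a memref of scalars to a memref holding a single vector element;
// the result carries the natural alignment of the vector type.
%vA = vector.type_cast %A : memref<8x4xf32> to memref<vector<8x4xf32>>
```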

With this change the alignment in the relevant tests goes down from 128 to 4.

Lastly, a few minor cleanups are performed and the custom `isMinorIdentityMap` is deprecated.

Differential Revision: https://reviews.llvm.org/D80734
2020-05-28 17:58:51 -04:00
Wen-Heng (Jack) Chung 061fb8eb2d [mlir][gpu][mlir-cuda-runner] Refactor ConvertKernelFuncToCubin to be generic.
Make the ConvertKernelFuncToCubin pass generic:

- Rename to ConvertKernelFuncToBlob.
- Allow specifying triple, target chip, target features.
- Initialization of the LLVM backend is supplied by a callback function.
- The lowering from the MLIR module to an LLVM module goes through another callback.
- Change mlir-cuda-runner to adopt the revised pass.
- Add new tests for lowering to ROCm HSA code object (HSACO).
- Tests for CUDA and ROCm are kept in separate directories.

Differential Revision: https://reviews.llvm.org/D80142
2020-05-28 09:08:28 -05:00
aartbik c295a65da4 [mlir] [VectorOps] Add 'vector.flat_transpose' operation
Summary:
Provides a representation of the linearized LLVM intrinsic,
with tests and a lowering implementation to the LLVM IR dialect.
Prepares a better lowering for 2-D vector.transpose.
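For reference, a rough sketch of the op's textual form (shapes are illustrative; exact printing may differ):
```
// Transpose a row-major 4x4 matrix stored as a flat 16-element vector.
%t = vector.flat_transpose %m {rows = 4 : i32, columns = 4 : i32}
    : vector<16xf32> -> vector<16xf32>
```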

Reviewers: nicolasvasilache, ftynse, reidtatge, bkramer, dcaballe

Reviewed By: ftynse, dcaballe

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D80419
2020-05-27 11:09:48 -07:00
MaheshRavishankar 4d6f44f5f0 [mlir][spirv] Lower allocation/deallocations of workgroup memory.
Allocations of workgroup memory are lowered to a
spv.globalVariable. Only statically sized allocations with int or
float element types are handled. The lowering accounts for element
types that are not supported in the lowered spv.module (based on the
available extensions/capabilities) and adjusts the number of elements
to keep the same byte length.

Differential Revision: https://reviews.llvm.org/D80411
2020-05-27 09:53:16 -07:00
Wen-Heng (Jack) Chung 2cbbc266ec [mlir][gpu] Refactor ConvertGpuLaunchFuncToCudaCalls pass.
Due to similar APIs between CUDA and ROCm (HIP),
ConvertGpuLaunchFuncToCudaCalls pass could be used on both platforms with some
refactoring.

In this commit:

- Migrate ConvertLaunchFuncToCudaCalls from GPUToCUDA to GPUCommon, and rename.
- Rename runtime wrapper APIs to be platform-neutral.
- Let GPU binary annotation attribute be specifiable as a PassOption.
- Naming changes within the implementation and tests.

Subsequent patches will introduce ROCm-specific tests and runtime wrapper
APIs.

Differential Revision: https://reviews.llvm.org/D80167
2020-05-21 08:53:47 -05:00
Mehdi Amini 5c3ebd7725 Revert "[mlir][gpu] Refactor ConvertGpuLaunchFuncToCudaCalls pass."
This reverts commit cdb6f05e2d.

The build is broken with:

  You have called ADD_LIBRARY for library obj.MLIRGPUtoCUDATransforms without any source files. This typically indicates a problem with your CMakeLists.txt file
2020-05-21 03:44:35 +00:00
Wen-Heng (Jack) Chung cdb6f05e2d [mlir][gpu] Refactor ConvertGpuLaunchFuncToCudaCalls pass.
Due to similar APIs between CUDA and ROCm (HIP),
ConvertGpuLaunchFuncToCudaCalls pass could be used on both platforms with some
refactoring.

In this commit:

- Migrate ConvertLaunchFuncToCudaCalls from GPUToCUDA to GPUCommon, and rename.
- Rename runtime wrapper APIs to be platform-neutral.
- Let GPU binary annotation attribute be specifiable as a PassOption.
- Naming changes within the implementation and tests.

Subsequent patches will introduce ROCm-specific tests and runtime wrapper
APIs.

Differential Revision: https://reviews.llvm.org/D80167
2020-05-20 16:11:48 -05:00
MaheshRavishankar 0e88eb5c51 [mlir][spirv] Adapt subview legalization to the updated op semantics.
The subview semantics changed recently to allow a more natural
representation of constant offsets and strides. The legalization of
the subview op for lowering to SPIR-V needs to account for this.
Also change the linearization to use the strides from the affine map
of the memref.

Differential Revision: https://reviews.llvm.org/D80270
2020-05-20 12:00:21 -07:00
Nicolas Vasilache 7c3c5b11b1 [mlir][Vector] Add option to fully unroll for VectorTransfer to SCF lowering
Summary:
Previously, the only supported partial lowering from vector transfers to SCF
went through loops. This requires a dedicated allocation and extra memory
roundtrips because LLVM aggregates cannot be indexed dynamically (for more
details see the [deep-dive](https://mlir.llvm.org/docs/Dialects/Vector/#deeperdive)).

This revision allows specifying full unrolling which removes this additional roundtrip.
This should be used carefully though because full unrolling will spill, negating the
benefits of removing the interim alloc in the first place.

Proper heuristics are left for a later time.

Differential Revision: https://reviews.llvm.org/D80100
2020-05-20 11:02:13 -04:00
Alex Zinenko a7d88a9038 [mlir] SCFToStandard: support any ops in and around the control flow ops
Originally, the SCFToStandard conversion only declared Ops from the Standard
dialect as legal after conversion. This is undesirable as it would fail the
conversion if the SCF ops contained ops from any other dialect. Furthermore,
this would be problematic for progressive lowering of `scf.parallel` to
`scf.for` after `ensureRegionTerminator` is made aware of the pattern rewriting
infrastructure because it creates temporary `scf.yield` operations declared
illegal. Change the legalization target to declare any op other than `scf.for`,
`scf.if` and `scf.parallel` legal.

Differential Revision: https://reviews.llvm.org/D80137
2020-05-20 16:12:05 +02:00
Alex Zinenko eab4a199d1 [mlir] NFC: rename tests related to SCF dialect from Loops to SCF
The dialect and conversions from/to it were renamed in previous commits.

Differential Revision: https://reviews.llvm.org/D80216
2020-05-20 15:08:23 +02:00
Hanhan Wang 520a570268 [mlir][StandardToSPIRV] Fix signedness issue in bitwidth emulation.
Summary:
Previously, after applying the mask, a negative number would convert to a
positive number because the sign flag was forgotten. This patch adds two more
shift operations to do the sign extension. This assumes that we're using two's
complement.

This patch applies sign extension unconditionally when loading an unsupported integer width, and it relies on the pattern to do the casting because the signedness semantics are carried by the operator itself.

Differential Revision: https://reviews.llvm.org/D79753
2020-05-19 11:00:01 -07:00
Nicolas Vasilache 1870e787af [mlir][Vector] Add an optional "masked" boolean array attribute to vector transfer operations
Summary:
The semantics of vector transfer ops are extended to allow specifying a per-dimension `masked`
attribute. When the attribute is false on a particular dimension, lowering to LLVM emits
unmasked load and store operations.
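A sketch of the attribute in use (names and shapes are illustrative): the minor dimension is declared in-bounds, so its lowering may use unmasked loads/stores.
```
%v = vector.transfer_read %A[%i, %j], %pad {masked = [true, false]}
    : memref<?x?xf32>, vector<4x8xf32>
```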

Differential Revision: https://reviews.llvm.org/D80098
2020-05-18 11:52:08 -04:00
Nicolas Vasilache 36cdc17f8c [mlir][Vector] Make minor identity permutation map optional in transfer op printing and parsing
Summary:
This revision makes the use of vector transfer operations more idiomatic by
allowing the permutation_map to be omitted and inferred.
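A small sketch of the effect (names and shapes are illustrative):
```
// With an explicit minor identity permutation map (previously required):
%v = vector.transfer_read %A[%i, %j], %pad
    {permutation_map = affine_map<(d0, d1) -> (d1)>}
    : memref<?x?xf32>, vector<8xf32>
// With this change, the minor identity map can simply be omitted:
%w = vector.transfer_read %A[%i, %j], %pad : memref<?x?xf32>, vector<8xf32>
```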

Differential Revision: https://reviews.llvm.org/D80092
2020-05-18 11:41:27 -04:00
Alex Zinenko 4ead2cf76c [mlir] Rename conversions involving ex-Loop dialect to mention SCF
The following Conversions are affected: LoopToStandard -> SCFToStandard,
LoopsToGPU -> SCFToGPU, VectorToLoops -> VectorToSCF. Full file paths are
affected. Additionally, drop the 'Convert' prefix from filenames living under
lib/Conversion where applicable.

API names and CLI options for pass testing are also renamed when applicable. In
particular, LoopsToGPU contains several passes that apply to different kinds of
loops (`for` or `parallel`), for which the original names are preserved.

Differential Revision: https://reviews.llvm.org/D79940
2020-05-15 10:45:11 +02:00
MaheshRavishankar 0b3e478b10 [mlir][GPUToSPIRV] Use default ABI only when none of the arguments
have abi attributes.

To ensure there is no conflict, use the default ABI only when none of
the arguments have the spv.interface_var_abi attribute. This also
implies that if one of the arguments has a spv.interface_var_abi
attribute, all of them should have it as well.

Differential Revision: https://reviews.llvm.org/D77232
2020-05-14 21:48:51 -07:00
Nicolas Vasilache f1b972041a [mlir][Linalg] Start a LinalgToStandard pass and move conversion to library calls.
This revision starts decoupling the include-the-kitchen-sink behavior of the Linalg to LLVM lowering by inserting a -convert-linalg-to-std pass.

The lowering of linalg ops to function calls previously went all the way to memref descriptors by having both linalg -> std and std -> LLVM patterns in the same rewrite.

When separating this step, a new issue occurred: the layout is automatically type-erased by this process. This revision therefore introduces memref casts to perform these type erasures explicitly. To connect everything end-to-end, the LLVM lowering of MemRefCastOp is relaxed because it is artificially more restricted than the op semantics. The op semantics already guarantee that source and target MemRefTypes are cast-compatible. An invalid lowering test now becomes valid and is removed.
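A hypothetical sketch of the cast-before-call pattern (the callee name, shapes, and the exact information being erased are illustrative, not taken from the patch):
```
// Erase static type information with a memref cast so a single external
// library symbol can serve the call.
%erased = memref_cast %view : memref<16x16xf32> to memref<?x?xf32>
call @external_linalg_impl(%erased) : (memref<?x?xf32>) -> ()
```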

Differential Revision: https://reviews.llvm.org/D79468
2020-05-15 00:24:03 -04:00
Diego Caballero bc5565f9ea [mlir][Affine] Introduce affine.vector_load and affine.vector_store
This patch adds `affine.vector_load` and `affine.vector_store` ops to
the Affine dialect and lowers them to `vector.transfer_read` and
`vector.transfer_write`, respectively, in the Vector dialect.
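A minimal sketch of the new ops (shapes are illustrative):
```
// Load a vector of 8 contiguous f32 elements and store it elsewhere.
%v = affine.vector_load %in[%i, %j] : memref<100x100xf32>, vector<8xf32>
affine.vector_store %v, %out[%i, %j] : memref<100x100xf32>, vector<8xf32>
```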

Reviewed By: bondhugula, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D79658
2020-05-14 13:17:58 -07:00
Alex Zinenko 60f443bb3b [mlir] Change dialect namespace loop->scf
All ops of the SCF dialect now use the `scf.` prefix instead of `loop.`. This
is a part of dialect renaming.
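For illustration, a minimal sketch of the new spelling (values are assumed to be defined elsewhere):
```
// Before the rename this was spelled loop.for.
scf.for %i = %lb to %ub step %c1 {
  %v = load %A[%i] : memref<?xf32>
  store %v, %B[%i] : memref<?xf32>
}
```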

Differential Revision: https://reviews.llvm.org/D79844
2020-05-13 19:20:21 +02:00
MaheshRavishankar 49e6c19100 [mlir][StandardToLLVM] Add SinOp to LLVM dialect and lowering of std.sin to this op.
Differential Revision: https://reviews.llvm.org/D79505
2020-05-12 23:15:25 -07:00
aartbik fb2c4d50f1 [mlir] [VectorOps] Implement vector.constant_mask lowering to LLVM IR
Summary:
Makes this operation runnable on CPU by generating MLIR instructions
that are eventually folded into an LLVM IR constant for the mask.
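A sketch of the op being lowered, in its documented form (shapes are illustrative):
```
// A 4x3 mask whose top-left 3x2 region is all-true and the rest all-false.
%m = vector.constant_mask [3, 2] : vector<4x3xi1>
```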

Reviewers: nicolasvasilache, ftynse, reidtatge, bkramer, andydavis1

Reviewed By: nicolasvasilache, ftynse, andydavis1

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79815
2020-05-12 19:44:23 -07:00
Nicolas Vasilache 63c0e72b2f [mlir] Revisit std.subview handling of static information.
The main objective of this revision is to change the way static information is represented, propagated and canonicalized in the SubViewOp.

In the current implementation the issue is that canonicalization may strictly lose information because static offsets are combined in irrecoverable ways into the result type, in order to fit the strided memref representation.

The core semantics of the op do not change but the parser and printer do: the op always requires `rank` offsets, sizes and strides. These quantities can now be either SSA values or static integer attributes.
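A rough sketch of the resulting form, assuming the textual syntax of the time (names, shapes, and the exact printing are illustrative, not authoritative):
```
// rank offsets, sizes and strides, each either an SSA value or a static
// integer attribute; the result type is deduced from the static information.
%s = subview %src[%off0, %off1][%sz0, %sz1][%st0, %st1]
    : memref<8x16xf32> to memref<?x?xf32, offset: ?, strides: [?, ?]>
```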

The result type is automatically deduced from the static information and more powerful canonicalizations (as powerful as the representation with sentinel `?` values allows). Previously static information was inferred on a best-effort basis from looking at the source and destination type.

Relevant tests are rewritten to use the idiomatic `offset: x, strides: [...]` form. Bugs that were not trivially visible in the flattened strided memref form are corrected along the way.

Lowering to LLVM is updated, simplified and now supports all cases.
A mixed static-dynamic mode test that wouldn't previously lower is added.

It is an open question, and a longer discussion, whether a better result type representation would be a nicer alternative. For now, the subview op carries the required semantic.

Differential Revision: https://reviews.llvm.org/D79662
2020-05-12 20:04:44 -04:00
Sam McCall 691e826995 Revert "[mlir] Revisit std.subview handling of static information."
This reverts commit 80d133b24f.

Per Stephan Herhut: The canonicalizer pattern that was added creates
forms of the subview op that cannot be lowered.

This is shown by failing Tensorflow XLA tests such as:
  tensorflow/compiler/xla/service/mlir_gpu/tests:abs.hlo.test
Will provide more details offline; they rely on logs from private CI.
2020-05-12 15:18:50 +02:00
Hanhan Wang 756d6959d7 [mlir][StandardToSPIRV] Add support for lowering index_cast to SPIR-V.
Differential Revision: https://reviews.llvm.org/D79644
2020-05-11 15:41:25 -07:00
Nicolas Vasilache 80d133b24f [mlir] Revisit std.subview handling of static information.
Summary:
The main objective of this revision is to change the way static information is represented, propagated and canonicalized in the SubViewOp.

In the current implementation the issue is that canonicalization may strictly lose information because static offsets are combined in irrecoverable ways into the result type, in order to fit the strided memref representation.

The core semantics of the op do not change but the parser and printer do: the op always requires `rank` offsets, sizes and strides. These quantities can now be either SSA values or static integer attributes.

The result type is automatically deduced from the static information and more powerful canonicalizations (as powerful as the representation with sentinel `?` values allows). Previously static information was inferred on a best-effort basis from looking at the source and destination type.

Relevant tests are rewritten to use the idiomatic `offset: x, strides: [...]` form. Bugs that were not trivially visible in the flattened strided memref form are corrected along the way.

It is an open question, and a longer discussion, whether a better result type representation would be a nicer alternative. For now, the subview op carries the required semantic.

Reviewers: ftynse, mravishankar, antiagainst, rriddle!, andydavis1, timshen, asaadaldien, stellaraccident

Reviewed By: mravishankar

Subscribers: aartbik, bondhugula, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, bader, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79662
2020-05-11 17:44:24 -04:00
Reid Tatge 334a4159ec [mlir][Vector] NFC - Rename vector.strided_slice into vector.extract_strided_slice
Differential Revision: https://reviews.llvm.org/D79734
2020-05-11 14:21:10 -07:00
Nicolas Vasilache 6ed61a26c2 [mlir] Simplify and better document std.view semantics
This [discussion](https://llvm.discourse.group/t/viewop-isnt-expressive-enough/991/2) raised some concerns with ViewOp.

In particular, the handling of offsets is incorrect and does not match the op description.
Note that with an elemental type change, offsets cannot be part of the type in general because sizeof(srcType) != sizeof(dstType).

However, offset is a poorly chosen term for this purpose and is renamed to byte_shift.

Additionally, for all intents and purposes, trying to support non-identity layouts for this op does not add expressive power but rather increases code complexity.

This revision simplifies the existing semantics and implementation.
This simplification effort is voluntarily restrictive and acts as a stepping stone towards supporting richer semantics: treat the non-common cases as YAGNI for now and reevaluate based on concrete use cases once a round of simplification has occurred.
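A sketch of the simplified form (assumed syntax; names and shapes are illustrative):
```
// Skip %byte_shift bytes into the underlying byte buffer, then view it as a
// dynamically sized memref of f32.
%v = view %buffer[%byte_shift][%size]
    : memref<2048xi8> to memref<?x4xf32>
```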

Differential revision: https://reviews.llvm.org/D79541
2020-05-11 12:29:23 -04:00
Hanhan Wang 3f07cab312 [mlir][StandardToLLVM] Add support for lowering FPToSIOp to LLVM.
Summary: Depends On D79374

Differential Revision: https://reviews.llvm.org/D79455
2020-05-11 01:29:24 -07:00
Hanhan Wang ac691c4fe7 [mlir][StandardToSPIRV] Add support for lowering FPToSIOp to SPIR-V.
Summary: Depends On D79373

Differential Revision: https://reviews.llvm.org/D79374
2020-05-11 01:27:54 -07:00
Frederik Gossen 5d5f61fc89 [MLIR] Add complex addition and subtraction to the standard dialect
Complex addition and subtraction are the first two binary operations on complex
numbers.
The remaining operations will follow the same pattern.

Differential Revision: https://reviews.llvm.org/D79479
2020-05-08 09:54:18 +00:00
Wen-Heng (Jack) Chung a23f190213 [mlir][vector] set alignment when lowering transfer_read and transfer_write.
When emitting masked loads/stores, set the alignment from the data layout.

Differential Revision: https://reviews.llvm.org/D79246
2020-05-07 11:44:25 +02:00
Alexander Belyaev b79751e83d [MLIR] Add conversion from AtomicRMWOp -> GenericAtomicRMWOp.
Adding this pattern reduces code duplication. There is no need to have a
custom implementation for lowering to llvm.cmpxchg.
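A sketch of the target op (illustrative; names and shapes are not from the patch): an atomic read-modify-write expressed with a region body, for which a cmpxchg-based lowering already exists.
```
%res = generic_atomic_rmw %mem[%i] : memref<10xf32> {
^bb0(%current : f32):
  %cst = constant 1.0 : f32
  %new = addf %current, %cst : f32
  atomic_yield %new : f32
}
```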

Differential Revision: https://reviews.llvm.org/D78753
2020-05-05 10:32:13 +02:00
Hanhan Wang 5d10613b6e [mlir][StandardToSPIRV] Emulate bitwidths not supported for store op.
Summary:
As in D78974, this patch implements the emulation for the store op. The emulation is
done with atomic operations. E.g., if the value being stored is i8, rewrite the
StoreOp to:

 1) load a 32-bit integer
 2) clear 8 bits in the loaded value
 3) store the 32-bit value back
 4) load a 32-bit integer
 5) modify 8 bits in the loaded value
 6) store the 32-bit value back

Steps 1 to 3 are done by AtomicAnd as one atomic step, and steps 4
to 6 are done by AtomicOr as another atomic step.

Differential Revision: https://reviews.llvm.org/D79272
2020-05-04 15:18:44 -07:00
Frederik Gossen 031265ad8a [MLIR] Add complex numbers to standard dialect
Add `CreateComplexOp`, `ReOp`, and `ImOp` to the standard dialect.
This is the first step to support complex numbers.
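A minimal sketch of the new ops; the assembly mnemonics below are assumed to mirror the op class names and are not taken from the patch itself.
```
// Build a complex value from its real and imaginary parts, then project them back.
%c = create_complex %re, %im : complex<f32>
%r = re %c : complex<f32>
%i = im %c : complex<f32>
```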

Differential Revision: https://reviews.llvm.org/D79159
2020-05-04 14:04:28 +00:00
Wen-Heng (Jack) Chung bc23c1d85e [mlir][rocdl] add rocdl.barrier op.
- Add rocdl.barrier op.
- Lower gpu.barrier to rocdl.barrier in -convert-gpu-to-rocdl (sketched below).
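The conversion in question, sketched on a single op:
```
// Input:
gpu.barrier
// After -convert-gpu-to-rocdl:
rocdl.barrier
```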

Differential Revision: https://reviews.llvm.org/D79126
2020-05-04 10:35:01 +02:00
Wen-Heng (Jack) Chung a581c6f8cd [mlir][vector] add tests for type_cast taking non-zero addrspace
Add tests for vector.type_cast that takes memrefs on non-zero
addrspaces.
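A sketch of the kind of case being tested (shapes are illustrative): source and result carry the same non-default memory space, here 3.
```
%vA = vector.type_cast %A : memref<8x8xf32, 3> to memref<vector<8x8xf32>, 3>
```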

Differential Revision: https://reviews.llvm.org/D79099
2020-05-04 10:31:12 +02:00
MaheshRavishankar 43b89ecdb9 [mlir] Add sine operation to Standard dialect.
Also add lowering of sine operation to SPIR-V dialect.

Differential Revision: https://reviews.llvm.org/D79102
2020-04-30 22:15:42 -07:00
Hanhan Wang be0ad5b034 [mlir][StandardToSPIRV] Add support for lowering integer casting.
Summary:
Maps ZeroExtendIOp and TruncateIOp to spirv::UConvertOp and spirv::SConvertOp.

Depends On D78974

Differential Revision: https://reviews.llvm.org/D79143
2020-04-30 19:29:31 -07:00
Hanhan Wang 6601b65aed [mlir][StandardToSPIRV] Emulate bitwidths not supported for load op.
Summary:
The current implementation in SPIRVTypeConverter just unconditionally turns
everything into 32-bit if it doesn't meet the requirements of extensions or
capabilities. In this case, we can load a 32-bit value and then do bit
extraction to get the value.

Differential Revision: https://reviews.llvm.org/D78974
2020-04-30 19:27:45 -07:00
Wen-Heng (Jack) Chung 9ad5e57316 [mlir][nvvm][rocdl] refactor NVVM and ROCDL dialect. NFC.
- Extract common logic between -convert-gpu-to-nvvm and -convert-gpu-to-rocdl.
- Cope with the fact that alloca operates on different addrspaces between NVVM
  and ROCDL.
- Modernize unit tests for ROCDL dialect.

Differential Revision: https://reviews.llvm.org/D79021
2020-05-01 00:13:26 +02:00
aartbik 6937251f01 [mlir] [VectorOps] Included i1 support for vector.print
Summary:
Added boolean support to vector.print.
Useful for upcoming "mask" tests.
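A minimal sketch of the newly supported case:
```
// Print a boolean mask vector.
%m = vector.constant_mask [2] : vector<4xi1>
vector.print %m : vector<4xi1>
```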

Reviewers: ftynse, nicolasvasilache, andydavis1

Reviewed By: andydavis1

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79198
2020-04-30 14:56:26 -07:00
Nicolas Vasilache 7a80139059 [mlir][Vector] Provide progressive lowering of masked n-D vector transfers
This revision allows masked vector transfers with m-D buffers and n-D vectors to
progressively lower to m-D buffer and 1-D vector transfers.

For a vector.transfer_read, assuming a `memref<(leading_dims) x (major_dims) x (minor_dims) x type>` and a `vector<(minor_dims) x type>` are involved in the transfer, this generates pseudo-IR resembling:
```
     if (any_of(%ivs_major + %offsets, <, major_dims)) {
       %v = vector_transfer_read(
         {%offsets_leading, %ivs_major + %offsets_major, %offsets_minor},
          %ivs_minor):
         memref<(leading_dims) x (major_dims) x (minor_dims) x type>,
         vector<(minor_dims) x type>;
     } else {
       %v = splat(vector<(minor_dims) x type>, %fill)
     }
```

Differential Revision: https://reviews.llvm.org/D79062
2020-04-29 21:28:27 -04:00
MaheshRavishankar 1c12a95d9c [mlir][StandardToSPIRV] Handle conversion of cmpi operation with i1
type operands.

The instructions used to convert std.cmpi cannot have i1 types
according to the SPIR-V specification. A different set of operations is
specified in the SPIR-V spec for comparing boolean types. Enhance the
StandardToSPIRV lowering to target these instructions when the operands of
the std.cmpi operation are of i1 type.
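A rough sketch of the idea (illustrative; exact lowered form may differ):
```
// An i1 comparison such as
%r = cmpi "eq", %a, %b : i1
// is now lowered to the SPIR-V logical comparison instructions, roughly:
%r_spv = spv.LogicalEqual %a, %b : i1
```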

Differential Revision: https://reviews.llvm.org/D79049
2020-04-29 10:09:03 -07:00
Wen-Heng (Jack) Chung f2b505a459 [mlir][std] allow subview take memrefs from non-zero addrspaces.
On certain targets std.subview should be able to take memrefs from non-zero
addrspaces. Improve the lowering logic to the LLVM dialect and amend the tests.

Differential Revision: https://reviews.llvm.org/D79024
2020-04-29 17:19:27 +02:00
Wen-Heng (Jack) Chung be16075bfc [mlir][vector] let transfer_read and transfer_write take non-zero addrspace.
Enhance lowering logic and tests so vector.transfer_read and
vector.transfer_write take memrefs on non-zero addrspaces.

Differential Revision: https://reviews.llvm.org/D79023
2020-04-29 17:11:48 +02:00
Nicolas Vasilache b2c79c50ed [mlir][VectorOps] Extend VectorTransfer lowering to n-D memref with minor identity map
Summary: This revision extends the lowering of vector transfers to work with n-D memref and 1-D vector where the permutation map is an identity on the most minor dimensions (1 for now).

Differential Revision: https://reviews.llvm.org/D78925
2020-04-27 11:20:55 -04:00
Tres Popp 2d2d696137 [MLIR] Propagate input side effect information
Summary:
Previously operations like std.load created methods for obtaining their
effects but did not inherit from the SideEffect interfaces when their
parameters were decorated with the information. The resulting situation
was that passes had no information on the SideEffects of std.load/store
and had to treat them more cautiously. This adds the inheritance
information when creating the methods.

As a side effect, many tests are modified, as they were using std.load
for testing and this operation would be folded away as part of pattern
rewriting. Tests are modified to use store or to return the result of the
std.load.

Reviewers: mravishankar, antiagainst, nicolasvasilache, herhut, aartbik, ftynse!

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, bader, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78802
2020-04-27 11:35:52 +02:00
Alexander Belyaev f31db760b3 [MLIR] Replace splitBlock() with createBlock in GenericAtomicRMWOp lowering.
`addArgument()` is not undoable and should not be used in a
ConversionPattern; therefore `splitBlock()` is replaced with
`createBlock()`, which creates a block with the specified arguments.

Differential Revision: https://reviews.llvm.org/D78731
2020-04-25 17:42:45 +02:00
MaheshRavishankar 9391941bd3 [mlir][StandardToSPIRV] Fix test cases where DCE removes all the code.
Differential Revision: https://reviews.llvm.org/D78727
2020-04-23 09:48:37 -07:00