Commit Graph

5231 Commits

Author SHA1 Message Date
Jacques Pienaar 4b211b94d7 [mlir][drr] Make error easier to understand
Changes the error from
  error: referencing unbound symbol ''
to
  error: raw string not supported as argument
2020-08-09 18:02:08 -07:00
Uday Bondhugula 231c554abc [MLIR][NFC] Fix misleading diagnostic error + clang-tidy fix
Fix misleading diagnostic error in affine.yield verifier + a clang-tidy fix.

Differential Revision: https://reviews.llvm.org/D85587
2020-08-09 11:35:29 +05:30
Vincent Zhao 654e8aadfd [MLIR] Consider AffineIfOp when getting the index set of an Op wrapped in nested loops
This diff attempts to resolve the TODO in `getOpIndexSet` (formerly
known as `getInstIndexSet`), which states "Add support to handle IfInsts
surrounding `op`".

Major changes in this diff:

1. Overload `getIndexSet`. The overloaded version considers both
`AffineForOp` and `AffineIfOp`.
2. The `getInstIndexSet` is updated accordingly: its name is changed to
`getOpIndexSet` and its implementation is based on a new API `getIVs`
instead of `getLoopIVs`.
3. Add `addAffineIfOpDomain` to `FlatAffineConstraints`, which extracts
new constraints from the integer set of `AffineIfOp` and merges them into
the current constraint system (see the sketch below).
4. Update how a `Value` is determined as dim or symbol for
`ValuePositionMap` in `buildDimAndSymbolPositionMaps`.
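
A rough, self-contained sketch of the underlying idea (hypothetical helper names, not the actual MLIR `FlatAffineConstraints` API): merging an `affine.if` condition into a loop nest's index set amounts to appending the if-condition's inequalities to the constraint system built from the loop bounds.

```
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical stand-in for a flat affine constraint system over the loop IVs.
// Each inequality is stored as coefficients c0*i + c1*j + ... + const >= 0.
struct FlatConstraints {
  std::vector<std::vector<int64_t>> inequalities;

  void addInequality(std::vector<int64_t> coeffs) {
    inequalities.push_back(std::move(coeffs));
  }

  // Loop bounds lb <= iv < ub become iv - lb >= 0 and ub - 1 - iv >= 0.
  void addLoopBounds(unsigned ivPos, unsigned numIVs, int64_t lb, int64_t ub) {
    std::vector<int64_t> lower(numIVs + 1, 0), upper(numIVs + 1, 0);
    lower[ivPos] = 1;  lower[numIVs] = -lb;
    upper[ivPos] = -1; upper[numIVs] = ub - 1;
    addInequality(lower);
    addInequality(upper);
  }
};

int main() {
  // Index set for: for i in [0, 64), for j in [0, 64), if (i - j >= 0) { op }
  constexpr unsigned numIVs = 2;
  FlatConstraints cst;
  cst.addLoopBounds(/*ivPos=*/0, numIVs, 0, 64); // i bounds
  cst.addLoopBounds(/*ivPos=*/1, numIVs, 0, 64); // j bounds
  // Merging the affine.if integer set (i - j >= 0) adds one more inequality,
  // analogous in spirit to what addAffineIfOpDomain does.
  cst.addInequality({1, -1, 0});
  std::cout << "num inequalities: " << cst.inequalities.size() << "\n"; // 5
  return 0;
}
```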

Differential Revision: https://reviews.llvm.org/D84698
2020-08-09 03:16:03 +05:30
Feng Liu 5c9c4ade9d Add the inline interface to the shape dialect
This patch also fixes a minor issue: shape.rank should allow returning
!shape.size. The dialect documentation has such an example for shape.rank.

Differential Revision: https://reviews.llvm.org/D85556
2020-08-07 23:29:43 -07:00
Mehdi Amini 872bdc0be7 Remove unused static helper getMemRefTypeFromTensorType() (NFC) 2020-08-08 05:37:42 +00:00
Mehdi Amini eebd0a57fc Remove unused class member (NFC)
Fix include/mlir/Reducer/ReductionNode.h:79:18: warning: private field 'parent' is not used [-Wunused-private-field]
2020-08-08 05:36:41 +00:00
Mehdi Amini 58acda1c16 Revert "[mlir] Add a utility class, ThreadLocalCache, for storing non static thread local objects."
This reverts commit 9f24640b7e.

We hit some deadlocks on thread exit in some configurations: the TLS exit handler takes a lock.
Temporarily reverting this change while we debug what is going on.
2020-08-08 05:31:25 +00:00
Vincent Zhao 754e09f9ce [MLIR] Add tiling validity check to loop tiling pass
This revision aims to provide a new API, `checkTilingLegality`, to
verify that the loop tiling result still satisfies the dependence
constraints of the original loop nest.

Previously, there was no check for the validity of tiling. For instance:

```
func @diagonal_dependence() {
  %A = alloc() : memref<64x64xf32>

  affine.for %i = 0 to 64 {
    affine.for %j = 0 to 64 {
      %0 = affine.load %A[%j, %i] : memref<64x64xf32>
      %1 = affine.load %A[%i, %j - 1] : memref<64x64xf32>
      %2 = addf %0, %1 : f32
      affine.store %2, %A[%i, %j] : memref<64x64xf32>
    }
  }

  return
}
```

You can find more information about this example in Section 3.11
of [1].

In general, there are three dependences here: two flow dependences, one
in direction `(i, j) = (0, 1)` (notation that depicts a vector in the 2D
iteration space) and one in `(i, j) = (1, -1)`; and one anti-dependence
in direction `(-1, 1)`.

Since two of them are along the diagonal in opposite directions, the
default tiling method in `affine`, which tiles the iteration space into
rectangles, will violate the legality condition proposed by Irigoin and
Triolet [2]. The condition in [2] implies that no two tiles may depend on
each other, while with the `affine` rectangular tiling, two rectangles
along the same diagonal are indeed mutually dependent, violating the rule.

This diff attempts to put together a validator that checks whether the
rule from [2] is violated or not when applying the default tiling method
in `affine`.

The canonical way to perform such validation is to add the constraint
from Irigoin and Triolet to the existing dependence constraints and
examine the effect.

Since we already have the prior knowledge that `affine` tiles in a
hyper-rectangular way, and the resulting tiles will be scheduled in the
same order as their respective loop indices, we can simplify the
solution to just checking whether all dependence components are
non-negative along the tiling dimensions.

We put this algorithm into a new API called `checkTilingLegality` under
`LoopTiling.cpp`. This function iterates over every `load`/`store` pair, and
if there is any dependence between them, we get the dependence components
and check whether any of them is negative. The function returns
`failure` if the legality condition is violated.
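
A minimal, self-contained sketch of that check (illustrative only, not the MLIR implementation): given the dependence components gathered for all load/store pairs, hyper-rectangular tiling is declared legal only if no component is negative along a tiled dimension.

```
#include <iostream>
#include <vector>

// Each dependence is a vector of per-loop components, e.g. {1, -1} for the
// diagonal flow dependence in the example above.
using DependenceComponents = std::vector<long>;

// Mirrors the idea behind checkTilingLegality: rectangular tiling is valid
// only if every dependence component is non-negative along every tiled
// dimension; a negative component means a tile would depend on a "later"
// tile along that dimension.
bool isTilingLegal(const std::vector<DependenceComponents> &deps) {
  for (const DependenceComponents &dep : deps)
    for (long component : dep)
      if (component < 0)
        return false;
  return true;
}

int main() {
  // Dependences of the diagonal example: (0, 1), (1, -1), (-1, 1).
  std::vector<DependenceComponents> deps = {{0, 1}, {1, -1}, {-1, 1}};
  std::cout << (isTilingLegal(deps) ? "legal" : "illegal") << "\n"; // illegal
  return 0;
}
```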

[1]. Bondhugula, Uday. Effective Automatic Parallelization and Locality Optimization Using the Polyhedral Model. https://dl.acm.org/doi/book/10.5555/1559029
[2]. Irigoin, F. and Triolet, R. Supernode Partitioning. https://dl.acm.org/doi/10.1145/73560.73588

Differential Revision: https://reviews.llvm.org/D84882
2020-08-08 09:29:47 +05:30
Mauricio Sifontes 27d0e14da9 Create Reduction Tree Pass
Implement the Reduction Tree Pass framework as part of the MLIR Reduce tool. This is a parameterizable pass that allows for the implementation of custom reduction passes in the tool.
Implement the FunctionReducer class as an example of a Reducer class parameter for the instantiation of a Reduction Tree Pass.
Create a pass pipeline with a Reduction Tree Pass that has the FunctionReducer class specified as its parameter.
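
A hedged, self-contained sketch of what such a reduction traversal looks like conceptually (the `Reducer`/`Tester` names here are illustrative, not the actual mlir-reduce classes): repeatedly replace the test case with the smallest variant that still reproduces the failure.

```
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical reducer: produces smaller variants of a test case.
using Reducer = std::function<std::vector<std::string>(const std::string &)>;
// Hypothetical interestingness test: does the variant still reproduce the bug?
using Tester = std::function<bool(const std::string &)>;

// Greedy reduction traversal: repeatedly replace the current test case with
// a smaller interesting variant produced by the reducer.
std::string reduce(std::string testCase, const Reducer &reducer,
                   const Tester &stillInteresting) {
  bool changed = true;
  while (changed) {
    changed = false;
    for (const std::string &variant : reducer(testCase)) {
      if (variant.size() < testCase.size() && stillInteresting(variant)) {
        testCase = variant;
        changed = true;
        break;
      }
    }
  }
  return testCase;
}

int main() {
  // Toy example: "functions" are characters; removing one yields a variant.
  Reducer dropOneChar = [](const std::string &s) {
    std::vector<std::string> variants;
    for (size_t i = 0; i < s.size(); ++i)
      variants.push_back(s.substr(0, i) + s.substr(i + 1));
    return variants;
  };
  // The "bug" reproduces as long as the case still contains an 'x'.
  Tester containsX = [](const std::string &s) {
    return s.find('x') != std::string::npos;
  };
  std::cout << reduce("abxcd", dropOneChar, containsX) << "\n"; // prints "x"
  return 0;
}
```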

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D83969
2020-08-07 23:17:31 +00:00
Sean Silva b0d76f454d [mlir] Centralize handling of memref element types.
This also beefs up the test coverage:
- Make unranked memref testing consistent with ranked memrefs.
- Add testing for the invalid element type cases.

This is not quite NFC: index types are now allowed in unranked memrefs.

Differential Revision: https://reviews.llvm.org/D85541
2020-08-07 15:17:23 -07:00
Kiran Chandramohan 660832c4e7 [OpenMP,MLIR] Translation of parallel operation: num_threads, if clauses 3/n
This simple patch translates the num_threads and if clauses of the parallel
operation. It also includes test cases.
A minor change was made to the parsing of the if clause to parse AnyType and
return the parsed type. Test cases are updated accordingly.

Reviewed by: SouraVX
Differential Revision: https://reviews.llvm.org/D84798
2020-08-07 20:54:24 +00:00
River Riddle c8c45985fb [mlir][Type] Remove usages of Type::getKind
This is in preparation for removing the use of "kinds" within attributes and types in MLIR.

Differential Revision: https://reviews.llvm.org/D85475
2020-08-07 13:43:25 -07:00
River Riddle fff39b62bb [mlir][Attribute] Remove usages of Attribute::getKind
This is in preparation for removing the use of "kinds" within attributes and types in MLIR.

Differential Revision: https://reviews.llvm.org/D85370
2020-08-07 13:43:25 -07:00
River Riddle 1d6a8deb41 [mlir] Remove the need to define `kindof` on attribute and type classes.
This revision refactors the default definition of the attribute and type `classof` methods to use the TypeID of the concrete class instead of invoking the `kindof` method. The TypeID is already used as part of uniquing, and this allows for removing the need for users to define any of the type casting utilities themselves.
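
A self-contained sketch of the general idea (much simplified, not MLIR's actual TypeID machinery): each concrete class gets a unique TypeID derived from a per-type static tag, and `classof` compares IDs instead of relying on a user-defined `kindof`.

```
#include <iostream>

// Simplified TypeID: the address of a per-type static tag is unique per type.
class TypeID {
public:
  template <typename T>
  static TypeID get() {
    static char tag;
    return TypeID(&tag);
  }
  bool operator==(TypeID other) const { return id == other.id; }

private:
  explicit TypeID(const void *id) : id(id) {}
  const void *id;
};

// Base "type" carrying the TypeID of its concrete class.
struct Type {
  TypeID typeID;
};

// Concrete classes no longer define a `kindof`; classof compares TypeIDs.
struct IntegerType : Type {
  IntegerType() : Type{TypeID::get<IntegerType>()} {}
  static bool classof(const Type *t) {
    return t->typeID == TypeID::get<IntegerType>();
  }
};
struct FloatType : Type {
  FloatType() : Type{TypeID::get<FloatType>()} {}
  static bool classof(const Type *t) {
    return t->typeID == TypeID::get<FloatType>();
  }
};

int main() {
  IntegerType i;
  Type *t = &i;
  std::cout << IntegerType::classof(t) << " " << FloatType::classof(t) << "\n";
  // prints: 1 0
  return 0;
}
```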

Differential Revision: https://reviews.llvm.org/D85356
2020-08-07 13:43:25 -07:00
River Riddle dd48773396 [mlir][Types] Remove the subclass data from Type
Subclass data is useful when a certain amount of memory is allocated, but not all of it is used. In the case of Type, that hasn't been the case for a while, and the subclass data just takes up a full `unsigned`. Removing it frees up ~8 bytes for almost every type instance.

Differential Revision: https://reviews.llvm.org/D85348
2020-08-07 13:43:25 -07:00
River Riddle 9f24640b7e [mlir] Add a utility class, ThreadLocalCache, for storing non static thread local objects.
This class allows for defining thread-local objects that have a non-static lifetime. The internals of the cache use a static thread_local map between the various non-static objects and the desired value type. When a non-static object destructs, it simply nulls out the entry in the static map. This leaves an entry in the map, but erases any of the data for the associated value. The current use cases for this are in the MLIRContext, meaning that the number of items in the static map is ~1-2, which is not costly enough to warrant the complexity of pruning. If a use case arises that requires pruning of the map, the functionality can be added.
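
A much-simplified, self-contained sketch of the pattern (it omits the cross-thread entry invalidation that the real ThreadLocalCache performs on destruction; names are illustrative only):

```
#include <iostream>
#include <memory>
#include <thread>
#include <unordered_map>

// Each ThreadLocalCache instance has a non-static lifetime, but the per-thread
// storage is a static thread_local map keyed by the instance's address.
template <typename ValueT>
class ThreadLocalCache {
public:
  ValueT &get() {
    auto &map = getThreadLocalMap();
    std::unique_ptr<ValueT> &slot = map[this];
    if (!slot)
      slot = std::make_unique<ValueT>();
    return *slot;
  }

private:
  static std::unordered_map<const void *, std::unique_ptr<ValueT>> &
  getThreadLocalMap() {
    static thread_local std::unordered_map<const void *,
                                           std::unique_ptr<ValueT>> map;
    return map;
  }
};

int main() {
  ThreadLocalCache<int> cache;
  cache.get() = 7; // main thread's copy
  std::thread worker([&] {
    // The worker thread sees a fresh, value-initialized int (0), not 7.
    std::cout << "worker sees " << cache.get() << "\n";
  });
  worker.join();
  std::cout << "main sees " << cache.get() << "\n"; // prints 7
  return 0;
}
```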

This is especially useful in the context of MLIR for implementing thread-local caching of context-level objects that would otherwise have very high lock contention. This revision adds a thread-local cache in the MLIRContext for attributes, identifiers, and types to reduce some of the locking burden. This led to a speedup of several hundred milliseconds when compiling a conversion pass on a very large MLIR module (>300K operations).

Differential Revision: https://reviews.llvm.org/D82597
2020-08-07 13:43:25 -07:00
River Riddle 86646be315 [mlir] Refactor StorageUniquer to require registration of possible storage types
This allows for bucketing the different possible storage types, with each bucket having its own allocator/mutex/instance map. This greatly reduces the amount of lock contention when multi-threading is enabled. On some non-trivial .mlir modules (>300K operations), this led to a compile-time decrease for a single conversion pass of around half a second (>25%).
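
A self-contained sketch of the bucketing idea (illustrative only, not the actual StorageUniquer API): give each registered storage kind its own mutex and instance map, so threads uniquing different kinds never contend on the same lock.

```
#include <iostream>
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

// One bucket per registered storage kind: its own lock and instance map, so
// uniquing different kinds never contends on a single global mutex.
struct Bucket {
  std::mutex lock;
  std::unordered_map<std::string, std::unique_ptr<std::string>> instances;
};

class BucketedUniquer {
public:
  // "Registration" assigns each storage kind its own bucket index.
  // (Must be called before concurrent use in this simplified sketch.)
  unsigned registerKind() {
    buckets.push_back(std::make_unique<Bucket>());
    return static_cast<unsigned>(buckets.size() - 1);
  }

  // Get-or-create the unique instance for `key` within the given bucket.
  const std::string *getOrCreate(unsigned kind, const std::string &key) {
    Bucket &bucket = *buckets[kind];
    std::lock_guard<std::mutex> guard(bucket.lock);
    std::unique_ptr<std::string> &slot = bucket.instances[key];
    if (!slot)
      slot = std::make_unique<std::string>(key);
    return slot.get();
  }

private:
  std::vector<std::unique_ptr<Bucket>> buckets;
};

int main() {
  BucketedUniquer uniquer;
  unsigned intKind = uniquer.registerKind();
  const std::string *a = uniquer.getOrCreate(intKind, "i32");
  const std::string *b = uniquer.getOrCreate(intKind, "i32");
  std::cout << (a == b ? "uniqued" : "distinct") << "\n"; // uniqued
  return 0;
}
```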

Differential Revision: https://reviews.llvm.org/D82596
2020-08-07 13:43:24 -07:00
Tim Shen b53fd9cdba [MLIR] Add getSizeInBits() for tensor of complex
Differential Revision: https://reviews.llvm.org/D85382
2020-08-07 12:38:49 -07:00
Konrad Dobros 9414a71aaa [mlir][spirv] Add correct handling of Kernel and Addresses capabilities
This change adds the initial support needed to generate OpenCL-compliant SPIR-V.
If the Kernel capability is declared, then the memory model becomes OpenCL.
If the Addresses capability is declared, then the addressing model becomes Physical64.
Additionally, for the Kernel capability, interface variable ABI attributes are not
generated, as the entry point function is expected to take normal arguments.
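
A tiny, self-contained sketch of that capability-to-model mapping (enum and function names are illustrative, not the MLIR SPIR-V dialect's):

```
#include <iostream>
#include <set>
#include <string>

enum class MemoryModel { GLSL450, OpenCL };
enum class AddressingModel { Logical, Physical64 };

// Derive the module's memory and addressing models from declared capabilities,
// mirroring the rule described above.
void deduceModels(const std::set<std::string> &caps, MemoryModel &mem,
                  AddressingModel &addr) {
  mem = caps.count("Kernel") ? MemoryModel::OpenCL : MemoryModel::GLSL450;
  addr = caps.count("Addresses") ? AddressingModel::Physical64
                                 : AddressingModel::Logical;
}

int main() {
  MemoryModel mem;
  AddressingModel addr;
  deduceModels({"Kernel", "Addresses"}, mem, addr);
  std::cout << (mem == MemoryModel::OpenCL) << " "
            << (addr == AddressingModel::Physical64) << "\n"; // 1 1
  return 0;
}
```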

Differential Revision: https://reviews.llvm.org/D85196
2020-08-07 12:29:21 -07:00
Nicolas Vasilache 2a01d7f7b6 [mlir][SCF] Add utility to outline the then and else branches of an scf.IfOp
Differential Revision: https://reviews.llvm.org/D85449
2020-08-07 14:49:49 -04:00
Nicolas Vasilache 3110e7b077 [mlir] Introduce AffineMinSCF folding as a pattern
This revision adds a folding pattern to replace affine.min ops by the actual min value, when it can be determined statically from the strides and bounds of the enclosing scf loop.

This matches the type of expressions that Linalg produces during tiling and simplifies boundary checks. For now, Linalg depends on both Affine and SCF, but they do not depend on each other, so the pattern is added there.
In the future this will move to a more appropriate place once one is determined.

The canonicalization of AffineMinOp operations in the context of enclosing scf.for and scf.parallel proceeds by:
  1. building an affine map where uses of the induction variable of a loop
  are replaced by `%lb + %step * floordiv(%iv - %lb, %step)` expressions.
  2. checking if any of the results of this affine map divides all the other
  results (in which case it is also guaranteed to be the min).
  3. replacing the AffineMinOp by the result of (2).

The algorithm is functional in simple parametric tiling cases by using semi-affine maps. However, simplifications of such semi-affine maps are not yet available, so the canonicalization does not yet succeed in those cases.
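
A small, self-contained check of the identity used in step 1 (purely illustrative, not the pattern's code): for every iteration of a loop with lower bound `lb` and step `step`, the induction variable satisfies `iv == lb + step * floordiv(iv - lb, step)`, which is what justifies rewriting uses of `%iv` that way.

```
#include <cassert>
#include <iostream>

// Floor division, matching affine floordiv semantics for negative operands.
long floordiv(long a, long b) {
  long q = a / b;
  return (a % b != 0 && ((a % b < 0) != (b < 0))) ? q - 1 : q;
}

int main() {
  const long lb = 3, ub = 40, step = 7;
  for (long iv = lb; iv < ub; iv += step) {
    // Step 1 of the canonicalization rewrites %iv as
    // %lb + %step * floordiv(%iv - %lb, %step); check the identity holds.
    long rewritten = lb + step * floordiv(iv - lb, step);
    assert(rewritten == iv);
  }
  std::cout << "identity holds for all iterations\n";
  return 0;
}
```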

Differential Revision: https://reviews.llvm.org/D82009
2020-08-07 14:30:38 -04:00
aartbik c3c95b9c80 [mlir] [VectorOps] Improve lowering of extract_strided_slice (and friends like shape_cast)
Using a shuffle for the last recursive step in progressive lowering not only
results in much more compact IR, but also in more efficient code (since the
backend is no longer confused about subvector aliasing for longer vectors).

E.g. the following

  %f = vector.shape_cast %v0: vector<1024xf32> to vector<32x32xf32>

yields much better x86-64 code that runs 3x faster than the original.

Reviewed By: bkramer, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D85482
2020-08-07 09:21:05 -07:00
Mehdi Amini 575b22b5d1 Revisit Dialect registration: require and store a TypeID on dialects
This patch moves the registration to a method in the MLIRContext: getOrCreateDialect<ConcreteDialect>()

This method requires the dialect to provide a static getDialectNamespace()
and store a TypeID on the Dialect itself, which allows lazily creating
a dialect when it is not yet loaded in the context.
As a side effect, it means that duplicate registration of the same
dialect is no longer an issue.

To limit the boilerplate, TableGen dialect generation is modified to
emit the constructor entirely and separately invoke an "init()" method
that the user implements.
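
A self-contained sketch of the lazy get-or-create idea (simplified; the real MLIRContext API and TypeID mechanism differ): dialects are keyed by a per-class ID, so asking for the same dialect twice returns the existing instance instead of re-registering it.

```
#include <iostream>
#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

struct Dialect {
  virtual ~Dialect() = default;
};

// Simplified context: lazily creates a dialect the first time it is requested
// and returns the cached instance afterwards, so duplicate "registration"
// of the same dialect is harmless.
class Context {
public:
  template <typename ConcreteDialect>
  ConcreteDialect *getOrCreateDialect() {
    std::unique_ptr<Dialect> &slot =
        dialects[std::type_index(typeid(ConcreteDialect))];
    if (!slot)
      slot = std::make_unique<ConcreteDialect>();
    return static_cast<ConcreteDialect *>(slot.get());
  }

private:
  std::unordered_map<std::type_index, std::unique_ptr<Dialect>> dialects;
};

struct ToyDialect : Dialect {
  static const char *getDialectNamespace() { return "toy"; }
};

int main() {
  Context ctx;
  ToyDialect *first = ctx.getOrCreateDialect<ToyDialect>();
  ToyDialect *second = ctx.getOrCreateDialect<ToyDialect>(); // no duplicate
  std::cout << (first == second) << " " << ToyDialect::getDialectNamespace()
            << "\n"; // prints: 1 toy
  return 0;
}
```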

Differential Revision: https://reviews.llvm.org/D85495
2020-08-07 15:57:08 +00:00
Alexander Belyaev 9c94908320 [mlir] Add support for unranked case for `tensor_store` and `tensor_load` ops.

Differential Revision: https://reviews.llvm.org/D85518
2020-08-07 14:32:52 +02:00
Alex Zinenko 87a89e0f77 [mlir] Remove llvm::LLVMContext and llvm::Module from mlir::LLVMDialectImpl
The original modeling of LLVM IR types in the MLIR LLVM dialect wrapped
LLVM IR types and therefore required the LLVMContext in which they were created
to outlive them. This was solved by placing the LLVMContext inside the dialect,
thus giving it the lifetime of the MLIRContext. This has led to numerous issues
caused by the lack of thread-safety of LLVMContext and the need to re-create
LLVM IR modules, obtained by translating from MLIR, in different LLVM contexts
to enable parallel compilation. Similarly, llvm::Module had been introduced to
keep track of identified structure types that could not be modeled properly.

A recent series of commits changed the modeling of LLVM IR types in the MLIR
LLVM dialect so that it no longer wraps LLVM IR types and has no dependence on
LLVMContext and changed the ownership model of the translated LLVM IR modules.
Remove LLVMContext and LLVM modules from the implementation of MLIR LLVM
dialect and clean up the remaining uses.

The only part of LLVM IR that remains necessary for the LLVM dialect is the
data layout. It should be moved from the dialect level to the module level and
replaced with an MLIR-based representation to remove the dependency of the
LLVMDialect on LLVM IR library.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85445
2020-08-07 14:30:31 +02:00
Alex Zinenko 16b0225377 [mlir] do not require LLVMDialect in conversion from LLVM IR
Historically, LLVMDialect has been required in the conversion from LLVM IR in
order to be able to construct types. This is no longer necessary with the new
type model and the dialect can be replaced with a local LLVM context.

Reviewed By: rriddle, mehdi_amini

Differential Revision: https://reviews.llvm.org/D85444
2020-08-07 14:27:04 +02:00
Alex Zinenko db1c197bf8 [mlir] take LLVMContext in MLIR-to-LLVM-IR translation
Due to the original type system implementation, LLVMDialect in MLIR contains an
LLVMContext in which the relevant objects (types, metadata) are created. When
an MLIR module using the LLVM dialect (and related intrinsic-based dialects
NVVM, ROCDL, AVX512) is converted to LLVM IR, it could only live in the
LLVMContext owned by the dialect. The type system no longer relies on the
LLVMContext, so this limitation can be removed. Instead, translation functions
now take a reference to an LLVMContext in which the LLVM IR module should be
constructed. The caller of the translation functions is responsible for
ensuring the same LLVMContext is not used concurrently as the translation no
longer uses a dialect-wide context lock.

As an additional bonus, this change removes the need to recreate the LLVM IR
module in a different LLVMContext through printing and parsing back, decreasing
the compilation overhead in JIT and GPU-kernel-to-blob passes.

Reviewed By: rriddle, mehdi_amini

Differential Revision: https://reviews.llvm.org/D85443
2020-08-07 14:22:30 +02:00
Nicolas Vasilache 3f906c54a2 [mlir][Vector] Add 2-D vector contract lowering to ReduceOp
This new pattern mixes vector.transpose and direct lowering to vector.reduce.
This allows more progressive lowering than immediately going to insert/extract and
composes more nicely with other canonicalizations.
This has 2 use cases:
1. for very wide vectors the generated IR may be much smaller
2. when we have a custom lowering for transpose ops we can target it directly
rather than relying on LLVM

Differential Revision: https://reviews.llvm.org/D85428
2020-08-07 06:17:48 -04:00
MaheshRavishankar 25e8668e88 [mlir][SPIR-V] Fix wrongly placed Rationale section.
Differential Revision: https://reviews.llvm.org/D85461
2020-08-06 11:51:42 -07:00
Nicolas Vasilache 1353cbc257 [mlir][Vector] NFC - Use matchAndRewrite in ContractionOp lowering patterns
Replace the use of separate match and rewrite, which unnecessarily duplicates logic.

Differential Revision: https://reviews.llvm.org/D85421
2020-08-06 09:02:25 -04:00
Nicolas Vasilache 54fafd17a7 [mlir][Linalg] Introduce canonicalization to remove dead LinalgOps
When any of the memrefs in a structured linalg op has a zero dimension, it becomes dead.
This is consistent with the fact that linalg ops deduce their loop bounds from their operands.

Note, however, that this is not the case for `tensor<0xelt_type>`, which is a special convention
that must be lowered away into either `memref<elt_type>` or just `elt_type` before this
canonicalization can kick in.

Differential Revision: https://reviews.llvm.org/D85413
2020-08-06 06:08:46 -04:00
Christian Sigg 45676a8936 [MLIR] Change GpuLaunchFuncToGpuRuntimeCallsPass to wrap a RewritePattern with the same functionality.
The RewritePattern will become one of several, and will be part of the LLVM conversion pass (instead of a separate pass following LLVM conversion).

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D84946
2020-08-06 11:55:46 +02:00
Alexander Belyaev 3effc35015 [mlir] Lower DimOp to LLVM for unranked memrefs.
Differential Revision: https://reviews.llvm.org/D85361
2020-08-06 11:46:11 +02:00
Alex Zinenko 5446ec8507 [mlir] take MLIRContext instead of LLVMDialect in getters of LLVMType's
Historical modeling of the LLVM dialect types had been wrapping LLVM IR types
and therefore needed access to the instance of LLVMContext stored in the
LLVMDialect. The new modeling does not rely on that and only needs the
MLIRContext that is used for uniquing, similarly to other MLIR types. Change
LLVMType::get<Kind>Ty functions to take `MLIRContext *` instead of
`LLVMDialect *` as first argument. This brings the code base closer to
completely removing the dependence on LLVMContext from the LLVMDialect,
together with additional support for thread-safety of its use.

Depends On D85371

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85372
2020-08-06 11:05:40 +02:00
Alex Zinenko d3a9807674 [mlir] Remove most uses of LLVMDialect::getModule
This prepares for the removal of llvm::Module and LLVMContext from the
mlir::LLVMDialect.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85371
2020-08-06 10:54:30 +02:00
aartbik 39379916a7 [mlir] [VectorOps] Add masked load/store operations to Vector dialect
The intrinsics were already supported, and vector.transfer_read/write lowered
directly into these operations. By providing them as individual ops, however,
clients can use them directly, and it opens up progressive lowering of transfer
operations at higher levels (rather than direct lowering to LLVM IR as done now).

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D85357
2020-08-05 16:45:24 -07:00
Alex Zinenko b2ab375d1f [mlir] use the new stateful LLVM type translator by default
The previous type model in the LLVM dialect did not support identified structure
types properly and therefore could use stateless translations implemented as
free functions. The new model supports identified structs and must keep track
of the identified structure types present in the target context (LLVMContext or
MLIRContext) to avoid creating duplicate structs due to LLVM's type
auto-renaming. Expose the stateful type translation classes and use them during
translation, storing the state as part of ModuleTranslation.

Drop the test type translation mechanism that is no longer necessary and update
the tests to exercise type translation as part of the main translation flow.

Update the code in vector-to-LLVM dialect conversion that relied on stateless
translation to use the new class in a stateless manner.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85297
2020-08-06 00:36:33 +02:00
Lei Zhang 0d03b3901d [mlir][StandardToSPIRV] Use spv.UMod for index re-calculation
Per Vulkan's SPIR-V environment spec: "While the OpSRem and OpSMod
instructions are supported by the Vulkan environment, they require
non-negative values and thus do not enable additional functionality
beyond what OpUMod provides."

The `getOffsetForBitwidth` function is used for lowering std.load
and std.store, whose indices are of `index` type and cannot be
negative. So it is safe to use spv.UMod directly here and still be
exact. The comment was also made explicit about this assumption.
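
A self-contained sketch of the kind of offset arithmetic involved (illustrative only, not the actual `getOffsetForBitwidth` code): because the element index is non-negative, the bit offset within the 32-bit storage word can be computed with an unsigned modulo, which is what spv.UMod provides.

```
#include <cstdint>
#include <iostream>

// Bit offset of element `index` (of `sourceBits` width) inside the 32-bit
// word that stores it. `index` comes from an `index`-typed value and is
// therefore non-negative, so an unsigned modulo is exact here.
uint32_t offsetForBitwidth(uint32_t index, uint32_t sourceBits,
                           uint32_t targetBits = 32) {
  uint32_t elementsPerWord = targetBits / sourceBits;
  return (index % elementsPerWord) * sourceBits;
}

int main() {
  // Four i8 elements per i32 word: indices 0..3 land at bits 0, 8, 16, 24.
  for (uint32_t i = 0; i < 6; ++i)
    std::cout << "i8 index " << i << " -> bit offset "
              << offsetForBitwidth(i, 8) << "\n";
  return 0;
}
```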

Differential Revision: https://reviews.llvm.org/D83714
2020-08-05 14:52:04 -04:00
Lei Zhang 48378a32af [spirv] Fix bitwidth emulation for Workgroup storage class
If Int16 is not available, 16-bit integers inside Workgroup storage
class should be emulated via 32-bit integers. This was previously
broken because the capability querying logic was incorrectly
intercepting all storage classes when it was meant to handle only
interface storage classes. The return point was adjusted to fix this.

Differential Revision: https://reviews.llvm.org/D85308
2020-08-05 14:44:03 -04:00
Alexander Belyaev 9fdd0df949 [mlir][nfc] Rename `promoteMemRefDescriptors` to `promoteOperands`.
`promoteMemRefDescriptors` also converts the types of every operand, not only
memref-typed ones. The name `promoteMemRefDescriptors` does not imply that.

Differential Revision: https://reviews.llvm.org/D85325
2020-08-05 20:24:48 +02:00
Vincent Zhao b727cfed5e [MLIR][LinAlg] Use AnyTypeOf for LinalgOperand for better error msg.
Previously, `LinalgOperand` was defined with `Type<Or<..,>>`, which produces
error messages that are not very readable when it is not matched, e.g.,

```
'linalg.generic' op operand #0 must be anonymous_326, but got ....
```

This is simply because the `description` property is not properly set.

This diff switches to use `AnyTypeOf` for `LinalgOperand`, which automatically
generates a description based on the allowed types provided.

As a result, the error message now becomes:

```
'linalg.generic' op operand #0 must be ranked tensor of any type values or strided memref of any type values, but got ...
```

This is clearer and more informative.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84428
2020-08-05 20:13:45 +02:00
Uday Bondhugula 1d75f004ab [MLIR][NFC] Fix clang-tidy warnings in std to llvm conversion
Fix clang-tidy warnings in std to llvm conversion.
2020-08-05 22:12:05 +05:30
Alexander Belyaev bc7456fd8a [mlir] Fix rank bitwidth in UnrankedMemRefType conversion.
Differential Revision: https://reviews.llvm.org/D85300
2020-08-05 18:35:23 +02:00
Alex Zinenko 75f239e975 [mlir] Initial version of C APIs
Introduce an initial version of the C API for MLIR core IR components: Value, Type,
Attribute, Operation, Region, Block, Location. These APIs allow for both
inspection and creation of the IR in the generic form and are intended for wrapping
in high-level library- and language-specific constructs. At this point, there
is no stability guarantee provided for the API.

Reviewed By: stellaraccident, lattner

Differential Revision: https://reviews.llvm.org/D83310
2020-08-05 15:04:08 +02:00
Alex Zinenko 4e491570b5 [mlir] Remove LLVMTypeTestDialect
This dialect was introduced during the bring-up of the new LLVM dialect type
system for testing purposes. The main LLVM dialect now uses the new type system
and the test dialect is no longer necessary, so remove it.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85224
2020-08-05 14:39:36 +02:00
Alex Zinenko bdb9295664 [mlir] Fix convert-to-llvmir.mlir test broken due to syntax change
The syntax of the LLVM dialect types changed between the time the code
was written and it was submitted, leading to a test failure. Update the
syntax.
2020-08-05 13:29:35 +02:00
Hans Wennborg 3ab01550b6 Revert "[CMake] Simplify CMake handling for zlib"
This quietly disabled use of zlib on Windows even when building with
-DLLVM_ENABLE_ZLIB=FORCE_ON.

> Rather than handling zlib handling manually, use find_package from CMake
> to find zlib properly. Use this to normalize the LLVM_ENABLE_ZLIB,
> HAVE_ZLIB, HAVE_ZLIB_H. Furthermore, require zlib if LLVM_ENABLE_ZLIB is
> set to YES, which requires the distributor to explicitly select whether
> zlib is enabled or not. This simplifies the CMake handling and usage in
> the rest of the tooling.
>
> This is a reland of abb0075 with all followup changes and fixes that
> should address issues that were reported in PR44780.
>
> Differential Revision: https://reviews.llvm.org/D79219

This reverts commit 10b1b4a231 and follow-ups
64d99cc6ab and
f9fec0447e.
2020-08-05 12:31:44 +02:00
Arpith C. Jacob fab4b59961 [mlir] Conversion of ViewOp with memory space to LLVM.
Handle the case where the ViewOp takes in a memref that has
a memory space.

Reviewed By: ftynse, bondhugula, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D85048
2020-08-05 12:19:52 +02:00
Alexander Belyaev a3d427d30c [mlir] Lower RankOp to LLVM for unranked memrefs.
Differential Revision: https://reviews.llvm.org/D85273
2020-08-05 12:13:43 +02:00
Frederik Gossen 4cd923784e [MLIR][Shape] Expose extent tensor type builder
The extent tensor type is a `tensor<?xindex>` that is used in the shape dialect.
To facilitate the use of this type when working with the shape dialect, we
expose the helper function for its construction.

Differential Revision: https://reviews.llvm.org/D85121
2020-08-05 09:42:57 +00:00