Commit Graph

4236 Commits

Author SHA1 Message Date
Alex Zinenko f9cdc61d11 [mlir] provide a version of data layout size hooks in bits
This is useful for bit-packing types such as vectors and tuples as well as for
exotic architectures that have non-8-bit bytes.
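
A minimal query sketch (my own illustration, not from the patch; `querySizes` is a made-up helper and the integer return types are simplified):

```
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Interfaces/DataLayoutInterfaces.h"

// Query both byte- and bit-granular sizes for a type under the data layout
// scoped to `module`.
void querySizes(mlir::ModuleOp module, mlir::Type type) {
  mlir::DataLayout layout(module);
  unsigned sizeInBytes = layout.getTypeSize(type);
  unsigned sizeInBits = layout.getTypeSizeInBits(type); // new bit-level hook
  (void)sizeInBytes;
  (void)sizeInBits;
}
```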

Depends On D98500

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D98524
2021-03-24 15:13:40 +01:00
Alex Zinenko 1916b0e098 [mlir] support data layout specs on ModuleOp
ModuleOp is a natural place to provide scoped data layout information. However,
it is undesirable for ModuleOp to implement the entirety of
DataLayoutOpInterface because that would require either pushing the interface
inside the IR library instead of a separate library, or putting the default
implementation of the interface as inline functions in headers leading to
binary bloat. Instead, ModuleOp accepts an arbitrary data layout spec attribute
and has a dedicated hook to extract it, and DataLayout is modified to know
about ModuleOp particularities.

Reviewed By: herhut, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98500
2021-03-24 15:13:38 +01:00
Mehdi Amini d905c10353 Add a mechanism for Dialects to provide a fallback for OpInterface
This mechanism makes it possible for a dialect to not register all
operations but still answer interface-based queries.
This can useful for dialects that are "open" or connected to an external
system and still interoperate with the compiler. It can also open up the
possibility to have a more extensible compiler at runtime: the compiler
does not need a pre-registration for each operation and the dialect can
inject behavior dynamically.

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D93085
2021-03-24 08:41:40 +00:00
River Riddle 76f3c2f3f3 [mlir][Pattern] Add better support for using interfaces/traits to match root operations in rewrite patterns
To match an interface or trait, users currently have to use the `MatchAny` tag. This tag can be quite problematic for compile time for things like the canonicalizer, as the `MatchAny` patterns may get applied to *every* operation. This revision adds better support by bucketing interface/trait patterns based on which registered operations have them registered. This means that moving forward we will only attempt to match these patterns to operations that have this interface registered. To simplify defining patterns that match traits and interfaces, two new utility classes have been added: OpTraitRewritePattern and OpInterfaceRewritePattern.
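
A hedged sketch of using one of the new utilities (the pattern itself is illustrative, not part of this patch; ViewLikeOpInterface is just a convenient real interface to anchor the example):

```
#include "mlir/IR/PatternMatch.h"
#include "mlir/Interfaces/ViewLikeInterface.h"

// Matches any registered operation implementing ViewLikeOpInterface; with
// this change, such patterns are only tried on ops known to have the
// interface instead of on every operation.
struct SimplifyViewLike
    : public mlir::OpInterfaceRewritePattern<mlir::ViewLikeOpInterface> {
  using mlir::OpInterfaceRewritePattern<
      mlir::ViewLikeOpInterface>::OpInterfaceRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(mlir::ViewLikeOpInterface op,
                  mlir::PatternRewriter &rewriter) const override {
    // Rewrite using only interface methods, e.g. op.getViewSource().
    return mlir::failure();
  }
};
```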

Differential Revision: https://reviews.llvm.org/D98986
2021-03-23 14:05:33 -07:00
Chris Lattner 782c534117 [ODS] Implement a new 'hasCanonicalizeMethod' bit for canonicalization patterns.
This provides a simplified way to implement 'matchAndRewrite' style
canonicalization patterns for ops that don't need the full power of
RewritePatterns.  Using this style, you can implement a static method
with a signature like:

```
LogicalResult AssertOp::canonicalize(AssertOp op, PatternRewriter &rewriter) {
  return success();
}
```

instead of dealing with defining RewritePattern subclasses.  This also
adopts this for a few canonicalization patterns in the std dialect to
show how it works.

Differential Revision: https://reviews.llvm.org/D99143
2021-03-23 13:45:45 -07:00
Nicolas Vasilache 2240568579 [MLIR][Linalg] Hoist padding across multiple levels of tiling
This revision introduces proper backward slice computation during the hoisting of
PadTensorOp. This allows hoisting padding even across multiple levels of tiling.
Such hoisting requires the proper handling of loop bounds that may depend on enclosing
loop variables.

Differential revision: https://reviews.llvm.org/D98965
2021-03-23 17:47:32 +00:00
Frederik Gossen d78374b2d3 [MLIR] Add callback builder for `shape.assuming` op
Differential Revision: https://reviews.llvm.org/D99153
2021-03-23 11:46:01 +01:00
Christian Sigg ddae61dfef [mlir] Remove deprecated methods from mlir::OpState
Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D99150
2021-03-23 11:08:04 +01:00
Chris Lattner 79d7f618af Rename FrozenRewritePatternList -> FrozenRewritePatternSet; NFC.
This nicely aligns the naming with RewritePatternSet. This type isn't
as widely used, but we keep a using declaration in place to help with
downstream consumption of this change.

Differential Revision: https://reviews.llvm.org/D99131
2021-03-22 17:40:45 -07:00
Mehdi Amini a0c776fc94 Add a mechanism for Dialects to customize printing/parsing operations when they are unregistered
Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D99007
2021-03-23 00:40:03 +00:00
Chris Lattner dc4e913be9 [PatternMatch] Big mechanical rename OwningRewritePatternList -> RewritePatternSet and insert -> add. NFC
This doesn't change APIs, this just cleans up the many in-tree uses of these
names to use the new preferred names.  We'll keep the old names around for a
couple weeks to help transitions.

Differential Revision: https://reviews.llvm.org/D99127
2021-03-22 17:20:50 -07:00
Chris Lattner 549e190236 [PatternRewriter] Rename OwningRewritePatternList -> RewritePatternSet and insert -> add
This maintains the old name to have minimal source impact on downstream codebases, and
does not do the huge mechanical patch. I expect the huge mechanical patch to land
sometime this week, but we can keep the old names around for a couple of weeks to reduce
impact on downstream projects.

Differential Revision: https://reviews.llvm.org/D99119
2021-03-22 16:33:18 -07:00
Chris Lattner 6874726610 [PatternMatching] Add convenience insert method to OwningRewritePatternList. NFC.
This allows adding a C function pointer as a matchAndRewrite style pattern, which
is a very common case.  This adopts it in ExpandTanh to show how it reduces a level
of nesting.

We could allow C++ lambdas here, but that doesn't work as well with type inference
in the common case.  Instead of:

  patterns.insert(convertTanhOp);

you need to specify:

  patterns.insert<math::TanhOp>(convertTanhOp);

which is boilerplate'y.  Capturing state like this is very uncommon, so we choose
to require clients to define their own structs and use the non-convenience method
when they need to do so.
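
A minimal sketch of the free-function form described above (the body is elided and `populateExpandTanhPattern` is a made-up name; only the registration call comes from this patch):

```
#include "mlir/Dialect/Math/IR/Math.h"
#include "mlir/IR/PatternMatch.h"

// A plain function with the matchAndRewrite shape; the matched op type is
// inferred from the first parameter when the function is registered.
static mlir::LogicalResult convertTanhOp(mlir::math::TanhOp op,
                                         mlir::PatternRewriter &rewriter) {
  // ... expand tanh into exp-based arithmetic ...
  return mlir::failure();
}

void populateExpandTanhPattern(mlir::OwningRewritePatternList &patterns) {
  patterns.insert(convertTanhOp); // no template argument needed here
}
```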

Differential Revision: https://reviews.llvm.org/D99039
2021-03-22 11:18:21 -07:00
Adrian Kuegel c691b9686b [mlir] Add an option to still use bottom-up traversal
GreedyPatternRewriteDriver was changed from bottom-up traversal to top-down traversal. Not all passes work with that traversal order yet. To give some time for fixing them, add an option that allows switching back to bottom-up traversal. Use this option in FusionOfTensorOpsPass, which fails otherwise.

Differential Revision: https://reviews.llvm.org/D99059
2021-03-22 09:49:44 +01:00
Stella Laurenzo bdf4e93b2c Fix extraneous context parameter in templated helper function.
(missed in lattner's overall updates related to D99028)
2021-03-22 05:08:44 +00:00
Chris Lattner 3a506b31a3 Change OwningRewritePatternList to carry an MLIRContext with it.
This updates the codebase to pass the context when creating an instance of
OwningRewritePatternList, and starts removing extraneous MLIRContext
parameters.  There are many many more to be removed.
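
A small sketch of the construction change (my own illustration, not code from the patch):

```
#include "mlir/IR/PatternMatch.h"

void buildPatterns(mlir::MLIRContext *context) {
  // Previously: OwningRewritePatternList patterns;
  // Now the list is constructed with (and carries) the context:
  mlir::OwningRewritePatternList patterns(context);
  // patterns.insert<SomePattern>(...) as before; redundant MLIRContext
  // parameters on individual patterns are being removed incrementally.
}
```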

Differential Revision: https://reviews.llvm.org/D99028
2021-03-21 10:06:31 -07:00
Chris Lattner 361b7d125b [Canonicalizer] Process regions top-down instead of bottom up & reuse existing constants.
This reapplies b5d9a3c / https://reviews.llvm.org/D98609 with a one line fix in
processExistingConstants to skip() when erasing a constant we've already seen.

Original commit message:

 1) Change the canonicalizer to walk the function in top-down order instead of
    bottom-up order.  This composes well with the "top down" nature of constant
    folding and simplification, reducing iterations and re-evaluation of ops in
    simple cases.
 2) Explicitly enter existing constants into the OperationFolder table before
    canonicalizing.  Previously we would "constant fold" them and rematerialize
    them, wastefully recreating a bunch of constants, which led to pointless
    memory traffic.

Both changes together provide a 33% speedup for canonicalize on some mid-size
CIRCT examples.

One artifact of this change is that the constants generated in normal pattern
application get inserted at the top of the function as the patterns are applied.
Because of this, we get "inverted" constants more often, which is an aesthetic
change to the IR but does permute some testcases.

Differential Revision: https://reviews.llvm.org/D99006
2021-03-20 16:30:15 -07:00
Mehdi Amini cdb6eb7e83 Update syntax for amx.tile_muli to use two Unit attr to mark the zext case
This ties the annotation to the operand, and the use of a keyword makes
its meaning more explicit/readable.

Differential Revision: https://reviews.llvm.org/D99001
2021-03-20 04:12:24 +00:00
River Riddle caddfbd2a9 [mlir][docs] Remove the BuiltinDialect documentation from langref and generate it from ODS
Now that all of the builtin dialect is generated from ODS, its documentation in LangRef can be split out and replaced with references to Dialects/Builtin.md. LangRef is quite crusty right now and should really have a full cleanup done in a followup.

Differential Revision: https://reviews.llvm.org/D98562
2021-03-19 18:21:33 -07:00
River Riddle d75a611afb [mlir] Update `simplifyRegions` to use RewriterBase for erasure notifications
This allows for notifying callers when operations/blocks get erased, which is especially useful for the greedy pattern driver. The current greedy pattern driver "throws away" all information on constants in the operation folder because it doesn't know if they get erased or not. By passing in RewriterBase, we can directly track this and prevent the need for the pattern driver to rediscover all of the existing constants. In some situations this cuts the compile time of the canonicalizer in half.

Differential Revision: https://reviews.llvm.org/D98755
2021-03-19 16:33:54 -07:00
Nicolas Vasilache 5b2d8503d1 [mlir][Linalg] NFC - Expose helper function `substituteMin`. 2021-03-19 16:26:52 +00:00
Christian Sigg a5f9cda173 [mlir] Rename gpu-to-llvm pass implementation file
Also remove populate patterns function and binary annotation name option.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D98930
2021-03-19 13:58:13 +01:00
Christian Sigg 74ffe8dc59 [mlir] Remove ConvertKernelFuncToBlob
All users have been converted to gpu::SerializeToBlobPass.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D98928
2021-03-19 09:33:47 +01:00
Christian Sigg a825fb2c07 [mlir] Remove mlir-rocm-runner
This change combines for ROCm what was done for CUDA in D97463, D98203, D98360, and D98396.

I did not try to compile SerializeToHsaco.cpp or test mlir/test/Integration/GPU/ROCM because I don't have an AMD card. I fixed the things that had obvious bit-rot though.

Reviewed By: whchung

Differential Revision: https://reviews.llvm.org/D98447
2021-03-19 00:24:10 -07:00
Vladislav Vinogradov 270a336ff4 [mlir] Fix Python bindings tests failure in Debug mode after D98474
Add an extra `type.isa<FloatType>()` check to the `FloatAttr::get(Type, double)` method.
Otherwise it tries to call `type.cast<FloatType>()`, which fails with an assertion in Debug mode.

The `!type.isa<FloatType>()` case just redirects the call to `FloatAttr::get(Type, APFloat)`,
which will perform the actual check and emit an appropriate error.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D98764
2021-03-19 05:32:32 +00:00
Rob Suderman 286a9d467e [mlir][tosa] Add lowering for tosa.rescale to linalg.generic
This adds a tosa.apply_scale operation that handles the scaling operation
common to quantized operations. This scalar operation is lowered
in TosaToStandard.

We use a separate ApplyScale factorization as this is a replicable pattern
within TOSA. ApplyScale can be reused within pool/convolution/mul/matmul
for their quantized variants.

Tests are added to both tosa-to-standard and tosa-to-linalg-on-tensors
that verify each pass is correct.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D98753
2021-03-18 16:14:05 -07:00
Alexander Belyaev 283799157e [mlir][linalg] Add support for memref inputs/outputs for `linalg.tiled_loop`.
Also use `ArrayAttr` to pass iterator types to the TiledLoopOp builder.

Differential Revision: https://reviews.llvm.org/D98871
2021-03-18 16:11:03 +01:00
David Truby de155f4af2 [MLIR][OpenMP] Pretty printer and parser for omp.wsloop
Co-authored-by: Kiran Chandramohan <kiran.chandramohan@arm.com>

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D92327
2021-03-18 13:37:01 +00:00
Vladislav Vinogradov 02834e1bd9 [mlir][ODS] Get rid of limitations in rewriters generator
Do not limit the number of arguments in rewrite patterns.

Introduce a separate `FmtStrVecObject` class to handle
formatting of a variadic `std::string` array.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D97839
2021-03-18 12:21:06 +03:00
Ahmed Taei f5963944d9 Add arm_neon.sdot operation
Differential Revision: https://reviews.llvm.org/D98198
2021-03-17 08:24:58 -07:00
Vladislav Vinogradov fee9054232 [mlir][ODS] Support specialized Attribute class for Enums
Add a feature to the `EnumAttr` definition to generate a
specialized Attribute class for the particular enumeration.

This class will inherit `StringAttr` or `IntegerAttr` and
will override `classof` and `getValue` methods.

With this class the enumeration predicate can be checked with simple
RTTI calls (`isa`, `dyn_cast`) and it will return the typed enumeration
directly instead of raw string/integer.
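
A hedged usage sketch (the names `MyEnumAttr`/`MyEnum` are placeholders for whatever a particular `EnumAttr` definition generates):

```
// Before: fetch a generic attribute and match on its raw string/integer.
// After: RTTI-style checks plus a typed getValue().
if (auto enumAttr = op->getAttr("kind").dyn_cast_or_null<MyEnumAttr>()) {
  MyEnum value = enumAttr.getValue(); // typed enumeration, no string parsing
  (void)value;
}
```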

Based on the following discussion:
https://llvm.discourse.group/t/rfc-add-enum-attribute-decorator-class/2252

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97836
2021-03-17 16:44:24 +03:00
Adrian Kuegel 4a8c01a02b Move BaseOpWithOffsetSizesAndStrides to OpBase.td
It is used both by the Standard dialect and the MemRef dialect.

Differential Revision: https://reviews.llvm.org/D98777
2021-03-17 13:54:04 +01:00
Stephan Herhut 5837fdc4cc [mlir][llvm] Pass struct results as parameter in c wrapper
Returning structs directly in LLVM does not necessarily align with the C ABI of
the platform. This might happen to work on Linux but for small structs this
breaks on Windows. With this change, the wrappers work platform independently.

Differential Revision: https://reviews.llvm.org/D98725
2021-03-17 12:58:52 +01:00
River Riddle caa7038a89 [mlir][IR] Move the remaining builtin attributes to ODS.
With this revision, all builtin attributes and types will have been moved to the ODS generator.

Differential Revision: https://reviews.llvm.org/D98474
2021-03-16 16:31:53 -07:00
River Riddle 425e11eea1 [mlir][AttrTypeDefGen] Add support for custom parameter comparators
Some parameters to attributes and types rely on special comparison routines other than operator== to ensure equality. This revision adds support for those parameters by allowing them to specify a `comparator` code block that determines if `$_lhs` and `$_rhs` are equal. An example of one of these parameters is APFloat, which requires `bitwiseIsEqual` for bitwise comparison (which we want for attribute equality).

Differential Revision: https://reviews.llvm.org/D98473
2021-03-16 16:31:53 -07:00
River Riddle 85ab413b53 [mlir][PDL] Add support for variadic operands and results in the PDL byte code
Supporting ranges in the byte code requires additional complexity, given that a range can't be easily representable as an opaque void *, as is possible with the existing bytecode value types (Attribute, Type, Value, etc.). To enable representing a range with void *, an auxiliary storage is used for the actual range itself, with the pointer being passed around in the normal byte code memory. For type ranges, a TypeRange is stored. For value ranges, a ValueRange is stored. The above problem represents a majority of the complexity involved in this revision, the rest is adapting/adding byte code operations to support the changes made to the PDL interpreter in the parent revision.

After this revision, PDL will have initial end-to-end support for variadic operands/results.

Differential Revision: https://reviews.llvm.org/D95723
2021-03-16 13:20:19 -07:00
River Riddle 3a833a0e0e [mlir][PDL] Add support for variadic operands and results in the PDL Interpreter
This revision extends the PDL Interpreter dialect to add support for variadic operands and results, with ranges of these values represented via the recently added !pdl.range type. To support this extension, five new operations have been added that closely match the single variant:
* pdl_interp.check_types : Compare a range of types with a known range.
* pdl_interp.create_types : Create a constant range of types.
* pdl_interp.get_operands : Get a range of operands from an operation.
* pdl_interp.get_results : Get a range of results from an operation.
* pdl_interp.switch_types : Switch on a range of types.

This revision handles adding support in the interpreter dialect and the conversion from PDL to PDLInterp. Support for variadic operands and results in the bytecode will be added in a followup revision.

Differential Revision: https://reviews.llvm.org/D95722
2021-03-16 13:20:19 -07:00
River Riddle 1eb6994d6a [mlir][PDL] Add support for variadic operands and results in PDL
This revision extends the PDL dialect to add support for variadic operands and results, with ranges of these values represented via the recently added !pdl.range type. To support this extension, three new operations have been added that closely match the single variant:
* pdl.operands : Define a range of input operands.
* pdl.results : Extract a result group from an operation.
* pdl.types : Define a handle to a range of types.

Support for these in the pdl interpreter dialect and byte code will be added in followup revisions.

Differential Revision: https://reviews.llvm.org/D95721
2021-03-16 13:20:18 -07:00
River Riddle 02c4c0d5b2 [mlir][pdl] Remove CreateNativeOp in favor of a more general ApplyNativeRewriteOp.
This has numerous benefits, given the overly clunky nature of CreateNativeOp:
* Users can now call into arbitrary rewrite functions from inside of PDL, allowing for more natural interleaving of PDL/C++ and enabling for more of the pattern to be in PDL.
* Removes the need for an additional set of C++ functions/registry/etc. The new ApplyNativeRewriteOp will use the same PDLRewriteFunction as the existing RewriteOp. This reduces the API surface area exposed to users.

This revision also introduces a new PDLResultList class. This class is used to provide results of native rewrite functions back to PDL. We introduce a new class instead of using a SmallVector to simplify the work necessary for variadics, given that ranges will require some changes to the structure of PDLValue.

Differential Revision: https://reviews.llvm.org/D95720
2021-03-16 13:20:18 -07:00
River Riddle 242762c9a3 [mlir][pdl] Restructure how results are represented.
Up until now, results have been represented as additional results to a pdl.operation. This is fairly clunky, as it mismatches the representation of the rest of the IR constructs (e.g. pdl.operand) and also isn't a viable representation for operations returned by pdl.create_native. This representation also creates much more difficult problems when factoring in support for variadic result groups, optional results, etc. To resolve some of these problems, and simplify adding support for variable length results, this revision extracts the representation for results out of pdl.operation in the form of a new `pdl.result` operation. This operation returns the result of an operation at a given index, e.g.:

```
%root = pdl.operation ...
%result = pdl.result 0 of %root
```

Differential Revision: https://reviews.llvm.org/D95719
2021-03-16 13:20:18 -07:00
Aart Bik b85d3e27ad [mlir][amx] reformatted examples
The examples were missing the underscore used in the actual ops' format.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D98723
2021-03-16 10:24:57 -07:00
Nicolas Vasilache b661788b77 [mlir] NFC - Expose GlobalCreator so it can be reused. 2021-03-16 12:29:04 +00:00
Aart Bik 6ad7b97e20 [mlir][amx] Add Intel AMX dialect (architectural-specific vector dialect)
The Intel Advanced Matrix Extensions (AMX) provides a tile matrix
multiply unit (TMUL), a tile control register (TILECFG), and eight
tile registers TMM0 through TMM7 (TILEDATA). This new MLIR dialect
provides a bridge between MLIR concepts like vectors and memrefs
and the lower level LLVM IR details of AMX.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98470
2021-03-15 17:59:05 -07:00
Alex Zinenko 0aceb61665 [mlir] make memref.cast implement ViewLikeOpInterface
This was seemingly dropped in e2310704d8,
potentially due to a misrebase. The absence of this trait makes aliasing
analysis incorrect, leading to, e.g., buffer deallocation pass inserting
deallocations too early.
2021-03-15 17:21:27 +01:00
Alex Zinenko 0fb4a201c0 [mlir] fix shared-lib build fallout of e2310704d8
The patch in question broke the build with shared libraries due to
missing dependencies, one of which would have been circular between
MLIRStandard and MLIRMemRef if added. Fix this by moving more code
around and swapping the dependency direction. MLIRMemRef now depends on
MLIRStandard, but MLIRStandard does _not_ depend on MLIRMemRef.
Arguably, this is the right direction anyway since numerous libraries
depend on MLIRStandard and don't necessarily need to depend on
MLIRMemref.

Other notable changes include:
- some EDSC code is moved inline to MemRef/EDSC/Intrinsics.h because it
  creates MemRef dialect operations;
- a utility function related to shape moved to BuiltinTypes.h/cpp
  because it only relates to shaped types and not any particular dialect
  (standard dialect is erroneously believed to contain MemRefType);
- a Python test for the standard dialect is disabled completely because
  the ops it tests moved to the new MemRef dialect, but it is not
  exposed to Python bindings, and the change for that is non-trivial.
2021-03-15 13:41:38 +01:00
Julian Gross e2310704d8 [MLIR] Create memref dialect and move dialect-specific ops from std.
Create the memref dialect and move dialect-specific ops
from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D98041
2021-03-15 11:14:09 +01:00
Alex Zinenko 03085156ec [mlir] fix cmake for generating data layout documentation 2021-03-15 11:02:03 +01:00
Alex Zinenko 40d8e4d3f9 Revert "[Canonicalizer] Process regions top-down instead of bottom up & reuse existing constants."
This reverts commit b5d9a3c923.

The commit introduced a memory error in canonicalization/operation
walking that is exposed when compiled with ASAN. It leads to crashes in
some "release" configurations.
2021-03-15 10:27:55 +01:00
Frederik Gossen b55f424ffc [MLIR] Add canonicalization for `shape.broadcast`
Remove redundant operands and fold if only one is left.

Differential Revision: https://reviews.llvm.org/D98402
2021-03-15 10:11:28 +01:00
Chris Lattner 91a6ad5ad8 [m_Constant] Check #operands/results before hasTrait()
We know that all ConstantLike operations have one result and no operands,
so check this first before doing the trait check.  This change speeds up
Canonicalize on a CIRCT testcase by ~5%.
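
A simplified sketch of the check order described above (the helper name is made up; the real logic lives inside the m_Constant matcher):

```
#include "mlir/IR/OpDefinition.h"
#include "mlir/IR/Operation.h"

// Cheap structural checks first; the comparatively expensive trait lookup
// only runs if they pass.
static bool looksConstantLike(mlir::Operation *op) {
  return op->getNumOperands() == 0 && op->getNumResults() == 1 &&
         op->hasTrait<mlir::OpTrait::ConstantLike>();
}
```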

Differential Revision: https://reviews.llvm.org/D98615
2021-03-14 20:14:19 -07:00
Chris Lattner b5d9a3c923 [Canonicalizer] Process regions top-down instead of bottom up & reuse existing constants.
Two changes:
 1) Change the canonicalizer to walk the function in top-down order instead of
    bottom-up order.  This composes well with the "top down" nature of constant
    folding and simplification, reducing iterations and re-evaluation of ops in
    simple cases.
 2) Explicitly enter existing constants into the OperationFolder table before
    canonicalizing.  Previously we would "constant fold" them and rematerialize
    them, wastefully recreating a bunch of constants, which led to pointless
    memory traffic.

Both changes together provide a 33% speedup for canonicalize on some mid-size
CIRCT examples.

One artifact of this change is that the constants generated in normal pattern
application get inserted at the top of the function as the patterns are applied.
Because of this, we get "inverted" constants more often, which is an aesthetic
change to the IR but does permute some testcases.

Differential Revision: https://reviews.llvm.org/D98609
2021-03-14 18:21:42 -07:00
Nikita Popov 42eb658f65 [OpaquePtrs] Remove some uses of type-less CreateGEP() (NFC)
This removes some (but not all) uses of type-less CreateGEP()
and CreateInBoundsGEP() APIs, which are incompatible with opaque
pointers.

There are still a number of tricky uses left, as well as many
more variation APIs for CreateGEP.
2021-03-12 21:01:16 +01:00
Alex Zinenko 4affd0c40e [mlir] fix a memory leak in NestedPattern
NestedPattern uses a BumpPtrAllocator to store child (nested) pattern
objects to decrease the overhead of dynamic allocation. This assumes all
allocations happen inside the allocator that will be freed as a whole.
However, NestedPattern contains `std::function` as a member, which
allocates internally using `new`, unaware of the BumpPtrAllocator. Since
NestedPattern only holds pointers to the nested patterns allocated in
the BumpPtrAllocator, it never calls their destructors, so the
destructors of the `std::function`s they contain are never called either,
leaking the allocated memory.
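
A generic illustration of the leak pattern described above (not the NestedPattern code itself; `Node` and the captured string are invented for the example):

```
#include "llvm/Support/Allocator.h"
#include <functional>
#include <new>
#include <string>

struct Node {
  std::function<void()> fn; // may own heap-allocated closure state
};

void demonstrateLeak() {
  llvm::BumpPtrAllocator arena;
  std::string payload(1024, 'x'); // forces the closure to own heap memory
  Node *node = new (arena.Allocate<Node>())
      Node{[payload] { (void)payload.size(); }};
  (void)node;
  // When `arena` is destroyed it releases the Node's raw memory, but ~Node()
  // -- and therefore the std::function's destructor -- never runs, so the
  // closure's copy of `payload` leaks.
}
```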

Make NestedPattern explicitly call destructors of nested patterns. This
additionally requires to actually copy the nested patterns in
copy-construction and copy-assignment instead of just sharing the
pointer to the arena-allocated list of children to avoid double-free. An
alternative solution would be to add reference counting to the list of
arena-allocated list of children.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98485
2021-03-12 18:52:14 +01:00
Sergei Grechanik fd2b08969b [mlir][Vector] Lowering of transfer_read/write to vector.load/store
This patch introduces progressive lowering patterns for rewriting
vector.transfer_read/write to vector.load/store and vector.broadcast
in certain supported cases.

Reviewed By: dcaballe, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D97822
2021-03-11 18:17:51 -08:00
Diego Caballero 96891f0418 Reland: [mlir][Vector][Affine] Improve affine vectorizer algorithm
This patch replaces the root-terminal vectorization approach implemented in the
Affine vectorizer with a topological order approach that vectorizes all the
operations within the target loop nest. These are the most important changes
introduced by the new algorithm:
  * Removed tracking of root and terminal ops. Existing vectorization
    functionality is preserved and extended so that loop nests without
    root-terminal chains can be vectorized.
  * Vectorizing a loop nest now only requires a single topological traversal.
  * A new vector loop nest is incrementally built along the vectorization
    process. The original scalar loop is kept intact. No cloning guard is needed
    to recover the scalar loop if vectorization fails. This approach also
    simplifies the challenging task of replacing a loop operation amid the
    vectorization process without invalidating the analysis information that
    depends on the original loop.
  * Vectorization of specific operations has been implemented as independent,
    preparing them to be moved to a potential vectorization interface.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D97442
2021-03-12 00:19:50 +02:00
River Riddle 31bb8efd69 [mlir][StorageUniquer] Properly call the destructor on non-trivially destructible storage instances
This allows for storage instances to store data that isn't uniqued in the context, or contain otherwise non-trivial logic, in the rare situations that they occur. Storage instances with trivial destructors will still have their destructor skipped. A consequence of this is that the storage instance definition must be visible from the place that registers the type.

Differential Revision: https://reviews.llvm.org/D98311
2021-03-11 11:35:32 -08:00
Diego Caballero ed193bce9d [mlir][Vector][Affine] Fix heap-use-after-free in vectorizer
This patch fixes a heap-use-after-free introduced by the recent changes
in the vectorizer: https://reviews.llvm.org/rG95db7b4aeaad590f37720898e339a6d54313422f
The problem is due to the way candidate loops are visited. All candidate loops
are pattern-matched beforehand using the 'NestedMatch' utility. These matches may
intersect with each other so it may happen that we try to vectorize a loop that
was previously vectorized. The new vectorization algorithm replaces the original
loops that are vectorized with new loops and, therefore, any reference to the
original loops in the pre-computed matches becomes invalid.

This patch fixes the problem by classifying the candidate matches into buckets
before vectorization. Each bucket contains all the matches that intersect. The
vectorizer uses these buckets to make sure that we only vectorize *one* match from
each bucket, at most.

Differential Revision: https://reviews.llvm.org/D98382
2021-03-11 20:44:07 +02:00
Nikita Popov f3f0c6cd47 [mlir] Remove uses of type-less CreateLoad() APIs (NFC)
For the use in LLVMOps.td I used the getPointerElementType()
escape hatch, as it's not obvious to me how the load type
should be properly obtained here.
2021-03-11 18:39:20 +01:00
Alex Zinenko 3ba14fa0ce [mlir] Introduce data layout modeling subsystem
Data layout information allows to answer questions about the size and alignment
properties of a type. It enables, among others, the generation of various
linear memory addressing schemes for containers of abstract types and deeper
reasoning about vectors. This introduces the subsystem for modeling data
layouts in MLIR.

The data layout subsystem is designed to scale to MLIR's open type and
operation system. At the top level, it consists of attribute interfaces that
can be implemented by concrete data layout specifications; type interfaces that
should be implemented by types subject to data layout; operation interfaces
that must be implemented by operations that can serve as data layout scopes
(e.g., modules); and dialect interfaces for data layout properties unrelated to
specific types. Built-in types are handled specially to decrease the overall
query cost.

A concrete default implementation of these interfaces is provided in the new
Target dialect. Defaults for built-in types that match the current behavior are
also provided.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97067
2021-03-11 16:54:47 +01:00
Arpith C. Jacob b4a516cc43 [mlir] Add LLVM loop codegen options to control software pipelining
Support specifying the II and disabling pipelining.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D98420
2021-03-11 16:46:44 +01:00
Tres Popp 25a20b8aa6 [mlir] Correct verifyCompatibleShapes
verifyCompatibleShapes is not transitive. Create an n-ary version and
update SameOperandShapes and SameOperandAndResultShapes traits to use
it.
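
A minimal sketch of calling the n-ary form from a custom verifier (the surrounding function is illustrative, not from the patch):

```
#include "mlir/IR/Operation.h"
#include "mlir/IR/TypeUtilities.h"

// Verify all operand shapes against each other in one call rather than
// pairwise, which is what made the binary form non-transitive.
static mlir::LogicalResult verifyOperandShapes(mlir::Operation *op) {
  if (mlir::failed(mlir::verifyCompatibleShapes(op->getOperandTypes())))
    return op->emitOpError("operand shapes are not compatible");
  return mlir::success();
}
```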

Differential Revision: https://reviews.llvm.org/D98331
2021-03-11 13:04:10 +01:00
Frederik Gossen b975e3b5aa [MLIR] Add canoncalization for `shape.is_broadcastable`
Canonicalize `is_broadcastable` to constant true if there are fewer than 2 unique
shape operands. Otherwise, eliminate redundant operands.

Differential Revision: https://reviews.llvm.org/D98361
2021-03-11 10:10:34 +01:00
Christian Sigg 2224221fb3 [mlir] Add NVVM to CUBIN conversion to mlir-opt
If MLIR_CUDA_RUNNER_ENABLED, register a 'gpu-to-cubin' conversion pass to mlir-opt.

The next step is to switch CUDA integration tests from mlir-cuda-runner to mlir-opt + mlir-cpu-runner and remove mlir-cuda-runner.

Depends On D98279

Reviewed By: herhut, rriddle, mehdi_amini

Differential Revision: https://reviews.llvm.org/D98203
2021-03-11 10:07:11 +01:00
River Riddle 4e02eb8014 [mlir] Optimize the implementation of RegionDCE
The current implementation has some inefficiencies that become noticeable when running on large modules. This revision optimizes the code, and updates some out-dated idioms with newer utilities. The main components of this optimization include:

* Add an overload of Block::eraseArguments that allows for O(N) erasure of disjoint arguments.
* Don't process entry block arguments given that we don't erase them at this point.
* Don't track individual operation results, given that we don't erase them. We can just track the parent operation.

Differential Revision: https://reviews.llvm.org/D98309
2021-03-10 16:39:50 -08:00
Weiwei Li 619c1505f9 [mlir][spirv] Define spv.Image Operation
Co-authored-by: Alan Liu <alanliu.yf@gmail.com>

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D98270
2021-03-10 15:48:04 -05:00
Alex Zinenko 79da91c59a Revert "[mlir][Vector][Affine] Improve affine vectorizer algorithm"
This reverts commit 95db7b4aea.

This breaks vectorize_2d.mlir and vectorize_3d.mlir test under ASAN (use
after free).
2021-03-10 20:25:49 +01:00
Diego Caballero 95db7b4aea [mlir][Vector][Affine] Improve affine vectorizer algorithm
This patch replaces the root-terminal vectorization approach implemented in the
Affine vectorizer with a topological order approach that vectorizes all the
operations within the target loop nest. These are the most important changes
introduced by the new algorithm:
  * Removed tracking of root and terminal ops. Existing vectorization
    functionality is preserved and extended so that loop nests without
    root-terminal chains can be vectorized.
  * Vectorizing a loop nest now only requires a single topological traversal.
  * A new vector loop nest is incrementally built along the vectorization
    process. The original scalar loop is kept intact. No cloning guard is needed
    to recover the scalar loop if vectorization fails. This approach also
    simplifies the challenging task of replacing a loop operation amid the
    vectorization process without invalidating the analysis information that
    depends on the original loop.
  * Vectorization of specific operations has been implemented as independent,
    preparing them to be moved to a potential vectorization interface.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D97442
2021-03-10 20:29:58 +02:00
Alex Zinenko 78f3fb4f46 [mlir] Update comments in ArmNeon dialect. NFC
These were not updated when squashing LLVMArmNeon and ArmNeon dialects.
2021-03-10 13:35:57 +01:00
Alex Zinenko a776942ba1 [mlir] squash LLVM_AVX512 dialect into AVX512
The dialect separation was introduced to demarcate ops operating in different
type systems. This is no longer the case after the LLVM dialect has migrated to
using built-in vector types, so the original reason for separation is no longer
valid. Squash the two dialects into one.

The code size decrease isn't particularly large: the ops originally in LLVM_AVX512 are
preserved because they match LLVM IR intrinsics specialized for vector element
bitwidth. However, it is still conceptually beneficial to have only one
dialect. I originally considered to use Tablegen multiclasses to define both
the type-polymorphic op and its two intrinsic-related instantiations, but
decided against it given both the complexity of the required Tablegen input and
its dissimilarity with the rest of ODS-defined ops, both potentially resulting
in very poor maintainability.

Depends On D98327

Reviewed By: nicolasvasilache, springerm

Differential Revision: https://reviews.llvm.org/D98328
2021-03-10 13:07:26 +01:00
Alex Zinenko 0af53de369 [mlir] simplify type constraints in AVX512 dialect
VectorOfLengthAndType accepts a cartesian product of given lengths and types
rather than types produced by co-indexed values in the corresponding lists.
Update the definitions accordingly. The type validity is already enforced by
op traits.

Reviewed By: nicolasvasilache, springerm

Differential Revision: https://reviews.llvm.org/D98327
2021-03-10 13:07:25 +01:00
Inho Seo 2ce4caf414 Moved getStaticLoopRanges and getStaticShape methods to LinalgInterfaces.td to add static shape verification
This makes the methods in LinalgInterfaces.cpp usable for additional static shape verification that matches the shaped operands against the loops on Linalg ops. Using the existing methods directly would cause a circular dependency at link time. Now we can use them as methods of LinalgOp.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D98163
2021-03-10 04:06:22 -08:00
Christian Sigg 4d295cf5b5 [mlir] Add base class for GpuKernelToBlobPass
Instead of configuring kernel-to-cubin/rocdl lowering through callbacks, introduce a base class that target-specific passes can derive from.

Put the base class in GPU/Transforms, according to the discussion in D98203.

The mlir-cuda-runner will go away shortly, and the mlir-rocdl-runner as well at some point. I therefore kept the existing code path working and will remove it in a separate step.

Depends On D98168

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D98279
2021-03-10 12:14:43 +01:00
Vladislav Vinogradov f3bf5c053b [mlir] Model MemRef memory space as Attribute
Based on the following discussion:
https://llvm.discourse.group/t/rfc-memref-memory-shape-as-attribute/2229

The goal of the change is to give the memory space property a more
expressive representation, rather than "magic" integer values.

It will allow for a cleaner ASM form:

```
gpu.func @test(%arg0: memref<100xf32, "workgroup">)

// instead of

gpu.func @test(%arg0: memref<100xf32, 3>)
```

Explanation for `Attribute` choice instead of plain `string`:

* `Attribute` classes allow the use of a more type-safe API based on RTTI.
* `Attribute` classes provide a faster comparison operator based on
  pointer comparison, in contrast to generic string comparison.
* `Attribute` allows storing more complex things, like structs or dictionaries.
  This allows for a more complex memory space hierarchy.

This commit preserves the old integer-based API and implements it on top
of the new one.

Depends on D97476

Reviewed By: rriddle, mehdi_amini

Differential Revision: https://reviews.llvm.org/D96145
2021-03-10 12:57:27 +03:00
River Riddle a776ecb6c2 [mlir][IR] Add an Operation::eraseOperands that supports batch erasure
This method allows for removing multiple disjoint operands at once, reducing the need to erase operands individually (which results in shifting the operand list).

Differential Revision: https://reviews.llvm.org/D98290
2021-03-09 15:07:53 -08:00
River Riddle 4a7aed4ee7 [mlir][IR] Add a new SymbolUserMap class
This class provides efficient implementations of symbol queries related to uses, such as collecting the users of a symbol, replacing all uses, etc. This provides similar benefits to use related queries, as SymbolTableCollection did for lookup queries.
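
A minimal usage sketch (illustrative; assumes `module` is the top-level symbol table and `symbolOp` defines a symbol nested in it):

```
#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/SymbolTable.h"

// Build the use map once, then answer "who uses this symbol?" queries cheaply.
void inspectSymbolUsers(mlir::ModuleOp module, mlir::Operation *symbolOp) {
  mlir::SymbolTableCollection symbolTables;
  mlir::SymbolUserMap userMap(symbolTables, module);
  for (mlir::Operation *user : userMap.getUsers(symbolOp))
    (void)user; // e.g. inspect or rewrite each use site
}
```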

Differential Revision: https://reviews.llvm.org/D98071
2021-03-09 15:07:52 -08:00
Mehdi Amini fe81e8f3b5 Add default LoopOptionsAttrBuilder constructor and method to check if empty() (NFC)
Also move setters out-of-line to make sure the templated helper is
actually instantiated.
2021-03-09 21:12:15 +00:00
Christian Sigg 840ff84d33 [mlir] Default for gpu-binary-annotation option.
Provide a default for gpuBinaryAnnotation so that we don't need to specify it in tests.

The annotation likely only needs to be target specific if we want to lower to e.g. both CUDA and ROCDL.

Reviewed By: herhut, bondhugula

Differential Revision: https://reviews.llvm.org/D98168
2021-03-09 21:01:50 +01:00
Mehdi Amini 8205c1a90a Rework LLVM Dialect LoopOptions attribute
Instead of storing an array of LoopOpt attributes, which were just
wrapping std::pair<enum, int> anyway, we can have an attribute storing
a sorted ArrayRef<std::pair<enum, int>> as a single unit. This improves
the textual format and the general API here. Note that we're limiting
the options to fit into an int64_t by design, but this isn't a new
constraint.

Building the LoopOptions attribute is likely worth a dedicated builder
for efficiency reasons; that'll be the subject of a future patch.

Differential Revision: https://reviews.llvm.org/D98105
2021-03-09 19:43:45 +00:00
Alex Zinenko 8184247f0b [mlir] move LLVM target import header and tests
Move Target/LLVMIR.h to target/LLVMIR/Import.h to better reflect the purpose of
this file. Also move all LLVM IR target tests under the LLVMIR directory.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D98178
2021-03-09 09:22:14 +01:00
Tobias Gysi c1a4cd551f [mlir][linalg] refactor the result handling during vectorization.
Return the vectorization results using a vector passed by reference instead of returning them embedded in a structure.

Differential Revision: https://reviews.llvm.org/D98182
2021-03-09 07:11:57 +00:00
Mehdi Amini 038f2a337d Move LLVM::FMFAttr definition to TableGen (NFC)
This is using the new Attribute storage generation support in
TableGen to define the LLVM FastMathFlags.

Differential Revision: https://reviews.llvm.org/D98007
2021-03-09 05:29:54 +00:00
River Riddle 0d01dfbc37 [mlir][IR][NFC] Move the remaining builtin types to ODS
This will allow for removing the duplicated type documentation from LangRef and instead link to the builtin dialect documentation.

Differential Revision: https://reviews.llvm.org/D98093
2021-03-08 14:32:40 -08:00
River Riddle a4bb667d83 [mlir][IR][NFC] Define the Location classes in ODS instead of C++
This also removes the need for LocationDetail.h.

Differential Revision: https://reviews.llvm.org/D98092
2021-03-08 14:32:40 -08:00
Benjamin Kramer 42c195f0ec [mlir][Shape] Allow shape.split_at to return extent tensors and lower it to std.subtensor
split_at can return an error if the split index is out of bounds. If the
user knows that the index can never be out of bounds it's safe to use
extent tensors. This has a straight-forward lowering to std.subtensor.

Differential Revision: https://reviews.llvm.org/D98177
2021-03-08 16:48:05 +01:00
Frederik Gossen 3b9667a84c Clarify documentation for `Elementwise`, `Scalarizable`, `Vectorizable`, and
`Tensorizable` traits.

Differential Revision: https://reviews.llvm.org/D97841
2021-03-08 10:35:22 +01:00
KareemErgawy-TomTom 3fb384d50e [MLIR][SPIRV] Rename `spv.selection` to `spv.mlir.selection`.
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere. For ops that
don't have a SPIR-V spec counterpart, we use spv.mlir.snake_case.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D98014
2021-03-06 16:05:31 +01:00
Lei Zhang bb6f5c8314 [mlir][spirv] Convert tensor.extract for very small tensors
Normally tensors will be stored in buffers before converting to SPIR-V,
given that is how a large amount of data is sent to the GPU. However,
SPIR-V supports converting from tensors directly too. This is for the
cases where the tensor just contains a small amount of elements and it
makes sense to directly inline them as a small data array in the shader.
To handle this, internally the conversion might create new local
variables. SPIR-V consumers in GPU drivers may or may not optimize that
away, so this has implications for register pressure. Therefore, a
threshold is used to control when the patterns should kick in.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D98052
2021-03-06 08:03:36 -05:00
Mehdi Amini f8fe6d9f3f Use gen-dialect-doc instead of gen-op-doc for the Builtin dialect
This is fixing the missing title and menu entry on the MLIR website.
2021-03-06 05:32:46 +00:00
Matthias Springer acce0ea70c [mlir][AVX512] Add mask.compress to AVX512 dialect.
Adds mask.compress to the AVX512 dialect and defines a lowering to the LLVM dialect.

Differential Revision: https://reviews.llvm.org/D97611
2021-03-06 10:02:48 +09:00
Alex Zinenko 6410ee0d09 [mlir] Squash LLVM_ArmNeon dialect into ArmNeon
The two dialects are largely redundant. The former was introduced as a mirror
of the latter operating on LLVM dialect types. This is no longer necessary
since the LLVM dialect operates on built-in types. Combine the two dialects.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D98060
2021-03-05 23:33:32 +01:00
Diego Caballero 2de6dbda66 [mlir] Add 'Skip' result to Operation visitor
This patch is a follow-up on D97217. It adds a new 'Skip' result to the Operation visitor
so that a callback can stop the ongoing visit of an operation/block/region and
continue visiting the next one without fully interrupting the walk. Skipping is
needed to be able to erase an operation/block in pre-order and not continue
visiting the internals of that operation/block.
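
A hedged sketch combining this with the pre-order walk from the next entry (the `"erase_me"` attribute is a placeholder condition; assumes the marked ops are nested below `root`):

```
#include "mlir/IR/Operation.h"
#include "mlir/IR/Visitors.h"

// Pre-order walk that erases marked ops and skips their bodies so the walk
// never visits IR that has just been freed.
void eraseMarkedOps(mlir::Operation *root) {
  root->walk<mlir::WalkOrder::PreOrder>([](mlir::Operation *op) {
    if (op->getAttr("erase_me")) { // placeholder condition
      op->erase();
      return mlir::WalkResult::skip();
    }
    return mlir::WalkResult::advance();
  });
}
```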

Related to the skipping mechanism, the patch also introduces the following changes:
 * Added new TestIRVisitors pass with basic testing for the IR visitors.
 * Fixed missing early increment ranges in visitor implementation.
 * Updated documentation of walk methods to include erasure information and walk
   order information.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97820
2021-03-06 00:02:20 +02:00
Diego Caballero 71a86245ca [mlir] Extend Operation visitor with pre-order traversal
This patch extends the Region, Block and Operation visitors to also support pre-order walks.
We introduce a new template argument that dictates the walk order (only pre-order and
post-order are supported for now). The default order for Regions, Blocks and Operations is
post-order. Mixed orders (e.g., Region/Block pre-order + Operation post-order) could easily
be implemented, as shown in NumberOfExecutions.cpp.

Reviewed By: rriddle, frgossen, bondhugula

Differential Revision: https://reviews.llvm.org/D97217
2021-03-06 00:02:20 +02:00
Diego Caballero b635492c3f [mlir][Affine][NFC] Return BlockArgument in AffineForOp::getInductionVar
This avoids unnecessary casts when a BlockArgument is required.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D97879
2021-03-06 00:02:19 +02:00
KareemErgawy-TomTom d48ceb45e3 [MLIR][SPIRV] Rename `spv.undef` to `spv.Undef`.
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere. For ops that
don't have a SPIR-V spec counterpart, we use spv.mlir.snake_case.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D98016
2021-03-05 15:49:44 -05:00
KareemErgawy-TomTom 29812a6195 [MLIR][SPIRV] Rename `spv.loop` to `spv.mlir.loop`.
To unify the naming scheme across all ops in the SPIR-V dialect,
we are moving from spv.camelCase to spv.CamelCase everywhere.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D97918
2021-03-05 15:44:30 -05:00
KareemErgawy-TomTom c74eb466d2 [MLIR][SPIRV] Rename `spv.globalVariable` to `spv.GlobalVariable`.
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D97919
2021-03-04 16:24:59 -05:00
KareemErgawy-TomTom 5abdca47b3 [MLIR][SPIRV] Rename `spv.constant` to `spv.Constant`.
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from `spv.camelCase` to `spv.CamelCase` everywhere.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D97917
2021-03-04 16:15:56 -05:00
KareemErgawy-TomTom 4d90e460bc [MLIR][SPIRV] Rename `spv.spcConstant...` to `spv.SpcConstant...`.
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere.

Differential Revision: https://reviews.llvm.org/D97920
2021-03-04 16:07:41 -05:00
River Riddle 2f37cdd569 [mlir][IR][NFC] Move a majority of the builtin attributes to ODS
Now that attributes can be generated using ODS, we can move the builtin attributes as well. This revision removes a majority of the builtin attributes with a few left for followup revisions. The attributes moved to ODS in this revision are: AffineMapAttr, ArrayAttr, DictionaryAttr, IntegerSetAttr, StringAttr, SymbolRefAttr, TypeAttr, and UnitAttr.

Differential Revision: https://reviews.llvm.org/D97591
2021-03-04 13:04:06 -08:00
River Riddle 1447ec5182 [mlir][AttrDefGen] Add support for specifying the value type of an attribute
The value type of the attribute can be specified by either overriding the typeBuilder field on the AttrDef, or by providing a parameter of type `AttributeSelfTypeParameter`. This removes the need to define custom storage class constructors for attributes that have a value type other than NoneType.

Differential Revision: https://reviews.llvm.org/D97590
2021-03-04 13:04:05 -08:00
River Riddle 6bc767cd07 [mlir] Add a DialectAsmParser::getChecked method
This function simplifies calling the getChecked methods on Attributes and Types from within the parser, and removes any need to use `getEncodedSourceLocation` for these methods (by using an SMLoc instead). This is much more efficient than using an mlir::Location, as the encoding process to produce an mlir::Location is inefficient and undesirable for parsing (locations used during parsing should not persist afterwards unless otherwise necessary).

Differential Revision: https://reviews.llvm.org/D97900
2021-03-04 11:53:24 -08:00
Arpith C. Jacob 4e393350c5 [mlir] Add an AccessGroup attribute to load/store LLVM dialect ops and generate the access_group LLVM metadata.
This also includes LLVM dialect ops created from intrinsics.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D97944
2021-03-04 18:17:23 +01:00
Hanhan Wang b47c6c686c [mlir][linalg] Add suffix "Op" to pooling TC ops.
Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D97946
2021-03-04 07:08:30 -08:00
Nicolas Vasilache 4f4f3f1e59 [mlir] NFC - Add runner util functions to only print MemRef metadata.
These are useful to debug execution, without having to print the whole
content of a memref.
2021-03-04 12:35:45 +00:00
Nicolas Vasilache 05882157db [mlir][Linalg] NFC - Add isOutputTensor to LinalgInterfaces.td 2021-03-04 12:33:21 +00:00
Alex Zinenko 32c49c7d73 [mlir] ODS: change OpBuilderDAG to OpBuilder
We no longer have the non-DAG version.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97856
2021-03-04 10:55:02 +01:00
Alex Zinenko 19db802e7b [mlir] make implementations of translation to LLVM IR interfaces private
There is no need for the interface implementations to be exposed, opaque
registration functions are sufficient for all users, similarly to passes.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D97852
2021-03-04 09:16:32 +01:00
Arpith C. Jacob 4a2930f495 [mlir] Add loop codegen options to some LLVM dialect ops.
Add a Loop Option attribute and generate llvm metadata attached to
branch instructions to control code generation.

Reviewed By: ftynse, mehdi_amini

Differential Revision: https://reviews.llvm.org/D96820
2021-03-04 09:01:57 +01:00
River Riddle 83ef862fad [mlir] Add support for generating Attribute classes for ODS
The support for attributes closely maps that of Types (basically 1-1) given that Attributes are defined in exactly the same way as Types. All of the current ODS TypeDef classes get an Attr equivalent. The generation of the attribute classes themselves share the same generator as types.

Differential Revision: https://reviews.llvm.org/D97589
2021-03-03 16:41:49 -08:00
River Riddle e07c968a6d [mlir][pdl][NFC] Rename InputOp to OperandOp
This better matches the actual IR concept that is being modeled, and is consistent with how the rest of PDL is structured.

Differential Revision: https://reviews.llvm.org/D95718
2021-03-03 15:48:00 -08:00
River Riddle 55f878bad9 [mlir][pdl] Add a new !pdl.range<> type
This type represents a range of positional values. It will be used in followup revisions to add support for variadic constructs to PDL, such as operand and result ranges.

Differential Revision: https://reviews.llvm.org/D95717
2021-03-03 15:48:00 -08:00
River Riddle 3dfa86149e [mlir][IR] Refactor the internal implementation of Value
The current implementation of Value involves a pointer int pair with several different kinds of owners, i.e. BlockArgumentImpl*, Operation *, TrailingOpResult*. This design arose from the desire to save memory overhead for operations that have a very small number of results (generally 0-2). There are, unfortunately, many problematic aspects of the current implementation that make Values difficult to work with or just inefficient.

Operation result types are stored as a separate array on the Operation. This is very inefficient for many reasons: we use TupleType for multiple results, which can lead to huge amounts of memory usage if multi-result operations change types frequently (they do). It also means that simple methods like Value::getType/Value::setType now require complex logic to get to the desired type.

Value only has one pointer bit free, severely limiting the ability to use it in things like PointerUnion/PointerIntPair. Given that we store the kind of a Value along with the "owner" pointer, we only leave one bit free for users of Value. This creates situations where we end up nesting PointerUnions to be able to use Value in one.

As noted above, most of the methods in Value need to branch on at least 3 different cases which is both inefficient, possibly error prone, and verbose. The current storage of results also creates problems for utilities like ValueRange/TypeRange, which want to efficiently store base pointers to ranges (of which Operation* isn't really useful as one).

This revision greatly simplifies the implementation of Value by the introduction of a new ValueImpl class. This class contains all of the state shared between all of the various derived value classes; i.e. the use list, the type, and the kind. This shared implementation class provides several large benefits:

* Most of the methods on value are now branchless, and often one-liners.

* The "kind" of the value is now stored in ValueImpl instead of Value
This frees up all of Value's pointer bits, allowing for users to take full advantage of PointerUnion/PointerIntPair/etc. It also allows for storing more operation results as "inline", 6 now instead of 2, freeing up 1 word per new inline result.

* Operation result types are now stored in the result, instead of a side array
This drops the size of zero-result operations by 1 word. It also removes the memory-crushing use of TupleType for operation results (which could lead to hundreds of megabytes of "dead" TupleTypes in the context). This also allowed restructuring ValueRange, making it simpler and one word smaller.

This revision does come with two conceptual downsides:
* Operation::getResultTypes no longer returns an ArrayRef<Type>
This conceptually makes some usages slower, as the iterator increment is slightly more complex.
* OpResult::getOwner is slightly more expensive, as it now requires a little bit of arithmetic

From profiling, neither of the conceptual downsides have resulted in any perceivable hit to performance. Given the advantages of the new design, most compiles are slightly faster.

Differential Revision: https://reviews.llvm.org/D97804
2021-03-03 14:33:37 -08:00
Hanhan Wang 83c56aa4ee [mlir][linalg] Add depthwise_conv_2d_input_nhwc_filter_hwcf to Linalg TC ops.
Different from the definition in TensorFlow and TOSA, the output is [N,H,W,C,M]. This can make transforms easier in Linalg because the indexing maps are plain. E.g., to determine if there is a dependency between the fill op and the depthwise conv op, the current pipeline only recognizes the dependency if the indexing maps are all projected affine maps.

Reviewed By: asaadaldien

Differential Revision: https://reviews.llvm.org/D97798
2021-03-03 11:47:02 -08:00
Mehdi Amini 13cb431719 Add basic JIT Python Bindings
This offers the ability to create a JIT and invoke a function by passing
ctypes pointers to the argument and the result.

Differential Revision: https://reviews.llvm.org/D97523
2021-03-03 18:19:40 +00:00
Mehdi Amini 86c8a7857d Add C bindings for mlir::ExecutionEngine
This adds minimalistic bindings for the execution engine, allowing the JIT to be invoked from the C API. This is still quite early and
experimental and shouldn't be considered stable in any way.

Differential Revision: https://reviews.llvm.org/D96651
2021-03-03 18:19:40 +00:00
MaheshRavishankar 5d7e0a23c6 [mlir] Add LinalgInterface method to clone with a given BlockAndValueMapping.
Since Linalg operations have regions that by default are not isolated from above, add another method to the interface that takes a BlockAndValueMapping to remap the values within the region as well.

Differential Revision: https://reviews.llvm.org/D97709
2021-03-03 09:25:20 -08:00
Benjamin Kramer 24acadef8a [mlir][Shape] Make shape_eq nary
This gets rid of a dubious shape_eq %a, %a fold that folds shape_eq even if %a is not an Attribute.

Differential Revision: https://reviews.llvm.org/D97728
2021-03-03 16:26:40 +01:00
Benjamin Kramer c714b441ef [mlir][Shape] Make cstr_eq more like cstr_broadcastable
This includes allowing extents and not just shapes.

Differential Revision: https://reviews.llvm.org/D97716
2021-03-03 16:20:05 +01:00
Vladislav Vinogradov 5d613e42d3 [mlir][ODS] Use StringLiteral instead of StringRef when applicable
Use `StringLiteral` for function return type if it is known to return
constant string literals only.

This makes it visible to API users that such values can be safely stored, since they refer to constant data that will never be deallocated.

`StringRef` in general is not safe to store long term, since it might refer to temporary data allocated on the heap.

Add `inline` and `constexpr` methods support to `OpMethod`.
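
For illustration, the difference is roughly the following (a hedged sketch with made-up names, not code from this patch):

```
#include "llvm/ADT/StringRef.h"

// A StringRef return type makes no promise about the lifetime of the
// underlying data, so callers should not store the result long term.
llvm::StringRef lookupDescription(unsigned kind);

// A StringLiteral return type documents that the data is a constant string
// that is never deallocated, so callers may store it safely.
constexpr llvm::StringLiteral kOpName = "mydialect.my_op";
llvm::StringLiteral getOpName() { return kOpName; }
```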

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D97390
2021-03-03 16:15:12 +03:00
Frederik Gossen bcc9b371e4 Split `ElementwiseMappable` trait into four more precise traits.
Some elementwise operations are not scalarizable, vectorizable, or tensorizable.
Split `ElementwiseMappable` trait into the following, more precise traits.
  - `Elementwise`
  - `Scalarizable`
  - `Vectorizable`
  - `Tensorizable`
This allows for reuse of `Elementwise` in dialects like HLO.

Differential Revision: https://reviews.llvm.org/D97674
2021-03-02 15:31:19 +01:00
KareemErgawy-TomTom 3b021fbdc0 [MLIR][LinAlg] Detensorize internal function control flow.
This patch continues the detensoring implementation by detensoring
internal control flow in functions.

In order to detensorize functions, the arguments of all non-entry blocks are detensored and branches between such blocks are properly updated to reflect the detensored types as well. The function entry block (signature) is left intact.

This continues work towards handling github/google/iree#1159.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D97148
2021-03-02 11:46:20 +01:00
Vladislav Vinogradov 37eca08e5b [mlir][NFC] Rename `MemRefType::getMemorySpace` to `getMemorySpaceAsInt`
Just a pure method renaming.

It is a preparation step for replacing "memory space as raw integer"
with more generic "memory space as attribute", which will be done in
separate commit.

The `MemRefType::getMemorySpace` method will return `Attribute` and
become the main API, while `getMemorySpaceAsInt` will be declared as
deprecated and will be replaced in all in-tree dialects (also in separate
commits).

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D97476
2021-03-02 11:08:54 +03:00
Stella Laurenzo 6d2fd3d9cd [mlir][linalg] Replace monomorphic contraction ops with polymorphic variants.
* Moves `batch_matmul`, `matmul`, `matvec`, `vectmat`, `dot` to the new mechanism.
* This is not just an NFC change: in addition to using a new code generation mechanism, it also activates symbolic casting, allowing mixed-precision operands and results.
* These definitions were generated from DSL by the tool: https://github.com/stellaraccident/mlir-linalgpy/blob/main/mlir_linalg/oplib/core.py (will be upstreamed in a subsequent set of changes).

Reviewed By: nicolasvasilache, ThomasRaoux

Differential Revision: https://reviews.llvm.org/D97719
2021-03-01 21:19:53 -08:00
MaheshRavishankar a9e68db973 [mlir] Add canonicaliations for subtensor_insert operation.
subtensor_insert operations need canonicalizers that propagate the constant arguments within offsets, sizes, and strides. Add those, along with a pattern to propagate tensor_cast operations.

Differential Revision: https://reviews.llvm.org/D97704
2021-03-01 14:59:18 -08:00
Jacques Pienaar 87e05eb03b Revert "Remove use of tuple for multiresult type storage"
This reverts commit 08f0764ff5.
2021-03-01 10:39:41 -08:00
Jacques Pienaar 08f0764ff5 Remove use of tuple for multiresult type storage
Move the results inline with the op instead. This results in each operation having its own result types recorded instead of a single tuple type, but comes with the benefit that not every mutation incurs uniquing. We ran into cases where updating the result type of an operation led to very large memory usage.

Differential Revision: https://reviews.llvm.org/D97652
2021-03-01 09:30:24 -08:00
Jacques Pienaar 2f0b4db5ea [mlir] Add convenience grouping for tensor type inference
For ops that produce tensor types and implement the shaped type component interface, the type inference interface can be used. Create a grouping of these to make it easier to specify (it cannot be added into a list of traits, but must rather be appended/concatenated to one, as it isn't a trait but a list of traits).

Differential Revision: https://reviews.llvm.org/D97636
2021-03-01 05:21:08 -08:00
Stella Laurenzo 2ceedc3a20 [mlir][linalg] Add symbolic type conversion to linalg named ops.
This enables this kind of construct in the DSL to generate a named op that is polymorphic over numeric type variables `T` and `U`, generating the correct arithmetic casts at construction time:

```
@tc_def_op
def polymorphic_matmul(A=TensorDef(T1, S.M, S.K),
                       B=TensorDef(T2, S.K, S.N),
                       C=TensorDef(U, S.M, S.N, output=True)):
  implements(ContractionOpInterface)
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```

Presently, this only supports type variables that are bound to the element type of one of the arguments, although a further extension that allows binding a type variable to an attribute would allow some more expressiveness and may be useful for some formulations. This is left to a future patch. In addition, this patch does not yet materialize the verifier support which ensures that types are bound correctly (for such simple examples, failing to do so will yield IR that fails verification, it just won't yet fail with a precise error).

Note that the full grid of extension/truncation/int<->float conversions is supported, but many of them are lossy and higher-level code needs to be mindful of numerics (it is not the job of this level).

As-is, this should be sufficient for most integer matmul scenarios we work with in typical quantization schemes.

Differential Revision: https://reviews.llvm.org/D97603
2021-02-27 15:52:35 -08:00
Stella Laurenzo 5867c18e2c [mlir][linalg] Generate additional interfaces for named ops.
* Adds ContractionOpInterface to polymorphic_matmul.

Differential Revision: https://reviews.llvm.org/D97601
2021-02-27 15:43:41 -08:00
Mehdi Amini ee90bb3486 Store (cache) the Argument number (index in the argument list) inside the BlockArgumentImpl
This avoids linear search in BlockArgument::getArgNumber().
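The caching idea, as a rough standalone sketch (illustrative only, not the real BlockArgumentImpl):

```
#include <cstdint>
#include <vector>

// Hypothetical block-argument impl: the index is recorded when the argument
// is created, so getArgNumber() is O(1) instead of a linear scan over the
// owner's argument list.
struct BlockArgumentImpl {
  int64_t argNumber; // cached index in the owning block's argument list
};

struct Block {
  std::vector<BlockArgumentImpl> arguments;

  BlockArgumentImpl &addArgument() {
    arguments.push_back({static_cast<int64_t>(arguments.size())});
    return arguments.back();
  }
};

int64_t getArgNumber(const BlockArgumentImpl &arg) { return arg.argNumber; }
```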

Differential Revision: https://reviews.llvm.org/D97596
2021-02-27 17:21:08 +00:00
River Riddle e6260ad043 [mlir] Simplify various pieces of code now that Identifier has access to the Context/Dialect
This also exposed a bug in Dialect loading where it was not correctly identifying identifiers that had the dialect namespace as a prefix.

Differential Revision: https://reviews.llvm.org/D97431
2021-02-26 18:00:05 -08:00
Rob Suderman 16abacaea9 [MLIR][TOSA] Resubmit Tosa to Standard/SCF Lowerings (const, if, while)
Includes a lowering for tosa.const, tosa.if, and tosa.while to Standard/SCF dialects. TosaToStandard is
used for constant lowerings and TosaToSCF handles the if/while ops.

Resubmission of https://reviews.llvm.org/D97518 with ASAN fixes.

Differential Revision: https://reviews.llvm.org/D97529
2021-02-26 17:44:12 -08:00
Aart Bik df5ccf5a94 [mlir][vector] add higher dimensional support to gather/scatter
Similar to mask-load/store and compress/expand, the gather and scatter operations now allow for higher-dimensional uses. Note that
to support the mixed-type index, the new syntax is:
   vector.gather %base [%i,%j] [%kvector] ....
The first client of this generalization is the sparse compiler, which needs to define scatters and gathers on dense operands of higher dimensions too.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97422
2021-02-26 14:20:19 -08:00
Geoffrey Martin-Noble 21bb63893e [MLIR][linalg] Make integer matmul ops cast before multiplying
Right now they multiply before casting, which means they would frequently overflow. There are various reasonable ways to do this, but until we
have robust op description infra, this is a simple and safe default. More
careful treatments are likely to be hardware specific, as well (e.g.
using an i8*i8->i16 mul instruction).
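
The overflow hazard is easy to see with plain C++ integer arithmetic (a standalone illustration, not the Linalg code itself):

```
#include <cstdint>
#include <iostream>

int main() {
  int8_t a = 100, b = 100;

  // Multiply first, then widen: the product is narrowed back to i8 and the
  // widened value is wrong (100 * 100 = 10000 does not fit in 8 bits).
  int32_t truncated = static_cast<int32_t>(static_cast<int8_t>(a * b));

  // Cast each operand first, then multiply in the wider type: correct result.
  int32_t widened = static_cast<int32_t>(a) * static_cast<int32_t>(b);

  // Prints e.g. "16 vs 10000"; the narrowed value is target-dependent before
  // C++20, but it is never the mathematically correct product.
  std::cout << truncated << " vs " << widened << "\n";
  return 0;
}
```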

Reviewed By: nicolasvasilache, mravishankar

Differential Revision: https://reviews.llvm.org/D97505
2021-02-26 08:36:31 -08:00
Hanhan Wang bfd3771c9e [mlir][linalg] Add pooling ops to Linalg TC ops.
- Add EDSC builders for std_cmpf_ogt and std_cmpf_olt.
- Add pooling_nhwc_min/max/sum ops

Depends On D97384

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D97385
2021-02-26 07:18:03 -08:00
Benjamin Kramer 4941fef9c4 [mlir] Silence some deprecation warnings after dffc487b07 2021-02-26 15:15:56 +01:00
Christian Sigg dffc487b07 [mlir] Mark OpState::removeAttr() deprecated.
Fix call sites.

The method will be removed in 2 weeks.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97530
2021-02-26 12:04:41 +01:00
Christian Sigg 0b05908feb [mlir] Remove some rarely used OpState members and use Operation members instead.
Skipping the deprecation dance here.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97494
2021-02-26 08:37:11 +01:00
Rob Suderman c47aa3c8de Revert [MLIR][TOSA] Added Tosa to Standard/SCF Lowerings (const, if, while)
This reverts commit a813e9be5b.

Results in an ASAN failure due to bypassing the rewriter.

Differential Revision: https://reviews.llvm.org/D97518
2021-02-25 18:05:16 -08:00
James Y Knight 24539f1ef2 Add Alignment argument to IRBuilder CreateAtomicRMW and CreateAtomicCmpXchg.
And then push those changes throughout LLVM.

Keep the old signature in Clang's CGBuilder for now -- that will be
updated in a follow-on patch (D97224).

The MLIR LLVM-IR dialect is not updated to support the new alignment
attribute, but preserves its existing behavior.

Differential Revision: https://reviews.llvm.org/D97223
2021-02-25 18:29:42 -05:00
Rob Suderman a813e9be5b [MLIR][TOSA] Added Tosa to Standard/SCF Lowerings (const, if, while)
Includes a lowering for tosa.const, tosa.if, and tosa.while to Standard/SCF dialects. TosaToStandard is
used for constant lowerings and TosaToSCF handles the if/while ops.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D97352
2021-02-25 14:35:21 -08:00
Christian Sigg 8c074cb0b7 [mlir] Mark OpState::getAttrs() deprecated.
Fix call sites.

The method will be removed in 2 weeks.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97464
2021-02-25 20:54:42 +01:00
Jing Pu c519460745 Allow !shape.size type operands in "shape.from_extents" op.
This expands the op to support error propagation and also makes it symmetric with the "shape.get_extent" op.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D97261
2021-02-24 14:50:07 -08:00
Hanhan Wang 705068cb8c [mlir][linalg] Support for using output values in TC definitions.
This will allow us to define select(pred, in, out) for TC ops, which is useful
for pooling ops.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D97312
2021-02-24 11:37:45 -08:00
Weiwei Li ce2ad938ff [mlir][spirv] Define spv.GLSL.Ldexp
co-authored-by: Alan Liu <alanliu.yf@gmail.com>

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D97228
2021-02-24 13:07:46 -05:00
Lei Zhang 5f8a80882b [mlir] Add constBuilderCall to TypeAttr to simplify builders
Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97344
2021-02-24 13:04:03 -05:00
Alexander Belyaev 7377ef9357 [mlir] Add a builder to `linalg.tiled_loop`.
https://llvm.discourse.group/t/rfc-add-linalg-tileop/2833

Differential Revision: https://reviews.llvm.org/D97372
2021-02-24 14:47:27 +01:00
River Riddle 65a3197a8f [mlir] Refactor InterfaceMap to use a sorted vector of interfaces, as opposed to a DenseMap
A majority of operations have a very small number of interfaces, which means that the cost of using a hash map is generally larger for interface lookups than just a binary search. In the future, when there are a number of operations with large numbers of interfaces, we can switch to a hybrid approach that optimizes lookups based on the number of interfaces. For now, however, a binary search is the best approach.

This dropped compile time on a largish TF MLIR module by 20% (half a second).
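
The trade-off is the classic sorted-vector-versus-hash-map one; a generic sketch of the lookup (not the actual InterfaceMap code):

```
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

using TypeID = uintptr_t; // stand-in for mlir::TypeID

// Interfaces kept in a vector sorted by TypeID; with only a handful of
// entries, a binary search beats a hash map in both time and memory.
struct InterfaceMap {
  std::vector<std::pair<TypeID, void *>> interfaces; // sorted by first

  void *lookup(TypeID id) const {
    auto it = std::lower_bound(
        interfaces.begin(), interfaces.end(), id,
        [](const std::pair<TypeID, void *> &entry, TypeID id) {
          return entry.first < id;
        });
    return (it != interfaces.end() && it->first == id) ? it->second : nullptr;
  }
};
```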

Differential Revision: https://reviews.llvm.org/D96085
2021-02-23 14:36:45 -08:00
Adam Straw af8adea155 make Affine parallel and yield ops MemRefsNormalizable
Affine parallel ops may contain and yield results from MemRefsNormalizable ops in the loop body.  Thus, both affine.parallel and affine.yield should have the MemRefsNormalizable trait.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D96821
2021-02-23 10:16:47 -08:00
Nicolas Vasilache 8cf14b8dec [mlir][Linalg] Retire hoistViewAllocOps.
This transformation was only used for quick experimentation and is not general enough.
Retire it.

Differential Revision: https://reviews.llvm.org/D97266
2021-02-23 11:45:19 +00:00
Nicolas Vasilache 551ba72760 [mlir] NFC - Use declarative assembly for scf::YieldOp 2021-02-23 11:17:30 +00:00
River Riddle dc6a84fce6 [mlir] Add support for DebugCounters using the new DebugAction infrastructure
DebugCounters allow for selectively enabling the execution of a debug action based upon a "counter". This counter is comprised of two components that are used in the control of execution of an action, a "skip" value and a "count" value. The "skip" value is used to skip a certain number of initial executions of a debug action. The "count" value is used to prevent a debug action from executing after it has executed for a set number of times (not including any executions that have been skipped). For example, a counter for a debug action with `skip=47` and `count=2`, would skip the first 47 executions, then execute twice, and finally prevent any further executions.
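The skip/count logic is essentially the following (a sketch of the semantics described above, not the actual implementation):

```
#include <cstdint>

// Returns true if the action should execute on this invocation, given how
// many times it has been reached so far.
bool shouldExecute(int64_t &numReached, int64_t skip, int64_t count) {
  int64_t index = numReached++;
  if (index < skip)
    return false;              // still within the initially skipped executions
  return index < skip + count; // execute `count` times, then stop
}

// With skip=47 and count=2: invocations 0..46 are skipped, 47 and 48 execute,
// and everything afterwards is suppressed.
```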

This is effectively the same as the DebugCounter infrastructure in LLVM, but using the DebugAction infrastructure in MLIR. We can't simply reuse the DebugCounter support already present in LLVM due to its heavy reliance on global constructors (which are not allowed in MLIR). The DebugAction infrastructure already nicely supports the debug counter use case, and promotes the separation of policy and mechanism design philosophy.

Differential Revision: https://reviews.llvm.org/D96395
2021-02-23 01:01:17 -08:00
River Riddle 72d5afa4ac [mlir] Add a new debug action framework.
This revision adds the infrastructure for `Debug Actions`. This is a DEBUG only
API that allows for external entities to control various aspects of compiler
execution. This is conceptually similar to something like DebugCounters in LLVM, but at a lower level. This framework doesn't make any assumptions about how the higher level driver is controlling the execution, it merely provides a framework for connecting the two together. This means that on top of DebugCounter functionality, we could also provide more interesting drivers such as interactive execution. A high level overview of the workflow surrounding debug actions is
shown below:

*   Compiler developer defines an `action` that is taken by a pass,
    transformation, or utility that they are developing.
*   Depending on the needs, the developer dispatches various queries, pertaining
    to this action, to an `action manager` that will provide an answer as to
    what behavior the action should do.
*   An external entity registers an `action handler` with the action manager,
    and provides the logic to resolve queries on actions.

The exact definition of an `external entity` is left opaque, to allow for more
interesting handlers.
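
A very rough sketch of how the pieces relate (names are purely illustrative; the real DebugAction API differs):

```
#include <functional>
#include <string>
#include <utility>

// An "action" is a named point at which the compiler asks for permission.
struct DebugAction {
  std::string tag; // e.g. "apply-pattern"
};

// The "action manager" forwards queries to whatever handler is registered;
// the handler encapsulates the policy (counters, interactive prompts, ...).
class DebugActionManager {
public:
  using Handler = std::function<bool(const DebugAction &)>;

  void registerHandler(Handler handler) { this->handler = std::move(handler); }

  // A pass, transformation, or utility asks the manager whether to proceed.
  bool shouldExecute(const DebugAction &action) const {
    return handler ? handler(action) : true; // no handler: always execute
  }

private:
  Handler handler;
};
```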

This framework was proposed here: https://llvm.discourse.group/t/rfc-debug-actions-in-mlir-debug-counters-for-the-modern-world

Differential Revision: https://reviews.llvm.org/D84986
2021-02-23 00:52:17 -08:00
KareemErgawy-TomTom 67e0d58de4 [MLIR][LinAlg] Start detensoring implementation.
This commit is the first baby step towards detensoring in
linalg-on-tensors.

Detensoring is the process through which a tensor value is converted to one or potentially more primitive value(s). During this process, operations with such detensored operands are also converted to an equivalent form that works on primitives.

The detensoring process is driven by linalg-on-tensor ops. In particular, a
linalg-on-tensor op is checked to see whether *all* its operands can be
detensored. If so, those operands are converted to their primitive
counterparts and the linalg op is replaced by an equivalent op that takes
those new primitive values as operands.

This works towards handling github/google/iree#1159.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96271
2021-02-23 08:27:58 +01:00
River Riddle 06e25d5645 [mlir][IR] Refactor the `getChecked` and `verifyConstructionInvariants` methods on Attributes/Types
`verifyConstructionInvariants` is intended to allow for verifying the invariants of an attribute/type on construction, and `getChecked` is intended to enable more graceful error handling aside from an assert. There are a few problems with the current implementation of these methods:
* `verifyConstructionInvariants` requires an mlir::Location for emitting errors, which is prohibitively costly in the situations that would most likely use them, e.g. the parser.
This creates an unfortunate code duplication between the verifier code and the parser code, given that the parser operates on llvm::SMLoc and it is an undesirable overhead to pre-emptively convert from that to an mlir::Location.
* `getChecked` effectively requires duplicating the definition of the `get` method, creating a quite clunky workflow due to the subtle difference in its signature.

This revision aims to tackle the above problems by refactoring the implementation to use a callback for error emission. Using a callback allows for deferring the costly part of error emission until it is actually necessary.
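
The callback pattern, in generic form (a sketch only; the actual MLIR signatures differ):

```
#include <functional>
#include <optional>
#include <string>

// The error callback is only invoked (and the error message only built) when
// verification actually fails, so successful construction pays nothing for
// diagnostics.
std::optional<int> getCheckedWidth(
    const std::function<void(const std::string &)> &emitError, int width) {
  if (width <= 0) {
    emitError("width must be positive, got " + std::to_string(width));
    return std::nullopt;
  }
  return width;
}
```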

Due to the necessary signature change in each instance of these methods, this revision also takes this opportunity to cleanup the definition of these methods by:
* restructuring the signature of `getChecked` such that it can be generated from the same code block as the `get` method.
* renaming `verifyConstructionInvariants` to `verify` to match the naming scheme of the rest of the compiler.

Differential Revision: https://reviews.llvm.org/D97100
2021-02-22 17:37:49 -08:00
Aart Bik 0df59f234b [sparse][mlir] simplify lattice optimization logic
Simplifies the way lattices are optimized with fewer, but more powerful, rules. This also fixes an inaccuracy where too many
lattices resulted (expecting a non-existing universal index).
Also puts no-side-effects on all proper getters and unifies
bufferization flags order in integration tests (for future,
more complex use cases).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97134
2021-02-22 16:52:06 -08:00
Geoffrey Martin-Noble e2224dd753 Fix typo introduced in https://reviews.llvm.org/D97006
Differential Revision: https://reviews.llvm.org/D97220
2021-02-22 13:11:37 -08:00
Geoffrey Martin-Noble 2ce6a42cc9 [MLIR] Add Linalg support for integer (generalized) matmuls
This patch adds Linalg named ops for various types of integer matmuls.
Due to limitations in the tc spec/linalg-ods-gen, ops cannot be type polymorphic, so this instead creates new ops (improvements to the
methods for defining Linalg named ops are underway with a prototype at
https://github.com/stellaraccident/mlir-linalgpy).

To avoid the necessity of directly referencing these many new ops, this
adds additional methods to ContractionOpInterface to allow classifying
types of operations based on their indexing maps.

Reviewed By: nicolasvasilache, mravishankar

Differential Revision: https://reviews.llvm.org/D97006
2021-02-22 11:13:26 -08:00
Tres Popp 5b20d80a03 [mlir] Mark std.subview as NoSideEffect
Differential Revision: https://reviews.llvm.org/D96951
2021-02-22 09:34:38 +01:00
Kern Handa 2d62212b06 [mlir] Export CUDA and Vulkan runtime wrappers on Windows
Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D97140
2021-02-21 22:58:55 -08:00
Stella Laurenzo 6c9541d4dd Implement simple type polymorphism for linalg named ops.
* It was decided that this was the end of the line for the existing custom tc parser/generator, and this is the first step to replacing it with a declarative format that maps well to mathy source languages.
* One such source language is implemented here: https://github.com/stellaraccident/mlir-linalgpy/blob/main/samples/mm.py
  * In fact, this is the exact source of the declarative `polymorphic_matmul` in this change.
  * I am working separately to clean this python implementation up and add it to MLIR (probably as `mlir.tools.linalg_opgen` or equiv). The scope of the python side is greater than just generating named ops: the ops are callable and directly emit `linalg.generic` ops fully dynamically, and this is intended to be a feature for frontends like npcomp to define custom linear algebra ops at runtime.
* There is more work required to handle full type polymorphism, especially with respect to integer formulations, since they require more specificity wrt types.
* Followups to this change will bring the new generator to feature parity with the current one and delete the current. Roughly, this involves adding support for interface declarations and attribute symbol bindings.

Differential Revision: https://reviews.llvm.org/D97135
2021-02-21 14:30:31 -08:00
Jacques Pienaar 02d7b260c6 [mlir] Register the print-op-graph pass using ODS
Move over to ODS & use pass options.
2021-02-20 15:42:02 -08:00
Eugene Zhulenev f99ccf6516 [mlir] Add math polynomial approximation pass
This gives ~30x speedup compared to expanding Tanh into exp operations:

```
name                  old cpu/op  new cpu/op  delta
BM_mlir_Tanh_f32/10    253ns ± 3%    55ns ± 7%  -78.35%  (p=0.000 n=44+41)
BM_mlir_Tanh_f32/100  2.21µs ± 4%  0.14µs ± 8%  -93.85%  (p=0.000 n=48+49)
BM_mlir_Tanh_f32/1k   22.6µs ± 4%   0.7µs ± 5%  -96.68%  (p=0.000 n=32+42)
BM_mlir_Tanh_f32/10k   225µs ± 5%     7µs ± 6%  -96.88%  (p=0.000 n=49+55)

name                  old time/op             new time/op             delta
BM_mlir_Tanh_f32/10    259ns ± 1%               56ns ± 2%  -78.31%        (p=0.000 n=41+39)
BM_mlir_Tanh_f32/100  2.27µs ± 1%             0.14µs ± 5%  -93.89%        (p=0.000 n=46+49)
BM_mlir_Tanh_f32/1k   22.9µs ± 1%              0.8µs ± 4%  -96.67%        (p=0.000 n=30+42)
BM_mlir_Tanh_f32/10k   230µs ± 0%                7µs ± 3%  -96.88%        (p=0.000 n=37+55)
```

This approximation is based on the Eigen::generic_fast_tanh function.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D96739
2021-02-19 12:43:36 -08:00
Nicolas Vasilache 0ee4bf151c [mlir] Add folding of tensor.cast -> subtensor_insert
Differential Revision: https://reviews.llvm.org/D97059
2021-02-19 17:24:16 +00:00
Nicolas Vasilache 62f5c46eec [mlir][Linalg] NFC - Expose more options to the CodegenStrategy 2021-02-19 14:01:44 +00:00
Alexander Belyaev 53367b8fe1 [mlir][nfc] Fix indentation in LinalgOps.td. 2021-02-19 13:02:58 +01:00
Nicolas Vasilache b3c227a25a [mlir] Better support for rank-reducing subview / subtensor type inference.
Differential Revision: https://reviews.llvm.org/D96995
2021-02-19 08:30:50 +00:00
Geoffrey Martin-Noble db011775e4 Reland "[MLIR] Make structured op tests permutation invariant"
Relands with fix swapping DEPENDS for LINK_LIBS.

This reverts commit cd8cc00b9e.

Differential Revision: https://reviews.llvm.org/D97011
2021-02-18 18:09:49 -08:00
Jing Pu d690cbf821 Add DivOp to the Shape dialect
Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D96907
2021-02-18 16:58:47 -08:00
Mehdi Amini cd8cc00b9e Revert "[MLIR] Make structured op tests permutation invariant"
This reverts commit b9ff67099a.
The build is broken with -DBUILD_SHARED_LIBS=ON
2021-02-19 00:16:45 +00:00
Geoffrey Martin-Noble b9ff67099a [MLIR] Make structured op tests permutation invariant
Extracts the relevant dimensions from the map under test to build up the
maps to test against in a permutation-invariant way.

This also includes a fix to the indexing maps used by
isColumnMajorMatmul. The maps as currently written do not describe a
column-major matmul. The linalg named op column_major_matmul has the
correct maps (and notably fails the current test).

If `C = matmul(A, B)`, we want an operation that, given A in column-major format and B in column-major format, produces C in column-major format. For a matrix, faux column-major is just the transpose, so `column_major_matmul(transpose(A), transpose(B)) = transpose(C)`. If `A` is `NxK` and `B` is `KxM`, then `C` is `NxM`, so `transpose(A)` is `KxN`, `transpose(B)` is `MxK`, and `transpose(C)` is `MxN`, not `NxM` as these maps currently have.
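
The dimension argument can be sanity-checked numerically: for row-major data, reading an operand as column-major is the same as transposing it, so `transpose(matmul(A, B))` must equal `matmul(transpose(B), transpose(A))`. A standalone check (an assumed example, not part of the test itself):

```
#include <cassert>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// Plain row-major matmul: (NxK) * (KxM) -> (NxM).
Matrix matmul(const Matrix &a, const Matrix &b) {
  size_t n = a.size(), k = b.size(), m = b[0].size();
  Matrix c(n, std::vector<int>(m, 0));
  for (size_t i = 0; i < n; ++i)
    for (size_t p = 0; p < k; ++p)
      for (size_t j = 0; j < m; ++j)
        c[i][j] += a[i][p] * b[p][j];
  return c;
}

Matrix transpose(const Matrix &a) {
  Matrix t(a[0].size(), std::vector<int>(a.size()));
  for (size_t i = 0; i < a.size(); ++i)
    for (size_t j = 0; j < a[i].size(); ++j)
      t[j][i] = a[i][j];
  return t;
}

int main() {
  Matrix a = {{1, 2, 3}, {4, 5, 6}};                        // N=2 x K=3
  Matrix b = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}}; // K=3 x M=4
  // transpose(C) is MxN (4x2) and equals matmul(transpose(B), transpose(A)),
  // i.e. (MxK) * (KxN), matching the corrected indexing maps.
  assert(transpose(matmul(a, b)) == matmul(transpose(b), transpose(a)));
  return 0;
}
```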

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96984
2021-02-18 14:36:07 -08:00
Nicolas Vasilache b006902b2d [mlir] Fold trivial subtensor / subtensor_insert ops.
Static subtensor / subtensor_insert of the same size as the source / destination tensor and root @[0..0] with strides [1..1] are folded away.

Differential revision: https://reviews.llvm.org/D96991
2021-02-18 21:34:55 +00:00
Alexander Belyaev 624fccba87 [mlir] Add `linalg.tiled_loop` op.
`subtensor_insert` was used instead of `linalg.subtensor_yield` to make this PR
smaller. Verification will be added in a follow-up PR.

Differential Revision: https://reviews.llvm.org/D96943
2021-02-18 13:23:00 +01:00
Alexander Belyaev a89035d750 Revert "[MLIR] Create memref dialect and move several dialect-specific ops from std."
This commit introduced a cyclic dependency:
Memref dialect depends on Standard because it used ConstantIndexOp.
Std depends on the MemRef dialect in its EDSC/Intrinsics.h.

Working on a fix.

This reverts commit 8aa6c3765b.
2021-02-18 12:49:52 +01:00
Julian Gross 8aa6c3765b [MLIR] Create memref dialect and move several dialect-specific ops from std.
Create the memref dialect and move several dialect-specific ops that have no dependencies on other ops from the std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
DeallocOp -> MemRef_DeallocOp
MemRefCastOp -> MemRef_CastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
TransposeOp -> MemRef_TransposeOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D96425
2021-02-18 11:29:39 +01:00
Rob Suderman 55756f32f7 [MLIR][TOSA] Expand Tosa int types to I8 and I16
Tosa integers should include I8 and I16 values.

Differential Revision: https://reviews.llvm.org/D96900
2021-02-17 14:18:38 -08:00
Eugene Zhulenev 519f5917b4 [mlir] Add fma operation to std dialect
Will remove the `vector.fma` operation in follow-up CLs.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D96801
2021-02-17 10:06:01 -08:00
Weiwei Li 7742620620 [mlir][spirv] Add spv.GLSL.FrexpStruct
co-authored-by: Alan Liu <alanliu.yf@gmail.com>

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D96527
2021-02-17 09:02:03 -05:00
Benjamin Kramer 63a35f35ec [mlir][Shape] Generalize cstr_broadcastable folding for n-ary broadcasts
This is still fairly tricky code, but I tried to untangle it a bit.

Differential Revision: https://reviews.llvm.org/D96800
2021-02-17 11:44:52 +01:00
Benjamin Kramer 82b692e546 [mlir][Shape] Mark BroadcastOp as not having side effects
This allows it to be dead code eliminated when unused.

Differential Revision: https://reviews.llvm.org/D96797
2021-02-17 10:26:14 +01:00
MaheshRavishankar 81264dfbe8 [mlir][Linalg] Add utility method to reshape ops to express output shape in terms of input shape.
Resolving the dim of outputs of a tensor_reshape op in terms of its
input shape allows the op to be eliminated when it's used only in its
dims. The init_tensor -> tensor_reshape canonicalization can be
simplified to use the dims of the output of the tensor_reshape which
gets canonicalized away later making the tensor_reshape dead.

Differential Revision: https://reviews.llvm.org/D96635
2021-02-16 13:42:08 -08:00
Adam Straw 99c0458f2f separate AffineMapAccessInterface from AffineRead/WriteOpInterface
Separating the AffineMapAccessInterface from AffineRead/WriteOp interface so that dialects which extend Affine capabilities (e.g. PlaidML PXA = parallel extensions for Affine) can utilize relevant passes (e.g. MemRef normalization).

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D96284
2021-02-16 13:05:27 -08:00
Alex Zinenko ce8f10d6cb [mlir] Simplify ModuleTranslation for LLVM IR
A series of preceding patches changed the mechanism for translating MLIR to
LLVM IR to use dialect interface with delayed registration. It is no longer
necessary for specific dialects to derive from ModuleTranslation. Remove all
virtual methods from ModuleTranslation and factor out the entry point to be a
free function.

Also perform some cleanups in ModuleTranslation internals.

Depends On D96774

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96775
2021-02-16 18:42:52 +01:00
Alex Zinenko 2ab57c503e [mlir] tighten LLVM dialect verifiers to generate valid LLVM IR
Verification of the LLVM IR produced when translating various MLIR dialects was
only active when calling the translation programmatically. This has led to
several cases of invalid LLVM IR being generated that could not be caught with
textual mlir-translate tests. Add verifiers for these cases and fix the tests
in preparation for enforcing the validation of LLVM IR.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96774
2021-02-16 18:18:21 +01:00
Alex Zinenko 9cd47a26d5 [mlir] add verifiers for NVVM and ROCDL kernel attributes
Make sure they can only be attached to LLVM functions as a result of converting
GPU functions to the LLVM Dialect.
2021-02-16 18:06:54 +01:00
Thomas Raoux 807e5467f3 [mlir] Add canonicalization for tensor_cast + tensor_to_memref
This helps bufferization passes by removing tensor_cast operations.

Differential Revision: https://reviews.llvm.org/D96745
2021-02-16 07:11:09 -08:00
Lei Zhang cb1a42359b [mlir][vector] Move splitting transfer ops into a separate entry point
These patterns unroll transfer read/write ops if the vector consumers/producers are extract/insert slice ops. Transfer ops can map to hardware
load/store functionalities, where the vector size matters for bandwidth
considerations. So these patterns should be collected separately, instead
of being generic canonicalization patterns.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96782
2021-02-16 10:04:34 -05:00
Lei Zhang d8c7f442ea [mlir][vector] Add support for unrolling vector.fma
Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96781
2021-02-16 09:56:25 -05:00
Adrian Kuegel 9f581815ae Add Expm1 op to the math dialect.
Differential Revision: https://reviews.llvm.org/D96704
2021-02-16 08:33:37 +01:00
Nicolas Vasilache d01ea0edaa [mlir] Drop reliance of SliceAnalysis on specific ops.
SliceAnalysis originally was developed in the context of affine.for within mlfunc.
It predates the notion of region.
This revision updates it to not hardcode specific ops like scf::ForOp.
When rooted at an op, the behavior of the slice computation changes as it recurses into the regions of the op. This does not support gathering all values transitively depending on a loop induction variable anymore.
Additional variants rooted at a Value are added to also support the existing behavior.

Differential revision: https://reviews.llvm.org/D96702
2021-02-16 06:34:32 +00:00
Nicolas Vasilache 02d053ed2d [mlir][Vector] Add a canonicalization pattern for vector.contract + add
Differential Revision: https://reviews.llvm.org/D96701
2021-02-15 21:22:36 +00:00
Jacques Pienaar 381a65fa06 [mlir] Add clone method to ShapedType
Allow clients to create a new ShapedType of the same "container" type but with a different element type or shape. The first use case is refining a shape during shape inference without needing to consider which ShapedType is being refined.

Differential Revision: https://reviews.llvm.org/D96682
2021-02-15 11:04:16 -08:00
Tres Popp 3842d4b679 Make shape.is_broadcastable/shape.cstr_broadcastable nary
This corresponds with the previous work to make shape.broadcast nary.
Additionally, simplify the ConvertShapeConstraints pass. It no longer lowers an implicit shape.is_broadcastable. This is still the same in
combination with shape-to-standard when the 2 passes are used in either
order.

Differential Revision: https://reviews.llvm.org/D96401
2021-02-15 16:05:32 +01:00
Alex Zinenko 176379e0c8 [mlir] Use the interface-based translation for LLVM "intrinsic" dialects
Port the translation of five dialects that define LLVM IR intrinsics
(LLVMAVX512, LLVMArmNeon, LLVMArmSVE, NVVM, ROCDL) to the new dialect
interface-based mechanism. This allows us to remove individual translations
that were created for each of these dialects and just use one common
MLIR-to-LLVM-IR translation that potentially supports all dialects instead,
based on what is registered and including any combination of translatable
dialects. This removal was one of the main goals of the refactoring.

To support the addition of GPU-related metadata, the translation interface is
extended with the `amendOperation` function that allows the interface
implementation to post-process any translated operation with dialect attributes
from the dialect for which the interface is implemented regardless of the
operation's dialect. This is currently applied to "kernel" functions, but can
be used to construct other metadata in dialect-specific ways without
necessarily affecting operations.

Depends On D96591, D96504

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96592
2021-02-15 14:43:07 +01:00
Tres Popp 89d900b2a1 [mlir] Add error message on shape.broadcast verification failure 2021-02-15 10:58:53 +01:00
Alex Zinenko 34ea608a47 [mlir] Support repeated delayed registration of dialect interfaces
Dialects themselves do not support repeated addition of interfaces with the
same TypeID. However, in case of delayed registration, the registry may contain
such an interface, or have the same interface registered several times due to,
e.g., dependencies. Make sure delayed registration does not attempt to add
an interface with the same TypeID more than once.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D96606
2021-02-15 10:46:26 +01:00
Tobias Gysi 99f3510b41 Reland "[mlir] add support for verification in integration tests"
The patch extends the runner utils with verification methods that compare two memrefs. The methods compare the contents of the two memrefs and print success if the data is identical up to a small numerical error. The methods are meant to simplify the development of integration tests that compare the results against a reference implementation (cf. the updates to the linalg matmul integration tests).

Originally landed in 5fa893c (https://reviews.llvm.org/D96326) and reverted in dd719fd due to a Windows build failure.

Changes:
- Remove the max function that requires the "algorithm" header on Windows
- Eliminate the truncation warning in the float specialization of verifyElem by using a float constant

Reviewed By: Kayjukh

Differential Revision: https://reviews.llvm.org/D96593
2021-02-14 20:30:05 +01:00
Praveen Narayanan a65fb1916c Add a "kind" attribute to ContractionOp and OuterProductOp.
Currently, vector.contract joins the intermediate result and the accumulator
argument (of rank K) using summation. We desire more joining operations ---
such as max --- to help vector.contract express reductions. This change extends
Vector_ContractionOp to take an optional attribute (called "kind", of enum type
CombiningKind) specifying the joining operation to be add/mul/min/max for int/fp,
and and/or/xor for int only. By default this attribute has value "add".

To implement this we also need to extend vector.outerproduct, since
vector.contract gets transformed to vector.outerproduct (and that to
vector.fma). The extension for vector.outerproduct is also an optional kind
attribute that uses the same enum type and possible values. The default is
"add". In case of max/min we transform vector.outerproduct to a combination of
compare and select.
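
For the max/min case the rewrite amounts to the usual compare-plus-select idiom (a trivial scalar illustration, not the actual vector lowering):

```
// max(acc, value) expressed as a compare followed by a select, mirroring how
// a "max" combining kind can be lowered when no dedicated instruction exists.
float combineMax(float acc, float value) {
  bool pred = acc > value;   // compare
  return pred ? acc : value; // select
}
```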

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D93280
2021-02-12 20:23:59 +00:00
Mehdi Amini aa4e466caa [mlir][Linalg] Improve region support in Linalg ops
This revision takes advantage of the newly extended `ref` directive in assembly format
to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization.

This reverts commit 3f22547fd1 and relands 973e133b76 with a workaround for
a gcc bug that does not accept lambda default parameters:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59949

Differential Revision: https://reviews.llvm.org/D96598
2021-02-12 19:11:24 +00:00
Diego Caballero 656674a7c4 [mlir][Vector] Align gather/scatter/expand/compress API
Align the vector gather/scatter/expand/compress API with
the vector load/store/maskedload/maskedstore API.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D96396
2021-02-12 20:48:38 +02:00