Commit Graph

38 Commits

Author SHA1 Message Date
Eugene Zhulenev 705f048cbb [mlir] MemRefToLLVM: convert memref.view operations for empty memrefs
Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D126094
2022-05-20 16:43:54 -07:00
Ashay Rane 5380e30e04
[mlir] translate memref.reshape ops that have static shapes
This patch references code for translating memref.reinterpret_cast ops
to add translation rules for memref.reshape ops that have a static shape
argument.  Since reshape ops don't have offsets, sizes, or strides, this
patch simply sets the allocated and aligned pointers of the MemRef
descriptor.

Reviewed By: ftynse, cathyzhyi

Differential Revision: https://reviews.llvm.org/D125039
2022-05-12 11:57:20 -07:00
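
As a rough picture of the descriptor manipulation described in the commit above: a minimal C++ sketch using MLIR's MemRefDescriptor helper, not the actual patch; the names `rewriter`, `loc`, `llvmDescTy` (the converted result descriptor type), and `srcDesc` (the source descriptor) are placeholders.

```
#include "mlir/Conversion/LLVMCommon/MemRefBuilder.h"
#include "mlir/IR/Builders.h"

// Copy only the allocated and aligned pointers from the source descriptor
// into a fresh descriptor of the result type.
static mlir::Value copyPointers(mlir::OpBuilder &rewriter, mlir::Location loc,
                                mlir::Type llvmDescTy,
                                mlir::MemRefDescriptor srcDesc) {
  auto dstDesc = mlir::MemRefDescriptor::undef(rewriter, loc, llvmDescTy);
  dstDesc.setAllocatedPtr(rewriter, loc, srcDesc.allocatedPtr(rewriter, loc));
  dstDesc.setAlignedPtr(rewriter, loc, srcDesc.alignedPtr(rewriter, loc));
  return dstDesc;
}
```
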
Yi Zhang e1318078a4 Support non identity layout map for reshape ops in MemRefToLLVM lowering
This change borrows the ideas from `computeExpanded/CollapsedLayoutMap`
and computes the dynamic strides at runtime for the memref descriptors.

Differential Revision: https://reviews.llvm.org/D124001
2022-04-26 13:03:53 -04:00
River Riddle eda6f907d2 [mlir][NFC] Shift a bunch of dialect includes from the .h to the .cpp
Now that dialect constructors are generated in the .cpp file, we can
drop all of the dependent dialect includes from the .h file.

Differential Revision: https://reviews.llvm.org/D124298
2022-04-23 01:09:29 -07:00
River Riddle 58ceae9561 [mlir:NFC] Remove the forward declaration of FuncOp in the mlir namespace
FuncOp has been moved to the `func` namespace for a little over a month, so the
using directive can be dropped now.
2022-04-18 12:01:55 -07:00
River Riddle 3655069234 [mlir] Move the Builtin FuncOp to the Func dialect
This commit moves FuncOp out of the builtin dialect, and into the Func
dialect. This move has been planned in some capacity from the moment
we made FuncOp an operation (years ago). This commit handles the
functional aspects of the move, but various aspects are left untouched
to ease migration: func::FuncOp is re-exported into mlir to reduce the
actual API churn, and the assembly format still accepts the unqualified
`func`. These temporary measures will remain for a little while to
simplify migration before being removed.

Differential Revision: https://reviews.llvm.org/D121266
2022-03-16 17:07:03 -07:00
River Riddle 23aa5a7446 [mlir] Rename the Standard dialect to the Func dialect
The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:

* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect

See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D120624
2022-03-01 12:10:04 -08:00
Benjamin Kramer 27cd2a6284 [mlir][MemRef] Lower memref.copy with an offset to memcpy
memcpy can handle them as long as they're contiguous.

Differential Revision: https://reviews.llvm.org/D119938
2022-02-16 17:18:31 +01:00
Adrian Kuegel 2219f9f57c [mlir][MemRef] Fix MemRefCopyOpLowering to use correct number of bytes
When lowering to a memrefCopy call, the size for the i1 type was calculated as 0.
Instead of using getTypeSizeInBits() and dividing by 8, we should just use getTypeSize().

Differential Revision: https://reviews.llvm.org/D119540
2022-02-11 13:59:08 +01:00
Adrian Kuegel 5b02a48085 [mlir][MemRef] Fix MemRefCastOpLowering for 32 bit index type.
The lowering creates llvm.insertvalue with the rank value, so it needs to use
the index type instead of a 64-bit integer type. Otherwise, we get an error:

'llvm.insertvalue' op Type mismatch: cannot insert 'i64' into '!llvm.struct<(i32, ptr<i8>)>'

Differential Revision: https://reviews.llvm.org/D119534
2022-02-11 12:37:15 +01:00
Benjamin Kramer 6635c12ada [mlir] Use SmallBitVector instead of SmallDenseSet for AffineMap::compressSymbols
This is both more efficient and more ergonomic to use, as inverting a
bit vector is trivial while inverting a set is annoying.

Sadly this leaks into a bunch of APIs downstream, so adapt them as well.

This would be NFC, but there is an ordering dependency in MemRefOps's
computeMemRefRankReductionMask. This is now deterministic, previously it
was dependent on SmallDenseSet's unspecified iteration order.

Differential Revision: https://reviews.llvm.org/D119076
2022-02-07 00:21:44 +01:00
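
A small illustration of the ergonomics point in the commit above (not code from the patch): with llvm::SmallBitVector, taking the complement of a selection is a single flip(), whereas a SmallDenseSet would have to be rebuilt element by element.

```
#include "llvm/ADT/SmallBitVector.h"

// Build a mask of unused symbols from a mask of used ones.
llvm::SmallBitVector unusedSymbols(unsigned numSymbols) {
  llvm::SmallBitVector used(numSymbols);
  if (numSymbols > 1)
    used.set(1);  // pretend symbol #1 is used (illustrative index)
  used.flip();    // every set bit now marks an unused symbol
  return used;
}
```
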
River Riddle ace01605e0 [mlir] Split out a new ControlFlow dialect from Standard
This dialect is intended to model lower-level, branch-based control-flow constructs. The initial set
of operations is AssertOp, BranchOp, CondBranchOp, and SwitchOp, all split out from the current
standard dialect.

See https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D118966
2022-02-06 14:51:16 -08:00
Alexander Belyaev ebc8153786 Revert "Revert "[mlir] Purge `linalg.copy` and use `memref.copy` instead.""
This reverts commit 25bf6a2a9b.
2022-02-01 18:21:21 +01:00
River Riddle 632a4f8829 [mlir] Move std.generic_atomic_rmw to the memref dialect
This is part of splitting up the standard dialect. The move makes sense anyway,
given that the memref dialect already holds memref.atomic_rmw, which is the non-region
sibling operation of std.generic_atomic_rmw (the relationship is even clearer given that
they have nearly the same description, modulo how they represent the inner computation).

Differential Revision: https://reviews.llvm.org/D118209
2022-01-26 11:52:01 -08:00
River Riddle e084679f96 [mlir] Make locations required when adding/creating block arguments
BlockArguments gained the ability to have locations attached a while ago, but they
have always been optional. This goes against the core tenet of MLIR that location
information is a requirement, so this commit updates the API to require locations.

Fixes #53279

Differential Revision: https://reviews.llvm.org/D117633
2022-01-19 17:35:35 -08:00
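
A minimal sketch of the updated API implied by the commit above (an assumed helper, not from the patch): the location is now a required second argument to addArgument.

```
#include "mlir/IR/Block.h"
#include "mlir/IR/Builders.h"

// Append a block argument; every argument now carries a Location.
static mlir::BlockArgument appendF32Arg(mlir::Block &block,
                                        mlir::OpBuilder &builder) {
  return block.addArgument(builder.getF32Type(), builder.getUnknownLoc());
}
```
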
Stephan Herhut aa3cabe3cb [mlir][memref] Fix memref.copy of scalar memref
Also fix a memory leak in the test while at it.

Differential Revision: https://reviews.llvm.org/D117314
2022-01-14 16:13:12 +01:00
Stephan Herhut ab95ba704d [mlir][memref] Implement fast lowering of memref.copy
In the absence of maps, we can lower memref.copy to a memcpy.

Differential Revision: https://reviews.llvm.org/D116099
2022-01-14 14:22:15 +01:00
Benoit Jacob 499703e9c0 Enable ReassociatingReshapeOpConversion with "non-identity" layouts.

This removes an early return in this function, which seems unnecessary and was
preventing some memref.collapse_shape ops from converting to LLVM (see the included lit test).

It seems unnecessary because the return message says "only empty layout map is supported"
but there actually is code in this function to deal with non-empty layout maps. Maybe
it refers to an earlier state of implementation and is just out of date?

Though, there is another concern about this early return: the condition that it actually
checks, `{src,dst}MemrefType.getLayout().isIdentity()`, is not quite the same as what the
return message says, "only empty layout map is supported". Stepping through this
`getLayout().isIdentity()` code in GDB, I found that it evaluates to `.getAffineMap().isIdentity()`
which does (AffineMap.cpp:271):

```
  if (getNumDims() != getNumResults())
    return false;
```

It seems this would always return false for memrefs of rank greater than 1?

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D114808
2022-01-13 17:46:20 +00:00
River Riddle 676bfb2a22 [mlir] Refactor ShapedType into an interface
ShapedType was created in a time before interfaces, and is one of the earliest
type base classes in the ecosystem. This commit refactors ShapedType into
an interface, which is what it would have been if interfaces had existed at that
time. The API of ShapedType and its derived classes is essentially untouched
by this refactor, with the exception being the API surrounding kDynamicIndex
(which requires a sole home).

For now, the API of ShapedType and its name have been kept as consistent to
the current state of the world as possible (to help with potential migration churn,
among other reasons). Moving forward though, we should look into potentially
restructuring its API, and possibly its name as well (it should really have "Interface"
at the end like other interfaces at the very least).

One other potentially interesting note is that I've attached the ShapedType::Trait
to TensorType/BaseMemRefType to act as mixins for the ShapedType API. This
is kind of weird, but allows for sharing the same API (i.e. preventing API loss from
the transition from base class -> Interface). This inheritance doesn't affect any
of the derived classes; it is just an API mixin.

Differential Revision: https://reviews.llvm.org/D116962
2022-01-12 14:12:09 -08:00
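
A sketch of the point that call sites read the same after the refactor (illustrative only, not patch code): ShapedType is now reached through an interface cast, but the query API is unchanged.

```
#include "mlir/IR/BuiltinTypes.h"

// Inspect the shape of a tensor or memref type through the ShapedType interface.
void inspectShape(mlir::Type ty) {
  auto shaped = ty.cast<mlir::ShapedType>();
  if (shaped.hasRank()) {
    int64_t rank = shaped.getRank();
    llvm::ArrayRef<int64_t> shape = shaped.getShape();
    (void)rank;
    (void)shape;
  }
}
```
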
Alex Zinenko cafaa35036 [mlir] Make it possible to directly supply constant values to LLVM GEPOp
In LLVM IR, the GEP indices that correspond to structures are required to be
i32 constants. MLIR models constants as just values defined by special
operations, and there is no verification that it is the case for structure
indices in GEP. Furthermore, some common transformations such as control flow
simplification may lead to the operands becoming non-constant. Make it possible
to directly supply constant values to LLVM GEPOp to guarantee they remain
constant until the translation to LLVM IR. This is not yet a requirement and
the verifier is not modified; that will be introduced separately.

Reviewed By: wsmoses

Differential Revision: https://reviews.llvm.org/D116757
2022-01-07 09:56:01 +01:00
Mehdi Amini e4853be2f1 Apply clang-tidy fixes for performance-for-range-copy to MLIR (NFC) 2022-01-02 22:19:56 +00:00
William S. Moses a6a583dae4 [MLIR] Move AtomicRMW into MemRef dialect and enum into Arith
Per the discussion in https://reviews.llvm.org/D116345 it makes sense
to move AtomicRMWOp out of the standard dialect. This was accentuated by the
need to add a fold op with a memref::cast. The only dialect
that would permit this is the memref dialect (keeping it in the standard dialect
or moving it to the arithmetic dialect would require those dialects to have a
dependency on the memref dialect, which breaks linking).

As the AtomicRMWKind enum is used throughout, this has been moved to Arith.

Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D116392
2021-12-30 14:31:33 -05:00
Alexander Belyaev 15f8f3e20a [mlir] Split std.rank into tensor.rank and memref.rank.
Move `std.rank` similarly to how `std.dim` was moved to TensorOps and MemRefOps.

Differential Revision: https://reviews.llvm.org/D115665
2021-12-14 10:15:55 +01:00
River Riddle 937e40a8cf [mlir] Remove the non-templated DenseElementsAttr::getSplatValue
This predates the templated variant, and has been simply forwarding
to getSplatValue<Attribute> for some time. Removing this makes the
API a bit more uniform, and also helps prevent users from thinking
it is "cheap".
2021-11-09 01:40:40 +00:00
River Riddle ae40d62541 [mlir] Refactor ElementsAttr's value access API
There are several aspects of the API that either aren't easy to use or make it
deceptively easy to do the wrong thing. The main change of this commit
is to remove all of the `getValue<T>`/`getFlatValue<T>` from ElementsAttr
and instead provide operator[] methods on the ranges returned by
`getValues<T>`. This provides a much more convenient API for the value
ranges. It also removes the easy-to-be-inefficient nature of
getValue/getFlatValue, which under the hood would construct a new range for
the type `T`. Constructing a range is not necessarily cheap in all cases, and
could lead to very poor performance if used within a loop; i.e. if you were to
naively write something like:

```
DenseElementsAttr attr = ...;
for (int i = 0; i < size; ++i) {
  // We are internally rebuilding the APFloat value range on each iteration!!
  APFloat it = attr.getFlatValue<APFloat>(i);
}
```
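
The pattern the commit recommends instead, sketched in the same style as the snippet above (illustrative, not taken from the patch): build the range once with getValues<T>() and index it.

```
DenseElementsAttr attr = ...;
// The APFloat value range is constructed once, outside the loop.
auto values = attr.getValues<APFloat>();
for (int i = 0; i < size; ++i) {
  // operator[] on the range; no per-iteration range construction.
  APFloat it = values[i];
}
```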

Differential Revision: https://reviews.llvm.org/D113229
2021-11-09 00:15:08 +00:00
Jacques Pienaar cfb72fd3a0 [mlir] Switch arith, llvm, std & shape dialects to accessors prefixed both form.
Following
https://llvm.discourse.group/t/psa-ods-generated-accessors-will-change-to-have-a-get-prefix-update-you-apis/4476,
this flips these dialects to the _Both prefixed form. This changes the
accessors to have a prefix. This was mostly possible without breaking
changes where the existing convenience methods were used.

(https://github.com/jpienaar/llvm-project/blob/main/clang-tools-extra/clang-tidy/misc/AddGetterCheck.cpp
was used to migrate the callers post flipping, using the output from
Operator.cpp)

Differential Revision: https://reviews.llvm.org/D112383
2021-10-24 18:36:33 -07:00
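
An illustrative sketch of the flip, assuming an arith.addi op is in scope (the example op and header path are assumptions, not from the patch): in the _Both form the generated accessors gain a prefix while the old spelling keeps working during migration.

```
#include "mlir/Dialect/Arithmetic/IR/Arithmetic.h"

// Both accessor spellings are generated in the _Both form.
void useBothForms(mlir::arith::AddIOp addOp) {
  mlir::Value lhs = addOp.getLhs();  // new prefixed accessor
  mlir::Value rhs = addOp.rhs();     // old unprefixed accessor, still generated
  (void)lhs;
  (void)rhs;
}
```
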
Vladislav Vinogradov e41ebbecf9 [mlir][RFC] Refactor layout representation in MemRefType
The change is based on the proposal from the following discussion:
https://llvm.discourse.group/t/rfc-memreftype-affine-maps-list-vs-single-item/3968

* Introduce `MemRefLayoutAttr` interface to get `AffineMap` from an `Attribute`
  (`AffineMapAttr` implements this interface).
* Store layout as a single generic `MemRefLayoutAttr`.

This change removes the affine map composition feature and related API.
While the `MemRefType` itself supported it, almost nothing upstream
could work with more than one affine map in `MemRefType`.

The introduced `MemRefLayoutAttr` allows this feature to be re-implemented
in a more stable way, via a separate attribute class.

The interface also allows layout representations other than affine maps.
For example, the described "stride + offset" form, which is currently supported only in the ASM parser,
can now be expressed as a separate attribute.

Reviewed By: ftynse, bondhugula

Differential Revision: https://reviews.llvm.org/D111553
2021-10-19 12:31:15 +03:00
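
A minimal sketch of querying the new single layout attribute (illustrative only; assumes a MemRefType value is at hand):

```
#include "mlir/IR/AffineMap.h"
#include "mlir/IR/BuiltinTypes.h"

// The layout is a single attribute implementing the layout interface,
// from which an affine map can still be obtained.
void inspectLayout(mlir::MemRefType type) {
  if (!type.getLayout().isIdentity()) {
    mlir::AffineMap map = type.getLayout().getAffineMap();
    (void)map;
  }
}
```
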
Eugene Zhulenev 8276ac13e9 [mlir] Add alignment attribute to memref.global
Revived https://reviews.llvm.org/D102435

Add an alignment attribute to `memref.global` and propagate it to the LLVM global in the memref->llvm lowering

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D111309
2021-10-07 06:21:57 -07:00
River Riddle ef976337f5 [mlir:OpConversion] Remove the remaining usages of the deprecated matchAndRewrite methods
This commit updates the remaining usages of the ArrayRef<Value>-based
matchAndRewrite/rewrite methods in favor of the new OpAdaptor
overload.

Differential Revision: https://reviews.llvm.org/D110360
2021-09-24 17:51:41 +00:00
MaheshRavishankar 4cf9bf6c9f [mlir][MemRef] Compute unused dimensions of rank-reducing subviews using strides as well.
For `memref.subview` operations, when there is more than one
unit dimension, the strides need to be used to figure out which of
the unit dims are actually dropped.

Differential Revision: https://reviews.llvm.org/D109418
2021-09-20 11:05:30 -07:00
Uday Bondhugula f68939d3d9 [MLIR] Tighten type constraint on memref.global op def
Tighten the def of memref.global op to use the right kind of TypeAttr
(of MemRefType).

Differential Revision: https://reviews.llvm.org/D109822
2021-09-15 22:41:03 +05:30
Chris Lattner faf1c22408 [Builder] Eliminate the StringRef/StringAttr forms of getSymbolRefAttr.
The StringAttr version doesn't need a context, so we can just use the
existing `SymbolRefAttr::get` form.  The StringRef version isn't preferred,
so we want to encourage people to use StringAttr.

There is an additional form of getSymbolRefAttr that takes a (SymbolTrait
implementing) operation.  This should also be moved, but I'll do that as
a separate patch.

Differential Revision: https://reviews.llvm.org/D108922
2021-08-30 16:05:36 -07:00
William S. Moses 8c2ff7b69e [MLIR] Correct linkage of lowered globalop
LLVM considers global variables marked as external to be defined within the module if they are initialized (including to an undef value). Other external globals are considered to be defined externally and imported into the current translation unit. The lowering of MLIR global ops does not properly propagate undefined initializers, resulting in a global that is expected to be defined within the current TU not being defined.

Differential Revision: https://reviews.llvm.org/D108252
2021-08-18 11:09:43 -04:00
Butygin 00a756d3f6 [mlir] Remove invalid DeallocOpLowering pattern insertion
It is inserted later under that condition.

Differential Revision: https://reviews.llvm.org/D107238
2021-08-02 11:27:44 +03:00
Yi Zhang 381c3b9299 Dynamic shape support for memref reassociation reshape ops
Only memrefs with an identity layout map are supported for now.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D106180
2021-07-19 15:14:36 -07:00
Alex Zinenko 881dc34f73 [mlir] replace llvm.mlir.cast with unrealized_conversion_cast
The dialect-specific cast between builtin (ex-standard) types and LLVM
dialect types was introduced a long time before built-in support for
unrealized_conversion_cast. It has a similar purpose, but is restricted
to compatible builtin and LLVM dialect types, which may hamper
progressive lowering and composition with types from other dialects.
Replace llvm.mlir.cast with unrealized_conversion_cast, and drop the
operation that became unnecessary.

Also make unrealized_conversion_cast legal by default in
LLVMConversionTarget, as the majority of conversions using it are partial
conversions that actually want the casts to persist in the IR. The
standard-to-llvm conversion, which is still expected to run last, cleans
up the remaining casts.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D105880
2021-07-16 15:14:09 +02:00
Alexander Belyaev 46ef86b5d8 [mlir] Move linalg::Expand/CollapseShapeOp to memref dialect.
RFC: https://llvm.discourse.group/t/rfc-reshape-ops-restructuring/3310

Differential Revision: https://reviews.llvm.org/D106141
2021-07-16 13:32:17 +02:00
Alex Zinenko 75e5f0aac9 [mlir] factor memref-to-llvm lowering out of std-to-llvm
After the MemRef has been split out of the Standard dialect, the
conversion to the LLVM dialect remained as a huge monolithic pass.
This is undesirable for the same complexity management reasons as having
a huge Standard dialect itself, and is even more confusing given the
existence of a separate dialect. Extract the conversion of the MemRef
dialect operations to LLVM into a separate library and a separate
conversion pass.

Reviewed By: herhut, silvas

Differential Revision: https://reviews.llvm.org/D105625
2021-07-09 14:49:52 +02:00