Commit Graph

112 Commits

Jacques Pienaar b0921f68e1 [mlir] Add verify method to adaptor
This allows verifying op-independent attributes (e.g., attributes that do not require the op to have been created) before constructing an operation. These include checking whether required attributes are defined, as well as constraints on attributes (such as an I32 attribute). This is not perfect (e.g., given a disjunctive constraint where one part relies on the op and the other does not, this would not try to separate the op-independent part from the op-dependent one).

The next step is to move these out to a trait that can be verified earlier than in the generated method. The first use case is inferring the return type while constructing the op. At that point there is no Operation yet, so one ends up duplicating the same checks, e.g., verifying that attribute A is defined before querying A in the shape function. Instead, this allows invoking a single method to verify all such traits; if this is checked first during verification, then all other traits can use the attributes knowing they have been verified.

It is a little funny to have these on the adaptor, but I see the adaptor as a place to collect information about the op before it is constructed (e.g., avoiding stringly typed accessors, verifying what can be verified before construction) while remaining cheap to use even with a constructed op (a layer of indirection between the op being constructed and the constructed op). From that point of view it made sense to me.

Differential Revision: https://reviews.llvm.org/D80842
2020-06-05 09:47:37 -07:00
Alexander Belyaev c3098e4f40 [MLIR] Add TensorFromElementsOp to Standard ops.
Differential Revision: https://reviews.llvm.org/D80705
2020-05-28 15:48:10 +02:00
Nicolas Vasilache 63c0e72b2f [mlir] Revisit std.subview handling of static information.
The main objective of this revision is to change the way static information is represented, propagated and canonicalized in the SubViewOp.

The issue with the current implementation is that canonicalization may strictly lose information, because static offsets are combined in irrecoverable ways into the result type in order to fit the strided memref representation.

The core semantics of the op do not change but the parser and printer do: the op always requires `rank` offsets, sizes and strides. These quantities can now be either SSA values or static integer attributes.

The result type is automatically deduced from the static information, enabling more powerful canonicalizations (as powerful as the representation with sentinel `?` values allows). Previously, static information was inferred on a best-effort basis by looking at the source and destination types.

Relevant tests are rewritten to use the idiomatic `offset: x, strides: [...]` form. Bugs that were not trivially visible in the flattened strided memref form are corrected along the way.

Lowering to LLVM is updated, simplified and now supports all cases.
A mixed static-dynamic mode test that wouldn't previously lower is added.

It is an open question, and a longer discussion, whether a better result type representation would be a nicer alternative. For now, the subview op carries the required semantics.
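
As a hedged sketch of the mixed static/dynamic form this enables (names hypothetical; the exact printed assembly may differ), each of the `rank` offsets, sizes and strides may be an SSA value or a static integer:

```
%1 = subview %0[%i, 0] [4, %sz] [1, 1]
  : memref<?x?xf32> to memref<4x?xf32, offset: ?, strides: [?, 1]>
```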

Differential Revision: https://reviews.llvm.org/D79662
2020-05-12 20:04:44 -04:00
Sam McCall 691e826995 Revert "[mlir] Revisit std.subview handling of static information."
This reverts commit 80d133b24f.

Per Stephan Herhut: The canonicalizer pattern that was added creates
forms of the subview op that cannot be lowered.

This is shown by failing Tensorflow XLA tests such as:
  tensorflow/compiler/xla/service/mlir_gpu/tests:abs.hlo.test
Will provide more details offline; they rely on logs from private CI.
2020-05-12 15:18:50 +02:00
Nicolas Vasilache 80d133b24f [mlir] Revisit std.subview handling of static information.
Summary:
The main objective of this revision is to change the way static information is represented, propagated and canonicalized in the SubViewOp.

The issue with the current implementation is that canonicalization may strictly lose information, because static offsets are combined in irrecoverable ways into the result type, in order to fit the strided memref representation.

The core semantics of the op do not change but the parser and printer do: the op always requires `rank` offsets, sizes and strides. These quantities can now be either SSA values or static integer attributes.

The result type is automatically deduced from the static information, enabling more powerful canonicalizations (as powerful as the representation with sentinel `?` values allows). Previously, static information was inferred on a best-effort basis by looking at the source and destination types.

Relevant tests are rewritten to use the idiomatic `offset: x, strides: [...]` form. Bugs that were not trivially visible in the flattened strided memref form are corrected along the way.

It is an open question, and a longer discussion, whether a better result type representation would be a nicer alternative. For now, the subview op carries the required semantics.

Reviewers: ftynse, mravishankar, antiagainst, rriddle!, andydavis1, timshen, asaadaldien, stellaraccident

Reviewed By: mravishankar

Subscribers: aartbik, bondhugula, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, bader, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79662
2020-05-11 17:44:24 -04:00
Nicolas Vasilache 6ed61a26c2 [mlir] Simplify and better document std.view semantics
This [discussion](https://llvm.discourse.group/t/viewop-isnt-expressive-enough/991/2) raised some concerns with ViewOp.

In particular, the handling of offsets is incorrect and does not match the op description.
Note that with an elemental type change, offsets cannot be part of the type in general because sizeof(srcType) != sizeof(dstType).

However, offset is a poorly chosen term for this purpose, so it is renamed to byte_shift.

Additionally, for all practical purposes, supporting non-identity layouts for this op does not bring expressive power but rather increases code complexity.

This revision simplifies the existing semantics and implementation.
This simplification effort is deliberately restrictive and acts as a stepping stone towards supporting richer semantics: treat the non-common cases as YAGNI for now and reevaluate based on concrete use cases once a round of simplification has occurred.
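
A minimal sketch of the simplified form, assuming the operand was renamed as described (identity layout, a single byte_shift followed by dynamic sizes; names hypothetical):

```
%view = view %buf[%byte_shift][%size0, %size1]
  : memref<2048xi8> to memref<?x?xf32>
```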

Differential revision: https://reviews.llvm.org/D79541
2020-05-11 12:29:23 -04:00
Alex Zinenko 9d273c0ef0 [mlir] Harden verifiers for DMA ops
DMA operation classes in the Standard dialect (`DmaStartOp` and `DmaWaitOp`)
provide helper functions that make numerous assumptions about the number and
order of operands, and about their types. However, these assumptions were not
checked in the verifier, leading to assertion failures or crashes when helper
functions were used on ill-formed ops. Some of the assumptions were checked in
the custom parser (and thus could not catch violations in ops constructed
programmatically, e.g., during rewrites) and others were not
checked at all. Introduce the verifiers for all these assumptions and drop
unnecessary checks in the parser that are now covered by the verifier.
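
For reference, the operand structure these helpers and verifiers assume looks roughly like this (a sketch; shapes and memory spaces are illustrative):

```
dma_start %src[%i], %dst[%j], %num_elements, %tag[%k]
  : memref<256xf32>, memref<256xf32, 1>, memref<1xi32>
dma_wait %tag[%k], %num_elements : memref<1xi32>
```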

Addresses PR45560.

Differential Revision: https://reviews.llvm.org/D79408
2020-05-05 20:40:41 +02:00
Frederik Gossen 031265ad8a [MLIR] Add complex numbers to standard dialect
Add `CreateComplexOp`, `ReOp`, and `ImOp` to the standard dialect.
This is the first step to support complex numbers.
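
A hedged sketch of the intended usage, assuming the ops print with create_complex/re/im mnemonics:

```
%cplx = create_complex %re, %im : complex<f32>
%r = re %cplx : complex<f32>
%i = im %cplx : complex<f32>
```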

Differential Revision: https://reviews.llvm.org/D79159
2020-05-04 14:04:28 +00:00
River Riddle 7f85adb54d [mlir][Standard] Allow select to use an i1 for vector and tensor values
It currently requires that the condition match the shape of the selected value, but this is only really useful for things like masks. This revision allows the use of a plain i1 to mean that all of the vector/tensor is selected. This also matches the behavior of LLVM select. A benefit of this change is that transformations that want to generate selects, like those on the CFG, don't have to special-case vector/tensor. Previously the only way to generate a select from an i1 was to use a splat, but that doesn't support dynamically shaped/unranked tensors.
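
A sketch of the relaxed form (the exact type syntax may differ); here %cond is a plain i1 rather than a vector<4xi1> mask:

```
%r = select %cond, %a, %b : vector<4xf32>
```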

Differential Revision: https://reviews.llvm.org/D78690
2020-04-23 04:50:09 -07:00
River Riddle af331bc52d [mlir][Standard] Add a canonicalization to simplify cond_br when the successors are identical
This revision adds support for canonicalizing the following:

```
cond_br %cond, ^bb1(A, ..., N), ^bb1(A, ..., N)

br ^bb1(A, ..., N)
```

If the operands to the successor differ and the cond_br is the only predecessor, we emit selects for the branch operands:

```
cond_br %cond, ^bb1(A), ^bb1(B)

%select = select %cond, A, B
br ^bb1(%select)
```

Differential Revision: https://reviews.llvm.org/D78682
2020-04-23 04:42:02 -07:00
Alexander Belyaev 146d52e732 [MLIR] Verify there are no side-effecting ops in GenericAtomicRMWOp body.
Differential Revision: https://reviews.llvm.org/D78559
2020-04-22 09:02:58 +02:00
Alexander Belyaev 871beba234 [MLIR] Add AtomicRMWRegionOp.
https://llvm.discourse.group/t/rfc-add-std-atomic-rmw-op/489
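
A hedged sketch of the region form, written with the generic_atomic_rmw spelling the op later settled on (the mnemonic at the time of this commit may differ); the verifier added in the commit above rejects side-effecting ops in such bodies:

```
%x = generic_atomic_rmw %I[%i] : memref<10xf32> {
^bb0(%current : f32):
  %c1 = constant 1.0 : f32
  %new = addf %current, %c1 : f32
  atomic_yield %new : f32
}
```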

Differential Revision: https://reviews.llvm.org/D78352
2020-04-20 17:13:28 +02:00
Uday Bondhugula db054d7115 [MLIR] Introduce an op trait that defines a new scope for auto allocation
Introduce a new operation property / trait (AutomaticAllocationScope)
for operations with regions that define a new scope for automatic allocations;
such allocations (typically realized on the stack) are automatically freed when
control leaves such ops' regions. std.alloca allocations are freed at the
closest surrounding op that has this trait. All FunctionLike operations should
normally have this trait.
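
A minimal sketch of the intended behavior (function name hypothetical):

```
func @f() {
  %buf = alloca() : memref<8xf32>
  // ... use %buf ...
  return
  // %buf is freed automatically when control leaves the function,
  // the closest surrounding op with the AutomaticAllocationScope trait.
}
```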

Differential Revision: https://reviews.llvm.org/D77787
2020-04-10 12:05:52 +05:30
Mehdi Amini bab5bcf8fd Add a flag on the context to protect against creation of operations in unregistered dialects
Differential Revision: https://reviews.llvm.org/D76903
2020-03-30 19:37:31 +00:00
River Riddle 7c211cf3af [mlir][NFC] Move the definition of AffineApplyOp to ODS
This has been a long-standing cleanup TODO.

Differential Revision: https://reviews.llvm.org/D76019
2020-03-12 14:26:15 -07:00
Lei Zhang 5b2cc6c3d0 [mlir][ods] Improve integer signedness modelling
A previous commit added support for integer signedness in C++
IntegerType. This change introduces ODS definitions for
integer types and integer (element) attributes w.r.t. signedness.

This commit also updates various existing definitions' descriptions
to mention signless where suitable to make it more clear.

Positive and non-negative integer attributes are removed to avoid
an explosion of subclasses. Instead, one should use more atomic
constraints together with Confined to model them. For example,
`Confined<..., [IntPositive]>`.

Differential Revision: https://reviews.llvm.org/D75610
2020-03-04 15:05:42 -05:00
River Riddle c10896682d [mlir] Generate CmpFPredicate as an EnumAttr in tablegen
Summary: This allows for attaching the attribute to CmpF as a proper argument, and thus enables the removal of a bunch of c++ code.

Differential Revision: https://reviews.llvm.org/D75539
2020-03-03 13:19:25 -08:00
Tim Shen 67c1615440 [MLIR] Add vector support for fpext and fptrunc.
Differential Revision: https://reviews.llvm.org/D75150
2020-02-28 12:24:45 -08:00
Frank Laub fe210a1ff2 [MLIR] Add std.atomic_rmw op
Summary:
The RFC for this op is here: https://llvm.discourse.group/t/rfc-add-std-atomic-rmw-op/489

The std.atomic_rmw op provides a way to support read-modify-write
sequences with data race freedom. It is intended to be used in the lowering
of an upcoming affine.atomic_rmw op, which can be used for reductions.

A lowering to LLVM is provided with 2 paths:
- Simple patterns: llvm.atomicrmw
- Everything else: llvm.cmpxchg
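
A hedged sketch of the simple-pattern path (the exact kind spelling is an assumption):

```
%old = atomic_rmw "addf" %val, %mem[%i] : (f32, memref<10xf32>) -> f32
```

This form would lower to llvm.atomicrmw, while an update with no direct LLVM equivalent would take the llvm.cmpxchg loop path.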

Differential Revision: https://reviews.llvm.org/D74401
2020-02-24 16:54:21 -08:00
River Riddle 26222db01b [mlir][DeclarativeParser] Add support for the TypesMatchWith trait.
This allows for injecting type constraints that are not direct 1-1 mappings, for example when one type is equal to the element type of another. This allows for moving over several more parsers to the declarative form.

Differential Revision: https://reviews.llvm.org/D74648
2020-02-21 15:15:31 -08:00
River Riddle 6b6c96695c [mlir][ODS] Add a new trait `TypesMatchWith`
Summary:
This trait takes three arguments: lhs, rhs, transformer. It verifies that the type of 'rhs' matches the type of 'lhs' when the given 'transformer' is applied to 'lhs'. This allows for adding constraints like: "the type of 'a' must match the element type of 'b'". A followup revision will add support in the declarative parser for using these equality constraints to port more c++ parsers to the declarative form.

Differential Revision: https://reviews.llvm.org/D74647
2020-02-19 10:18:58 -08:00
Tim Shen f581e655ec [MLIR] Add std.assume_alignment op.
Reviewers: ftynse, nicolasvasilache, andydavis1

Subscribers: bixia, sanjoy.google, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, Joonsoo, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74378
2020-02-18 17:55:07 -08:00
River Riddle 4268e4f4b8 [mlir] Change the syntax of AffineMapAttr and IntegerSetAttr to avoid conflicts with function types.
Summary: The current syntax for AffineMapAttr and IntegerSetAttr conflict with function types, making it currently impossible to round-trip function types(and e.g. FuncOp) in the IR. This revision changes the syntax for the attributes by wrapping them in a keyword. AffineMapAttr is wrapped with `affine_map<>` and IntegerSetAttr is wrapped with `affine_set<>`.
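
For example, after this change the attributes are spelled:

```
#map = affine_map<(d0, d1) -> (d0 + d1)>
#set = affine_set<(d0) : (d0 - 1 >= 0)>
```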

Reviewed By: nicolasvasilache, ftynse

Differential Revision: https://reviews.llvm.org/D72429
2020-01-13 13:24:39 -08:00
Uday Bondhugula 47034c4bc5 Introduce prefetch op: affine -> std -> llvm intrinsic
Introduce affine.prefetch: an op to prefetch using a multi-dimensional
subscript on a memref; similar to affine.load, but it has no effect on
semantics, only on performance.

Provide lowering through std.prefetch and llvm.prefetch, mapping to LLVM's
prefetch intrinsic. All attributes are reflected through the lowering:
locality hint, rw, and instr/data cache.

  affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>
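
The std.prefetch form it lowers through looks roughly like this (a sketch of the corresponding read / locality 3 / data-cache variant):

```
prefetch %0[%i, %j], read, locality<3>, data : memref<400x400xi32>
```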

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#225

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
PiperOrigin-RevId: 286212997
2019-12-18 10:00:04 -08:00
nmostafa daff60cd68 Add UnrankedMemRef Type
Closes tensorflow/mlir#261

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/261 from nmostafa:nmostafa/unranked 96b6e918f6ed64496f7573b2db33c0b02658ca45
PiperOrigin-RevId: 284037040
2019-12-05 13:13:20 -08:00
Alex Zinenko 993e79e9bd Fix ViewOp to have at most one offset operand
As described in the documentation, ViewOp is expected to take an optional
dynamic offset followed by a list of dynamic sizes. However, the ViewOp parser
did not check that the offset is a single value and accepted a list of values
instead.

Furthermore, several tests had been exercising the wrong syntax of a ViewOp,
passing multiple values in the dynamic stride list, which was not caught by the
parser. The trailing values could have been erroneously interpreted as dynamic
sizes. This is likely due to the resyntaxing of the ViewOp, whose previous
syntax took the list of sizes before the offset. Update the tests to use the
syntax with the offset preceding the sizes.

Worse, the conversion of ViewOp to the LLVM dialect assumed the wrong order of
operands, with the offset in the trailing position, and erroneously relied on
the permissive parsing that interpreted trailing dynamic offset values as
leading dynamic sizes. Fix the lowering to use the correct order of operands.
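
A sketch of the correct operand order, with the single optional dynamic offset preceding the dynamic sizes (types illustrative):

```
%1 = view %0[%offset][%size0, %size1]
  : memref<2048xi8> to memref<?x?xf32>
```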

PiperOrigin-RevId: 283532506
2019-12-03 06:23:04 -08:00
Mahesh Ravishankar 1ea231bd39 Allow memref_cast from static strides to dynamic strides.
memref_cast supports casting from static-shape to dynamic-shape
memrefs. The same should be true for strides as well, i.e., a memref
with static strides can be cast to a memref with dynamic strides.
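
A sketch of the newly allowed cast, using the strided-form shorthand (shapes illustrative):

```
%1 = memref_cast %0 : memref<12x4xf32, offset: 0, strides: [4, 1]>
  to memref<12x4xf32, offset: ?, strides: [?, ?]>
```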

PiperOrigin-RevId: 282381862
2019-11-25 11:08:56 -08:00
Mahesh Ravishankar 1145cebdab Verify subview op result has dynamic shape, when sizes are specified.
If the sizes are specified as arguments to the subview op, then the
shape must be dynamic as well.

PiperOrigin-RevId: 281591608
2019-11-20 14:16:05 -08:00
Mahesh Ravishankar 19212105dd Changes to SubViewOp to make it more amenable to canonicalization.
The current SubViewOp specification allows for either all offsets,
shape and stride to be dynamic or all of them to be static. There are
opportunities for more fine-grained canonicalization based on which of
these are static. For example, if the sizes are static, the result
memref is of static shape. The specification of SubViewOp is modified
to allow one or more of the offsets, shapes and strides to be statically
specified. The verification is updated to ensure that the result type
of the subview op is consistent with which of these are static and
which are dynamic.

PiperOrigin-RevId: 281560457
2019-11-20 12:32:51 -08:00
Lei Zhang a0986bf43d NFC: Convert CmpIPredicate in StandardOps to use EnumAttr
This turns several hand-written functions to auto-generated ones.

PiperOrigin-RevId: 280684326
2019-11-15 10:17:31 -08:00
Nicolas Vasilache f2b6ae9991 Move VectorOps to Tablegen - (almost) NFC
This CL moves VectorOps to Tablegen and cleans up the implementation.

This is almost NFC but 2 changes occur:
  1. an interface change occurs in the padding value specification in vector_transfer_read:
     the value becomes non-optional. As a shortcut we currently use %f0 for all paddings.
     This should become an OpInterface for vectorization in the future.
  2. the return type of vector.type_cast is trivial and simplified to `memref<vector<...>>`
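
For change 2, a sketch of the simplified result type:

```
%vmem = vector.type_cast %buf : memref<8x8xf32> to memref<vector<8x8xf32>>
```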

Relevant roundtrip and invalid tests that used to sit in core are moved to the vector dialect.

The op documentation is moved to the .td file.

PiperOrigin-RevId: 280430869
2019-11-14 08:15:23 -08:00
Andy Davis 5cf6e0ce7f Adds std.subview operation which takes dynamic offsets, sizes and strides and returns a memref type which represents sub/reduced-size view of its memref argument.
This operation is a companion operation to the std.view operation added as proposed in "Updates to the MLIR MemRefType" RFC.

PiperOrigin-RevId: 279766410
2019-11-11 10:33:27 -08:00
Andy Davis 8f00b4494d Swap operand order in std.view operation so that offset appears before dynamic sizes in the operand list.
PiperOrigin-RevId: 279114236
2019-11-07 10:20:23 -08:00
Andy Davis b5654d1311 Add ViewOp verification for dynamic strides, and address some comments from previous change.
PiperOrigin-RevId: 278903187
2019-11-06 11:25:54 -08:00
Andy Davis c38dca7f4b Add ViewOp to the StandardOps dialect, which casts a 1D/i8 element type memref type to an N-D memref type.
Proposed in RFC: https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ

Supports creating the N-D memref type with dynamic sizes and at a dynamic offset within the 1D base memref.
This change contains op definition/parsing/printing and tests. Follow up changes will handle constant shape/layout map folding and llvm lowering.

PiperOrigin-RevId: 278869990
2019-11-06 08:54:12 -08:00
Smit Hinsu 85b46314c0 Allow dynamic but ranked types in ops with SameOperandsAndResultShape and SameOperandsAndResultType traits
Currently SameOperandsAndResultShape trait allows operands to have tensor<*xf32> and tensor<2xf32> but doesn't allow tensor<?xf32> and tensor<10xf32>.

Also, use the updated shape compatibility helper function in TensorCastOp::areCastCompatible method.

PiperOrigin-RevId: 273658336
2019-10-08 19:37:11 -07:00
MLIR Team 0dfa7fc908 Add fpext and fptrunc to the Standard dialect and includes conversion to LLVM
PiperOrigin-RevId: 272768027
2019-10-03 16:37:24 -07:00
Christian Sigg 496f4590a1 Generalize parse/printBinaryOp to parse/printOneResultOp.
PiperOrigin-RevId: 272722539
2019-10-03 13:00:12 -07:00
Uday Bondhugula 458ede8775 Introduce splat op + provide its LLVM lowering
- introduce splat op in standard dialect (currently for int/float/index input
  type, output type can be vector or statically shaped tensor)
- implement LLVM lowering (when result type is 1-d vector)
- add constant folding hook for it
- while on Ops.cpp, fix some stale names
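
A sketch of the op introduced above (shapes illustrative):

```
%v = splat %f : vector<8xf32>
%t = splat %f : tensor<4x8xf32>
```

Per the list above, only the 1-D vector case has an LLVM lowering at this point.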

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#141

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/141 from bondhugula:splat 48976a6aa0a75be6d91187db6418de989e03eb51
PiperOrigin-RevId: 270965304
2019-09-24 12:44:58 -07:00
Manuel Freiberger 2c11997d48 Add integer sign- and zero-extension and truncation to standard.
This adds sign- and zero-extension and truncation of integer types to the
standard dialect. This allows performing integer type conversions without
having to go through the LLVM dialect and introduce custom type casts (between
standard and LLVM integer types).
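
A sketch of the three ops (assuming the sexti/zexti/trunci mnemonics):

```
%1 = sexti %0 : i8 to i32
%2 = zexti %0 : i8 to i32
%3 = trunci %1 : i32 to i8
```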

Closes tensorflow/mlir#134

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/134 from ombre5733:sext-zext-trunc-in-std c7657bc84c0ca66b304e53ec03797e09152e4d31
PiperOrigin-RevId: 270479722
2019-09-21 16:14:56 -07:00
River Riddle 305516fcd3 Allow isolated regions to form isolated SSA name scopes in the printer.
This will allow for naming values the same as existing SSA values for regions attached to operations that are isolated from above. This fits in with how the system already allows separate name scopes for sibling regions. This name shadowing can be enabled in the custom parser of operations by setting the 'enableNameShadowing' flag to true when calling 'parseRegion'.

%arg = constant 10 : i32
foo.op {
  %arg = constant 10 : i32
}

PiperOrigin-RevId: 264255999
2019-08-19 15:27:10 -07:00
Mehdi Amini d5a02fcd96 Add a `HasParent` operation trait to enforce a specific parent on an operation (NFC)
PiperOrigin-RevId: 260532592
2019-07-30 06:17:11 -07:00
MLIR Team 8cb82c9478 Add sitofp to the standard dialect
Conversion from integers (window or input size, padding, etc.) to floating point is required to express many ML kernels, for example average pooling.
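
A sketch, e.g., converting a window size to floating point for an average-pooling divisor:

```
%size_f = sitofp %size : i32 to f32
```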

PiperOrigin-RevId: 259575284
2019-07-23 11:23:40 -07:00
River Riddle 42a767b23d Allow std.constant to hold a boolean value.
This was an oversight in the original implementation; std.constant already supports IntegerAttr, just not BoolAttr.

PiperOrigin-RevId: 259467710
2019-07-22 21:43:37 -07:00
Alex Zinenko 6fe99662aa Move loop dialect tests into separate files - NFC
This was overlooked when the loop operations were moved out of Standard into a
separate dialect.

PiperOrigin-RevId: 258970115
2019-07-19 11:41:12 -07:00
Alex Zinenko 287d111023 Generalize implicit terminator into an OpTrait
Several groups of operations in different dialects (e.g. AffineForOp,
AffineIfOp; loop::ForOp, loop::IfOp) share the requirement for their regions to
contain 0 or 1 block, and for blocks to always have a specific terminator type.
Furthermore, this terminator may be omitted from the custom syntax.  Generalize
this behavior into OpTrait::SingleBlockImplicitTerminator, parameterized by the
terminator operation type.  This trait provides the verifier that checks the
presence of the terminator, and utility functions adding the terminator in case
of absence.
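
For example, an affine.for body may omit its terminator in the custom syntax; the trait's verifier and helpers account for this:

```
affine.for %i = 0 to 10 {
  // the block's terminator is implicit and omitted here
}
```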

PiperOrigin-RevId: 258957180
2019-07-19 11:40:51 -07:00
River Riddle a4cbe4ebe1 Verify that ReturnOp only appears within the region of a FuncOp.
The invariants of ReturnOp are directly tied to FuncOp, making ReturnOp invalid in any other context.

PiperOrigin-RevId: 258421200
2019-07-16 13:45:54 -07:00
Nicolas Vasilache cca53e8527 Extract std.for std.if and std.terminator in their own dialect
These ops should not belong to the std dialect.
This CL extracts them in their own dialect and updates the corresponding conversions and tests.

PiperOrigin-RevId: 258123853
2019-07-16 13:43:18 -07:00
Nicolas Vasilache afadfebe9c Move StdForOp to ODS ForOp
PiperOrigin-RevId: 256657155
2019-07-05 05:05:19 -07:00
Nicolas Vasilache 991040478b Add a standard if op
This CL adds an "std.if" op to represent an if-then-else construct whose condition is an arbitrary value of type i1.
This is necessary to lower all the existing examples from affine and linalg to std.for + std.if.

This CL introduces the op and adds the relevant positive and negative unit test. Lowering will be done in a separate followup CL.
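
A hedged sketch of the custom form, assuming the op prints with a bare `if` mnemonic like other std ops (callee names hypothetical):

```
if %cond {
  call @then_work() : () -> ()
} else {
  call @else_work() : () -> ()
}
```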

PiperOrigin-RevId: 256649138
2019-07-05 03:35:18 -07:00