Commit Graph

42 Commits

Author SHA1 Message Date
Nicolas Vasilache ed229132f1 [mlir][Linalg] Uniformize linalg.generic with named ops.
This revision allows representing a reduction at the level of linalg on tensors for generic ops by uniformizing with the named ops approach.
2020-09-22 04:13:22 -04:00
Ahmed S. Taei 9b47525824 Reorder linalg.conv indexing_maps loop order
Change the indexing maps to iterate over (b, x0, x1, z0, z1, q, k) instead of (b, x0, x1, k, q, z0, z1) when evaluating the convolution expression:
Y[b, x0, x1, k] = sum(W[z0, z1, q, k] * X[b, x0 + z0, x1 + z1, q], z0, z1, q)
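
As a sketch, the indexing maps implied by this expression under the new iteration order would look as follows (the map names are illustrative, not the generated ones):

```
// Iteration space (b, x0, x1, z0, z1, q, k); reduction over (z0, z1, q).
#out_map    = affine_map<(b, x0, x1, z0, z1, q, k) -> (b, x0, x1, k)>
#in_map     = affine_map<(b, x0, x1, z0, z1, q, k) -> (b, x0 + z0, x1 + z1, q)>
#filter_map = affine_map<(b, x0, x1, z0, z1, q, k) -> (z0, z1, q, k)>
```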

This allows LLVM auto-vectorization to work and has better locality, resulting in significant performance improvements.

Differential Revision: https://reviews.llvm.org/D87781
2020-09-22 04:53:57 +00:00
Nicolas Vasilache 93fd30bac3 [mlir][Linalg] Evolve named ops to use assembly form and support linalg on tensors.
This revision allows representing a reduction at the level of linalg on tensors for named ops. When a structured op has a reduction and returns tensor(s), new conventions are added and documented.

As an illustration, the syntax for a `linalg.matmul` writing into a buffer is:

```
  linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
               outs(%c : memref<?x?xf32>)
```

, whereas the syntax for a `linalg.matmul` returning a new tensor is:

```
  %d = linalg.matmul ins(%a, %b : tensor<?x?xf32>, memref<?x?xf32>)
                    init(%c : tensor<?x?xf32>)
                      -> tensor<?x?xf32>
```

Other parts of linalg will be extended accordingly to allow mixed buffer/tensor semantics in the presence of reductions.
2020-09-18 06:14:30 -04:00
Jakub Lichman 53ffeea6d5 [mlir][Linalg] Reduction dimensions specified in TC definition of ConvOps.
This commit specifies reduction dimensions for ConvOps. This prevents the
reduction loops from being run in parallel and enables easier detection of
kernel dimensions, which we will need later on.

Differential Revision: https://reviews.llvm.org/D87288
2020-09-09 15:17:07 +00:00
Jakub Lichman eef1bfb2d2 [mlir][Linalg] Conv {1,2,3}D ops defined with TC syntax
Replaced the definitions of the named ND ConvOps with tensor comprehension
syntax, which reduces boilerplate code significantly. Furthermore, new ops
were added to support TF convolutions (without strides and dilations).

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84628
2020-07-31 13:20:17 +02:00
Jakub Lichman 1aaf8aa53d [mlir][Linalg] Conv1D, Conv2D and Conv3D added as named ops
This commit is part of a greater project which aims to add
full end-to-end support for convolutions inside mlir. The
reason behind having conv ops for each rank rather than
one generic ConvOp is to enable better optimizations for
each N-D case, which better reflects the memory layout of
input/kernel buffers and also simplifies the code. We expect
plain linalg.conv to be progressively retired.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D83879
2020-07-29 16:39:56 +02:00
lorenzo chelini 946be75b9e [MLIR][Linalg] Retire C++ DotOp in favor of a linalg-ods-gen'd op
- Replace DotOp, now that DRR rules have been dropped.

- Catch argument mismatches in the parser. The number of parsed arguments must
  equal the number of expected arguments.

Reviewed By: ftynse, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D82952
2020-07-28 12:34:19 +02:00
Jakub Lichman e4dd964df0 [mlir] Loop bounds inference in linalg.generic op improved to support bounds for convolution
Loop bound inference is currently very limited: it supports only permutation maps,
which makes it impossible to implement convolution with linalg.generic, since that
requires more advanced loop bound inference. This commit solves it for the
convolution case. For example, for an access X(x + z) with input size W and kernel
loop z of size KW, the inferred bounds are 0 <= x < W - KW + 1 and 0 <= z < KW.

Depends On D83158

Differential Revision: https://reviews.llvm.org/D83191
2020-07-23 11:01:54 +02:00
Jakub Lichman f9c8febc52 [mlir] Added support for symbols inside linalg.generic and map concatenation
This commit adds functionality needed for implementation of convolutions with
linalg.generic op. Since linalg.generic right now expects indexing maps to be
just permutations, offset indexing needed in convolutions is not possible.
Therefore in this commit we address the issue by adding support for symbols inside
indexing maps which enables more advanced indexing. The upcoming commit will
solve the problem of computing loop bounds from such maps.
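
As a hypothetical illustration of what symbols enable, consider a strided 1-D convolution whose stride is bound to the symbol s0 (the exact maps used by the patch may differ):

```
// Hypothetical indexing maps; s0 is assumed to carry the convolution stride.
#in_access     = affine_map<(x, z)[s0] -> (x * s0 + z)>  // offset indexing
#filter_access = affine_map<(x, z)[s0] -> (z)>
#out_access    = affine_map<(x, z)[s0] -> (x)>
```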

Differential Revision: https://reviews.llvm.org/D83158
2020-07-20 19:20:47 +02:00
lorenzo chelini e31e8f1ed5 [MLIR][Linalg] Retire C++ MatvecOp in favor of a linalg-ods-gen'd op
Replace C++ MatvecOp, now that DRR rules have been dropped.

Differential Revision: https://reviews.llvm.org/D82007
2020-06-18 11:36:49 +02:00
Nicolas Vasilache eae76faeea [mlir][Linalg] Retire C++ MatmulOp in favor of a linalg-ods-gen'd op.
Summary:
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).

During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.

Deciding on a type-polymorphic behavior, and implementing it, is left for future work.

Reviewers: aartbik

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D81935
2020-06-16 10:46:35 -04:00
Kirill Bobyrev 9b72b47ed6 Revert "[mlir][Linalg] Retire C++ MatmulOp in favor of a linalg-ods-gen'd op."
This reverts commit 8c6c49f293.

As discussed offline, this patch breaks internal builds and tests so I'm
reverting it for now.
2020-06-16 11:02:28 +02:00
Nicolas Vasilache 8c6c49f293 [mlir][Linalg] Retire C++ MatmulOp in favor of a linalg-ods-gen'd op.
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).

During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.

Deciding on a type-polymorphic behavior, and implementing it, is left for future work.

Differential Revision: https://reviews.llvm.org/D79762
2020-06-15 18:14:15 -04:00
Mehdi Amini 95371ce9c2 Enable FileCheck -enable-var-scope by default in MLIR test
This option prevents accidentally reusing a variable across a -LABEL match;
reuse can be explicitly opted into by prefixing the variable name with $.
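
A small sketch of the behavior in a FileCheck-based MLIR test (the function and variable names are illustrative):

```
// CHECK-LABEL: func @foo
//       CHECK:   %[[LOCAL:.*]] = constant 0
// With -enable-var-scope, [[LOCAL]] is invalidated at the next CHECK-LABEL;
// a $-prefixed variable such as %[[$GLOBAL:.*]] stays visible across labels.
// CHECK-LABEL: func @bar
```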

Differential Revision: https://reviews.llvm.org/D81531
2020-06-12 00:43:09 +00:00
Frederik Gossen 904f91db5f [MLIR][Standard] Make the `dim` operation index an operand.
Allow for dynamic indices in the `dim` operation.
Rather than an attribute, the index is now an operand of type `index`.
This makes it possible to apply the operation to dynamically ranked tensors.
The correct lowering of dynamic indices remains to be implemented.
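
A minimal sketch of the new form, in the std dialect syntax of the time:

```
func @dim_example(%t: tensor<?x?xf32>, %i: index) -> index {
  // The dimension index is now an SSA operand of type `index`, not an attribute.
  %d = dim %t, %i : tensor<?x?xf32>
  return %d : index
}
```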

Differential Revision: https://reviews.llvm.org/D81551
2020-06-10 13:54:47 +00:00
Alex Zinenko 60f443bb3b [mlir] Change dialect namespace loop->scf
All ops of the SCF dialect now use the `scf.` prefix instead of `loop.`. This
is a part of dialect renaming.
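
For example (a sketch of the renamed form; the loop body is elided):

```
func @rename_example(%lb: index, %ub: index, %step: index) {
  // Previously spelled `loop.for`; semantics are unchanged by the rename.
  scf.for %i = %lb to %ub step %step {
    // body elided; the terminator is implicit for loops without results
  }
  return
}
```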

Differential Revision: https://reviews.llvm.org/D79844
2020-05-13 19:20:21 +02:00
Nicolas Vasilache 6ed61a26c2 [mlir] Simplify and better document std.view semantics
This [discussion](https://llvm.discourse.group/t/viewop-isnt-expressive-enough/991/2) raised some concerns with ViewOp.

In particular, the handling of offsets is incorrect and does not match the op description.
Note that with an elemental type change, offsets cannot be part of the type in general because sizeof(srcType) != sizeof(dstType).

However, offset is a poorly chosen term for this purpose and is renamed to byte_shift.

Additionally, for all intents and purposes, trying to support non-identity layouts for this op does not add expressive power but rather increases code complexity.

This revision simplifies the existing semantics and implementation.
This simplification effort is voluntarily restrictive and acts as a stepping stone towards supporting richer semantics: treat the non-common cases as YAGNI for now and reevaluate based on concrete use cases once a round of simplification has occurred.
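
A hedged sketch of the simplified form (operand names assumed): a byte_shift operand into a flat i8 buffer, followed by dynamic sizes, producing an identity-layout memref:

```
// %shift is a byte offset into the flat buffer; %m and %n are dynamic sizes.
func @view_example(%buf: memref<2048xi8>, %shift: index, %m: index, %n: index) {
  %v = view %buf[%shift][%m, %n] : memref<2048xi8> to memref<?x?xf32>
  return
}
```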

Differential revision: https://reviews.llvm.org/D79541
2020-05-11 12:29:23 -04:00
Nicolas Vasilache 3bdd7fcc34 [mlir][Linalg] Add support to lower named ops to loops.
This revision adds support to allow named ops to lower to loops.
Linalg.batch_matmul successfully lowers to loops and to LLVM.

In the process, this also activates the lowering of linalg to affine loops.
However, padded convolutions do not lower to affine.load at the moment, so this revision overrides the type of the underlying load / store operation.

Differential Revision: https://reviews.llvm.org/D79135
2020-04-30 13:45:17 -04:00
Tres Popp 2d2d696137 [MLIR] Propagate input side effect information
Summary:
Previously operations like std.load created methods for obtaining their
effects but did not inherit from the SideEffect interfaces when their
parameters were decorated with the information. The resulting situation
was that passes had no information on the SideEffects of std.load/store
and had to treat them more cautiously. This adds the inheritance
information when creating the methods.

As a side effect, many tests are modified, as they were using std.load
for testing and this operation would be folded away as part of pattern
rewriting. Tests are modified to use a store or to return the result of the
std.load.

Reviewers: mravishankar, antiagainst, nicolasvasilache, herhut, aartbik, ftynse!

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, bader, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78802
2020-04-27 11:35:52 +02:00
Nicolas Vasilache f54312277c [mlir][Linalg] Drop function attribute from generic ops.
The function attribute in generic ops is not paying for itself.
A region is the more standardized way of specifying a custom computation.
If needed this region can call a function directly.
This is deemed more natural than managing a dedicated function attribute.
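
A hedged sketch contrasting the two styles, using the generic-op assembly of the era (the trait contents are illustrative, and %A, %B, %C are assumed memref<?xf32> values):

```
#trait = {
  args_in = 2,
  args_out = 1,
  indexing_maps = [
    affine_map<(i) -> (i)>,
    affine_map<(i) -> (i)>,
    affine_map<(i) -> (i)>
  ],
  iterator_types = ["parallel"]
}
// Before (hypothetical): linalg.generic {..., fun = @add} %A, %B, %C : ...
// After: the region expresses the computation directly, and could itself
// call a function if one is really needed.
linalg.generic #trait %A, %B, %C {
^bb0(%a: f32, %b: f32, %c: f32):
  %0 = addf %a, %b : f32
  linalg.yield %0 : f32
} : memref<?xf32>, memref<?xf32>, memref<?xf32>
```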

This also simplifies named ops generation by trimming unnecessary complexity.

Differential Revision: https://reviews.llvm.org/D78266
2020-04-16 09:47:08 -04:00
MaheshRavishankar 3b2f26ab05 [mlir][Linalg] NFC : Fix check for scalar case handling in LinalgToLoops
The invertPermutation method does not return a nullptr anymore, but
rather returns an empty map for the scalar case. Update the check in
LinalgToLoops to reflect this.
Also add test case for generating scalar code.
2020-04-13 13:23:01 -07:00
MaheshRavishankar 03391df90e [mlir][Linalg] Add loop.parallel lowering for all Linalg Ops.
The outer parallel loops of a linalg operation are lowered to
loop.parallel, with the remaining loops lowered to loop.for. This brings the
lowering to loop.parallel on par with the loop.for lowering. In the future
the reduction loop could also be lowered to loop.parallel.
Also add a utility function that returns the loops that are
created.
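
A sketch of the target form (this predates the loop-to-scf rename above, so the ops are still spelled loop.parallel / loop.for):

```
func @outer_parallel(%m: index, %n: index) {
  %c0 = constant 0 : index
  %c1 = constant 1 : index
  // The outer parallel dimensions collapse into a single loop.parallel;
  // reduction dimensions would remain as nested loop.for loops.
  loop.parallel (%i, %j) = (%c0, %c0) to (%m, %n) step (%c1, %c1) {
    // body elided
  }
  return
}
```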

Differential Revision: https://reviews.llvm.org/D77678
2020-04-13 13:19:12 -07:00
Hanhan Wang 69ddee1d2a [mlir][Linalg] Introduce linalg.pooling_min/max/sum op.
Summary:
Performs an N-D pooling operation similar to the description in the TF
documentation:
https://www.tensorflow.org/api_docs/python/tf/nn/pool

Unlike that description, this operation does not operate on the batch and
channel dimensions. It only takes tensors of rank `N`.

```
  output[x[0], ..., x[N-1]] =
    REDUCE_{z[0], ..., z[N-1]}
      input[
            x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0],
            ...
            x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1]
            ],
```

The optional attributes are:
  - strides: an i64 array specifying the stride (i.e. step) for window
    loops.
  - dilations: an i64 array specifying the filter upsampling/input
    downsampling rate
  - padding: an i64 array of pairs (low, high) specifying the number of
    elements to pad along a dimension.

If strides or dilations attributes are missing then the default value is
one for each of the input dimensions. Similarly, padding values are zero
for both low and high in each of the dimensions, if not specified.
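
A hedged usage sketch (the window-shape operand convention and the exact assembly are assumptions based on similar ops of the era):

```
// The shape of %win (2x2) defines the pooling window; its values are unused.
func @pool(%in: memref<?x?xf32>, %win: memref<2x2xi32>, %out: memref<?x?xf32>) {
  linalg.pooling_max(%in, %win, %out) { strides = [2, 2] }
    : memref<?x?xf32>, memref<2x2xi32>, memref<?x?xf32>
  return
}
```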

Differential Revision: https://reviews.llvm.org/D76414
2020-03-31 21:21:54 -07:00
Ahmed Taei 221fa96cd4 Fix linalg.generic access of hoisted constants
Summary: Otherwise the newly added @generic_const_int test will fail.

Reviewers: nicolasvasilache, rriddle, mravishankar

Subscribers: mehdi_amini, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77109
2020-03-30 21:15:41 -07:00
Hanhan Wang 92f7e8133a [mlir][Linalg] Implement padding for linalg.conv and lowering to loops.
Summary:
To enable this, two changes are needed:
1) Add an optional attribute `padding` to linalg.conv.
2) In the loops, compute whether the access indices are out of bounds. If so, use the
padding value `0`; otherwise, use the value produced by the load.

In the patch, the padding only works for lowering without other transformations,
e.g., tiling, fusion, etc.
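
A hedged sketch of the guard from change 2), written with std ops of the era (the real lowering performs this per indexed dimension):

```
func @padded_load(%buf: memref<?xf32>, %idx: index, %zero: f32) -> f32 {
  %c0 = constant 0 : index
  %size = dim %buf, 0 : memref<?xf32>
  %ge = cmpi "sge", %idx, %c0 : index
  %lt = cmpi "slt", %idx, %size : index
  %in = and %ge, %lt : i1
  // Clamp the index so the load itself stays in bounds, then select the
  // padding value when the original access was out of bounds.
  %safe = select %in, %idx, %c0 : index
  %v = load %buf[%safe] : memref<?xf32>
  %r = select %in, %v, %zero : f32
  return %r : f32
}
```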

Differential Revision: https://reviews.llvm.org/D75722
2020-03-13 14:35:58 -07:00
Nicolas Vasilache 47ec8702cb [mlir][Linalg] Revisit 0-D abstraction
This revision takes advantage of the empty AffineMap to specify the
0-D edge case. This allows removing a bunch of annoying corner cases
that ended up impacting users of Linalg.
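
Concretely, the 0-D case is now expressed with the empty map:

```
// A 0-D (scalar) access: zero dimensions, zero results.
#scalar = affine_map<() -> ()>
```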

Differential Revision: https://reviews.llvm.org/D75831
2020-03-10 15:14:09 -04:00
MaheshRavishankar 755c050200 [mlir][Linalg] Fix load/store operations generated while lowering to loops when
output has zero rank.

While lowering to loops, no indices should be used in the load/store
operation if the buffer is zero-rank.
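
A minimal sketch of the expected code for a zero-rank buffer:

```
func @scalar_copy(%in: memref<f32>, %out: memref<f32>) {
  // Zero-rank memrefs are accessed with an empty index list.
  %v = load %in[] : memref<f32>
  store %v, %out[] : memref<f32>
  return
}
```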

Differential Revision: https://reviews.llvm.org/D75391
2020-03-04 17:04:30 -08:00
Hanhan Wang 28e0449ec6 [mlir][Linalg] Allow specifying zero-rank shaped type operands to linalg.indexed_generic ops.
Patch D74638 allows linalg.generic ops to use zero-rank shaped type operands;
this can also be applied to linalg.indexed_generic ops.
2020-02-19 19:24:27 -05:00
MaheshRavishankar a8355b5c0f [mlir][Linalg] Allow specifying zero-rank shaped type operands to linalg.generic ops.
Fixing a bug where using a zero-rank shaped type operand to
linalg.generic ops hit an unrelated assert. This also meant that
lowering the operation to loops was not supported. Roundtrip tests and a
lowering-to-loops test for zero-rank shaped type operands are added, along
with fixes to make the tests pass.

Differential Revision: https://reviews.llvm.org/D74638
2020-02-18 13:23:28 -08:00
River Riddle 4268e4f4b8 [mlir] Change the syntax of AffineMapAttr and IntegerSetAttr to avoid conflicts with function types.
Summary: The current syntax for AffineMapAttr and IntegerSetAttr conflict with function types, making it currently impossible to round-trip function types(and e.g. FuncOp) in the IR. This revision changes the syntax for the attributes by wrapping them in a keyword. AffineMapAttr is wrapped with `affine_map<>` and IntegerSetAttr is wrapped with `affine_set<>`.
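
For illustration (the particular map and set are arbitrary):

```
// Before this change, `(d0, d1) -> (d1, d0)` was ambiguous with a function type.
#map = affine_map<(d0, d1) -> (d1, d0)>
#set = affine_set<(d0)[s0] : (d0 - s0 >= 0)>
```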

Reviewed By: nicolasvasilache, ftynse

Differential Revision: https://reviews.llvm.org/D72429
2020-01-13 13:24:39 -08:00
Nicolas Vasilache 508d4e672e Continue refactoring StructuredOps utilities
This CL adds more common information to StructuredOpsUtils.h.
The n_view attribute is retired in favor of args_in + args_out, but the CL is otherwise NFC.

PiperOrigin-RevId: 285000621
2019-12-11 09:27:34 -08:00
Uday Bondhugula 3ade6a7d15 DimOp folding for alloc/view dynamic dimensions
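The title says it all; a hedged sketch of the fold (dim still takes its index as an attribute at this point):

```
func @fold_dim(%n: index) -> index {
  %m = alloc(%n) : memref<?xf32>
  // dim of a dynamic dimension of an alloc folds to the corresponding
  // size operand, i.e. %d becomes %n.
  %d = dim %m, 0 : memref<?xf32>
  return %d : index
}
```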
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#253

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/253 from bondhugula:dimop a4b464f24ae63fd259114558d87e11b8ee4dae86
PiperOrigin-RevId: 284169689
2019-12-06 06:00:54 -08:00
Alex Zinenko 993e79e9bd Fix ViewOp to have at most one offset operand
As described in the documentation, ViewOp is expected to take an optional
dynamic offset followed by a list of dynamic sizes. However, the ViewOp parser
did not include a check for the offset being a single value and accepted a
list of values instead.

Furthermore, several tests have been exercising the wrong syntax of a ViewOp,
passing multiple values to the dynamic stride list, which was not caught by the
parser. The trailing values could have been erroneously interpreted as dynamic
sizes. This is likely due to an earlier change in ViewOp syntax, with the
previous syntax taking the list of sizes before the offset. Update the tests to
use the syntax with the offset preceding the sizes.

Worse, the conversion of ViewOp to the LLVM dialect assumed the wrong order of
operands with offset in the trailing position, and erroneously relied on the
permissive parsing that interpreted trailing dynamic offset values as leading
dynamic sizes. Fix the lowering to use the correct order of operands.
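
A hedged sketch of the corrected form, with the single dynamic offset preceding the size list (map written in the pre-affine_map-keyword syntax of the time):

```
#strided2D = (d0, d1)[s0, s1] -> (d0 * s1 + d1 + s0)
func @view_fixed(%buf: memref<2048xi8>, %off: index, %m: index, %n: index) {
  // One offset operand, then the list of dynamic sizes.
  %v = view %buf[%off][%m, %n]
    : memref<2048xi8> to memref<?x?xf32, #strided2D>
  return
}
```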

PiperOrigin-RevId: 283532506
2019-12-03 06:23:04 -08:00
Jose Ignacio Gomez 0494ef60f7 [Linalg] Change attribute n_loop_types to iterator
This addresses issue tensorflow/mlir#270. Linalg is updated to take the same form
of iterator_types as vector contraction.
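
For example, instead of loop-kind counts, the op attribute now lists one iterator kind per loop, matching vector contraction (attribute fragment; the surrounding trait is elided):

```
// One entry per loop of the iteration space.
iterator_types = ["parallel", "parallel", "reduction"]
```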

Closes tensorflow/mlir#280

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/280 from tetuante:PRissue270 d26d88d090d3765d3b9884bfabdd023143f27287
PiperOrigin-RevId: 282905396
2019-11-28 01:59:55 -08:00
Nicolas Vasilache 1fa8c8070b Implement Linalg to loops lowering as a pattern
This CL rewrites the linalg ops to loops transformations as patterns that can be targeted directly from Tablegen. Reliance on OpFolder is removed; to cope with it, we introduce local folding patterns that are applied greedily.

PiperOrigin-RevId: 282765550
2019-11-27 07:32:13 -08:00
Alexander Belyaev 8c6a5233d5 Lower linalg.indexed_generic to loops.
PiperOrigin-RevId: 281169885
2019-11-18 16:55:15 -08:00
Nicolas Vasilache 72040bf7c8 Update Linalg to use std.view
Now that a view op has graduated to the std dialect, we can update Linalg to use it and remove ops that have become obsolete. As a byproduct, the linalg buffer and associated ops can also disappear.

PiperOrigin-RevId: 279073591
2019-11-07 06:33:10 -08:00
Nicolas Vasilache 9e7e297da3 Lower vector transfer ops to loop.for operations.
This allows mixing linalg operations with vector transfer operations (with additional modifications to affine ops) and is a step towards solving tensorflow/mlir#189.

PiperOrigin-RevId: 275543361
2019-10-18 14:10:10 -07:00
Nicolas Vasilache 516f6a3477 Add missing Linalg lowerings to allow roundtrip.mlir to lower to LLVM
Certain lowering patterns were reported as [missing](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/dkdmHa77sSQ).

This CL adds them and allows Linalg/roundtrip.mlir and Linalg/loops.mlir to lower to LLVM directly. Those 2 tests are updated to additionally check that the direct lowering to LLVM does not crash.

The following points, left as TODOs still need to be addressed for correct end-to-end execution:
1. the lowering for ConvOp needs to pass attributes such as strides and dilations; the external library call needs to support it.
2. the lowering for GenericOp needs to support lowering to loops as a DialectConversion pattern. This is blocked on the DialectConversion infrastructure accepting an OperationFolder.

PiperOrigin-RevId: 272878131
2019-10-04 08:07:54 -07:00
Nicolas Vasilache 218f0e611a Add syntactic sugar for strided memref parsing.
This CL implements the last remaining bit of the [strided memref proposal](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).

The syntax is a bit more explicit than what was originally proposed and resembles:
  `memref<?x?xf32, offset: 0, strides: [?, 1]>`

Nonnegative strides and offsets are currently supported. Future extensions will include negative strides.

This also gives a concrete example of syntactic sugar for the [[RFC] Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/-wKHANzDNTg).

The underlying implementation still uses AffineMap layout.
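
A hedged illustration of the equivalence, in the map syntax of the time:

```
// The sugared strided form...
func @sugared(%m: memref<?x?xf32, offset: 0, strides: [?, 1]>) {
  return
}
// ...is equivalent to an affine layout map, with s0 carrying the dynamic stride:
func @desugared(%m: memref<?x?xf32, (d0, d1)[s0] -> (d0 * s0 + d1)>) {
  return
}
```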

PiperOrigin-RevId: 272717437
2019-10-03 12:34:36 -07:00
Nicolas Vasilache e36337a998 Unify Linalg types by using strided memrefs
This CL finishes the implementation of the Linalg + Affine type unification of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
As a consequence, the !linalg.view type, linalg::DimOp, linalg::LoadOp and linalg::StoreOp can now disappear and Linalg can use standard types everywhere.

PiperOrigin-RevId: 272187165
2019-10-01 05:23:21 -07:00
Alex Zinenko 6395229509 Move Linalg dialect tests to test/Dialect/Linalg
This was missing from the commit that moved the Linalg dialect to lib/Dialect.

PiperOrigin-RevId: 267141176
2019-09-04 06:33:23 -07:00