Commit Graph

46 Commits

Author SHA1 Message Date
not-jenni 08574ce4d6 [mlir][tosa] Add clamp + clamp as single clamp canonicalization
When two clamp ops appear in a row, they can be canonicalized into a single clamp
that uses the most constrained range (see the sketch below).

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D117934
2022-01-21 16:24:43 -08:00
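A minimal IR sketch of this fold in generic op form; the shapes, attribute values, and exact tosa.clamp attribute names (min_int/max_int/min_fp/max_fp) are illustrative and may differ between TOSA revisions:

```
// Before: two clamps in a row (ranges [-5, 5] and [0, 10]); shapes illustrative.
%0 = "tosa.clamp"(%arg0) {min_int = -5 : i64, max_int = 5 : i64, min_fp = -5.0 : f32, max_fp = 5.0 : f32} : (tensor<4xf32>) -> tensor<4xf32>
%1 = "tosa.clamp"(%0) {min_int = 0 : i64, max_int = 10 : i64, min_fp = 0.0 : f32, max_fp = 10.0 : f32} : (tensor<4xf32>) -> tensor<4xf32>

// After: a single clamp over the intersection of the two ranges, [0, 5].
%1 = "tosa.clamp"(%arg0) {min_int = 0 : i64, max_int = 5 : i64, min_fp = 0.0 : f32, max_fp = 5.0 : f32} : (tensor<4xf32>) -> tensor<4xf32>
```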
River Riddle d75c3e8396 [mlir] Don't print `// no predecessors` on entry blocks
Entry blocks can never have predecessors, so this is unnecessary.

Fixes #53287

Differential Revision: https://reviews.llvm.org/D117713
2022-01-19 15:57:58 -08:00
not-jenni 41d05e29c0 [mlir][tosa] Add tosa.clamp as no-op canonicalization
When the min/max are the total range of the value, it is a no-op as the values
are already restricted to that range.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D117625
2022-01-18 23:15:40 -08:00
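A minimal sketch of the no-op case, assuming i8 data whose integer bounds already span the full [-128, 127] range; attribute names and shapes are illustrative:

```
// Before: the clamp bounds cover the whole i8 range, so the op changes nothing.
%0 = "tosa.clamp"(%arg0) {min_int = -128 : i64, max_int = 127 : i64, min_fp = 0.0 : f32, max_fp = 0.0 : f32} : (tensor<4xi8>) -> tensor<4xi8>

// After canonicalization: the clamp is removed and uses of %0 become %arg0.
```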
Mogball 5c36ee8d57 [mlir] Drop the leading space when printing regions
The leading space that is always printed at the beginning of regions is not consistent with other parts of the printing API. Moreover, this leading space can lead to undesirable assembly formats:

```
attr-dict-with-keyword $region
```

Prints as:

```
// Two spaces between `}` and `{`
attributes {foo}  { ... }
```

The leading space also results in the odd generic op format:

```
"test.op"() ( {...}) : () -> ()
```

Reviewed By: rriddle, mehdi_amini

Differential Revision: https://reviews.llvm.org/D117411
2022-01-18 16:52:34 +00:00
Aaron DeBattista dfd070820c [mlir][tosa] Allow optional TOSA decompositions to be populated separately
Moved all TOSA decomposition patterns so that they can be optionally populated
and used by external rewrites. This avoids decomposing TOSA operations when
backends may benefit from the non-decomposed version.

Reviewed By: rsuderman, mehdi_amini

Differential Revision: https://reviews.llvm.org/D116526
2022-01-11 10:26:30 -08:00
Aaron DeBattista 64f694acaf [mlir][tosa] Move tosa canonicalizers to optional optimization pass
TOSA's canonicalizers that change dense operations should be moved to a
separate optimization pass to avoid canonicalizing to operations not supported
by the relevant backends.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D115890
2021-12-16 23:33:54 -08:00
not-jenni f9cefc7b90 [mlir][tosa] Add tosa.max_pool2d as no-op canonicalization
When the input and output of a pool2d op are both 1x1, it can be canonicalized to a no-op (see the sketch below).

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D115908
2021-12-16 15:27:26 -08:00
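A minimal sketch of the case described above, assuming a 1x1 spatial input and a 1x1 kernel; shapes and attribute values are illustrative:

```
// Before: a 1x1 max_pool2d over a 1x1 spatial input returns the input unchanged.
%0 = "tosa.max_pool2d"(%arg0) {kernel = [1, 1], stride = [1, 1], pad = [0, 0, 0, 0]} : (tensor<2x1x1x8xf32>) -> tensor<2x1x1x8xf32>

// After canonicalization: the pool is removed and uses of %0 become %arg0.
```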
Rob Suderman 9a2308e170 [mlir][tosa] Minor cleanup of tosa.conv2d canonicalizer
Slight rename and better variable type usage in tosa.conv2d to
tosa.fully_connected lowering. Included disabling pass for padded
convolutions.

Reviewed By: not-jenni

Differential Revision: https://reviews.llvm.org/D115776
2021-12-16 15:13:01 -08:00
Rob Suderman 46c96fca0e [mlir][tosa] Fix quantized type for tosa.conv2d canonicalization
The wrong type was used for the result in the tosa.conv2d canonicalization.
The element type should be taken from the result type, not from the input
element type.

Differential Revision: https://reviews.llvm.org/D115463
2021-12-09 12:39:23 -08:00
Rob Suderman e9fae0f19e [mlir][tosa] Disable tosa.depthwise_conv2d canonicalizer for quantized case
The quantized case needs to include zero-point corrections before the tosa.mul,
so the canonicalization is disabled for the quantized use case.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D115264
2021-12-07 10:16:12 -08:00
not-jenni 5911a29aa9 [mlir][tosa] Add tosa.depthwise_conv2d as tosa.mul canonicalization
For a 1x1 weight and a stride of 1, the input and weight can be reshaped,
multiplied elementwise, and then reshaped back (see the sketch below).

Reviewed By: rsuderman, KoolJBlack

Differential Revision: https://reviews.llvm.org/D115207
2021-12-06 17:28:52 -08:00
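A rough sketch of the rewrite, assuming an NHWC input of shape 1x4x4x2, a 1x1 depthwise weight of shape 1x1x2x3 (channel multiplier 3), and unit stride/dilation; the shapes, intermediate ranks, and bias handling are illustrative rather than the exact pattern output:

```
// Before: 1x1 depthwise convolution (bias handling omitted from the sketch below).
%0 = "tosa.depthwise_conv2d"(%input, %weight, %bias) {pad = [0, 0, 0, 0], stride = [1, 1], dilation = [1, 1]} : (tensor<1x4x4x2xf32>, tensor<1x1x2x3xf32>, tensor<6xf32>) -> tensor<1x4x4x6xf32>

// After: reshape to expose the channel multiplier, broadcast-multiply, reshape back.
%in = "tosa.reshape"(%input) {new_shape = [1, 4, 4, 2, 1]} : (tensor<1x4x4x2xf32>) -> tensor<1x4x4x2x1xf32>
%wt = "tosa.reshape"(%weight) {new_shape = [1, 1, 1, 2, 3]} : (tensor<1x1x2x3xf32>) -> tensor<1x1x1x2x3xf32>
%mul = "tosa.mul"(%in, %wt) {shift = 0 : i32} : (tensor<1x4x4x2x1xf32>, tensor<1x1x1x2x3xf32>) -> tensor<1x4x4x2x3xf32>
%out = "tosa.reshape"(%mul) {new_shape = [1, 4, 4, 6]} : (tensor<1x4x4x2x3xf32>) -> tensor<1x4x4x6xf32>
// ...followed by adding %bias to %out to produce the final result.
```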
Rob Suderman 05e33d846f [mlir][tosa] Resubmit add tosa.conv2d as tosa.fully_connected canonicalization
Fixed the tosa.conv2d to tosa.fully_connected canonicalization for incorrect
output channels. Included updates to the tests to check the result shapes
during canonicalization.

This allows conv2d to transform to the simpler fully_connected operation.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D115170
2021-12-06 15:33:07 -08:00
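A sketch of the shape bookkeeping, assuming a 1x1 kernel, unit stride/dilation, and no padding; all types are illustrative:

```
// Before: 1x1 convolution over a 1x8x8x4 input with 16 output channels.
%0 = "tosa.conv2d"(%input, %weight, %bias) {pad = [0, 0, 0, 0], stride = [1, 1], dilation = [1, 1]} : (tensor<1x8x8x4xf32>, tensor<16x1x1x4xf32>, tensor<16xf32>) -> tensor<1x8x8x16xf32>

// After: collapse N*H*W, run a fully_connected, and expand the result back.
%in = "tosa.reshape"(%input) {new_shape = [64, 4]} : (tensor<1x8x8x4xf32>) -> tensor<64x4xf32>
%wt = "tosa.reshape"(%weight) {new_shape = [16, 4]} : (tensor<16x1x1x4xf32>) -> tensor<16x4xf32>
%fc = "tosa.fully_connected"(%in, %wt, %bias) : (tensor<64x4xf32>, tensor<16x4xf32>, tensor<16xf32>) -> tensor<64x16xf32>
%res = "tosa.reshape"(%fc) {new_shape = [1, 8, 8, 16]} : (tensor<64x16xf32>) -> tensor<1x8x8x16xf32>
```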
Mehdi Amini afb0582325 Fix TOSA verifier to emit verbose errors
Also adds a test for invalid ops, which was missing.
2021-12-05 19:16:54 +00:00
natashaknk e2d8b60742 Revert "[mlir][tosa] Add tosa.conv2d as fully_connected canonicalization"
This reverts commit 13bdb7ab4a. The commit introduced/uncovered an unintended bug in models containing Conv2D.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D115079
2021-12-03 14:35:48 -08:00
not-jenni 13bdb7ab4a [mlir][tosa] Add tosa.conv2d as fully_connected canonicalization
For a 1x1 weight and a stride of 1, the input/weight can be reshaped and passed into a fully_connected op, then reshaped back.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D114757
2021-11-30 12:01:14 -08:00
Rob Suderman 0f1e52afa9 [mlir][tosa] Materialize tosa.pad value and fold noop pads
Padding can now explicitly specify the pad value when a non-zero value is wanted.
This also includes bypassing pads that do nothing.

Differential Revision: https://reviews.llvm.org/D113611
2021-11-23 12:23:42 -08:00
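A minimal sketch of the no-op fold, assuming a pad whose padding amounts are all zero; types are illustrative:

```
// Before: the padding tensor is all zeros, so the pad changes nothing.
%padding = "tosa.const"() {value = dense<0> : tensor<2x2xi32>} : () -> tensor<2x2xi32>
%0 = "tosa.pad"(%arg0, %padding) : (tensor<4x4xf32>, tensor<2x2xi32>) -> tensor<4x4xf32>

// After folding: the pad is removed and uses of %0 become %arg0.
```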
Rob Suderman 54eec7cafc [mlir][tosa] Separate tosa.transpose_conv decomposition and added stride support
Transpose convolution decomposition is now performed in a separate pass. This
allows padding / constant propagation to be performed at the TOSA level. It
also adds support for striding when there is no dilation.

Differential Revision: https://reviews.llvm.org/D114409
2021-11-23 12:16:44 -08:00
Robert Suderman 6e41a06911 [mlir][tosa] Revert add-0 canonicalization for floating-point
The floating-point add-zero optimization can produce incorrect numerical results:
folding `-0.0 + 0.0` away yields `-0.0`, but IEEE 754 requires the result to be `+0.0`.

Reviewed By: eric-k256

Differential Revision: https://reviews.llvm.org/D114127
2021-11-17 17:29:57 -08:00
Rob Suderman 044e7e013e [mlir][tosa] Fixed shape inference for tosa.transpose_conv2d
Transpose conv2d shape inference was incorrect, and the tests did not properly
validate that shape inference was executing. Corrected the shape inference and
extended the tests to actually execute it.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D114026
2021-11-17 14:59:52 -08:00
not-jenni cdb0623ad8 [mlir][tosa] Add tosa.mul by one canonicalization
Multiply by one can be removed during canonicalization. This optimizes away unneeded operations.

Differential Revision: https://reviews.llvm.org/D113807
2021-11-15 14:52:16 -08:00
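A minimal sketch of the fold, assuming a splat constant of 1.0; shapes are illustrative:

```
// Before: multiplying by a splat constant of 1.0 is a no-op.
%one = "tosa.const"() {value = dense<1.0> : tensor<4xf32>} : () -> tensor<4xf32>
%0 = "tosa.mul"(%arg0, %one) {shift = 0 : i32} : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>

// After canonicalization: the mul is removed and uses of %0 become %arg0.
```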
Kevin Cheng bef966eb37 tosa-make-broadcastable pass now supports numpy-style broadcasting only.
- fix a bug in the [c,1] + [a, b, c, d] broadcast
- add a test for [3,3,4,1] + [4,5]

Signed-off-by: Kevin Cheng <kevin.cheng@arm.com>
Change-Id: Iaed2f04df8775f655c82c740271395274163d147

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D113596
2021-11-10 11:48:35 -08:00
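A sketch of the numpy-style rank alignment for the added [3,3,4,1] + [4,5] case: the lower-rank operand is reshaped with leading unit dimensions so both inputs have the same rank. Shapes are illustrative:

```
// Before: operands of different rank feeding an elementwise add.
%0 = "tosa.add"(%lhs, %rhs) : (tensor<3x3x4x1xf32>, tensor<4x5xf32>) -> tensor<3x3x4x5xf32>

// After tosa-make-broadcastable: the rank-2 operand is reshaped to rank 4.
%r = "tosa.reshape"(%rhs) {new_shape = [1, 1, 4, 5]} : (tensor<4x5xf32>) -> tensor<1x1x4x5xf32>
%1 = "tosa.add"(%lhs, %r) : (tensor<3x3x4x1xf32>, tensor<1x1x4x5xf32>) -> tensor<3x3x4x5xf32>
```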
Suraj Sudhir 82568021dd [mlir][tosa] Spec v0.23 updates
Add pad_const field to tosa.pad.
Add builders to enable optional construction of pad_const in pad op.
Update documentation of tosa.clamp to match spec wording.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D113322
2021-11-08 10:13:54 -08:00
not-jenni 07a029c057 Canonicalization for add to no-op if one of the inputs is zero
Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D113207
2021-11-04 16:52:47 -07:00
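A minimal sketch of the integer case (the floating-point variant of this fold is reverted by a later commit above because of -0.0 semantics); shapes are illustrative:

```
// Before: adding a splat zero constant is a no-op for integers.
%zero = "tosa.const"() {value = dense<0> : tensor<4xi32>} : () -> tensor<4xi32>
%0 = "tosa.add"(%arg0, %zero) : (tensor<4xi32>, tensor<4xi32>) -> tensor<4xi32>

// After canonicalization: the add is removed and uses of %0 become %arg0.
```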
Robert Suderman 58901a5a29 [mlir][tosa] Correct tosa.avg_pool2d for specification error
The specification indicated the output type for quantized average pool should be
an i32. However, only the accumulator should be an i32; the result type should
match the input type.

Caused by https://reviews.llvm.org/D111590

Reviewed By: sjarus, GMNGeoffrey

Differential Revision: https://reviews.llvm.org/D112484
2021-10-25 14:41:16 -07:00
Kojo Acquah 9c62bb55f4 Implementation of `ReshapeNoopOptimization` canonicalizer.
This canonicalizer replaces a reshape of a constant tensor with a new constant that already carries the updated shape, skipping the reshape operation.

Differential Revision: https://reviews.llvm.org/D112038
2021-10-19 16:07:34 -07:00
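A sketch of the replacement as described, assuming a dense constant feeding the reshape; values and shapes are illustrative:

```
// Before: reshape of a constant.
%c = "tosa.const"() {value = dense<[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]> : tensor<6xf32>} : () -> tensor<6xf32>
%0 = "tosa.reshape"(%c) {new_shape = [2, 3]} : (tensor<6xf32>) -> tensor<2x3xf32>

// After: a single constant that already carries the reshaped type.
%0 = "tosa.const"() {value = dense<[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]> : tensor<2x3xf32>} : () -> tensor<2x3xf32>
```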
not-jenni 4ada6c2aaf [mlir][tosa] Adds a canonicalization to the transpose op if the perms are a no-op
Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D112037
2021-10-18 16:30:53 -07:00
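A minimal sketch, assuming an identity permutation; shapes are illustrative:

```
// Before: the permutation [0, 1, 2] is the identity, so the transpose is a no-op.
%perms = "tosa.const"() {value = dense<[0, 1, 2]> : tensor<3xi32>} : () -> tensor<3xi32>
%0 = "tosa.transpose"(%arg0, %perms) : (tensor<2x3x4xf32>, tensor<3xi32>) -> tensor<2x3x4xf32>

// After canonicalization: the transpose is removed and uses of %0 become %arg0.
```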
Mogball a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
Rob Suderman 95e4b71519 [mlir][tosa] Fix tosa average_pool2d to linalg type issue
Average pool assumed the same input/output type. The result type for integers
is always an i32, so it is updated appropriately.

Reviewed By: GMNGeoffrey

Differential Revision: https://reviews.llvm.org/D111590
2021-10-12 13:09:21 -07:00
Lei Zhang b45476c94c [mlir][tosa] Do not fold transpose with quantized types
For such cases, the type of the constant DenseElementsAttr is
different from the transpose op return type.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D110446
2021-09-24 16:57:55 -04:00
Lei Zhang e325ebb9c7 [mlir][tosa] Add some transpose folders
* If the input is a constant splat value, we just
  need to reshape it.
* If the input is a general constant with one user,
  we can also constant fold it, without bloating
  the IR.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D110439
2021-09-24 15:25:14 -04:00
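A sketch of the splat case: the constant can simply be rebuilt with the transposed shape. Values and shapes are illustrative:

```
// Before: transpose of a splat constant.
%c = "tosa.const"() {value = dense<1.5> : tensor<2x3xf32>} : () -> tensor<2x3xf32>
%perms = "tosa.const"() {value = dense<[1, 0]> : tensor<2xi32>} : () -> tensor<2xi32>
%0 = "tosa.transpose"(%c, %perms) : (tensor<2x3xf32>, tensor<2xi32>) -> tensor<3x2xf32>

// After: the splat is materialized directly with the transposed shape.
%0 = "tosa.const"() {value = dense<1.5> : tensor<3x2xf32>} : () -> tensor<3x2xf32>
```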
Rob Suderman b0532286fe [mlir][tosa] Add shape inference for tosa.while
Tosa.while shape inference requires repeatedly running shape inference across
the body of the loop until the types become static, as we do not know the number
of iterations required by the loop body. Once the least specific arguments are
known, they are propagated to both regions.

To determine the final end type, the least restrictive types are determined
from all yields.

Differential Revision: https://reviews.llvm.org/D108801
2021-09-10 13:11:53 -07:00
Rob Suderman 2b2ebb6f98 [mlir][tosa] Add folders for trivial tosa operation cases
Some cases are trivial to fold away, specifically no-op cases where an
operation's input and output are the same. Canonicalizing these away
removes unneeded operations.

The current version includes tensor cast operations to resolve shape
discrepancies that occur when an operation's result type differs from the
input type. These are resolved during a tosa shape propagation pass.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D107321
2021-08-10 14:43:00 -07:00
Rob Suderman 1b00b94ffc [mlir][tosa] Tosa shape propagation for tosa.cond_if
We can propagate the shape from tosa.cond_if operands into the true/false
regions, and then through the connected blocks. Then, using the tosa.yield ops,
we can determine all possible return types.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D105940
2021-08-03 17:54:54 -07:00
Rob Suderman 143edeca6d [mlir][tosa] Shape inference for a few remaining easy cases:
Handles shape inference for identity, cast, and rescale. These were missed
during the initial elementwise work. This also includes resize shape propagation,
which covers both attribute-based and input-type-based propagation.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D105845
2021-08-03 17:20:32 -07:00
Rob Suderman 2d0ba5e144 [mlir][tosa] Fix tosa.reshape failures due to implicit broadcasting
Make-broadcastable needs the output shape to determine whether the operation
includes additional broadcasting. Includes some canonicalizations for TOSA
to remove unneeded reshapes.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D106846
2021-07-29 15:21:57 -07:00
Rob Suderman 11dda1a234 [mlir][tosa] Added shape inference for tosa convolution operations
Added shape inference handling for convolution operations. This includes
conv2d, conv3d, depthwise_conv2d, and transpose_conv2d. For transpose conv,
we use the specified output shape when possible; however, we fall back to
shape propagation if the output shape attribute has dynamic values.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D105645
2021-07-19 10:41:56 -07:00
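A worked sketch for a plain conv2d, assuming pad 0, unit stride, and unit dilation; the types are illustrative. A 3x3 kernel over an 8x8 input gives (8 - 3) / 1 + 1 = 6 per spatial dimension:

```
// Before inference: the result shape is unknown.
%0 = "tosa.conv2d"(%input, %weight, %bias) {pad = [0, 0, 0, 0], stride = [1, 1], dilation = [1, 1]} : (tensor<1x8x8x4xf32>, tensor<16x3x3x4xf32>, tensor<16xf32>) -> tensor<?x?x?x?xf32>

// After inference: spatial dims become 6x6 and the output channels come from the weight.
%0 = "tosa.conv2d"(%input, %weight, %bias) {pad = [0, 0, 0, 0], stride = [1, 1], dilation = [1, 1]} : (tensor<1x8x8x4xf32>, tensor<16x3x3x4xf32>, tensor<16xf32>) -> tensor<1x6x6x16xf32>
```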
Rob Suderman f2832c2295 [mlir][tosa] Added shape propagation for TOSA pool operations.
Pool operations perform the same shape propagation. Included the shape
propagation and tests for avg_pool2d and max_pool2d.

Differential Revision: https://reviews.llvm.org/D105665
2021-07-12 15:40:49 -07:00
Rob Suderman 5a4e776010 [mlir][tosa] Added more shape inference for tosa ops
Added shape inference for:
- scatter
- gather
- transpose
- slice
- pad
- concat
- reduction operations

Also updated reshape for more aggressive shape inference.

Differential Revision: https://reviews.llvm.org/D105383
2021-07-12 10:04:49 -07:00
Rob Suderman 8dea784b3e [mlir][tosa] Add tosa shape inference with InferReturnTypeComponent
Added InferReturnTypeComponents for NAry operations, reshape, and reverse.
With the additional tosa-infer-shapes pass, we can infer/propagate shapes
across a set of TOSA operations. The current version does not modify the
FuncOp type; instead, it inserts an unrealized conversion cast prior to any new
non-matching returns.

Differential Revision: https://reviews.llvm.org/D105312
2021-07-01 16:04:26 -07:00
Suraj Sudhir 2eb7bbbe65 [mlir][tosa] Use 3D tensors in tosa.matmul
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D105213
2021-06-30 12:22:52 -07:00
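A minimal sketch of the batched form, with an explicit batch dimension on both operands; shapes are illustrative:

```
// [batch, M, K] x [batch, K, N] -> [batch, M, N]
%0 = "tosa.matmul"(%a, %b) : (tensor<2x3x4xf32>, tensor<2x4x5xf32>) -> tensor<2x3x5xf32>
```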
Suraj Sudhir 4b01435230 [mlir][tosa] Remove tosa.identityn operator
Removes the identityn operator from the TOSA MLIR definition.
Removes the TosaToLinalg mappings.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D102329
2021-05-12 12:46:22 -07:00
Rob Suderman d3e987c389 [mlir][tosa] Added div op, variadic concat. Removed placeholder. Spec v0.22 alignment.
Nearly complete alignment to spec v0.22
- Adds Div op
- Concat inputs now variadic
- Removes Placeholder op

Note: TF side PR https://github.com/tensorflow/tensorflow/pull/48921 deletes Concat legalizations to avoid breaking TensorFlow CI. This must be merged only after the TF PR has merged.

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D101958
2021-05-06 15:55:58 -07:00
Suraj Sudhir ec46e03daf [mlir][tosa] TOSA MLIR dialect update to v0.22, part 1
Incremental set of updates to align to TOSA v0.22 spec

    - modify gather, resize
    - add scatter
    - remove aint8 type

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D99390
2021-03-25 21:34:34 -07:00
Rahul Joshi 67a339e968 [MLIR] Disallow `sym_visibility`, `sym_name` and `type` attributes in the parsed attribute dictionary.
Differential Revision: https://reviews.llvm.org/D94200
2021-01-12 09:11:02 -08:00
Stella Laurenzo db9713cd77 [mlir] Add Tosa dialect const folder for tosa.const.
* Was missed in the initial submission and is required for a ConstantLike op.
* Also adds a materializeConstant hook to preserve it.
* Tightens up the argument constraint on tosa.const to match what is actually legal.

Differential Revision: https://reviews.llvm.org/D92040
2020-11-24 17:33:00 +00:00
Suraj Sudhir b28121133d TOSA MLIR Dialect
This is the TOSA MLIR Dialect described in the following MLIR RFC: https://llvm.discourse.group/t/rfc-tosa-dialect-in-mlir/1971/24

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D90411
2020-11-07 08:38:09 -08:00