Commit Graph

2301 Commits

Aart Bik 0b55f94d2b [mlir][sparse] replace stack-based access pattern with dyn-alloc
Rationale:
Allocating the temporary buffers for access pattern expansion on the stack
(using alloca) is a bit too aggressive, since it easily runs out of stack space
for large enveloping tensor dimensions. This revision switches to dynamic
allocation of these buffers, using explicit alloc/dealloc pairs.
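A minimal sketch of the intended change in the generated IR (the buffer name, element type, and size operand are illustrative, not taken from the patch):

```
// Before: expansion buffers allocated on the stack.
%buf = memref.alloca(%sz) : memref<?xindex>

// After: an explicit heap allocation paired with a deallocation.
%buf = memref.alloc(%sz) : memref<?xindex>
// ... use %buf during access pattern expansion ...
memref.dealloc %buf : memref<?xindex>
```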

Reviewed By: bixia, wrengr

Differential Revision: https://reviews.llvm.org/D123253
2022-04-06 17:10:43 -07:00
Matthias Springer f4f1cf6c31 [mlir][bufferize] Better analysis for return values of CallOps
Support returning arbitrary tensors from functions, even those that are
not equivalent. To that end, additional information is gathered during
the analysis phase; in particular, which function arguments alias with
which return values.

Also fix bugs in the current implementation when returning equivalent
tensors. Various unit tests are added to ensure that we have better test
coverage.

Note: Returning non-equivalent tensors is only allowed when
allowReturnAllocs is enabled. This functionality is useful for unit
testing and compatibility with other bufferizations such as the sparse
compiler. This is also towards using ModuleBufferization as a
replacement for --func-bufferize.

Differential Revision: https://reviews.llvm.org/D119120
2022-04-06 23:54:32 +09:00
Matthias Springer cd7de446fd [mlir][bufferize] Simplify ModuleBufferization driver
* Bufferize FuncOp bodies and boundaries in the same loop. This is in preparation for moving FuncOp bufferization into an external model implementation.
* As a side effect, stop bufferization earlier if there was an error. (Do not continue bufferization; this produces fewer error messages.)
* Run the equivalence analysis of CallOps before the main analysis. This is needed so that equivalence info is propagated properly.

Differential Revision: https://reviews.llvm.org/D123208
2022-04-06 23:53:07 +09:00
Matthias Springer 5ab34492d6 [mlir][bufferize] Fix dropped return type in ModuleBufferization
Differential Revision: https://reviews.llvm.org/D123192
2022-04-06 23:48:15 +09:00
Alexander Belyaev 747b10be95 Revert "Revert "[mlir] Rewrite canonicalization of collapse(expand) and expand(collapse).""
This reverts commit 96e9b6c9dc.
2022-04-06 12:18:30 +02:00
Nicolas Vasilache fc8f465a00 [mlir][MemRef] Allow transposed layouts in ExpandShapeOp.
https://reviews.llvm.org/D122641 introduced fixes to the ExpandShapeOp verifier
but also introduced an artificial layout limitation that prevents the consideration of transposed layouts.

This revision fixes the omissions and reimplements the logic using saturated arithmetic, which is more
idiomatic and avoids leaking internal implementation details.

Test cases are added for transposed layouts.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D122845
2022-04-06 04:19:30 -04:00
Hanhan Wang 96e9b6c9dc Revert "[mlir] Rewrite canonicalization of collapse(expand) and expand(collapse)."
This reverts commit 64f659bee6.

An invalid tensor.expand_shape op is generated with this commit. To reproduce:

$ mlir-opt -canonicalize a.mlir

```
func @foo(%0: tensor<1x1xf32>, %1: tensor<1x1xf32>, %2: tensor<1x1xf32>) -> tensor<1x1xf32> {
  %cst = arith.constant 0.000000e+00 : f32
  %3 = linalg.init_tensor [8, 1] : tensor<8x1xf32>
  %4 = linalg.fill ins(%cst : f32) outs(%3 : tensor<8x1xf32>) -> tensor<8x1xf32>
  %5 = tensor.collapse_shape %0 [] : tensor<1x1xf32> into tensor<f32>
  %6 = tensor.insert_slice %5 into %4[0, 0] [1, 1] [1, 1] : tensor<f32> into tensor<8x1xf32>
  %7 = linalg.init_tensor [8, 1] : tensor<8x1xf32>
  %8 = linalg.fill ins(%cst : f32) outs(%7 : tensor<8x1xf32>) -> tensor<8x1xf32>
  %9 = tensor.collapse_shape %2 [] : tensor<1x1xf32> into tensor<f32>
  %10 = tensor.insert_slice %9 into %8[0, 0] [1, 1] [1, 1] : tensor<f32> into tensor<8x1xf32>
  %11 = tensor.collapse_shape %6 [[0, 1]] : tensor<8x1xf32> into tensor<8xf32>
  %12 = linalg.init_tensor [8] : tensor<8xf32>
  %13 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>], iterator_types = ["parallel"]} ins(%11 : tensor<8xf32>) outs(%12 : tensor<8xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<8xf32>
  %14 = tensor.expand_shape %13 [[0, 1, 2, 3]] : tensor<8xf32> into tensor<1x1x8x1xf32>
  %15 = tensor.collapse_shape %1 [] : tensor<1x1xf32> into tensor<f32>
  %16 = linalg.init_tensor [] : tensor<f32>
  %17 = linalg.generic {indexing_maps = [affine_map<() -> ()>, affine_map<() -> ()>], iterator_types = []} ins(%15 : tensor<f32>) outs(%16 : tensor<f32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<f32>
  %18 = tensor.expand_shape %17 [] : tensor<f32> into tensor<1x1x1x1xf32>
  %19 = tensor.collapse_shape %10 [[0, 1]] : tensor<8x1xf32> into tensor<8xf32>
  %20 = linalg.init_tensor [8] : tensor<8xf32>
  %21 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>], iterator_types = ["parallel"]} ins(%19 : tensor<8xf32>) outs(%20 : tensor<8xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<8xf32>
  %22 = tensor.expand_shape %21 [[0, 1, 2, 3]] : tensor<8xf32> into tensor<1x1x8x1xf32>
  %23 = linalg.mmt4d {comment = "f32*f32->f32, aarch64, matrix*vector"} ins(%14, %18 : tensor<1x1x8x1xf32>, tensor<1x1x1x1xf32>) outs(%22 : tensor<1x1x8x1xf32>) -> tensor<1x1x8x1xf32>
  %24 = tensor.collapse_shape %23 [[0, 1, 2, 3]] : tensor<1x1x8x1xf32> into tensor<8xf32>
  %25 = linalg.init_tensor [8] : tensor<8xf32>
  %26 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>], iterator_types = ["parallel"]} ins(%24 : tensor<8xf32>) outs(%25 : tensor<8xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    linalg.yield %arg3 : f32
  } -> tensor<8xf32>
  %27 = tensor.expand_shape %26 [[0, 1]] : tensor<8xf32> into tensor<8x1xf32>
  %28 = tensor.extract_slice %27[0, 0] [1, 1] [1, 1] : tensor<8x1xf32> to tensor<f32>
  %29 = tensor.expand_shape %28 [] : tensor<f32> into tensor<1x1xf32>
  return %29 : tensor<1x1xf32>
}
```

Differential Revision: https://reviews.llvm.org/D123161
2022-04-05 15:05:41 -07:00
Nirvedh 01055ed1d7 [mlir][linalg] Move linalg.fill folding into linalg.generic pattern from canonicalization to elementwise fusion
Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D122847
2022-04-05 20:13:03 +00:00
Alexander Belyaev 64f659bee6 [mlir] Rewrite canonicalization of collapse(expand) and expand(collapse).
Differential Revision: https://reviews.llvm.org/D122666
2022-04-05 10:03:07 +02:00
River Riddle 6edef13569 [mlir:PassOption] Rework ListOption parsing and add support for std::vector/SmallVector options
ListOption currently uses llvm::cl::list under the hood, but the usages
of ListOption are generally a tad different from llvm::cl::list. This
commit codifies this by making ListOption implicitly comma separated,
and removes the explicit flag set for all of the current list options.
The new parsing for comma separation of ListOption also adds in support
for skipping over delimited sub-ranges (i.e. {}, [], (), "", ''). This
more easily supports nested options that use those as part of the
format, and this constraint (balanced delimiters) is already codified
in the syntax of pass pipelines.

See https://discourse.llvm.org/t/list-of-lists-pass-option/5950 for
related discussion

Differential Revision: https://reviews.llvm.org/D122879
2022-04-02 00:45:11 -07:00
jacquesguan bc37077947 [mlir][Vector] Add constant folder for extractelement.
This revision adds a constant folder for vector.extractelement.
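A minimal sketch of the kind of IR this folds (the concrete vector and index values are illustrative):

```
%v = arith.constant dense<[10, 20, 30, 40]> : vector<4xi32>
%i = arith.constant 1 : i32
%e = vector.extractelement %v[%i : i32] : vector<4xi32>
// With both operands constant, %e folds to arith.constant 20 : i32.
```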

Differential Revision: https://reviews.llvm.org/D122886
2022-04-02 11:10:42 +08:00
jacquesguan 262823612d [mlir][Vector] Add constant folder for insertelement.
This revision adds a constant folder for vector.insertelement.
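A minimal sketch of the kind of IR this folds (the concrete values are illustrative):

```
%v = arith.constant dense<[10, 20, 30, 40]> : vector<4xi32>
%s = arith.constant 99 : i32
%i = arith.constant 2 : i32
%r = vector.insertelement %s, %v[%i : i32] : vector<4xi32>
// With all operands constant, %r folds to
// arith.constant dense<[10, 20, 99, 40]> : vector<4xi32>.
```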

Differential Revision: https://reviews.llvm.org/D122721
2022-04-02 10:20:19 +08:00
Lei Zhang a480d75fe4 [mlir][vector] Fold transpose(broadcast(<scalar>))
For such cases, the transpose op can be elided.
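A minimal sketch of the fold (the shapes are illustrative):

```
%b = vector.broadcast %s : f32 to vector<4x8xf32>
%t = vector.transpose %b, [1, 0] : vector<4x8xf32> to vector<8x4xf32>
// The transpose of a scalar broadcast is itself a broadcast, so %t can be
// rewritten as: vector.broadcast %s : f32 to vector<8x4xf32>
```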

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D122903
2022-04-01 14:51:36 -04:00
wren romano 63bdcaf92a [mlir][sparse] Moving `delete coo` into codegen instead of runtime library
Prior to this change there were a number of places where the allocation and deallocation of SparseTensorCOO objects were not cleanly paired, leading to inconsistencies regarding whether each function released its tensor/coo arguments or not, as well as making it easy to run afoul of memory leaks, use-after-free, or double-free errors.  This change cleans up the codegen vs runtime boundary to resolve those issues.  Now, the only time the runtime library frees an object is either (a) because it's a function explicitly designed to do so, or (b) because the allocated object is entirely local to the function and would be a memory leak if not released.  Thus, now the codegen takes complete responsibility for releasing any objects it caused to be allocated.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122435
2022-04-01 11:08:52 -07:00
Lei Zhang 57b101bdec [mlir][vector] Handle scalars in extract_strided_slice(broadcast)
For such cases we cannot generate extract_strided_slice ops.
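A sketch of the scalar case, with illustrative shapes and offsets (not taken from the patch):

```
%b = vector.broadcast %f : f32 to vector<8xf32>
%s = vector.extract_strided_slice %b
    {offsets = [2], sizes = [4], strides = [1]}
    : vector<8xf32> to vector<4xf32>
// The broadcast source is a scalar, so no extract_strided_slice can be
// created on it; %s is instead rewritten as:
// vector.broadcast %f : f32 to vector<4xf32>
```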

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D122902
2022-04-01 12:07:47 -04:00
Matthias Springer 73c0333dee [mlir][tensor][bufferize] Support 0-d collapse_shape with offset
Differential Revision: https://reviews.llvm.org/D122901
2022-04-01 22:30:37 +09:00
Mehdi Amini ba43d6f85c Revert "[GreedPatternRewriter] Preprocess constants while building worklist when not processing top down"
This reverts commit 59bbc7a085.

This exposes an issue breaking the contract of
`applyPatternsAndFoldGreedily` where we "converge" without applying
remaining patterns.
2022-04-01 06:16:55 +00:00
River Riddle 59bbc7a085 [GreedPatternRewriter] Preprocess constants while building worklist when not processing top down
This avoids accidentally reversing the order of constants during successive
application, e.g. when running the canonicalizer. This helps reduce the number
of iterations, and also avoids unnecessary changes to input IR.

Fixes #51892

Differential Revision: https://reviews.llvm.org/D122692
2022-03-31 12:08:55 -07:00
Okwan Kwon 65bdeddb1e [mlir] Bubble up tensor.extract_slice above linalg operation
Bubble up extract_slice above Linalg operation.

A sequence of operations

    %0 = linalg.<op> ... arg0, arg1, ...
    %1 = tensor.extract_slice %0 ...

can be replaced with

    %0 = tensor.extract_slice %arg0
    %1 = tensor.extract_slice %arg1
    %2 = linalg.<op> ... %0, %1, ...

This reduces the amount of computation performed by the linalg operation.

The implementation uses the tiling utility functions. One difference
from the tiling process is that we don't need to insert the checking
code for the out-of-bound accesses. The use of the slice itself
represents that the code writer is sure about the boundary condition.
To avoid adding the boundary condition check code, `omitPartialTileCheck`
is introduced for the tiling utility functions.

Differential Revision: https://reviews.llvm.org/D122437
2022-03-31 16:48:38 +00:00
Matthias Springer 51df62388e [mlir][tensor] Fix bufferization of CollapseShapeOp / ExpandShapeOp
Infer a tighter MemRef type instead of always falling back to the most dynamic MemRef type, which is inefficient and caused op verification errors.

Differential Revision: https://reviews.llvm.org/D122649
2022-03-31 17:11:45 +09:00
Matthias Springer 86d118e7f2 [mlir][memref] Fix CollapseShapeOp verifier
Differential Revision: https://reviews.llvm.org/D122647
2022-03-31 17:08:16 +09:00
Matthias Springer 2bd7ee4566 [mlir][memref] Fix ExpandShapeOp verifier
* Complete rewrite of the verifier.
* CollapseShapeOp verifier will be updated in a subsequent commit.
* Update and expand op documentation.
* Add a new builder that infers the result type based on the source type, result shape and reassociation indices. In essence, only the result layout map is inferred.

Differential Revision: https://reviews.llvm.org/D122641
2022-03-31 17:05:52 +09:00
jacquesguan 01ad70fd1d [mlir][Vector] Fold ShuffleOp if result is identical to one of source vectors.
For example, we could do the following eliminations:
  fold vector.shuffle V1, V2, [0, 1, 2, 3] : <4xi32>, <2xi32> -> V1
  fold vector.shuffle V1, V2, [4, 5] : <4xi32>, <2xi32> -> V2

Differential Revision: https://reviews.llvm.org/D122706
2022-03-31 10:46:13 +08:00
Benjamin Kramer 35dab904c0 [linalg] When removing noop linalg.generics, check that inserting a cast is valid
linalg.generic can also take scalars instead of tensors, which
tensor.cast doesn't support. We don't have an easy way to cast between
scalars and tensors so just keep the linalg.generic in those cases.

Differential Revision: https://reviews.llvm.org/D122575
2022-03-29 23:05:54 +02:00
gysit d26c42af57 [mlir][linalg] Control dimensions to pad.
This revision supports padding only a subset of the iteration dimensions via an additional padding-dimensions parameter. This control allows us to pad an operation in multiple steps. For example, one may want to pad only the output dimensions of a producer matmul fused into a consumer loop nest, before tiling and padding its reduction dimension.

Depends On D122309

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D122560
2022-03-28 14:39:57 +00:00
gysit 58d0da885e [mlir][linalg] Use arrays to pass padding options.
Pass the padding options using arrays instead of lambdas. In particular pass the padding value as string and use the argument parser to create the padding value. Arrays are a more natural choice that matches the current use cases and avoids converting arrays to lambdas.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D122309
2022-03-28 13:49:05 +00:00
Shraiysh Vaishay fcbf00f098 [mlir][OpenMP] Added ReductionClauseInterface
This patch adds the ReductionClauseInterface and also adds reduction
support for `omp.parallel` operation.

Reviewed By: kiranchandramohan

Differential Revision: https://reviews.llvm.org/D122402
2022-03-28 14:24:28 +05:30
Uday Bondhugula 5576579c86 Update affine.load folding hook to fold global splat constant loads
Enhance the affine.load folding hook to fold loads from global splat constant
memrefs.
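A minimal sketch of the kind of IR the folder now handles (the global name and splat value are hypothetical):

```
memref.global "private" constant @cst_splat : memref<16xf32> = dense<4.2>

%m = memref.get_global @cst_splat : memref<16xf32>
%v = affine.load %m[%i] : memref<16xf32>
// The memref is a splat constant, so %v folds to the splat value
// regardless of the index: arith.constant 4.2 : f32.
```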

Differential Revision: https://reviews.llvm.org/D122292
2022-03-26 06:44:03 +05:30
Christopher Bate 3be7c28917 [mlir][NVVM] Add support for nvvm mma.sync ops
This patch adds MLIR NVVM support for the various NVPTX `mma.sync`
operations. There are a number of possible data type, shape,
and other attribute combinations supported by the operation, so a
custom assembly format is added and attributes are inferred where
possible.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D122410
2022-03-25 17:28:05 +00:00
lipracer 5161835d5a [mlir][tosa] : adding folder and canonicalizer for select
Define a canonicalizer and folder for tosa::select.

Reviewed By: mehdi_amini, Mogball

Differential Revision: https://reviews.llvm.org/D121513
2022-03-25 16:50:29 +00:00
Javier Setoain 7783a178f5 [mlir][Sparse] Add option for VLA sparsification
Use "enable-vla-vectorization=vla" to generate a vector length agnostic
loops during vectorization. This option works for vectorization strategy 2.

Differential Revision: https://reviews.llvm.org/D118379
2022-03-25 10:54:49 +00:00
Javier Setoain a75a46db89 [mlir][Vector] Enable create_mask for scalable vectors
The way vector.create_mask is currently lowered is
vector-length-dependent, and therefore incompatible with scalable vector
types. This patch adds an alternative lowering path for create_mask
operations that return a scalable vector mask.

Differential Revision: https://reviews.llvm.org/D118248
2022-03-25 10:48:59 +00:00
Thomas Raoux d77f483640 [mlir][gpu] Relax restriction on mma load/store op
Those ops can support more complex layouts as long as the innermost
dimension is contiguous.

Differential Revision: https://reviews.llvm.org/D122452
2022-03-25 04:03:40 +00:00
Thomas Raoux 33d2a780a1 [mlir][linalg] Add pattern to split reduction dimension in a linalg op
This transformation allows breaking up a reduction dimension into a
parallel dimension and a reduction dimension, followed by a separate
reduction op. This makes it possible to generate a tree reduction, which is
beneficial on targets that can take advantage of the parallelism.

Differential Revision: https://reviews.llvm.org/D122045
2022-03-24 23:22:53 +00:00
gysit b1b57f8104 [mlir][linalg] Support padding LinalgOps in use-def chain.
Previously, only LinalgOps whose operands are defined by an ExtractSliceOp could be padded. The revision supports walking a use-def chain of LinalgOps to find an ExtractSliceOp.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D122116
2022-03-24 10:44:44 +00:00
gysit 53f7fb0a87 [mlir][linalg] Do not fuse shape-only producers.
This revision introduces a heuristic to stop fusion for shape-only tensors. A shape-only tensor only defines the shape of the consumer computation while its data is not used. Pure producer-consumer fusion thus shall not fuse the producer of a shape-only tensor, in particular because the shape-only tensor will have other uses that actually consume the data.

The revision enables fusion for consumers that have two uses of the same tensor: one as an input operand and one as a shape-only output operand. In these cases, we want to fuse only the input operand and avoid output fusion via an iteration argument.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D120981
2022-03-24 10:22:41 +00:00
gysit b257dba58e [mlir][linalg] Create AffineMinOp map in canonical form.
Create the AffineMinOp used to compute the padding width in canonical form and update the tests.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D122311
2022-03-24 06:55:59 +00:00
Shraiysh Vaishay 86f156a49b [mlir][OpenMP][NFC] Remove unnecessary attributes
These attributes were added because oilist required them earlier. It
no longer requires them, so these attributes can be safely removed
from the operations.

Reviewed By: kiranchandramohan

Differential Revision: https://reviews.llvm.org/D122289
2022-03-24 10:13:06 +05:30
Shraiysh Vaishay 11ed2d4acd [mlir][OpenMP] Add omp.single
This patch adds omp.single according to Section 2.8.2 of OpenMP 5.0.

Also added tests for the same.
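A minimal sketch of the new op's basic form (the payload op is a placeholder):

```
omp.single {
  // Executed by a single thread of the encountering team.
  "test.op"() : () -> ()
  omp.terminator
}
```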

Reviewed By: peixin

Differential Revision: https://reviews.llvm.org/D122288

Co-authored-by: Kiran Kumar T P <kirankumar.tp@amd.com>
2022-03-23 16:45:27 +05:30
Shraiysh Vaishay b244bba582 [mlir][OpenMP] Added assembly format for omp.wsloop and remove parseClauses
This patch
 - adds assembly format for `omp.wsloop` operation
 - removes `parseClauses` as it is no longer required

This is expected to be the final patch in a series of patches for replacing
parsers for clauses with `oilist`.

Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D121367
2022-03-23 10:02:02 +05:30
jacquesguan 75f0d12ebf [mlir][Arith] Make integer max/min commutative.
Make MaxSI, MaxUI, MinSI and MinUI commutative, so they will be canonicalized to have their constant operands appear as the second operand, and the constant folder will match more cases.
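A minimal sketch of the effect (operand names are illustrative):

```
%m = arith.maxsi %c10, %x : i32
// Being commutative, this canonicalizes to arith.maxsi %x, %c10 : i32,
// so folders that expect the constant as the second operand now apply.
```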

Differential Revision: https://reviews.llvm.org/D122225
2022-03-23 10:17:36 +08:00
wren romano df948127ac [mlir][sparse] Adding Action::kSparseToSparse for @newSparseTensor
This is work towards: https://github.com/llvm/llvm-project/issues/51652

This differential doesn't yet make use of the new kSparseToSparse, just introduces it. The differential that finally makes use of it is D122061, which is the final differential in the chain that fixes bug 51652.

Depends On D122054

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122055
2022-03-22 13:46:59 -07:00
gysit 1d259f9f02 [mlir][affine] Add affine.min / affine.max canonicalization.
The revision introduces an affine.min and affine.max canonicalization pattern that orders the result expressions. It flattens the result expressions to arrays of dimension and symbol coefficients plus one constant coefficient and rearranges them in lexicographic order.

Without the pattern, CSE will not eliminate two affine.min / affine.max operations if the results are ordered differently. For example, the operations
```
  %1 = affine.min affine_map<(d0) -> (8, -d0 + 27)>(%arg4)
  %2 = affine.min affine_map<(d0) -> (-d0 + 27, 8)>(%arg4)
```
do not CSE. After applying the pattern, the two operations are equivalent
```
  %1 = affine.min affine_map<(d0) -> (8, -d0 + 27)>(%arg4)
  %2 = affine.min affine_map<(d0) -> (8, -d0 + 27)>(%arg4)
```
which enables CSE.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D121819
2022-03-22 07:17:19 +00:00
jacquesguan e609417cdc [mlir][Math] Add more constant folder for Math ops.
This revision adds constant folders for abs, copysign, ctlz, cttz and
ctpop.
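A minimal sketch for one of the folded ops (the constant is illustrative):

```
%c = arith.constant 240 : i32
%r = math.ctpop %c : i32
// 240 = 0b11110000 has four set bits, so %r folds to arith.constant 4 : i32.
```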

Differential Revision: https://reviews.llvm.org/D122115
2022-03-22 10:23:15 +08:00
Mahesh Ravishankar b40f420c2b [mlir][MemRef] Add early exit for computing dropped unit-dims.
Computing dropped unit-dims when all the unit dims are dropped does
not need to check for strides being dropped.
This also enables canonicalization of reduced-rank subviews.

Reviewed By: gysit

Differential Revision: https://reviews.llvm.org/D121766
2022-03-21 21:50:29 +00:00
Aart Bik 69a7759b40 [mlir][sparse] implement loop index value vectorization
with CHECK and integration test

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D122040
2022-03-21 10:40:38 -07:00
William S. Moses 195de3dd6c [MLIR][SCF] Fix nested if merging bug
The current nested if merging has a bug. Specifically, consider the following code:

```
    %r = scf.if %arg3 -> (i32) {
      scf.if %arg1 {
        "test.op"() : () -> ()
      }
      scf.yield %arg0 : i32
    } else {
      scf.yield %arg2 : i32
    }
```

When the above gets merged, it will become:
```
    %r = scf.if %arg3 && %arg1 -> (i32) {
      "test.op"() : () -> ()
      scf.yield %arg0 : i32
    } else {
      scf.yield %arg2 : i32
    }
```

However, this means that when only %arg3 is true, we will incorrectly return %arg2 instead
of %arg0. This change updates the behavior of the pass to only enable nested if merging where
the outer yield contains only values from the inner if, or values defined outside of the if.

In the case of the latter, they can be turned into a select of only the outer if condition, thus
maintaining correctness.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D122108
2022-03-21 11:42:26 -04:00
jacquesguan 55053205e5 [mlir][Arith] Add constant folder for right shift
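A minimal sketch of the fold (the values are illustrative):

```
%c16 = arith.constant 16 : i32
%c2 = arith.constant 2 : i32
%r = arith.shrsi %c16, %c2 : i32
// Both operands are constant, so %r folds to arith.constant 4 : i32.
```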
Differential Revision: https://reviews.llvm.org/D121985
2022-03-21 09:58:18 +08:00
River Riddle 9595f3568a [mlir:PDL] Remove the ConstantParams support from native Constraints/Rewrites
This support has never really worked well, and is incredibly clunky to
use (it effectively creates two argument APIs), and clunky to generate (it isn't
clear how we should actually expose this from PDL frontends). Treating these
as just attribute arguments is much much cleaner in every aspect of the stack.
If we need to optimize lots of constant parameters, it would be better to
investigate internal representation optimizations (e.g. batch attribute creation),
that do not affect the user (we want a clean external API).

Differential Revision: https://reviews.llvm.org/D121569
2022-03-19 13:28:24 -07:00
William S. Moses d8a6a696bf [MLIR][SCF] Place hoisted scf.if->select prior to the remaining if
This patch slightly updates the behavior of scf.if->select to
place any hoisted select statements prior to the remaining scf.if body.

This allows better composition with other canonicalization passes, such as
scf.if nested merging.
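A sketch of the placement, assuming an if whose yielded values are defined outside of it (names are illustrative):

```
%r = scf.if %cond -> (i32) {
  "test.op"() : () -> ()
  scf.yield %a : i32
} else {
  scf.yield %b : i32
}
```

becomes

```
// The hoisted select is placed before the remaining scf.if body:
%r = arith.select %cond, %a, %b : i32
scf.if %cond {
  "test.op"() : () -> ()
}
```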

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D122027
2022-03-18 22:14:21 -04:00