Fix bufferization of collapse_shape of subviews with size 1 dims.

Currently, there's an optimization that claims dimensions of size 1 are always
contiguous. This is not necessarily the case for subviews.

```
Input:

[
  [
    [0, 1],
    [2, 3]
  ],
  [
    [4, 5],
    [6, 7]
  ]
]

Subview:

  [
    [
      [0, 1]
    ],
    [
      [4, 5]
    ]
  ]
```
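In row-major order the 2x2x2 input occupies one contiguous 8-element buffer, and the subview keeps only the elements at linear offsets 0, 1, 4, and 5. A small Python sketch (element strides assumed for an identity layout) makes the gap visible:

```python
# Subview [0:2, 0:1, 0:2] of a row-major 2x2x2 buffer holding 0..7.
# The source strides (in elements) are (4, 2, 1); the subview keeps
# the same strides but shrinks the middle dimension to size 1.
shape = (2, 1, 2)
strides = (4, 2, 1)

offsets = sorted(
    i * strides[0] + j * strides[1] + k * strides[2]
    for i in range(shape[0])
    for j in range(shape[1])
    for k in range(shape[2])
)
print(offsets)  # [0, 1, 4, 5]: offsets 2 and 3 are skipped, so the
                # four elements do not form one contiguous run
```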

The old logic treats this subview as contiguous even though it is not.
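The stride check performed by `computeCollapsedLayoutMap` can be sketched in Python. This is a simplification with hypothetical names, static strides only, and a single reassociation group; `skip_unit_dims` mimics the behavior of the old shortcut:

```python
def collapsible(shape, strides, skip_unit_dims=False):
    """Check whether collapsing all dims yields a contiguous result.

    Walks the dims right to left, accumulating the stride a contiguous
    layout would require, and compares it against the actual stride.
    skip_unit_dims=True mimics the old, buggy shortcut that skipped
    validation whenever a size-1 dimension was involved.
    """
    expected = strides[-1]
    for i in reversed(range(len(shape) - 1)):
        expected *= shape[i + 1]
        if skip_unit_dims and shape[i + 1] == 1:
            continue  # old logic: size-1 dims "can never break contiguity"
        if strides[i] != expected:
            return False
    return True

# The subview above: shape (2, 1, 2) with element strides (4, 2, 1).
print(collapsible((2, 1, 2), (4, 2, 1)))                       # False (correct)
print(collapsible((2, 1, 2), (4, 2, 1), skip_unit_dims=True))  # True (the bug)
```

With the shortcut enabled, the stride of dimension 0 (4) is never compared against the required stride (2), so the non-contiguous subview is accepted.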

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D134026
Author: Johannes Reifferscheid
Date:   2022-09-16 11:09:09 +02:00
Commit: d7c606f5b7 (parent: 9f2c39418b)

2 changed files with 17 additions and 6 deletions


```diff
@@ -1917,12 +1917,6 @@ computeCollapsedLayoutMap(MemRefType srcType,
     // Both source and result stride must have the same static value. In that
     // case, we can be sure, that the dimensions are collapsible (because they
     // are contiguous).
-    //
-    // One special case is when the srcShape is `1`, in which case it can
-    // never produce non-contiguity.
-    if (srcShape[idx] == 1)
-      continue;
     // If `strict = false` (default during op verification), we accept cases
     // where one or both strides are dynamic. This is best effort: We reject
     // ops where obviously non-contiguous dims are collapsed, but accept ops
```

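The surviving comment describes the `strict` flag: with `strict = false` (the default during op verification), dynamic strides are accepted on a best-effort basis. A hedged Python sketch of that comparison (hypothetical names; MLIR represents dynamic strides with a sentinel value):

```python
DYNAMIC = None  # stand-in for MLIR's dynamic-stride sentinel

def stride_matches(actual, required, strict):
    """Compare an actual stride against the stride contiguity requires.

    With strict=False (default during op verification), a dynamic stride
    might turn out to be contiguous at runtime, so it is accepted; with
    strict=True, only provably contiguous static strides pass.
    """
    if actual is DYNAMIC or required is DYNAMIC:
        return not strict
    return actual == required

print(stride_matches(DYNAMIC, 2, strict=False))  # True: best effort
print(stride_matches(DYNAMIC, 2, strict=True))   # False: not provable
print(stride_matches(4, 2, strict=False))        # False: provably non-contiguous
```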

```diff
@@ -495,6 +495,23 @@ func.func @tensor.collapse_shape_of_slice4(%arg0: tensor<?x2x4xf32>, %offset: in
 // -----
+
+// CHECK-LABEL: func @tensor.collapse_shape_of_slice5(
+func.func @tensor.collapse_shape_of_slice5(%arg0: tensor<2x2x2xi64>) -> tensor<4xi64> {
+  // CHECK: %[[subview:.*]] = memref.subview %{{.*}} : memref<2x2x2xi64> to memref<2x1x2xi64, #{{.*}}>
+  %0 = tensor.extract_slice %arg0[0, 0, 0] [2, 1, 2] [1, 1, 1] : tensor<2x2x2xi64> to tensor<2x1x2xi64>
+  // This memref is not collapsible, so the buffer must be copied to get rid of
+  // the layout map.
+  // CHECK: %[[alloc:.*]] = memref.alloc() {{.*}} : memref<2x1x2xi64>
+  // CHECK: memref.copy %[[subview]], %[[alloc]]
+  // CHECK: memref.collapse_shape %[[alloc]] [
+  // CHECK-SAME: [0, 1, 2]] : memref<2x1x2xi64> into memref<4xi64>
+  %1 = tensor.collapse_shape %0 [[0, 1, 2]] : tensor<2x1x2xi64> into tensor<4xi64>
+  return %1 : tensor<4xi64>
+}
+
+// -----
 // CHECK-LABEL: func @tensor.reshape(
 // CHECK-SAME: %[[t1:.*]]: tensor<?x10xf32>
 func.func @tensor.reshape(%t1: tensor<?x10xf32>) -> tensor<2x2x5xf32> {
```