; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
; RUN: opt -passes=instcombine %s -S -o - | FileCheck %s

; Clamp positive to allOnes:
; E.g., clamp255 implemented in a shifty way can be optimized to v > 255 ? 255 : v, where the sub hasNoSignedWrap.
; int32 clamp255(int32 v) {
;   return (((255 - (v)) >> 31) | (v)) & 255;
; }
;
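; A worked example with 32-bit v: for v = 300, 255 - v = -45, and -45 >> 31
; (an arithmetic shift) is -1 (all ones), so the or produces -1 and the final
; mask yields 255; for v = 100, 255 - v = 155, and 155 >> 31 is 0, so the or
; leaves v unchanged and the mask yields 100.
;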
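; The tests below exercise these folds:
;   or(ashr(subNSW(Y, X), ScalarSizeInBits(Y)-1), X)  -->  X s> Y ? allOnes : X
;   lshr/ashr(or(neg(X), X), bitwidth-1)              -->  zext/sext(icmp ne X, 0)
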
; Scalar Types

define i32 @clamp255_i32(i32 %x) {
; CHECK-LABEL: @clamp255_i32(
; CHECK-NEXT:    [[TMP1:%.*]] = call i32 @llvm.smin.i32(i32 [[X:%.*]], i32 255)
; CHECK-NEXT:    [[AND:%.*]] = and i32 [[TMP1]], 255
; CHECK-NEXT:    ret i32 [[AND]]
;
  %sub = sub nsw i32 255, %x
  %shr = ashr i32 %sub, 31
  %or = or i32 %shr, %x
  %and = and i32 %or, 255
  ret i32 %and
}
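
; In C terms, the folded form captured by the CHECK lines above is roughly
; the following sketch (clamp255_folded is a hypothetical name, not part of
; the test):
;   int32 clamp255_folded(int32 v) {
;     return (v < 255 ? v : 255) & 255; /* smin(v, 255), then mask */
;   }
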
define i8 @sub_ashr_or_i8(i8 %x, i8 %y) {
; CHECK-LABEL: @sub_ashr_or_i8(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt i8 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select i1 [[TMP1]], i8 -1, i8 [[X]]
; CHECK-NEXT:    ret i8 [[OR]]
;
  %sub = sub nsw i8 %y, %x
  %shr = ashr i8 %sub, 7
  %or = or i8 %shr, %x
  ret i8 %or
}

define i16 @sub_ashr_or_i16(i16 %x, i16 %y) {
; CHECK-LABEL: @sub_ashr_or_i16(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt i16 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select i1 [[TMP1]], i16 -1, i16 [[X]]
; CHECK-NEXT:    ret i16 [[OR]]
;
  %sub = sub nsw i16 %y, %x
  %shr = ashr i16 %sub, 15
  %or = or i16 %shr, %x
  ret i16 %or
}

define i32 @sub_ashr_or_i32(i32 %x, i32 %y) {
; CHECK-LABEL: @sub_ashr_or_i32(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt i32 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select i1 [[TMP1]], i32 -1, i32 [[X]]
; CHECK-NEXT:    ret i32 [[OR]]
;
  %sub = sub nsw i32 %y, %x
  %shr = ashr i32 %sub, 31
  %or = or i32 %shr, %x
  ret i32 %or
}

define i64 @sub_ashr_or_i64(i64 %x, i64 %y) {
; CHECK-LABEL: @sub_ashr_or_i64(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt i64 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select i1 [[TMP1]], i64 -1, i64 [[X]]
; CHECK-NEXT:    ret i64 [[OR]]
;
  %sub = sub nsw i64 %y, %x
  %shr = ashr i64 %sub, 63
  %or = or i64 %shr, %x
  ret i64 %or
}

define i32 @neg_or_ashr_i32(i32 %x) {
; CHECK-LABEL: @neg_or_ashr_i32(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp ne i32 [[X:%.*]], 0
; CHECK-NEXT:    [[SHR:%.*]] = sext i1 [[TMP1]] to i32
; CHECK-NEXT:    ret i32 [[SHR]]
;
  %neg = sub i32 0, %x
  %or = or i32 %neg, %x
  %shr = ashr i32 %or, 31
  ret i32 %shr
}

; nuw nsw

define i32 @sub_ashr_or_i32_nuw_nsw(i32 %x, i32 %y) {
; CHECK-LABEL: @sub_ashr_or_i32_nuw_nsw(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt i32 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select i1 [[TMP1]], i32 -1, i32 [[X]]
; CHECK-NEXT:    ret i32 [[OR]]
;
  %sub = sub nuw nsw i32 %y, %x
  %shr = ashr i32 %sub, 31
  %or = or i32 %shr, %x
  ret i32 %or
}

; Commute

define i32 @sub_ashr_or_i32_commute(i32 %x, i32 %y) {
; CHECK-LABEL: @sub_ashr_or_i32_commute(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt i32 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select i1 [[TMP1]], i32 -1, i32 [[X]]
; CHECK-NEXT:    ret i32 [[OR]]
;
  %sub = sub nsw i32 %y, %x
  %shr = ashr i32 %sub, 31
  %or = or i32 %x, %shr ; commute %shr and %x
  ret i32 %or
}

define i32 @neg_or_ashr_i32_commute(i32 %x0) {
; CHECK-LABEL: @neg_or_ashr_i32_commute(
; CHECK-NEXT:    [[X:%.*]] = sdiv i32 42, [[X0:%.*]]
; CHECK-NEXT:    [[TMP1:%.*]] = icmp ne i32 [[X]], 0
; CHECK-NEXT:    [[SHR:%.*]] = sext i1 [[TMP1]] to i32
; CHECK-NEXT:    ret i32 [[SHR]]
;
  %x = sdiv i32 42, %x0 ; thwart complexity-based canonicalization
  %neg = sub i32 0, %x
  %or = or i32 %x, %neg
  %shr = ashr i32 %or, 31
  ret i32 %shr
}

; Vector Types

define <4 x i32> @sub_ashr_or_i32_vec(<4 x i32> %x, <4 x i32> %y) {
; CHECK-LABEL: @sub_ashr_or_i32_vec(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt <4 x i32> [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select <4 x i1> [[TMP1]], <4 x i32> <i32 -1, i32 -1, i32 -1, i32 -1>, <4 x i32> [[X]]
; CHECK-NEXT:    ret <4 x i32> [[OR]]
;
  %sub = sub nsw <4 x i32> %y, %x
  %shr = ashr <4 x i32> %sub, <i32 31, i32 31, i32 31, i32 31>
  %or = or <4 x i32> %shr, %x
  ret <4 x i32> %or
}

define <4 x i32> @sub_ashr_or_i32_vec_nuw_nsw(<4 x i32> %x, <4 x i32> %y) {
; CHECK-LABEL: @sub_ashr_or_i32_vec_nuw_nsw(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt <4 x i32> [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select <4 x i1> [[TMP1]], <4 x i32> <i32 -1, i32 -1, i32 -1, i32 -1>, <4 x i32> [[X]]
; CHECK-NEXT:    ret <4 x i32> [[OR]]
;
  %sub = sub nuw nsw <4 x i32> %y, %x
  %shr = ashr <4 x i32> %sub, <i32 31, i32 31, i32 31, i32 31>
  %or = or <4 x i32> %shr, %x
  ret <4 x i32> %or
}

define <4 x i32> @neg_or_ashr_i32_vec(<4 x i32> %x) {
; CHECK-LABEL: @neg_or_ashr_i32_vec(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp ne <4 x i32> [[X:%.*]], zeroinitializer
; CHECK-NEXT:    [[SHR:%.*]] = sext <4 x i1> [[TMP1]] to <4 x i32>
; CHECK-NEXT:    ret <4 x i32> [[SHR]]
;
  %neg = sub <4 x i32> zeroinitializer, %x
  %or = or <4 x i32> %neg, %x
  %shr = ashr <4 x i32> %or, <i32 31, i32 31, i32 31, i32 31>
  ret <4 x i32> %shr
}

define <4 x i32> @sub_ashr_or_i32_vec_commute(<4 x i32> %x, <4 x i32> %y) {
; CHECK-LABEL: @sub_ashr_or_i32_vec_commute(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt <4 x i32> [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select <4 x i1> [[TMP1]], <4 x i32> <i32 -1, i32 -1, i32 -1, i32 -1>, <4 x i32> [[X]]
; CHECK-NEXT:    ret <4 x i32> [[OR]]
;
  %sub = sub nsw <4 x i32> %y, %x
  %shr = ashr <4 x i32> %sub, <i32 31, i32 31, i32 31, i32 31>
  %or = or <4 x i32> %x, %shr
  ret <4 x i32> %or
}

define <4 x i32> @neg_or_ashr_i32_vec_commute(<4 x i32> %x0) {
; CHECK-LABEL: @neg_or_ashr_i32_vec_commute(
; CHECK-NEXT:    [[X:%.*]] = sdiv <4 x i32> <i32 42, i32 42, i32 42, i32 42>, [[X0:%.*]]
; CHECK-NEXT:    [[TMP1:%.*]] = icmp ne <4 x i32> [[X]], zeroinitializer
; CHECK-NEXT:    [[SHR:%.*]] = sext <4 x i1> [[TMP1]] to <4 x i32>
; CHECK-NEXT:    ret <4 x i32> [[SHR]]
;
  %x = sdiv <4 x i32> <i32 42, i32 42, i32 42, i32 42>, %x0 ; thwart complexity-based canonicalization
  %neg = sub <4 x i32> zeroinitializer, %x
  %or = or <4 x i32> %x, %neg
  %shr = ashr <4 x i32> %or, <i32 31, i32 31, i32 31, i32 31>
  ret <4 x i32> %shr
}

; Extra uses

define i32 @sub_ashr_or_i32_extra_use_sub(i32 %x, i32 %y, i32* %p) {
; CHECK-LABEL: @sub_ashr_or_i32_extra_use_sub(
; CHECK-NEXT:    [[SUB:%.*]] = sub nsw i32 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    store i32 [[SUB]], i32* [[P:%.*]], align 4
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt i32 [[Y]], [[X]]
; CHECK-NEXT:    [[OR:%.*]] = select i1 [[TMP1]], i32 -1, i32 [[X]]
; CHECK-NEXT:    ret i32 [[OR]]
;
  %sub = sub nsw i32 %y, %x
  store i32 %sub, i32* %p
  %shr = ashr i32 %sub, 31
  %or = or i32 %shr, %x
  ret i32 %or
}

define i32 @sub_ashr_or_i32_extra_use_or(i32 %x, i32 %y, i32* %p) {
; CHECK-LABEL: @sub_ashr_or_i32_extra_use_or(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt i32 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = select i1 [[TMP1]], i32 -1, i32 [[X]]
; CHECK-NEXT:    store i32 [[OR]], i32* [[P:%.*]], align 4
; CHECK-NEXT:    ret i32 [[OR]]
;
  %sub = sub nsw i32 %y, %x
  %shr = ashr i32 %sub, 31
  %or = or i32 %shr, %x
  store i32 %or, i32* %p
  ret i32 %or
}

define i32 @neg_extra_use_or_ashr_i32(i32 %x, i32* %p) {
; CHECK-LABEL: @neg_extra_use_or_ashr_i32(
; CHECK-NEXT:    [[NEG:%.*]] = sub i32 0, [[X:%.*]]
; CHECK-NEXT:    [[TMP1:%.*]] = icmp ne i32 [[X]], 0
; CHECK-NEXT:    [[SHR:%.*]] = sext i1 [[TMP1]] to i32
; CHECK-NEXT:    store i32 [[NEG]], i32* [[P:%.*]], align 4
; CHECK-NEXT:    ret i32 [[SHR]]
;
  %neg = sub i32 0, %x
  %or = or i32 %neg, %x
  %shr = ashr i32 %or, 31
  store i32 %neg, i32* %p
  ret i32 %shr
}

; Negative Tests

define i32 @sub_ashr_or_i32_extra_use_ashr(i32 %x, i32 %y, i32* %p) {
; CHECK-LABEL: @sub_ashr_or_i32_extra_use_ashr(
; CHECK-NEXT:    [[TMP1:%.*]] = icmp slt i32 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[SHR:%.*]] = sext i1 [[TMP1]] to i32
; CHECK-NEXT:    store i32 [[SHR]], i32* [[P:%.*]], align 4
; CHECK-NEXT:    [[OR:%.*]] = or i32 [[SHR]], [[X]]
; CHECK-NEXT:    ret i32 [[OR]]
;
  %sub = sub nsw i32 %y, %x
  %shr = ashr i32 %sub, 31
  store i32 %shr, i32* %p
  %or = or i32 %shr, %x
  ret i32 %or
}

define i32 @sub_ashr_or_i32_no_nsw_nuw(i32 %x, i32 %y) {
; CHECK-LABEL: @sub_ashr_or_i32_no_nsw_nuw(
; CHECK-NEXT:    [[SUB:%.*]] = sub i32 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[SHR:%.*]] = ashr i32 [[SUB]], 31
; CHECK-NEXT:    [[OR:%.*]] = or i32 [[SHR]], [[X]]
; CHECK-NEXT:    ret i32 [[OR]]
;
  %sub = sub i32 %y, %x
  %shr = ashr i32 %sub, 31
  %or = or i32 %shr, %x
  ret i32 %or
}

define i32 @neg_or_extra_use_ashr_i32(i32 %x, i32* %p) {
; CHECK-LABEL: @neg_or_extra_use_ashr_i32(
; CHECK-NEXT:    [[NEG:%.*]] = sub i32 0, [[X:%.*]]
; CHECK-NEXT:    [[OR:%.*]] = or i32 [[NEG]], [[X]]
; CHECK-NEXT:    [[SHR:%.*]] = ashr i32 [[OR]], 31
; CHECK-NEXT:    store i32 [[OR]], i32* [[P:%.*]], align 4
; CHECK-NEXT:    ret i32 [[SHR]]
;
  %neg = sub i32 0, %x
  %or = or i32 %neg, %x
  %shr = ashr i32 %or, 31
  store i32 %or, i32* %p
  ret i32 %shr
}

define <4 x i32> @sub_ashr_or_i32_vec_undef1(<4 x i32> %x) {
; CHECK-LABEL: @sub_ashr_or_i32_vec_undef1(
; CHECK-NEXT:    [[SUB:%.*]] = sub <4 x i32> <i32 255, i32 255, i32 undef, i32 255>, [[X:%.*]]
; CHECK-NEXT:    [[SHR:%.*]] = ashr <4 x i32> [[SUB]], <i32 31, i32 31, i32 31, i32 31>
; CHECK-NEXT:    [[OR:%.*]] = or <4 x i32> [[SHR]], [[X]]
; CHECK-NEXT:    ret <4 x i32> [[OR]]
;
  %sub = sub <4 x i32> <i32 255, i32 255, i32 undef, i32 255>, %x
  %shr = ashr <4 x i32> %sub, <i32 31, i32 31, i32 31, i32 31>
  %or = or <4 x i32> %shr, %x
  ret <4 x i32> %or
}

define <4 x i32> @sub_ashr_or_i32_vec_undef2(<4 x i32> %x) {
; CHECK-LABEL: @sub_ashr_or_i32_vec_undef2(
; CHECK-NEXT:    [[SUB:%.*]] = sub nsw <4 x i32> <i32 255, i32 255, i32 255, i32 255>, [[X:%.*]]
; CHECK-NEXT:    [[SHR:%.*]] = ashr <4 x i32> [[SUB]], <i32 undef, i32 31, i32 31, i32 31>
; CHECK-NEXT:    [[OR:%.*]] = or <4 x i32> [[SHR]], [[X]]
; CHECK-NEXT:    ret <4 x i32> [[OR]]
;
  %sub = sub nsw <4 x i32> <i32 255, i32 255, i32 255, i32 255>, %x
  %shr = ashr <4 x i32> %sub, <i32 undef, i32 31, i32 31, i32 31>
  %or = or <4 x i32> %shr, %x
  ret <4 x i32> %or
}

define i32 @sub_ashr_or_i32_shift_wrong_bit(i32 %x, i32 %y) {
; CHECK-LABEL: @sub_ashr_or_i32_shift_wrong_bit(
; CHECK-NEXT:    [[SUB:%.*]] = sub nsw i32 [[Y:%.*]], [[X:%.*]]
; CHECK-NEXT:    [[SHR:%.*]] = ashr i32 [[SUB]], 11
; CHECK-NEXT:    [[OR:%.*]] = or i32 [[SHR]], [[X]]
; CHECK-NEXT:    ret i32 [[OR]]
;
  %sub = sub nsw i32 %y, %x
  %shr = ashr i32 %sub, 11
  %or = or i32 %shr, %x
  ret i32 %or
}