[X86] Turn setne X, signedmax into setgt signedmax, X in LowerVSETCC to avoid an invert

We won't be able to fold the constant pool load, but it's still better than materializing all-ones and XORing for the invert that PCMPEQ would require.

This will fix another regression from D42948.

llvm-svn: 325845
Craig Topper 2018-02-23 00:21:39 +00:00
parent 5c986b010b
commit 0dcc88a500
2 changed files with 22 additions and 3 deletions
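
The correctness of the transform rests on a simple identity: no signed value exceeds the signed maximum, so X != SMAX holds exactly when SMAX > X. A minimal scalar C++ sketch of that equivalence (not from the commit; purely illustrative):

#include <cassert>
#include <cstdint>
#include <limits>

int main() {
  const int32_t SMax = std::numeric_limits<int32_t>::max();
  // No int32_t value is greater than SMax, so "x != SMax" and
  // "SMax > x" agree for every x, including the boundary cases.
  for (int64_t i = SMax - 2; i <= SMax; ++i) {
    int32_t x = static_cast<int32_t>(i);
    assert((x != SMax) == (SMax > x));
  }
  return 0;
}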


@@ -18100,12 +18100,16 @@ static SDValue LowerVSETCC(SDValue Op, const X86Subtarget &Subtarget,
   }
 
   // If this is a SETNE against the signed minimum value, change it to SETGT.
+  // If this is a SETNE against the signed maximum value, change it to SETLT
+  // which will be swapped to SETGT.
   // Otherwise we use PCMPEQ+invert.
   APInt ConstValue;
   if (Cond == ISD::SETNE &&
-      ISD::isConstantSplatVector(Op1.getNode(), ConstValue) &&
-      ConstValue.isMinSignedValue()) {
-    Cond = ISD::SETGT;
+      ISD::isConstantSplatVector(Op1.getNode(), ConstValue)) {
+    if (ConstValue.isMinSignedValue())
+      Cond = ISD::SETGT;
+    else if (ConstValue.isMaxSignedValue())
+      Cond = ISD::SETLT;
   }
 
   // If both operands are known non-negative, then an unsigned compare is the
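
A hypothetical scalar model of the rewrite in the hunk above (the names here are illustrative, not the commit's code): an inequality against the signed min/max splat becomes a strict signed comparison, and the SETLT produced for the signed-max case is swapped to SETGT later because SSE only provides a greater-than vector compare (PCMPGT).

#include <cstdint>
#include <limits>

enum class CC { SETNE, SETGT, SETLT };

// Remap a SETNE against a splat constant so the later lowering can use
// PCMPGT instead of PCMPEQ plus an invert.
CC rewriteSetNE(int32_t SplatVal) {
  if (SplatVal == std::numeric_limits<int32_t>::min())
    return CC::SETGT;  // x != SMIN  <=>  x > SMIN
  if (SplatVal == std::numeric_limits<int32_t>::max())
    return CC::SETLT;  // x != SMAX  <=>  x < SMAX, swapped to SETGT later
  return CC::SETNE;    // anything else stays PCMPEQ+invert
}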


@@ -345,3 +345,18 @@ define <4 x i32> @ne_smin(<4 x i32> %x) {
   ret <4 x i32> %r
 }
 
+; Make sure we can efficiently handle ne smax by turning it into sgt. We can't
+; fold the constant pool load, but the alternative is a cmpeq+invert, which is
+; 3 instructions. The PCMPGT version is two instructions given sufficient
+; register allocation freedom to avoid the last mov to %xmm0 seen here.
+define <4 x i32> @ne_smax(<4 x i32> %x) {
+; CHECK-LABEL: ne_smax:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    movdqa {{.*#+}} xmm1 = [2147483647,2147483647,2147483647,2147483647]
+; CHECK-NEXT:    pcmpgtd %xmm0, %xmm1
+; CHECK-NEXT:    movdqa %xmm1, %xmm0
+; CHECK-NEXT:    retq
+  %cmp = icmp ne <4 x i32> %x, <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
+  %r = sext <4 x i1> %cmp to <4 x i32>
+  ret <4 x i32> %r
+}
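
For comparison, a hedged SSE2-intrinsics sketch (illustrative function names, not part of the test) of the two lowerings the comment above weighs against each other: the PCMPEQ route needs a compare, an all-ones materialization, and a PXOR, while the PCMPGT route is the single swapped compare plus the unfoldable constant load.

#include <emmintrin.h> // SSE2

// x != SMAX via PCMPEQ+invert: three instructions after the constant load
// (pcmpeqd to compare, pcmpeqd to materialize all-ones, pxor to invert).
__m128i ne_smax_eq_invert(__m128i x) {
  const __m128i smax = _mm_set1_epi32(2147483647);
  __m128i eq = _mm_cmpeq_epi32(x, smax);
  __m128i ones = _mm_cmpeq_epi32(eq, eq); // all-ones idiom
  return _mm_xor_si128(eq, ones);         // invert: eq -> ne
}

// x != SMAX via swapped PCMPGT: one compare, matching the CHECK lines above.
__m128i ne_smax_gt(__m128i x) {
  const __m128i smax = _mm_set1_epi32(2147483647);
  return _mm_cmpgt_epi32(smax, x); // SMAX > x  <=>  x != SMAX
}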