[X86] Remove flag setting ISD nodes from computeKnownBitsForTargetNode

Summary:
The flag result is an i32 type, but it's really only used for connectivity. I don't think anything even assumes a particular format; we never do any real operations on it, so known bits don't help us optimize anything.

My main motivation is that the UMUL behavior is actually wrong. I was going to fix this in D36654, but then realized there was just no reason for it to be here.
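
For illustration only (this sketch is not part of the patch): one of these opcodes, say X86ISD::ADD, produces two results, the arithmetic value and an i32 flag result, and the removed code answered known-bits queries for that second result. The names DAG, DL, LHS, and RHS below are assumed to be in scope inside X86 lowering code, and the out-parameter form of computeKnownBits matches the API of that era.

  // Hypothetical sketch: build an X86ISD::ADD with a value result and an
  // i32 flag result, then query known bits on the flag (result number 1).
  SDVTList VTs = DAG.getVTList(MVT::i32, MVT::i32);
  SDValue Add = DAG.getNode(X86ISD::ADD, DL, VTs, LHS, RHS);

  KnownBits Known;
  DAG.computeKnownBits(Add.getValue(1), Known);
  // Before this patch, Known.Zero reported every bit above bit 0 as zero.
  // No combine ever inspects the known bits of the flag result, so that
  // answer was never used, and for UMUL it was not even correct.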

Reviewers: RKSimon, zvi, spatel

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36657

llvm-svn: 310845
Craig Topper 2017-08-14 15:28:49 +00:00
parent b9e3e111ad
commit c0471829b1
1 changed file with 0 additions and 15 deletions

@@ -26915,21 +26915,6 @@ void X86TargetLowering::computeKnownBitsForTargetNode(const SDValue Op,
   Known.resetAll();
   switch (Opc) {
   default: break;
-  case X86ISD::ADD:
-  case X86ISD::SUB:
-  case X86ISD::ADC:
-  case X86ISD::SBB:
-  case X86ISD::SMUL:
-  case X86ISD::UMUL:
-  case X86ISD::INC:
-  case X86ISD::DEC:
-  case X86ISD::OR:
-  case X86ISD::XOR:
-  case X86ISD::AND:
-    // These nodes' second result is a boolean.
-    if (Op.getResNo() == 0)
-      break;
-    LLVM_FALLTHROUGH;
   case X86ISD::SETCC:
     Known.Zero.setBitsFrom(1);
     break;