Commit Graph

92 Commits

Author SHA1 Message Date
Daniel Berlin 2c75c63063 InstCombine: Use the new SimplifyQuery versions of Simplify*. Use AssumptionCache, DominatorTree, TargetLibraryInfo everywhere.
llvm-svn: 301464
2017-04-26 20:56:07 +00:00
Craig Topper b45eabcf82 [ValueTracking] Introduce a KnownBits struct to wrap the two APInts for computeKnownBits
This patch introduces a new KnownBits struct that wraps the two APInts used by computeKnownBits. This allows us to treat them as more of a unit.

Initially I've just altered the signatures of computeKnownBits and InstCombine's simplifyDemandedBits to pass a KnownBits reference instead of two separate APInt references. I'll do the same for the SelectionDAG versions of computeKnownBits/simplifyDemandedBits in a separate patch.

I've added a constructor that allows initializing both APInts to the same bit width with a starting value of 0. This reduces the repeated pattern of initializing both APInts. One place default-constructed the APInts, so I added a default constructor for those cases.

Going forward I would like to add more methods that will work on the pairs. For example trunc, zext, and sext occur on both APInts together in several places. We should probably add a clear method that can be used to clear both pieces. Maybe a method to check for conflicting information. A method to return (Zero|One) so we don't write it out everywhere. Maybe a method for (Zero|One).isAllOnesValue() to determine if all bits are known. I'm sure there are many other methods we can come up with.
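As a rough illustration of the idea (a minimal standalone sketch, not LLVM's actual class, which wraps two arbitrary-width APInts), bundling the Zero/One masks in one struct gives the proposed pair-wise helpers a natural home:

  #include <cassert>
  #include <cstdint>

  // Simplified stand-in: the real KnownBits wraps two APInts, not uint64_t.
  struct KnownBitsSketch {
    uint64_t Zero = 0;     // bits known to be 0
    uint64_t One = 0;      // bits known to be 1
    unsigned BitWidth = 0;

    KnownBitsSketch() = default;  // covers the default-constructed case
    explicit KnownBitsSketch(unsigned Width) : BitWidth(Width) {}

    // The kinds of pair-wise helpers proposed above.
    bool hasConflict() const { return (Zero & One) != 0; }
    bool isAllKnown() const {
      uint64_t Mask = BitWidth >= 64 ? ~0ULL : ((1ULL << BitWidth) - 1);
      return ((Zero | One) & Mask) == Mask;
    }
    void clearAll() { Zero = One = 0; }
  };

  int main() {
    KnownBitsSketch K(8);
    K.One = 0x0F;  // low nibble known to be 1
    K.Zero = 0xF0; // high nibble known to be 0
    assert(!K.hasConflict() && K.isAllKnown());
    K.clearAll();
    assert(!K.isAllKnown());
    return 0;
  }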

Differential Revision: https://reviews.llvm.org/D32376

llvm-svn: 301432
2017-04-26 16:39:58 +00:00
Sanjay Patel cc663b82fa [InstCombine] function names start with lower-case letter; NFC
Forgot to make this fix with the signature change in r300911.

llvm-svn: 300912
2017-04-20 22:37:01 +00:00
Sanjay Patel c9485ca895 [InstCombine] allow shl+shr demanded bits folds with splat constants
llvm-svn: 300911
2017-04-20 22:33:54 +00:00
Craig Topper fb71b7d3e0 [InstCombine] Support folding a subtract with a constant LHS into a phi node
We currently only support folding a subtract into a select but not a PHI. This fixes that.

I had to fix an assumption in FoldOpIntoPhi that assumed the PHI node was always in operand 0. Now we pass it in like we do for FoldOpIntoSelect. But we still require some dancing to find the Constant when we create the BinOp or ConstantExpr. This code is based on what we do for selects.

Since I touched all call sites, this also renames FoldOpIntoPhi to foldOpIntoPhi to match coding standards.

Differential Revision: https://reviews.llvm.org/D31686

llvm-svn: 300363
2017-04-14 19:20:12 +00:00
Craig Topper b0076fe8b4 [InstCombine] Move portion of SimplifyDemandedUseBits that deals with instructions with multiple uses out to a separate method. NFCI
llvm-svn: 300082
2017-04-12 18:05:21 +00:00
Craig Topper 7226d796aa [InstCombine] Remove redundant combine from visitAnd
This combine is fully handled by SimplifyDemandedInstructionBits as of r299658, where I fixed that code to ensure the Add/Sub had only a single user; otherwise it would fire and create additional instructions. That fix resulted in an improvement to the code generated for tsan, which is why I committed it before deleting this combine.

Differential Revision: https://reviews.llvm.org/D31543

llvm-svn: 299704
2017-04-06 20:41:48 +00:00
Craig Topper 79120e80b8 Revert r299337 "[InstCombine] Remove redundant combine from visitAnd"
One of the tsan bots started failing at this commit. I don't see anything obviously wrong with the commit, so I'm reverting to see if the bot recovers.

Failing log: http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-autoconf/builds/6792

llvm-svn: 299366
2017-04-03 17:22:23 +00:00
Craig Topper d0b053d229 [InstCombine] Make foldOpWithConstantIntoOperand take a BinaryOperator instead of a generic Instruction.
It blindly assumes there are two operands so make it explicit.

llvm-svn: 299351
2017-04-03 07:08:08 +00:00
Craig Topper 70e4f434ae [InstCombine] Make InstCombiner::OptAndOp take a BinaryOperator instead of an Instruction.
The callers have already performed the necessary cast before calling. This allows us to remove a comment that says the instruction must be a BinaryOperator and make it explicit in the argument type.

Had to add a default case to the switch because BinaryOperator::getOpcode() returns a BinaryOps enum.

llvm-svn: 299339
2017-04-02 17:57:30 +00:00
Craig Topper d133591a7e [InstCombine] Remove redundant combine from visitAnd
As far as I can tell this combine is fully handled by SimplifyDemandedInstructionBits.

I was only looking at this because it is the only user of APIntOps::isShiftedMask, which is itself broken, as demonstrated by r299187. I was going to fix isShiftedMask and needed to make sure we had coverage for the new cases it would expose to this combine, but it looks like we can nuke the combine instead.

Differential Revision: https://reviews.llvm.org/D31543

llvm-svn: 299337
2017-04-02 17:34:30 +00:00
Craig Topper 47596dd4cc [InstCombine] Change the interface of SimplifyDemandedBits so that it takes the instruction and operand instead of the Use.
The first thing it did was get the User for the Use in order to get the instruction back. That requires looking through the Uses for the User via the waymarking walk. That's pretty fast, but it's probably still better to just pass the Instruction we already had.

llvm-svn: 298772
2017-03-25 06:52:52 +00:00
Adrian Prantl 47ea6478ed Salvage debug info from instructions about to be deleted
[Reapplies r297971 and punts on finding a better API for findDbgValues()]

This patch improves debug info quality in InstCombine by looking at
values that are about to be deleted, checking whether there are any
dbg.value intrinsics referring to them, and potentially encoding the
semantics of the deleted instruction into the dbg.value's
DIExpression.

In the example in the testcase (which was extracted from XNU) there is a sequence of

 %4 = load %struct.entry*, %struct.entry** %next2, align 8, !dbg !41
 %5 = bitcast %struct.entry* %4 to i8*, !dbg !42
 %add.ptr4 = getelementptr inbounds i8, i8* %5, i64 -8, !dbg !43
 %6 = bitcast i8* %add.ptr4 to %struct.entry*, !dbg !44
 call void @llvm.dbg.value(metadata %struct.entry* %6, i64 0, metadata !20, metadata !21), !dbg !34

When these instructions are eliminated by instcombine one after
another, we can still salvage the otherwise dead debug info:

- Bitcasts have no effect, so have the dbg.value point to operand(0)
- Loads can be expressed via a DW_OP_deref
- Constant gep instructions can be replaced by DWARF expression arithmetic

The API introduced by this patch is not specific to instcombine and
can be useful in other places, too.
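To make the three salvage rules above concrete, here is a toy model (illustrative only; it is not the API added by this patch): a DIExpression is treated as a flat list of DWARF opcodes, and the semantics of a deleted load or constant gep are prepended to it.

  #include <cstdint>
  #include <vector>

  // Opcode values from the DWARF spec, for illustration.
  constexpr uint64_t DW_OP_deref  = 0x06;
  constexpr uint64_t DW_OP_constu = 0x10;
  constexpr uint64_t DW_OP_minus  = 0x1c;
  constexpr uint64_t DW_OP_plus   = 0x22;

  using DIExprSketch = std::vector<uint64_t>;

  // Deleted load: the variable's value is now one dereference away.
  DIExprSketch salvageLoad(DIExprSketch Expr) {
    Expr.insert(Expr.begin(), DW_OP_deref);
    return Expr;
  }

  // Deleted gep with a constant byte offset: express it as DWARF arithmetic.
  DIExprSketch salvageConstantGEP(DIExprSketch Expr, int64_t Offset) {
    uint64_t Magnitude = Offset < 0 ? uint64_t(-Offset) : uint64_t(Offset);
    DIExprSketch Prefix = {DW_OP_constu, Magnitude,
                           Offset < 0 ? DW_OP_minus : DW_OP_plus};
    Expr.insert(Expr.begin(), Prefix.begin(), Prefix.end());
    return Expr;
  }

  // Bitcasts need no expression change at all; the dbg.value is simply
  // re-pointed at the bitcast's source operand.

  int main() {
    DIExprSketch E;                // empty expression: "the value itself"
    E = salvageConstantGEP(E, -8); // the gep from the example above
    E = salvageLoad(E);            // then the load feeding it
    // E is now {DW_OP_deref, DW_OP_constu, 8, DW_OP_minus}.
    return E.size() == 4 ? 0 : 1;
  }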

rdar://problem/30725338

Differential Revision: https://reviews.llvm.org/D30919

llvm-svn: 297994
2017-03-16 21:14:09 +00:00
Adrian Prantl fa9e84eb6d Revert commit r297971 because of issues reported by msan.
llvm-svn: 297982
2017-03-16 20:11:54 +00:00
Adrian Prantl 4377314a98 Salvage debug info from instructions about to be deleted
This patch improves debug info quality in InstCombine by looking at
values that are about to be deleted, checking whether there are any
dbg.value intrinsics referring to them, and potentially encoding the
semantics of the deleted instruction into the dbg.value's
DIExpression.

In the example in the testcase (which was extracted from XNU) there is a sequence of

  %4 = load %struct.entry*, %struct.entry** %next2, align 8, !dbg !41
  %5 = bitcast %struct.entry* %4 to i8*, !dbg !42
  %add.ptr4 = getelementptr inbounds i8, i8* %5, i64 -8, !dbg !43
  %6 = bitcast i8* %add.ptr4 to %struct.entry*, !dbg !44
  call void @llvm.dbg.value(metadata %struct.entry* %6, i64 0, metadata !20, metadata !21), !dbg !34

When these instructions are eliminated by instcombine one after
another, we can still salvage the otherwise dead debug info:

- Bitcasts have no effect, so have the dbg.value point to operand(0)
- Loads can be expressed via a DW_OP_deref
- Constant gep instructions can be replaced by DWARF expression arithmetic

The API introduced by this patch is not specific to instcombine and
can be useful in other places, too.

rdar://problem/30725338

Differential Revision: https://reviews.llvm.org/D30919

llvm-svn: 297971
2017-03-16 18:22:52 +00:00
Yaxun Liu ba01ed00fe Fix invalid addrspacecast due to combining alloca with global var
For function-scope variables with a large initializer list, the frontend usually
generates a global variable to hold the initializer, then emits a
memcpy intrinsic to initialize the alloca. InstCombiner::visitAllocaInst
identifies such allocas, which are accessed only by reading, and replaces
them with the global variable. This is done by casting the global variable
to the type of the alloca and replacing all references.

However, when the global variable is in a different address space that
is disjoint from addr space 0 (e.g. for IR generated from OpenCL, a
global variable cannot be in the private addr space, i.e. addr space 0), casting
the global variable to addr space 0 results in invalid IR for certain
targets (e.g. amdgpu).

To fix this issue, when the global variable is not in addr space 0,
instead of casting it to addr space 0, this patch chases down the uses
of the alloca until reaching the load instructions, then replaces loads from
the alloca with loads from the global variable. If bitcasts and GEPs are
encountered during the chase, new bitcasts and GEPs based on the global
variable are generated and used in the load instructions.

Differential Revision: https://reviews.llvm.org/D27283

llvm-svn: 294786
2017-02-10 21:46:07 +00:00
Igor Laevsky 900ffa34c8 [InstCombineCalls] Unfold element atomic memcpy instruction
Differential Revision: https://reviews.llvm.org/D28909

llvm-svn: 294453
2017-02-08 14:32:04 +00:00
David Blaikie 4c01af203e Fix the -Werror build for some sign-comparisons
llvm-svn: 294331
2017-02-07 18:58:17 +00:00
Davide Italiano 2133bf5562 [InstCombine] Make max size array combine a tunable.
Requested by Sanjoy/Hal a while ago, and forgotten by me
(r283612).

llvm-svn: 294323
2017-02-07 17:56:50 +00:00
Sanjay Patel 73fc8ddb06 [InstCombine] fix operand-complexity-based canonicalization (PR28296)
The code comments didn't match the code logic, and we didn't actually distinguish the fake unary (not/neg/fneg) 
operators from arguments. Adding another level to the weighting scheme provides more structure and can help 
simplify the pattern matching in InstCombine and other places.

I fixed regressions that would have shown up from this change in:
rL290067
rL290127

But that doesn't mean there are no pattern-matching logic holes left; some combines may just be missing regression tests.

Should fix:
https://llvm.org/bugs/show_bug.cgi?id=28296

Differential Revision: https://reviews.llvm.org/D27933

llvm-svn: 294049
2017-02-03 21:43:34 +00:00
Davide Italiano aec4617dc8 [Instcombine] Combine consecutive identical fences
Differential Revision:  https://reviews.llvm.org/D29314

llvm-svn: 293661
2017-01-31 18:09:05 +00:00
Sanjay Patel 2217f75ad1 fix formatting; NFC
llvm-svn: 293652
2017-01-31 17:25:42 +00:00
Sanjay Patel db0938fd9a [InstCombine] add a wrapper for a common pair of transforms; NFCI
Some of the callers are artificially limiting this transform to integer types;
this should make it easier to incrementally remove that restriction.

llvm-svn: 291620
2017-01-10 23:49:07 +00:00
Craig Topper 28ec3460e4 [InstCombine] Remove a piece of a comment that said that InstCombiner contains pass infrastructure. That hasn't been true since r226618. NFC
llvm-svn: 290648
2016-12-28 03:12:42 +00:00
David Majnemer b0761a0c1b Revert "[InstCombine] New opportunities for FoldAndOfICmp and FoldXorOfICmp"
This reverts commit r289813; it caused PR31449.

llvm-svn: 290266
2016-12-21 19:21:59 +00:00
Daniel Jasper aec2fa352f Revert @llvm.assume with operand bundles (r289755-r289757)
This creates non-linear behavior in the inliner (see more details in
r289755's commit thread).

llvm-svn: 290086
2016-12-19 08:22:17 +00:00
Ehsan Amiri 795b0671c5 [InstCombine] New opportunities for FoldAndOfICmp and FoldXorOfICmp
A number of new patterns for simplifying and/xor of icmp:

(icmp ne %x, 0) ^ (icmp ne %y, 0) => icmp ne %x, %y if the following is true:
1- (%x = and %a, %mask) and (%y = and %b, %mask)
2- %mask is a power of 2.

(icmp eq %x, 0) & (icmp ne %y, 0) => icmp ult %x, %y if the following is true:
1- (%x = and %a, %mask1) and (%y = and %b, %mask2)
2- Let %t be the smallest power of 2 where %mask1 & %t != 0. Then for any
   %s that is a power of 2 and %s & %mask2 != 0, we must have %s <= %t.
For example, if %mask1 = 24 and %mask2 = 16, then %t = 8 and choosing %s = 16
violates condition (2) above, so this optimization cannot be applied.
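As a quick sanity check of the first identity (a standalone brute-force program added here for illustration; it is not part of the patch), for every power-of-2 mask and all 8-bit inputs, (x != 0) ^ (y != 0) agrees with x != y once both sides are masked:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned Shift = 0; Shift < 8; ++Shift) {
      uint8_t Mask = uint8_t(1u << Shift); // a power-of-2 mask
      for (unsigned A = 0; A < 256; ++A) {
        for (unsigned B = 0; B < 256; ++B) {
          uint8_t X = uint8_t(A) & Mask;   // %x = and %a, %mask
          uint8_t Y = uint8_t(B) & Mask;   // %y = and %b, %mask
          bool Xor = (X != 0) ^ (Y != 0);  // (icmp ne %x, 0) ^ (icmp ne %y, 0)
          assert(Xor == (X != Y));         // icmp ne %x, %y
        }
      }
    }
    return 0;
  }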

llvm-svn: 289813
2016-12-15 12:25:13 +00:00
Hal Finkel 3ca4a6bcf1 Remove the AssumptionCache
After r289755, the AssumptionCache is no longer needed. Variables affected by
assumptions are now found by using the new operand-bundle-based scheme. This
new scheme is more computationally efficient, and it also requires much less
code...

llvm-svn: 289756
2016-12-15 03:02:15 +00:00
Robert Lougher 2428a4050f [InstCombine] Merge debug locations when folding through a phi node
If all the operands to a phi node are of the same operation, instcombine
will try to pull them through the phi node, combining them into a single
operation. When it does this, the debug location of the operation should
be the merge of the debug locations of the phi node arguments.

Patch 2 of 8 for D26256.  Folding of a binary operation.

Differential Revision: https://reviews.llvm.org/D26256

llvm-svn: 289679
2016-12-14 17:49:19 +00:00
Sanjay Patel aa8b28e509 [InstCombine] allow more narrowing transforms for logic ops
We had a limited version of this for scalar 'and'; this expands
the transform to 'or' and 'xor' and allows vector types too.

llvm-svn: 288273
2016-11-30 20:48:54 +00:00
Sanjay Patel 611f9f92fc [InstCombine] handle simple vector integer constants in IsFreeToInvert
llvm-svn: 285318
2016-10-27 17:30:50 +00:00
Guozhi Wei ae541f6a71 [InstCombine] Resubmit the combine of A->B->A BitCast and fix for pr27996
The original patch for the A->B->A BitCast optimization was reverted by r274094 because it could cause an infinite loop inside the compiler (https://llvm.org/bugs/show_bug.cgi?id=27996).

The problem is with the following code:

xB = load (type B);
xA = load (type A);
+yA = (A)xB;         // B -> A
+zAn = PHI[yA, xA];  // PHI
+zBn = (B)zAn;       // A -> B
store zAn;
store zBn;

optimizeBitCastFromPhi generates

+zBn = (B)zAn; // A -> B

and expects it will be combined with the following store instruction into another store:

store zAn 

Unfortunately, before combineStoreToValueType is called on the store instruction, optimizeBitCastFromPhi is called again on the new BitCast, and this pattern repeats indefinitely.

optimizeBitCastFromPhi only generates BitCasts for load/store instructions, only a BitCast before a store can cause the re-execution of optimizeBitCastFromPhi, and a BitCast before a store can easily be handled by InstCombineLoadStoreAlloca.cpp. So the solution to the problem is: if all users of a CI are store instructions, we should not run optimizeBitCastFromPhi on it. Then optimizeBitCastFromPhi will not be called on the new BitCast instructions.

Differential Revision: https://reviews.llvm.org/D23896

llvm-svn: 285116
2016-10-25 20:43:42 +00:00
Sanjay Patel f7b851fe84 [InstCombine] allow non-splat folds of select cond (ext X), C
llvm-svn: 282906
2016-09-30 19:49:22 +00:00
Sanjay Patel 453ceff261 [InstCombine] fix function names; NFC
Also, make foldSelectExtConst() a member of InstCombiner, remove
unnecessary parameters from its interface, and group visitSelectInst
helpers together in the header file.

llvm-svn: 282796
2016-09-29 22:18:30 +00:00
Sanjay Patel 10494b2682 [InstCombine] add helper functions for visitICmpInst(); NFCI
llvm-svn: 281743
2016-09-16 16:10:22 +00:00
Sanjay Patel 8da42cc5d3 [InstCombine] move folds for icmp (sh C2, Y), C1 in with other icmp+sh folds; NFCI
llvm-svn: 281672
2016-09-15 22:26:31 +00:00
Sanjay Patel af91d1f81e [InstCombine] allow icmp (shr/shl) folds for vectors
These 2 helper functions were already using APInt internally, so just
change the API and caller to allow folds for splats. The scalar
regression tests look quite thorough, so I just added a couple of
tests to prove that vectors are handled too.

These folds should be grouped with the other cmp+shift folds though.
That can be an NFC follow-up.

llvm-svn: 281663
2016-09-15 21:35:30 +00:00
Sanjay Patel 06b127a771 [InstCombine] add helper function for foldICmpWithConstant; NFC
This is a big glob of transforms that probably should work for vectors,
but currently they are disallowed because of ConstantInt guards.

llvm-svn: 281614
2016-09-15 14:37:50 +00:00
Sanjay Patel 3151dec7f1 [InstCombine] add helper function for foldICmpUsingKnownBits; NFCI
llvm-svn: 281217
2016-09-12 15:24:31 +00:00
Sanjay Patel 5352331716 fix formatting/typos; NFC
llvm-svn: 281214
2016-09-12 14:25:46 +00:00
Sanjay Patel f58f68c891 [InstCombine] rename and reorganize some icmp folding functions; NFC
Everything under foldICmpInstWithConstant() should now be working for
splat vectors via m_APInt matchers. That is, I've removed all of the FIXMEs
that I added while cleaning that section up. Note that not all of the
associated FIXMEs in the regression tests are gone though, because some
of the tests require earlier folds that are still scalar-only. 

llvm-svn: 281139
2016-09-10 15:03:44 +00:00
Sanjay Patel 9b40f98357 [InstCombine] use m_APInt to allow icmp (and (sh X, Y), C2), C1 folds for splat constant vectors
llvm-svn: 280873
2016-09-07 22:33:03 +00:00
Sanjay Patel 85d79744df [InstCombine] change insertRangeTest() to use APInt instead of Constant; NFCI
This is prep work before changing the callers to also use APInt which will
allow folds for splat vectors. Currently, the callers have ConstantInt
guards in place, so no functional change intended with this commit.

llvm-svn: 280282
2016-08-31 19:49:56 +00:00
Sanjay Patel 14e0e18d76 [InstCombine] add helper function for icmp (and (sh X, Y), C2), C1 ; NFC
Like other recent changes near here, the goal is to allow vector types for
all of these folds. Splitting things up makes it easier to incrementally 
enhance the code and easier to read.

llvm-svn: 279851
2016-08-26 18:28:46 +00:00
Sanjay Patel d3c7bb28be [InstCombine] add helper function for folding of icmp (and X, C2), C; NFC
llvm-svn: 279834
2016-08-26 16:42:33 +00:00
Sanjay Patel 1655414903 [InstCombine] move foldICmpDivConstConst() contents to foldICmpDivConstant(); NFCI
There was no logic in foldICmpDivConstant, so no need for a separate function.
The code is directly copy/pasted, so further cleanups to follow.

llvm-svn: 279685
2016-08-24 23:03:36 +00:00
Sanjay Patel dcac0dfca9 [InstCombine] move foldICmpShrConstConst() contents to foldICmpShrConst(); NFCI
There will only be 3 lines of code in foldICmpShrConst() when the cleanup is done,
so it doesn't make much sense to have a separate function for a single fold.

llvm-svn: 279575
2016-08-23 21:25:13 +00:00
Sanjay Patel c9196c4488 [InstCombine] change param type from Instruction to BinaryOperator for icmp helpers; NFCI
This saves some casting in the helper functions and eases some further refactoring.

llvm-svn: 279478
2016-08-22 21:24:29 +00:00
Sanjay Patel a3f4f0828b [InstCombine] add helper functions for foldICmpWithConstant; NFCI
Besides breaking up a 700 line function to improve readability,
this sinks the 'FIXME: ConstantInt' check into each helper. So 
now we can independently break that restriction within any of the
helper functions.

As much as possible, the code was only {cut/paste/clang-format}'ed 
to minimize risk (no functional changes intended), so several more
readability improvements are still possible. 

llvm-svn: 278828
2016-08-16 17:54:36 +00:00
Sanjay Patel 1e5b2d1611 [InstCombine] use m_APInt in foldICmpWithConstant; NFCI
There's some formatting and pointer deref ugliness here that I intend to fix in
subsequent patches. The overall goal is to refactor the obnoxiously long switch
and incrementally remove the restriction to scalar types (allow folds for vector
splats). This patch introduces the use of m_APInt, which means the RHSV reference
is now a pointer (and may have matched a vector splat), but the check of 'RHS' 
remains, so vector folds are disallowed and no functional change is intended.
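For context, here is a small standalone example of the m_APInt matcher this refactoring leans on (written against the LLVM C++ API of roughly this era; ConstantVector::getSplat has since changed signature, so treat this as a sketch): the matcher binds the constant for both a scalar ConstantInt and a vector splat, which is what eventually lets these folds apply to splat vectors.

  #include "llvm/ADT/APInt.h"
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/PatternMatch.h"
  #include "llvm/IR/Type.h"
  #include "llvm/Support/raw_ostream.h"

  using namespace llvm;
  using namespace llvm::PatternMatch;

  int main() {
    LLVMContext Ctx;
    Type *I32 = Type::getInt32Ty(Ctx);
    Constant *Scalar = ConstantInt::get(I32, 42);
    Constant *Splat = ConstantVector::getSplat(4, ConstantInt::get(I32, 42));

    const APInt *C;
    if (match(Scalar, m_APInt(C)))
      errs() << "scalar matched: " << *C << "\n"; // binds 42
    if (match(Splat, m_APInt(C)))
      errs() << "splat matched: " << *C << "\n";  // also binds 42
    return 0;
  }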

llvm-svn: 278816
2016-08-16 16:08:11 +00:00