Commit Graph

3559 Commits

Author SHA1 Message Date
Nadav Rotem 93fa5ef957 Fix a bug in vectorization of if-converted reduction variables. If the
reduction variable was not used outside the loop, we ran into an endless
loop. This change checks whether we have found the original PHI.

llvm-svn: 169324
2012-12-04 22:40:22 +00:00
Shuxin Yang 73285933c9 For rdar://12329730, last piece.
This change attempts to simplify (X^Y) -> X or Y in the user's context if we know that
only bits from X or Y are demanded.

  A minimized case is provided below. This change will simplify "t >> 16" into "val1 >> 16"
  (the constant 1234 only affects the low 16 bits, so the high 16 bits of t equal those of val1).

  =============================================================
  unsigned foo (unsigned val1, unsigned val2) {
    unsigned t = val1 ^ 1234;
    return (t >> 16) | t; // NOTE: t is used more than once.
  }
  =============================================================

  Note that if the "t" were used only once, the expression would be finally optimized as well.
However, with with this change, the optimization will take place earlier.

  Reviewed by Nadav, Thanks a lot!

llvm-svn: 169317
2012-12-04 22:15:32 +00:00
Nadav Rotem a10b311aec Add support for reduction variables when IF-conversion is enabled.
llvm-svn: 169288
2012-12-04 18:17:33 +00:00
Nadav Rotem 628c2dba60 Add the last part that is needed for vectorization of if-converted code.
Added the code that actually performs the if-conversion during vectorization.

We can now vectorize this code:

for (int i=0; i<n; ++i) {
  unsigned k = 0;

  if (a[i] > b[i])   <------ IF inside the loop.
    k = k * 5 + 3;

  a[i] = k;          <---- K is a phi node that becomes vector-select.
}

llvm-svn: 169217
2012-12-04 06:15:11 +00:00
Shuxin Yang 86c0e232b7 rdar://12329730 (2nd part, revised)
The type of shift-right (logical or arithmetic) should remain unchanged
when transforming "X << C1 >> C2" into "X << (C1-C2)"

llvm-svn: 169209
2012-12-04 03:28:32 +00:00
Shuxin Yang 63e999edbf rdar://12329730 (2nd part)
This change tries to simplify E1 = "X >> C1 << C2" into:
  - E2 = "X << (C2 - C1)" if C2 > C1, or
  - E2 = "X >> (C1 - C2)" if C1 > C2, or
  - E2 = X if C1 == C2.

 Reviewed by Nadav. Thanks!

llvm-svn: 169182
2012-12-04 00:04:54 +00:00
Benjamin Kramer 47534c7440 SROA: Avoid struct and array types early to avoid creating an overly large integer type.
Fixes PR14465.

Differential Revision: http://llvm-reviews.chandlerc.com/D148

llvm-svn: 169084
2012-12-01 11:53:32 +00:00
Zhou Sheng 8e6d64a71d Revert previous check in r168581, r169079 as they are still in code review status.
llvm-svn: 169083
2012-12-01 10:54:28 +00:00
Zhou Sheng 13fb1ca44a The patch is to improve the memory footprint of pass GlobalOpt.
Also check in a test case that reproduces the issue, on which 'opt -globalopt' consumes 1.6GB of memory.
The cause of the large memory footprint is that the current GlobalOpt hoists and stores the leaf element constants into the global array one by one; in each iteration it recreates the global array initializer constant and leaves the old initializer alone. This may leave many obsolete constants behind.
For example:  we have global array @rom = global [16 x i32] zeroinitializer
After the first element value is hoisted and installed:   @rom = global [16 x i32] [ 1, 0, 0, ... ]
After the second element value is installed:  @rom = global [16 x i32] [ 1, 2, 0, 0, ... ]        // here the previous initializer is obsolete
...
When the transform is done, we are left with 15 obsolete, useless initializers.

llvm-svn: 169079
2012-12-01 04:38:53 +00:00
Evan Cheng 65df808f62 Fix logic to determine whether to turn a switch into a lookup table. When
the table cannot fit in registers (i.e., as a bitmap), do not emit the table
if it uses an illegal type.

rdar://12779436

llvm-svn: 168970
2012-11-30 02:02:42 +00:00
Shuxin Yang abcc370423 rdar://12100355 (part 1)
This revision attempts to recognize the following population-count pattern:

 while(a) { c++; ... ; a &= a - 1; ... },
  where <c> and <a> could be used multiple times in the loop body.

 TODO: On X86-64 and ARM, __builtin_ctpop() is not expanded to an efficient
instruction sequence; this needs to be improved in following commits.

Reviewed by Nadav, really appreciate!

llvm-svn: 168931
2012-11-29 19:38:54 +00:00
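
A minimal C sketch of the loop shape this recognizer targets (illustrative only; the function name is made up, not from the commit). After the transform, the count is equivalent to a population count of the original value:

  unsigned count_set_bits(unsigned a) {
    unsigned c = 0;
    while (a) {
      c++;
      a &= a - 1;   /* clears the lowest set bit each iteration */
    }
    return c;       /* same result as a popcount of the original a */
  }
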
Meador Inge 75798bb7fe instcombine: Migrate puts optimizations
This patch migrates the puts optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

All the simplifiers from simplify-libcalls have now been migrated to
instcombine.  Yay!  Just a few other bits to migrate (prototype attribute
inference and a few statistics) and simplify-libcalls can finally be put
to rest.

llvm-svn: 168925
2012-11-29 19:15:17 +00:00
Benjamin Kramer ba11a9892c Follow up to 168711: It's safe to base this analysis on the found compare, just return the value for the right predicate.
Thanks to Andy for catching this.

llvm-svn: 168921
2012-11-29 19:07:57 +00:00
Shuxin Yang f265351491 fix a typo
llvm-svn: 168909
2012-11-29 18:09:37 +00:00
Meador Inge f8e725081c instcombine: Migrate fputs optimizations
This patch migrates the fputs optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168893
2012-11-29 15:45:43 +00:00
Meador Inge bc84d1a4f5 instcombine: Migrate fwrite optimizations
This patch migrates the fwrite optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168892
2012-11-29 15:45:39 +00:00
Meador Inge 1009cecca0 instcombine: Migrate fprintf optimizations
This patch migrates the fprintf optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168891
2012-11-29 15:45:33 +00:00
Shuxin Yang 01ab5d718b Instruction::isAssociative() returns true for fmul/fadd if they are tagged with "unsafe" (fast-math) mode.
Approved by: Eli and Michael.

llvm-svn: 168848
2012-11-29 01:47:31 +00:00
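
A small C illustration (not from the commit) of why reassociating fadd is only legal in unsafe/fast-math mode; floating-point addition is not associative:

  #include <stdio.h>

  int main(void) {
    double a = 1e20, b = -1e20, c = 3.14;
    printf("%g\n", (a + b) + c);   /* prints 3.14 */
    printf("%g\n", a + (b + c));   /* prints 0: b + c rounds back to -1e20 */
    return 0;
  }
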
Patrik Hägglund 3eb16c543e Add error handling in getInt.
Accordingly, update a testcase with a broken datalayout string.

Also, we never parse negative numbers, because '-' is used as a
separator. Therefore, use unsigned as result type.

llvm-svn: 168785
2012-11-28 12:13:12 +00:00
Hal Finkel 88ee6b0082 BBVectorize: Correctly merge SubclassOptionalData
When two instructions are combined into a vector instruction,
the resulting instruction must have the most-conservative flags.

llvm-svn: 168765
2012-11-28 03:04:10 +00:00
Meador Inge f1bc9e7431 instcombine: Don't replace all uses for instructions with no uses
My commit to migrate the printf simplifiers from the simplify-libcalls pass
in r168604 introduced a regression reported by Duncan [1].  The problem
is that in some cases the library call simplifier can return a new value
that has no uses and the new value's type is different than the old value's
type (which is fine because there are no uses).  The specific case that
triggered the bug looked something like:

   declare void @printf(i8*, ...)
   ...
   call void (i8*, ...)* @printf(i8* %fmt)

Which we want to optimize into:

   call i32 @putchar(i32 104)

However, the code was attempting to replace all uses of the printf with
the putchar and the types differ, hence a crash.  This is fixed by *just*
deleting the original instruction when there are no uses.  The old
simplify-libcalls pass is already doing something similar.

[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-November/056338.html

llvm-svn: 168716
2012-11-27 18:52:49 +00:00
Benjamin Kramer e20e124280 SCEV: Even if the latch terminator is foldable we can't deduce the result of an unrelated condition with it.
Fixes PR14432.

llvm-svn: 168711
2012-11-27 18:16:32 +00:00
Meador Inge 4ae8b684f5 Move sprintf simplifier tests to test/Transforms/InstCombine
The tests from SPrintF.ll should have been migrated to sprintf-1.ll in
r168677, but I forgot to do it.

llvm-svn: 168702
2012-11-27 15:35:58 +00:00
Bill Wendling ee5984df39 Remove the dependent libraries feature.
The dependent libraries feature was never used and has bit-rotted. Remove it.

llvm-svn: 168694
2012-11-27 09:55:56 +00:00
NAKAMURA Takumi 96c39f733f llvm/test/Transforms/SimplifyLibCalls: FileCheck-ize 3 tests.
llvm-svn: 168691
2012-11-27 08:18:23 +00:00
NAKAMURA Takumi 62be55783d llvm/test/Transforms/SimplifyLibCalls/SPrintF.ll: Handle @sprintf() with -instcombine, not -simplify-libcalls.
llvm-svn: 168690
2012-11-27 08:18:15 +00:00
NAKAMURA Takumi eb42eebee8 llvm/test/Transforms/SimplifyLibCalls/SPrintF.ll: Fix datalayout since r168516.
llvm-svn: 168689
2012-11-27 08:18:08 +00:00
NAKAMURA Takumi fc2bdccfcd Trailing linefeeds.
llvm-svn: 168688
2012-11-27 08:17:58 +00:00
NAKAMURA Takumi 5f7c2d80d8 test/Transforms/SimplifyLibCalls/SPrintF.ll: Suppress this for now. r168677 unveiled another failure.
FYI, this test makes no sense with "not grep"... I saw "assertion failure" in stderr.

llvm-svn: 168679
2012-11-27 06:42:48 +00:00
Meador Inge 25c9b3b6e4 instcombine: Migrate sprintf optimizations
This patch migrates the sprintf optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168677
2012-11-27 05:57:54 +00:00
Michael Ilseman 6cdacff2d0 Fast-math test for SimplifyInstruction: fold multiply by 0
Applied the patch, rather than committing it.

llvm-svn: 168656
2012-11-27 01:00:22 +00:00
Eli Friedman b14873c4f1 Get rid of the getPointeeAlignment helper function from
InstCombineLoadStoreAlloca.cpp, which had many issues.
(At least two bugs were noted on llvm-commits, and it was overly conservative.)
Instead, use getOrEnforceKnownAlignment.

llvm-svn: 168629
2012-11-26 23:04:53 +00:00
Shuxin Yang 6ea79e864d rdar://12329730 (defect 2)
Enhancement to InstCombine. Try to catch this opportunity:
  
 ---------------------------------------------------------------
 ((X^C1) >> C2) ^ C3  => (X>>C2) ^ ((C1>>C2)^C3)
  where the subexpression "X ^ C1" has more than one use, and
  "(X^C1) >> C2" has a single use.
 ---------------------------------------------------------------- 

 Reviewed by Nadav (with minor change per his request).

llvm-svn: 168615
2012-11-26 21:44:25 +00:00
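
A quick C check of the identity for logical shifts, using hypothetical constants (C1 = 0xFF00, C2 = 8, C3 = 0x0F); this is only an illustration of the algebra, not the InstCombine code:

  #include <assert.h>

  int main(void) {
    for (unsigned x = 0; x < (1u << 20); ++x)
      assert((((x ^ 0xFF00u) >> 8) ^ 0x0Fu) ==
             ((x >> 8) ^ ((0xFF00u >> 8) ^ 0x0Fu)));
    return 0;
  }
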
Meador Inge 08ca115abd instcombine: Migrate printf optimizations
This patch migrates the printf optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168604
2012-11-26 20:37:20 +00:00
Meador Inge 604937d1cc instcombine: Migrate toascii optimizations
This patch migrates the toascii optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168580
2012-11-26 03:38:52 +00:00
Meador Inge a62a39e0e9 instcombine: Migrate isascii optimizations
This patch migrates the isascii optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168579
2012-11-26 03:10:07 +00:00
Meador Inge 9a59ab6133 instcombine: Migrate isdigit optimizations
This patch migrates the isdigit optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168578
2012-11-26 02:31:59 +00:00
Meador Inge 24d134c375 Fix bogus comment; no functional change.
llvm-svn: 168575
2012-11-26 00:25:33 +00:00
Meador Inge a0b6d87879 instcombine: Migrate *abs optimizations
This patch migrates the *abs optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168574
2012-11-26 00:24:07 +00:00
Meador Inge 7415f8403d instcombine: Migrate ffs* optimizations
This patch migrates the ffs* optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168571
2012-11-25 20:45:27 +00:00
Nadav Rotem ea3824f160 Add support for pointer induction variables even when there is no integer induction variable.
llvm-svn: 168558
2012-11-25 08:41:35 +00:00
Patrik Hägglund 59189597de Disallow the undocumented practice of starting the datalayout string with '-'.
Update some test cases accordingly.

llvm-svn: 168516
2012-11-23 14:51:42 +00:00
Meador Inge 780a1861f1 Add more functions to the target library information.
I discovered a few more missing functions while migrating optimizations
from the simplify-libcalls pass to the instcombine (I already added some
in r167659).

llvm-svn: 168501
2012-11-22 15:36:42 +00:00
NAKAMURA Takumi 0a41c0bb18 llvm/test/Transforms/InstCombine/sdiv-1.ll: FileCheck-ize.
"not grep '-715827882'" performed as below...bad...

Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.

llvm-svn: 168430
2012-11-21 14:46:18 +00:00
Chandler Carruth 845b73c06f PR14055: Implement support for sub-vector operations in SROA.
Now if we can transform an alloca into a single vector value, but it has
subvector, non-element accesses, we form the appropriate shufflevectors
to allow SROA to proceed. This fixes PR14055 which pointed out a very
common pattern that SROA couldn't handle -- mixed vec3 and vec4
operations on a single alloca.

llvm-svn: 168418
2012-11-21 08:16:30 +00:00
Chandler Carruth 3e994a26e2 Fix PR14132 and handle OOB loads speculated throuh PHI nodes.
The issue is that we may end up with newly OOB loads when speculating
a load into the predecessors of a PHI node, and this confuses the new
integer splitting logic in some cases, triggering an assertion failure.
In fact, the branch in question must be dead code as it loads from
a too-narrow alloca. Add code to handle this gracefully and leave the
requisite FIXMEs for both optimizing more aggressively and doing more to
aid sanitizing invalid code which triggers these patterns.

llvm-svn: 168361
2012-11-20 10:02:19 +00:00
Chandler Carruth 18db795b05 Rework the rewriting of loads and stores for vector and integer allocas
to properly handle the combinations of these with split integer loads
and stores. This essentially replaces Evan's r168227 by refactoring the
code in a different way, and trying to mirror that refactoring in both
the load and store sides of the rewriting.

Generally speaking there was some really problematic duplicated code
here that led to poorly founded assumptions and then subtle bugs. Now
much of the code actually flows through and follows a more consistent
style and logical path. There is still a tiny bit of duplication on the
store side of things, but it is much less bad.

This also changes the logic to never re-use a load or store instruction
as that was simply too error prone in practice.

I've added a few tests (one a reduction of the one in Evan's original
patch, which happened to be the same as the report in PR14349). I'm
going to look at adding a few more tests for things I found and fixed in
passing (such as the volatile tests in the vectorizable predicate).

This patch has survived bootstrap, and modulo one bugfix survived
Duncan's test suite, but let me know if anything else explodes.

llvm-svn: 168346
2012-11-20 01:12:50 +00:00
Duncan Sands 20bd7fa0f7 Fix PR14060, an infinite loop in reassociate. The problem was that one of the
operands of the expression being written was wrongly thought to be reusable as
an inner node of the expression resulting in it turning up as both an inner node
*and* a leaf, creating a cycle in the def-use graph.  This would have caused the
verifier to blow up if things had gotten that far, however it managed to provoke
an infinite loop first.

llvm-svn: 168291
2012-11-18 19:27:01 +00:00
Nick Lewycky 3d35b45f8e Don't try to calculate the alignment of an unsized type. Fixes PR14371!
llvm-svn: 168280
2012-11-18 05:39:39 +00:00
Nadav Rotem c3c07e62e8 LoopVectorizer: Add initial support for pointer induction variables (for example: *dst++ = *src++).
At the moment we still require an integer induction variable (for example: i++).

llvm-svn: 168231
2012-11-17 00:27:03 +00:00
Evan Cheng f1b6177b62 Teach SROA rewriteVectorizedStoreInst to handle cases when the loaded value is narrower than the stored value. rdar://12713675
llvm-svn: 168227
2012-11-17 00:05:06 +00:00
Duncan Sands c41076c07c InstructionSimplify should be able to simplify A+B==B+A to 'true'
but wasn't due to the same logic bug that caused PR14361.

llvm-svn: 168186
2012-11-16 19:41:26 +00:00
Duncan Sands 1d3acddf0e Fix PR14361: wrong simplification of A+B==B+A. You may think that the old logic
replaced by this patch is equivalent to the new logic, but you'd be wrong, and
that's exactly where the bug was.  There's a similar bug in instsimplify which
manifests itself as instsimplify failing to simplify this, rather than doing it
wrong, see next commit.

llvm-svn: 168181
2012-11-16 18:55:49 +00:00
Hans Wennborg 18aa124075 Constant::isThreadDependent(): Use dyn_cast<Constant> instead of cast
It turns out that the operands of a Constant are not always themselves
Constant. For example, one of the operands of BlockAddress is
BasicBlock, which is not a Constant.

This should fix the dragonegg-x86_64-linux-gcc-4.6-test build which
broke in r168037.

llvm-svn: 168147
2012-11-16 10:33:25 +00:00
Hans Wennborg 709e015cf1 Make GlobalOpt be conservative with TLS variables (PR14309)
For global variables that get the same value stored into them
everywhere, GlobalOpt will replace them with a constant. The problem is
that a thread-local GlobalVariable looks like one value (the address of
the TLS var), but is different between threads.

This patch introduces Constant::isThreadDependent() which returns true
for thread-local variables and constants which depend on them (e.g. a GEP
into a thread-local array), and teaches GlobalOpt not to track such
values.

llvm-svn: 168037
2012-11-15 11:40:00 +00:00
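
A minimal C sketch of the hazard (hypothetical names): every store writes what looks like a single value, the address of the TLS variable, but that address differs per thread, so folding the global to a constant would be wrong:

  __thread int tls_counter;   /* thread-local: a distinct object per thread */
  int *global_ptr;

  void init(void) {
    global_ptr = &tls_counter;   /* the "same" stored value has a different address in each thread */
  }
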
Duncan Sands ac852c742f Fix a crash observed by Shuxin Yang. The issue here is that LinearizeExprTree,
the utility for extracting a chain of operations from the IR, thought that it
might as well combine any constants it came across (rather than just returning
them along with everything else).  On the other hand, the factorization code
would like to see the individual constants (this is quite reasonable: it is
much easier to pull a factor of 3 out of 2*3 than it is to pull it out of 6;
you may think 6/3 isn't so hard, but due to overflow it's not as easy to undo
multiplications of constants as it may at first appear).  This patch therefore
makes LinearizeExprTree stupider: it now leaves optimizing to the optimization
part of reassociate, and sticks to just analysing the IR.

llvm-svn: 168035
2012-11-15 09:58:38 +00:00
Jakub Staszak 0c4468b5e6 Remove DOS line endings.
llvm-svn: 167968
2012-11-14 20:18:34 +00:00
Duncan Sands db698d8a8a Fix the instcombine GEP index widening transform to work correctly for vector
getelementptrs.

llvm-svn: 167829
2012-11-13 13:01:00 +00:00
Duncan Sands e6beec6765 Relax the restrictions on vector of pointer types, and vector getelementptr.
Previously in a vector of pointers, the pointer couldn't be any pointer type,
it had to be a pointer to an integer or floating point type.  This is a hassle
for dragonegg because the GCC vectorizer happily produces vectors of pointers
where the pointer is a pointer to a struct or whatever.  Vector getelementptr
was restricted to just one index, but now that vectors of pointers can have
any pointer type it is more natural to allow arbitrary vector getelementptrs.
There is however the issue of struct GEPs, where if each lane chose different
struct fields then from that point on each lane will be working down into
unrelated types.  This seems like too much pain for too little gain, so when
you have a vector struct index all the elements are required to be the same.

llvm-svn: 167828
2012-11-13 12:59:33 +00:00
Alexey Samsonov cfd662f279 Figure out the <size> argument of llvm.lifetime intrinsics at the moment they are created (during function inlining)
llvm-svn: 167821
2012-11-13 07:15:32 +00:00
Meador Inge 193e035b9c instcombine: Migrate math library call simplifications
This patch migrates the math library call simplifications from the
simplify-libcalls pass into the instcombine library call simplifier.

I have typically migrated just one simplifier at a time, but the math
simplifiers are interdependent because:

   1. CosOpt, PowOpt, and Exp2Opt all depend on UnaryDoubleFPOpt.
   2. CosOpt, PowOpt, Exp2Opt, and UnaryDoubleFPOpt all depend on
      the option -enable-double-float-shrink.

These two factors made migrating each of these simplifiers individually
more of a pain than it would be worth.  So, I migrated them all together.

llvm-svn: 167815
2012-11-13 04:16:17 +00:00
Hal Finkel 2a1df367d4 BBVectorize: Don't vectorize vector-manipulation chains
Don't choose a vectorization plan containing only shuffles and
vector inserts/extracts. Due to imperfections in the cost model,
these can lead to infinite recursion.

llvm-svn: 167811
2012-11-13 03:12:40 +00:00
Shuxin Yang c94c3bb5d0 revert r167740
llvm-svn: 167787
2012-11-13 00:08:49 +00:00
Hal Finkel 3b79f55c5f BBVectorize: Only some insert element operand pairs are free.
This fixes another infinite recursion case when using target costs.
We can only replace insert element input chains that are pure (end
with inserting into an undef).

llvm-svn: 167784
2012-11-12 23:55:36 +00:00
Hal Finkel 9cf3372931 BBVectorize: Use a more sophisticated check for input cost
The old checking code, which assumed that input shuffles and insert-elements
could always be folded (and thus were free), is too simple.
This can only happen in special circumstances.
Using the simple check caused infinite recursion.

llvm-svn: 167750
2012-11-12 21:21:02 +00:00
Hal Finkel f8326b6052 BBVectorize: Check the types of compare instructions
The pass would previously assert when trying to compute the cost of
compare instructions with illegal vector types (like struct pointers).

llvm-svn: 167743
2012-11-12 19:41:38 +00:00
Shuxin Yang 1c442f5ec6 This change is to fix rdar://12571717 which is about assertion in Reassociate pass.
The assertion is triggered when the Reassociate pass tries to transform the expression
     ... + 2 * n * 3 + 2 * m + ...
  into:
     ... + 2 * (n*3 + m).

In the process of the transformation, a helper routine folds the constant 2*3 into 6,
confusing the optimizer, which is trying to eliminate the common factor 2 and can no
longer find it.

Review is pending, but I'd like to commit first in order to help those who are waiting
for this fix.

llvm-svn: 167740
2012-11-12 19:34:11 +00:00
Hal Finkel ef53df0f9f BBVectorize: Check the input types of shuffles for legality
This fixes a bug where shuffles were being fused such that the
resulting input types were not legal on the target. This would
occur only when both inputs and dependencies were also foldable
operations (such as other shuffles) and there were other connected
pairs in the same block.

llvm-svn: 167731
2012-11-12 14:50:59 +00:00
Meador Inge b3e91f6ae0 Normalize memcmp constant folding results.
The library call simplifier folds memcmp calls with all constant arguments
to a constant.  For example:

  memcmp("foo", "foo", 3) ->  0
  memcmp("hel", "foo", 3) ->  1
  memcmp("foo", "hel", 3) -> -1

The folding is implemented in terms of the system memcmp that LLVM gets
linked with.  It currently just blindly uses the value returned from
the system memcmp as the folded constant.

This patch normalizes the values returned from the system memcmp to
(-1, 0, 1) so that we get consistent results across multiple platforms.
The test cases were adjusted accordingly.

llvm-svn: 167726
2012-11-12 14:00:45 +00:00
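
A hedged C sketch of the normalization described above (the wrapper name is made up): whatever sign and magnitude the host memcmp returns is clamped to the canonical -1, 0, or 1:

  #include <string.h>

  int normalized_memcmp(const void *a, const void *b, size_t n) {
    int r = memcmp(a, b, n);
    return (r > 0) - (r < 0);   /* any positive -> 1, any negative -> -1, zero -> 0 */
  }
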
Meador Inge 9493eb9bc4 Remove hard-coded constant in Transforms/InstCombine/memcmp-1.ll
Transforms/InstCombine/memcmp-1.ll has a test case that looks like:

  @foo = constant [4 x i8] c"foo\00"
  @hel = constant [4 x i8] c"hel\00"

  ...

  %mem1 = getelementptr [4 x i8]* @hel, i32 0, i32 0
  %mem2 = getelementptr [4 x i8]* @foo, i32 0, i32 0
  %ret = call i32 @memcmp(i8* %mem1, i8* %mem2, i32 3)
  ret i32 %ret
  ; CHECK: ret i32 2

The folded return value (2 above) is computed using the system memcmp
that the compiler is linked with.  This can return different values on
different systems.  The test was originally written on an OS X 10.7.5
x86-64 box and passed.  However, it failed on one of the x86-64 FreeBSD
buildbots because the system memcmp on that machine returned a different
value (1 instead of 2).

I fixed the test by checking the folding constants with regexes.

llvm-svn: 167691
2012-11-11 07:10:25 +00:00
Meador Inge d4825780ed instcombine: Migrate memset optimizations
This patch migrates the memset optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167689
2012-11-11 06:49:03 +00:00
Meador Inge 9cf328b526 instcombine: Migrate memmove optimizations
This patch migrates the memmove optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167687
2012-11-11 06:22:40 +00:00
Meador Inge dd9234a10a instcombine: Migrate memcpy optimizations
This patch migrates the memcpy optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167686
2012-11-11 05:54:34 +00:00
Meador Inge 4d2827c10d instcombine: Migrate memcmp optimizations
This patch migrates the memcmp optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167683
2012-11-11 05:11:20 +00:00
Meador Inge 56edbc9323 instcombine: Migrate strstr optimizations
This patch migrates the strstr optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167682
2012-11-11 03:51:48 +00:00
Meador Inge bcd88ef764 instcombine: Migrate strcspn optimizations
This patch migrates the strcspn optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167675
2012-11-10 15:16:48 +00:00
Meador Inge 03be256db9 instcombine: Query target library information to gate libcall simplifications
Several of the simplifiers migrated from the simplify-libcalls pass to
the instcombine pass were not correctly checking the target library
information to gate the simplifications.  This patch ensures that the
check is made.

llvm-svn: 167660
2012-11-10 03:11:10 +00:00
Nadav Rotem 1cfef3e9ee Add support for memory runtime check. When we can, we calculate array bounds.
If the arrays are found to be disjoint then we run the vectorized version of
the loop. If they are not, we run the scalar code.

llvm-svn: 167608
2012-11-09 07:09:44 +00:00
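
A source-level view (hypothetical function) of a loop that needs the runtime check: a and b may overlap, so the vectorizer can emit a bounds/overlap test and branch between the vector body and a scalar fallback:

  void add_one(float *a, const float *b, int n) {
    for (int i = 0; i < n; ++i)
      a[i] = b[i] + 1.0f;   /* vector version runs only if [a, a+n) and [b, b+n) are disjoint */
  }
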
NAKAMURA Takumi 43ab4ef9ba llvm/ConstantFolding.cpp: Make ReadDataFromGlobal() and FoldReinterpretLoadFromConstPtr() Big-endian-aware.
llvm-svn: 167595
2012-11-08 20:34:25 +00:00
Meador Inge 489b5d645f instcombine: Migrate strspn optimizations
This patch migrates the strspn optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167568
2012-11-08 01:33:50 +00:00
Hans Wennborg c3c8d95c51 Only do switch-to-lookup table transformation when TargetTransformInfo
is available.

llvm-svn: 167552
2012-11-07 21:35:12 +00:00
Hans Wennborg 11d4ebe224 Fix bad test IR in switch_to_lookup_table.ll
llvm-svn: 167543
2012-11-07 18:38:24 +00:00
Nadav Rotem 0914f0b262 Cost Model: add tables for some avx type-conversion hacks.
llvm-svn: 167480
2012-11-06 19:33:53 +00:00
Nadav Rotem ae79765676 Code Model: Improve the accuracy of the zext/sext/trunc vector cost estimation.
llvm-svn: 167412
2012-11-05 22:20:53 +00:00
Nadav Rotem 7411623fd8 Implement the cost of abnormal x86 instruction lowering as a table.
llvm-svn: 167395
2012-11-05 19:32:46 +00:00
Duncan Sands a318ef6fa6 Generalize the transform that boosts GEP indices to the size of a pointer to
also do it for vectors of pointers.

llvm-svn: 167354
2012-11-03 11:44:17 +00:00
Chandler Carruth b62807a95c Add a testcase to loop-idiom to cover PR14241 when we start handling
strided loops again.

llvm-svn: 167287
2012-11-02 08:40:24 +00:00
Chandler Carruth 099f5cb031 Revert the switch of loop-idiom to use the new dependence analysis.
The new analysis is not yet ready for prime time. It has a *critical*
flawed assumption, and some troubling shortages of testing. Until it's
been hammered into better shape, let's stick with the working code. This
should be easy to revert itself when the analysis is ready.

Fixes PR14241, a miscompile of any memcpy-able loop which uses a pointer
as the induction mechanism. If you have been seeing miscompiles in this
revision range, you really want to test with this backed out. The
results of this miscompile are a bit subtle as they can lead to
downstream passes concluding things are impossible which are in fact
possible.

Thanks to David Blaikie for the majority of the reduction of this
miscompile. I'll be checking in the test case in a non-revert commit.

Revisions reverted here:

r167045: LoopIdiom: Fix a serious missed optimization: we only turned
         top-level loops into memmove.
r166877: LoopIdiom: Add checks to avoid turning memmove into an infinite
         loop.
r166875: LoopIdiom: Recognize memmove loops.
r166874: LoopIdiom: Replace custom dependence analysis with
         DependenceAnalysis.
llvm-svn: 167286
2012-11-02 08:33:25 +00:00
Hal Finkel 376f82d5d3 BBVectorize: Commit the rest of the test-case change.
llvm-svn: 167257
2012-11-01 21:57:27 +00:00
Hal Finkel 560545b85f BBVectorize: Use target costs for incoming and outgoing values instead of the depth heuristic.
When target cost information is available, compute explicit costs of inserting and
extracting values from vectors. At this point, all costs are estimated using the
target information, and the chain-depth heuristic is not needed. As a result, it is now, by
default, disabled when using target costs.

llvm-svn: 167256
2012-11-01 21:50:12 +00:00
Chandler Carruth d5639ff80f Add a test case for PR14233.
llvm-svn: 167224
2012-11-01 10:26:36 +00:00
Chandler Carruth 7ec5085e01 Revert the series of commits starting with r166578 which introduced the
getIntPtrType support for multiple address spaces via a pointer type,
and also introduced a crasher bug in the constant folder reported in
PR14233.

These commits also contained several problems that should really be
addressed before they are re-committed. I have avoided reverting various
cleanups to the DataLayout APIs that are reasonable to have moving
forward in order to reduce the amount of churn, and minimize the number
of commits that were reverted. I've also manually updated merge
conflicts and manually arranged for the getIntPtrType function to stay
in DataLayout and to be defined in a plausible way after this revert.

Thanks to Duncan for working through this exact strategy with me, and
Nick Lewycky for tracking down the really annoying crasher this
triggered. (Test case to follow in its own commit.)

After discussing with Duncan extensively, and based on a note from
Micah, I'm going to continue to back out some more of the more
problematic patches in this series in order to ensure we go into the
LLVM 3.2 branch with a reasonable story here. I'll send a note to
llvmdev explaining what's going on and why.

Summary of reverted revisions:

r166634: Fix a compiler warning with an unused variable.
r166607: Add some cleanup to the DataLayout changes requested by
         Chandler.
r166596: Revert "Back out r166591, not sure why this made it through
         since I cancelled the command. Bleh, sorry about this!
r166591: Delete a directory that wasn't supposed to be checked in yet.
r166578: Add in support for getIntPtrType to get the pointer type based
         on the address space.
llvm-svn: 167221
2012-11-01 08:07:29 +00:00
Nadav Rotem 4cb8cdab5e LoopVectorize: Preserve NSW, NUW and IsExact flags.
llvm-svn: 167174
2012-10-31 21:40:39 +00:00
Nadav Rotem 6d7d39783d Fix a bug in the cost calculation of vector casts. Detect situations where bitcasts cost zero.
llvm-svn: 167170
2012-10-31 20:52:26 +00:00
Hans Wennborg b71f72aa82 Remove fixme about unreachable cases from SwitchToLookupTable
SimplifyCFG will have removed those cases for us.

llvm-svn: 167132
2012-10-31 16:15:25 +00:00
Hal Finkel 842ad0b621 BBVectorize: Choose pair ordering to minimize shuffles
BBVectorize would, except for loads and stores, always fuse instructions
so that the first instruction (in the current source order) would always
represent the low part of the input vectors and the second instruction
would always represent the high part. This lead to too many shuffles
being produced because sometimes the opposite order produces fewer of them.

With this change, BBVectorize tracks the kind of pair connections that form
the DAG of candidate pairs, and uses that information to reorder the pairs to
avoid excess shuffles. Using this information, a future commit will be able
to add VTTI-based shuffle costs to the pair selection procedure. Importantly,
the number of remaining shuffles can now be estimated during pair selection.

There are some trivial instruction reorderings in the test cases, and one
simple additional test where we certainly want to do a reordering to
avoid an unnecessary shuffle.

llvm-svn: 167122
2012-10-31 15:17:07 +00:00
Meador Inge 05a625a0ed instcombine: Migrate strto* optimizations
This patch migrates the strto* optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167119
2012-10-31 14:58:26 +00:00
Hans Wennborg 9e74dd97b8 Do simple constant propagation in lookup table formation for switches
By propagating the value for the switch condition, LLVM can now build
lookup tables for code such as:

  switch (x) {
    case 1: return 5;
    case 2: return 42;
    case 3: case 4: case 5:
      return x - 123;
    default:
      return 123;
  }

Given that x is known for each case, "x - 123" becomes a constant for
cases 3, 4, and 5.

llvm-svn: 167115
2012-10-31 13:42:45 +00:00
Benjamin Kramer 8682ac1a77 LCSSA: Add a workaround for another nasty SCEV cache invalidation issue.
I'm not entirely happy with this solution, but I don't see a smarter way currently.
Fixes PR14214.

llvm-svn: 167112
2012-10-31 10:01:29 +00:00
Benjamin Kramer 24c643b6de DependenceAnalysis: Don't crash if there is no constant operand.
This makes the code match the comments. Resolves a crash in loop idiom (PR14219).

llvm-svn: 167110
2012-10-31 09:20:38 +00:00
Meador Inge 6f8e01121a instcombine: Migrate strpbrk optimizations
This patch migrates the strpbrk optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167105
2012-10-31 04:29:58 +00:00
Meador Inge d589ac621b instcombine: Migrate strlen optimizations
This patch migrates the strlen optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167103
2012-10-31 03:33:06 +00:00
Meador Inge 067294b3ac instcombine: Migrate strncpy optimizations
This patch migrates the strncpy optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167102
2012-10-31 03:33:00 +00:00
Nadav Rotem ce77ab0c24 LoopVectorize: Do not vectorize loops with tiny constant trip counts.
llvm-svn: 167101
2012-10-31 03:31:07 +00:00
Nadav Rotem ff7889196b Add support for loops that don't start with Zero.
This is important for loops in the LAPACK test-suite.
These loops start at 1 because they are auto-converted from Fortran.

llvm-svn: 167084
2012-10-31 00:45:26 +00:00
Meador Inge 9a6a190562 instcombine: Migrate stpcpy optimizations
This patch migrates the stpcpy optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.  Note that the
__stpcpy_chk simplifications were migrated in a previous commit.

llvm-svn: 167083
2012-10-31 00:20:56 +00:00
Meador Inge cdb2ca54ae instcombine: Split out the __stpcpy_chk simplifications from StrCpyChkOpt
r166198 migrated the strcpy optimization to instcombine.  The strcpy
simplifier that was migrated from Transforms/Scalar/SimplifyLibCalls.cpp
was also doing some __strcpy_chk simplifications.  Those fortified
simplifications were migrated as well, but introduced a bug in the
__stpcpy_chk simplifier in the process.  This happened because the
__strcpy_chk and __stpcpy_chk simplifiers were both mapped to StrCpyChkOpt
which was updated with simplifications that worked for __strcpy_chk, but
not __stpcpy_chk.

This patch fixes the problem by adding proper test coverage and creating a
new simplifier for __stpcpy_chk (instead of sharing one with __strcpy_chk).

llvm-svn: 167082
2012-10-31 00:20:51 +00:00
Chandler Carruth 1296b59522 Fix PR14212: For some strange reason I treated vectors differently from
integers in that the code to handle split alloca-wide integer loads or
stores doesn't come first. It should, for the same reasons as with
integers, and the PR attests to that. Also had to fix a busted assert,
which this test case also covers.

llvm-svn: 167051
2012-10-30 20:52:40 +00:00
Benjamin Kramer 48a6478242 LoopIdiom: Fix a serious missed optimization: we only turned top-level loops into memmove.
Thanks to Preston Briggs for catching this!

llvm-svn: 167045
2012-10-30 19:49:39 +00:00
Hal Finkel 2eaadd1a2d BBVectorize: Fix a small bug introduced in r167042.
We need to make sure that we take the correct load/store alignment
when the inputs are flipped.

llvm-svn: 167044
2012-10-30 19:47:37 +00:00
Nadav Rotem bc21aceb19 LoopVectorize: Add support for write-only loops when the write destination is a single pointer.
Speedup SciMark by 1%

llvm-svn: 167035
2012-10-30 18:36:45 +00:00
Nadav Rotem b3e8e688da LoopVectorize: Fix a bug in the initialization of reduction variables. AND needs to start at all-ones,
while XOR and OR need to start at zero.

llvm-svn: 167032
2012-10-30 18:12:36 +00:00
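
The reason for the different start values, shown as a small C check (illustrative only): the extra vector lanes must be seeded with the identity element of the reduction operator so they do not change the result:

  #include <assert.h>

  int main(void) {
    unsigned x = 0xDEADBEEFu;
    assert((x & 0xFFFFFFFFu) == x);   /* AND identity: all ones */
    assert((x | 0u) == x);            /* OR identity:  zero     */
    assert((x ^ 0u) == x);            /* XOR identity: zero     */
    return 0;
  }
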
Ulrich Weigand 7db4429430 Set %defaultjit to use MCJIT for PowerPC targets.
Update Transforms/LICM/2003-12-11-SinkingToPHI.ll test to use
%defaultjit as well.

llvm-svn: 167031
2012-10-30 18:07:58 +00:00
Hans Wennborg e0cf14fa9d switch_to_lookup_table.ll: Remove some unnecessary lines, comments,
function attributes, etc.

llvm-svn: 167016
2012-10-30 15:11:52 +00:00
Ulrich Weigand 6a9bb51a8d Enable some additional constant folding for PPCDoubleDouble.
This fixes Clang :: CodeGen/complex-builtints.c on PowerPC.

llvm-svn: 167013
2012-10-30 12:33:18 +00:00
Hans Wennborg f3254838e4 Use TargetTransformInfo to control switch-to-lookup table transformation
When the switch-to-lookup tables transform landed in SimplifyCFG, it
was pointed out that this could be inappropriate for some targets.
Since there was no way at the time for the pass to know anything about
the target, an awkward reverse-transform was added in CodeGenPrepare
that turned lookup tables back into switches for some targets.

This patch uses the new TargetTransformInfo to determine if a
switch should be transformed, and removes
CodeGenPrepare::ConvertLoadToSwitch.

llvm-svn: 167011
2012-10-30 11:23:25 +00:00
Hal Finkel d0b95b0961 Remove an invalid assert in TargetTransformImpl
getCastInstrCost had an assert prohibiting scalar to vector casts. Such casts,
however, are allowed. This should make the vectorizer buildbot happier.

llvm-svn: 166998
2012-10-30 02:41:57 +00:00
Benjamin Kramer 8d2ee55a0c LoopIdiom: Add checks to avoid turning memmove into an infinite loop.
I don't think this is possible with the current implementation but that may change eventually.

llvm-svn: 166877
2012-10-27 15:18:28 +00:00
Benjamin Kramer 1c9e5186c0 LoopIdiom: Recognize memmove loops.
This turns loops like
  for (unsigned i = 0; i != n; ++i)
    p[i] = p[i+1];
into memmove, which has a highly optimized implementation in most libcs.

This was really easy with the new DependenceAnalysis :)

llvm-svn: 166875
2012-10-27 14:25:51 +00:00
Benjamin Kramer d5c9be8247 LoopIdiom: Replace custom dependence analysis with DependenceAnalysis.
Requires a lot less code and complexity on loop-idiom's side and the more
precise analysis can catch more cases, like the one I included as a test case.
This also fixes the edge-case miscompilation from PR9481.

Compile time performance seems to be slightly worse, but this is mostly due
to an extra LCSSA run scheduled by the PassManager and should be fixed there.

llvm-svn: 166874
2012-10-27 14:25:44 +00:00
Nadav Rotem 859366f93f 1. Fix a bug in getTypeConversion. When a *simple* type is split, we need to return the type of the split result.
2. Change the maximum vectorization width from 4 to 8.
3. A test for both.

llvm-svn: 166864
2012-10-27 04:11:32 +00:00
Nadav Rotem afae78edab Refactor the VectorTargetTransformInfo interface.
Add getCostXXX calls for different families of opcodes, such as casts, arithmetic, cmp, etc.

Port the LoopVectorizer to the new API.

The LoopVectorizer now finds instructions which will remain uniform after vectorization. It uses this information when calculating the cost of these instructions.

llvm-svn: 166836
2012-10-26 23:49:28 +00:00
Hal Finkel e0d9db9953 Move target-specific BBVectorize tests into a separate directory.
llvm-svn: 166802
2012-10-26 19:38:09 +00:00
Nadav Rotem fcd1af344c Move the target-specific tests, which require specific backends, to dirs that only run if the target is present.
llvm-svn: 166796
2012-10-26 18:52:01 +00:00
Rafael Espindola 4253bd8faf Change the internalize pass to internalize all symbols when given an empty
list of externals. This makes sense since a shared library with no symbols
can still be useful if it has static constructors.

llvm-svn: 166795
2012-10-26 18:47:48 +00:00
Benjamin Kramer e3d821a466 Fix SCEV cache invalidation in LCSSA and LoopSimplify.
The LoopSimplify bug is pretty harmless because the loop goes from unanalyzable
to analyzable but the LCSSA bug is very nasty. It only comes into play with a
specific order of the LoopPassManager worklist and can cause actual
miscompilations when a SCEV refers to a value that has been replaced with a PHI
node. SCEVExpander may then insert code into the wrong place, either violating
domination or randomly miscompiling stuff.

Comes with an extensive test case reduced from the test-suite with
bugpoint+SCEVValidator.

llvm-svn: 166787
2012-10-26 17:31:43 +00:00
Nadav Rotem 15198e94d2 Fix a crash in SimplifyDemandedBits of vectors of pointers.
PR14183.

llvm-svn: 166785
2012-10-26 17:17:05 +00:00
Rafael Espindola 375c7f3859 Port testcase to FileCheck.
llvm-svn: 166742
2012-10-26 00:14:11 +00:00
Hal Finkel 41a6ded4a0 Disable generation of pointer vectors by BBVectorize.
Once vector-of-pointer support works, then this can be reverted.

llvm-svn: 166741
2012-10-26 00:05:26 +00:00
Nadav Rotem 8255ceb2cf Revert 166726 because it may have broken a number of SPEC tests. PR14183.
llvm-svn: 166739
2012-10-25 23:51:48 +00:00
Nadav Rotem bb4cfb5ee1 Fix a crash in ValueTracking. Add support for vectors of pointers.
llvm-svn: 166726
2012-10-25 21:52:52 +00:00
Nadav Rotem ede4fd4777 Fix the cost-model test.
llvm-svn: 166722
2012-10-25 21:42:50 +00:00
Hal Finkel 65e0da798b Add CPU model to BBVectorize cost-model tests.
llvm-svn: 166720
2012-10-25 21:31:51 +00:00
Nadav Rotem 27d523580c Add the cpu model to the test.
llvm-svn: 166718
2012-10-25 21:18:42 +00:00
Hal Finkel cbf9365f4c Begin incorporating target information into BBVectorize.
This is the first of several steps to incorporate information from the new
TargetTransformInfo infrastructure into BBVectorize. Two things are done here:

 1. Target information is used to determine if it is profitable to fuse two
    instructions. This means that the cost of the vector operation must not
    be more expensive than the cost of the two original operations. Pairs that
    are not profitable are no longer considered (because current cost information
    is incomplete, for intrinsics for example, equal-cost pairs are still
    considered).

 2. The 'cost savings' computed for the profitability check are also used to
    rank the DAGs that represent the potential vectorization plans. Specifically,
    for nodes of non-trivial depth, the cost savings is used as the node
    weight.

The next step will be to incorporate the shuffle costs into the DAG weighting;
this will give the edges of the DAG weights as well. Once that is done, when
target information is available, we should be able to dispense with the
depth heuristic.

llvm-svn: 166716
2012-10-25 21:12:23 +00:00
Jakob Stoklund Olesen 977f41a1fa Also optimize large switch statements.
The isValueEqualityComparison() guard at the top of SimplifySwitch()
only applies to some of the possible transformations.

The newer transformations work just fine on large switches, and the
check on predecessor count is nonsensical.

llvm-svn: 166710
2012-10-25 18:51:15 +00:00
Chandler Carruth 58d0556765 Teach SROA how to split whole-alloca integer loads and stores into
smaller integer loads and stores.

The high-level motivation is that the frontend sometimes generates
a single whole-alloca integer load or store during ABI lowering of
splittable allocas. We need to be able to break this apart in order to
see the underlying elements and properly promote them to SSA values. The
hope is that this fixes some performance regressions on x86-32 with the
new SROA pass.

Unfortunately, this causes quite a bit of churn in the test cases, and
bloats some IR that comes out. When we see an alloca that consists solely
of bits and bytes being extracted and re-inserted, we now do some
splitting first, before building widened integer "bucket of bits"
representations. These are always well folded by instcombine however, so
this shouldn't actually result in missed opportunities.

If this splitting of all-integer allocas does cause problems (perhaps
due to smaller SSA values going into the RA), we could potentially go to
some extreme measures to only do this integer splitting trick when there
are non-integer component accesses of an alloca, but discovering this is
quite expensive: it adds yet another complete walk of the recursive use
tree of the alloca.

Either way, I will be watching build bots and LNT bots to see what
fallout there is here. If anyone gets x86-32 numbers before & after this
change, I would be very interested.

llvm-svn: 166662
2012-10-25 04:37:07 +00:00
Nadav Rotem 5ffb049a55 Add support for additional reduction variables: AND, OR, XOR.
Patch by Paul Redmond <paul.redmond@intel.com>.

llvm-svn: 166649
2012-10-25 00:08:41 +00:00
Nadav Rotem 4a87683a41 Implement a basic cost model for vector and scalar instructions.
llvm-svn: 166642
2012-10-24 23:47:38 +00:00
Hal Finkel 69b07a2c3a Update GVN to support vectors of pointers.
GVN will now generate ptrtoint instructions for vectors of pointers.
Fixes PR14166.

llvm-svn: 166624
2012-10-24 21:22:30 +00:00
Nadav Rotem a721b21c64 LoopVectorizer: Add a basic cost model which uses the VTTI interface.
llvm-svn: 166620
2012-10-24 20:36:32 +00:00
Hal Finkel 30bd9346a0 getSmallConstantTripMultiple should never return zero.
When the trip count is -1, getSmallConstantTripMultiple could return zero,
and this would cause runtime loop unrolling to assert. Instead of returning
zero, one is now returned (consistent with the existing overflow cases).
Fixes PR14167.

llvm-svn: 166612
2012-10-24 19:46:44 +00:00
Micah Villmow 12d9127833 Add in support for getIntPtrType to get the pointer type based on the address space.
This checkin also adds in some tests that utilize these paths and updates some of the
clients.

llvm-svn: 166578
2012-10-24 15:52:52 +00:00
Duncan Sands 72c19ed386 Add a testcase that would have noticed the typo fixed in commit 166475.
llvm-svn: 166547
2012-10-24 07:17:20 +00:00
Nadav Rotem 5bed7b4fad Use the AliasAnalysis isIdentifiedObj because it also understands mallocs and c++ news.
PR14158.

llvm-svn: 166491
2012-10-23 18:44:18 +00:00
Bill Wendling 5858b56ce3 Ignore unreachable blocks when doing memory dependence analysis on non-local
loads. It's not really profitable and may result in GVN going into an infinite
loop when it hits constructs like this:

     %x = gep %some.type %x, ...

Found via an LTO build of LLVM.

llvm-svn: 166490
2012-10-23 18:37:11 +00:00
Duncan Sands 533c8ae79f Transform code like this
%V = mul i64 %N, 4
 %t = getelementptr i8* bitcast (i32* %arr to i8*), i32 %V
into
 %t1 = getelementptr i32* %arr, i32 %N
 %t = bitcast i32* %t1 to i8*
incorporating the multiplication into the getelementptr.
This happens all the time in dragonegg, for example for
  int foo(int *A, int N) {
    return A[N];
  }
because gcc turns this into byte pointer arithmetic before it hits the plugin:
  D.1590_2 = (long unsigned int) N_1(D);
  D.1591_3 = D.1590_2 * 4;
  D.1592_5 = A_4(D) + D.1591_3;
  D.1589_6 = *D.1592_5;
  return D.1589_6;
The D.1592_5 line is a POINTER_PLUS_EXPR, which is turned into a getelementptr
on a bitcast of A_4 to i8*, so this becomes exactly the kind of IR that the
transform fires on.

An analogous transform (with no testcases!) already existed for bitcasts of
arrays, so I rewrote it to share code with this one.

llvm-svn: 166474
2012-10-23 08:28:26 +00:00
Nadav Rotem 1c7fc71e69 Don't crash if the load/store pointer is not a GEP.
Fix by Shivarama Rao <Shivarama.Rao@amd.com>

llvm-svn: 166427
2012-10-22 18:27:56 +00:00
Argyrios Kyrtzidis 54ff5e81a1 Revert r166407 because it caused analyzer tests to crash and broke self-host bots.
llvm-svn: 166424
2012-10-22 18:16:14 +00:00
Hal Finkel 931c52b84c BBVectorize should ignore unreachable blocks.
Unreachable blocks can have invalid instructions. For example,
jump threading can produce self-referential instructions in
unreachable blocks. Also, we should not be spending time
optimizing unreachable code. Fixes PR14133.

llvm-svn: 166423
2012-10-22 18:00:55 +00:00
Nadav Rotem 03011f1393 Vectorizer: optimize the generation of selects. If the condition is uniform, generate a scalar-cond select (i1 as selector).
llvm-svn: 166409
2012-10-22 04:38:00 +00:00
Nick Lewycky 8b67e1e0b9 Reapply r166405, teaching tailcallelim to be smarter about nocapture, with a
very small but very important bugfix:
  bool shouldExplore(Use *U) {
    Value *V = U->get();
    if (isa<CallInst>(V) || isa<InvokeInst>(V))
    [...]
should have read:
  bool shouldExplore(Use *U) {
    Value *V = U->getUser();
    if (isa<CallInst>(V) || isa<InvokeInst>(V))
Fixes PR14143!

llvm-svn: 166407
2012-10-22 03:03:52 +00:00
NAKAMURA Takumi 60d56d2eea Revert r166405, "Teach TailRecursionElimination to consider 'nocapture' when deciding whether"
It broke selfhosting stage2 in several builders.

llvm-svn: 166406
2012-10-22 00:48:51 +00:00
Nick Lewycky 2d28f2bf83 Teach TailRecursionElimination to consider 'nocapture' when deciding whether
calls can be marked tail.

llvm-svn: 166405
2012-10-21 23:51:22 +00:00
Hal Finkel 8884dc323f DataLayout should use itself when calculating the size of a vector.
This is important for vectors of pointers because only DataLayout,
not the underlying vector type, knows how to calculate the size
of the pointers in the vector. Fixes PR14138.

llvm-svn: 166401
2012-10-21 20:38:03 +00:00
Benjamin Kramer f77f224df9 Revert r166390 "LoopIdiom: Replace custom dependence analysis with LoopDependenceAnalysis."
It passes all tests, produces better results than the old code but uses the
wrong pass, LoopDependenceAnalysis, which is old and unmaintained. "Why is it
still in tree?", you might ask. The answer is obviously: "To confuse developers."

Just swapping in the new dependency pass sends the pass manager into an infinite
loop, I'll try to figure out why tomorrow.

llvm-svn: 166399
2012-10-21 19:31:16 +00:00
Benjamin Kramer 3ae8bc68af LoopIdiom: Replace custom dependence analysis with LoopDependenceAnalysis.
Requires a lot less code and complexity on loop-idiom's side and the more
precise analysis can catch more cases, like the one I included as a test case.
This also fixes the edge-case miscompilation from PR9481. I'm not entirely
sure that all the cases the old checks handled are covered, but LDA will
certainly become smarter in the future.

llvm-svn: 166390
2012-10-21 15:03:07 +00:00
Nadav Rotem fe88c67161 Fix a bug in the vectorization of wide load/store operations.
We used a SCEV to detect that A[X] is consecutive. We assumed that X was
the induction variable, but X can be any expression that uses the induction
variable, for example: X = i + 2;

llvm-svn: 166388
2012-10-21 06:49:10 +00:00
Nadav Rotem c1679a95b6 Add support for reduction variables that do not start at zero.
This is important for nested-loop reductions such as :

In the innermost loop, the induction variable does not start with zero:

for (i = 0 .. n)
 for (j = 0 .. m)
  sum += ...

llvm-svn: 166387
2012-10-21 05:52:51 +00:00
Nadav Rotem 7e1084d36c Vectorizer: fix a bug in the classification of induction/reduction phis.
llvm-svn: 166384
2012-10-21 02:38:01 +00:00
Nadav Rotem e5dc57d4fb Fix an infinite loop in the loop-vectorizer.
PR14134.

llvm-svn: 166379
2012-10-20 20:45:01 +00:00
Benjamin Kramer f55b592cc8 InstCombine: Fix an edge case where constant icmps could sneak into ConstantFoldInstOperands and crash.
Have to refactor the ConstantFolder interface one day to define bugs like this away. Fixes PR14131.

llvm-svn: 166374
2012-10-20 08:43:52 +00:00
Nadav Rotem d189b82a9b Vectorize: teach canVectorizeMemory to distinguish between A[i]+=x and A[B[i]]+=x.
If the pointer is consecutive, it is safe to read and write. If the pointer is not loop-consecutive,
it is unsafe to vectorize because we may hit an ordering issue.

llvm-svn: 166371
2012-10-20 08:26:33 +00:00
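
Hypothetical loops (names made up) showing the two cases being distinguished:

  void consecutive(int *A, int x, int n) {
    for (int i = 0; i < n; ++i)
      A[i] += x;        /* consecutive accesses: safe to read and write vectorized */
  }

  void indirect(int *A, const int *B, int x, int n) {
    for (int i = 0; i < n; ++i)
      A[B[i]] += x;     /* indexed through B[i]: an index may repeat, so the
                           read-modify-write order matters and it is not vectorized */
  }
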
Nadav Rotem 4f7f72702b Vectorizer: Add support for loop reductions.
For example:

  for (i=0; i<n; i++)
   sum += A[i] +  B[i] + i;

llvm-svn: 166351
2012-10-19 23:05:40 +00:00
Benjamin Kramer 317d6c621d SimplifyLibcalls: The return value of ffsll is always i32, even when the input is zero.
Fixes PR13028.

llvm-svn: 166313
2012-10-19 20:43:44 +00:00
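
A sketch of the prototype in question (ffsll is a GNU/BSD extension; availability varies by libc): the result type is plain int even though the argument is 64-bit, so the folded constant must be i32, including for a zero input:

  int ffsll(long long);   /* returns the 1-based index of the lowest set bit, or 0 if none */

  int lowest_set_bit(long long x) {
    return ffsll(x);      /* e.g. ffsll(8) == 4, ffsll(0) == 0 */
  }
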
Benjamin Kramer f1088a37cb Indvars: Don't recursively delete instruction during BB iteration.
This can invalidate the iterators leading to use after frees and crashes.
Fixes PR12536.

llvm-svn: 166291
2012-10-19 17:53:54 +00:00
Benjamin Kramer a225ed8d2b SCEVExpander: Don't crash when trying to merge two constant phis.
Just constant fold them so they can't cause any trouble. Fixes PR12627.

llvm-svn: 166286
2012-10-19 16:37:30 +00:00
Nadav Rotem ced93f3a05 vectorizer: Add support for reading and writing from the same memory location.
llvm-svn: 166255
2012-10-19 01:24:18 +00:00
Meador Inge 000dbccfc6 instcombine: Migrate strcpy optimizations
This patch migrates the strcpy optimizations from the simplify-libcalls pass
into the instcombine library call simplifier.  Note also that StrCpyChkOpt
has been updated with a few simplifications that were being done in the
simplify-libcalls version of StrCpyOpt, but not in the migrated implementation
of StrCpyOpt.  There is no reason to overload StrCpyOpt with fortified and
regular simplifications in the new model since there is already a dedicated
simplifier for __strcpy_chk.

llvm-svn: 166198
2012-10-18 18:12:40 +00:00
Nadav Rotem b52f717411 Vectorizer: Add support for loops with an unknown count. For example:
for (i=0; i<n; i++){
        a[i] = b[i+1] + c[i+3];
     }

llvm-svn: 166165
2012-10-18 05:29:12 +00:00
Nadav Rotem 6b94c2a09b Add a loop vectorizer.
llvm-svn: 166112
2012-10-17 18:25:06 +00:00
Chandler Carruth 6fab42aa39 This just in, it is a *bad idea* to use 'udiv' on an offset of
a pointer. A very bad idea. Let's not do that. Fixes PR14105.

Note that this wasn't *that* glaring of an oversight. Originally, these
routines were only called on offsets within an alloca, which are
intrinsically positive. But over the evolution of the pass, they ended
up being called for arbitrary offsets, and things went downhill...

llvm-svn: 166095
2012-10-17 09:23:48 +00:00
Michael Gottesman 02a1141e5a [InstCombine] Teach InstCombine how to handle an obfuscated splat.
An obfuscated splat is one where the frontend generates poor code for a splat,
using several different shuffles to create the splat, i.e.,

  %A = load <4 x float>* %in_ptr, align 16
  %B = shufflevector <4 x float> %A, <4 x float> undef, <4 x i32> <i32 0, i32 0, i32 undef, i32 undef>
  %C = shufflevector <4 x float> %B, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 4, i32 undef>
  %D = shufflevector <4 x float> %C, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 2, i32 4>

llvm-svn: 166061
2012-10-16 21:29:38 +00:00
Chandler Carruth 49c8eea3c0 Update the memcpy rewriting to fully support widened int rewriting. This
includes extracting ints for copying elsewhere and inserting ints when
copying into the alloca. This should fix the CanSROA assertion coming
out of Clang's regression test suite.

llvm-svn: 165931
2012-10-15 10:24:43 +00:00
Chandler Carruth 9d966a2002 Follow-up fix to r165928: handle memset rewriting for widened integers,
and generally clean up the memset handling. It had rotted a bit as the
other rewriting logic got polished more.

llvm-svn: 165930
2012-10-15 10:24:40 +00:00
Chandler Carruth 435c4e0792 First major step toward addressing PR14059. This teaches SROA to handle
cases where we have partial integer loads and stores to an otherwise
promotable alloca to widen[1] those loads and stores to cover the entire
alloca and bitcast them into the appropriate type such that promotion
can proceed.

These partial loads and stores stem from an annoying confluence of ARM's
calling convention and ABI lowering and the FCA pre-splitting which
takes place in SROA. Clang lowers a { double, double } in-register
function argument as a [4 x i32] function argument to ensure it is
placed into integer 32-bit registers (a really unnerving implicit
contract between Clang and the ARM backend I would add). This results in
a FCA load of [4 x i32]* from the { double, double } alloca, and SROA
decomposes this into a sequence of i32 loads and stores. Inlining
proceeds, code gets folded, but at the end of the day, we still have i32
stores to the low and high halves of a double alloca. Widening these to
be i64 operations, and bitcasting them to double prior to loading or
storing allows promotion to proceed for these allocas.
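
A minimal C-level sketch (hypothetical, not from the test suite) of the kind of argument involved:

  // Hypothetical example: a two-double aggregate passed by value. Under the
  // ARM convention described above, Clang lowers it as a [4 x i32] in-register
  // argument, which is what produces the partial i32 loads and stores.
  struct dpair { double x, y; };

  double sum(struct dpair p) { return p.x + p.y; }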

I looked quite a bit at changing the IR which Clang produces for this case
to be more friendly, but small changes seem unlikely to help. I think
the best representation we could use currently would be to pass 4 i32
arguments thereby avoiding any FCAs, but that would still require this
fix. It seems like it might eventually be nice to somehow encode the ABI
register selection choices outside of the parameter type system so that
the parameter can be a { double, double }, but the CC register
annotations indicate that this should be passed via 4 integer registers.

This patch does not address the second problem in PR14059, which is the
reverse: when a struct alloca is loaded as a *larger* single integer.

This patch also does not address some of the code quality issues with
the FCA-splitting. Those don't actually impede any optimizations really,
but they're on my list to clean up.

[1]: Pedantic footnote: for those concerned about memory model issues
here, this is safe. For the alloca to be promotable, it cannot escape or
have any use of its address that could allow these loads or stores to be
racing. Thus, widening is always safe.

llvm-svn: 165928
2012-10-15 08:40:30 +00:00
Meador Inge 40b6fac36c instcombine: Migrate strcmp and strncmp optimizations
This patch migrates the strcmp and strncmp optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

llvm-svn: 165915
2012-10-15 03:47:37 +00:00
Meador Inge 174185084c instcombine: Migrate strchr and strrchr optimizations
This patch migrates the strchr and strrchr optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

llvm-svn: 165875
2012-10-13 16:45:37 +00:00
Meador Inge 7fb2f7378b instcombine: Migrate strcat and strncat optimizations
This patch migrates the strcat and strncat optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

llvm-svn: 165874
2012-10-13 16:45:32 +00:00
Chandler Carruth ba9319925e Teach SROA to cope with wrapper aggregates. These show up a lot in ABI
type coercion code, especially when targeting ARM. Things like [1
x i32] instead of i32 are very common there.

The goal of this logic is to ensure that when we are picking an alloca
type, we look through such wrapper aggregates and across any zero-length
aggregate elements to find the simplest type possible to form a type
partition.

This logic should (generally speaking) rarely fire. It only ends up
kicking in when an alloca is accessed using two different types (for
instance, i32 and float), and the underlying alloca type has wrapper
aggregates around it. I noticed a significant amount of this occurring
looking at stepanov_abstraction generated code for arm, and suspect it
happens elsewhere as well.

Note that this doesn't yet address truly heinous IR productions such as
PR14059 is concerning. Those result in mismatched *sizes* of types in
addition to mismatched access and alloca types.

llvm-svn: 165870
2012-10-13 10:49:33 +00:00
Nick Lewycky 49ac81ac68 Don't crash when !tbaa.struct contents is invalid.
llvm-svn: 165693
2012-10-11 02:05:23 +00:00
Duncan Sands 244e3ba5f1 Add the testcase from pr13254 (the old scalarrepl pass handles this wrong;
the new sroa pass handles it right).

llvm-svn: 165644
2012-10-10 18:41:19 +00:00
Michael Ilseman c93cffb590 New EarlyCSE tests for CSE-ing across commutativity.
llvm-svn: 165510
2012-10-09 16:58:13 +00:00
Alexey Samsonov 3b861ec989 Fix PR14016.
DeadArgumentElimination pass can replace one LLVM function with another,
invalidating a pointer stored in debug info metadata entry for this function.
To fix this, we collect debug info descriptors for functions before
running a DeadArgumentElimination pass and "patch" pointers in metadata nodes
if we replace a function.

llvm-svn: 165490
2012-10-09 08:13:15 +00:00
Chandler Carruth 503eb2bb49 Fix PR14034, an infloop / heap corruption / crash bug in the new SROA.
Thanks to Benjamin for the raw test case. This one took about 50 times
longer to reduce than to fix. =/

llvm-svn: 165476
2012-10-09 01:58:35 +00:00
Micah Villmow 9cfc13d46c Move TargetData to DataLayout.
llvm-svn: 165403
2012-10-08 16:39:34 +00:00
Chandler Carruth e5b7a2ccd2 Teach the new SROA a new trick. Now we zap any memcpy or memmoves which
are in fact identity operations. We detect these and kill their
partitions so that even splitting is unaffected by them. This is
particularly important because Clang relies on emitting identity memcpy
operations for struct copies, and these fold away to constants very
often after inlining.
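
A hypothetical sketch of how such an identity copy can arise:

  // Hypothetical: once `copy` is inlined with d == s, the struct assignment
  // becomes a memcpy whose source and destination are the same alloca, an
  // identity copy that can simply be deleted.
  struct S { int v[4]; };

  static void copy(struct S *d, const struct S *s) { *d = *s; }

  int f(void) {
    struct S tmp = {{1, 2, 3, 4}};
    copy(&tmp, &tmp);
    return tmp.v[0];
  }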

Fixes the last big performance FIXME I have on my plate.

llvm-svn: 165285
2012-10-05 01:29:09 +00:00
Benjamin Kramer d12e82e523 SimplifyCFG: Enhance the "remove CFG edge that leads to null pointer dereference" optimization to also handle instructions with multiple uses.
We conservatively only check the first use to avoid walking long use chains.
This catches the common case of having both a load and a store to a pointer
supplied by a PHI node.
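
A source-level sketch (hypothetical) of the common case mentioned above:

  // Hypothetical: p is a PHI with a null incoming value, and it has two
  // users, a store and a load; the edge that makes p null can be removed.
  int f(int c, int *q) {
    int *p = 0;
    if (c)
      p = q;
    *p = 1;
    return *p;
  }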

llvm-svn: 165232
2012-10-04 16:11:49 +00:00
Duncan Sands a6d20010fe In my recent change to avoid use of underaligned memory I didn't notice that
cpyDest can be mutated in some cases, which would then cause a crash later if
indeed the memory was underaligned.  This brought down several buildbots, so
I guess the underaligned case is much more common than I thought!

llvm-svn: 165228
2012-10-04 13:53:21 +00:00
Duncan Sands 271ea6cdc5 The alignment of an sret parameter is known: it must be at least the
alignment of the return type.  Teach the optimizers this.

llvm-svn: 165226
2012-10-04 13:36:31 +00:00
Chandler Carruth ac8317fd36 Fix PR13969, a mini-phase-ordering issue with the new SROA pass.
Currently, we re-visit allocas when something changes about the way they
might be *split* to allow better scalarization to take place. However,
we weren't handling the case when the *promotion* is what would change
the behavior of SROA. When an address derived from an alloca is stored
into another alloca, we consider the first to have escaped. If the
second is ever promoted to an SSA value, we will suddenly be able to run
the SROA pass on the first alloca.

This patch adds explicit support for this form of iteration. When we
detect a store of a pointer derived from an alloca, we flag the
underlying alloca for reprocessing after promotion. The logic works hard
to only do this when there is definitely going to be promotion and it
might remove impediments to the analysis of the alloca.

Thanks to Nick for the great test case and Benjamin for some sanity
check review.

llvm-svn: 165223
2012-10-04 12:33:50 +00:00
Duncan Sands c6ada69a14 The memcpy optimizer was happily doing call slot forwarding when the new memory
was less aligned than the old.  In the testcase this results in an overaligned
memset: the memset alignment was correct for the original memory but is too much
for the new memory.  Fix this by either increasing the alignment of the new
memory or bailing out if that isn't possible.  Should fix the gcc-4.7 self-host
buildbot failure.

llvm-svn: 165220
2012-10-04 10:54:40 +00:00
Chandler Carruth 43c8b46deb Teach the integer-promotion rewrite strategy to be endianness aware.
Sorry for this being broken so long. =/

As part of this, switch all of the existing tests to be Little Endian,
which is the behavior I was asserting in them anyways! Add in a new
big-endian test that checks the interesting behavior there.

Another part of this is to tighten the rules about when we perform the
full-integer promotion. This logic now rejects cases where the fully
promoted integer has a non-multiple-of-8 bitwidth or cases where the
loads or stores touch bits which are in the allocated space of the
alloca but are not loaded or stored when accessing the integer. Sadly,
these aren't really observable today as the rest of the pass will
already ensure the invariants hold. However, the latter situation is
likely to become a potential concern in the future.

Thanks to Benjamin and Duncan for early review of this patch. I'm still
looking into whether there are further endianness issues, please let me
know if anyone sees BE failures persisting past this.

llvm-svn: 165219
2012-10-04 10:39:28 +00:00
Jakub Staszak f8a8129513 Fix PR13967.
llvm-svn: 165187
2012-10-03 23:59:47 +00:00
Chandler Carruth 08e5f49f90 Fix an issue where we failed to adjust the alignment constraint on
a memcpy to reflect that '0' has a different meaning when applied to
a load or store. Now we correctly use underaligned loads and stores for
the test case added.

llvm-svn: 165101
2012-10-03 08:26:28 +00:00
Chandler Carruth 4b2b38d398 Try to use a better set of abstractions for computing the alignment
necessary during rewriting. As part of this, fix a real think-o here
where we might have left off an alignment specification when the address
is in fact underaligned. I haven't come up with any way to trigger this,
as there is always some other factor that reduces the alignment, but it
certainly might have been an observable bug in some way I can't think
of. This also slightly changes the strategy for placing explicit
alignments on loads and stores to only do so when the alignment does not
match that required by the ABI. This causes a few redundant alignments
to go away from test cases.

I've also added a couple of tests that really push on the alignment that
we end up with on loads and stores. More to come here as I try to fix an
underlying bug I have conjectured and produced test cases for, although
it's not clear if this bug is the one currently hitting dragonegg's
gcc47 bootstrap.

llvm-svn: 165100
2012-10-03 08:14:02 +00:00
Chandler Carruth b09f0a3c75 Teach the new SROA to handle cases where an alloca that has already been
scheduled for processing on the worklist eventually gets deleted while
we are processing another alloca, fixing the original test case in
PR13990.

To facilitate this, add a remove_if helper to the SetVector abstraction.
It's not easy to use the standard abstractions for this because of the
specifics of SetVector's types and implementation.

Finally, a nice small test case is included. Thanks to Benjamin for the
fantastic reduced test case here! All I had to do was delete some empty
basic blocks!

llvm-svn: 165065
2012-10-02 22:46:45 +00:00
Benjamin Kramer bc2724f6e0 Fix broken tests.
llvm-svn: 165019
2012-10-02 15:49:34 +00:00
Chandler Carruth 9866b97f94 Fix more misspellings found by Duncan during review.
llvm-svn: 164940
2012-10-01 12:30:45 +00:00
Chandler Carruth 176ca71a82 Fix several issues with alignment. We weren't always accounting for type
alignment requirements of the new alloca. As one consequence which was
reported as a bug by Duncan, we overaligned memcpy calls to ranges of
allocas after they were rewritten to types with lower alignment
requirements. Other consequences are possible, but I don't have any test
cases for them.

llvm-svn: 164937
2012-10-01 12:16:54 +00:00
Benjamin Kramer 9fc3dc7781 SimplifyCFG: Don't crash when forming a switch bitmap with an undef default value.
Fixes PR13985.

llvm-svn: 164934
2012-10-01 11:31:48 +00:00
Chandler Carruth 54e8f0b4cf Refactor the PartitionUse structure to actually use the Use* instead of
a pair of instructions, one for the used pointer and the second for the
user. This simplifies the representation and also makes it more dense.

This was noticed because of the miscompile in PR13926. In that case, we
were running up against a fundamental "bad idea" in the speculation of
PHI and select instructions: the speculation and rewriting are
interleaved, which requires phi speculation to also perform load
rewriting! This is bad, and causes us to miss opportunities to do (for
example) vector rewriting only exposed after PHI speculation, etc etc.
It also, in the old system, required us to insert *new* load uses into
the current partition's use list, which would then be ignored during
rewriting because we had already extracted an end iterator for the use
list. The appending behavior (and many of the other oddities) stems from
the strange de-duplication strategy in the PartitionUse builder.
Amusingly, all this went without notice for so long because it could
only be triggered by having *different* GEPs into the same partition of
the same alloca, where both different GEPs were operands of a single
PHI, and where the GEP which was not encountered first also had multiple
uses within that same PHI node... Hence the insane steps required to
reproduce.

So, step one in fixing this fundamental bad idea is to make the
PartitionUse actually contain a Use*, and to make the builder do proper
deduplication instead of funky de-duplication. This is enough to remove
the appending behavior, and fix the miscompile in PR13926, but there is
more work to be done here. Subsequent commits will lift the speculation
into its own visitor. It'll be a useful step toward potentially
extracting all of the speculation logic into a generic utility
transform.

The existing PHI test case for repeated operands has been made more
extreme to catch even these issues. This test case, run through the old
pass, will exactly reproduce the miscompile from PR13926. ;] We were so
close here!

llvm-svn: 164925
2012-10-01 01:49:22 +00:00
Chandler Carruth 903790eff5 Fix a somewhat surprising miscompile where code relying on an ABI
alignment could lose it due to the alloca type moving down to a much
smaller alignment guarantee.

Now SROA will actively compute a proper alignment, factoring the target
data, any explicit alignment, and the offset within the struct. This
will in some cases lower the alignment requirements, but when we lower
them below those of the type, we drop the alignment entirely to give
freedom to the code generator to align it however is convenient.

Thanks to Duncan for the lovely test case that pinned this down. =]

llvm-svn: 164891
2012-09-29 10:41:21 +00:00
Evan Cheng 9e99f0c40d Add test case for r164850.
llvm-svn: 164867
2012-09-29 00:12:08 +00:00
Benjamin Kramer 255dea4b90 CorrelatedPropagation: BasicBlock::removePredecessor can simplify PHI nodes. If it's the condition of a SwitchInst, reload it.
Fixes PR13972.

llvm-svn: 164818
2012-09-28 10:42:50 +00:00
Benjamin Kramer ed84360a45 GlobalOpt: non-constexpr bitcasts or GEPs can occur even if the global value is only stored once.
Fixes PR13968.

llvm-svn: 164815
2012-09-28 10:01:27 +00:00
Nick Lewycky 156999f8b9 Surprisingly, we missed a trivial case here. Fix that!
llvm-svn: 164814
2012-09-28 09:33:53 +00:00
Meador Inge 7fbc364ecb instcombine: Add more test cases for __strncpy_chk simplification
llvm-svn: 164800
2012-09-27 21:21:31 +00:00
Meador Inge 213d642840 instcombine: Add more test cases for __strcpy_chk simplification
llvm-svn: 164799
2012-09-27 21:21:28 +00:00
Meador Inge 058e29c432 instcombine: Add more test cases for __memmove_chk simplification
llvm-svn: 164798
2012-09-27 21:21:25 +00:00
Meador Inge 0d402f06fe instcombine: Add more test cases for __memcpy_chk simplification
llvm-svn: 164797
2012-09-27 21:21:21 +00:00
Meador Inge 6f01da1c99 instcombine: Add more test cases for __memset_chk simplification
llvm-svn: 164796
2012-09-27 21:21:18 +00:00
Benjamin Kramer c2081d1c19 Fix an integer overflow in SimplifyCFG's lookup table formation logic.
If the width is very large it gets truncated from uint64_t to uint32_t when
passed to TD->fitsInLegalInteger. The truncated value can fit in a register.
This manifested in massive memory usage or crashes (PR13946).

llvm-svn: 164784
2012-09-27 18:29:58 +00:00
Sylvestre Ledru 91ce36c986 Revert 'Fix a typo 'iff' => 'if''. iff is an abbreviation of if and only if. See: http://en.wikipedia.org/wiki/If_and_only_if Commit 164767
llvm-svn: 164768
2012-09-27 10:14:43 +00:00
Sylvestre Ledru 721cffd53a Fix a typo 'iff' => 'if'
llvm-svn: 164767
2012-09-27 09:59:43 +00:00
Nick Lewycky 7b4cd228aa Prefer shuffles to selects. Backends love shuffles!
llvm-svn: 164763
2012-09-27 08:33:56 +00:00
Hans Wennborg cd3a11f725 Address Duncan's comments on r164684:
- Put statistics in alphabetical order
- Don't use getZExtValue when building TableInt, just use APInts
- Introduce Create{Z,S}ExtOrTrunc in IRBuilder.

llvm-svn: 164696
2012-09-26 14:01:53 +00:00
Chandler Carruth 3e4273dd0c When rewriting the pointer operand to a load or store which has
alignment guarantees attached, re-compute the alignment so that we
consider offsets which impact alignment.

llvm-svn: 164690
2012-09-26 10:45:28 +00:00
Chandler Carruth 871ba7249c Teach all of the loads, stores, memsets and memcpys created by the
rewriter in SROA to carry a proper alignment. This involves
interrogating various sources of alignment, etc. This is a more complete
and principled fix to PR13920 as well as related bugs pointed out by Eli
in review and by inspection in the area.

Also by inspection fix the integer and vector promotion paths to create
aligned loads and stores. I still need to work up test cases for
these... Sorry for the delay, they were found purely by inspection.

llvm-svn: 164689
2012-09-26 10:27:46 +00:00
Benjamin Kramer 205d70ed28 Fix tests that didn't test anything.
llvm-svn: 164686
2012-09-26 09:51:39 +00:00
Hans Wennborg 39583b88a0 SimplifyCFG: Make the switch-to-lookup table transformation store the
tables in bitmaps when they fit in a target-legal register.

This saves some space, and it also allows for building tables that would
otherwise be deemed too sparse.

One interesting case that this hits is example 7 from
http://blog.regehr.org/archives/320. We currently generate good code
for this when lowering the switch to the selection DAG: we build a
bitmask to decide whether to jump to one block or the other. My patch
will result in the same bitmask, but it removes the need for the jump,
as the return value can just be retrieved from the mask.
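
A small hypothetical example of a switch whose table fits in a bitmask:

  // Hypothetical: every result is 0 or 1, so the lookup table can live in
  // a target-legal register as a bitmask indexed by shifting and masking.
  int is_small_prime(int x) {
    switch (x) {
    case 2: case 3: case 5: case 7:
      return 1;
    default:
      return 0;
    }
  }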

llvm-svn: 164684
2012-09-26 09:44:49 +00:00
Chandler Carruth 4bd8f66ed9 Revert the business end of r164636 and try again. I'll come in again. ;]
This should really, really fix PR13916. For real this time. The
underlying bug is... a bit more subtle than I had imagined.

The setup is a code pattern that leads to an @llvm.memcpy call with two
equal pointers to an alloca in the source and dest. Now, not any pattern
will do. The alloca needs to be formed just so, and both pointers should
be wrapped in different bitcasts etc. When this precise pattern hits,
a funny sequence of events transpires. First, we correctly detect the
potential for overlap, and correctly optimize the memcpy. The first
time. However, we do simplify the set of users of the alloca, and that
causes us to run the alloca back through the SROA pass in case there are
knock-on simplifications. At this point, a curious thing has happened.
If we happen to have an i8 alloca, we have direct i8 pointer values. So
we don't bother creating a cast, we rewrite the arguments to the memcpy
to directly refer to the alloca.

Now, in an unrelated area of the pass, we have clever logic which
ensures that when visiting each User of a particular pointer derived
from an alloca, we only visit that User once, and directly inspect all
of its operands which refer to that particular pointer value. However,
the mechanism used to detect memcpy's with the potential to overlap
relied upon getting visited once per *Use*, not once per *User*. This is
always true *unless* the same exact value is both source and dest. It
turns out that almost nothing actually produces that pattern though.

We can hand craft test cases that more directly test this behavior of
course, and those are included. Also, note that there is a significant
missed optimization here -- we prove in many cases that there is
a non-volatile memcpy call with identical source and dest addresses. We
shouldn't prevent splitting the alloca in that case, and in fact we
should just remove such memcpy calls eagerly. I'll address that in
a subsequent commit.

llvm-svn: 164669
2012-09-26 07:41:40 +00:00
Nick Lewycky d9f7910671 Don't drop the alignment on a memcpy intrinsic when producing a store. This is
only a missed optimization opportunity if the store is over-aligned, but a
miscompile if the store's new type has a higher natural alignment than the
memcpy did. Fixes PR13920!

llvm-svn: 164641
2012-09-25 22:46:21 +00:00
Nick Lewycky 9f19349846 Don't try to promote the same alloca twice. Fixes PR13916!
Chandler, it's not obvious that it's okay that this alloca gets into the list
twice to begin with. Please review and see whether this is the fix you really
want, but I wanted to get a fix checked in quickly.

llvm-svn: 164634
2012-09-25 21:15:50 +00:00
Nick Lewycky 65b1f5055d Make this test check the transforms it's actually doing. Also add a test that it
doesn't transform the trivially unsafe case.

llvm-svn: 164617
2012-09-25 18:17:38 +00:00
Chandler Carruth 8b907e8acb Fix a case where SROA did not correctly detect dead PHI or selects due
to chains or cycles between PHIs and/or selects. Also add a couple of
really nice test cases reduced from Kostya's reports in PR13905 and
PR13906. Both are fixed by this patch.

llvm-svn: 164596
2012-09-25 10:03:40 +00:00
Nick Lewycky 42bca056e0 Don't forget that strcpy and friends return a pointer to the destination, so
it's not a dead store if that pointer is used. Whoops!
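
A minimal hypothetical case where the call must be kept:

  #include <string.h>

  // Hypothetical: the copy into buf may look dead locally, but the returned
  // pointer aliases buf and escapes to the caller, so the store is live.
  char *keep(char *buf, const char *s) {
    return strcpy(buf, s);
  }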

llvm-svn: 164583
2012-09-25 01:55:59 +00:00
Nick Lewycky 9f4729d331 Teach DSE that strcpy, strncpy, strcat and strncat are all stores which may be
dead.

llvm-svn: 164561
2012-09-24 22:09:10 +00:00
Richard Osborne 6fb4bd77e2 Add missing : in CHECK line.
llvm-svn: 164540
2012-09-24 17:22:43 +00:00
Richard Osborne 2fd29bfb90 Add missing check for presence of target data.
This avoids a crash in visitAllocaInst when target data isn't available.

llvm-svn: 164539
2012-09-24 17:10:03 +00:00
Chandler Carruth 92924fd28f Address one of the original FIXMEs for the new SROA pass by implementing
integer promotion analogous to vector promotion. When there is an
integer alloca being accessed both as its integer type and as a narrower
integer type, promote the narrower access to "insert" and "extract" the
smaller integer from the larger one, and make the integer alloca
a candidate for promotion.

In the new formulation, we don't care about target legal integer or use
thresholds to control things. Instead, we only perform this promotion to
an integer type which the frontend has already emitted a load or store
for. This bounds the scope and prevents optimization passes from
coalescing larger and larger entities into a single integer.
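
A small hypothetical C pattern that produces this kind of mixed-width access to a single local:

  #include <string.h>

  // Hypothetical: tmp is stored and loaded as a whole 64-bit value, but the
  // memcpy writes only its low 32 bits; that narrow store can be rewritten
  // as an "insert" into the wide integer so the alloca stays promotable.
  long long mix(long long x, int low) {
    long long tmp = x;
    memcpy(&tmp, &low, sizeof low);
    return tmp;
  }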

llvm-svn: 164479
2012-09-24 00:34:20 +00:00
Chandler Carruth e7a1ba5e8b Switch to a signed representation for the dynamic offsets while walking
across the uses of the alloca. It's entirely possible for negative
numbers to come up here, and in some rare cases simply doing the 2's
complement arithmetic isn't the correct decision. Notably, we can't zext
the index of the GEP. The definition of GEP is that these offsets are
sign extended or truncated to the size of the pointer, and then wrapping
2's complement arithmetic used.

This patch fixes an issue that comes up with *no* input from the
buildbots or bootstrap afaict. The only place where it manifested,
disturbingly, is Clang's own regression test suite. A reduced and
targeted collection of tests are added to cope with this. Note that I've
tried to pin down the potential cases of overflow, but may have missed
some cases. I've tried to add a few cases to test this, but it's hard
because LLVM has quite limited support for >64bit constructs.

llvm-svn: 164475
2012-09-23 11:43:14 +00:00
Chandler Carruth 225d4bdb07 Fix a case where the new SROA pass failed to zap dead operands to
selects with a constant condition. This resulted in the operands
remaining live through the SROA rewriter. Most of the time, this just
caused some dead allocas to persist and get zapped by later passes, but
in one case found by Joerg, it caused a crash when we tried to *promote*
the alloca despite it having this dead use. We already have the
mechanisms in place to handle this, just wire select up to them.

llvm-svn: 164427
2012-09-21 23:36:40 +00:00
Benjamin Kramer eba9aca5cd LoopIdiom: Give up when the loop is not in canonical form.
We rely on it when doing the transforms. This can happen when there is an
indirectbr in  the loop.

Fixes PR13892.

llvm-svn: 164383
2012-09-21 17:27:23 +00:00
Benjamin Kramer efb4d34bcf InstCombine: Make sure we use the pre-zext type when creating a constant of a value that is zext'd.
Fixes PR13250.

llvm-svn: 164377
2012-09-21 16:26:41 +00:00
Manman Ren 93ab64916f SimplifyCFG: sink common codes from IF, ELSE blocks down to END block.
We already have HoistThenElseCodeToIf, this patch implements
SinkThenElseCodeToEnd. When END block has only two predecessors and each
predecessor terminates with unconditional branches, we compare instructions in
IF and ELSE blocks backwards and check whether we can sink the common
instructions down.
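
A small hypothetical example of the pattern:

  void g(void);
  void h(void);

  // Hypothetical: both predecessors of the join block end with the same
  // store, which can be sunk below the if/else into the common successor.
  void f(int c, int *p, int x) {
    if (c) {
      g();
      *p = x;
    } else {
      h();
      *p = x;
    }
  }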

rdar://12191395

llvm-svn: 164325
2012-09-20 22:37:36 +00:00
Hans Wennborg f744fa917d SimplifyCFG: Don't generate invalid code for switch used to initialize
two variables where the first variable is returned and the second
ignored.

I don't think this occurs in practice (other passes should have cleaned
up the unused phi node), but it should still be handled correctly.

Also make the logic for determining if we should return early less
sketchy.

llvm-svn: 164225
2012-09-19 14:24:21 +00:00
Hans Wennborg ff9b5a8465 Move load_to_switch.ll to test/CodeGen/SPARC/
Because the test invokes llc -march=sparc, it needs to be in a directory
which is only run when the sparc target is built.

llvm-svn: 164211
2012-09-19 09:25:03 +00:00
Nadav Rotem 0b66119141 rename test
llvm-svn: 164210
2012-09-19 09:22:17 +00:00
Nadav Rotem 4eb3d4b2cf Prevent inlining of callees which allocate lots of memory into a recursive caller.
Example:

void foo() {
 ... foo();   // I'm recursive!

  bar();
}

void bar() {
  int a[1000];  // large stack size
}

rdar://10853263

llvm-svn: 164207
2012-09-19 08:08:04 +00:00
Hans Wennborg 02fbc71647 CodeGenPrep: turn lookup tables into switches for some targets.
This is a follow-up from r163302, which added a transformation to
SimplifyCFG that turns some switches into loads from lookup tables.

It was pointed out that some targets, such as GPUs and deeply embedded
targets, might not find this appropriate, but SimplifyCFG doesn't have
enough information about the target to decide this.

This patch adds the reverse transformation to CodeGenPrep: it turns
loads from lookup tables back into switches for targets where we do not
build jump tables (assuming these are also the targets where lookup
tables are inappropriate).

Hopefully we will eventually get to have target information in
SimplifyCFG, and then this CodeGenPrep transformation can be removed.

llvm-svn: 164206
2012-09-19 07:48:16 +00:00
Chandler Carruth 3f882d4cf5 Fix the last crasher I've gotten a reproduction for in SROA. This one
from the dragonegg build bots when we turned on the full version of the
pass. Included a much reduced test case for this pesky bug, despite
bugpoint's uncooperative behavior.

Also, I audited all the similar code I could find and didn't spot any
other cases where this mistake cropped up.

llvm-svn: 164178
2012-09-18 22:37:19 +00:00
Andrew Trick 402edbbe39 LSR critical edge splitting fix for PR13756.
llvm-svn: 164147
2012-09-18 17:51:33 +00:00
Chandler Carruth d356fd02a9 Fix getCommonType in a different way from the way I fixed it when
working on FCA splitting. Instead of refusing to form a common type when
there are uses of a subsection of the alloca as well as a use of the
entire alloca, just skip the subsection uses and continue looking for
a whole-alloca use with a type that we can use.

This produces slightly prettier IR I think, and also fixes the other
failure in the test.

llvm-svn: 164146
2012-09-18 17:49:37 +00:00
Benjamin Kramer d4d37db071 XFAIL SROA test until Chandler can get to it.
llvm-svn: 164128
2012-09-18 14:27:53 +00:00
Chandler Carruth a34f3567e0 Fix a warning in release builds and a test case I forgot to update with
a fix to getCommonType in the previous patch.

llvm-svn: 164120
2012-09-18 13:02:06 +00:00
Chandler Carruth 42cb9cb14f Add a major missing piece to the new SROA pass: aggressive splitting of
FCAs. This is essential in order to promote allocas that are used in
struct returns by frontends like Clang. The FCA load would block the
rest of the pass from firing, resulting is significant regressions with
the bullet benchmark in the nightly test suite.

Thanks to Duncan for repeated discussions about how best to do this, and
to both him and Benjamin for review.

This appears to have blocked many places where the pass tries to fire,
and so I expect somewhat different results with this fix added.

As with the last big patch, I'm including a change to enable the SROA by
default *temporarily*. Ben is going to remove this as soon as the LNT
bots pick up the patch. I'm just trying to get a round of LNT numbers
from the stable machines in the lab.

NOTE: Four clang tests are expected to fail in the brief window where
this is enabled. Sorry for the noise!

llvm-svn: 164119
2012-09-18 12:57:43 +00:00
Richard Osborne b68053e266 Fix instcombine to obey requested alignment when merging allocas.
llvm-svn: 164117
2012-09-18 09:31:44 +00:00
Manman Ren 5657555357 PGO: preserve branch-weight metadata when simplifying Switch to a sub, an icmp
and a conditional branch; also when removing dead cases from a switch.

llvm-svn: 164084
2012-09-18 00:47:33 +00:00
Manman Ren ce48ea7e25 PGO: preserve branch-weight metadata when simplifying Switch
Handle the case when we split the default edge if the default target has "icmp"
and an unconditional branch.

llvm-svn: 164076
2012-09-17 23:07:43 +00:00
Manman Ren 774246a3a9 PGO: preserve branch-weight metadata when simplifying SwitchOnSelect.
llvm-svn: 164068
2012-09-17 22:28:55 +00:00
Manman Ren 2d4c10fc49 PGO: preserve branch-weight metadata when simplifying two branches with a common
destination in SimplifyCondBranchToCondBranch.

llvm-svn: 164054
2012-09-17 21:30:40 +00:00
Chandler Carruth 70b44c5ccf Port the SSAUpdater-based promotion logic from the old SROA pass to the
new one, and add support for running the new pass in that mode and in
that slot of the pass manager. With this the new pass can completely
replace the old one within the pipeline.

The strategy for enabling or disabling the SSAUpdater logic is to do it
by making the requirement of the domtree analysis optional. By default,
it is required and we get the standard mem2reg approach. This is usually
the desired strategy when run in stand-alone situations. Within the
CGSCC pass manager, we disable requiring of the domtree analysis and
consequentially trigger fallback to the SSAUpdater promotion.

In theory this would allow the pass to re-use a domtree if one happened
to be available even when run in a mode that doesn't require it. In
practice, it lets us have a single pass rather than two which was
simpler for me to wrap my head around.

There is a hidden flag to force the use of the SSAUpdater code path for
the purpose of testing. The primary testing strategy is just to run the
existing tests through that path. One notable difference is that it has
custom code to handle lifetime markers, and one of the tests has been
enhanced to exercise that code.

This has survived a bootstrap and the test suite without serious
correctness issues, however my run of the test suite produced *very*
alarming performance numbers. I don't entirely understand or trust them
though, so more investigation is on-going.

To aid my understanding of the performance impact of the new SROA now
that it runs throughout the optimization pipeline, I'm enabling it by
default in this commit, and will disable it again once the LNT bots have
picked up one iteration with it. I want to get those bots (which are
much more stable) to evaluate the impact of the change before I jump to
any conclusions.

NOTE: Several Clang tests will fail because they run -O3 and check the
result's order of output. They'll go back to passing once I disable it
again.

llvm-svn: 163965
2012-09-15 11:43:14 +00:00
Manman Ren bfb9d435e4 PGO: preserve branch-weight metadata when simplifying two branches with a common
destination.

Updated previous implementation to fix a case not covered:
// PBI: br i1 %x, TrueDest, BB
// BI:  br i1 %y, TrueDest, FalseDest
The other case was handled correctly.
// PBI: br i1 %x, BB, FalseDest
// BI:  br i1 %y, TrueDest, FalseDest

Also tried to use 64-bit arithmetic instead of APInt with scale to simplify the
computation. Let me know if you have other opinions about this.

llvm-svn: 163954
2012-09-15 00:39:57 +00:00
Manman Ren 8691e5220b PGO: preserve branch-weight metadata when simplifying a switch with a single
case to a conditional branch and when removing dead cases.

llvm-svn: 163942
2012-09-14 21:53:06 +00:00
Alex Rosenberg af2808cb72 Review feedback from Duncan Sands. Alphabetize includes and simplify
lit config.

llvm-svn: 163928
2012-09-14 19:19:57 +00:00
Manman Ren d81b8e88e3 PGO: preserve branch-weight metadata when merging two switches where
the default target of the first switch is not the basic block the second switch
is in (PredDefault != BB).

llvm-svn: 163916
2012-09-14 17:29:56 +00:00
Chandler Carruth 1b398ae0ae Introduce a new SROA implementation.
This is essentially a ground up re-think of the SROA pass in LLVM. It
was initially inspired by a few problems with the existing pass:
- It is subject to the bane of my existence in optimizations: arbitrary
  thresholds.
- It is overly conservative about which constructs can be split and
  promoted.
- The vector value replacement aspect is separated from the splitting
  logic, missing many opportunities where splitting and vector value
  formation can work together.
- The splitting is entirely based around the underlying type of the
  alloca, despite this type often having little to do with the reality
  of how that memory is used. This is especially prevalent with unions
  and base classes where we tail-pack derived members.
- When splitting fails (often due to the thresholds), the vector value
  replacement (again because it is separate) can kick in for
  preposterous cases where we simply should have split the value. This
  results in forming i1024 and i2048 integer "bit vectors" that
  tremendously slow down subsequent IR optimizations (due to large
  APInts) and impede the backend's lowering.

The new design takes an approach that fundamentally is not susceptible
to many of these problems. It is the result of a discussion between
myself and Duncan Sands over IRC about how to preemptively avoid these
types of problems and how to do SROA in a more principled way. Since
then, it has evolved and grown, but this remains an important aspect: it
fixes real world problems with the SROA process today.

First, the transform of SROA actually has little to do with replacement.
It has more to do with splitting. The goal is to take an aggregate
alloca and form a composition of scalar allocas which can replace it and
will be most suitable to the eventual replacement by scalar SSA values.
The actual replacement is performed by mem2reg (and in the future
SSAUpdater).

The splitting is divided into four phases. The first phase is an
analysis of the uses of the alloca. This phase recursively walks uses,
building up a dense datastructure representing the ranges of the
alloca's memory actually used and checking for uses which inhibit any
aspects of the transform such as the escape of a pointer.

Once we have a mapping of the ranges of the alloca used by individual
operations, we compute a partitioning of the used ranges. Some uses are
inherently splittable (such as memcpy and memset), while scalar uses are
not splittable. The goal is to build a partitioning that has the minimum
number of splits while placing each unsplittable use in its own
partition. Overlapping unsplittable uses belong to the same partition.
This is the target split of the aggregate alloca, and it maximizes the
number of scalar accesses which become accesses to their own alloca and
candidates for promotion.

Third, we re-walk the uses of the alloca and assign each specific memory
access to all the partitions touched so that we have dense use-lists for
each partition.

Finally, we build a new, smaller alloca for each partition and rewrite
each use of that partition to use the new alloca. During this phase the
pass will also work very hard to transform uses of an alloca into a form
suitable for promotion, including forming vector operations, speculating
loads through PHI nodes and selects, etc.

After splitting is complete, each newly refined alloca that is
a candidate for promotion to a scalar SSA value is run through mem2reg.
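
A minimal hypothetical example of the kind of input the pass is aimed at:

  // Hypothetical: the aggregate local is only ever used field by field, so
  // it can be split into one alloca per field and then promoted by mem2reg.
  struct pair { int a; float b; };

  float f(int x, float y) {
    struct pair p;
    p.a = x;
    p.b = y;
    return p.a + p.b;
  }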

There are lots of reasonably detailed comments in the source code about
the design and algorithms, and I'm going to be trying to improve them in
subsequent commits to ensure this is well documented, as the new pass is
in many ways more complex than the old one.

Some of this is still a WIP, but the current state is reasonably stable.
It has passed bootstrap, the nightly test suite, and Duncan has run it
successfully through the ACATS and DragonEgg test suites. That said, it
remains behind a default-off flag until the last few pieces are in
place, and full testing can be done.

Specific areas I'm looking at next:
- Improved comments and some code cleanup from reviews.
- SSAUpdater and enabling this pass inside the CGSCC pass manager.
- Some datastructure tuning and compile-time measurements.
- More aggressive FCA splitting and vector formation.

Many thanks to Duncan Sands for the thorough final review, as well as
Benjamin Kramer for lots of review during the process of writing this
pass, and Daniel Berlin for reviewing the data structures and algorithms
and general theory of the pass. Also, several other people on IRC, over
lunch tables, etc for lots of feedback and advice.

llvm-svn: 163883
2012-09-14 09:22:59 +00:00
Dan Gohman 3f553c21eb Handle the new !tbaa.struct metadata tags when converting a memcpy into scalar
loads and stores.

llvm-svn: 163844
2012-09-13 21:51:01 +00:00
Benjamin Kramer 15a257dadd MemCpyOpt: When forming a memset from stores also take GEP constexprs into account.
This is common when storing to global variables.
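
A hypothetical example of the pattern:

  // Hypothetical: each store below is through a constant GEP into the
  // global array; together they zero a contiguous range and can be merged
  // into a single memset.
  char buf[8];

  void clear_prefix(void) {
    buf[0] = 0;
    buf[1] = 0;
    buf[2] = 0;
    buf[3] = 0;
  }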

llvm-svn: 163809
2012-09-13 16:29:49 +00:00
Dan Gohman 7c84dad80a Detect overflow in the path count computation. rdar://12277446.
llvm-svn: 163739
2012-09-12 20:45:17 +00:00
Manman Ren 49dbe255e6 PGO: preserve branch-weight metadata when removing a case which jumps
to the default target.

llvm-svn: 163724
2012-09-12 17:04:11 +00:00
Manman Ren 571d9e4b80 SimplifyCFG: preserve branch-weight metadata when creating a new switch from
a pair of switch/branch where both depend on the value of the same variable and
the default case of the first switch/branch goes to the second switch/branch.

Code clean up and fixed a few issues:
1> handling the case where some cases of the 2nd switch are invalidated
2> correctly calculate the weight for the 2nd switch when it is a conditional eq

Testing case is modified from Alastair's original patch.

llvm-svn: 163635
2012-09-11 17:43:35 +00:00
Alex Rosenberg 04b43aab43 Add a pass that renames everything with metasyntactic names. This works well after using bugpoint to reduce the confusion presented by the original names, which no longer mean what they used to.
llvm-svn: 163592
2012-09-11 02:46:18 +00:00
Andrew Trick d3b4d2cb76 Remove an incorrect assert during branch weight propagation.
Patch and test case by Alastair Murray!

llvm-svn: 163437
2012-09-08 00:07:26 +00:00
Hans Wennborg feb4d07d88 Fix switch_to_lookup_table.ll test from r163302.
The lookup tables did not get built in a deterministic order.
This makes them get built in the order that the corresponding phi nodes
were found.

llvm-svn: 163305
2012-09-06 10:10:35 +00:00
Hans Wennborg 8a62fc5294 Build lookup tables for switches (PR884)
This adds a transformation to SimplifyCFG that attempts to turn switch
instructions into loads from lookup tables. It works on switches that
are only used to initialize one or more phi nodes in a common successor
basic block, for example:

  int f(int x) {
    switch (x) {
    case 0: return 5;
    case 1: return 4;
    case 2: return -2;
    case 5: return 7;
    case 6: return 9;
    default: return 42;
    }
  }

This speeds up the code by removing the hard-to-predict jump, and
reduces code size by removing the code for the jump targets.

llvm-svn: 163302
2012-09-06 09:43:28 +00:00
Manman Ren f3fedb6935 JumpThreading: when the default destination is the destination of some cases in a
switch, make sure we include the value for those cases when calculating the edge
value from the switch to the default destination.

rdar://12241132

llvm-svn: 163270
2012-09-05 23:45:58 +00:00
Dan Gohman df476e5e93 Make provenance checking conservative in cases when
pointers-to-strong-pointers may be in play. These can lead to retains and
releases happening in unstructured ways, foiling the optimizer. This fixes
rdar://12150909.

llvm-svn: 163180
2012-09-04 23:16:20 +00:00
Nadav Rotem 03dcd85b56 LICM may hoist an instruction with undefined behavior above a trap.
Scan the body of the loop and find instructions that may trap.
Use this information when deciding if it is safe to hoist or sink instructions.
Notice that we can optimize the search of instructions that may throw in the case of nested loops.
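
A hypothetical example of an instruction that must not be hoisted:

  void check_or_exit(int i);   // hypothetical helper that may not return

  // The division x / y is loop-invariant, but it may trap (divide by zero),
  // so it cannot be hoisted above the call that could exit the program
  // before any division would have executed.
  int sum(const int *a, int n, int x, int y) {
    int s = 0;
    for (int i = 0; i < n; ++i) {
      check_or_exit(i);
      s += a[i] + x / y;
    }
    return s;
  }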

rdar://11518836

llvm-svn: 163132
2012-09-04 10:25:04 +00:00
Bob Wilson dcc54decd5 Fix more fallout from r158919, similar to PR13547.
This code used to only handle malloc-like calls, which do not read memory.
r158919 changed it to check isNoAliasFn(), which includes strdup-like and
realloc-like calls, but it was not checking for dependencies on the memory
read by those calls.

llvm-svn: 163106
2012-09-03 05:15:15 +00:00
Benjamin Kramer 599a4bb6ea LoopRotation: Make the brute force DomTree update more brute force.
We update until we hit a fixpoint. This is probably slow but also
slightly simplifies the code. It should also fix the occasional
invalid domtrees observed when building with expensive checking.

I couldn't find a case where this had a measurable slowdown, but
if someone finds a pathological case where it does we may have
to find a cleverer way of updating dominators here.

Thanks to Duncan for the test case.

llvm-svn: 163091
2012-09-02 11:57:22 +00:00
Michael Gottesman 2dc1120f15 [llvm] Updated the test fold-vector-select so that we test the vector selects exhaustively.
llvm-svn: 162953
2012-08-30 23:11:49 +00:00
Benjamin Kramer 17ce7117d0 Fix test case.
llvm-svn: 162913
2012-08-30 15:42:45 +00:00
Benjamin Kramer afdfdb5cff LoopRotate: Also rotate loops with multiple exits.
The old PHI updating code in loop-rotate was replaced with SSAUpdater a while
ago, it has no problems with complex PHIs. What had to be fixed is detecting
whether a loop was already rotated and updating dominators when multiple exits
were present.

This change increases overall code size a bit, mostly due to additional loop
unrolling opportunities. Passes test-suite and selfhost with -verify-dom-info.
Fixes PR7447.

Thanks to Andy for the input on the domtree updating code.

llvm-svn: 162912
2012-08-30 15:39:42 +00:00
Nadav Rotem d5f5777b77 It is illegal to transform (sdiv (ashr X c1) c2) -> (sdiv x (2^c1 * c2)),
because C always rounds towards zero.
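
For example, with X = -5, c1 = 1, c2 = 1 (illustrative values, not from a test case):

  ashr(-5, 1)       = -3                   // ashr rounds towards negative infinity
  sdiv(-3, 1)       = -3
  sdiv(-5, 2^1 * 1) = sdiv(-5, 2) = -2     // differs, so the fold is invalid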

Thanks Dirk and Ben.

llvm-svn: 162899
2012-08-30 11:23:20 +00:00
Benjamin Kramer 8bcc971174 Make MemoryBuiltins aware of TargetLibraryInfo.
This disables malloc-specific optimization when -fno-builtin (or -ffreestanding)
is specified. This has been a problem for a long time but became more severe
with the recent memory builtin improvements.

Since the memory builtin functions are used everywhere, this required passing
TLI in many places. This means that functions that now have an optional TLI
argument, like RecursivelyDeleteTriviallyDeadInstructions, won't remove dead
mallocs anymore if the TLI argument is missing. I've updated most passes to do
the right thing.

Fixes PR13694 and probably others.

llvm-svn: 162841
2012-08-29 15:32:21 +00:00
Benjamin Kramer 9c0a807c27 InstCombine: Guard the transform introduced in r162743 against large ints and non-const shifts.
llvm-svn: 162751
2012-08-28 13:08:13 +00:00
Nadav Rotem d457787fed Make sure that we don't call getZExtValue on values > 64 bits.
Thanks Benjamin for noticing this.

llvm-svn: 162749
2012-08-28 12:23:22 +00:00
Nadav Rotem 11935b29f3 Teach InstCombine to canonicalize [SU]div+[AL]shl patterns.
For example:
  %1 = lshr i32 %x, 2
  %2 = udiv i32 %1, 100

rdar://12182093

llvm-svn: 162743
2012-08-28 10:01:43 +00:00
Benjamin Kramer e07728b936 SimplifyLibCalls: Give all safely-shrinkable libcalls the same treatment.
llvm-svn: 162383
2012-08-22 19:39:15 +00:00
Chad Rosier 1df1fb511f Whitespace.
llvm-svn: 162370
2012-08-22 17:34:11 +00:00
Chad Rosier 671dc096ae Add test case for r162368.
llvm-svn: 162369
2012-08-22 17:31:04 +00:00
Chandler Carruth c908ca1766 Port the global copy optimization from the SROA pass to InstCombine.
This optimization is really just replacing allocas wholesale with
globals; there is no scalarization.

The underlying motivation for this patch is to simplify the SROA pass
and focus it on splitting and promoting allocas.

llvm-svn: 162271
2012-08-21 08:39:44 +00:00
Benjamin Kramer 9d03242fcf InstCombine: Fix a crasher when encountering a function pointer.
llvm-svn: 162180
2012-08-18 22:04:34 +00:00
Benjamin Kramer 8c2a733c55 InstCombine: Add a couple of fabs identities for comparing with 0.0.
llvm-svn: 162174
2012-08-18 20:06:47 +00:00
Benjamin Kramer 000132454c SimplifyLibcalls: Add fabs and trunc to the list of libcalls that are safe to shrink from double to float.
llvm-svn: 162173
2012-08-18 19:27:32 +00:00
Benjamin Kramer 34764fe2e4 MemoryBuiltins: Properly guard ObjectSizeOffsetVisitor against cycles in the IR.
The previous fix only checked for simple cycles, use a set to catch longer
cycles too.

Drop the broken check from the ObjectSizeOffsetEvaluator. The BoundsChecking
pass doesn't have to deal with invalid IR like InstCombine does.

llvm-svn: 162120
2012-08-17 19:26:41 +00:00
Benjamin Kramer 4901f0d2a2 Guard MemoryBuiltins against self-looping GEPs, which can occur in unreachable code due to constant propagation.
Fixes PR13621.

llvm-svn: 162098
2012-08-17 14:16:37 +00:00
Benjamin Kramer 2f47a3fb07 Fix broken check lines.
I really need to find a way to automate this, but I can't come up with a regex
that has no false positives while handling tricky cases like custom check
prefixes.

llvm-svn: 162097
2012-08-17 12:28:26 +00:00
Rafael Espindola cc80cdebb9 Teach GVN to reason about edges dominating uses. This allows it to handle cases
where some fact like a=b dominates a use in a phi, but doesn't dominate the
basic block itself.

This feature could also be implemented by splitting critical edges, but at least
with the current algorithm reasoning about the dominance directly is faster.

The time for running "opt -O2" in the testcase in pr10584 is 1.003 times slower
and on gcc as a single file it is 1.0007 times faster.
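
A small hypothetical example of such a case:

  // Hypothetical: on the edge taken when a == b, the incoming value of the
  // return block's phi is `a`; the fact a == b dominates that use, so GVN
  // can replace it with `b`, even though the comparison block does not
  // dominate the return block itself.
  int pick(int a, int b, int c) {
    if (a == b)
      c = a;
    return c;
  }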

llvm-svn: 162023
2012-08-16 15:09:43 +00:00
Michael Liao 69e172a6f0 fix infinite loop in instcombine with more than 4GB memcpy
- the memcpy size is wrongly truncated to 32 bits, so an 8GB memcpy is
  treated as a 0-sized memcpy
- as 0-sized memcpy/memset is already removed before SimplifyMemTransfer
  and SimplifyMemSet in visitCallInst, replace 0 checking with
  assertions.
- replace getZExtValue() with getLimitedValue() according to
  Eli Friedman

llvm-svn: 161923
2012-08-15 03:49:59 +00:00
Craig Topper 2a40418a99 Change greater than to greater than or equal so that an identically sized store to the same offset is treated as completely overwriting.
llvm-svn: 161857
2012-08-14 07:32:05 +00:00
Nadav Rotem 70409991bc During the CodeGenPrepare we often lower intrinsics (such as objsize)
and allow some optimizations to turn conditional branches into unconditional.
This commit adds a simple control-flow optimization which merges two consecutive
basic blocks which are connected by a single edge. This allows the codegen to
operate on larger basic blocks.

rdar://11973998

llvm-svn: 161852
2012-08-14 05:19:07 +00:00
Eli Friedman 4c923b3b3f The normal edge of an invoke is not allowed to branch to a block with a
landingpad.  Enforce it in the verifier, and fix the regression tests to match.

llvm-svn: 161697
2012-08-10 20:55:20 +00:00
Pete Cooper 0deca6be79 Fix crash when doing LTO on Bullet. Dynamic GEPs in SROA were incorrectly being applied to all accesses to an alloca, not just the ones which read from the GEP. Thanks to Evan for reducing the test. rdar://11861001
llvm-svn: 161654
2012-08-10 03:26:36 +00:00
Eli Friedman 08ec0a8122 isAllocLikeFn is allowed to return true for functions which read memory; make
sure we account for that correctly in DeadStoreElimination.  Fixes a regression
from r158919.  PR13547.

llvm-svn: 161468
2012-08-08 02:17:32 +00:00
Dan Gohman b948736002 Avoid recomputing the unique exit blocks and their insert points when doing
multiple scalar promotions on a single loop. This also has the effect of
preserving the order of stores sunk out of loops, which is aesthetically
pleasing, and it happens to fix the testcase in PR13542, though it doesn't
fix the underlying problem.

llvm-svn: 161459
2012-08-08 00:00:26 +00:00
Bob Wilson 61f3ad5759 Fix a serious typo in InstCombine's optimization of comparisons.
An unsigned value converted to floating-point will always be greater than
a negative constant.  Unfortunately InstCombine reversed the check so that
unsigned values were being optimized to always be greater than all positive
floating-point constants.  <rdar://problem/12029145>

llvm-svn: 161452
2012-08-07 22:35:16 +00:00
Benjamin Kramer c99d0e9186 PR13095: Give an inline cost bonus to functions using byval arguments.
We give a bonus for every argument because the argument setup is not needed
anymore when the function is inlined. With this patch we interpret byval
arguments as a compact representation of many arguments. The byval argument
setup is implemented in the backend as an inline memcpy, so to model the
cost as accurately as possible we take the number of pointer-sized elements
in the byval argument and give a bonus of 2 instructions for every one of
those. The bonus is capped at 8 elements, which is the number of stores
at which the x86 backend switches from an expanded inline memcpy to a real
memcpy. It would be better to use the real memcpy threshold from the backend,
but it's not available via TargetData.

This change brings the performance of c-ray in line with gcc 4.7. The included
test case tries to reproduce the c-ray problem to catch regressions for this
benchmark early, its performance is dominated by the inline decision of a
specific call.

This only has a small impact on most code, more on x86 and arm than on x86_64
due to the way the ABI works. When building LLVM for x86 it gives a small
inline cost boost to virtually any function using StringRef or STL allocators,
but only a 0.01% increase in overall binary size. The size of gcc compiled by
clang actually shrunk by a couple bytes with this patch applied, but not
significantly.
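
A hypothetical sketch of the kind of call this affects:

  // Hypothetical: on targets where `s` is passed as a byval argument, the
  // call site performs an implicit memcpy of the whole struct, so inlining
  // `length` removes that copy in addition to the usual call overhead.
  struct buffer { char data[64]; int len; };

  static int length(struct buffer s) { return s.len; }

  int caller(struct buffer *p) { return length(*p); }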

llvm-svn: 161413
2012-08-07 11:13:19 +00:00