Commit Graph

3559 Commits

Author SHA1 Message Date
Nadav Rotem 3f7c4f36ba LoopVectorizer: Optimize the vectorization of consecutive memory access when the iteration step is -1
llvm-svn: 171114
2012-12-26 19:08:17 +00:00
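
The commit does not include an example, but a loop with an iteration step of -1 presumably looks like the sketch below (function and array names are hypothetical); with consecutive reverse accesses the vectorizer can load contiguous chunks and reverse them instead of gathering element by element.

    #include <cstddef>

    // Hypothetical loop whose induction variable steps by -1; the memory
    // accesses are consecutive, just traversed in reverse order.
    void reverse_update(float *a, const float *b, std::size_t n) {
      for (std::size_t i = n; i-- > 0;)   // i runs n-1, n-2, ..., 0
        a[i] = b[i] + 1.0f;
    }
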
Hal Finkel 30e95a8ebb BBVectorize: Use VTTI to compute costs for intrinsics vectorization
For the time being this includes only some dummy test cases. Once the
generic implementation of the intrinsics cost function does something other
than assuming scalarization in all cases, or some target specializes the
interface, some real test cases can be added.

Also, for consistency, I changed the type of IID from unsigned to Intrinsic::ID
in a few other places.

llvm-svn: 171079
2012-12-26 01:36:57 +00:00
Hal Finkel b44f890133 LoopVectorize: Enable vectorization of the fmuladd intrinsic
llvm-svn: 171076
2012-12-25 23:21:29 +00:00
Hal Finkel 2a456112ec BBVectorize: Enable vectorization of the fmuladd intrinsic
llvm-svn: 171075
2012-12-25 22:36:08 +00:00
Nick Lewycky fb43258080 Fix typo "Makre" -> "Make".
llvm-svn: 171043
2012-12-24 19:55:47 +00:00
Nadav Rotem 5f7c12cfbd LoopVectorizer: When checking for vectorizable types, also check
the StoreInst operands.

PR14705.

llvm-svn: 171023
2012-12-24 09:14:18 +00:00
Nadav Rotem bd5d1d832a LoopVectorizer: Fix an endless loop in the code that looks for reductions.
The bug was in the code that detects PHIs in if-then-else block sequence.

PR14701.

llvm-svn: 171008
2012-12-24 01:22:06 +00:00
Nadav Rotem cf9999d9d5 CostModel: Change the default target-independent implementation for finding
the cost of arithmetic functions. We now assume that the cost of arithmetic
operations that are marked as Legal or Promote is low, while ops that are
marked as Custom have a higher cost.

llvm-svn: 171002
2012-12-23 17:31:23 +00:00
Nadav Rotem 2cade68025 Loop Vectorizer: Update the cost model of scatter/gather operations and make
them more expensive.

llvm-svn: 170995
2012-12-23 07:23:55 +00:00
Nadav Rotem e7785686a5 Fix a bug in the code that checks if we can vectorize loops while using dynamic
memory bound checks.  Before the fix we were able to vectorize this loop from
the Livermore Loops benchmark, even though x[k] depends on x[k-1] and the loop
therefore must not be vectorized:

for ( k=1 ; k<n ; k++ )
  x[k] = x[k-1] + y[k];

llvm-svn: 170811
2012-12-21 00:07:35 +00:00
Nadav Rotem 2ababf68d7 LoopVectorize: Fix a bug in the scalarization of instructions.
Before if-conversion we could determine whether a value was loop invariant
by checking whether it was defined inside the loop's single basic block. Now
that loops can have multiple blocks this check is incorrect.

This fixes External/SPEC/CINT95/099_go/099_go

llvm-svn: 170756
2012-12-20 20:24:40 +00:00
James Molloy 4f6fb953a7 Add a new attribute, 'noduplicate'. If a function contains a noduplicate call, the call cannot be duplicated - Jump threading, loop unrolling, loop unswitching, and loop rotation are inhibited if they would duplicate the call.
Similarly, inlining of the function is inhibited if that would duplicate the call (in particular, inlining is still allowed when there is only one callsite and the function has internal linkage).

llvm-svn: 170704
2012-12-20 16:04:27 +00:00
Paul Redmond 5917f4c715 Transform (x&C)>V into (x&C)!=0 where possible
When the lowest set bit of C is greater than V, (x&C) must be greater than V
if it is not zero, so the comparison can be simplified.

Although this was suggested in Target/X86/README.txt, it benefits any
architecture with a directly testable form of AND.

Patch by Kevin Schoedel

llvm-svn: 170576
2012-12-19 19:47:13 +00:00
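
As a quick sanity check (an illustrative sketch, not part of the commit), the claimed equivalence can be verified exhaustively for 8-bit unsigned values:

    #include <cassert>

    // Exhaustively check that, for unsigned x, (x & C) > V  <=>  (x & C) != 0
    // whenever the lowest set bit of C is greater than V.
    int main() {
      for (unsigned C = 1; C < 256; ++C) {
        unsigned lowBit = C & -C;                 // lowest set bit of C
        for (unsigned V = 0; V < lowBit; ++V)     // i.e. lowBit > V
          for (unsigned x = 0; x < 256; ++x)
            assert(((x & C) > V) == ((x & C) != 0));
      }
      return 0;
    }
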
Benjamin Kramer ae0bb61053 Make TargetLowering::getTypeConversion more resilient against odd illegal MVTs.
- An MVT can become an EVT when being split (e.g. v2i8 -> v1i8, the latter doesn't exist)
- Return the scalar value when an MVT is scalarized (v1i64 -> i64)

Fixes PR14639ff.

llvm-svn: 170546
2012-12-19 14:34:28 +00:00
Shuxin Yang 37a1efe1c6 rdar://12801297
InstCombine for unsafe floating-point add/sub.

llvm-svn: 170471
2012-12-18 23:10:12 +00:00
Benjamin Kramer f0e5d2f032 LoopVectorize: Emit reductions as log2(vectorsize) shuffles + vector ops instead of scalar operations.
For example on x86 with SSE4.2 a <8 x i8> add reduction becomes
	movdqa	%xmm0, %xmm1
	movhlps	%xmm1, %xmm1            ## xmm1 = xmm1[1,1]
	paddw	%xmm0, %xmm1
	pshufd	$1, %xmm1, %xmm0        ## xmm0 = xmm1[1,0,0,0]
	paddw	%xmm1, %xmm0
	phaddw	%xmm0, %xmm0
	pextrb	$0, %xmm0, %edx

instead of
	pextrb	$2, %xmm0, %esi
	pextrb	$0, %xmm0, %edx
	addb	%sil, %dl
	pextrb	$4, %xmm0, %esi
	addb	%dl, %sil
	pextrb	$6, %xmm0, %edx
	addb	%sil, %dl
	pextrb	$8, %xmm0, %esi
	addb	%dl, %sil
	pextrb	$10, %xmm0, %edi
	pextrb	$14, %xmm0, %edx
	addb	%sil, %dil
	pextrb	$12, %xmm0, %esi
	addb	%dil, %sil
	addb	%sil, %dl

llvm-svn: 170439
2012-12-18 18:40:20 +00:00
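
For intuition, the shape of the two strategies can be sketched in plain C++ (this only illustrates the log2(vectorsize) tree reduction versus the per-lane scalar chain; it is not the code the vectorizer emits):

    #include <array>
    #include <cstdint>
    #include <cstdio>

    // Tree reduction: log2(8) = 3 "fold the high half onto the low half" steps.
    static uint8_t reduceTree(std::array<uint8_t, 8> v) {
      for (unsigned half = 4; half >= 1; half /= 2)
        for (unsigned i = 0; i < half; ++i)
          v[i] = static_cast<uint8_t>(v[i] + v[i + half]);
      return v[0];
    }

    // Scalar chain: one extract + add per lane.
    static uint8_t reduceScalar(const std::array<uint8_t, 8> &v) {
      uint8_t sum = 0;
      for (uint8_t x : v)
        sum = static_cast<uint8_t>(sum + x);
      return sum;
    }

    int main() {
      std::array<uint8_t, 8> v{1, 2, 3, 4, 5, 6, 7, 8};
      std::printf("%d == %d\n", reduceTree(v), reduceScalar(v));  // 36 == 36
    }
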
Nadav Rotem cb23342876 Rename the test so that we can add additional vectors-of-pointers tests
into the same file in the future.

llvm-svn: 170414
2012-12-18 05:50:54 +00:00
Nadav Rotem a5024fc3e1 SROA: Replace calls to getScalarSizeInBits to DataLayout's API because
getScalarSizeInBits could not handle vectors of pointers.

llvm-svn: 170412
2012-12-18 05:23:31 +00:00
Chandler Carruth e3f4119b06 Fix another SROA crasher, PR14601.
This was a silly oversight, we weren't pruning allocas which were used
by variable-length memory intrinsics from the set that could be widened
and promoted as integers. Fix that.

llvm-svn: 170353
2012-12-17 18:48:07 +00:00
Chandler Carruth 21eb4e96c2 Teach the rewriting of memcpy calls to support subvector copies.
This also cleans up a bit of the memcpy call rewriting by sinking some
irrelevant code further down and making the call-emitting code a bit
more concrete.

Previously, memcpy of a subvector would actually miscompile (!!!) the
copy into a single vector element copy. I have no idea how this ever
worked. =/ This is the memcpy half of PR14478 which we probably weren't
noticing previously because it didn't actually assert.

The rewrite relies on the newly refactored insert- and extractVector
functions to do the heavy lifting, and those are the same as used for
loads and stores which makes the test coverage a bit more meaningful
here.

llvm-svn: 170338
2012-12-17 14:51:24 +00:00
Chandler Carruth cacda256a1 Fix a secondary bug I introduced while fixing the first part of PR14478.
The first half of fixing this bug was actually in r170328, but was
entirely coincidental. It did however get me to realize the nature of
the bug, and adapt the test case to test more interesting behavior. In
turn, that uncovered the rest of the bug which I've fixed here.

This should fix two new asserts that showed up in the vectorize nightly
tester.

llvm-svn: 170333
2012-12-17 14:03:01 +00:00
Chandler Carruth ccca504f3a Fix the first part of PR14478: memset now works.
PR14478 highlights a serious problem in SROA that simply wasn't being
exercised due to a lack of vector input code mixed with C-library
function calls. Part of SROA was written carefully to handle subvector
accesses via memset and memcpy, but the rewriter never grew support for
this. Fixing it required refactoring the subvector access code in other
parts of SROA so it could be shared, and then fixing the splat formation
logic and using subvector insertion (this patch).

The PR isn't quite fixed yet, as memcpy is still broken in the same way.
I'm starting on that series of patches now.

Hopefully this will be enough to bring the bullet benchmark back to life
with the bb-vectorizer enabled, but that may require fixing memcpy as
well.

llvm-svn: 170301
2012-12-17 04:07:37 +00:00
Chandler Carruth c50394fcfa Add a corollary test for PR14572. We got this code path correct already.
llvm-svn: 170271
2012-12-15 09:31:54 +00:00
Chandler Carruth 067edd342f Relax an overly aggressive assert to fix PR14572.
The alloca width is based on the alloc size, not the type size.

llvm-svn: 170270
2012-12-15 09:26:06 +00:00
Michael Ilseman e2754dc887 Add back FoldOpIntoPhi optimizations with fix. Included test cases to help catch these errors and to test the presence of the optimization itself
llvm-svn: 170248
2012-12-14 22:08:26 +00:00
Nadav Rotem aa3e2a907e Fix a crash in ValueTracking on vectors of pointers.
llvm-svn: 170240
2012-12-14 20:43:49 +00:00
Shuxin Yang f8e9a5a061 rdar://12753946
Implement rule : "x * (select cond 1.0, 0.0) -> select cond x, 0.0"

llvm-svn: 170226
2012-12-14 18:46:06 +00:00
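
At the source level the rule can be pictured roughly as follows (a hedged sketch; it relies on relaxed floating-point assumptions, since for NaN or infinite x the two forms differ):

    // Illustration only: under fast-math style assumptions, selecting first and
    // skipping the multiply matches multiplying by a 1.0/0.0 selector.
    double before(double x, bool cond) { return x * (cond ? 1.0 : 0.0); }
    double after (double x, bool cond) { return cond ? x : 0.0; }
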
NAKAMURA Takumi 38d2b2442f Revert r170020, "Simplify negated bit test", for now.
This assumed (1 << n) is always non-zero; consider the case where n is greater
than or equal to the word size. Although I know such a shift is undefined, this
transform made the undefined behavior harder to see.

This led to unexpected behavior in clang, with some failures. I will investigate
fixing the undefined shl in clang.

llvm-svn: 170128
2012-12-13 14:28:16 +00:00
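
To make the hazard concrete (an illustrative sketch, not the reverted code): when n can reach or exceed the bit width, 1 << n is not guaranteed to produce a non-zero value, so any rewrite built on that assumption is unsound.

    #include <cstdint>
    #include <cstdio>

    int main() {
      uint32_t x = 0xFFFFFFFFu;
      unsigned n = 35;      // n >= 32: "1u << n" is undefined behaviour in C/C++,
                            // and the corresponding shl has no well-defined result
                            // in LLVM IR, so (1 << n) cannot be assumed non-zero.
      if (n < 32)           // a guarded shift is only meaningful when n < bit width
        std::printf("bit set: %d\n", (x & (1u << n)) != 0);
      else
        std::printf("shift amount %u is out of range\n", n);
    }
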
Quentin Colombet c0dba2035a Take into account minimize size attribute in the inliner.
Better control the inlining of functions when the caller function has the MinSize attribute.
Basically, when the caller function has this attribute, we do not "force" the inlining
of callee functions carrying the InlineHint attribute (i.e., functions defined with
the inline keyword).

llvm-svn: 170065
2012-12-13 01:05:25 +00:00
Nadav Rotem 36510f7194 Teach the cost model about the optimization in r169904: Truncation of induction variables costs the same as scalar trunc.
llvm-svn: 170051
2012-12-13 00:21:03 +00:00
Jakub Staszak a3619d31d8 unHECKify test fixed by Jacob in r159003.
llvm-svn: 170023
2012-12-12 20:58:42 +00:00
David Majnemer 5226aa94ce Simplify negated bit test
llvm-svn: 170020
2012-12-12 20:48:54 +00:00
Jakub Staszak 0a74fc8d6c unHECKify test. It was fixed by Chris in 2009.
llvm-svn: 170017
2012-12-12 20:43:00 +00:00
Jakub Staszak aee9cca331 Fix typo in test-case.
llvm-svn: 170015
2012-12-12 20:29:06 +00:00
Jakub Staszak 67bf76ebbc Fix typo.
llvm-svn: 170006
2012-12-12 19:47:04 +00:00
Nadav Rotem d0bb22bba3 LoopVectorizer: Use the "optsize" attribute to decide if we are allowed to increase the function size.
llvm-svn: 170004
2012-12-12 19:29:45 +00:00
Shuxin Yang 81b3678564 - Fix a problematic way in creating all-the-1 APInt.
- Propagate "exact" bit of [l|a]shr instruction.

llvm-svn: 169942
2012-12-12 00:29:03 +00:00
Michael Ilseman bb6f691b01 Added a slew of SimplifyInstruction floating-point optimizations, many of which take advantage of fast-math flags. Test cases included.
  fsub X, +0 ==> X
  fsub X, -0 ==> X, when we know X is not -0
  fsub +/-0.0, (fsub -0.0, X) ==> X
  fsub nsz +/-0.0, (fsub +/-0.0, X) ==> X
  fsub nnan ninf X, X ==> 0.0
  fadd nsz X, 0 ==> X
  fadd [nnan ninf] X, (fsub [nnan ninf] 0, X) ==> 0
    where nnan and ninf have to occur at least once somewhere in this expression
  fmul X, 1.0 ==> X

llvm-svn: 169940
2012-12-12 00:27:46 +00:00
Nadav Rotem f707bf4ca3 PR14574. Fix a bug in the code that calculates the mask of the converted PHIs in if-conversion.
llvm-svn: 169916
2012-12-11 21:30:14 +00:00
Nadav Rotem e266efb70b Loop Vectorize: optimize the vectorization of trunc(induction_var). The truncation is now done on scalars.
llvm-svn: 169904
2012-12-11 18:58:10 +00:00
Nadav Rotem dbb3328194 Fix PR14565. Don't if-convert loops that have switch statements in them.
llvm-svn: 169813
2012-12-11 04:55:10 +00:00
Nadav Rotem 7b5b55c195 Add support for reverse induction variables. For example:
while (i--)
 sum+=A[i];

llvm-svn: 169752
2012-12-10 19:25:06 +00:00
Chandler Carruth e45f4658a3 Fix PR14548: SROA was crashing on a mixture of i1 and i8 loads and stores.
When SROA was evaluating a mixture of i1 and i8 loads and stores, in
just a particular case, it would tickle a latent bug where we compared
bits to bytes rather than bits to bits. As a consequence of the latent
bug, we would allow integers through which were not byte-size multiples,
a situation the later rewriting code was never intended to handle.

In release builds this could trigger all manner of oddities, but the
reported issue in PR14548 was forming invalid bitcast instructions.

The only downside of this fix is that it makes it more clear that SROA
in its current form is not capable of handling mixed i1 and i8 loads and
stores. Sometimes with the previous code this would work by luck, but
usually it would crash, so I'm not terribly worried. I'll watch the LNT
numbers just to be sure.

llvm-svn: 169719
2012-12-10 00:54:45 +00:00
Paul Redmond 2adb13c100 LoopVectorize: support vectorizing intrinsic calls
- added function to VectorTargetTransformInfo to query cost of intrinsics
- vectorize trivially vectorizable intrinsic calls such as sin, cos, log, etc.

Reviewed by: Nadav

llvm-svn: 169711
2012-12-09 20:42:17 +00:00
Shuxin Yang 95de7c37e2 - Re-enable population count loop idiom recognization
- fix a bug which caused a segfault.
- add two test cases which were causing crashes

llvm-svn: 169687
2012-12-09 03:12:46 +00:00
Chandler Carruth 91e47532fe Revert the patches adding a popcount loop idiom recognition pass.
There are still bugs in this pass, as well as other issues that are
being worked on, but the bugs are crashers that occur pretty easily in
the wild. Test cases have been sent to the original commit's review
thread.

This reverts the commits:
  r169671: Fix a logic error.
  r169604: Move the popcnt tests to an X86 subdirectory.
  r168931: Initial commit adding the pass.

llvm-svn: 169683
2012-12-08 22:18:29 +00:00
David Tweed cfeb8fc49a The test unconditionally assumes that the backend for a particular CPU is built.
Buildbots for some hosts may choose to build only their own backend in order to
maximise testing-turnaround time. Move the test into a prefixed directory so
lit's standard "backend specific" suppression can be done.

llvm-svn: 169604
2012-12-07 15:57:45 +00:00
Chandler Carruth 80d3e56c73 Add support to ValueTracking for determining that a pointer is non-null
by virtue of inbounds GEPs that preclude a null pointer.

This is a very common pattern in the code generated by std::vector and
other standard library routines which use allocators that test for null
pervasively. This is one step closer to teaching Clang+LLVM to be able
to produce an empty function for:

  void f() {
    std::vector<int> v;
    v.push_back(1);
    v.push_back(2);
    v.push_back(3);
    v.push_back(4);
  }

Which is related to getting them to completely fold SmallVector
push_back sequences into constants when inlining and other optimizations
make that a possibility.

llvm-svn: 169573
2012-12-07 02:08:58 +00:00
Dmitri Gribenko 1c704355cf Fix typos in CHECK lines.
Patch by Alexander Zinenko.

llvm-svn: 169547
2012-12-06 21:24:47 +00:00
Shuxin Yang ada92f5018 fix a typo
llvm-svn: 169345
2012-12-05 00:33:16 +00:00
Nadav Rotem 93fa5ef957 Fix a bug in vectorization of if-converted reduction variables. If the
reduction variable was not used outside the loop then we ran into an
endless loop. This change checks if we found the original PHI.

llvm-svn: 169324
2012-12-04 22:40:22 +00:00
Shuxin Yang 73285933c9 For rdar://12329730, last piece.
This change attempts to simplify (X^Y) -> X or Y in the user's context if we know that
only bits from X or Y are demanded.

  A minimized case is provided below. This change will simplify "t>>16" into "val1 >> 16".

  =============================================================
  unsigned foo (unsigned val1, unsigned val2) {
    unsigned t = val1 ^ 1234;
    return (t >> 16) | t; // NOTE: t is used more than once.
  }
  =============================================================

  Note that if the "t" were used only once, the expression would eventually be optimized as well.
However, with this change, the optimization will take place earlier.

  Reviewed by Nadav, Thanks a lot!

llvm-svn: 169317
2012-12-04 22:15:32 +00:00
Nadav Rotem a10b311aec Add support for reduction variables when IF-conversion is enabled.
llvm-svn: 169288
2012-12-04 18:17:33 +00:00
Nadav Rotem 628c2dba60 Add the last part that is needed for vectorization of if-converted code.
Added the code that actually performs the if-conversion during vectorization.

We can now vectorize this code:

for (int i=0; i<n; ++i) {
  unsigned k = 0;

  if (a[i] > b[i])   <------ IF inside the loop.
    k = k * 5 + 3;

  a[i] = k;          <---- K is a phi node that becomes vector-select.
}

llvm-svn: 169217
2012-12-04 06:15:11 +00:00
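
A rough source-level sketch of the effect (hypothetical; the vectorizer works on IR and turns the ternary into a vector select):

    // The branch in the body above collapses into a select after if-conversion.
    void store_k(int *a, const int *b, int n) {
      for (int i = 0; i < n; ++i) {
        unsigned k = 0;
        unsigned taken = k * 5 + 3;          // value k would have if the branch were taken
        a[i] = (a[i] > b[i]) ? (int)taken : (int)k;
      }
    }
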
Shuxin Yang 86c0e232b7 rdar://12329730 (2nd part, revised)
The type of shift-right (logical or arithmetic) should remain unchanged
when transforming  "X << C1 >> C2" into "X << (C1-C2)"

llvm-svn: 169209
2012-12-04 03:28:32 +00:00
Shuxin Yang 63e999edbf rdar://12329730 (2nd part)
This change tries to simplify E1 = "X >> C1 << C2" into:
  - E2 = "X << (C2 - C1)" if C2 > C1, or
  - E2 = "X >> (C1 - C2)" if C1 > C2, or
  - E2 = X if C1 == C2.

 Reviewed by Nadav. Thanks!

llvm-svn: 169182
2012-12-04 00:04:54 +00:00
Benjamin Kramer 47534c7440 SROA: Avoid struct and array types early to avoid creating an overly large integer type.
Fixes PR14465.

Differential Revision: http://llvm-reviews.chandlerc.com/D148

llvm-svn: 169084
2012-12-01 11:53:32 +00:00
Zhou Sheng 8e6d64a71d Revert previous check in r168581, r169079 as they are still in code review status.
llvm-svn: 169083
2012-12-01 10:54:28 +00:00
Zhou Sheng 13fb1ca44a The patch is to improve the memory footprint of pass GlobalOpt.
Also check in a test case to reproduce the issue, on which 'opt -globalopt' consumes 1.6GB of memory.
The cause of the big memory footprint is that the current GlobalOpt hoists and stores the leaf element constants into the global array one by one; in each iteration it recreates the global array initializer constant and leaves the old initializer alone. This can leave many obsolete constants behind.
For example: we have the global array @rom = global [16 x i32] zeroinitializer
After the first element value is hoisted and installed:   @rom = global [16 x i32] [ 1, 0, 0, ... ]
After the second element value is installed:  @rom = global [16 x i32] [ 1, 2, 0, 0, ... ]        // here the previous initializer is obsolete
...
When the transform is done, we are left with 15 obsolete and useless initializers.

llvm-svn: 169079
2012-12-01 04:38:53 +00:00
Evan Cheng 65df808f62 Fix logic to determine whether to turn a switch into a lookup table. When
the table cannot fit in registers (i.e. as a bitmap), do not emit the table
if it would use an illegal type.

rdar://12779436

llvm-svn: 168970
2012-11-30 02:02:42 +00:00
Shuxin Yang abcc370423 rdar://12100355 (part 1)
This revision attempts to recognize the following population-count pattern:

 while(a) { c++; ... ; a &= a - 1; ... },
  where <c> and <a> could be used multiple times in the loop body.

 TODO: On X86-64 and ARM, __builtin_ctpop() is not expanded to an efficient
instruction sequence, which needs to be improved in following commits.

Reviewed by Nadav, really appreciate!

llvm-svn: 168931
2012-11-29 19:38:54 +00:00
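
A minimal sketch of the idiom (the builtin in the check is the GCC/Clang spelling, used here only for comparison; it is not taken from the commit):

    #include <cassert>
    #include <cstdint>
    #include <initializer_list>

    // a &= a - 1 clears the lowest set bit, so the loop runs once per set bit.
    static unsigned popcountLoop(uint64_t a) {
      unsigned c = 0;
      while (a) {
        ++c;
        a &= a - 1;
      }
      return c;
    }

    int main() {
      for (uint64_t a : {0ull, 1ull, 0xF0F0ull, ~0ull})
        assert(popcountLoop(a) == (unsigned)__builtin_popcountll(a));
      return 0;
    }
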
Meador Inge 75798bb7fe instcombine: Migrate puts optimizations
This patch migrates the puts optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

All the simplifiers from simplify-libcalls have now been migrated to
instcombine.  Yay!  Just a few other bits to migrate (prototype attribute
inference and a few statistics) and simplify-libcalls can finally be put
to rest.

llvm-svn: 168925
2012-11-29 19:15:17 +00:00
Benjamin Kramer ba11a9892c Follow up to 168711: It's safe to base this analysis on the found compare, just return the value for the right predicate.
Thanks to Andy for catching this.

llvm-svn: 168921
2012-11-29 19:07:57 +00:00
Shuxin Yang f265351491 fix a typo
llvm-svn: 168909
2012-11-29 18:09:37 +00:00
Meador Inge f8e725081c instcombine: Migrate fputs optimizations
This patch migrates the fputs optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168893
2012-11-29 15:45:43 +00:00
Meador Inge bc84d1a4f5 instcombine: Migrate fwrite optimizations
This patch migrates the fwrite optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168892
2012-11-29 15:45:39 +00:00
Meador Inge 1009cecca0 instcombine: Migrate fprintf optimizations
This patch migrates the fprintf optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168891
2012-11-29 15:45:33 +00:00
Shuxin Yang 01ab5d718b Instruction::isAssociative() returns true for fmul/fadd if they are tagged with "unsafe" mode.
Approved by: Eli and Michael.

llvm-svn: 168848
2012-11-29 01:47:31 +00:00
Patrik Hägglund 3eb16c543e Add error handling in getInt.
Accordingly, update a testcase with a broken datalayout string.

Also, we never parse negative numbers, because '-' is used as a
separator. Therefore, use unsigned as result type.

llvm-svn: 168785
2012-11-28 12:13:12 +00:00
Hal Finkel 88ee6b0082 BBVectorize: Correctly merge SubclassOptionalData
When two instructions are combined into a vector instruction,
the resulting instruction must have the most-conservative flags.

llvm-svn: 168765
2012-11-28 03:04:10 +00:00
Meador Inge f1bc9e7431 instcombine: Don't replace all uses for instructions with no uses
My commit to migrate the printf simplifiers from the simplify-libcalls
in r168604 introduced a regression reported by Duncan [1].  The problem
is that in some cases the library call simplifier can return a new value
that has no uses and the new value's type is different than the old value's
type (which is fine because there are no uses).  The specific case that
triggered the bug looked something like:

   declare void @printf(i8*, ...)
   ...
   call void (i8*, ...)* @printf(i8* %fmt)

Which we want to optimized into:

   call i32 @putchar(i32 104)

However, the code was attempting to replace all uses of the printf with
the putchar and the types differ, hence a crash.  This is fixed by *just*
deleting the original instruction when there are no uses.  The old
simplify-libcalls pass is already doing something similar.

[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-November/056338.html

llvm-svn: 168716
2012-11-27 18:52:49 +00:00
Benjamin Kramer e20e124280 SCEV: Even if the latch terminator is foldable we can't deduce the result of an unrelated condition with it.
Fixes PR14432.

llvm-svn: 168711
2012-11-27 18:16:32 +00:00
Meador Inge 4ae8b684f5 Move sprintf simplifier tests to test/Transforms/InstCombine
The tests from SPrintF.ll should have been migrated to sprintf-1.ll in
r168677, but I forgot to do it.

llvm-svn: 168702
2012-11-27 15:35:58 +00:00
Bill Wendling ee5984df39 Remove the dependent libraries feature.
The dependent libraries feature was never used and has bit-rotted. Remove it.

llvm-svn: 168694
2012-11-27 09:55:56 +00:00
NAKAMURA Takumi 96c39f733f llvm/test/Transforms/SimplifyLibCalls: FileCheck-ize 3 tests.
llvm-svn: 168691
2012-11-27 08:18:23 +00:00
NAKAMURA Takumi 62be55783d llvm/test/Transforms/SimplifyLibCalls/SPrintF.ll: Handle @sprintf() with -instcombine, not -simplify-libcalls.
llvm-svn: 168690
2012-11-27 08:18:15 +00:00
NAKAMURA Takumi eb42eebee8 llvm/test/Transforms/SimplifyLibCalls/SPrintF.ll: Fix datalayout since r168516.
llvm-svn: 168689
2012-11-27 08:18:08 +00:00
NAKAMURA Takumi fc2bdccfcd Trailing linefeeds.
llvm-svn: 168688
2012-11-27 08:17:58 +00:00
NAKAMURA Takumi 5f7c2d80d8 test/Transforms/SimplifyLibCalls/SPrintF.ll: Suppress this for now. r168677 unveiled another failure.
FYI, this test makes no sense with "not grep"... I saw "assertion failure" in stderr.

llvm-svn: 168679
2012-11-27 06:42:48 +00:00
Meador Inge 25c9b3b6e4 instcombine: Migrate sprintf optimizations
This patch migrates the sprintf optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168677
2012-11-27 05:57:54 +00:00
Michael Ilseman 6cdacff2d0 Fast-math test for SimplifyInstruction: fold multiply by 0
Applied the patch, rather than committing it.

llvm-svn: 168656
2012-11-27 01:00:22 +00:00
Eli Friedman b14873c4f1 Get rid of the getPointeeAlignment helper function from
InstCombineLoadStoreAlloca.cpp, which had many issues.
(At least two bugs were noted on llvm-commits, and it was overly conservative.)
Instead, use getOrEnforceKnownAlignment.

llvm-svn: 168629
2012-11-26 23:04:53 +00:00
Shuxin Yang 6ea79e864d rdar://12329730 (defect 2)
Enhancement to InstCombine. Try to catch this opportunity:
  
 ---------------------------------------------------------------
 ((X^C1) >> C2) ^ C3  => (X>>C2) ^ ((C1>>C2)^C3)
  where the subexpression "X ^ C1" has more than one use, and
  "(X^C1) >> C2" has a single use.
 ---------------------------------------------------------------- 

 Reviewed by Nadav (with minor change per his request).

llvm-svn: 168615
2012-11-26 21:44:25 +00:00
Meador Inge 08ca115abd instcombine: Migrate printf optimizations
This patch migrates the printf optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168604
2012-11-26 20:37:20 +00:00
Meador Inge 604937d1cc instcombine: Migrate toascii optimizations
This patch migrates the toascii optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168580
2012-11-26 03:38:52 +00:00
Meador Inge a62a39e0e9 instcombine: Migrate isascii optimizations
This patch migrates the isascii optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168579
2012-11-26 03:10:07 +00:00
Meador Inge 9a59ab6133 instcombine: Migrate isdigit optimizations
This patch migrates the isdigit optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168578
2012-11-26 02:31:59 +00:00
Meador Inge 24d134c375 Fix bogus comment; no functional change.
llvm-svn: 168575
2012-11-26 00:25:33 +00:00
Meador Inge a0b6d87879 instcombine: Migrate *abs optimizations
This patch migrates the *abs optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168574
2012-11-26 00:24:07 +00:00
Meador Inge 7415f8403d instcombine: Migrate ffs* optimizations
This patch migrates the ffs* optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 168571
2012-11-25 20:45:27 +00:00
Nadav Rotem ea3824f160 Add support for pointer induction variables even when there is no integer induction variable.
llvm-svn: 168558
2012-11-25 08:41:35 +00:00
Patrik Hägglund 59189597de Disallow the undocumented practice of starting the datalayout string with '-'.
Update some test cases accordingly.

llvm-svn: 168516
2012-11-23 14:51:42 +00:00
Meador Inge 780a1861f1 Add more functions to the target library information.
I discovered a few more missing functions while migrating optimizations
from the simplify-libcalls pass to the instcombine (I already added some
in r167659).

llvm-svn: 168501
2012-11-22 15:36:42 +00:00
NAKAMURA Takumi 0a41c0bb18 llvm/test/Transforms/InstCombine/sdiv-1.ll: FileCheck-ize.
"not grep '-715827882'" performed as below...bad...

Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.

llvm-svn: 168430
2012-11-21 14:46:18 +00:00
Chandler Carruth 845b73c06f PR14055: Implement support for sub-vector operations in SROA.
Now if we can transform an alloca into a single vector value, but it has
subvector, non-element accesses, we form the appropriate shufflevectors
to allow SROA to proceed. This fixes PR14055 which pointed out a very
common pattern that SROA couldn't handle -- mixed vec3 and vec4
operations on a single alloca.

llvm-svn: 168418
2012-11-21 08:16:30 +00:00
Chandler Carruth 3e994a26e2 Fix PR14132 and handle OOB loads speculated through PHI nodes.
The issue is that we may end up with newly OOB loads when speculating
a load into the predecessors of a PHI node, and this confuses the new
integer splitting logic in some cases, triggering an assertion failure.
In fact, the branch in question must be dead code as it loads from
a too-narrow alloca. Add code to handle this gracefully and leave the
requisite FIXMEs for both optimizing more aggressively and doing more to
aid sanitizing invalid code which triggers these patterns.

llvm-svn: 168361
2012-11-20 10:02:19 +00:00
Chandler Carruth 18db795b05 Rework the rewriting of loads and stores for vector and integer allocas
to properly handle the combinations of these with split integer loads
and stores. This essentially replaces Evan's r168227 by refactoring the
code in a different way, and trying to mirror that refactoring in both
the load and store sides of the rewriting.

Generally speaking there was some really problematic duplicated code
here that led to poorly founded assumptions and then subtle bugs. Now
much of the code actually flows through and follows a more consistent
style and logical path. There is still a tiny bit of duplication on the
store side of things, but it is much less bad.

This also changes the logic to never re-use a load or store instruction
as that was simply too error prone in practice.

I've added a few tests (one a reduction of the one in Evan's original
patch, which happened to be the same as the report in PR14349). I'm
going to look at adding a few more tests for things I found and fixed in
passing (such as the volatile tests in the vectorizable predicate).

This patch has survived bootstrap, and modulo one bugfix survived
Duncan's test suite, but let me know if anything else explodes.

llvm-svn: 168346
2012-11-20 01:12:50 +00:00
Duncan Sands 20bd7fa0f7 Fix PR14060, an infinite loop in reassociate. The problem was that one of the
operands of the expression being written was wrongly thought to be reusable as
an inner node of the expression resulting in it turning up as both an inner node
*and* a leaf, creating a cycle in the def-use graph.  This would have caused the
verifier to blow up if things had gotten that far, however it managed to provoke
an infinite loop first.

llvm-svn: 168291
2012-11-18 19:27:01 +00:00
Nick Lewycky 3d35b45f8e Don't try to calculate the alignment of an unsized type. Fixes PR14371!
llvm-svn: 168280
2012-11-18 05:39:39 +00:00
Nadav Rotem c3c07e62e8 LoopVectorizer: Add initial support for pointer induction variables (for example: *dst++ = *src++).
At the moment we still require to have an integer induction variable (for example: i++).

llvm-svn: 168231
2012-11-17 00:27:03 +00:00
Evan Cheng f1b6177b62 Teach SROA rewriteVectorizedStoreInst to handle cases when the loaded value is narrower than the stored value. rdar://12713675
llvm-svn: 168227
2012-11-17 00:05:06 +00:00
Duncan Sands c41076c07c InstructionSimplify should be able to simplify A+B==B+A to 'true'
but wasn't due to the same logic bug that caused PR14361.

llvm-svn: 168186
2012-11-16 19:41:26 +00:00
Duncan Sands 1d3acddf0e Fix PR14361: wrong simplification of A+B==B+A. You may think that the old logic
replaced by this patch is equivalent to the new logic, but you'd be wrong, and
that's exactly where the bug was.  There's a similar bug in instsimplify which
manifests itself as instsimplify failing to simplify this, rather than doing it
wrong, see next commit.

llvm-svn: 168181
2012-11-16 18:55:49 +00:00
Hans Wennborg 18aa124075 Constant::IsThreadDependent(): Use dyn_cast<Constant> instead of cast
It turns out that the operands of a Constant are not always themselves
Constant. For example, one of the operands of BlockAddress is
BasicBlock, which is not a Constant.

This should fix the dragonegg-x86_64-linux-gcc-4.6-test build which
broke in r168037.

llvm-svn: 168147
2012-11-16 10:33:25 +00:00
Hans Wennborg 709e015cf1 Make GlobalOpt be conservative with TLS variables (PR14309)
For global variables that get the same value stored into them
everywhere, GlobalOpt will replace them with a constant. The problem is
that a thread-local GlobalVariable looks like one value (the address of
the TLS var), but is different between threads.

This patch introduces Constant::isThreadDependent() which returns true
for thread-local variables and constants which depend on them (e.g. a GEP
into a thread-local array), and teaches GlobalOpt not to track such
values.

llvm-svn: 168037
2012-11-15 11:40:00 +00:00
Duncan Sands ac852c742f Fix a crash observed by Shuxin Yang. The issue here is that LinearizeExprTree,
the utility for extracting a chain of operations from the IR, thought that it
might as well combine any constants it came across (rather than just returning
them along with everything else).  On the other hand, the factorization code
would like to see the individual constants (this is quite reasonable: it is
much easier to pull a factor of 3 out of 2*3 than it is to pull it out of 6;
you may think 6/3 isn't so hard, but due to overflow it's not as easy to undo
multiplications of constants as it may at first appear).  This patch therefore
makes LinearizeExprTree stupider: it now leaves optimizing to the optimization
part of reassociate, and sticks to just analysing the IR.

llvm-svn: 168035
2012-11-15 09:58:38 +00:00
Jakub Staszak 0c4468b5e6 Remove DOS line endings.
llvm-svn: 167968
2012-11-14 20:18:34 +00:00
Duncan Sands db698d8a8a Fix the instcombine GEP index widening transform to work correctly for vector
getelementptrs.

llvm-svn: 167829
2012-11-13 13:01:00 +00:00
Duncan Sands e6beec6765 Relax the restrictions on vector of pointer types, and vector getelementptr.
Previously in a vector of pointers, the pointer couldn't be any pointer type,
it had to be a pointer to an integer or floating point type.  This is a hassle
for dragonegg because the GCC vectorizer happily produces vectors of pointers
where the pointer is a pointer to a struct or whatever.  Vector getelementptr
was restricted to just one index, but now that vectors of pointers can have
any pointer type it is more natural to allow arbitrary vector getelementptrs.
There is however the issue of struct GEPs, where if each lane chose different
struct fields then from that point on each lane will be working down into
unrelated types.  This seems like too much pain for too little gain, so when
you have a vector struct index all the elements are required to be the same.

llvm-svn: 167828
2012-11-13 12:59:33 +00:00
Alexey Samsonov cfd662f279 Figure out <size> argument of llvm.lifetime intrinsics at the moment they are created (during function inlining)
llvm-svn: 167821
2012-11-13 07:15:32 +00:00
Meador Inge 193e035b9c instcombine: Migrate math library call simplifications
This patch migrates the math library call simplifications from the
simplify-libcalls pass into the instcombine library call simplifier.

I have typically migrated just one simplifier at a time, but the math
simplifiers are interdependent because:

   1. CosOpt, PowOpt, and Exp2Opt all depend on UnaryDoubleFPOpt.
   2. CosOpt, PowOpt, Exp2Opt, and UnaryDoubleFPOpt all depend on
      the option -enable-double-float-shrink.

These two factors made migrating each of these simplifiers individually
more of a pain than it would be worth.  So, I migrated them all together.

llvm-svn: 167815
2012-11-13 04:16:17 +00:00
Hal Finkel 2a1df367d4 BBVectorize: Don't vectorize vector-manipulation chains
Don't choose a vectorization plan containing only shuffles and
vector inserts/extracts. Due to imperfections in the cost model,
these can lead to infinite recursion.

llvm-svn: 167811
2012-11-13 03:12:40 +00:00
Shuxin Yang c94c3bb5d0 revert r167740
llvm-svn: 167787
2012-11-13 00:08:49 +00:00
Hal Finkel 3b79f55c5f BBVectorize: Only some insert element operand pairs are free.
This fixes another infinite recursion case when using target costs.
We can only replace insert element input chains that are pure (end
with inserting into an undef).

llvm-svn: 167784
2012-11-12 23:55:36 +00:00
Hal Finkel 9cf3372931 BBVectorize: Use a more sophisticated check for input cost
The old checking code, which assumed that input shuffles and insert-elements
could always be folded (and thus were free) is too simple.
This can only happen in special circumstances.
Using the simple check caused infinite recursion.

llvm-svn: 167750
2012-11-12 21:21:02 +00:00
Hal Finkel f8326b6052 BBVectorize: Check the types of compare instructions
The pass would previously assert when trying to compute the cost of
compare instructions with illegal vector types (like struct pointers).

llvm-svn: 167743
2012-11-12 19:41:38 +00:00
Shuxin Yang 1c442f5ec6 This change is to fix rdar://12571717, which is about an assertion in the Reassociate pass.
The assertion is triggered when the Reassociater tries to transform the expression
     ... + 2 * n * 3 + 2 * m + ...
  into:
     ... + 2 * (n*3 + m).

In the process of the transformation, a helper routine folds the constant 2*3 into 6,
confusing the optimizer, which is trying to eliminate the common factor 2 and can no
longer find it.

Review is pending, but I'd like to commit first in order to help those who are waiting
for this fix.

llvm-svn: 167740
2012-11-12 19:34:11 +00:00
Hal Finkel ef53df0f9f BBVectorize: Check the input types of shuffles for legality
This fixes a bug where shuffles were being fused such that the
resulting input types were not legal on the target. This would
occur only when both inputs and dependencies were also foldable
operations (such as other shuffles) and there were other connected
pairs in the same block.

llvm-svn: 167731
2012-11-12 14:50:59 +00:00
Meador Inge b3e91f6ae0 Normalize memcmp constant folding results.
The library call simplifier folds memcmp calls with all constant arguments
to a constant.  For example:

  memcmp("foo", "foo", 3) ->  0
  memcmp("hel", "foo", 3) ->  1
  memcmp("foo", "hel", 3) -> -1

The folding is implemented in terms of the system memcmp that LLVM gets
linked with.  It currently just blindly uses the value returned from
the system memcmp as the folded constant.

This patch normalizes the values returned from the system memcmp to
(-1, 0, 1) so that we get consistent results across multiple platforms.
The test cases were adjusted accordingly.

llvm-svn: 167726
2012-11-12 14:00:45 +00:00
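
The normalization itself is simple; a sketch of the idea (not the LLVM code) is to clamp whatever the host memcmp returns to its sign:

    #include <cstdio>
    #include <cstring>

    // Clamp the host memcmp result to {-1, 0, 1} so a folded constant does not
    // depend on the host C library's exact return values.
    static int normalizedMemcmp(const void *a, const void *b, std::size_t n) {
      int r = std::memcmp(a, b, n);
      return (r > 0) - (r < 0);
    }

    int main() {
      std::printf("%d %d %d\n",
                  normalizedMemcmp("foo", "foo", 3),   //  0
                  normalizedMemcmp("hel", "foo", 3),   //  1
                  normalizedMemcmp("foo", "hel", 3));  // -1
    }
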
Meador Inge 9493eb9bc4 Remove hard-coded constant in Transforms/InstCombine/memcmp-1.ll
Transforms/InstCombine/memcmp-1.ll has a test case that looks like:

  @foo = constant [4 x i8] c"foo\00"
  @hel = constant [4 x i8] c"hel\00"

  ...

  %mem1 = getelementptr [4 x i8]* @hel, i32 0, i32 0
  %mem2 = getelementptr [4 x i8]* @foo, i32 0, i32 0
  %ret = call i32 @memcmp(i8* %mem1, i8* %mem2, i32 3)
  ret i32 %ret
  ; CHECK: ret i32 2

The folded return value (2 above) is computed using the system memcmp
that the compiler is linked with.  This can return different values on
different systems.  The test was originally written on an OS X 10.7.5
x86-64 box and passed.  However, it failed on one of the x86-64 FreeBSD
buildbots because the system memcmp on that machine returned a different
value (1 instead of 2).

I fixed the test by checking the folding constants with regexes.

llvm-svn: 167691
2012-11-11 07:10:25 +00:00
Meador Inge d4825780ed instcombine: Migrate memset optimizations
This patch migrates the memset optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167689
2012-11-11 06:49:03 +00:00
Meador Inge 9cf328b526 instcombine: Migrate memmove optimizations
This patch migrates the memmove optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167687
2012-11-11 06:22:40 +00:00
Meador Inge dd9234a10a instcombine: Migrate memcpy optimizations
This patch migrates the memcpy optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167686
2012-11-11 05:54:34 +00:00
Meador Inge 4d2827c10d instcombine: Migrate memcmp optimizations
This patch migrates the memcmp optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167683
2012-11-11 05:11:20 +00:00
Meador Inge 56edbc9323 instcombine: Migrate strstr optimizations
This patch migrates the strstr optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167682
2012-11-11 03:51:48 +00:00
Meador Inge bcd88ef764 instcombine: Migrate strcspn optimizations
This patch migrates the strcspn optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167675
2012-11-10 15:16:48 +00:00
Meador Inge 03be256db9 instcombine: Query target library information to gate libcall simplifications
Several of the simplifiers migrated from the simplify-libcalls pass to
the instcombine pass were not correctly checking the target library
information to gate the simplifications.  This patch ensures that the
check is made.

llvm-svn: 167660
2012-11-10 03:11:10 +00:00
Nadav Rotem 1cfef3e9ee Add support for memory runtime check. When we can, we calculate array bounds.
If the arrays are found to be disjoint then we run the vectorized version of
the loop. If they are not, we run the scalar code.

llvm-svn: 167608
2012-11-09 07:09:44 +00:00
NAKAMURA Takumi 43ab4ef9ba llvm/ConstantFolding.cpp: Make ReadDataFromGlobal() and FoldReinterpretLoadFromConstPtr() Big-endian-aware.
llvm-svn: 167595
2012-11-08 20:34:25 +00:00
Meador Inge 489b5d645f instcombine: Migrate strspn optimizations
This patch migrates the strspn optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167568
2012-11-08 01:33:50 +00:00
Hans Wennborg c3c8d95c51 Only do switch-to-lookup table transformation when TargetTransformInfo
is available.

llvm-svn: 167552
2012-11-07 21:35:12 +00:00
Hans Wennborg 11d4ebe224 Fix bad test IR in switch_to_lookup_table.ll
llvm-svn: 167543
2012-11-07 18:38:24 +00:00
Nadav Rotem 0914f0b262 Cost Model: add tables for some avx type-conversion hacks.
llvm-svn: 167480
2012-11-06 19:33:53 +00:00
Nadav Rotem ae79765676 Cost Model: Improve the accuracy of the zext/sext/trunc vector cost estimation.
llvm-svn: 167412
2012-11-05 22:20:53 +00:00
Nadav Rotem 7411623fd8 Implement the cost of abnormal x86 instruction lowering as a table.
llvm-svn: 167395
2012-11-05 19:32:46 +00:00
Duncan Sands a318ef6fa6 Generalize the transform that boosts GEP indices to the size of a pointer to
also do it for vectors of pointers.

llvm-svn: 167354
2012-11-03 11:44:17 +00:00
Chandler Carruth b62807a95c Add a testcase to loop-idiom to cover PR14241 when we start handling
strided loops again.

llvm-svn: 167287
2012-11-02 08:40:24 +00:00
Chandler Carruth 099f5cb031 Revert the switch of loop-idiom to use the new dependence analysis.
The new analysis is not yet ready for prime time. It has a *critical*
flawed assumption, and some troubling shortages of testing. Until it's
been hammered into better shape, let's stick with the working code. This
should be easy to revert itself when the analysis is ready.

Fixes PR14241, a miscompile of any memcpy-able loop which uses a pointer
as the induction mechanism. If you have been seeing miscompiles in this
revision range, you really want to test with this backed out. The
results of this miscompile are a bit subtle as they can lead to
downstream passes concluding things are impossible which are in fact
possible.

Thanks to David Blaikie for the majority of the reduction of this
miscompile. I'll be checking in the test case in a non-revert commit.

Revisions reverted here:

r167045: LoopIdiom: Fix a serious missed optimization: we only turned
         top-level loops into memmove.
r166877: LoopIdiom: Add checks to avoid turning memmove into an infinite
         loop.
r166875: LoopIdiom: Recognize memmove loops.
r166874: LoopIdiom: Replace custom dependence analysis with
         DependenceAnalysis.
llvm-svn: 167286
2012-11-02 08:33:25 +00:00
Hal Finkel 376f82d5d3 BBVectorize: Commit the rest of the test-case change.
llvm-svn: 167257
2012-11-01 21:57:27 +00:00
Hal Finkel 560545b85f BBVectorize: Use target costs for incoming and outgoing values instead of the depth heuristic.
When target cost information is available, compute explicit costs of inserting and
extracting values from vectors. At this point, all costs are estimated using the
target information, and the chain-depth heuristic is not needed. As a result, it is now, by
default, disabled when using target costs.

llvm-svn: 167256
2012-11-01 21:50:12 +00:00
Chandler Carruth d5639ff80f Add a test case for PR14233.
llvm-svn: 167224
2012-11-01 10:26:36 +00:00
Chandler Carruth 7ec5085e01 Revert the series of commits starting with r166578 which introduced the
getIntPtrType support for multiple address spaces via a pointer type,
and also introduced a crasher bug in the constant folder reported in
PR14233.

These commits also contained several problems that should really be
addressed before they are re-committed. I have avoided reverting various
cleanups to the DataLayout APIs that are reasonable to have moving
forward in order to reduce the amount of churn, and minimize the number
of commits that were reverted. I've also manually updated merge
conflicts and manually arranged for the getIntPtrType function to stay
in DataLayout and to be defined in a plausible way after this revert.

Thanks to Duncan for working through this exact strategy with me, and
Nick Lewycky for tracking down the really annoying crasher this
triggered. (Test case to follow in its own commit.)

After discussing with Duncan extensively, and based on a note from
Micah, I'm going to continue to back out some more of the more
problematic patches in this series in order to ensure we go into the
LLVM 3.2 branch with a reasonable story here. I'll send a note to
llvmdev explaining what's going on and why.

Summary of reverted revisions:

r166634: Fix a compiler warning with an unused variable.
r166607: Add some cleanup to the DataLayout changes requested by
         Chandler.
r166596: Revert "Back out r166591, not sure why this made it through
         since I cancelled the command. Bleh, sorry about this!"
r166591: Delete a directory that wasn't supposed to be checked in yet.
r166578: Add in support for getIntPtrType to get the pointer type based
         on the address space.
llvm-svn: 167221
2012-11-01 08:07:29 +00:00
Nadav Rotem 4cb8cdab5e LoopVectorize: Preserve NSW, NUW and IsExact flags.
llvm-svn: 167174
2012-10-31 21:40:39 +00:00
Nadav Rotem 6d7d39783d Fix a bug in the cost calculation of vector casts. Detect situations where bitcasts cost zero.
llvm-svn: 167170
2012-10-31 20:52:26 +00:00
Hans Wennborg b71f72aa82 Remove fixme about unreachable cases from SwitchToLookupTable
SimplifyCFG will have removed those cases for us.

llvm-svn: 167132
2012-10-31 16:15:25 +00:00
Hal Finkel 842ad0b621 BBVectorize: Choose pair ordering to minimize shuffles
BBVectorize would, except for loads and stores, always fuse instructions
so that the first instruction (in the current source order) would always
represent the low part of the input vectors and the second instruction
would always represent the high part. This lead to too many shuffles
being produced because sometimes the opposite order produces fewer of them.

With this change, BBVectorize tracks the kind of pair connections that form
the DAG of candidate pairs, and uses that information to reorder the pairs to
avoid excess shuffles. Using this information, a future commit will be able
to add VTTI-based shuffle costs to the pair selection procedure. Importantly,
the number of remaining shuffles can now be estimated during pair selection.

There are some trivial instruction reorderings in the test cases, and one
simple additional test where we certainly want to do a reordering to
avoid an unnecessary shuffle.

llvm-svn: 167122
2012-10-31 15:17:07 +00:00
Meador Inge 05a625a0ed instcombine: Migrate strto* optimizations
This patch migrates the strto* optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167119
2012-10-31 14:58:26 +00:00
Hans Wennborg 9e74dd97b8 Do simple constant propagation in lookup table formation for switches
By propagating the value for the switch condition, LLVM can now build
lookup tables for code such as:

  switch (x) {
    case 1: return 5;
    case 2: return 42;
    case 3: case 4: case 5:
      return x - 123;
    default:
      return 123;
  }

Given that x is known for each case, "x - 123" becomes a constant for
cases 3, 4, and 5.

llvm-svn: 167115
2012-10-31 13:42:45 +00:00
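
Sketched at the source level (hedged; the pass operates on IR, not C++), the propagated constant lets cases 3, 4 and 5 fold, so the whole switch becomes a dense table plus a range check:

    #include <cstdio>

    static int withSwitch(unsigned x) {
      switch (x) {
        case 1: return 5;
        case 2: return 42;
        case 3: case 4: case 5: return (int)x - 123;
        default: return 123;
      }
    }

    static int withTable(unsigned x) {
      static const int table[] = {5, 42, 3 - 123, 4 - 123, 5 - 123};  // cases 1..5
      return (x - 1 < 5) ? table[x - 1] : 123;   // unsigned wrap handles x == 0
    }

    int main() {
      for (unsigned x = 0; x < 8; ++x)
        std::printf("%u -> %d %d\n", x, withSwitch(x), withTable(x));
    }
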
Benjamin Kramer 8682ac1a77 LCSSA: Add a workaround for another nasty SCEV cache invalidation issue.
I'm not entirely happy with this solution, but I don't see a smarter way currently.
Fixes PR14214.

llvm-svn: 167112
2012-10-31 10:01:29 +00:00
Benjamin Kramer 24c643b6de DependenceAnalysis: Don't crash if there is no constant operand.
This makes the code match the comments. Resolves a crash in loop idiom (PR14219).

llvm-svn: 167110
2012-10-31 09:20:38 +00:00
Meador Inge 6f8e01121a instcombine: Migrate strpbrk optimizations
This patch migrates the strpbrk optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167105
2012-10-31 04:29:58 +00:00
Meador Inge d589ac621b instcombine: Migrate strlen optimizations
This patch migrates the strlen optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167103
2012-10-31 03:33:06 +00:00
Meador Inge 067294b3ac instcombine: Migrate strncpy optimizations
This patch migrates the strncpy optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.

llvm-svn: 167102
2012-10-31 03:33:00 +00:00
Nadav Rotem ce77ab0c24 LoopVectorize: Do not vectorize loops with tiny constant trip counts.
llvm-svn: 167101
2012-10-31 03:31:07 +00:00
Nadav Rotem ff7889196b Add support for loops that don't start with Zero.
This is important for loops in the LAPACK test-suite.
These loops start at 1 because they are auto-converted from fortran.

llvm-svn: 167084
2012-10-31 00:45:26 +00:00
Meador Inge 9a6a190562 instcombine: Migrate stpcpy optimizations
This patch migrates the stpcpy optimizations from the simplify-libcalls
pass into the instcombine library call simplifier.  Note that the
__stpcpy_chk simplifications were migrated in a previous commit.

llvm-svn: 167083
2012-10-31 00:20:56 +00:00
Meador Inge cdb2ca54ae instcombine: Split out the __stpcpy_chk simplifications from StrCpyChkOpt
r166198 migrated the strcpy optimization to instcombine.  The strcpy
simplifier that was migrated from Transforms/Scalar/SimplifyLibCalls.cpp
was also doing some __strcpy_chk simplifications.  Those fortified
simplifications were migrated as well, but introduced a bug in the
__stpcpy_chk simplifier in the process.  This happened because the
__strcpy_chk and __stpcpy_chk simplifiers were both mapped to StrCpyChkOpt
which was updated with simplifications that worked for __strcpy_chk, but
not __stpcpy_chk.

This patch fixes the problem by adding proper test coverage and creating a
new simplifier for __stpcpy_chk (instead of sharing one with __strcpy_chk).

llvm-svn: 167082
2012-10-31 00:20:51 +00:00
Chandler Carruth 1296b59522 Fix PR14212: For some strange reason I treated vectors differently from
integers in that the code to handle split alloca-wide integer loads or
stores doesn't come first. It should, for the same reasons as with
integers, and the PR attests to that. Also had to fix a busted assert
that this test case also covers.

llvm-svn: 167051
2012-10-30 20:52:40 +00:00
Benjamin Kramer 48a6478242 LoopIdiom: Fix a serious missed optimization: we only turned top-level loops into memmove.
Thanks to Preston Briggs for catching this!

llvm-svn: 167045
2012-10-30 19:49:39 +00:00
Hal Finkel 2eaadd1a2d BBVectorize: Fix a small bug introduced in r167042.
We need to make sure that we take the correct load/store alignment
when the inputs are flipped.

llvm-svn: 167044
2012-10-30 19:47:37 +00:00
Nadav Rotem bc21aceb19 LoopVectorize: Add support for write-only loops when the write destination is a single pointer.
Speedup SciMark by 1%

llvm-svn: 167035
2012-10-30 18:36:45 +00:00
Nadav Rotem b3e8e688da LoopVectorize: Fix a bug in the initialization of reduction variables. AND needs to start at all-ones,
while XOR and OR need to start at zero.

llvm-svn: 167032
2012-10-30 18:12:36 +00:00
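
A scalar sketch of the identity-element point (illustration only, not the vectorizer's code): the accumulator has to start at the identity of the reduction operator, which is all-ones for AND and zero for OR and XOR.

    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint32_t v[4] = {0xFF00FF00u, 0x0F0F0F0Fu, 0x12345678u, 0xFFFFFFFFu};

      uint32_t andAcc = ~0u;   // all-ones; starting at 0 would pin the result to 0
      uint32_t orAcc = 0, xorAcc = 0;
      for (uint32_t x : v) {
        andAcc &= x;
        orAcc  |= x;
        xorAcc ^= x;
      }
      std::printf("and=%08x or=%08x xor=%08x\n",
                  (unsigned)andAcc, (unsigned)orAcc, (unsigned)xorAcc);
    }
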
Ulrich Weigand 7db4429430 Set %defaultjit to use MCJIT for PowerPC targets.
Update Transforms/LICM/2003-12-11-SinkingToPHI.ll test to use
%defaultjit as well.

llvm-svn: 167031
2012-10-30 18:07:58 +00:00
Hans Wennborg e0cf14fa9d switch_to_lookup_table.ll: Remove some unnecessary lines, comments,
function attributes, etc.

llvm-svn: 167016
2012-10-30 15:11:52 +00:00
Ulrich Weigand 6a9bb51a8d Enable some additional constant folding for PPCDoubleDouble.
This fixes Clang :: CodeGen/complex-builtints.c on PowerPC.

llvm-svn: 167013
2012-10-30 12:33:18 +00:00
Hans Wennborg f3254838e4 Use TargetTransformInfo to control switch-to-lookup table transformation
When the switch-to-lookup tables transform landed in SimplifyCFG, it
was pointed out that this could be inappropriate for some targets.
Since there was no way at the time for the pass to know anything about
the target, an awkward reverse-transform was added in CodeGenPrepare
that turned lookup tables back into switches for some targets.

This patch uses the new TargetTransformInfo to determine if a
switch should be transformed, and removes
CodeGenPrepare::ConvertLoadToSwitch.

llvm-svn: 167011
2012-10-30 11:23:25 +00:00
Hal Finkel d0b95b0961 Remove an invalid assert in TargetTransformImpl
getCastInstrCost had an assert prohibiting scalar to vector casts. Such casts,
however, are allowed. This should make the vectorizer buildbot happier.

llvm-svn: 166998
2012-10-30 02:41:57 +00:00
Benjamin Kramer 8d2ee55a0c LoopIdiom: Add checks to avoid turning memmove into an infinite loop.
I don't think this is possible with the current implementation but that may change eventually.

llvm-svn: 166877
2012-10-27 15:18:28 +00:00
Benjamin Kramer 1c9e5186c0 LoopIdiom: Recognize memmove loops.
This turns loops like
  for (unsigned i = 0; i != n; ++i)
    p[i] = p[i+1];
into memmove, which has a highly optimized implementation in most libcs.

This was really easy with the new DependenceAnalysis :)

llvm-svn: 166875
2012-10-27 14:25:51 +00:00
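
The equivalence being recognized, written out as a sketch (memmove rather than memcpy because the source and destination ranges overlap):

    #include <cstring>

    void shiftLeftLoop(int *p, unsigned n) {
      for (unsigned i = 0; i != n; ++i)
        p[i] = p[i + 1];
    }

    void shiftLeftMemmove(int *p, unsigned n) {
      std::memmove(p, p + 1, n * sizeof(int));
    }
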
Benjamin Kramer d5c9be8247 LoopIdiom: Replace custom dependence analysis with DependenceAnalysis.
Requires a lot less code and complexity on loop-idiom's side and the more
precise analysis can catch more cases, like the one I included as a test case.
This also fixes the edge-case miscompilation from PR9481.

Compile time performance seems to be slightly worse, but this is mostly due
to an extra LCSSA run scheduled by the PassManager and should be fixed there.

llvm-svn: 166874
2012-10-27 14:25:44 +00:00
Nadav Rotem 859366f93f 1. Fix a bug in getTypeConversion. When a *simple* type is split, we need to return the type of the split result.
2. Change the maximum vectorization width from 4 to 8.
3. A test for both.

llvm-svn: 166864
2012-10-27 04:11:32 +00:00
Nadav Rotem afae78edab Refactor the VectorTargetTransformInfo interface.
Add getCostXXX calls for different families of opcodes, such as casts, arithmetic, cmp, etc.

Port the LoopVectorizer to the new API.

The LoopVectorizer now finds instructions which will remain uniform after vectorization. It uses this information when calculating the cost of these instructions.

llvm-svn: 166836
2012-10-26 23:49:28 +00:00
Hal Finkel e0d9db9953 Move target-specific BBVectorize tests into a separate directory.
llvm-svn: 166802
2012-10-26 19:38:09 +00:00
Nadav Rotem fcd1af344c Move the target-specific tests, which require specific backends, to dirs that only run if the target is present.
llvm-svn: 166796
2012-10-26 18:52:01 +00:00
Rafael Espindola 4253bd8faf Change the internalize pass to internalize all symbols when given an empty
list of externals. This makes sense since a shared library with no symbols
can still be useful if it has static constructors.

llvm-svn: 166795
2012-10-26 18:47:48 +00:00
Benjamin Kramer e3d821a466 Fix SCEV cache invalidation in LCSSA and LoopSimplify.
The LoopSimplify bug is pretty harmless because the loop goes from unanalyzable
to analyzable but the LCSSA bug is very nasty. It only comes into play with a
specific order of the LoopPassManager worklist and can cause actual
miscompilations, when a SCEV refers to a value that has been replaced with a PHI
node. SCEVExpander may then insert code into the wrong place, either violating
domination or randomly miscompiling stuff.

Comes with an extensive test case reduced from the test-suite with
bugpoint+SCEVValidator.

llvm-svn: 166787
2012-10-26 17:31:43 +00:00
Nadav Rotem 15198e94d2 Fix a crash in SimpliftDemandedBits of vectors of pointers.
PR14183.

llvm-svn: 166785
2012-10-26 17:17:05 +00:00
Rafael Espindola 375c7f3859 Port testcase to FileCheck.
llvm-svn: 166742
2012-10-26 00:14:11 +00:00
Hal Finkel 41a6ded4a0 Disable generation of pointer vectors by BBVectorize.
Once vector-of-pointer support works, then this can be reverted.

llvm-svn: 166741
2012-10-26 00:05:26 +00:00
Nadav Rotem 8255ceb2cf Revert 166726 because it may have broken a number of SPEC tests. PR14183.
llvm-svn: 166739
2012-10-25 23:51:48 +00:00
Nadav Rotem bb4cfb5ee1 Fix a crash in ValueTracking. Add support for vectors of pointers.
llvm-svn: 166726
2012-10-25 21:52:52 +00:00
Nadav Rotem ede4fd4777 Fix the cost-model test.
llvm-svn: 166722
2012-10-25 21:42:50 +00:00
Hal Finkel 65e0da798b Add CPU model to BBVectorize cost-model tests.
llvm-svn: 166720
2012-10-25 21:31:51 +00:00
Nadav Rotem 27d523580c Add the cpu model to the test.
llvm-svn: 166718
2012-10-25 21:18:42 +00:00
Hal Finkel cbf9365f4c Begin incorporating target information into BBVectorize.
This is the first of several steps to incorporate information from the new
TargetTransformInfo infrastructure into BBVectorize. Two things are done here:

 1. Target information is used to determine if it is profitable to fuse two
    instructions. This means that the cost of the vector operation must not
    be more expensive than the cost of the two original operations. Pairs that
    are not profitable are no longer considered (because current cost information
    is incomplete, for intrinsics for example, equal-cost pairs are still
    considered).

 2. The 'cost savings' computed for the profitability check are also used to
    rank the DAGs that represent the potential vectorization plans. Specifically,
    for nodes of non-trivial depth, the cost savings is used as the node
    weight.

The next step will be to incorporate the shuffle costs into the DAG weighting;
this will give the edges of the DAG weights as well. Once that is done, when
target information is available, we should be able to dispense with the
depth heuristic.

llvm-svn: 166716
2012-10-25 21:12:23 +00:00
Jakob Stoklund Olesen 977f41a1fa Also optimize large switch statements.
The isValueEqualityComparison() guard at the top of SimplifySwitch()
only applies to some of the possible transformations.

The newer transformations work just fine on large switches, and the
check on predecessor count is nonsensical.

llvm-svn: 166710
2012-10-25 18:51:15 +00:00
Chandler Carruth 58d0556765 Teach SROA how to split whole-alloca integer loads and stores into
smaller integer loads and stores.

The high-level motivation is that the frontend sometimes generates
a single whole-alloca integer load or store during ABI lowering of
splittable allocas. We need to be able to break this apart in order to
see the underlying elements and properly promote them to SSA values. The
hope is that this fixes some performance regressions on x86-32 with the
new SROA pass.

Unfortunately, this causes quite a bit of churn in the test cases, and
bloats some IR that comes out. When we see an alloca that consists solely
of bits and bytes being extracted and re-inserted, we now do some
splitting first, before building widened integer "bucket of bits"
representations. These are always well folded by instcombine however, so
this shouldn't actually result in missed opportunities.

If this splitting of all-integer allocas does cause problems (perhaps
due to smaller SSA values going into the RA), we could potentially go to
some extreme measures to only do this integer splitting trick when there
are non-integer component accesses of an alloca, but discovering this is
quite expensive: it adds yet another complete walk of the recursive use
tree of the alloca.

Either way, I will be watching build bots and LNT bots to see what
fallout there is here. If anyone gets x86-32 numbers before & after this
change, I would be very interested.

llvm-svn: 166662
2012-10-25 04:37:07 +00:00
Nadav Rotem 5ffb049a55 Add support for additional reduction variables: AND, OR, XOR.
Patch by Paul Redmond <paul.redmond@intel.com>.
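
Illustrative examples of the newly supported reduction forms (a sketch only; the function names are hypothetical and the initializers show the identity value of each operation):

  unsigned and_reduce(const unsigned *A, unsigned n) {
    unsigned acc = ~0u;              // AND reduction: identity is all-ones
    for (unsigned i = 0; i < n; ++i)
      acc &= A[i];
    return acc;
  }

  unsigned xor_reduce(const unsigned *A, unsigned n) {
    unsigned acc = 0;                // XOR reduction: identity is zero
    for (unsigned i = 0; i < n; ++i)
      acc ^= A[i];
    return acc;
  }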

llvm-svn: 166649
2012-10-25 00:08:41 +00:00
Nadav Rotem 4a87683a41 Implement a basic cost model for vector and scalar instructions.
llvm-svn: 166642
2012-10-24 23:47:38 +00:00
Hal Finkel 69b07a2c3a Update GVN to support vectors of pointers.
GVN will now generate ptrtoint instructions for vectors of pointers.
Fixes PR14166.

llvm-svn: 166624
2012-10-24 21:22:30 +00:00
Nadav Rotem a721b21c64 LoopVectorizer: Add a basic cost model which uses the VTTI interface.
llvm-svn: 166620
2012-10-24 20:36:32 +00:00
Hal Finkel 30bd9346a0 getSmallConstantTripMultiple should never return zero.
When the trip count is -1, getSmallConstantTripMultiple could return zero,
and this would cause runtime loop unrolling to assert. Instead of returning
zero, one is now returned (consistent with the existing overflow cases).
Fixes PR14167.

llvm-svn: 166612
2012-10-24 19:46:44 +00:00
Micah Villmow 12d9127833 Add in support for getIntPtrType to get the pointer type based on the address space.
This checkin also adds in some tests that utilize these paths and updates some of the
clients.

llvm-svn: 166578
2012-10-24 15:52:52 +00:00
Duncan Sands 72c19ed386 Add a testcase that would have noticed the typo fixed in commit 166475.
llvm-svn: 166547
2012-10-24 07:17:20 +00:00
Nadav Rotem 5bed7b4fad Use the AliasAnalysis isIdentifiedObject helper because it also understands mallocs and C++ new expressions.
PR14158.

llvm-svn: 166491
2012-10-23 18:44:18 +00:00
Bill Wendling 5858b56ce3 Ignore unreachable blocks when doing memory dependence analysis on non-local
loads. It's not really profitable and may result in GVN going into an infinite
loop when it hits constructs like this:

     %x = gep %some.type %x, ...

Found via an LTO build of LLVM.

llvm-svn: 166490
2012-10-23 18:37:11 +00:00
Duncan Sands 533c8ae79f Transform code like this
  %V = mul i64 %N, 4
  %t = getelementptr i8* bitcast (i32* %arr to i8*), i32 %V
into
  %t1 = getelementptr i32* %arr, i32 %N
  %t = bitcast i32* %t1 to i8*
incorporating the multiplication into the getelementptr.
This happens all the time in dragonegg, for example for
  int foo(int *A, int N) {
    return A[N];
  }
because gcc turns this into byte pointer arithmetic before it hits the plugin:
  D.1590_2 = (long unsigned int) N_1(D);
  D.1591_3 = D.1590_2 * 4;
  D.1592_5 = A_4(D) + D.1591_3;
  D.1589_6 = *D.1592_5;
  return D.1589_6;
The D.1592_5 line is a POINTER_PLUS_EXPR, which is turned into a getelementptr
on a bitcast of A_4 to i8*, so this becomes exactly the kind of IR that the
transform fires on.

An analogous transform (with no testcases!) already existed for bitcasts of
arrays, so I rewrote it to share code with this one.

llvm-svn: 166474
2012-10-23 08:28:26 +00:00
Nadav Rotem 1c7fc71e69 Don't crash if the load/store pointer is not a GEP.
Fix by Shivarama Rao <Shivarama.Rao@amd.com>

llvm-svn: 166427
2012-10-22 18:27:56 +00:00
Argyrios Kyrtzidis 54ff5e81a1 Revert r166407 because it caused analyzer tests to crash and broke self-host bots.
llvm-svn: 166424
2012-10-22 18:16:14 +00:00
Hal Finkel 931c52b84c BBVectorize should ignore unreachable blocks.
Unreachable blocks can have invalid instructions. For example,
jump threading can produce self-referential instructions in
unreachable blocks. Also, we should not be spending time
optimizing unreachable code. Fixes PR14133.

llvm-svn: 166423
2012-10-22 18:00:55 +00:00
Nadav Rotem 03011f1393 Vectorizer: optimize the generation of selects. If the condition is uniform, generate a scalar-cond select (i1 as selector).
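
An illustrative source pattern (a sketch; names are hypothetical): the select condition is loop invariant, so after vectorization a single scalar i1 can steer the whole vector select instead of a vector of conditions:

  void select_uniform(int *A, const int *B, int k, unsigned n) {
    for (unsigned i = 0; i < n; ++i)
      A[i] = (k > 0) ? B[i] : 0;     // 'k > 0' does not depend on i: uniform condition
  }
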
llvm-svn: 166409
2012-10-22 04:38:00 +00:00
Nick Lewycky 8b67e1e0b9 Reapply r166405, teaching tailcallelim to be smarter about nocapture, with a
very small but very important bugfix:
  bool shouldExplore(Use *U) {
    Value *V = U->get();
    if (isa<CallInst>(V) || isa<InvokeInst>(V))
    [...]
should have read:
  bool shouldExplore(Use *U) {
    Value *V = U->getUser();
    if (isa<CallInst>(V) || isa<InvokeInst>(V))
Fixes PR14143!

llvm-svn: 166407
2012-10-22 03:03:52 +00:00
NAKAMURA Takumi 60d56d2eea Revert r166405, "Teach TailRecursionElimination to consider 'nocapture' when deciding whether"
It broke selfhosting stage2 in several builders.

llvm-svn: 166406
2012-10-22 00:48:51 +00:00
Nick Lewycky 2d28f2bf83 Teach TailRecursionElimination to consider 'nocapture' when deciding whether
calls can be marked tail.

llvm-svn: 166405
2012-10-21 23:51:22 +00:00
Hal Finkel 8884dc323f DataLayout should use itself when calculating the size of a vector.
This is important for vectors of pointers because only DataLayout,
not the underlying vector type, knows how to calculate the size
of the pointers in the vector. Fixes PR14138.

llvm-svn: 166401
2012-10-21 20:38:03 +00:00
Benjamin Kramer f77f224df9 Revert r166390 "LoopIdiom: Replace custom dependence analysis with LoopDependenceAnalysis."
It passes all tests, produces better results than the old code but uses the
wrong pass, LoopDependenceAnalysis, which is old and unmaintained. "Why is it
still in tree?", you might ask. The answer is obviously: "To confuse developers."

Just swapping in the new dependency pass sends the pass manager into an infinite
loop, I'll try to figure out why tomorrow.

llvm-svn: 166399
2012-10-21 19:31:16 +00:00
Benjamin Kramer 3ae8bc68af LoopIdiom: Replace custom dependence analysis with LoopDependenceAnalysis.
Requires a lot less code and complexity on loop-idiom's side and the more
precise analysis can catch more cases, like the one I included as a test case.
This also fixes the edge-case miscompilation from PR9481. I'm not entirely
sure that all cases are handled that the old checks handled but LDA will
certainly become smarter in the future.

llvm-svn: 166390
2012-10-21 15:03:07 +00:00
Nadav Rotem fe88c67161 Fix a bug in the vectorization of wide load/store operations.
We used a SCEV to detect that A[X] is consecutive. We assumed that X was
the induction variable. But X can be any expression that uses the induction
variable, for example: X = i + 2.
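
A sketch of the kind of access this fixes (illustrative only; assumes the arrays hold at least n + 2 elements): the subscript is an affine expression of the induction variable rather than the variable itself, yet the accesses are still consecutive:

  void shifted_access(float *A, const float *B, unsigned n) {
    for (unsigned i = 0; i < n; ++i)
      A[i + 2] = B[i + 2] * 2.0f;    // X = i + 2: the stride is still 1
  }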

llvm-svn: 166388
2012-10-21 06:49:10 +00:00
Nadav Rotem c1679a95b6 Add support for reduction variables that do not start at zero.
This is important for nested-loop reductions such as :

In the innermost loop, the reduction variable does not start at zero:

for (i = 0 .. n)
 for (j = 0 .. m)
  sum += ...
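
A concrete sketch of the pattern above (illustrative types and names): viewed from the inner loop alone, 'sum' starts from whatever the outer iterations have already accumulated, not from zero:

  int nested_sum(const int *A, unsigned n, unsigned m) {
    int sum = 0;
    for (unsigned i = 0; i < n; ++i)
      for (unsigned j = 0; j < m; ++j)
        sum += A[i * m + j];         // the inner reduction resumes from the running sum
    return sum;
  }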

llvm-svn: 166387
2012-10-21 05:52:51 +00:00
Nadav Rotem 7e1084d36c Vectorizer: fix a bug in the classification of induction/reduction phis.
llvm-svn: 166384
2012-10-21 02:38:01 +00:00
Nadav Rotem e5dc57d4fb Fix an infinite loop in the loop-vectorizer.
PR14134.

llvm-svn: 166379
2012-10-20 20:45:01 +00:00
Benjamin Kramer f55b592cc8 InstCombine: Fix an edge case where constant icmps could sneak into ConstantFoldInstOperands and crash.
Have to refactor the ConstantFolder interface one day to define bugs like this away. Fixes PR14131.

llvm-svn: 166374
2012-10-20 08:43:52 +00:00
Nadav Rotem d189b82a9b Vectorize: teach canVectorizeMemory to distinguish between A[i]+=x and A[B[i]]+=x.
If the pointer is consecutive then it is safe to read and write. If the pointer is non-loop-consecutive then
it is unsafe to vectorize it because we may hit an ordering issue.
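
A sketch contrasting the two cases (illustrative only; assumes every B[i] is a valid index):

  void consecutive(int *A, int x, unsigned n) {
    for (unsigned i = 0; i < n; ++i)
      A[i] += x;                     // stride-1 read and write: safe to vectorize
  }

  void gathered(int *A, const int *B, int x, unsigned n) {
    for (unsigned i = 0; i < n; ++i)
      A[B[i]] += x;                  // indexed write: B[i] may repeat, so reordering is unsafe
  }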

llvm-svn: 166371
2012-10-20 08:26:33 +00:00
Nadav Rotem 4f7f72702b Vectorizer: Add support for loop reductions.
For example:

  for (i = 0; i < n; i++)
    sum += A[i] + B[i] + i;

llvm-svn: 166351
2012-10-19 23:05:40 +00:00
Benjamin Kramer 317d6c621d SimplifyLibcalls: The return value of ffsll is always i32, even when the input is zero.
Fixes PR13028.
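
For reference, the relevant libc declaration (quoted for illustration; the availability of ffsll depends on the C library):

  // The result type is int, so the simplified IR value must be i32
  // even though the argument is 64 bits wide.
  extern "C" int ffsll(long long value);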

llvm-svn: 166313
2012-10-19 20:43:44 +00:00
Benjamin Kramer f1088a37cb Indvars: Don't recursively delete instruction during BB iteration.
This can invalidate the iterators leading to use after frees and crashes.
Fixes PR12536.

llvm-svn: 166291
2012-10-19 17:53:54 +00:00
Benjamin Kramer a225ed8d2b SCEVExpander: Don't crash when trying to merge two constant phis.
Just constant fold them so they can't cause any trouble. Fixes PR12627.

llvm-svn: 166286
2012-10-19 16:37:30 +00:00
Nadav Rotem ced93f3a05 vectorizer: Add support for reading and writing from the same memory location.
llvm-svn: 166255
2012-10-19 01:24:18 +00:00
Meador Inge 000dbccfc6 instcombine: Migrate strcpy optimizations
This patch migrates the strcpy optimizations from the simplify-libcalls pass
into the instcombine library call simplifier.  Note also that StrCpyChkOpt
has been updated with a few simplifications that were being done in the
simplify-libcalls version of StrCpyOpt, but not in the migrated implementation
of StrCpyOpt.  There is no reason to overload StrCpyOpt with fortified and
regular simplifications in the new model since there is already a dedicated
simplifier for __strcpy_chk.

llvm-svn: 166198
2012-10-18 18:12:40 +00:00
Nadav Rotem b52f717411 Vectorizer: Add support for loops with an unknown count. For example:
  for (i = 0; i < n; i++) {
    a[i] = b[i+1] + c[i+3];
  }

llvm-svn: 166165
2012-10-18 05:29:12 +00:00
Nadav Rotem 6b94c2a09b Add a loop vectorizer.
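
A minimal example of the kind of loop the new pass targets (illustrative only; the function name is hypothetical):

  void saxpy(float *y, const float *x, float a, unsigned n) {
    for (unsigned i = 0; i < n; ++i)
      y[i] = a * x[i] + y[i];        // independent iterations, unit-stride accesses
  }
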
llvm-svn: 166112
2012-10-17 18:25:06 +00:00
Chandler Carruth 6fab42aa39 This just in, it is a *bad idea* to use 'udiv' on an offset of
a pointer. A very bad idea. Let's not do that. Fixes PR14105.

Note that this wasn't *that* glaring of an oversight. Originally, these
routines were only called on offsets within an alloca, which are
intrinsically positive. But over the evolution of the pass, they ended
up being called for arbitrary offsets, and things went downhill...
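
A sketch of why unsigned division goes wrong once offsets can be negative (plain arithmetic, illustrative only):

  #include <cstdio>

  int main() {
    long offset = -8, eltSize = 4;
    std::printf("%ld\n", offset / eltSize);   // -2: the intended element index
    std::printf("%lu\n", (unsigned long)offset / (unsigned long)eltSize);
                                              // a huge, meaningless index
    return 0;
  }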

llvm-svn: 166095
2012-10-17 09:23:48 +00:00
Michael Gottesman 02a1141e5a [InstCombine] Teach InstCombine how to handle an obfuscated splat.
An obfuscated splat is where the frontend poorly generates code for a splat
using several different shuffles to create the splat, i.e.,

  %A = load <4 x float>* %in_ptr, align 16
  %B = shufflevector <4 x float> %A, <4 x float> undef, <4 x i32> <i32 0, i32 0, i32 undef, i32 undef>
  %C = shufflevector <4 x float> %B, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 4, i32 undef>
  %D = shufflevector <4 x float> %C, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 2, i32 4>

llvm-svn: 166061
2012-10-16 21:29:38 +00:00
Chandler Carruth 49c8eea3c0 Update the memcpy rewriting to fully support widened int rewriting. This
includes extracting ints for copying elsewhere and inserting ints when
copying into the alloca. This should fix the CanSROA assertion coming
out of Clang's regression test suite.

llvm-svn: 165931
2012-10-15 10:24:43 +00:00
Chandler Carruth 9d966a2002 Follow-up fix to r165928: handle memset rewriting for widened integers,
and generally clean up the memset handling. It had rotted a bit as the
other rewriting logic got polished more.

llvm-svn: 165930
2012-10-15 10:24:40 +00:00
Chandler Carruth 435c4e0792 First major step toward addressing PR14059. This teaches SROA to handle
cases where we have partial integer loads and stores to an otherwise
promotable alloca to widen[1] those loads and stores to cover the entire
alloca and bitcast them into the appropriate type such that promotion
can proceed.

These partial loads and stores stem from an annoying confluence of ARM's
calling convention and ABI lowering and the FCA pre-splitting which
takes place in SROA. Clang lowers a { double, double } in-register
function argument as a [4 x i32] function argument to ensure it is
placed into integer 32-bit registers (a really unnerving implicit
contract between Clang and the ARM backend I would add). This results in
a FCA load of [4 x i32]* from the { double, double } alloca, and SROA
decomposes this into a sequence of i32 loads and stores. Inlining
proceeds, code gets folded, but at the end of the day, we still have i32
stores to the low and high halves of a double alloca. Widening these to
be i64 operations, and bitcasting them to double prior to loading or
storing allows promotion to proceed for these allocas.
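
A sketch of the kind of source that produces this IR (illustrative; the exact lowering depends on the ARM ABI configuration and the type name is hypothetical):

  // Clang lowers the by-value { double, double } argument to a
  // [4 x i32] in-register argument, which SROA then sees as i32
  // pieces of a double-typed alloca.
  typedef struct { double re, im; } Complex;

  double magnitude_squared(Complex z) {
    return z.re * z.re + z.im * z.im;
  }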

I looked quite a bit changing the IR which Clang produces for this case
to be more friendly, but small changes seem unlikely to help. I think
the best representation we could use currently would be to pass 4 i32
arguments thereby avoiding any FCAs, but that would still require this
fix. It seems like it might eventually be nice to somehow encode the ABI
register selection choices outside of the parameter type system so that
the parameter can be a { double, double }, but the CC register
annotations indicate that this should be passed via 4 integer registers.

This patch does not address the second problem in PR14059, which is the
reverse: when a struct alloca is loaded as a *larger* single integer.

This patch also does not address some of the code quality issues with
the FCA-splitting. Those don't actually impede any optimizations really,
but they're on my list to clean up.

[1]: Pedantic footnote: for those concerned about memory model issues
here, this is safe. For the alloca to be promotable, it cannot escape or
have any use of its address that could allow these loads or stores to be
racing. Thus, widening is always safe.

llvm-svn: 165928
2012-10-15 08:40:30 +00:00
Meador Inge 40b6fac36c instcombine: Migrate strcmp and strncmp optimizations
This patch migrates the strcmp and strncmp optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

llvm-svn: 165915
2012-10-15 03:47:37 +00:00
Meador Inge 174185084c instcombine: Migrate strchr and strrchr optimizations
This patch migrates the strchr and strrchr optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

llvm-svn: 165875
2012-10-13 16:45:37 +00:00
Meador Inge 7fb2f7378b instcombine: Migrate strcat and strncat optimizations
This patch migrates the strcat and strncat optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

llvm-svn: 165874
2012-10-13 16:45:32 +00:00
Chandler Carruth ba9319925e Teach SROA to cope with wrapper aggregates. These show up a lot in ABI
type coercion code, especially when targeting ARM. Things like [1
x i32] instead of i32 are very common there.

The goal of this logic is to ensure that when we are picking an alloca
type, we look through such wrapper aggregates and across any zero-length
aggregate elements to find the simplest type possible to form a type
partition.

This logic should (generally speaking) rarely fire. It only ends up
kicking in when an alloca is accessed using two different types (for
instance, i32 and float), and the underlying alloca type has wrapper
aggregates around it. I noticed a significant amount of this occurring
looking at stepanov_abstraction generated code for arm, and suspect it
happens elsewhere as well.

Note that this doesn't yet address truly heinous IR productions such as
PR14059 is concerning. Those result in mismatched *sizes* of types in
addition to mismatched access and alloca types.

llvm-svn: 165870
2012-10-13 10:49:33 +00:00
Nick Lewycky 49ac81ac68 Don't crash when !tbaa.struct contents is invalid.
llvm-svn: 165693
2012-10-11 02:05:23 +00:00
Duncan Sands 244e3ba5f1 Add the testcase from pr13254 (the old scalarreply pass handles this wrong;
the new sroa pass handles it right).

llvm-svn: 165644
2012-10-10 18:41:19 +00:00
Michael Ilseman c93cffb590 New EarlyCSE tests for CSE-ing across commutativity.
llvm-svn: 165510
2012-10-09 16:58:13 +00:00
Alexey Samsonov 3b861ec989 Fix PR14016.
DeadArgumentElimination pass can replace one LLVM function with another,
invalidating a pointer stored in debug info metadata entry for this function.
To fix this, we collect debug info descriptors for functions before
running a DeadArgumentElimination pass and "patch" pointers in metadata nodes
if we replace a function.

llvm-svn: 165490
2012-10-09 08:13:15 +00:00
Chandler Carruth 503eb2bb49 Fix PR14034, an infloop / heap corruption / crash bug in the new SROA.
Thanks to Benjamin for the raw test case. This one took about 50 times
longer to reduce than to fix. =/

llvm-svn: 165476
2012-10-09 01:58:35 +00:00
Micah Villmow 9cfc13d46c Move TargetData to DataLayout.
llvm-svn: 165403
2012-10-08 16:39:34 +00:00
Chandler Carruth e5b7a2ccd2 Teach the new SROA a new trick. Now we zap any memcpy or memmoves which
are in fact identity operations. We detect these and kill their
partitions so that even splitting is unaffected by them. This is
particularly important because Clang relies on emitting identity memcpy
operations for struct copies, and these fold away to constants very
often after inlining.
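
A sketch of how such identity copies commonly arise (illustrative; names are hypothetical): a struct copy whose source and destination collapse onto the same memory after inlining:

  struct Pair { int a, b; };

  static Pair identity(Pair p) { return p; }

  void touch(Pair *q) {
    *q = identity(*q);               // after inlining, the intermediate copies can fold into a self-copy
  }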

Fixes the last big performance FIXME I have on my plate.

llvm-svn: 165285
2012-10-05 01:29:09 +00:00
Benjamin Kramer d12e82e523 SimplifyCFG: Enhance the "remove CFG edge that leads to null pointer dereference" optimization to also handle instructions with multiple uses.
We conservatively only check the first use to avoid walking long use chains.
This catches the common case of having both a load and a store to a pointer
supplied by a PHI node.
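
An illustrative source pattern (a sketch; the function name is hypothetical): both the load and the store go through a pointer that becomes a PHI node, and one incoming value is null:

  int through_phi(int *p, bool c) {
    int *q = c ? p : nullptr;        // lowers to a PHI of p and null
    *q = 1;                          // store through the PHI (first use)
    return *q;                       // load through the PHI (second use)
  }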

llvm-svn: 165232
2012-10-04 16:11:49 +00:00
Duncan Sands a6d20010fe In my recent change to avoid use of underaligned memory I didn't notice that
cpyDest can be mutated in some cases, which would then cause a crash later if
indeed the memory was underaligned.  This brought down several buildbots, so
I guess the underaligned case is much more common than I thought!

llvm-svn: 165228
2012-10-04 13:53:21 +00:00
Duncan Sands 271ea6cdc5 The alignment of an sret parameter is known: it must be at least the
alignment of the return type.  Teach the optimizers this.

llvm-svn: 165226
2012-10-04 13:36:31 +00:00
Chandler Carruth ac8317fd36 Fix PR13969, a mini-phase-ordering issue with the new SROA pass.
Currently, we re-visit allocas when something changes about the way they
might be *split* to allow better scalarization to take place. However,
we weren't handling the case when the *promotion* is what would change
the behavior of SROA. When an address derived from an alloca is stored
into another alloca, we consider the first to have escaped. If the
second is ever promoted to an SSA value, we will suddenly be able to run
the SROA pass on the first alloca.
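
A sketch of the pattern being described (illustrative only): the address of one local is stored into another local, so the first looks escaped until the second is promoted:

  int reprocess_example() {
    int value = 0;                   // first alloca
    int *holder = &value;            // second alloca holds the first one's address
    *holder = 42;                    // indirect store through the second alloca
    return value;                    // once 'holder' is promoted, 'value' becomes promotable too
  }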

This patch adds explicit support for this form of iteration. When we
detect a store of a pointer derived from an alloca, we flag the
underlying alloca for reprocessing after promotion. The logic works hard
to only do this when there is definitely going to be promotion and it
might remove impediments to the analysis of the alloca.

Thanks to Nick for the great test case and Benjamin for some sanity
check review.

llvm-svn: 165223
2012-10-04 12:33:50 +00:00
Duncan Sands c6ada69a14 The memcpy optimizer was happily doing call slot forwarding when the new memory
was less aligned than the old.  In the testcase this results in an overaligned
memset: the memset alignment was correct for the original memory but is too much
for the new memory.  Fix this by either increasing the alignment of the new
memory or bailing out if that isn't possible.  Should fix the gcc-4.7 self-host
buildbot failure.

llvm-svn: 165220
2012-10-04 10:54:40 +00:00
Chandler Carruth 43c8b46deb Teach the integer-promotion rewrite strategy to be endianness aware.
Sorry for this being broken so long. =/
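
A sketch of the underlying issue (illustrative, for a 32-bit integer): extracting the byte stored at memory offset 1 needs a different shift amount depending on the target's endianness, which is what the rewrite strategy now takes from DataLayout:

  unsigned byte_at_offset1_le(unsigned v) { return (v >> 8)  & 0xff; }  // little-endian layout
  unsigned byte_at_offset1_be(unsigned v) { return (v >> 16) & 0xff; }  // big-endian layout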

As part of this, switch all of the existing tests to be Little Endian,
which is the behavior I was asserting in them anyways! Add in a new
big-endian test that checks the interesting behavior there.

Another part of this is to tighten the rules about when we perform the
full-integer promotion. This logic now rejects cases where the fully
promoted integer is a non-multiple-of-8 bitwidth or cases where the
loads or stores touch bits which are in the allocated space of the
alloca but are not loaded or stored when accessing the integer. Sadly,
these aren't really observable today as the rest of the pass will
already ensure the invariants hold. However, the latter situation is
likely to become a potential concern in the future.

Thanks to Benjamin and Duncan for early review of this patch. I'm still
looking into whether there are further endianness issues, please let me
know if anyone sees BE failures persisting past this.

llvm-svn: 165219
2012-10-04 10:39:28 +00:00
Jakub Staszak f8a8129513 Fix PR13967.
llvm-svn: 165187
2012-10-03 23:59:47 +00:00
Chandler Carruth 08e5f49f90 Fix an issue where we failed to adjust the alignment constraint on
a memcpy to reflect that '0' has a different meaning when applied to
a load or store. Now we correctly use underaligned loads and stores for
the test case added.

llvm-svn: 165101
2012-10-03 08:26:28 +00:00
Chandler Carruth 4b2b38d398 Try to use a better set of abstractions for computing the alignment
necessary during rewriting. As part of this, fix a real think-o here
where we might have left off an alignment specification when the address
is in fact underaligned. I haven't come up with any way to trigger this,
as there is always some other factor that reduces the alignment, but it
certainly might have been an observable bug in some way I can't think
of. This also slightly changes the strategy for placing explicit
alignments on loads and stores to only do so when the alignment does not
match that required by the ABI. This causes a few redundant alignments
to go away from test cases.

I've also added a couple of tests that really push on the alignment that
we end up with on loads and stores. More to come here as I try to fix an
underlying bug I have conjectured and produced test cases for, although
it's not clear if this bug is the one currently hitting dragonegg's
gcc47 bootstrap.

llvm-svn: 165100
2012-10-03 08:14:02 +00:00
Chandler Carruth b09f0a3c75 Teach the new SROA to handle cases where an alloca that has already been
scheduled for processing on the worklist eventually gets deleted while
we are processing another alloca, fixing the original test case in
PR13990.

To facilitate this, add a remove_if helper to the SetVector abstraction.
It's not easy to use the standard abstractions for this because of the
specifics of SetVectors types and implementation.

Finally, a nice small test case is included. Thanks to Benjamin for the
fantastic reduced test case here! All I had to do was delete some empty
basic blocks!

llvm-svn: 165065
2012-10-02 22:46:45 +00:00
Benjamin Kramer bc2724f6e0 Fix broken tests.
llvm-svn: 165019
2012-10-02 15:49:34 +00:00
Chandler Carruth 9866b97f94 Fix more misspellings found by Duncan during review.
llvm-svn: 164940
2012-10-01 12:30:45 +00:00
Chandler Carruth 176ca71a82 Fix several issues with alignment. We weren't always accounting for type
alignment requirements of the new alloca. As one consequence which was
reported as a bug by Duncan, we overaligned memcpy calls to ranges of
allocas after they were rewritten to types with lower alignment
requirements. Other consequences are possible, but I don't have any test
cases for them.

llvm-svn: 164937
2012-10-01 12:16:54 +00:00