Commit Graph

172 Commits

Author SHA1 Message Date
Dan Gohman e5572706e8 Refine the fix in r51169 to only apply when the operand value being
replaced is a PHI. This prevents it from inserting uses before defs
in the case that it isn't a PHI and it depends on other instructions
later in the block. This fixes the 447.dealII regression on x86-64.

llvm-svn: 51292
2008-05-20 03:01:48 +00:00
Dan Gohman 0a0fa7cf78 Fix a bug in LoopStrengthReduce that caused it to emit IR with
use-before-def. The problem comes up in code with multiple PHIs where
one PHI is being rewritten in terms of the other, but the other needs
to be cast first. LLVM rules require the cast instruction to be
inserted after any PHI instructions, but when instructions were
inserted to replace the second PHI value with a function of the first,
they ended up going before the cast instruction. Avoid this
problem by remembering the location of the cast instruction, when one
is needed, and inserting the expansion of the new value after it.

This fixes a bug that surfaced in 255.vortex on x86-64 when
instcombine was removed from the middle of the loop optimization
passes. 

llvm-svn: 51169
2008-05-15 23:26:57 +00:00
Dan Gohman d78c400b5b Clean up the use of static and anonymous namespaces. This turned up
several things that were neither in an anonymous namespace nor
declared static, but were not intended to be global.

llvm-svn: 51017
2008-05-13 00:00:25 +00:00
Dan Gohman e36714c0b4 Minor whitespace and comment cleanups.
llvm-svn: 49671
2008-04-14 18:26:16 +00:00
Gabor Greif e9ecc68d8f API changes for class Use size reduction, wave 1.
Specifically, introduction of XXX::Create methods
for Users that have a potentially variable number of
Uses.

llvm-svn: 49277
2008-04-06 20:25:17 +00:00
Evan Cheng a90fdc4340 Remove dead options.
llvm-svn: 48556
2008-03-19 22:02:26 +00:00
Dan Gohman 70de4cb1cd Use empty() instead of comparing size() with zero.
llvm-svn: 46514
2008-01-29 13:02:09 +00:00
Chris Lattner f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Evan Cheng 26ee54eb05 Clean up previous patch: PHI uses should not prevent iv reuse if all other uses are addresses. This trades a constant multiply for one fewer iv.
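
For illustration (not from the commit), a hedged source-level sketch
of the trade:

    int f(char *a, int n) {
      int j = 0;                 // j == 4*i throughout the loop
      for (int i = 0; i < n; ++i) {
        a[j] = 0;                // address use: the x4 scale folds
        j += 4;                  // into an [a + 4*i] addressing mode
      }
      return j;                  // value use at the exit PHI can be
    }                            // recomputed as 4*n: one constant
                                 // multiply, one fewer iv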
llvm-svn: 45251
2007-12-20 02:20:53 +00:00
Evan Cheng e2a8ba7fec Allow iv reuse if the user is a PHI node which is in turn used as addresses.
llvm-svn: 45230
2007-12-19 23:33:23 +00:00
Dale Johannesen 7d97662467 Remove indeterminism from a loop. We think this will
fix an occasional nonrepeatable bootstrap failure we've
been seeing on Darwin.

llvm-svn: 44202
2007-11-17 02:48:01 +00:00
Evan Cheng 240c1adade At the end of LSR, replace uses of a PHI node that has become constant (as a result of SplitCriticalEdge) with the constant value.
llvm-svn: 43533
2007-10-30 23:45:15 +00:00
Evan Cheng c2dbfee43f It's not safe to tell SplitCriticalEdge to merge identical edges. It may delete the phi instruction that's being processed.
llvm-svn: 43524
2007-10-30 22:27:26 +00:00
Evan Cheng b024c4c81d - Bug fixes.
- Allow icmp rewrite using an iv / stride of a smaller integer type.

llvm-svn: 43480
2007-10-29 22:07:18 +00:00
Dan Gohman 7414e21ec0 Update a comment to reflect the current code.
llvm-svn: 43463
2007-10-29 19:32:39 +00:00
Dan Gohman f5feb01056 Remove an unused function argument.
llvm-svn: 43462
2007-10-29 19:31:25 +00:00
Dan Gohman 50d42224d0 Fix a typo in a comment.
llvm-svn: 43461
2007-10-29 19:26:14 +00:00
Dan Gohman 8e8adada83 Avoid calling ValidStride when not all uses are addresses.
llvm-svn: 43460
2007-10-29 19:23:53 +00:00
Evan Cheng 9dbe99dcd6 A number of LSR fixes:
- ChangeCompareStride only reuses a stride that is larger than the
  current stride. This lets the general reuse mechanism try to reuse
  a smaller stride.
- Watch out for multiplication overflow in ChangeCompareStride (see
  the sketch after this list).
- Replace std::set with SmallPtrSet.
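
For illustration (not from the commit), a hedged sketch of the guard
the overflow concern implies: rewriting a compare (i < C) against a
4x-strided iv as (i4 < 4*C) is only sound when 4*C still fits in the
compare's type.

    #include <cstdint>
    #include <limits>

    // Returns true if c * scale is representable in int32_t.
    bool scaledLimitFits(int32_t c, int32_t scale) {
      int64_t wide = int64_t(c) * scale;   // compute in a wider type
      return wide >= std::numeric_limits<int32_t>::min() &&
             wide <= std::numeric_limits<int32_t>::max();
    }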

llvm-svn: 43408
2007-10-26 23:08:19 +00:00
Evan Cheng d78a3e5555 Fix a crash. Make sure TLI is not null.
llvm-svn: 43384
2007-10-26 17:24:46 +00:00
Evan Cheng 7f3d02471d Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free.
e.g.
Turns this loop:
LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
        movw    %dx, %si
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %edi
        movw    %si, (%edi)
        movl    L_Y$non_lazy_ptr, %edi
        movw    %dx, (%edi)
        addw    $4, %dx
        incw    %si
        incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

into

LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %esi
        movw    %cx, (%esi)
        movl    L_Y$non_lazy_ptr, %esi
        movw    %dx, (%esi)
        addw    $4, %dx
        incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb
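
A plausible source shape for such a loop (reconstructed for
illustration; the names X, Y, and f are hypothetical):

    short X, Y;                        // 16-bit globals, as in the asm
    void f(int n) {
      short j = 0, k = 0;
      for (int i = 0; i < n; ++i) {    // i is 32-bit
        X = j;                         // j == (short)i
        Y = k;                         // k == (short)(4*i)
        ++j;
        k += 4;
      }
    }

Since truncating i32 to i16 is free on x86, j can reuse i's stride:
the second version materializes j from the low 16 bits of %ecx,
saving one induction variable.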

llvm-svn: 43375
2007-10-26 01:56:11 +00:00
Evan Cheng 29e29e63bd Do not rewrite a compare instruction using an iv of a different stride
if the new stride may itself be rewritten using the stride of the
compare instruction.

llvm-svn: 43367
2007-10-25 22:45:20 +00:00
Evan Cheng 5a38108374 Remove code that's commented out.
llvm-svn: 43356
2007-10-25 18:38:24 +00:00
Evan Cheng 133694db06 If a loop termination compare instruction is the only use of its stride,
and the comparison is against a constant value, try to eliminate the
stride by moving the compare instruction to another stride and changing
its constant operand accordingly, e.g.

loop:
...
v1 = v1 + 3
v2 = v2 + 1
if (v2 < 10) goto loop
=>
loop:
...
v1 = v1 + 3
if (v1 < 30) goto loop

llvm-svn: 43336
2007-10-25 09:11:16 +00:00
Dan Gohman e0c3d9f338 Strength reduction improvements.
- Avoid attempting stride-reuse in the case that there are users that
  aren't addresses. In that case, there will be places where the
  multiplications won't be folded away, so it's better to try to
  strength-reduce them.

- Several SSE intrinsics have operands that strength-reduction can
  treat as addresses. The previous item makes this more visible, as
  any non-address use of an IV can inhibit stride-reuse.

- Make ValidStride aware of whether there's likely to be a base
  register in the address computation. This prevents it from thinking
  that things like stride 9 are valid on x86 when the base register is
  already occupied (see the sketch after this list).
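
For illustration (not from the commit), a hedged sketch of the
stride-9 case:

    // x86 addressing is [base + index*scale + disp] with scale
    // limited to 1, 2, 4, or 8. With 'a' occupying the base
    // register, 9*i below cannot fold into a scale, so a pointer
    // bumped by 9 each iteration is the better form.
    void g(char *a, int n) {           // g is a hypothetical name
      for (int i = 0; i < n; ++i)
        a[9 * i] = 0;
    }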

Also, XFAIL the 2007-08-10-LEA16Use32.ll test; the new logic to avoid
stride-reuse eliminates the LEA in the loop, so the test is no longer
testing what it was intended to test.

llvm-svn: 43231
2007-10-22 20:40:42 +00:00
Dan Gohman a37eaf2bf9 Move the SCEV object factory methods from being static members of the
individual SCEV subclasses to being non-static member functions of the
ScalarEvolution class.

llvm-svn: 43224
2007-10-22 18:31:58 +00:00
Dale Johannesen b6c05b1f90 Fix stride computations for long double arrays.
llvm-svn: 42508
2007-10-01 23:08:35 +00:00
Chris Lattner 2740694450 wrap some long lines. Major offenders that are left include
gvn, gvnpre, dse, and predsimplify.  To see these, use:

  make check-line-length

llvm-svn: 40738
2007-08-02 16:53:43 +00:00
Dan Gohman 34d442f274 More explicit keywords.
llvm-svn: 40673
2007-08-01 15:32:29 +00:00
Dan Gohman 8c4da37b1f Use SCEVExpander::InsertCastOfTo instead of calling new IntToPtrInst
directly, because the insert point used by the SCEVExpander may vary
from what LSR originally computes.

llvm-svn: 40641
2007-07-31 17:22:27 +00:00
Dan Gohman 32f53bbd85 Rename ScalarEvolution::deleteInstructionFromRecords to
deleteValueFromRecords and loosen the types to allow it to accept
Value* instead of just Instruction*, since this is what
ScalarEvolution uses internally anyway. This allows more flexibility
for future uses.

llvm-svn: 37657
2007-06-19 14:28:31 +00:00
Dan Gohman cb9e09ad57 Add a SCEV class and supporting code for sign-extend expressions.
This created an ambiguity for expandInTy to decide when to use
sign-extension or zero-extension, but it turns out that most of its callers
don't actually need a type conversion, now that LLVM types don't have
explicit signedness. Drop expandInTy in favor of plain expand, and change
the few places that actually need a type conversion to do it themselves.

llvm-svn: 37591
2007-06-15 14:38:12 +00:00
Devang Patel df6355ccf8 Use DominatorTree instead of ETForest.
llvm-svn: 37499
2007-06-07 21:42:15 +00:00
Chris Lattner 1b7b6e76ec Fix PR1495 and CodeGen/X86/2007-06-05-LSR-Dominator.ll
llvm-svn: 37454
2007-06-06 01:23:55 +00:00
Chris Lattner e8bd53c36a Handle negative strides much more optimally. This compiles X86/lsr-negative-stride.ll
into:

_t:
        movl 8(%esp), %ecx
        movl 4(%esp), %eax
        cmpl %ecx, %eax
        je LBB1_3       #bb17
LBB1_1: #bb
        cmpl %ecx, %eax
        jg LBB1_4       #cond_true
LBB1_2: #cond_false
        subl %eax, %ecx
        cmpl %ecx, %eax
        jne LBB1_1      #bb
LBB1_3: #bb17
        ret
LBB1_4: #cond_true
        subl %ecx, %eax
        cmpl %ecx, %eax
        jne LBB1_1      #bb
        jmp LBB1_3      #bb17

instead of:

_t:
        subl $4, %esp
        movl %esi, (%esp)
        movl 12(%esp), %ecx
        movl 8(%esp), %eax
        cmpl %ecx, %eax
        je LBB1_4       #bb17
LBB1_1: #bb.outer
        movl %ecx, %edx
        negl %edx
LBB1_2: #bb
        cmpl %ecx, %eax
        jle LBB1_5      #cond_false
LBB1_3: #cond_true
        addl %edx, %eax
        cmpl %ecx, %eax
        jne LBB1_2      #bb
LBB1_4: #bb17
        movl (%esp), %esi
        addl $4, %esp
        ret
LBB1_5: #cond_false
        movl %ecx, %edx
        subl %eax, %edx
        movl %eax, %esi
        addl %esi, %esi
        cmpl %ecx, %esi
        je LBB1_4       #bb17
LBB1_6: #cond_false.bb.outer_crit_edge
        movl %edx, %ecx
        jmp LBB1_1      #bb.outer

llvm-svn: 37252
2007-05-19 01:22:21 +00:00
Chris Lattner 1480e16596 significantly improve debug output of lsr
llvm-svn: 36996
2007-05-11 22:40:34 +00:00
Dan Gohman 2bcbd5b7ca Use IntrinsicInst to test for prefetch instructions, which is ever so
slightly nicer than using CallInst with an extra check; thanks Chris.
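
A hedged sketch of the pattern (modern include path and API
spelling; the 2007 tree differed in detail):

    #include "llvm/IR/IntrinsicInst.h"
    using namespace llvm;

    // IntrinsicInst bundles the call-instruction cast and the
    // intrinsic-ID check into a single dyn_cast.
    static bool isPrefetch(const Instruction *I) {
      if (const auto *II = dyn_cast<IntrinsicInst>(I))
        return II->getIntrinsicID() == Intrinsic::prefetch;
      return false;
    }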

llvm-svn: 36743
2007-05-04 14:59:09 +00:00
Dan Gohman 3fbb18d1b6 Allow strength reduction to make use of addressing modes for the
address operand in a prefetch intrinsic.
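
For illustration (not from the commit), a hedged sketch using the
GCC/Clang builtin that lowers to the prefetch intrinsic:

    void touch(double *a, int n) {       // hypothetical example
      for (int i = 0; i < n; ++i) {
        __builtin_prefetch(&a[i + 16]);  // the address operand can now
        a[i] *= 2.0;                     // be strength-reduced like any
      }                                  // load/store address
    }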

llvm-svn: 36713
2007-05-03 23:20:33 +00:00
Devang Patel 8c78a0bff0 Drop 'const'
llvm-svn: 36662
2007-05-03 01:11:54 +00:00
Devang Patel e95c6ad802 Use 'static const char' instead of 'static const int'.
Due to a Darwin gcc bug, one version of the Darwin linker coalesces
static const int, which defeats PassID-based pass identification.

llvm-svn: 36652
2007-05-02 21:39:20 +00:00
Devang Patel 09f162ca6a Do not use typeinfo to identify pass in pass manager.
llvm-svn: 36632
2007-05-01 21:15:47 +00:00
Devang Patel 38bc86f057 Fix
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20070423/048333.html

llvm-svn: 36380
2007-04-23 22:42:03 +00:00
Owen Anderson f35a1dbc7a Remove ImmediateDominator analysis. The same information can be obtained from DomTree. A lot of code for
constructing ImmediateDominator is now folded into DomTree construction.

This is part of the ongoing work for PR217.

llvm-svn: 36063
2007-04-15 08:47:27 +00:00
Chris Lattner efd3051d60 Now that codegen prepare isn't defeating me, I can finally fix what I set
out to do! :)

This fixes a problem where LSR would insert a bunch of code into each MBB
that uses a particular subexpression (e.g. IV+base+C).  The problem is that
this code cannot be CSE'd back together if inserted into different blocks.

This patch changes LSR to attempt to insert a single copy of this code and
share it, allowing codegenprepare to duplicate the code if it can be sunk
into various addressing modes.  On CodeGen/ARM/lsr-code-insertion.ll,
for example, this gives us code like:

        add r8, r0, r5
        str r6, [r8, #+4]
..
        ble LBB1_4      @cond_next
LBB1_3: @cond_true
        str r10, [r8, #+4]
LBB1_4: @cond_next
...
LBB1_5: @cond_true55
        ldr r6, LCPI1_1
        str r6, [r8, #+4]

instead of:

        add r10, r0, r6
        str r8, [r10, #+4]
...
        ble LBB1_4      @cond_next
LBB1_3: @cond_true
        add r8, r0, r6
        str r10, [r8, #+4]
LBB1_4: @cond_next
...
LBB1_5: @cond_true55
        add r8, r0, r6
        ldr r10, LCPI1_1
        str r10, [r8, #+4]

Besides being smaller and more efficient, this makes it immediately
obvious that it is profitable to predicate LBB1_3 now :)

llvm-svn: 35972
2007-04-13 20:42:26 +00:00
Chris Lattner 780c009756 switch LSR to use isLegalAddressingMode instead of other simpler hooks
llvm-svn: 35837
2007-04-09 22:20:14 +00:00
Owen Anderson 8763ba1b88 Completely purge DomSet. This is the (hopefully) final patch for PR1171.
llvm-svn: 35731
2007-04-07 07:17:27 +00:00
Chris Lattner 81e0707552 split some code out into a helper function
llvm-svn: 35615
2007-04-03 05:11:24 +00:00
Chris Lattner f3197a7d53 allow -1 strides to reuse "1" strides.
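
For illustration (not from the commit), a hedged sketch: a -1 stride
expressed off an existing +1 IV.

    // rev is a hypothetical name; j steps by -1 while i steps by +1,
    // so LSR can rewrite b[j] as b[(n - 1) - i] and drop j's IV.
    void rev(int *a, const int *b, int n) {
      for (int i = 0, j = n - 1; i < n; ++i, --j)
        a[i] = b[j];
    }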
llvm-svn: 35607
2007-04-02 22:51:58 +00:00
Chris Lattner 28e0e4e11e Pass the type of the store access, not the type of the store, into the
target hook.  This allows us to codegen a loop as:

LBB1_1: @cond_next
        mov r2, #0
        str r2, [r0, +r3, lsl #2]
        add r3, r3, #1
        cmn r3, #1
        bne LBB1_1      @cond_next

instead of:

LBB1_1: @cond_next
        mov r2, #0
        str r2, [r0], #+4
        add r3, r3, #1
        cmn r3, #1
        bne LBB1_1      @cond_next

This looks the same, but has one fewer induction variable (and therefore,
one fewer register) live in the loop.

llvm-svn: 35592
2007-04-02 06:34:44 +00:00
Chris Lattner 8fe3cbe6bd print the type of an inserted IV in -debug mode.
llvm-svn: 35563
2007-04-01 22:21:39 +00:00