- ChangeCompareStride only reuses a stride that is larger than the current
stride. This lets the general reuse mechanism try to reuse a smaller stride.
- Watch out for multiplication overflow in ChangeCompareStride (see the
sketch after this list).
- Replace std::set with SmallPtrSet.
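As a rough illustration of the overflow concern, here is a minimal sketch with
hypothetical names (not the actual ChangeCompareStride code): when the compare
is moved from one stride to another, its constant bound has to be rescaled, and
that multiplication can wrap, in which case the rewrite must be skipped.

#include <cstdint>
#include <optional>

// Rescale a compare's constant bound when retargeting it from OldStride to
// NewStride. Returns nothing if the scaling factor is not exact or if the
// multiplication would overflow, in which case the compare is left alone.
std::optional<int64_t> rescaleBound(int64_t Bound, int64_t OldStride,
                                    int64_t NewStride) {
  if (OldStride == 0 || NewStride % OldStride != 0)
    return std::nullopt;
  int64_t Factor = NewStride / OldStride;
  int64_t Scaled;
  if (__builtin_mul_overflow(Bound, Factor, &Scaled))
    return std::nullopt;          // would wrap: not a safe rewrite
  return Scaled;
}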
llvm-svn: 43408
and the comparison is against a constant value, try to eliminate the stride
by moving the compare instruction to another stride and changing its
constant operand accordingly, e.g.:
loop:
...
v1 = v1 + 3
v2 = v2 + 1
if (v2 < 10) goto loop
=>
loop:
...
v1 = v1 + 3
if (v1 < 30) goto loop
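A C-level analogue of the transformation, as a hypothetical illustration only:
since v1 advances exactly three times as fast as v2, v1 == 3 * v2 holds on
every iteration, so the bound 10 on v2 becomes 30 on v1 and v2 can be dropped.

// Both loops run ten iterations; the second needs only the stride-3 IV.
void before(int *a) {
  for (int v1 = 0, v2 = 0; v2 < 10; v1 += 3, v2 += 1)
    a[v1] = 0;
}

void after(int *a) {
  for (int v1 = 0; v1 < 30; v1 += 3)
    a[v1] = 0;
}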
llvm-svn: 43336
- Avoid attempting stride-reuse in the case that there are users that
aren't addresses. In that case, there will be places where the
multiplications won't be folded away, so it's better to try to
strength-reduce them.
- Several SSE intrinsics have operands that strength-reduction can
treat as addresses. The previous item makes this more visible, as
any non-address use of an IV can inhibit stride-reuse.
- Make ValidStride aware of whether there's likely to be a base
register in the address computation. This prevents it from thinking
that things like stride 9 are valid on x86 when the base register is
already occupied.
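A minimal sketch of the idea behind that check, with hypothetical names rather
than the actual ValidStride code: x86 addresses encode base + index*scale +
offset with a scale of 1, 2, 4, or 8, so multipliers like 3, 5, or 9 are only
cheap when the base-register slot is still free for a reg + reg*{2,4,8} LEA.

// Decide whether a stride multiple can be folded into an x86 address.
bool isLegalScale(long Scale, bool HasBaseReg) {
  switch (Scale) {
  case 1: case 2: case 4: case 8:
    return true;            // directly encodable as an index scale
  case 3: case 5: case 9:
    return !HasBaseReg;     // needs the base slot: reg + reg*{2,4,8}
  default:
    return false;
  }
}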
Also, XFAIL the 2007-08-10-LEA16Use32.ll test; the new logic to avoid
stride-reuse eliminates the LEA in the loop, so the test is no longer
testing what it was intended to test.
llvm-svn: 43231
deleteValueFromRecords and loosen the types to allow it to accept
Value* instead of just Instruction*, since this is what
ScalarEvolution uses internally anyway. This allows more flexibility
for future uses.
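A self-contained sketch of the interface change (Value and Instruction here
stand in for the LLVM classes; Instruction derives from Value):

struct Value {};
struct Instruction : Value {};

struct ScalarEvolution {
  // Before: void deleteValueFromRecords(Instruction *I);
  // After: any Value can have its cached analysis results dropped, which is
  // what the internal maps are keyed on anyway.
  void deleteValueFromRecords(Value *V) { (void)V; /* erase cached info */ }
};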
llvm-svn: 37657
This created an ambiguity for expandInTy to decide when to use
sign-extension or zero-extension, but it turns out that most of its callers
don't actually need a type conversion, now that LLVM types don't have
explicit signedness. Drop expandInTy in favor of plain expand, and change
the few places that actually need a type conversion to do it themselves.
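A hedged sketch of the pattern callers follow after this change (the names are
illustrative, not the real expander API): expand the expression in its natural
type, then widen explicitly, deciding between sign- and zero-extension at the
call site since the IR type no longer says which one is meant.

#include <cstdint>

// Widen a 32-bit value to 64 bits; the caller, not the type system, chooses
// the signed versus unsigned interpretation.
int64_t widenTo64(int32_t V, bool IsSigned) {
  return IsSigned ? static_cast<int64_t>(V)                          // sext
                  : static_cast<int64_t>(static_cast<uint32_t>(V));  // zext
}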
llvm-svn: 37591
out to do! :)
This fixes a problem where LSR would insert a bunch of code into each MBB
that uses a particular subexpression (e.g. IV+base+C). The problem is that
this code cannot be CSE'd back together if inserted into different blocks.
This patch changes LSR to attempt to insert a single copy of this code and
share it, allowing codegenprepare to duplicate the code if it can be sunk
into various addressing modes. On CodeGen/ARM/lsr-code-insertion.ll,
for example, this gives us code like:
add r8, r0, r5
str r6, [r8, #+4]
..
ble LBB1_4 @cond_next
LBB1_3: @cond_true
str r10, [r8, #+4]
LBB1_4: @cond_next
...
LBB1_5: @cond_true55
ldr r6, LCPI1_1
str r6, [r8, #+4]
instead of:
add r10, r0, r6
str r8, [r10, #+4]
...
ble LBB1_4 @cond_next
LBB1_3: @cond_true
add r8, r0, r6
str r10, [r8, #+4]
LBB1_4: @cond_next
...
LBB1_5: @cond_true55
add r8, r0, r6
ldr r10, LCPI1_1
str r10, [r8, #+4]
Besides being smaller and more efficient, this makes it immediately
obvious that it is profitable to predicate LBB1_3 now :)
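A C-level analogue of the pattern in the ARM example, as a hypothetical
illustration rather than the actual test case: one address expression,
base + IV + constant, is used by stores in several blocks of the loop body,
and materializing it once lets all of them share a single register (r8 above).

void f(int *a, int n, int x, int y, int z) {
  for (int i = 0; i < n; i += 4) {
    int *p = &a[i + 1];   // the shared IV+base+C subexpression, emitted once
    *p = x;
    if (x < y)
      *p = y;             // reuses p instead of recomputing a + i + 1
    if (x < z)
      *p = z;
  }
}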
llvm-svn: 35972