This folds a binary operator whose operand is a phi of constants into the
phi itself. It takes something like this:
%A = phi int [ 3, %cond_false.0 ], [ 2, %endif.0.i ], [ 2, %endif.1.i ]
%B = div int %A, 4
and turns it into:
%A = phi int [ 3/4, %cond_false.0 ], [ 2/4, %endif.0.i ], [ 2/4, %endif.1.i ]
which is later simplified (in this case) into %A = 0.
This triggers thousands of times in spec, for example, 269 times in 176.gcc.
This is tested by InstCombine/add.ll:test23 and set.ll:test18.
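For illustration only, here is a tiny standalone C++ model of the fold (toy
types, not the actual InstCombine code): when every incoming value of the phi
is a constant, apply the operator to each incoming constant and build a phi
of the folded results.

    #include <cstdio>
    #include <vector>

    // Toy model: a phi node holding one constant per predecessor.
    struct Phi { std::vector<int> incoming; };

    // Fold '%B = div %A, divisor' when %A is a phi of constants: divide
    // each incoming constant at compile time and return the resulting phi.
    Phi foldDivIntoPhi(const Phi &p, int divisor) {
        Phi folded;
        for (int c : p.incoming)
            folded.incoming.push_back(c / divisor);
        return folded;
    }

    int main() {
        Phi a{{3, 2, 2}};              // %A = phi int [3,...],[2,...],[2,...]
        Phi b = foldDivIntoPhi(a, 4);  // %B = div int %A, 4
        for (int c : b.incoming)
            std::printf("%d ", c);     // prints "0 0 0": every arm folds to 0
    }

Since every folded arm is the same constant, the phi then simplifies to 0, as
noted above.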
llvm-svn: 16582
Copy constant-pool entries' addresses into registers before loading out of them,
to avoid errors from the assembler.
Handle loading call args past the 6th one off the stack.
Add IMPLICIT_DEF pseudo-instrs for double and long arguments passed in register
pairs.
Use FpMOVD to copy doubles around instead of the horrible store-load thing we
were doing before.
Handle 'ret double' and 'ret long'.
Fix a bug in handling 'and/or/xor long'.
llvm-svn: 16577
Instcombine (setcc (truncate X), C1).
This occurs THOUSANDS of times in many benchmarks. Particularly common
are things like (seteq (cast bool X to int), int 0).
This turns it into (seteq bool %X, false), which then becomes (not %X).
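The equivalence is easy to sanity-check in C++ (function names are mine, not
from the patch):

    #include <cassert>

    // (seteq (cast bool X to int), int 0) ...
    bool before(bool x) { return (int)x == 0; }
    // ... is the same as (not %X).
    bool after(bool x) { return !x; }

    int main() {
        assert(before(false) == after(false));
        assert(before(true) == after(true));
    }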
llvm-svn: 16567
integers that we can use as immediate values in instructions.
Example from yacr2:
- lis r10, -1
- ori r10, r10, 65535
- add r28, r28, r10
+ addi r28, r28, -1
addi r7, r7, 1
addi r9, r9, 1
b .LBB_main_9 ; loopentry.1.i214
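The win here is that PPC's addi takes a signed 16-bit immediate, so -1 fits
directly and needs no lis/ori pair to materialize 0xFFFFFFFF first. A sketch
of the fits-in-immediate test (fitsInSImm16 is a hypothetical name, not the
patch's):

    #include <cassert>
    #include <cstdint>

    // Signed 16-bit immediates cover [-32768, 32767].
    bool fitsInSImm16(int64_t v) { return v >= -32768 && v <= 32767; }

    int main() {
        assert(fitsInSImm16(-1));      // addi r28, r28, -1 is legal
        assert(!fitsInSImm16(65535));  // 65535 needs lis/ori to materialize
    }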
llvm-svn: 16566
This is important for several reasons:
1. Benchmarks have lots of code that looks like this (perlbmk in particular):
%tmp.2.i = setne int %tmp.0.i, 128 ; <bool> [#uses=1]
%tmp.6343 = seteq int %tmp.0.i, 1 ; <bool> [#uses=1]
%tmp.63 = and bool %tmp.2.i, %tmp.6343 ; <bool> [#uses=1]
we now fold away the setne, a clear improvement.
2. In the more important cases, such as (X >= 10) & (X < 20), we now produce
smaller code: (X-10) < 10, using an unsigned comparison (see the sketch below).
3. Perhaps the nicest effect of this patch is that it really helps out the
code generators. In particular, for a 'range test' like the above,
instead of generating this on X86 (the difference on PPC is even more
pronounced):
cmp %EAX, 50
setge %CL
cmp %EAX, 100
setl %AL
and %CL, %AL
cmp %CL, 0
we now generate this:
add %EAX, -50
cmp %EAX, 50
Furthermore, this causes setcc's to be folded into branches more often.
These combinations trigger dozens of times in the spec benchmarks, particularly
in 176.gcc, 186.crafty, 253.perlbmk, 254.gap, & 099.go.
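The range-test rewrite in point 2 can be sanity-checked in C++ (names are
mine; the unsigned wraparound is what lets one compare replace two):

    #include <cassert>

    // (X >= 10) & (X < 20), before the fold.
    bool rangeTest(int x) { return x >= 10 && x < 20; }
    // Shift the range down to start at 0; values below 10 wrap to huge
    // unsigned numbers, so a single unsigned compare covers both bounds.
    bool folded(int x) { return (unsigned)(x - 10) < 10u; }

    int main() {
        for (int x = -1000; x <= 1000; ++x)
            assert(rangeTest(x) == folded(x));
    }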
llvm-svn: 16559
Implement (setcc (shl X, C1), C2) and (setcc (div X, C1), C2) folding.
The second occurs several dozen times in spec; the first was added
just in case. :)
These are tested by shift.ll:test2[12] and div.ll:test5.
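For the shl case, one concrete instance of why the fold is sound (constants
are mine; the patch's exact rewrite rules may differ): bits shifted out the
top can never affect an equality test against a constant, so only the
surviving bits need to be compared.

    #include <cassert>
    #include <cstdint>

    bool before(uint32_t x) { return (x << 3) == 48u; }
    // Only the low 29 bits of x survive the shift; compare them to 48 >> 3.
    bool after(uint32_t x) { return (x & 0x1FFFFFFFu) == 6u; }

    int main() {
        for (uint32_t x = 0; x < (1u << 20); ++x)
            assert(before(x) == after(x));
        assert(before(6u + (1u << 29)));  // high bits shift out; still == 48
    }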
llvm-svn: 16549
This latent bug was exposed by recent changes, and is tested as:
llvm/test/Regression/Transforms/InstCombine/2004-09-28-BadShiftAndSetCC.llx
llvm-svn: 16546
always guaranteed to exist. This fixes PR444. Thanks to Alkis
for reporting the bug and testing the patch.
AddRecord used to return a big list, but that return value was never
used. So now it doesn't return anything.
Create the WebDir if it does not exist.
Fix a typo in a comment.
llvm-svn: 16541