2. Added an argument to the instruction scheduler creators so the creators
can do special things.
3. Repaired target hazard code.
4. Misc.
More to follow.
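As a rough illustration of what the extra creator argument enables (the types
and names below are hypothetical, not the actual LLVM signatures), creators
become parameterized factories instead of nullary ones:

#include <memory>

// Hypothetical sketch: scheduler creators now receive an extra argument
// so each creator can specialize the scheduler it builds.
struct SchedContext {
  bool OptimizeForSize;   // example of per-invocation information
};

struct Scheduler {
  virtual ~Scheduler() = default;
  virtual void run() = 0;
};

// Creators take the context instead of being nullary factories.
using SchedCreatorFn = std::unique_ptr<Scheduler> (*)(const SchedContext &);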
llvm-svn: 29450
scheduler can go into a "vertical mode" (i.e. traversing up the two-address
chain, etc.) when the register pressure is low.
This does seem to reduce the number of spills in the cases I've looked at, but
on x86 there is no guarantee that the performance of the code improves.
It can be turned on with the -sched-vertically option.
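Roughly, the gating looks like this (a minimal sketch; the threshold and
helper names are illustrative, not the real implementation):

// Sketch: only walk "vertically" (up a two-address chain) while the
// estimated register pressure stays low; otherwise fall back to the
// normal horizontal priority order.
struct SUnit;   // a scheduling unit in the DAG

bool shouldGoVertical(unsigned CurPressure, unsigned Threshold,
                      const SUnit *TwoAddrPred) {
  // Vertical mode only pays off when there are registers to spare and
  // there actually is a two-address chain to follow.
  return TwoAddrPred != nullptr && CurPressure < Threshold;
}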
llvm-svn: 28108
up the schedule. This helps code that looks like this:
loads ...
computations (first set) ...
stores (first set) ...
loads
computations (second set) ...
stores (second set) ...
Without this change, the stores and computations are more likely to
interleave:
loads ...
loads ...
computations (first set) ...
computations (second set) ...
computations (first set) ...
stores (first set) ...
computations (second set) ...
stores (second set) ...
This can increase the number of spills if we are unlucky.
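The idea, as a hedged sketch (fields and the comparison are illustrative, not
the actual implementation): once a computation's result is ready, prefer the
store that consumes it over starting an unrelated computation, so each
loads/compute/store set drains before the next begins:

struct Node {
  bool IsStore;
  bool OperandsReady;   // all inputs already computed
};

// Returns true if A should be scheduled before B.
bool preferNode(const Node &A, const Node &B) {
  // A ready store ends a value's live range; pick it first so fewer
  // values from different sets are live at once.
  bool AReadyStore = A.IsStore && A.OperandsReady;
  bool BReadyStore = B.IsStore && B.OperandsReady;
  return AReadyStore && !BReadyStore;
}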
llvm-svn: 28033
operands have all issued, but whose results are not yet available. This
allows us to compile:
int G;
int test(int A, int B, int* P) {
  return (G+A)*(B+1);
}
to:
_test:
        lis r2, ha16(L_G$non_lazy_ptr)
        addi r4, r4, 1
        lwz r2, lo16(L_G$non_lazy_ptr)(r2)
        lwz r2, 0(r2)
        add r2, r2, r3
        mullw r3, r2, r4
        blr
instead of this, which has a stall between the lis/lwz:
_test:
        lis r2, ha16(L_G$non_lazy_ptr)
        lwz r2, lo16(L_G$non_lazy_ptr)(r2)
        addi r4, r4, 1
        lwz r2, 0(r2)
        add r2, r2, r3
        mullw r3, r2, r4
        blr
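A minimal sketch of the pending-queue mechanism (container and names are
illustrative): nodes wait in a queue ordered by the cycle their result
becomes available, and each cycle the newly ready ones move to the
available set:

#include <queue>
#include <vector>

struct PendingNode {
  unsigned ReadyCycle;   // cycle at which the result is available
  unsigned NodeId;
};

struct LaterReady {
  bool operator()(const PendingNode &A, const PendingNode &B) const {
    return A.ReadyCycle > B.ReadyCycle;   // min-heap on ReadyCycle
  }
};

using PendingQueue =
    std::priority_queue<PendingNode, std::vector<PendingNode>, LaterReady>;

// Move everything whose latency has elapsed into the available set.
void releaseReady(PendingQueue &Pending, std::vector<unsigned> &Available,
                  unsigned CurCycle) {
  while (!Pending.empty() && Pending.top().ReadyCycle <= CurCycle) {
    Available.push_back(Pending.top().NodeId);
    Pending.pop();
  }
}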
llvm-svn: 26716
keep track of a sense of "mobility", i.e. how many other nodes scheduling one
node will free up. For something like this:
float testadd(float *X, float *Y, float *Z, float *W, float *V) {
  return (*X+*Y)*(*Z+*W)+*V;
}
For example, this makes us schedule *X then *Y, not *X then *Z. The former
allows us to issue the add; the latter only lets us issue other loads.
This turns the above code from this:
_testadd:
        lfs f0, 0(r3)
        lfs f1, 0(r6)
        lfs f2, 0(r4)
        lfs f3, 0(r5)
        fadds f0, f0, f2
        fadds f1, f3, f1
        lfs f2, 0(r7)
        fmadds f1, f0, f1, f2
        blr
into this:
_testadd:
        lfs f0, 0(r6)
        lfs f1, 0(r5)
        fadds f0, f1, f0
        lfs f1, 0(r4)
        lfs f2, 0(r3)
        fadds f1, f2, f1
        lfs f2, 0(r7)
        fmadds f1, f1, f0, f2
        blr
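A hedged sketch of the mobility computation (fields are illustrative): count
how many users of a node would become ready if it were scheduled now; *X and
*Y together unblock an fadds, so they beat an unrelated load:

#include <vector>

struct SchedNode {
  std::vector<SchedNode *> Users;     // successors in the DAG
  unsigned NumUnscheduledPreds = 0;   // operands not yet scheduled
};

// How many users become ready (all other operands scheduled) if we
// schedule N now. Higher mobility frees up more work.
unsigned mobility(const SchedNode &N) {
  unsigned Freed = 0;
  for (const SchedNode *U : N.Users)
    if (U->NumUnscheduledPreds == 1)  // N is the last operand U waits on
      ++Freed;
  return Freed;
}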
llvm-svn: 26680
Only enable this with -use-sched-latencies; I'll enable it by default after a
clean nightly tester run tonight.
PPC is the only target that provides latency info currently.
llvm-svn: 26634
class, sever its implementation from the interface. Now we can provide new
implementations of the same interface (priority computation) without touching
the scheduler itself.
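In outline (names are illustrative of the split, not necessarily the exact
interface), the scheduler now depends only on an abstract priority queue:

struct SUnit;

// The interface the scheduler sees; priority computation lives behind it.
struct SchedulingPriorityQueue {
  virtual ~SchedulingPriorityQueue() = default;
  virtual void push(SUnit *SU) = 0;
  virtual SUnit *pop() = 0;
  virtual bool empty() const = 0;
};

// A scheduler written against the interface works with any priority
// implementation plugged in behind it.
void scheduleAll(SchedulingPriorityQueue &Q) {
  while (!Q.empty())
    (void)Q.pop();   // issue the highest-priority node next
}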
llvm-svn: 26630
function of the top-down scheduler are completely bogus currently, and
having (future) PPC-specific code in this file is also wrong, but this is a
small incremental step.
llvm-svn: 26552
a predecessor appearing more than once in the operand list was counted as
multiple predecessors; priority1 should be updated during scheduling;
CycleBound was updated after the node was inserted into the priority queue;
one of the tie-breaking conditions was flipped.
- Take two-address opcodes into consideration. If a predecessor is a def&use
operand, it should have a higher priority (see the sketch at the end of this
entry).
- The scheduler should also favor floaters, i.e. nodes that do not have real
predecessors, such as MOV32ri.
- The scheduling fixes / tweaks fixed bug 478:
        .text
        .align 4
        .globl _f
_f:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        imull %ecx, %edx
        imull %eax, %eax
        imull %ecx, %ecx
        addl %eax, %ecx
        leal (%ecx,%edx,2), %eax
        ret
It is also a slight performance win (1% - 3%) for most tests.
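A hedged sketch combining the tweaks above (weights and fields are
illustrative, not the actual implementation): a def&use (two-address)
predecessor gets a priority bump, and floaters can be deferred since nothing
depends on them being scheduled early:

struct QNode {
  unsigned Priority;      // base priority (e.g. critical-path height)
  bool IsTwoAddrDefUse;   // predecessor both defines and uses a register
  bool IsFloater;         // no real predecessors (e.g. MOV32ri)
};

unsigned effectivePriority(const QNode &N) {
  unsigned P = N.Priority;
  if (N.IsTwoAddrDefUse)
    P += 2;                 // favor freeing the tied register early
  if (N.IsFloater && P > 0)
    P -= 1;                 // floaters can wait for a free issue slot
  return P;
}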
llvm-svn: 26470