to DenseMap<SDNode*, SUnit*>, and adjust the way cloned SUnit nodes are
handled so that only the original node needs to be in the map.
This speeds up llc on 447.dealII.llvm.bc by about 2%.
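Roughly, the intended access pattern looks like this (the types below are
illustrative stand-ins, not the actual ScheduleDAG classes): cloned units
keep a pointer back to their original, so only the original SUnit ever
gets a map entry.

#include "llvm/ADT/DenseMap.h"

namespace sketch {
struct SDNode {};               // stand-in for the real llvm::SDNode
struct SUnit {
  SUnit *Orig = nullptr;        // set only on cloned units
};

llvm::DenseMap<SDNode *, SUnit *> SUnitMap;

// Clones are never inserted into the map; resolve them through their
// original unit instead of keeping a vector of every copy per node.
SUnit *lookupSUnit(SDNode *N, SUnit *SU) {
  if (SU && SU->Orig)
    return SU->Orig;
  return SUnitMap.lookup(N);
}
} // namespace sketch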
llvm-svn: 52576
ScheduleDAG; they don't correspond to any actual instructions so they
don't need to be scheduled.
This fixes a bug where the EntryToken was being scheduled multiple
times in some cases, though it ended up not causing any trouble because
EntryToken doesn't expand into anything. With this fixed the schedulers
reliably schedule the expected number of units, so we can check this
with an assertion.
This requires a tweak to test/CodeGen/X86/loop-hoist.ll because it
ends up being scheduled differently in a trivial way, though the
difference was enough to fool the prcontext+grep check that the test does.
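As a minimal sketch of the check this enables (simplified, illustrative
types rather than the real ScheduleDAG code): entry tokens are excluded
when units are built, so the unit count can be asserted afterwards.

#include <cassert>
#include <cstddef>
#include <vector>

struct NodeSketch { bool IsEntryToken = false; };

size_t numSchedulableUnits(const std::vector<NodeSketch> &Nodes) {
  size_t N = 0;
  for (const NodeSketch &Nd : Nodes)
    if (!Nd.IsEntryToken)   // tokens expand to nothing, so never schedule them
      ++N;
  return N;
}

void verifySchedule(const std::vector<NodeSketch> &Nodes, size_t NumScheduled) {
  // With EntryToken filtered out, each remaining unit is scheduled exactly once.
  assert(NumScheduled == numSchedulableUnits(Nodes) &&
         "scheduled an unexpected number of units");
}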
llvm-svn: 49701
that "machine" classes are used to represent the current state of
the code being compiled. Given this expanded name, we can start
moving other stuff into it. For now, move the UsedPhysRegs and
LiveIns/LiveOuts vectors from MachineFunction into it.
Update all the clients to match.
This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.
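Roughly what the move amounts to, with simplified illustrative types
(not the real class layout): the per-function register state hangs off
the register-info object, and MachineFunction just hands out a reference
to it.

#include <utility>
#include <vector>

struct RegInfoSketch {
  std::vector<bool> UsedPhysRegs;                      // moved out of MachineFunction
  std::vector<std::pair<unsigned, unsigned>> LiveIns;  // (physreg, virtreg) pairs
  std::vector<unsigned> LiveOuts;
};

struct MachineFunctionSketch {
  RegInfoSketch RegInfo;
  RegInfoSketch &getRegInfo() { return RegInfo; }      // clients go through this
};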
llvm-svn: 45467
This reduces selectiondag time on kc++ from 5.43s to 4.98s (9%). More
significantly, this speeds up the default ppc scheduler from ~1571ms to 1063ms,
a 33% speedup.
llvm-svn: 29743
2. Added argument to instruction scheduler creators so the creators can do
special things.
3. Repaired target hazard code.
4. Misc.
More to follow.
llvm-svn: 29450
scheduler can go into a "vertical mode" (i.e. traversing up the two-address
chain, etc.) when the register pressure is low.
This does seem to reduce the number of spills in the cases I've looked at. But
with x86, there's no guarantee the performance of the code improves.
It can be turned on with the -sched-vertically option.
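Hand-waving a bit, the heuristic amounts to something like the sketch
below (the names and the pressure threshold are illustrative, not the
real implementation): while pressure is low, keep following the last
node's two-address chain instead of picking an unrelated ready node.

struct SchedNodeSketch {
  SchedNodeSketch *TwoAddrChainPred = nullptr; // value tied to an operand register
};

SchedNodeSketch *pickNext(SchedNodeSketch *Last, SchedNodeSketch *BestReady,
                          unsigned CurPressure, unsigned PressureLimit) {
  // Vertical mode: walk the two-address chain while pressure stays low.
  if (CurPressure < PressureLimit && Last && Last->TwoAddrChainPred)
    return Last->TwoAddrChainPred;
  return BestReady; // otherwise fall back to the normal priority pick
}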
llvm-svn: 28108
up the schedule. This helps code that looks like this:
loads ...
computations (first set) ...
stores (first set) ...
loads
computations (second set) ...
stores (second set) ...
Without this change, the stores and computations are more likely to
interleave:
loads ...
loads ...
computations (first set) ...
computations (second set) ...
computations (first set) ...
stores (first set) ...
computations (second set) ...
stores (second set) ...
This can increase the number of spills if we are unlucky.
llvm-svn: 28033
operands have all issued, but whose results are not yet available. This
allows us to compile:
int G;
int test(int A, int B, int* P) {
  return (G+A)*(B+1);
}
to:
_test:
lis r2, ha16(L_G$non_lazy_ptr)
addi r4, r4, 1
lwz r2, lo16(L_G$non_lazy_ptr)(r2)
lwz r2, 0(r2)
add r2, r2, r3
mullw r3, r2, r4
blr
instead of this, which has a stall between the lis/lwz:
_test:
lis r2, ha16(L_G$non_lazy_ptr)
lwz r2, lo16(L_G$non_lazy_ptr)(r2)
addi r4, r4, 1
lwz r2, 0(r2)
add r2, r2, r3
mullw r3, r2, r4
blr
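The bookkeeping is roughly this (illustrative types, not the actual list
scheduler code): units whose operands have issued but whose results
aren't available yet wait in a pending list until their ready cycle, so
independent work like the addi can fill the load latency.

#include <vector>

struct UnitSketch { unsigned ReadyCycle = 0; };

// Move units whose latency has elapsed from Pending to Available.
void releaseReady(unsigned CurCycle, std::vector<UnitSketch *> &Pending,
                  std::vector<UnitSketch *> &Available) {
  for (auto It = Pending.begin(); It != Pending.end();) {
    if ((*It)->ReadyCycle <= CurCycle) {
      Available.push_back(*It);
      It = Pending.erase(It);
    } else {
      ++It;
    }
  }
}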
llvm-svn: 26716
keep track of a sense of "mobility", i.e. how many other nodes will be freed
up by scheduling a given node. For something like this:
float testadd(float *X, float *Y, float *Z, float *W, float *V) {
  return (*X+*Y)*(*Z+*W)+*V;
}
For example, this makes us schedule *X then *Y, not *X then *Z. The former
allows us to issue the add; the latter only lets us issue other loads.
This turns the above code from this:
_testadd:
lfs f0, 0(r3)
lfs f1, 0(r6)
lfs f2, 0(r4)
lfs f3, 0(r5)
fadds f0, f0, f2
fadds f1, f3, f1
lfs f2, 0(r7)
fmadds f1, f0, f1, f2
blr
into this:
_testadd:
lfs f0, 0(r6)
lfs f1, 0(r5)
fadds f0, f1, f0
lfs f1, 0(r4)
lfs f2, 0(r3)
fadds f1, f2, f1
lfs f2, 0(r7)
fmadds f1, f1, f0, f2
blr
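A toy sketch of the mobility bookkeeping (again illustrative, not the
real scheduler code): a node's mobility counts how many of its users
would become ready once it is scheduled, and ready nodes with higher
mobility are preferred.

#include <vector>

struct SUSketch {
  std::vector<SUSketch *> Users; // nodes consuming this node's result
  unsigned NumPredsLeft = 0;     // unscheduled operands still outstanding
};

unsigned mobility(const SUSketch &N) {
  unsigned Freed = 0;
  for (const SUSketch *U : N.Users)
    if (U->NumPredsLeft == 1)    // scheduling N would make U ready
      ++Freed;
  return Freed;
}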
llvm-svn: 26680