with a general target hook to identify rematerializable instructions. Some
instructions are only rematerializable with specific operands, such as loads
from constant pools, while others are always rematerializable. This hook
allows both kinds to be identified as rematerializable through the same
mechanism.
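A minimal sketch of the idea behind such a hook, using made-up opcode and type
names rather than the actual LLVM interface:
// Sketch only: the target answers, per instruction, whether its value can
// safely be recomputed instead of being spilled and reloaded.
enum Opcode { MOV_IMMEDIATE, LOAD, OTHER };
struct Instr {
  Opcode Op;
  bool LoadsFromConstantPool;   // operand property, computed elsewhere
};
bool isTriviallyReMaterializable(const Instr &MI) {
  switch (MI.Op) {
  case MOV_IMMEDIATE:
    return true;                       // always rematerializable
  case LOAD:
    return MI.LoadsFromConstantPool;   // only when the memory is invariant
  default:
    return false;
  }
}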
llvm-svn: 37644
simultaneously. Move that pass to SimpleRegisterCoalescing.
This makes it easier to implement alternative register allocation and
coalescing strategies while still reusing the existing live interval
analysis.
llvm-svn: 37520
- A register def / use now implicitly affects sub-register liveness but does
not affect the liveness information of super-registers.
- A def of a larger register that is followed by a later use is treated as a
read/mod/write of a smaller register (illustrated below).
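For illustration (x86 register nesting, used only as an example):
%AX = ...          ; implicitly starts a live range for the sub-register %AL,
                   ; but does not touch the liveness of the super-register %EAX
%EAX = ...         ; a def of the wider %EAX that has a later use is treated
... = use %EAX     ; as a read/mod/write of the smaller %AX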
llvm-svn: 36434
long live interval that has low usage density.
1. Change order of coalescing to join physical registers with virtual
registers first before virtual register intervals become too long.
2. Check size and usage density to determine if it's worthwhile to join.
3. If joining is aborted, set the virtual register live interval's allocation
preference field to the physical register.
4. The register allocator should try the preferred register first (if
available) to create identity moves that can then be eliminated (see the
sketch after this list).
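A rough, self-contained sketch of what points 2 through 4 amount to; every
name and threshold here is invented for the illustration:
struct Interval {
  unsigned Length;       // length of the live interval
  unsigned NumUses;      // number of uses inside it
  unsigned Preference;   // preferred physical register, 0 = none
};
// Sketch only: is tying up the physical register for this interval worth it?
bool worthJoining(const Interval &I) {
  return I.Length < 1000 || I.NumUses * 100 >= I.Length;   // small or dense
}
void coalesceWithPhysReg(Interval &VirtInt, unsigned PhysReg) {
  if (worthJoining(VirtInt)) {
    // ... perform the join with PhysReg (steps 1 and 2) ...
    return;
  }
  VirtInt.Preference = PhysReg;   // step 3: remember the physreg instead
  // Step 4: the allocator tries Preference first; on success the original
  // copy becomes an identity move and can be deleted.
}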
llvm-svn: 36218
of dead def live interval at 1 to avoid multiple defs targeting the same
register. The previous patch missed a case where the source operand is live-in.
In that case, remove the whole interval.
llvm-svn: 35512
to be really bad. Once they are joined they are not broken apart. Also, physical
intervals cannot be spilled!
Added a heuristic as a workaround for this: be careful when coalescing with a
physical register if the virtual register's uses are "far" away. Check whether
the uses are in the same loop as the source (the copy instruction), whether
they are in the loop preheader, etc.
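A self-contained sketch of that kind of distance check; the data structures
and names are invented for the illustration:
#include <vector>
struct Loop { int Id; };
struct UseSite {
  const Loop *EnclosingLoop;   // innermost loop containing this use, or nullptr
  bool InPreheaderOfCopyLoop;  // use sits in the preheader of the copy's loop
};
// Only coalesce with the physical register if every use is "near" the copy:
// in the same loop as the copy instruction, or in that loop's preheader.
bool usesAreNear(const std::vector<UseSite> &Uses, const Loop *CopyLoop) {
  for (const UseSite &U : Uses)
    if (U.EnclosingLoop != CopyLoop && !U.InPreheaderOfCopyLoop)
      return false;            // a "far" use -- better not to coalesce
  return true;
}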
llvm-svn: 35134
entry (0x8b056f0, LLVM BB @0x8b01b30, ID#0):
Live Ins: %r0 %r1 %r2 %r3
%reg1032 = tMOVrr %r3<kill>
%reg1033 = tMOVri8 1
%reg1034 = tMOVri8 0
tCMPi8 %reg1029<kill>, 0
tBcc mbb<entry,0x8b06a10>, 0
Successors according to CFG: 0x8b06980 0x8b06a10
entry (0x8b06980, LLVM BB @0x8b01b30, ID#12):
Predecessors according to CFG: 0x8b056f0
%reg1036 = tMOVrr %reg1034<kill>
Successors according to CFG: 0x8b06a10
entry (0x8b06a10, LLVM BB @0x8b01b30, ID#13):
Predecessors according to CFG: 0x8b056f0 0x8b06980
%reg1024<dead> = tMOVrr %reg1030<kill>
...
reg1030 and r1 have already been joined. When reg1024 and reg1030 are joined,
the r1 live range from the function entry to the tMOVrr instruction becomes
dead. Eliminate r1 from the live-in set of the entry BB, not the BB where the
copy is.
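A self-contained sketch of the fix's shape; the container and names are
invented for the illustration:
#include <algorithm>
#include <vector>
struct Block {
  std::vector<unsigned> LiveIns;   // physical registers live into this block
};
// Sketch only: once r1's value is dead from the function entry down to the
// coalesced copy, drop it from the *entry* block's live-in set rather than
// from the block that happens to contain the copy.
void removeDeadLiveIn(Block &EntryBB, unsigned PhysReg) {
  auto &L = EntryBB.LiveIns;
  L.erase(std::remove(L.begin(), L.end(), PhysReg), L.end());
}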
llvm-svn: 34866
- When coalescing a copy MI, if its destination is "dead", propagate the
property to the source MI's destination if there are no intervening uses
(see the sketch below).
- Detect dead function live-ins and remove them.
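A minimal sketch of the propagation, with invented types (a model of the idea,
not the actual coalescer code):
struct OperandState { bool IsDead; };
// If the copy's destination is already dead and nothing reads the source
// register between its def and the copy, the source def is dead as well.
void propagateDeadness(const OperandState &CopyDst, OperandState &SrcDef,
                       bool HasInterveningUses) {
  if (CopyDst.IsDead && !HasInterveningUses)
    SrcDef.IsDead = true;
}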
llvm-svn: 34383
by 40%, FreeBench/fourinarow by 20%, and many other programs by 10-25%.
On PPC, this speeds up fourinarow by 18%, and probably other things as well.
llvm-svn: 31504
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fallout by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.
llvm-svn: 31380
actually *removes* one of the operands, instead of just assigning both operands
the same register. The old approach made reasoning about instructions
unnecessarily complex, because you needed to know whether you were before or
after register allocation to match up operand numbers with the target
description file.
Changing this also gets rid of a bunch of hacky code in various places.
This patch also includes changes to fold loads into cmp/test instructions in
the X86 backend, along with a significant simplification to the X86 spill
folding code.
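For the cmp/test folding, the effect is roughly the following (the operands
are made up; the opcode names follow the X86 backend's reg/mem naming):
%reg1024 = MOV32rm <mem>
CMP32rr %reg1024, %reg1025
becomes, when the load has no other users,
CMP32mr <mem>, %reg1025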
llvm-svn: 30108
number of copies, potentially defining live ranges that appear to have
differing value numbers that become identical when coalesced. Among other
things, this fixes CodeGen/X86/shift-coalesce.ll and PR687.
llvm-svn: 29968
paves the way for future changes, increases coalescing opportunities (in
theory, not witnessed in practice), and eliminates the really expensive
LiveIntervals::overlapsAliases method.
llvm-svn: 29890
instructions which define each value#) to simplify and improve the coalescer.
In particular, this patch:
1. Implements iterative coalescing.
2. Reverts an unsafe hack from handlePhysRegDef, superseding it with a
better solution.
3. Implements PR865, "coalescing" away the second copy in code like:
A = B
...
B = A
This also includes changes to symbolically print registers in intervals
when possible.
llvm-svn: 29862
But this is incorrect if the spilled value's live range extends beyond the
current BB.
It is currently controlled by a temporary option -spiller-check-liveout.
llvm-svn: 28024
For example, we can now join things like [0-30:0)[31-40:1)[52-59:2)
with [40-60:0) if the 52-59 range is defined by a copy from the 40-60 range.
The resultant range ends up being [0-30:0)[31-60:1).
This fires a lot throughout the test suite (e.g. shrinking bc from
19492 -> 18509 machineinstrs) though most gains are smaller (e.g. about
50 copies eliminated from crafty).
llvm-svn: 23866
only add a reload live range once for the instruction. This is one step
towards fixing a regalloc pessimization that Nate noticed, but it is later
undone by the spiller (so no code is changed).
llvm-svn: 23293
numbering values in live ranges for physical registers.
The Alpha backend currently generates code that looks like this:
vreg = preg
...
preg = vreg
use preg
...
preg = vreg
use preg
etc. Because vreg contains the value of preg coming in, each of the
copies back into preg contains that initial value as well.
In the case of the Alpha, this allows this testcase:
void "foo"(int %blah) {
store int 5, int *%MyVar
store int 12, int* %MyVar2
ret void
}
to compile to:
foo:
ldgp $29, 0($27)
ldiq $0,5
stl $0,MyVar
ldiq $0,12
stl $0,MyVar2
ret $31,($26),1
instead of:
foo:
ldgp $29, 0($27)
bis $29,$29,$0
ldiq $1,5
bis $0,$0,$29
stl $1,MyVar
ldiq $1,12
bis $0,$0,$29
stl $1,MyVar2
ret $31,($26),1
This does not seem to have any noticeable effect on X86 code.
This fixes PR535.
llvm-svn: 20536
it was a use, def, or both. This allows us to be less pessimistic in our
analysis of them. In practice, this doesn't make a big difference, but it
doesn't hurt either.
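A tiny sketch of the distinction being recorded; the enum and helper are
invented for the illustration:
enum class RefKind { Use, Def, UseAndDef };
struct PhysRegRef {
  unsigned Reg;
  RefKind Kind;
};
// A pure use cannot start a new value in the register's live range, so the
// analysis no longer has to assume the worst for every reference.
bool mayStartNewValue(const PhysRegRef &R) {
  return R.Kind != RefKind::Use;
}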
llvm-svn: 16632
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.
llvm-svn: 16137
Regression.CodeGen.Generic.2004-04-09-SameValueCoalescing.llx and the
code size problem.
This bug prevented us from doing most register coalesces.
llvm-svn: 16031
same as the PHI use. This is not correct, as the value of the PHI use differs
depending on which branch is taken. This fixes espresso with aggressive
coalescing, and perhaps others.
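The hazard, in the style of the earlier examples:
B1:  X = ...
B2:  Y = ...
B3:  Z = phi(X from B1, Y from B2)
Treating X and Z as one register is wrong: along the edge from B2, Z holds
Y's value, not X's.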
llvm-svn: 15189
Interval. This generalizes the isDefinedOnce mechanism that we used before
to help us coalesce ranges that overlap. As part of this, every logical
range with a different value is assigned a different number in the interval.
For example, for code that looks like this:
0 X = ...
4 X += ...
...
N = X
We now generate a live interval that contains two ranges: [2,6:0),[6,?:1)
reflecting the fact that there are two different values in the range at
different positions in the code.
Currently we are not using this information at all, so this just slows down
liveintervals. In the future, this will change.
Note that this change also substantially refactors the joinIntervalsInMachineBB
method to merge the cases for virt-virt and phys-virt joining into a single
case, adds comments, and makes the code a bit easier to follow.
llvm-svn: 15154