as an (index,bool) pair. The bool flag records whether the kill is a
PHI kill or not. This code will be used to enable splitting of live
intervals containing PHI-kills.
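As a hedged sketch of the shape this describes (type and field names are hypothetical, not from the patch):
  #include <utility>
  #include <vector>
  // Each kill is recorded as (instruction index, isPHIKill).
  typedef std::pair<unsigned, bool> KillEntry;
  struct VarInfoSketch {
    std::vector<KillEntry> Kills;
    void addKill(unsigned Index, bool IsPHIKill) {
      Kills.push_back(std::make_pair(Index, IsPHIKill));
    }
    // PHI kills are the points where interval splitting becomes legal.
    bool isPHIKill(unsigned i) const { return Kills[i].second; }
  };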
A slight change to live interval weights introduced an extra spill
into lsr-code-insertion (outside the critical sections). The test
condition has been updated to reflect this.
llvm-svn: 75097
Note: the isUndef marker must be placed even on an implicit_def's def operand, or else the scavenger will not ignore it. This is necessary because the -O0 path does not use LiveIntervalAnalysis; it treats implicit_def just like any other def.
llvm-svn: 74601
The register allocator, when it allocates a register to a virtual register defined by an implicit_def, can allocate any physical register without worrying about overlapping live ranges. It should mark all operands of said virtual register so later passes will do the right thing.
This is not the best solution, but it should be a lot less fragile than having the scavenger try to track what is defined by an implicit_def.
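A minimal sketch of the marking step, assuming the setIsUndef operand flag and approximating the use-list iteration:
  // Mark every operand referencing the IMPLICIT_DEF-defined vreg as undef
  // so later passes (e.g. the scavenger) know no real value lives there.
  for (MachineRegisterInfo::reg_iterator RI = MRI.reg_begin(VirtReg),
       RE = MRI.reg_end(); RI != RE; ++RI)
    RI.getOperand().setIsUndef(true);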
llvm-svn: 74518
entries as there are basic blocks in the function. LiveVariables::getVarInfo
creates a VarInfo struct for every register in the function, leading to
quadratic space use. This patch changes the BitVector to a SparseBitVector,
which doesn't help the worst-case memory use but does reduce the actual use in
very long functions with short-lived variables.
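A hedged sketch of the change, assuming the bit set in question is the per-variable alive-blocks set:
  #include "llvm/ADT/SparseBitVector.h"
  struct VarInfo {
    // Before: llvm::BitVector AliveBlocks; // one bit per basic block,
    //                                      // allocated for every register
    llvm::SparseBitVector<> AliveBlocks;    // stores only the set bits, so
                                            // short-lived variables stay small
  };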
llvm-svn: 72426
VirtRegMap keeps track of allocations so it knows what's not used. As a horrible hack, stack coloring can color spill slots with *free* registers. That is, it replaces reloads and spills with copies from and to the free register. It unfolds instructions that load from or store to the spill slot and replaces them with register-using variants.
Not yet enabled. This is part 1. More coming.
llvm-svn: 70787
This fixes a very subtle bug. A vr defined by an implicit_def is allowed to overlap with any register since it doesn't actually modify anything. However, if it's used as a two-address use, its live range can be extended and it can be spilled. The spiller must take care not to emit a reload for the val# that's defined by the implicit_def. This is both a correctness and a performance issue.
llvm-svn: 69743
%reg1498<def> = MOV32rm %reg1024, 1, %reg0, 12, %reg0, Mem:LD(4,4) [sunkaddr39 + 0]
%reg1506<def> = MOV32rm %reg1024, 1, %reg0, 8, %reg0, Mem:LD(4,4) [sunkaddr42 + 0]
%reg1486<def> = MOV32rr %reg1506
%reg1486<def> = XOR32rr %reg1486, %reg1498, %EFLAGS<imp-def,dead>
%reg1510<def> = MOV32rm %reg1024, 1, %reg0, 4, %reg0, Mem:LD(4,4) [sunkaddr45 + 0]
=>
%reg1498<def> = MOV32rm %reg2036, 1, %reg0, 12, %reg0, Mem:LD(4,4) [sunkaddr39 + 0]
%reg1506<def> = MOV32rm %reg2037, 1, %reg0, 8, %reg0, Mem:LD(4,4) [sunkaddr42 + 0]
%reg1486<def> = MOV32rr %reg1506
%reg1486<def> = XOR32rr %reg1486, %reg1498, %EFLAGS<imp-def,dead>
%reg1510<def> = MOV32rm %reg2038, 1, %reg0, 4, %reg0, Mem:LD(4,4) [sunkaddr45 + 0]
From linearscan's point of view, reg2036, reg2037, and reg2038 are separate registers, each "killed" after a single use. The reloaded register therefore becomes available again immediately and is often clobbered right away. e.g. In this case reg1498 is allocated EAX while reg2036 is allocated RAX. This means we end up with multiple reloads from the same stack slot in the same basic block.
Now linearscan recognizes that there are other reloads from the same stack slot in the same BB. It will "downgrade" RAX (and its aliases) after reg2036 is allocated until the next reload (reg2037) is done. This greatly increases the likelihood that reloads from the stack slot are reused.
This speeds up sha1 from OpenSSL by 5.8%. It is also an across-the-board win for SPEC2000 and 2006.
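A hedged sketch of the bookkeeping (container and helper names are guesses at the implementation):
  // After assigning PhysReg to a reload from stack slot SS, make PhysReg
  // and its aliases unattractive until the next reload from SS is
  // allocated, so the value sitting in PhysReg can be reused.
  void downgradeUntilNextReload(unsigned PhysReg, int SS) {
    DowngradedRegs.insert(PhysReg);            // skipped when allocating
    for (const unsigned *AS = TRI->getAliasSet(PhysReg); *AS; ++AS)
      DowngradedRegs.insert(*AS);
    DowngradeMap.insert(std::make_pair(SS, PhysReg)); // undone at next reload
  }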
llvm-svn: 69585
register destinations that are tied to source operands. The
TargetInstrDescr::findTiedToSrcOperand method silently fails for inline
assembly. The existing MachineInstr::isRegReDefinedByTwoAddr was very
close to doing what is needed, so this revision makes a few changes to
that method and also renames it to isRegTiedToUseOperand (for consistency
with the very similar isRegTiedToDefOperand and because it handles both
two-address instructions and inline assembly with tied registers).
llvm-svn: 68714
with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce
SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG
instructions), and teach the DAGCombiner to take advantage of this on
targets which support it. This eliminates many redundant
zero-extension operations on x86-64.
This adds a new TargetLowering hook, isZExtFree. It's similar to
isTruncateFree, except it only applies to actual definitions, and not
no-op truncates which may not zero the high bits.
Also, this adds a new optimization to SimplifyDemandedBits: transform
operations like x+y into (zext (add (trunc x), (trunc y))) on targets
where all the casts are no-ops. In contexts where the high part of the
add is explicitly masked off, this allows the mask operation to be
eliminated. Fix the DAGCombiner to avoid undoing these transformations
to eliminate casts on targets where the casts are no-ops.
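As a hedged, self-contained check of the arithmetic identity behind the SimplifyDemandedBits transform (not code from the patch): when only the low 32 bits of a 64-bit add are demanded, narrowing the operands first gives the same result.
  #include <cassert>
  #include <cstdint>
  int main() {
    uint64_t x = 0x123456789abcdef0ULL, y = 0x0fedcba987654321ULL;
    // The context demands only the low 32 bits of the wide add.
    uint64_t wide = (x + y) & 0xffffffffULL;
    // Equivalent form: (zext (add (trunc x), (trunc y))).
    uint64_t narrow = (uint64_t)(uint32_t)((uint32_t)x + (uint32_t)y);
    assert(wide == narrow);
    return 0;
  }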
Also, this adds a new two-address lowering heuristic. Since
two-address lowering runs before coalescing, it helps to be able to
look through copies when deciding whether commuting and/or
three-address conversion are profitable.
Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle
the case that a clobber range extended both before and beyond an
existing live range. In that case, multiple live ranges need to be
added. This was exposed by the new subreg coalescing code.
Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the
spiller behavior it was looking for no longer occurs with the new
instruction selection.
llvm-svn: 68576
- Make type declarations match the struct/class keyword of the definition.
- Move AddSignalHandler into the namespace where it belongs.
- Correctly call functions from template base.
- Some other small changes.
With this patch, LLVM and Clang should build properly and with far less noise under VS2008.
llvm-svn: 67347
v1024 = EDI // not killed
=
= EDI
One possible solution is for the coalescer to examine the sub-register live intervals in the same manner as the physical register. Another possibility is to examine defs and uses (when needed) of sub-registers. Both solutions are too expensive. For now, look for "short virtual intervals" and scan instructions to look for conflicts instead.
This is a small win on x86-64. e.g. It shaves 403.gcc by ~80 instructions.
llvm-svn: 61847
172 %ECX<def> = MOV32rr %reg1039<kill>
180 INLINEASM <es:subl $5,$1
sbbl $3,$0>, 10, %EAX<def>, 14, %ECX<earlyclobber,def>, 9, %EAX<kill>,
36, <fi#0>, 1, %reg0, 0, 9, %ECX<kill>, 36, <fi#1>, 1, %reg0, 0
188 %EAX<def> = MOV32rr %EAX<kill>
196 %ECX<def> = MOV32rr %ECX<kill>
204 %ECX<def> = MOV32rr %ECX<kill>
212 %EAX<def> = MOV32rr %EAX<kill>
220 %EAX<def> = MOV32rr %EAX
228 %reg1039<def> = MOV32rr %ECX<kill>
The early clobber operand ties ECX input to the ECX def.
The live interval of ECX is represented as this:
%reg20,inf = [46,47:1)[174,230:0) 0@174-(230) 1@46-(47)
The right way to represent this is something like
%reg20,inf = [46,47:2)[174,182:1)[181,230:0) 0@174-(182) 1@181-(230) 2@46-(47)
Of course that won't work since that means overlapping live ranges defined by two val#.
The workaround for now is to add a bit to the val# which says the val# is redefined by an earlyclobber def somewhere. This prevents the move at 228 from being optimized away by SimpleRegisterCoalescing::AdjustCopiesBackFrom.
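A minimal sketch of the workaround (the field name is hypothetical):
  // Rather than represent overlapping ranges for two val#'s, tag the
  // value number itself.
  struct VNInfoSketch {
    unsigned id;
    bool redefByEC; // this val# is redefined by an earlyclobber def, so
                    // AdjustCopiesBackFrom must leave the copy alone
  };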
llvm-svn: 61259
can give it the same stack slot as the spilled interval if it is folded.
This prevents the fold/unfold code from pointing to the wrong register.
llvm-svn: 58255
"If a re-materializable instruction has a register
operand, the spiller will change the register operand's
spill weight to HUGE_VAL to avoid it being spilled.
However, if the operand is already in the queue ready
to be spilled, avoid re-materializing it".
llvm-svn: 56837
RA problem by expanding the live interval of an
earlyclobber def back one slot. Remove
overlap-earlyclobber throughout. Remove
earlyclobber bits and their handling from
live intervals.
llvm-svn: 56539
with an earlyclobber operand elsewhere. Propagate
this bit and the earlyclobber bit through SDISel.
Change linear-scan RA not to allocate regs in a way
that conflicts with an earlyclobber. See also comments.
llvm-svn: 56290
instruction. Also, their valno's should have an unknown def. This has no effect currently, but was
causing issues when StrongPHIElimination was enabled.
llvm-svn: 56231
isImmediate(), isRegister(), and friends, to avoid confusion
about having two different names with the same meaning. I'm
not attached to the longer names, and would be ok with
changing to the shorter names if others prefer it.
llvm-svn: 56189
insofar as it compiles and, in theory, works, but it does not take advantage of recent advancements. For instance, it could be improved by using
MachineRegisterInfo::use_iterator.
llvm-svn: 54924
a new ilist_node class, and remove them. Unlike alist_node,
ilist_node doesn't attempt to manage storage itself, so it avoids
the associated problems, including being opaque in gdb.
Adjust the Recycler class so that it doesn't depend on alist_node.
Also, change it to use explicit Size and Align parameters, allowing
it to work when the largest-sized node doesn't have the greatest
alignment requirement.
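A hedged sketch of the resulting shape (defaults written with C++11 alignof for brevity):
  // Explicit Size and Align let one recycler serve several node types,
  // even when the largest type is not the most strictly aligned one.
  template <class T, size_t Size = sizeof(T), size_t Align = alignof(T)>
  class RecyclerSketch {
    // free list of blocks of at least Size bytes, aligned to Align
  };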
Change MachineInstr's MachineMemOperand list from a pool-backed
alist to a std::list for now.
llvm-svn: 54146
by the PHI needs to be extended to the beginning of its basic block, and the intervals that were inputs need to be trimmed to the end
of their basic blocks.
llvm-svn: 54070
and knowledge of PseudoSourceValues. This unfortunately isn't sufficient to allow
constants to be rematerialized in PIC mode -- the extra indirection is a
complication.
llvm-svn: 54000
to multiply the instruction count by a constant factor in a few places, which
caused the register allocator to require many more iterations.
llvm-svn: 53959
8 %reg1024<def> = IMPLICIT_DEF
12 %reg1024<def> = INSERT_SUBREG %reg1024<kill>, %reg1025, 2
The live range [12, 14) is not part of the r1024 live interval since it's defined by an implicit def; it will not conflict with the live interval of r1025. Now suppose both registers are spilled: you can easily see a situation where both registers are reloaded before the INSERT_SUBREG and into target registers that overlap.
llvm-svn: 53503
MachineMemOperands. The pools are owned by MachineFunctions.
This drastically reduces the number of calls to malloc/free made
during the "Emit" phase of scheduling, as well as later phases
in CodeGen. Combined with other changes, this speeds up the
"instruction selection" phase of CodeGen by 10% in some cases.
llvm-svn: 53212
live interval to infinity if the instruction being rewritten is an
original remat def instruction. We were only checking against the clone
of the remat def which doesn't actually appear in the IR at all.
llvm-svn: 51440
that it is cheap and efficient to get.
Move a variety of predicates from TargetInstrInfo into
TargetInstrDescriptor, which makes it much easier to query a predicate
when you don't have TII around. Now you can use MI->getDesc()->isBranch()
instead of going through TII, and this is much more efficient anyway. Not
all of the predicates have been moved over yet.
Update old code that used MI->getInstrDescriptor()->Flags to use the
new predicates in many places.
llvm-svn: 45674
that "machine" classes are used to represent the current state of
the code being compiled. Given this expanded name, we can start
moving other stuff into it. For now, move the UsedPhysRegs and
LiveIns/LiveOuts vectors from MachineFunction into it.
Update all the clients to match.
This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.
llvm-svn: 45467
- Eliminate the static "print" method for operands, moving it
into MachineOperand::print.
- Change various set* methods for register flags to take a bool
for the value to set it to. Remove unset* methods.
- Group methods more logically by operand flavor in MachineOperand.h
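A hedged illustration of the setter change (method names inferred from the description):
  MachineOperand &MO = MI->getOperand(0);
  // Before: paired methods
  //   MO.setIsKill();  MO.unsetIsKill();
  // After: one method taking the value to set
  MO.setIsKill(true);
  MO.setIsKill(false);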
llvm-svn: 45461
This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a
low spill weight so it would not be picked in case spilling is needed (avoid
pushing other intervals in the same BB to be spilled).
llvm-svn: 44601
in the middle of a split basic block, create a new live interval starting at
the def. This avoids artificially extending the live interval over a number of
cycles where it is dead. e.g.
bb1:
= vr1204 (use / kill) <= new interval starts and ends here.
...
...
vr1204 = (new def) <= start a new interval here.
= vr1204 (use)
llvm-svn: 44436
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.
This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note, it does *not* decrease the number of spills,
since no copies are inserted; the split intervals are *connected* through spills
and reloads (or rematerialization). A newly created interval can be spilled
again; in that case, since it does not span multiple basic blocks, it is spilled
in the usual manner. However, it can reuse the same stack slot as the previously
split interval.
This is currently controlled by -split-intervals-at-bb.
llvm-svn: 44198
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator used
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers from others. It's too fragile to constantly
update the information.
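A minimal sketch of the difference (the struct is illustrative, not the real MachineOperand layout):
  // Before: a side table, fragile once several uses share one register.
  //   std::map<MachineOperand*, unsigned> SubRegUses;
  // After: the sub-register index lives on the operand itself.
  struct OperandSketch {
    unsigned Reg;
    unsigned SubReg; // 0 means no sub-register index
  };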
llvm-svn: 44104
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination registers
are in different classes. For example, on x86, mov32to32_ targets GR32_, which
contains a subset of the registers in GR32.
The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
legal) so the copy can be eliminated.
This eliminates 443 extra moves from 403.gcc.
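A hedged sketch of heuristic 1 (assuming a preference field on the live interval):
  // When visiting a copy, remember the source's physical register so the
  // allocator tries it first for the destination; if it succeeds, the
  // copy becomes an identity move and can be deleted.
  void setCopyPreference(LiveInterval &DstInt, unsigned SrcPhysReg) {
    DstInt.preference = SrcPhysReg;
  }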
llvm-svn: 43662
(almost) a register copy. However, it always coalesced to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are
sub-register uses, which adds subtle complications to load folding, spiller rewrite,
etc.
llvm-svn: 42899
Changes related modules so VNInfo's are not copied. This decreases
copy coalescing time by 45% and overall compilation time by 10% on siod.
llvm-svn: 41579
kill instruction #, and source register number (iff the value# is defined by a
copy).
- Now def instruction # is set for every value#, not just for copy defined ones.
- Update some outdated code related to inactive live ranges.
- Kill info not yet set. That's next patch.
llvm-svn: 40913
with a general target hook to identify rematerializable instructions. Some
instructions are only rematerializable with specific operands, such as loads
from constant pools, while others are always rematerializable. This hook
allows both to be identified as being rematerializable with the same
mechanism.
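A hedged sketch of the hook's shape (the signature is approximate):
  // In TargetInstrInfo: targets override this to say whether this
  // particular instance, operands included, is rematerializable;
  // always-rematerializable opcodes can simply return true.
  virtual bool isReallyTriviallyReMaterializable(const MachineInstr *MI) const {
    return false; // conservative default
  }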
llvm-svn: 37644
simultaneously. Move that pass to SimpleRegisterCoalescing.
This makes it easier to implement alternative register allocation and
coalescing strategies while maintaining reuse of the existing live
interval analysis.
llvm-svn: 37520
- A register def / use now implicitly affects sub-register liveness but does
not affect liveness information of super-registers.
- Def of a larger register (if followed by a use later) is treated as
read/mod/write of a smaller register.
llvm-svn: 36434
long live interval that has low usage density.
1. Change order of coalescing to join physical registers with virtual
registers first before virtual register intervals become too long.
2. Check size and usage density to determine if it's worthwhile to join.
3. If joining is aborted, assign the virtual register live interval's allocation
preference field to the physical register.
4. The register allocator should try to allocate to the preferred register
first (if available) to create identity moves that can be eliminated.
llvm-svn: 36218
of dead def live interval at 1 to avoid multiple defs targeting the same
register. The previous patch missed a case where the source operand is live-in.
In that case, remove the whole interval.
llvm-svn: 35512
to be really bad. Once they are joined they are not broken apart. Also, physical
intervals cannot be spilled!
Added a heuristic as a workaround for this. Be careful coalescing with a
physical register if the virtual register uses are "far". Check if there are
uses in the same loop as the source (copy instruction). Check if it is in the
loop preheader, etc.
llvm-svn: 35134
entry (0x8b056f0, LLVM BB @0x8b01b30, ID#0):
Live Ins: %r0 %r1 %r2 %r3
%reg1032 = tMOVrr %r3<kill>
%reg1033 = tMOVri8 1
%reg1034 = tMOVri8 0
tCMPi8 %reg1029<kill>, 0
tBcc mbb<entry,0x8b06a10>, 0
Successors according to CFG: 0x8b06980 0x8b06a10
entry (0x8b06980, LLVM BB @0x8b01b30, ID#12):
Predecessors according to CFG: 0x8b056f0
%reg1036 = tMOVrr %reg1034<kill>
Successors according to CFG: 0x8b06a10
entry (0x8b06a10, LLVM BB @0x8b01b30, ID#13):
Predecessors according to CFG: 0x8b056f0 0x8b06980
%reg1024<dead> = tMOVrr %reg1030<kill>
...
reg1030 and r1 have already been joined. When reg1024 and reg1030 are joined,
r1's live range from function entry to the tMOVrr instruction is dead. Eliminate
r1 from the livein set of the entry BB, not the BB where the copy is.
llvm-svn: 34866
- When coalescing a copy MI, if its destination is "dead", propagate the
property to the source MI's destination if there are no intervening uses.
- Detect dead function live-in's and remove them.
llvm-svn: 34383