This only adds the interference checks required for correctness.
We still need to take advantage of register masks for
interference-driven live range splitting.
llvm-svn: 150191
The LRE_DidCloneVirtReg callback may be called with virtual registers
that RAGreedy doesn't even know about yet. In that case, there are no
data structures to update.
llvm-svn: 139702
SplitKit always computes a complement live range to cover the places
where the original live range was live, but no explicit region has been
allocated.
Currently, the complement live range is created to be as small as
possible - it never overlaps any of the regions. This minimizes
register pressure, but if the complement is going to be spilled anyway,
that is not very important. The spiller will eliminate redundant
spills, and hoist others by making the spill slot live range overlap
some of the regions created by splitting. Stack slots are cheap.
This patch adds the interface to enable spill modes in SplitKit. In
spill mode, SplitKit will assume that the complement is going to spill,
so it will allow it to overlap regions in order to avoid back-copies.
By doing some of the spiller's work early, the complement live range
becomes simpler. In some cases, it can become much simpler because no
extra PHI-defs are required. This will speed up both splitting and
spilling.
This is only the interface to enable spill modes; there is no
implementation yet.
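For illustration, a sketch of what such an interface could look like
(the enumerator and parameter names here are assumptions, not
necessarily what was committed):

class LiveRangeEdit; // collects the live ranges created by a split

class SplitEditor {
public:
  // How to build the complement (the leftover live range).
  enum ComplementSpillMode {
    SM_Partition, // Default: the complement never overlaps regions.
    SM_Size,      // Complement will be spilled; let it overlap the
                  // regions to avoid back-copies and keep it simple.
    SM_Speed      // Like SM_Size, but hoist back-copies out of loops.
  };

  // Prepare for a new split, selecting the complement policy.
  void reset(LiveRangeEdit &LRE,
             ComplementSpillMode SpillMode = SM_Partition);
};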
llvm-svn: 139500
The local ranges created get to stay in the RS_New stage, just like for
local and region splitting.
This gives tryLocalSplit a bit more freedom the first time it sees one
of these new local ranges.
llvm-svn: 137001
Normally, we don't create a live range for a single instruction in a
basic block, the spiller does that anyway. However, when splitting a
live range that belongs to a proper register sub-class, inserting these
extra COPY instructions completely removes the constraints from the
remainder interval, so it may be allocated from the larger super-class.
The spiller will mop up these small live ranges if we end up spilling
anyway. It calls them snippets.
llvm-svn: 136989
This helps generate better code in functions with high register
pressure.
The previous version of compact region splitting caused regressions
because the regions were a bit too large. A stronger negative bias
applied in r136832 fixed this problem.
llvm-svn: 136836
Apply twice the negative bias on transparent blocks when computing the
compact regions. This excludes loop backedges from the region when only
one of the loop blocks uses the register.
Previously, we would include the backedge in the region if the loop
preheader and the loop latch both used the register, but the loop header
didn't.
When both the header and latch blocks use the register, we still keep it
live on the backedge.
llvm-svn: 136832
There are two conflicting strategies in play:
- Under high register pressure, we want to assign large live ranges
first. Smaller live ranges are easier to place afterwards.
- Live range splitting is guided by interference, so splitting should be
deferred until interference is as realistic as possible.
With the recent changes to the live range stages, and with compact
regions enabled, it is less traumatic to split a live range too early.
If some of the split products were too big, they can often be split
again.
By reversing the RS_Split order, we get this queue order:
1. Normal live ranges, large to small.
2. RS_Split live ranges, large to small.
The large-to-small order improves RAGreedy's puzzle solving skills under
high register pressure. It may cause a bit more iterated splitting, but
we handle that better now.
With this change, -compact-regions is mostly an improvement on SPEC.
llvm-svn: 136388
When splitting global live ranges, it is now possible to split for
multiple destination intervals at once. Previously, we only had the main
and stack intervals.
Each edge bundle is assigned to a split candidate, and splitAroundRegion
will insert copies between the candidate intervals and the stack
interval as needed.
The multi-way splitting is used to split around compact regions when
enabled with -compact-regions. The best candidate register still gets
all the bundles it wants, but everything outside the main interval is
first split around compact regions before we create single-block
intervals.
Compact region splitting still causes some regressions, so it is not
enabled by default.
llvm-svn: 136186
When dead code elimination deletes a PHI value, the virtual register may
split into multiple connected components. In that case, revert each
component to the RS_Assign stage.
The new components are guaranteed to be smaller (the original value
numbers are distributed among the components), so this will always be
making progress. The components are now allowed to evict other live
ranges or be split again.
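As a sketch, the handling could look like this (the hook and helper
names are assumptions for illustration):

// When dead code elimination splits a register into connected
// components, revert each component to the assignment stage.
// Every component is smaller than the original, so this cannot loop.
void RAGreedy::LRE_DidSplitIntoComponents(
    ArrayRef<LiveInterval *> Components) {
  for (LiveInterval *LI : Components)
    setStage(*LI, RS_Assign); // may be evicted or split again later
}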
llvm-svn: 136034
This mechanism already exists, but the RS_Split2 stage makes it clearer.
When live range splitting creates ranges that may not be making
progress, they are marked RS_Split2 instead of RS_New. These ranges may
be split again, but only in a way that can be proven to make progress.
For local ranges, that means they must be split into ranges used by
strictly fewer instructions.
For global ranges, region splitting is bypassed and the RS_Split2
ranges go straight to per-block splitting.
llvm-svn: 135912
The stage is used to control where a live range is going, not where it
is coming from. Live ranges created by splitting will usually be marked
RS_New, but some are marked RS_Spill to avoid wasting time trying to
split them again.
The old RS_Global and RS_Local stages are merged - they are really the
same thing for local and global live ranges.
llvm-svn: 135911
This method computes the edge bundles that should be live when splitting
around a compact region. This is independent of interference.
The function returns false if the live range was already a compact
region, or the compact region doesn't have any live bundles - it would
be the same as splitting around basic blocks.
Compact regions are computed using the normal spill placement code. We
pretend there is interference in all live-through blocks that don't use
the live range. This removes all edges from the Hopfield network used
for spill placement, so it converges instantly.
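A rough sketch of the idea, with assumed SpillPlacement helper and
variable names:

bool calcCompactRegion(GlobalSplitCandidate &Cand) {
  // Seed the Hopfield network with the real constraints from blocks
  // that use the register...
  SpillPlacer->prepare(Cand.LiveBundles);
  SpillPlacer->addConstraints(UseBlockConstraints);
  // ...then pretend all live-through blocks have interference. With
  // no transparent blocks left, the network has no edges and
  // converges instantly.
  SpillPlacer->addPrefSpill(AllThroughBlocks);
  SpillPlacer->finish();
  // No live bundles means the compact region degenerates into
  // single-block splitting - report failure.
  return Cand.LiveBundles.any();
}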
llvm-svn: 135847
A split candidate can have a null PhysReg which means that it doesn't
map to a real interference pattern. Instead, pretend that all through
blocks have interference.
This makes it possible to generate compact regions where the live range
doesn't go through blocks that don't use it. The live range will still
be live between directly connected blocks with uses.
Splitting around a compact region tends to produce a live range with a
high spill weight, so it may evict a less dense live range.
llvm-svn: 135845
This gets rid of some of the gory splitting details in RAGreedy and
makes them available to future SplitKit clients.
Slightly generalize the functionality to support multi-way splitting.
Specifically, SplitEditor::splitLiveThroughBlock() supports switching
between different register intervals in a block.
llvm-svn: 135307
Original commit message:
Count references to interference cache entries.
Each InterferenceCache::Cursor instance references a cache entry. A
non-zero reference count guarantees that the entry won't be reused for a
new register.
This makes it possible to have multiple live cursors examining
interference for different physregs.
The total number of live cursors into a cache must be kept below
InterferenceCache::getMaxCursors().
Code generation should be unaffected by this change, and it doesn't seem
to affect the cache replacement strategy either.
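The scheme is essentially manual reference counting; a simplified
sketch with the types reduced to the essentials:

// Each entry caches interference data for one physreg. A cursor pins
// its entry; an entry may only be recycled when RefCount is zero.
struct Entry {
  unsigned PhysReg = 0;  // register this entry describes
  unsigned RefCount = 0; // number of live cursors pointing here
  // ... cached interference data ...
};

class Cursor {
  Entry *CacheEntry = nullptr;

  void setEntry(Entry *E) {
    if (CacheEntry)
      --CacheEntry->RefCount; // release the old entry
    CacheEntry = E;
    if (CacheEntry)
      ++CacheEntry->RefCount; // pin the new entry
  }

public:
  Cursor() = default;
  Cursor(const Cursor &O) { setEntry(O.CacheEntry); }
  Cursor &operator=(const Cursor &O) {
    setEntry(O.CacheEntry);
    return *this;
  }
  ~Cursor() { setEntry(nullptr); } // unpin on destruction
};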
llvm-svn: 135130
Each InterferenceCache::Cursor instance references a cache entry. A
non-zero reference count guarantees that the entry won't be reused for a
new register.
This makes it possible to have multiple live cursors examining
interference for different physregs.
The total number of live cursors into a cache must be kept below
InterferenceCache::getMaxCursors().
Code generation should be unaffected by this change, and it doesn't seem
to affect the cache replacement strategy either.
llvm-svn: 135121
The cache entry referenced by the best split candidate could become
clobbered by an unsuccessful candidate.
The correct fix here is to use reference counts on the cache entries.
Coming up.
llvm-svn: 135113
Some physical registers create split solutions that would spill anywhere.
They should not even be considered in future multi-way global splits.
This does not affect code generation (yet).
llvm-svn: 135080
RAGreedy::tryAssign will now evict interference from the preferred
register even when another register is free.
To support this, add the EvictionCost struct that counts how many hints
are broken by an eviction. We don't want to break one hint just to
satisfy another.
Rename canEvict to shouldEvict, and add the first bit of eviction policy
that doesn't depend on spill weights: Always make room in the preferred
register as long as the evictees can be split and aren't already
assigned to their preferred register.
Also make the CSR avoidance more accurate. When looking for a cheaper
register it is OK to use a new volatile register. Only CSR aliases that
have never been used before should be avoided.
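A sketch of what that bookkeeping amounts to (field names assumed):

// Cost of evicting interference. Broken hints compare before spill
// weight, so we never break one hint just to satisfy another.
struct EvictionCost {
  unsigned BrokenHints = 0; // number of hints broken by the eviction
  float MaxWeight = 0;      // highest spill weight among the evictees

  bool operator<(const EvictionCost &O) const {
    if (BrokenHints != O.BrokenHints)
      return BrokenHints < O.BrokenHints;
    return MaxWeight < O.MaxWeight;
  }
};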
llvm-svn: 134735
This is impossible in theory; I can prove it. In practice, our near-zero
threshold can cause the network to oscillate between equally good
solutions.
<rdar://problem/9720596>
llvm-svn: 134428
A split point inserted in a block with a landing pad successor may be
hoisted above the call to ensure that it dominates all successors. The
code that handles the rest of the basic block must take this into
account.
I am not including a test case; it would be very fragile. PR10244 comes
from building clang with exceptions enabled.
llvm-svn: 134369
Every live range is assigned a cascade number the first time it is
involved in an eviction. As the evictor, it gets a new cascade number.
Every evictee is assigned the same cascade number as the evictor.
Eviction is prohibited if the evictor has a lower assigned cascade
number than the evictee.
This means that assigned cascade numbers are monotonically increasing
with every eviction, yet they are bounded by NextCascade which can only
be incremented by new live ranges. Thus, infinite loops cannot happen,
but eviction cascades can still be triggered by new live ranges as we
want.
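In code, the rule reduces to a single comparison. A sketch, where
getCascade() and NextCascade are assumed names for the bookkeeping
described above:

bool mayEvict(const LiveInterval &Evictor, const LiveInterval &Evictee) {
  unsigned Cascade = getCascade(Evictor);
  if (!Cascade)
    Cascade = NextCascade; // assigned for real on the first eviction
  // Require a strictly higher cascade number. Evictees inherit the
  // evictor's number, so numbers only grow and loops terminate.
  return Cascade > getCascade(Evictee);
}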
Thanks to Andy for explaining this to me.
llvm-svn: 134303
This patch will sometimes choose live range split points next to
interference instead of always splitting next to a register point. That
means spill code can now appear almost anywhere, and it was necessary
to fix code that didn't expect that.
The difficult places were:
- Between a CALL returning a value on the x87 stack and the
corresponding FpPOP_RETVAL (was FpGET_ST0). Probably also near x87
inline assembly, but that didn't actually show up in testing.
- Between a CALL popping arguments off the stack and the corresponding
ADJCALLSTACKUP.
Both are fixed now. The only place spill code can't appear is after
terminators, see SplitAnalysis::getLastSplitPoint.
Original commit message:
Rewrite RAGreedy::splitAroundRegion, now with cool ASCII art.
This function has to deal with a lot of special cases, and the old
version got it wrong sometimes. In particular, it would sometimes leave
multiple uses in the stack interval in a single block. That causes bad
code with multiple reloads in the same basic block.
The new version handles block entry and exit in a single pass. It first
eliminates all the easy cases, and then goes on to create a local
interval for the blocks with difficult interference. Previously, we
would only create the local interval for completely isolated blocks.
It can happen that the stack interval becomes completely empty because
we could allocate a register in all edge bundles, and the new local
intervals deal with the interference. The empty stack interval is
harmless, but we need to remove a SplitKit assertion that checks for
empty intervals.
llvm-svn: 134125
This function has to deal with a lot of special cases, and the old
version got it wrong sometimes. In particular, it would sometimes leave
multiple uses in the stack interval in a single block. That causes bad
code with multiple reloads in the same basic block.
The new version handles block entry and exit in a single pass. It first
eliminates all the easy cases, and then goes on to create a local
interval for the blocks with difficult interference. Previously, we
would only create the local interval for completely isolated blocks.
It can happen that the stack interval becomes completely empty because
we could allocate a register in all edge bundles, and the new local
intervals deal with the interference. The empty stack interval is
harmless, but we need to remove a SplitKit assertion that checks for
empty intervals.
llvm-svn: 134047
When local live range splitting creates a live range with the same
number of instructions as the old range, mark it as RS_Local. When such
a range is seen again, require that it be split in a way that reduces
the number of instructions. That guarantees we are making progress while
still being able to perform 3 -> 2+3 splits as required by PR10070.
This also means that the PrevSlot map is no longer needed. This was also
used to estimate new spill weights, but that is no longer necessary
after SlotIndexes::insertMachineInstrInMaps() got the extra Late
insertion argument.
llvm-svn: 132697
of reserved registers.
Use RegisterClassInfo in RABasic as well. This slightly changes some
allocation orders because RegisterClassInfo puts CSR aliases last.
llvm-svn: 132581
When assigned ranges are evicted, they are put in the RS_Evicted stage and are
not allowed to evict anything else. That prevents looping automatically.
When evicting ranges just to get a cheaper register, use only spill weights to
find the possible candidates. Avoid breaking hints for this purpose, it is not
worth it.
Start implementing more complex eviction heuristics, guarded by the temporary
-complex-eviction flag. The initial version permits a heavier range to be
evicted if it doesn't have any uses where the evicting range is live. This makes
it a good candidate for live range splitting.
llvm-svn: 132358
Delete the Kill and Def markers in BlockInfo. They are no longer
necessary when BlockInfo describes a continuous live range.
This only affects the relatively rare kind of basic block where a live
range looks like this:
|---x o---|
Now live range splitting can pretend that it is looking at two blocks:
|---x
o---|
This allows the code to be simplified a bit.
llvm-svn: 132245
It is important that this function returns the same number of live blocks as
countLiveBlocks(CurLI) because live range splitting uses the number of live
blocks to ensure it is making progress.
This is in preparation of supporting duplicate UseBlock entries for basic blocks
that have a virtual register live-in and live-out, but not live-through.
llvm-svn: 132244
This doesn't change functionality (much), but it allows for a more fine-grained
eviction policy. The current policy only compares spill weights, and that is not
always the best thing to do. Spill weights are designed to serve linear scan,
and they don't consider live range splitting.
Add a mechanism so canEvict() can request that a live range be evicted and
split/spilled. This is to avoid infinite eviction loops.
llvm-svn: 132101
This can't be just an assertion; users can always write impossible inline
assembly. Such an assembly statement should be included in the error message.
llvm-svn: 131024
After a virtual register is split, update any debug user variables that resided
in the old register. This ensures that the LiveDebugVariables are still correct
after register allocation.
This may create DBG_VALUE instructions that place a user variable in a register
in parts of the function and in a stack slot in other parts. DwarfDebug
currently doesn't support that.
llvm-svn: 130998
Register coalescing can sometimes create live ranges that end in the middle of a
basic block without any killing instruction. When SplitKit detects this, it will
repair the live range by shrinking it to its uses.
Live range splitting also needs to know about this. When the range shrinks so
much that it becomes allocatable, live range splitting fails because it can't
find a good split point. It is paranoid about making progress, so an allocatable
range is considered an error.
The coalescer should really not be creating these bad live ranges. They appear
when coalescing dead copies.
llvm-svn: 130787
These intervals are allocatable immediately after splitting, but they may be
evicted because of later splitting. This is rare, but when it happens they
should be split again.
The remainder intervals that cannot be allocated after splitting still move
directly to spilling.
SplitEditor::finish can optionally provide a mapping from new live intervals
back to the original interval indexes returned by openIntv().
Each original interval index can map to multiple new intervals after connected
components have been separated. Dead code elimination may also add existing
intervals to the list.
The reverse mapping allows the SplitEditor client to treat the new intervals
differently depending on the split region they came from.
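A sketch of how a client might use the reverse mapping (treating
index 0 as the remainder interval is an assumption here, and the
mark* helpers are hypothetical):

SmallVector<unsigned, 8> IntvMap;
SE->finish(&IntvMap); // IntvMap[i] = openIntv() index behind the
                      // i-th new interval
for (unsigned I = 0, E = LREdit.size(); I != E; ++I) {
  LiveInterval &LI = *LREdit.get(I);
  if (IntvMap[I] == 0)     // assumed: 0 marks the remainder interval
    markForSpilling(LI);   // hypothetical helper
  else
    markForAllocation(LI); // hypothetical helper; may be split again
}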
llvm-svn: 129925
On the x86-64 and thumb2 targets, some registers are more expensive to encode
than others in the same register class.
Add a CostPerUse field to the TableGen register description, and make it
available from TRI->getCostPerUse. This represents the cost of a REX prefix or a
32-bit instruction encoding required by choosing a high register.
Teach the greedy register allocator to prefer cheap registers for busy live
ranges (as indicated by spill weight).
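The tie-breaking could look roughly like this (pickCheapRegister and
isAvailable are hypothetical helpers; only TRI->getCostPerUse is from
the patch):

unsigned pickCheapRegister(ArrayRef<unsigned> Order,
                           const LiveInterval &VirtReg) {
  unsigned BestReg = 0, BestCost = ~0u;
  for (unsigned PhysReg : Order) {
    if (!isAvailable(PhysReg, VirtReg)) // hypothetical helper
      continue;
    // Cost of e.g. a REX prefix (x86-64) or a wide encoding (thumb2).
    unsigned Cost = TRI->getCostPerUse(PhysReg);
    if (Cost < BestCost) {
      BestReg = PhysReg;
      BestCost = Cost;
      if (BestCost == 0)
        break; // nothing beats a free register
    }
  }
  return BestReg;
}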
llvm-svn: 129864
This merges the behavior of splitSingleBlocks into splitAroundRegion, so the
RS_Region and RS_Block register stages can be coalesced. That means the leftover
intervals after region splitting go directly to spilling instead of a second
pass of per-block splitting.
llvm-svn: 129379
It is common for large live ranges to have few basic blocks with register uses
and many live-through blocks without any uses. This approach grows the Hopfield
network incrementally around the use blocks, completely avoiding checking
interference for some through blocks.
llvm-svn: 129188
About 90% of the relevant blocks are live-through without uses, and the only
information required about them is their number. This saves memory and enables
later optimizations that need to look at only the use-blocks.
llvm-svn: 128985
When the greedy register allocator is splitting multiple global live ranges, it
tends to look at the same interference data many times. The InterferenceCache
class caches queries for unaltered LiveIntervalUnions.
llvm-svn: 128764
When DCE clones a live range because it separates into connected components,
make sure that the clones enter the same register allocator stage as the
register they were cloned from.
For instance, clones may be split even when they were created during spilling.
Other registers created during spilling are not candidates for splitting or even
(re-)spilling.
llvm-svn: 128524
The reassignment phase was able to move interference with a higher spill weight,
but it didn't happen very often and it was fairly expensive.
The existing interference eviction picks up the slack.
llvm-svn: 128397
This simplifies the code and makes it faster too.
The interference patterns are saved for each candidate register. They
will be reused when the split is actually executed. Work in progress.
llvm-svn: 127054
This effectively disables the 'turbo' functionality of the greedy register
allocator where all new live ranges created by splitting would be reconsidered
as if they were originals.
There are two reasons for doing this:
1. It guarantees that the algorithm terminates. Early versions were
prone to infinite looping in certain corner cases.
2. It is a 2x speedup. We can skip a lot of unnecessary interference
checks that won't lead to good splitting anyway.
The problem is that region splitting only gets one shot, so it should probably
be changed to target multiple physical registers at once.
Local live range splitting is still 'turbo' enabled. It only accounts for a
small fraction of compile time, so it is probably not necessary to do anything
about that.
llvm-svn: 126781
New live ranges are assigned in long -> short order, but live ranges that have
been evicted at least once are deferred and assigned in short -> long order.
Also disable splitting and spilling for live ranges seen for the first time.
The intention is to create a realistic interference pattern from the heavy live
ranges before starting splitting and spilling around it.
llvm-svn: 126451
When a large live range is evicted, it will usually be split when it comes
around again. By deferring evicted live ranges, the splitting happens at a time
when the interference pattern is more realistic. This prevents repeated
splitting and evictions.
llvm-svn: 126282
Use interval sizes instead of spill weights to determine if it is legal to evict
interference. A smaller interval can evict interference if all interfering live
ranges are larger.
Allow multiple interferences to be evicted as long as they are all larger than
the live range being allocated.
Spill weights are still used to select the preferred eviction candidate.
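A sketch of the legality test (the query plumbing is assumed):

// A live range may evict interference only if every interfering live
// range is strictly larger than itself.
bool canEvictBySize(LiveInterval &VirtReg, unsigned PhysReg) {
  const unsigned Size = VirtReg.getSize();
  LiveIntervalUnion::Query &Q = query(VirtReg, PhysReg); // assumed
  for (LiveInterval *Intf : Q.interferingVRegs())
    if (Intf->getSize() <= Size)
      return false; // only strictly larger ranges may be displaced
  return true;
}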
llvm-svn: 126276
This is based on the observation that long live ranges are more difficult to
allocate, so there is a better chance of solving the puzzle by handling the big
pieces first. The allocator will evict and split long live ranges when they get
in the way.
RABasic is still using spill weights for its priority queue, so the interface to
the queue has been virtualized.
llvm-svn: 126259
An original endpoint is an instruction that killed or defined the original live
range before any live ranges were split.
When splitting global live ranges, avoid creating local live ranges without any
original endpoints. We may still create global live ranges without original
endpoints, but such a range won't be split again, and live range splitting still
terminates.
llvm-svn: 126151
The rewriter works almost identically to -rewriter=trivial, except it also
eliminates any identity copies.
This makes the new register allocators independent of VirtRegRewriter.cpp which
will be going away at the same time as RegAllocLinearScan.
llvm-svn: 125967
A local live range is live in a single basic block. If such a range fails to
allocate, try to find a sub-range that would get a larger spill weight than its
interference.
llvm-svn: 125764
Registers are not allocated strictly in spill weight order when live range
splitting and spilling have created new shorter intervals with higher spill
weights.
When one of the new heavy intervals conflicts with a single lighter interval,
simply evict the old interval instead of trying to split the heavy one.
The lighter interval is a better candidate for splitting; it has a smaller use
density.
llvm-svn: 125151
If a live range is used by a terminator instruction, and that live range needs
to leave the block on the stack or in a different register, it can be necessary
to have both sides of the split live at the terminator instruction.
Example:
%vreg2 = COPY %vreg1
JMP %vreg1
Becomes after spilling %vreg2:
SPILL %vreg1
JMP %vreg1
The spill doesn't kill the register as is normally the case.
llvm-svn: 125102
If interference reaches the last split point, it is effectively live out and
should be marked as 'MustSpill'.
This can make a difference when the terminator uses a register. There is no way
that register can be reused in the outgoing CFG bundle, even if it isn't live
out.
llvm-svn: 124900
We should not be attempting a region split if it won't lead to at least one
directly allocatable interval. That could cause infinite splitting loops.
llvm-svn: 124893
When the live range is live through a block that doesn't use the register, but
that has interference, region splitting wants to split at the top and bottom of
the basic block.
llvm-svn: 124839
The greedy register allocator revealed some problems with the value mapping in
SplitKit. We would sometimes start mapping values before all defs were known,
and that could change a value from a simple 1-1 mapping to a multi-def mapping
that requires SSA update.
The new approach collects all defs and register assignments first without
filling in any live intervals. Only when finish() is called, do we compute
liveness and mapped values. At this time we know with certainty which values map
to multiple values in a split range.
This also has the advantage that we can compute live ranges based on the
remaining uses after rematerializing at split points.
The current implementation has many opportunities for compile time optimization.
llvm-svn: 124765
The value mapping gets confused about which original values have multiple new
definitions, so they may need PHI insertions.
This could probably be simplified by letting enterIntvBefore() take a live range
to be added following the instruction. As long as the range stays inside the
same basic block, value mapping shouldn't be a problem.
llvm-svn: 123926
interval after an instruction. The leaveIntvAfter() method only adds liveness
from the instruction's boundary index to the inserted copy.
Ideally, SplitKit should be smarter about this, perhaps by combining useIntv()
and leaveIntvAfter() into one method that guarantees continuity.
llvm-svn: 123858
Region splitting includes loop splitting as a subset, and it is more generic.
The splitting heuristics for variables that are live in more than one block are
now:
1. Try to create a region that covers multiple basic blocks.
2. Try to create a new live range for each block with multiple uses.
3. Spill.
Steps 2 and 3 are similar to what the standard spiller is doing.
llvm-svn: 123853
Analyze the live range's behavior entering and leaving basic blocks. Compute an
interference pattern for each allocation candidate, and use SpillPlacement to
find an optimal region where that register can be live.
This code is still not enabled.
llvm-svn: 123774
Add an optional banner argument to createMachineVerifierPass and
MachineFunction::verify.
The banner is printed before the machine code dump, just like the printer pass.
llvm-svn: 122113
Remove the need to implement an abstract priority queue interface in
subclasses that want to override the priority calculations.
Subclasses must provide a getPriority() implementation instead.
This approach requires less code as long as priorities are expressible as simple
floats, and it avoids the dangers of defining potentially expensive priority
comparison functions.
It also should speed up priority_queue operations since they no longer have to
chase pointers when comparing registers. This is not measurable, though.
Preferably, we shouldn't use floats to guide code generation. The use of floats
here is derived from the use of floats for spill weights. Spill weights have a
dynamic range that doesn't lend itself easily to a fixed-point implementation.
When someone invents a stable spill weight representation, it can be reused for
allocation priorities.
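A sketch of the resulting shape (illustrative; the actual class layout
differs):

#include <queue>
#include <utility>

class RAGreedy : public RegAllocBase {
  // Plain float priorities: the queue never chases pointers back into
  // live interval data when comparing entries.
  std::priority_queue<std::pair<float, unsigned>> Queue;

public:
  // The subclass hook: here the priority is simply the spill weight.
  float getPriority(LiveInterval *LI) { return LI->weight; }

  void enqueue(LiveInterval *LI) {
    Queue.push(std::make_pair(getPriority(LI), LI->reg));
  }
};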
llvm-svn: 121294
This new register allocator is initially identical to RegAllocBasic, but it will
receive all of the tricks that RegAllocBasic won't get.
RegAllocGreedy will eventually replace linear scan.
llvm-svn: 121234