Commit Graph

97 Commits

Author SHA1 Message Date
Jakob Stoklund Olesen 6d2bbc1c20 Extract SpillPlacement::addLinks for handling the special transparent blocks.
llvm-svn: 129079
2011-04-07 17:27:46 +00:00
Jakob Stoklund Olesen 8ce2f43694 Also account for the spill code that would be inserted in live-through blocks with interference.
llvm-svn: 129030
2011-04-06 21:32:41 +00:00
Jakob Stoklund Olesen 81439a83f4 Abort the constraint calculation early when all positive bias is lost.
Without any positive bias, there is nothing for the spill placer to do. It will
spill everywhere.

llvm-svn: 129029
2011-04-06 21:32:38 +00:00
Jakob Stoklund Olesen 6895b87dfe Keep track of the number of positively biased nodes when adding constraints.
If there are no positive nodes, the algorithm can be aborted early.

llvm-svn: 129021
2011-04-06 19:14:00 +00:00
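
A rough sketch of the idea in the two constraint-calculation changes above, with all types and names invented for illustration (this is not SpillPlacement's actual interface): while block constraints are applied, count the nodes that remain positively biased, i.e. that still pull the value toward a register. If none are left, the only possible outcome is spilling everywhere, so the caller can abort the placement early instead of running the iteration.

    #include <utility>
    #include <vector>

    // Hypothetical stand-ins for the spill placement data.
    enum class BorderConstraint { DontCare, PrefReg, PrefSpill, MustSpill };

    struct Node {
      float Bias = 0;   // > 0 means this node prefers the value in a register
    };

    // Apply constraints, then report whether any positive bias survives.
    // A false return tells the caller to skip the expensive iteration and
    // simply spill everywhere.
    bool addConstraints(std::vector<Node> &Nodes,
                        const std::vector<std::pair<unsigned, BorderConstraint>> &Cs) {
      for (const auto &[NodeIdx, C] : Cs) {
        switch (C) {
        case BorderConstraint::PrefReg:   Nodes[NodeIdx].Bias += 1.0f; break;
        case BorderConstraint::PrefSpill: Nodes[NodeIdx].Bias -= 1.0f; break;
        case BorderConstraint::MustSpill: Nodes[NodeIdx].Bias = -1e9f; break;
        case BorderConstraint::DontCare:  break;
        }
      }
      unsigned PositiveNodes = 0;
      for (const Node &N : Nodes)
        if (N.Bias > 0)
          ++PositiveNodes;
      return PositiveNodes != 0;
    }
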
Jakob Stoklund Olesen 36b5d8a698 Break the spill placement algorithm into three parts: prepare, addConstraints, and finish.
This will allow us to abort the algorithm early if it is determined to be futile.

llvm-svn: 129020
2011-04-06 19:13:57 +00:00
Jakob Stoklund Olesen f3b2dcc74d Oops. Scary.
llvm-svn: 128986
2011-04-06 04:07:14 +00:00
Jakob Stoklund Olesen bf91c4e85e Analyze blocks with uses separately from live-through blocks without uses.
About 90% of the relevant blocks are live-through without uses, and the only
information required about them is their number. This saves memory and enables
later optimizations that need to look at only the use-blocks.

llvm-svn: 128985
2011-04-06 03:57:00 +00:00
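
The data layout this enables can be pictured with a couple of invented types (not the actual SplitAnalysis structures): blocks containing uses carry full per-block information, while live-through blocks are recorded by block number only, which saves memory and lets later code walk just the use-blocks.

    #include <vector>

    // Illustrative stand-ins only.
    struct SlotIndex { unsigned Idx = 0; };

    // Full details are kept only for blocks that actually use the register.
    struct BlockInfo {
      unsigned MBBNum;       // basic block number
      SlotIndex FirstUse;    // first use inside the block
      SlotIndex LastUse;     // last use inside the block
      bool LiveIn, LiveOut;  // live across the block boundaries?
    };

    struct SplitAnalysisSketch {
      std::vector<BlockInfo> UseBlocks;     // the ~10% of blocks with uses
      std::vector<unsigned>  ThroughBlocks; // live-through, no uses: number only
    };
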
Jakob Stoklund Olesen 6aa0fbf4c0 Run LiveDebugVariables in RegAllocBasic and RegAllocGreedy.
llvm-svn: 128935
2011-04-05 21:40:37 +00:00
Jakob Stoklund Olesen d93b0e3ced Stop precomputing last split points, query the SplitAnalysis cache on demand.
llvm-svn: 128875
2011-04-05 04:20:29 +00:00
Jakob Stoklund Olesen 8933907b51 Stop caching basic block index ranges now that SlotIndexes can keep up.
llvm-svn: 128821
2011-04-04 15:32:15 +00:00
Jakob Stoklund Olesen ca26e0acbb Use InterferenceCache in RegAllocGreedy.
llvm-svn: 128765
2011-04-02 06:03:38 +00:00
Jakob Stoklund Olesen 91cbcaf957 Add an InterferenceCache class for caching per-block interference ranges.
When the greedy register allocator is splitting multiple global live ranges, it
tends to look at the same interference data many times. The InterferenceCache
class caches queries for unaltered LiveIntervalUnions.

llvm-svn: 128764
2011-04-02 06:03:35 +00:00
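
A toy model of the caching idea, with names invented here rather than taken from InterferenceCache itself: per-block interference ranges are computed once per physical register and reused for as long as the underlying LiveIntervalUnion is unchanged, detected via a version tag.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct BlockInterference { unsigned First, Last; };  // simplified slot range

    struct UnionView {
      uint64_t Tag = 0;   // assumed to be bumped whenever the union changes
      std::vector<BlockInterference> compute() const {
        return {};        // stand-in for the expensive per-block query
      }
    };

    class InterferenceCacheSketch {
      struct Entry {
        uint64_t Tag = ~uint64_t(0);
        std::vector<BlockInterference> Blocks;
      };
      std::unordered_map<unsigned, Entry> Entries;  // keyed by physical register
    public:
      const std::vector<BlockInterference> &get(unsigned PhysReg, const UnionView &U) {
        Entry &E = Entries[PhysReg];
        if (E.Tag != U.Tag) {   // first query, or the union was altered
          E.Blocks = U.compute();
          E.Tag = U.Tag;
        }
        return E.Blocks;        // otherwise the cached ranges are reused
      }
    };
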
Jakob Stoklund Olesen dd9a2ecef7 Treat clones the same as their origin.
When DCE clones a live range because it separates into connected components,
make sure that the clones enter the same register allocator stage as the
register they were cloned from.

For instance, clones may be split even when they were created during spilling.
Other registers created during spilling are not candidates for splitting or even
(re-)spilling.

llvm-svn: 128524
2011-03-30 02:52:39 +00:00
Jakob Stoklund Olesen e991f728d6 Recompute register class and hint for registers created during spilling.
The spill weight is not recomputed for an unspillable register - it stays infinite.

llvm-svn: 128490
2011-03-29 21:20:19 +00:00
Jakob Stoklund Olesen 28d79cdeab Drop interference reassignment in favor of eviction.
The reassignment phase was able to move interference with a higher spill weight,
but it didn't happen very often and it was fairly expensive.

The existing interference eviction picks up the slack.

llvm-svn: 128397
2011-03-27 22:49:21 +00:00
Jakob Stoklund Olesen 8698507fe1 Add debug output.
llvm-svn: 127959
2011-03-19 23:02:47 +00:00
Jakob Stoklund Olesen e14b2b226f Add a LiveRangeEdit delegate callback before shrinking a live range.
The register allocator needs to adjust its live interval unions when that happens.

llvm-svn: 127774
2011-03-16 22:56:16 +00:00
Jakob Stoklund Olesen 557a82c099 Clarify debugging output.
llvm-svn: 127771
2011-03-16 22:56:08 +00:00
Jakob Stoklund Olesen 43a87501b3 Tell the register allocator about new unused virtual registers.
This allows the allocator to free any resources used by the virtual register,
including physical register assignments.

llvm-svn: 127560
2011-03-13 01:23:11 +00:00
Jakob Stoklund Olesen 4d6eafa138 Change the Spiller interface to take a LiveRangeEdit reference.
This makes it possible to register delegates and get callbacks when the spiller
edits live ranges.

llvm-svn: 127389
2011-03-10 01:51:42 +00:00
Jakob Stoklund Olesen c6cc485051 Make SpillIs an optional pointer. Avoid creating a bunch of temporary SmallVectors.
llvm-svn: 127388
2011-03-10 01:21:58 +00:00
Jakob Stoklund Olesen 8e089640e0 Add a LiveRangeEdit::Delegate protocol.
This will be used for keeping register allocator data structures up to date
while LiveRangeEdit is trimming live intervals.

llvm-svn: 127300
2011-03-09 00:57:29 +00:00
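
The delegate protocol is essentially an observer hook. A minimal sketch of the shape, with the callback names invented here (they are not necessarily the real LiveRangeEdit::Delegate methods): the edit notifies its owner before mutating a live range, so the allocator can update dependent structures such as the live interval unions a register is currently assigned into.

    // Hypothetical delegate interface for illustration only.
    struct EditDelegate {
      virtual ~EditDelegate() = default;
      // Called before a virtual register's live range is shrunk.
      virtual void willShrinkVirtReg(unsigned /*VirtReg*/) {}
      // Called when a virtual register turns out to have no uses left.
      virtual void didBecomeUnused(unsigned /*VirtReg*/) {}
    };

    struct LiveRangeEditSketch {
      EditDelegate *Delegate = nullptr;
      void shrink(unsigned VirtReg) {
        if (Delegate)
          Delegate->willShrinkVirtReg(VirtReg);  // allocator un-assigns first
        // ... the live range would be trimmed here ...
      }
    };
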
Jakob Stoklund Olesen 27f942fa60 Make the UselessRegs argument optional in the LiveRangeEdit constructor.
llvm-svn: 127181
2011-03-07 22:42:16 +00:00
Jakob Stoklund Olesen 1a9b66c752 Rework the global split cost calculation.
The global cost is the sum of block frequencies for spill code that must be
inserted because preferences weren't met.

llvm-svn: 127062
2011-03-05 03:28:51 +00:00
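
Expressed as a small piece of code (the per-block data is made up for the example): each block whose register preference the proposed split cannot honor will need spill code, and it is charged at that block's frequency.

    #include <vector>

    // Illustrative per-block outcome of a proposed global split.
    struct SplitBlock {
      float Frequency;  // estimated execution frequency of the block
      bool  WantsReg;   // this block prefers the value in a register
      bool  GetsReg;    // the proposed split actually keeps it in a register
    };

    // Sum the frequencies of all blocks whose preference was not met; those
    // are the blocks where spill code must be inserted.
    float globalSplitCost(const std::vector<SplitBlock> &Blocks) {
      float Cost = 0;
      for (const SplitBlock &B : Blocks)
        if (B.WantsReg != B.GetsReg)
          Cost += B.Frequency;
      return Cost;
    }
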
Jakob Stoklund Olesen 4b598e156a Compute the constraints for global live range splitting from an interference pattern.
This simplifies the code and makes it faster too.

The interference patterns are saved for each candidate register. They will be
reused for actually executing the split. Work in progress.

llvm-svn: 127054
2011-03-05 01:10:31 +00:00
Jakob Stoklund Olesen 05a2f5178e Extract a method. No functional change.
llvm-svn: 127040
2011-03-04 22:11:11 +00:00
Jakob Stoklund Olesen d7e1bb80a9 Go back to comparing spill weights when deciding if interference can be evicted.
It gives better results. Sometimes, a live range can be large and still have
high spill weight. Such a range should not be spilled.

llvm-svn: 127036
2011-03-04 21:32:50 +00:00
Jakob Stoklund Olesen d4f788952d Tweak debug output. No functional changes.
llvm-svn: 127006
2011-03-04 18:08:26 +00:00
Jakob Stoklund Olesen c332e727b4 Precompute block frequencies, pow() isn't free.
llvm-svn: 126975
2011-03-04 00:58:40 +00:00
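
The shape of this change, sketched with an assumed power-of-loop-depth weighting (the exact formula is not taken from the commit): pow() runs once per block up front, and the hot loops index the resulting table instead of recomputing the frequency on every query.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Build a per-block frequency table once. The 10^depth weighting here is
    // only an assumption standing in for the allocator's real heuristic.
    std::vector<float> precomputeBlockFrequencies(const std::vector<unsigned> &LoopDepth) {
      std::vector<float> Freq(LoopDepth.size());
      for (std::size_t I = 0; I != LoopDepth.size(); ++I)
        Freq[I] = std::pow(10.0f, static_cast<float>(LoopDepth[I]));
      return Freq;  // later code reads Freq[MBBNum] instead of calling pow()
    }
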
Jakob Stoklund Olesen 9a6382fc81 Cache basic block bounds instead of asking SlotIndexes::getMBBRange all the time.
This speeds up the greedy register allocator by 15%.
DenseMap is not as fast as one might hope.

llvm-svn: 126921
2011-03-03 03:41:29 +00:00
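
A sketch of why this helps (types invented here): the bounds live in a plain vector indexed by basic block number, so the hot path is a single array access rather than a DenseMap probe.

    #include <utility>
    #include <vector>

    struct SlotIndex { unsigned Idx = 0; };  // illustrative stand-in

    class BlockBoundsCache {
      std::vector<std::pair<SlotIndex, SlotIndex>> Bounds;  // per block number
    public:
      explicit BlockBoundsCache(std::vector<std::pair<SlotIndex, SlotIndex>> B)
          : Bounds(std::move(B)) {}
      // O(1) array indexing in the hot loop instead of a hash lookup.
      const std::pair<SlotIndex, SlotIndex> &blockBounds(unsigned MBBNum) const {
        return Bounds[MBBNum];
      }
    };

As the entry for llvm-svn 128821 above shows, this cache was dropped again once SlotIndexes could keep up, so the sketch captures an intermediate state.
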
Jakob Stoklund Olesen c96019886c Change the SplitEditor interface so a single instance can be shared for multiple splits.
llvm-svn: 126912
2011-03-03 01:29:13 +00:00
Jakob Stoklund Olesen ff07178789 Drop RAGreedy::trySpillInterferences().
This is a waste of time since we already know how to evict all interferences,
which is a better approach anyway.

llvm-svn: 126798
2011-03-01 23:14:48 +00:00
Jakob Stoklund Olesen 5f9f081d76 Keep track of which stage produced a live range, and bypass earlier stages when revisiting.
This effectively disables the 'turbo' functionality of the greedy register
allocator where all new live ranges created by splitting would be reconsidered
as if they were originals.

There are two reasons for doing this: 1. It guarantees that the algorithm
terminates. Early versions were prone to infinite looping in certain corner
cases. 2. It is a 2x speedup. We can skip a lot of unnecessary interference
checks that won't lead to good splitting anyway.

The problem is that region splitting only gets one shot, so it should probably
be changed to target multiple physical registers at once.

Local live range splitting is still 'turbo' enabled. It only accounts for a
small fraction of compile time, so it is probably not necessary to do anything
about that.

llvm-svn: 126781
2011-03-01 21:10:07 +00:00
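
A reduced model of the staging described above, with the enum values invented for illustration rather than copied from RAGreedy: each live range records the stage that produced it, and when it comes back off the queue the allocator skips every stage at or before that one, so a range produced by splitting cannot be fed back through the same splitter.

    // Hypothetical stage enum; the ordering encodes how far along a range is.
    enum class LiveRangeStage {
      New,          // original range, never processed
      RegionSplit,  // produced by global (region) splitting
      LocalSplit,   // produced by local splitting
      Spilled       // produced by the spiller: may only be assigned
    };

    struct RangeState { LiveRangeStage Stage = LiveRangeStage::New; };

    // Bypass any stage the range has already been through.
    bool mayRegionSplit(const RangeState &R) {
      return R.Stage < LiveRangeStage::RegionSplit;
    }
    bool maySpill(const RangeState &R) {
      return R.Stage < LiveRangeStage::Spilled;
    }
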
Jakob Stoklund Olesen 9918b33451 Try harder to get the hint by preferring to evict hint interference.
llvm-svn: 126463
2011-02-25 01:04:22 +00:00
Jakob Stoklund Olesen e68a27eecd Tweak the register allocator priority queue some more.
New live ranges are assigned in long -> short order, but live ranges that have
been evicted at least once are deferred and assigned in short -> long order.

Also disable splitting and spilling for live ranges seen for the first time.

The intention is to create a realistic interference pattern from the heavy live
ranges before starting splitting and spilling around it.

llvm-svn: 126451
2011-02-24 23:21:36 +00:00
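
One way to encode the ordering described above in a single priority key (names and bit layout invented for the example): fresh ranges sort above previously evicted ones, fresh ranges are taken longest-first, and deferred ranges are taken shortest-first.

    #include <cstdint>
    #include <queue>
    #include <vector>

    struct QueueItem {
      unsigned VirtReg;
      unsigned Size;        // approximate length of the live range
      bool     WasEvicted;  // has this range been evicted before?
    };

    // High bit: fresh ranges before deferred ones. Low 32 bits: long-first
    // for fresh ranges, short-first for previously evicted ones.
    uint64_t priorityOf(const QueueItem &I) {
      uint64_t Fresh = I.WasEvicted ? 0 : 1;
      uint64_t Key = I.WasEvicted ? (~uint64_t(I.Size) & 0xffffffffu) : I.Size;
      return (Fresh << 32) | Key;
    }

    struct ByPriority {
      bool operator()(const QueueItem &A, const QueueItem &B) const {
        return priorityOf(A) < priorityOf(B);  // max-heap on the key
      }
    };
    using AllocQueue =
        std::priority_queue<QueueItem, std::vector<QueueItem>, ByPriority>;
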
Jakob Stoklund Olesen b51f65c297 Keep track of how many times a live range has been dequeued, and prioritize new ranges.
When a large live range is evicted, it will usually be split when it comes
around again. By deferring evicted live ranges, the splitting happens at a time
when the interference pattern is more realistic. This prevents repeated
splitting and evictions.

llvm-svn: 126282
2011-02-23 00:56:56 +00:00
Jakob Stoklund Olesen 37de3235e5 Fix a bug in determining if there is only a single interfering register.
llvm-svn: 126277
2011-02-23 00:29:55 +00:00
Jakob Stoklund Olesen 6bd68cdffb Be more aggressive about evicting interference.
Use interval sizes instead of spill weights to determine if it is legal to evict
interference. A smaller interval can evict interference if all interfering live
ranges are larger.

Allow multiple interferences to be evicted as long as they are all larger than
the live range being allocated.

Spill weights are still used to select the preferred eviction candidate.

llvm-svn: 126276
2011-02-23 00:29:52 +00:00
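
The eviction rule above, reduced to a tiny check (the types here are invented, not the allocator's): every interfering range must be strictly larger than the candidate; spill weight is only accumulated so the caller can still prefer the cheapest legal eviction.

    #include <vector>

    struct LiveRangeView {
      unsigned Size;    // number of slots the range covers
      float    Weight;  // spill weight
    };

    // True if Cand may evict all of Interfering; TotalWeight lets the caller
    // pick the cheapest choice among the legal evictions.
    bool canEvictInterference(const LiveRangeView &Cand,
                              const std::vector<LiveRangeView> &Interfering,
                              float &TotalWeight) {
      TotalWeight = 0;
      for (const LiveRangeView &LI : Interfering) {
        if (LI.Size <= Cand.Size)  // never evict something that isn't larger
          return false;
        TotalWeight += LI.Weight;
      }
      return true;
    }
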
Jakob Stoklund Olesen 2329c542e9 Change the RAGreedy register assignment order so large live ranges are allocated first.
This is based on the observation that long live ranges are more difficult to
allocate, so there is a better chance of solving the puzzle by handling the big
pieces first. The allocator will evict and split long alive ranges when they get
in the way.

RABasic is still using spill weights for its priority queue, so the interface to
the queue has been virtualized.

llvm-svn: 126259
2011-02-22 23:01:52 +00:00
Jakob Stoklund Olesen 60a26a6578 Add SplitKit::isOriginalEndpoint and use it to force live range splitting to terminate.
An original endpoint is an instruction that killed or defined the original live
range before any live ranges were split.

When splitting global live ranges, avoid creating local live ranges without any
original endpoints. We may still create global live ranges without original
endpoints, but such a range won't be split again, and live range splitting still
terminates.

llvm-svn: 126151
2011-02-21 23:09:46 +00:00
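
The endpoint test can be pictured like this (data representation invented for the sketch): the def/kill slots of the original, pre-split live range are recorded once, and a proposed boundary counts as an original endpoint only if it hits one of those slots.

    #include <algorithm>
    #include <vector>

    struct OriginalEndpoints {
      std::vector<unsigned> Slots;  // sorted def/kill slots of the original range

      bool isOriginalEndpoint(unsigned Slot) const {
        return std::binary_search(Slots.begin(), Slots.end(), Slot);
      }
    };

A proposed local range whose boundaries touch none of these slots only exists because of earlier splits and is rejected, which is what keeps the splitting loop finite.
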
Jakob Stoklund Olesen f1a60a61ba Give SplitAnalysis a VRM member to access VirtRegMap::getOriginal().
llvm-svn: 126005
2011-02-19 00:53:42 +00:00
Jakob Stoklund Olesen 609bc44c2e Separate timers for local and global splitting.
llvm-svn: 126001
2011-02-19 00:38:40 +00:00
Jakob Stoklund Olesen 5bfec69b1d Add VirtRegMap::rewrite() and use it in the new register allocators.
The rewriter works almost identically to -rewriter=trivial, except it also
eliminates any identity copies.

This makes the new register allocators independent of VirtRegRewriter.cpp which
will be going away at the same time as RegAllocLinearScan.

llvm-svn: 125967
2011-02-18 22:03:18 +00:00
Jakob Stoklund Olesen 99827e861f Add basic register allocator statistics.
llvm-svn: 125789
2011-02-17 22:53:48 +00:00
Jakob Stoklund Olesen 93c8736abb Split local live ranges.
A local live range is live in a single basic block. If such a range fails to
allocate, try to find a sub-range that would get a larger spill weight than its
interference.

llvm-svn: 125764
2011-02-17 19:13:53 +00:00
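
The local-splitting test, very roughly (all values invented for the example): a candidate sub-range built around a cluster of uses is only worth creating if its own spill weight would beat the interference over the same slots.

    #include <vector>

    struct CandidateSubRange {
      float SpillWeight;         // weight the sub-range would get
      float InterferenceWeight;  // heaviest interference over the same slots
    };

    // Worth splitting locally if at least one sub-range out-weighs its
    // interference and could therefore win the register back.
    bool worthSplittingLocally(const std::vector<CandidateSubRange> &Cands) {
      for (const CandidateSubRange &C : Cands)
        if (C.SpillWeight > C.InterferenceWeight)
          return true;
      return false;
    }
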
Jakob Stoklund Olesen 77ba1cf286 Simplify using the new leaveIntvBefore()
llvm-svn: 125238
2011-02-09 23:33:02 +00:00
Jakob Stoklund Olesen b1b76adbd9 Move calcLiveBlockInfo() and the BlockInfo struct into SplitAnalysis.
No functional changes intended.

llvm-svn: 125231
2011-02-09 22:50:26 +00:00
Jakob Stoklund Olesen 1305bc0a65 Evict a lighter single interference before attempting to split a live range.
Registers are not allocated strictly in spill weight order when live range
splitting and spilling have created new shorter intervals with higher spill
weights.

When one of the new heavy intervals conflicts with a single lighter interval,
simply evict the old interval instead of trying to split the heavy one.

The lighter interval is a better candidate for splitting; it has a smaller use
density.

llvm-svn: 125151
2011-02-09 01:14:03 +00:00
Jakob Stoklund Olesen 5a9683b319 Fix one more case of splitting after the last split point.
llvm-svn: 125137
2011-02-08 23:26:48 +00:00
Jakob Stoklund Olesen f248b20d8c Reorganize interference code to check LastSplitPoint first.
The last split point can be anywhere in the block, so it interferes with the
strictly monotonic requirements of advanceTo().

llvm-svn: 125132
2011-02-08 23:02:58 +00:00