Commit Graph

182 Commits

Author SHA1 Message Date
Craig Topper 1d32658877 Use uint16_t to store register overlaps to reduce static data.
llvm-svn: 152001
2012-03-04 10:43:23 +00:00
Andrew Trick da84e64683 Clear virtual registers after they are no longer referenced.
Passes after RegAlloc should be able to rely on MRI->getNumVirtRegs() == 0.
This makes sharing code for pre/postRA passes more robust.
Now, to check if a pass is running before the RA pipeline begins, use MRI->isSSA().
To check if a pass is running after the RA pipeline ends, use !MRI->getNumVirtRegs().

PEI resets virtual regs when it's done scavenging.

PTX will either have to provide its own PEI pass or assign physregs.
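
A minimal sketch (not from this commit) of how a pass shared between the pre- and post-RA pipelines might branch on the two queries named above:

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"

    static void runSharedLogic(llvm::MachineFunction &MF) {
      llvm::MachineRegisterInfo &MRI = MF.getRegInfo();
      if (MRI.isSSA()) {
        // Before the RA pipeline: machine code is still in SSA form.
      } else if (MRI.getNumVirtRegs() == 0) {
        // After the RA pipeline: all virtual registers have been cleared.
      }
    }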

llvm-svn: 151032
2012-02-21 04:51:23 +00:00
Jakob Stoklund Olesen b0c0d340f8 Fix details in local live range splitting with regmasks.
Perform all comparisons at instruction granularity, and make sure
register masks on uses count in both gaps.

llvm-svn: 150530
2012-02-14 23:51:27 +00:00
Jakob Stoklund Olesen 17402e3d5a Handle register masks in local live range splitting.
Again the goal is to produce identical assembly with register mask
operands enabled.

llvm-svn: 150287
2012-02-11 00:42:18 +00:00
Jakob Stoklund Olesen a16ae59722 Add register mask support to InterferenceCache.
This makes global live range splitting behave identically with and
without register mask operands.

This is not necessarily the best way of using register masks for live
range splitting.  It would be more efficient to first split global live
ranges around calls (i.e., register masks), and reserve the fine grained
per-physreg interference guidance for global live ranges that do not
cross calls.

For now the goal is to produce identical assembly when enabling register
masks.

llvm-svn: 150259
2012-02-10 18:58:34 +00:00
Andrew Trick d3f8fe81f4 RegAlloc superpass: includes phi elimination, coalescing, and scheduling.
Creates a configurable regalloc pipeline.

Ensure specific llc options do what they say and nothing more: -regalloc=... has no effect other than selecting the allocator pass itself. This patch introduces a new umbrella flag, "-optimize-regalloc", to enable/disable the optimizing regalloc "superpass". This allows, for example, testing coalescing and scheduling under -O0, or vice versa.

When a CodeGen pass requires the MachineFunction to have a particular property, we need to explicitly define that property so it can be directly queried rather than naming a specific Pass. For example, to check for SSA, use MRI->isSSA, not addRequired<PHIElimination>.
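
A hedged illustration of that rule (the helper name here is made up):

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"
    #include <cassert>

    // Query the property the pass actually depends on, rather than writing
    // AU.addRequired<PHIElimination>() and naming the pass that happens to
    // establish it.
    static void checkExpectsSSA(llvm::MachineFunction &MF) {
      assert(MF.getRegInfo().isSSA() && "pass expects SSA machine code");
    }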

CodeGen transformation passes are never "required" as an analysis.

ProcessImplicitDefs does not require LiveVariables.

We have a plan to massively simplify some of the early passes within the regalloc superpass.

llvm-svn: 150226
2012-02-10 04:10:36 +00:00
Jakob Stoklund Olesen 4a6a0eec52 Add register mask support to RAGreedy.
This only adds the interference checks required for correctness.
We still need to take advantage of register masks for the
interference driven live range splitting.

llvm-svn: 150191
2012-02-09 18:25:05 +00:00
Andrew Trick e1c034fefe Renamed MachineScheduler to ScheduleTopDownLive.
Responding to code review.

llvm-svn: 148290
2012-01-17 06:55:03 +00:00
Andrew Trick 8093eac51d Moving options declarations around.
More short-term hackery until we have a way to configure passes that work on LiveIntervals.

llvm-svn: 148289
2012-01-17 06:54:59 +00:00
Andrew Trick e77e84e4b7 Added the MachineSchedulerPass skeleton.
llvm-svn: 148105
2012-01-13 06:30:30 +00:00
Jakob Stoklund Olesen 994fed689f Make SplitAnalysis::UseSlots private.
llvm-svn: 148031
2012-01-12 17:53:44 +00:00
Jakob Stoklund Olesen 20f19eb9ab Make data structures private.
llvm-svn: 147979
2012-01-11 23:19:08 +00:00
Jakob Stoklund Olesen 28df7ef8c9 Stop tracking spill slot uses in VirtRegMap.
Nobody cared; StackSlotColoring scans the instructions to find used stack
slots.

llvm-svn: 144485
2011-11-13 01:23:30 +00:00
Jakob Stoklund Olesen 559d4dcc16 Update split candidate correctly when interference cache is full.
No test case, spotted by inspection.

llvm-svn: 143407
2011-11-01 00:02:31 +00:00
Jakob Stoklund Olesen 811b9c475d Ignore the cloning of unknown registers.
The LRE_DidCloneVirtReg callback may be called with virtual registers
that RAGreedy doesn't even know about yet.  In that case, there are no
data structures to update.
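
Roughly, the fix amounts to a guard like this (a sketch; RAGreedy's per-register map is an IndexedMap, and the member name is paraphrased):

    // Ignore clones of registers RAGreedy has no record of yet.
    void RAGreedy::LRE_DidCloneVirtReg(unsigned New, unsigned Old) {
      if (!ExtraRegInfo.inBounds(Old))
        return;                              // unknown register, nothing to copy
      ExtraRegInfo.grow(New);                // make room for the clone
      ExtraRegInfo[New] = ExtraRegInfo[Old]; // clone inherits the old stage
    }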

llvm-svn: 139702
2011-09-14 17:34:37 +00:00
Jakob Stoklund Olesen 45df7e0f22 Remove the -compact-regions flag.
It has been enabled by default for a while; it was only there to allow
performance comparisons.

llvm-svn: 139501
2011-09-12 16:54:42 +00:00
Jakob Stoklund Olesen eecb2fb183 Add an interface for SplitKit complement spill modes.
SplitKit always computes a complement live range to cover the places
where the original live range was live, but no explicit region has been
allocated.

Currently, the complement live range is created to be as small as
possible - it never overlaps any of the regions.  This minimizes
register pressure, but if the complement is going to be spilled anyway,
that is not very important.  The spiller will eliminate redundant
spills, and hoist others by making the spill slot live range overlap
some of the regions created by splitting.  Stack slots are cheap.

This patch adds the interface to enable spill modes in SplitKit.  In
spill mode, SplitKit will assume that the complement is going to spill,
so it will allow it to overlap regions in order to avoid back-copies.
By doing some of the spiller's work early, the complement live range
becomes simpler.  In some cases, it can become much simpler because no
extra PHI-defs are required.  This will speed up both splitting and
spilling.

This is only the interface to enable spill modes, no implementation yet.
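
The interface surfaces as an enum on SplitEditor; its rough shape (a sketch based on SplitKit.h, comments paraphrased):

    class LiveRangeEdit;

    class SplitEditor {
    public:
      enum ComplementSpillMode {
        SM_Partition, // Default: keep the complement from overlapping regions.
        SM_Size,      // Assume the complement spills; overlap regions to
                      // minimize the number of inserted copies.
        SM_Speed      // Overlap regions to minimize the expected execution
                      // frequency of the inserted copies.
      };
      void reset(LiveRangeEdit &LRE, ComplementSpillMode SM = SM_Partition);
    };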

llvm-svn: 139500
2011-09-12 16:49:21 +00:00
Benjamin Kramer 4938edb02c Make a bunch of symbols private.
llvm-svn: 138025
2011-08-19 01:42:18 +00:00
Jakob Stoklund Olesen 4c9a2fb044 Refer to the RegisterCoalescer pass by ID.
A public interface is no longer needed since RegisterCoalescer is not an
analysis any more.

llvm-svn: 137082
2011-08-09 00:29:53 +00:00
Jakob Stoklund Olesen 22f37a1eb1 Fix typo. Thanks, Andy!
llvm-svn: 137023
2011-08-06 18:20:24 +00:00
Jakob Stoklund Olesen d4bb1d43e8 Reject RS_Spill ranges from local splitting as well.
All new local ranges are marked as RS_New now, so there is no need to
attempt splitting of RS_Spill ranges any more.

llvm-svn: 137002
2011-08-05 23:50:33 +00:00
Jakob Stoklund Olesen 02cf10bdfd Only mark remainder intervals as RS_Spill after per-block splitting.
The local ranges created get to stay in the RS_New stage, just like for
local and region splitting.

This gives tryLocalSplit a bit more freedom the first time it sees one
of these new local ranges.

llvm-svn: 137001
2011-08-05 23:50:31 +00:00
Jakob Stoklund Olesen 0de95ef7f5 Remember to update LiveDebugVariables after per-block splitting.
llvm-svn: 136996
2011-08-05 23:10:40 +00:00
Jakob Stoklund Olesen cef5d8ff77 Extract per-block splitting into its own method.
No functional change.

llvm-svn: 136994
2011-08-05 23:04:18 +00:00
Jakob Stoklund Olesen 58995bc551 Also use shouldSplitSingleBlock() in the fallback splitting mode.
Drop the use of SplitAnalysis::getMultiUseBlocks; there is no need to go
through a SmallPtrSet any more.

llvm-svn: 136992
2011-08-05 22:43:23 +00:00
Jakob Stoklund Olesen 8627ea91cb Split around single instructions to enable register class inflation.
Normally, we don't create a live range for a single instruction in a
basic block; the spiller does that anyway. However, when splitting a
live range that belongs to a proper register sub-class, inserting these
extra COPY instructions completely removes the constraints from the
remainder interval, and it may then be allocated from the larger super-class.

The spiller will mop up these small live ranges if we end up spilling
anyway. It calls them snippets.

llvm-svn: 136989
2011-08-05 22:20:45 +00:00
Jakob Stoklund Olesen 11b788d5be Enable compact region splitting by default.
This helps generate better code in functions with high register
pressure.

The previous version of compact region splitting caused regressions
because the regions were a bit too large. A stronger negative bias
applied in r136832 fixed this problem.

llvm-svn: 136836
2011-08-03 23:16:09 +00:00
Jakob Stoklund Olesen 869545203b Be more conservative when forming compact regions.
Apply twice the negative bias on transparent blocks when computing the
compact regions. This excludes loop backedges from the region when only
one of the loop blocks uses the register.

Previously, we would include the backedge in the region if the loop
preheader and the loop latch both used the register, but the loop header
didn't.

When both the header and latch blocks use the register, we still keep it
live on the backedge.

llvm-svn: 136832
2011-08-03 23:09:38 +00:00
Chandler Carruth 77eb5a0a37 Fix some warnings from Clang in release builds:
lib/CodeGen/RegAllocGreedy.cpp:1176:18: warning: unused variable 'B' [-Wunused-variable]
    if (unsigned B = Cand.getBundles(BundleCand, BestCand)) {
                 ^
lib/CodeGen/RegAllocGreedy.cpp:1188:18: warning: unused variable 'B' [-Wunused-variable]
    if (unsigned B = Cand.getBundles(BundleCand, 0)) {
                 ^

llvm-svn: 136831
2011-08-03 23:07:27 +00:00
Jakob Stoklund Olesen 3c14505164 Use the precomputed def presence in RAGreedy::calcSpillCost.
llvm-svn: 136742
2011-08-02 23:04:08 +00:00
Jakob Stoklund Olesen 057f9b68de Inform SpillPlacement about blocks with defs.
This information is not used for anything yet.

llvm-svn: 136741
2011-08-02 23:04:06 +00:00
Jakob Stoklund Olesen 43859a6ad2 Rename {First,Last}Use to {First,Last}Instr.
With a 'FirstDef' field right there, it is very confusing that FirstUse
refers to an instruction that may be a def.

llvm-svn: 136739
2011-08-02 22:54:14 +00:00
Jakob Stoklund Olesen 163e7a73f1 Time the emission of debug values.
llvm-svn: 136584
2011-07-31 03:53:42 +00:00
Jakob Stoklund Olesen eb5ea833ed Revert r136528 "Enable compact region splitting by default."
While this generally helped x86-64, there were some large regressions
for i386.

llvm-svn: 136571
2011-07-30 17:19:14 +00:00
Jakob Stoklund Olesen b5c2d3210c Enable compact region splitting by default.
This helps generate better code in functions with high register
pressure.

llvm-svn: 136528
2011-07-29 22:10:27 +00:00
Jakob Stoklund Olesen cad845f4c0 Reverse order of RS_Split live ranges under -compact-regions.
There are two conflicting strategies in play:

- Under high register pressure, we want to assign large live ranges
  first. Smaller live ranges are easier to place afterwards.

- Live range splitting is guided by interference, so splitting should be
  deferred until interference is as realistic as possible.

With the recent changes to the live range stages, and with compact
regions enabled, it is less traumatic to split a live range too early.
If some of the split products are too big, they can often be split
again.

By reversing the RS_Split order, we get this queue order:

1. Normal live ranges, large to small.
2. RS_Split live ranges, large to small.

The large-to-small order improves RAGreedy's puzzle solving skills under
high register pressure. It may cause a bit more iterated splitting, but
we handle that better now.

With this change, -compact-regions is mostly an improvement on SPEC.
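
A hedged sketch of the resulting two-tier queue priority (names invented; RAGreedy's real priority function has more inputs):

    // Higher priorities are popped first. Normal ranges get a high-order
    // bit so they all precede RS_Split ranges; within each tier, bigger
    // ranges come first.
    unsigned enqueuePriority(unsigned SizeInSlots, bool IsRSSplit) {
      unsigned Prio = SizeInSlots;   // large-to-small within a tier
      if (!IsRSSplit)
        Prio |= (1u << 29);          // normal live ranges go first
      return Prio;
    }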

llvm-svn: 136388
2011-07-28 20:48:23 +00:00
Jakob Stoklund Olesen dab4b9a4b2 Add support for multi-way live range splitting.
When splitting global live ranges, it is now possible to split for
multiple destination intervals at once. Previously, we only had the main
and stack intervals.

Each edge bundle is assigned to a split candidate, and splitAroundRegion
will insert copies between the candidate intervals and the stack
interval as needed.

The multi-way splitting is used to split around compact regions when
enabled with -compact-regions. The best candidate register still gets
all the bundles it wants, but everything outside the main interval is
first split around compact regions before we create single-block
intervals.

Compact region splitting still causes some regressions, so it is not
enabled by default.

llvm-svn: 136186
2011-07-26 23:41:46 +00:00
Jakob Stoklund Olesen 5387bd340b Revert to RS_Assign when a virtreg separates into components.
When dead code elimination deletes a PHI value, the virtual register may
split into multiple connected components. In that case, revert each
component to the RS_Assign stage.

The new components are guaranteed to be smaller (the original value
numbers are distributed among the components), so this will always be
making progress. The components are now allowed to evict other live
ranges or be split again.

llvm-svn: 136034
2011-07-26 00:54:56 +00:00
Jakob Stoklund Olesen 450111718c Add an RS_Split2 stage used for loop prevention.
This mechanism already exists, but the RS_Split2 stage makes it clearer.

When live range splitting creates ranges that may not be making
progress, they are marked RS_Split2 instead of RS_New. These ranges may
be split again, but only in a way that can be proven to make progress.

For local ranges, that means they must be split into ranges used by
strictly fewer instructions.

For global ranges, region splitting is bypassed and the RS_Split2
ranges go straight to per-block splitting.
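
Together with the stage rename in the next entry, the stage enum in RegAllocGreedy.cpp ends up looking roughly like this (a sketch; comments paraphrase the descriptions above):

    enum LiveRangeStage {
      RS_New,    // Never queued yet; may be assigned, evicted, or split.
      RS_Assign, // Only attempt assignment and eviction.
      RS_Split,  // Attempt live range splitting if assignment fails.
      RS_Split2, // Split again only in ways that provably make progress.
      RS_Spill,  // Don't split again; spill if assignment fails.
      RS_Done    // Nothing more can be done with this range.
    };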

llvm-svn: 135912
2011-07-25 15:25:43 +00:00
Jakob Stoklund Olesen 3ef8cf1370 Rename live range stages to better reflect how they are used.
The stage is used to control where a live range is going, not where it
is coming from. Live ranges created by splitting will usually be marked
RS_New, but some are marked RS_Spill to avoid wasting time trying to
split them again.

The old RS_Global and RS_Local stages are merged - they are really the
same thing for local and global live ranges.

llvm-svn: 135911
2011-07-25 15:25:41 +00:00
Jakob Stoklund Olesen ecad62f909 Add RAGreedy::calcCompactRegion.
This method computes the edge bundles that should be live when splitting
around a compact region. This is independent of interference.

The function returns false if the live range was already a compact
region, or the compact region doesn't have any live bundles - it would
be the same as splitting around basic blocks.

Compact regions are computed using the normal spill placement code. We
pretend there is interference in all live-through blocks that don't use
the live range. This removes all edges from the Hopfield network used
for spill placement, so it converges instantly.
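
Condensed to pseudocode, the flow reads roughly like this (method names follow RAGreedy, but this is a paraphrase, not the actual body):

    // Compute which edge bundles should be live in the compact region.
    bool calcCompactRegion(GlobalSplitCandidate &Cand) {
      Cand.reset(/*PhysReg=*/0);          // no physreg: no real interference
      SpillPlacer->prepare(Cand.LiveBundles);
      // Blocks with uses constrain placement as usual; live-through blocks
      // without uses are treated as pure interference, which strips the
      // Hopfield network of edges and makes it converge instantly.
      if (!addSplitConstraints(Cand.Intf, Cand.Cost))
        return false;
      growRegion(Cand);
      SpillPlacer->finish();
      return Cand.LiveBundles.any();      // false: same as per-block split
    }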

llvm-svn: 135847
2011-07-23 03:41:57 +00:00
Jakob Stoklund Olesen a953bf135f Prepare RAGreedy::growRegion for compact regions.
A split candidate can have a null PhysReg, which means that it doesn't
map to a real interference pattern. Instead, pretend that all through
blocks have interference.

This makes it possible to generate compact regions where the live range
doesn't go through blocks that don't use it. The live range will still
be live between directly connected blocks with uses.

Splitting around a compact region tends to produce a live range with a
high spill weight, so it may evict a less dense live range.

llvm-svn: 135845
2011-07-23 03:22:33 +00:00
Frits van Bommel 717d7edd3e Migrate LLVM and Clang to use the new makeArrayRef(...) functions where previously explicit non-default constructors were used.
Mostly mechanical with some manual reformatting.
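
The shape of the change, as a representative before/after (not a hunk from this commit):

    #include "llvm/ADT/ArrayRef.h"

    void example(const unsigned *Regs, size_t N) {
      // Before: explicit non-default constructor.
      llvm::ArrayRef<unsigned> A(Regs, N);
      // After: makeArrayRef deduces the element type.
      llvm::ArrayRef<unsigned> B = llvm::makeArrayRef(Regs, N);
      (void)A; (void)B;
    }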

llvm-svn: 135390
2011-07-18 12:00:32 +00:00
Jakub Staszak 6063549470 Remove unused LoopRanges from RegAllocGreedy.
llvm-svn: 135354
2011-07-16 20:43:00 +00:00
Jakob Stoklund Olesen 795da1c108 Extract parts of RAGreedy::splitAroundRegion as SplitKit methods.
This gets rid of some of the gory splitting details in RAGreedy and
makes them available to future SplitKit clients.

Slightly generalize the functionality to support multi-way splitting.
Specifically, SplitEditor::splitLiveThroughBlock() supports switching
between different register intervals in a block.

llvm-svn: 135307
2011-07-15 21:47:57 +00:00
Jakob Stoklund Olesen a153ca5885 Reapply r135121 with a fixed copy constructor.
Original commit message:

Count references to interference cache entries.

Each InterferenceCache::Cursor instance references a cache entry. A
non-zero reference count guarantees that the entry won't be reused for a
new register.

This makes it possible to have multiple live cursors examining
interference for different physregs.

The total number of live cursors into a cache must be kept below
InterferenceCache::getMaxCursors().

Code generation should be unaffected by this change, and it doesn't seem
to affect the cache replacement strategy either.
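
A minimal sketch of the reference-counting idea, including the copy constructor the reapply fixes (simplified; real cache entries carry much more state):

    struct Entry {
      unsigned PhysReg = 0;
      unsigned RefCount = 0;   // number of live cursors pinning this entry
    };

    class Cursor {
      Entry *CacheEntry = nullptr;
      void setEntry(Entry *E) {
        if (CacheEntry) --CacheEntry->RefCount;  // release the old entry
        CacheEntry = E;
        if (CacheEntry) ++CacheEntry->RefCount;  // pin the new entry
      }
    public:
      Cursor() = default;
      Cursor(const Cursor &O) { setEntry(O.CacheEntry); }  // the fixed copy ctor
      Cursor &operator=(const Cursor &O) { setEntry(O.CacheEntry); return *this; }
      ~Cursor() { setEntry(nullptr); }                     // release on destruction
    };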

llvm-svn: 135130
2011-07-14 05:35:11 +00:00
Jakob Stoklund Olesen 1d4badae74 Revert r135121 which broke a gcc-4.2 builder.
llvm-svn: 135122
2011-07-14 00:58:38 +00:00
Jakob Stoklund Olesen c270cb6e94 Count references to interference cache entries.
Each InterferenceCache::Cursor instance references a cache entry. A
non-zero reference count guarantees that the entry won't be reused for a
new register.

This makes it possible to have multiple live cursors examining
interference for different physregs.

The total number of live cursors into a cache must be kept below
InterferenceCache::getMaxCursors().

Code generation should be unaffected by this change, and it doesn't seem
to affect the cache replacement strategy either.

llvm-svn: 135121
2011-07-14 00:31:14 +00:00
Jakob Stoklund Olesen d7e9937175 Reapply r135074 and r135080 with a fix.
The cache entry referenced by the best split candidate could become
clobbered by an unsuccessful candidate.

The correct fix here is to use reference counts on the cache entries.
Coming up.

llvm-svn: 135113
2011-07-14 00:17:10 +00:00
Jakob Stoklund Olesen fae30b240b Revert r135074 and r135080. They broke clamscan.
llvm-svn: 135096
2011-07-13 22:20:09 +00:00