Commit Graph

205 Commits

Author SHA1 Message Date
Bill Wendling a69d0aaa71 Remove unused #includes.
llvm-svn: 176467
2013-03-05 01:00:45 +00:00
Jakob Stoklund Olesen 3dd236cdd8 Limit the search space in RAGreedy::tryEvict().
When tryEvict() is looking for a cheaper register in the allocation
order, skip the tail of too expensive registers when possible.

llvm-svn: 172281
2013-01-12 00:57:44 +00:00
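
The idea behind the commit above can be illustrated with a small, purely hypothetical sketch in plain C++ (not the actual RAGreedy code): once allocation-order entries reach the cost limit, the rest of the tail cannot be cheaper, so the scan stops early.

    #include <functional>
    #include <vector>

    struct OrderEntry {
      unsigned PhysReg;    // candidate physical register
      unsigned CostPerUse; // target-defined cost of using it
    };

    // Assumes the allocation order lists cheap registers before expensive
    // ones, so the scan can stop at the first entry that reaches the limit
    // instead of walking the whole order. 'IsUsable' stands in for the real
    // eviction/interference checks.
    static unsigned findCheaperCandidate(
        const std::vector<OrderEntry> &Order, unsigned CostPerUseLimit,
        const std::function<bool(unsigned)> &IsUsable) {
      for (const OrderEntry &E : Order) {
        if (E.CostPerUse >= CostPerUseLimit)
          break;                       // tail of too-expensive registers
        if (IsUsable(E.PhysReg))
          return E.PhysReg;
      }
      return 0;                        // no cheaper register found
    }
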
Jakob Stoklund Olesen 3cb2cb800f Speed up the AllocationOrder class a bit.
Allow the central functions to be inlined, and use the argumentless
isHint() function when possible.

llvm-svn: 169319
2012-12-04 22:25:16 +00:00
Jakob Stoklund Olesen 74052b041b Add VirtRegMap::hasKnownPreference().
Virtual registers with a known preferred register are prioritized by
RAGreedy. This function makes the condition explicit without depending
on getRegAllocPref().

llvm-svn: 169179
2012-12-03 23:23:50 +00:00
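
A minimal, hypothetical sketch of how a priority computation could use the new query from the commit above (the helper name and the boost bit are made up; only VirtRegMap::hasKnownPreference() comes from the commit):

    #include "llvm/CodeGen/VirtRegMap.h"

    // Give virtual registers with a known preferred physical register a
    // priority boost so they are dequeued early, without going through
    // getRegAllocPref().
    static unsigned boostIfPreferred(const llvm::VirtRegMap &VRM,
                                     unsigned VirtReg, unsigned BasePriority) {
      if (VRM.hasKnownPreference(VirtReg))
        return BasePriority | (1u << 30);  // arbitrary high-priority bit
      return BasePriority;
    }
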
Chandler Carruth ed0881b2a6 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

llvm-svn: 169131
2012-12-03 16:50:05 +00:00
Jakob Stoklund Olesen 26c9d70d28 Make the LiveRegMatrix analysis available to targets.
No functional change, just moved header files.

Targets can inject custom passes between register allocation and
rewriting. This makes it possible to tweak the register allocation
before rewriting, using the full global interference checking available
from LiveRegMatrix.

llvm-svn: 168806
2012-11-28 19:13:06 +00:00
Evan Cheng b53825b82b Fix a significant recent(?) regression. StackSlotColoring no longer did anything
because LiveStackAnalysis was not preserved by VirtRegRewriter. This caused
a big stack usage regression in some cases.

rdar://12340383

llvm-svn: 164408
2012-09-21 20:04:28 +00:00
Dmitri Gribenko 881929c1b6 Fix a couple of Doxygen comment issues pointed out by -Wdocumentation.
llvm-svn: 163721
2012-09-12 16:59:47 +00:00
David Blaikie c8c2920a3f Tidy up a few more uses of MF.getFunction()->getName().
Based on CR feedback from r162301 and Craig Topper's refactoring in r162347,
here are a few other places that could use the same API (& in one instance drop
a Function.h dependency).

llvm-svn: 162367
2012-08-22 17:18:53 +00:00
Craig Topper a538d831e6 Add a getName function to MachineFunction. Use it in places that previously did getFunction()->getName(). Remove includes of Function.h that are no longer needed.
llvm-svn: 162347
2012-08-22 06:07:19 +00:00
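
A small before/after sketch of the API change in the commit above (the printing helper itself is hypothetical):

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/Support/raw_ostream.h"

    static void printFunctionName(const llvm::MachineFunction &MF) {
      // Before: went through the IR function, requiring Function.h:
      //   llvm::errs() << MF.getFunction()->getName() << "\n";
      // After: MachineFunction exposes the name directly.
      llvm::errs() << MF.getName() << "\n";
    }
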
David Blaikie 9c7226b456 Remove unnecessary cast that was also unnecessarily casting away constness.
Even looking at the revision history I couldn't quite piece together why this
cast was ever written in the first place, but I assume it was because of some
change in the inheritance, perhaps this function was reimplemented in a
derived type & this caller was meant to get the base version (& it wasn't
virtual)?

llvm-svn: 162301
2012-08-21 18:54:23 +00:00
Jakob Stoklund Olesen 2d2dec96e0 Remove LiveIntervalUnions from RegAllocBase.
They are living in LiveRegMatrix now.

llvm-svn: 158868
2012-06-20 22:52:29 +00:00
Jakob Stoklund Olesen 96eebf0b14 Convert RAGreedy to LiveRegMatrix interference checking.
Stop depending on the LiveIntervalUnions in RegAllocBase, they are about
to be removed.

The changes are mostly replacing register alias iterators with regunit
iterators, and querying LiveRegMatrix instead of RegAllocBase.

InterferenceCache is converted to work with per-regunit
LiveIntervalUnions, and it checks fixed regunit interference separately,
using the fixed live intervals provided by LiveIntervalAnalysis.

The local splitting helper calcGapWeights() also considers fixed
regunit interference, which is now kept on the side.

llvm-svn: 158867
2012-06-20 22:52:26 +00:00
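
The new query pattern described in the commit above might look roughly like this (a sketch under assumed 2012-era signatures, not the actual RAGreedy code):

    #include "llvm/CodeGen/LiveInterval.h"
    #include "llvm/CodeGen/LiveRegMatrix.h"

    // Instead of walking RegAllocBase's LiveIntervalUnions with register
    // alias iterators, ask LiveRegMatrix, which checks per-regunit unions
    // and fixed (reserved/regmask) interference internally.
    static bool tryAssign(llvm::LiveRegMatrix &Matrix,
                          llvm::LiveInterval &VirtReg, unsigned PhysReg) {
      if (Matrix.checkInterference(VirtReg, PhysReg) !=
          llvm::LiveRegMatrix::IK_Free)
        return false;
      Matrix.assign(VirtReg, PhysReg);
      return true;
    }
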
Jakob Stoklund Olesen be336295cd Also compute MBB live-in lists in the new rewriter pass.
This deduplicates some code from the optimizing register allocators, and
it means that it is now possible to change the register allocators'
solutions simply by editing the VirtRegMap between the register
allocator pass and the rewriter.

llvm-svn: 158249
2012-06-09 00:14:47 +00:00
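
As an illustration of the flexibility described above, a pass inserted between the allocator and the rewriter could adjust an assignment directly in VirtRegMap (the helper is hypothetical; the VirtRegMap calls are assumed era API):

    #include "llvm/CodeGen/VirtRegMap.h"

    // Replace the allocator's choice for one virtual register; the rewriter
    // will then emit NewPhysReg and recompute the MBB live-in lists.
    static void overrideAssignment(llvm::VirtRegMap &VRM, unsigned VirtReg,
                                   unsigned NewPhysReg) {
      if (VRM.hasPhys(VirtReg))
        VRM.clearVirt(VirtReg);            // drop the existing assignment
      VRM.assignVirt2Phys(VirtReg, NewPhysReg);
    }
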
Jakob Stoklund Olesen 1224312f5b Reintroduce VirtRegRewriter.
OK, not really. We don't want to reintroduce the old rewriter hacks.

This patch extracts virtual register rewriting as a separate pass that
runs after the register allocator. This is possible now that
CodeGen/Passes.cpp can configure the full optimizing register allocator
pipeline.

The rewriter pass uses register assignments in VirtRegMap to rewrite
virtual registers to physical registers, and it inserts kill flags based
on live intervals.

These finalization steps are the same for the optimizing register
allocators: RABasic, RAGreedy, and PBQP.

llvm-svn: 158244
2012-06-08 23:44:45 +00:00
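
Conceptually, the rewrite step described above boils down to something like the following sketch (greatly simplified and hypothetical; header paths and the static isVirtualRegister() check are as of that era, and the real pass also handles subregister indices, kill flags, and identity copies):

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/VirtRegMap.h"
    #include "llvm/Target/TargetRegisterInfo.h"

    // Walk every operand and replace virtual registers with the physical
    // register recorded in VirtRegMap (assumes every virtreg is assigned).
    static void rewriteVirtRegs(llvm::MachineFunction &MF,
                                const llvm::VirtRegMap &VRM) {
      for (llvm::MachineBasicBlock &MBB : MF)
        for (llvm::MachineInstr &MI : MBB)
          for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
            llvm::MachineOperand &MO = MI.getOperand(i);
            if (!MO.isReg() ||
                !llvm::TargetRegisterInfo::isVirtualRegister(MO.getReg()))
              continue;
            MO.setReg(VRM.getPhys(MO.getReg()));
          }
    }
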
Benjamin Kramer 009b1c1cf1 Round 2 of dead private variable removal.
LLVM is now -Wunused-private-field clean except for
- lib/MC/MCDisassembler/Disassembler.h. Not sure why it keeps all those inaccessible fields.
- gtest.

llvm-svn: 158096
2012-06-06 19:47:08 +00:00
Jakob Stoklund Olesen 54038d796c Switch all register list clients to the new MC*Iterator interface.
No functional change intended.

Sorry for the churn. The iterator classes are supposed to help avoid
giant commits like this one in the future. The TableGen-produced
register lists are getting quite large, and it may be necessary to
change the table representation.

This makes it possible to do so without changing all clients (again).

llvm-svn: 157854
2012-06-01 23:28:30 +00:00
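
The new iterator style named in the commit above looks roughly like this (the helper is hypothetical; MCRegAliasIterator is the interface the commit switches to):

    #include "llvm/ADT/BitVector.h"
    #include "llvm/MC/MCRegisterInfo.h"

    // Walk all registers aliasing PhysReg (including PhysReg itself) without
    // touching the underlying TableGen-produced tables directly.
    static bool overlapsReserved(unsigned PhysReg,
                                 const llvm::MCRegisterInfo &MCRI,
                                 const llvm::BitVector &Reserved) {
      for (llvm::MCRegAliasIterator AI(PhysReg, &MCRI, /*IncludeSelf=*/true);
           AI.isValid(); ++AI)
        if (Reserved.test(*AI))
          return true;
      return false;
    }
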
Jakob Stoklund Olesen 05e2245fc6 Prioritize smaller register classes for urgent evictions.
It helps compile exotic inline asm. In the test case, normal GR32
virtual registers use up eax-edx so the final GR32_ABCD live range has
no registers left. Since all the live ranges were tiny, we had no way of
prioritizing the smaller register class.

This patch allows tiny unspillable live ranges to be evicted by tiny
unspillable live ranges from a smaller register class.

<rdar://problem/11542429>

llvm-svn: 157715
2012-05-30 21:46:58 +00:00
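
A plain, hypothetical sketch of the eviction rule described above (both the struct and the helper are made up for illustration):

    // A tiny unspillable live range may evict another tiny unspillable live
    // range only if it comes from a strictly smaller (more constrained)
    // register class; otherwise the usual cost-based rules apply.
    struct RangeInfo {
      bool Unspillable;    // tiny live range that cannot be spilled
      unsigned ClassSize;  // allocatable registers in its register class
    };

    static bool canEvictUrgent(const RangeInfo &Evictor,
                               const RangeInfo &Evictee) {
      if (!Evictor.Unspillable || !Evictee.Unspillable)
        return false;
      return Evictor.ClassSize < Evictee.ClassSize;
    }
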
Jakob Stoklund Olesen 0ce90494e6 Add a last resort tryInstructionSplit() to RAGreedy.
Live ranges with a constrained register class may benefit from splitting
around individual uses. It allows the remaining live range to use a
larger register class where it may allocate. This is like spilling to a
different register class.

This is only attempted on constrained register classes.

<rdar://problem/11438902>

llvm-svn: 157354
2012-05-23 22:37:27 +00:00
Jakob Stoklund Olesen e5bbe37950 Allow LiveRangeEdit to be created with a NULL parent.
The dead code elimination with callbacks is still useful.

llvm-svn: 157100
2012-05-19 05:25:46 +00:00
Pete Cooper 3ca96f9950 Moved LiveRangeEdit.h so that it can be called from other parts of the backend, not just libCodeGen
llvm-svn: 153906
2012-04-02 22:44:18 +00:00
Jakob Stoklund Olesen 291007b055 Allocate virtual registers in ascending order.
This is just the fallback tie-breaker ordering; the main allocation
order is still descending size.

Patch by Shamil Kurmangaleev!

llvm-svn: 153904
2012-04-02 22:30:39 +00:00
Pete Cooper 2bde2f42b1 Refactored the LiveRangeEdit interface so that MachineFunction, TargetInstrInfo, MachineRegisterInfo, LiveIntervals, and VirtRegMap are all passed into the constructor and stored as members instead of passed in to each method.
llvm-svn: 153903
2012-04-02 22:22:53 +00:00
Craig Topper 1d32658877 Use uint16_t to store register overlaps to reduce static data.
llvm-svn: 152001
2012-03-04 10:43:23 +00:00
Andrew Trick da84e64683 Clear virtual registers after they are no longer referenced.
Passes after RegAlloc should be able to rely on MRI->getNumVirtRegs() == 0.
This makes sharing code for pre/postRA passes more robust.
Now, to check if a pass is running before the RA pipeline begins, use MRI->isSSA().
To check if a pass is running after the RA pipeline ends, use !MRI->getNumVirtRegs().

PEI resets virtual regs when it's done scavenging.

PTX will either have to provide its own PEI pass or assign physregs.

llvm-svn: 151032
2012-02-21 04:51:23 +00:00
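
The two checks named in the commit message above translate directly into code; the wrapper functions below are hypothetical, the MachineRegisterInfo queries are not:

    #include "llvm/CodeGen/MachineRegisterInfo.h"

    // True before the register allocation pipeline starts: machine code is
    // still in SSA form.
    static bool runsBeforeRegAlloc(const llvm::MachineRegisterInfo &MRI) {
      return MRI.isSSA();
    }

    // True after the pipeline ends: all virtual registers have been cleared.
    static bool runsAfterRegAlloc(const llvm::MachineRegisterInfo &MRI) {
      return MRI.getNumVirtRegs() == 0;
    }
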
Jakob Stoklund Olesen b0c0d340f8 Fix details in local live range splitting with regmasks.
Perform all comparisons at instruction granularity, and make sure
register masks on uses count in both gaps.

llvm-svn: 150530
2012-02-14 23:51:27 +00:00
Jakob Stoklund Olesen 17402e3d5a Handle register masks in local live range splitting.
Again the goal is to produce identical assembly with register mask
operands enabled.

llvm-svn: 150287
2012-02-11 00:42:18 +00:00
Jakob Stoklund Olesen a16ae59722 Add register mask support to InterferenceCache.
This makes global live range splitting behave identically with and
without register mask operands.

This is not necessarily the best way of using register masks for live
range splitting.  It would be more efficient to first split global live
ranges around calls (i.e., register masks), and reserve the fine grained
per-physreg interference guidance for global live ranges that do not
cross calls.

For now the goal is to produce identical assembly when enabling register
masks.

llvm-svn: 150259
2012-02-10 18:58:34 +00:00
Andrew Trick d3f8fe81f4 RegAlloc superpass: includes phi elimination, coalescing, and scheduling.
Creates a configurable regalloc pipeline.

Ensure specific llc options do what they say and nothing more: -regalloc=... has no effect other than selecting the allocator pass itself. This patch introduces a new umbrella flag, "-optimize-regalloc", to enable/disable the optimizing regalloc "superpass". This allows, for example, testing coalescing and scheduling under -O0 or vice versa.

When a CodeGen pass requires the MachineFunction to have a particular property, we need to explicitly define that property so it can be directly queried rather than naming a specific Pass. For example, to check for SSA, use MRI->isSSA, not addRequired<PHIElimination>.

CodeGen transformation passes are never "required" as an analysis.

ProcessImplicitDefs does not require LiveVariables.

We have a plan to massively simplify some of the early passes within the regalloc superpass.

llvm-svn: 150226
2012-02-10 04:10:36 +00:00
Jakob Stoklund Olesen 4a6a0eec52 Add register mask support to RAGreedy.
This only adds the interference checks required for correctness.
We still need to take advantage of register masks for the
interference driven live range splitting.

llvm-svn: 150191
2012-02-09 18:25:05 +00:00
Andrew Trick e1c034fefe Renamed MachineScheduler to ScheduleTopDownLive.
Responding to code review.

llvm-svn: 148290
2012-01-17 06:55:03 +00:00
Andrew Trick 8093eac51d Moving options declarations around.
More short term hackery until we have a way to configure passes that work on LiveIntervals.

llvm-svn: 148289
2012-01-17 06:54:59 +00:00
Andrew Trick e77e84e4b7 Added the MachineSchedulerPass skeleton.
llvm-svn: 148105
2012-01-13 06:30:30 +00:00
Jakob Stoklund Olesen 994fed689f Make SplitAnalysis::UseSlots private.
llvm-svn: 148031
2012-01-12 17:53:44 +00:00
Jakob Stoklund Olesen 20f19eb9ab Make data structures private.
llvm-svn: 147979
2012-01-11 23:19:08 +00:00
Jakob Stoklund Olesen 28df7ef8c9 Stop tracking spill slot uses in VirtRegMap.
Nobody cared; StackSlotColoring scans the instructions to find used stack
slots.

llvm-svn: 144485
2011-11-13 01:23:30 +00:00
Jakob Stoklund Olesen 559d4dcc16 Update split candidate correctly when interference cache is full.
No test case, spotted by inspection.

llvm-svn: 143407
2011-11-01 00:02:31 +00:00
Jakob Stoklund Olesen 811b9c475d Ignore the cloning of unknown registers.
The LRE_DidCloneVirtReg callback may be called with virtual registers
that RAGreedy doesn't even know about yet.  In that case, there are no
data structures to update.

llvm-svn: 139702
2011-09-14 17:34:37 +00:00
Jakob Stoklund Olesen 45df7e0f22 Remove the -compact-regions flag.
It has been enabled by default for a while, it was only there to allow
performance comparisons.

llvm-svn: 139501
2011-09-12 16:54:42 +00:00
Jakob Stoklund Olesen eecb2fb183 Add an interface for SplitKit complement spill modes.
SplitKit always computes a complement live range to cover the places
where the original live range was live, but no explicit region has been
allocated.

Currently, the complement live range is created to be as small as
possible - it never overlaps any of the regions.  This minimizes
register pressure, but if the complement is going to be spilled anyway,
that is not very important.  The spiller will eliminate redundant
spills, and hoist others by making the spill slot live range overlap
some of the regions created by splitting.  Stack slots are cheap.

This patch adds the interface to enable spill modes in SplitKit.  In
spill mode, SplitKit will assume that the complement is going to spill,
so it will allow it to overlap regions in order to avoid back-copies.
By doing some of the spiller's work early, the complement live range
becomes simpler.  In some cases, it can become much simpler because no
extra PHI-defs are required.  This will speed up both splitting and
spilling.

This is only the interface to enable spill modes, no implementation yet.

llvm-svn: 139500
2011-09-12 16:49:21 +00:00
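
A hedged sketch of how the interface introduced above might be used (SplitKit.h and LiveRangeEdit.h were private lib/CodeGen headers at the time, and the exact reset() signature and enumerator names are assumptions):

    #include "LiveRangeEdit.h"
    #include "SplitKit.h"

    // Tell SplitKit up front that the complement is expected to spill, so it
    // may overlap the split regions and avoid back-copies.
    static void beginSplitAssumingSpill(llvm::SplitEditor &SE,
                                        llvm::LiveRangeEdit &LREdit) {
      SE.reset(LREdit, llvm::SplitEditor::SM_Size);
      // ... interval creation and SE.finish() would follow ...
    }
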
Benjamin Kramer 4938edb02c Make a bunch of symbols private.
llvm-svn: 138025
2011-08-19 01:42:18 +00:00
Jakob Stoklund Olesen 4c9a2fb044 Refer to the RegisterCoalescer pass by ID.
A public interface is no longer needed since RegisterCoalescer is not an
analysis any more.

llvm-svn: 137082
2011-08-09 00:29:53 +00:00
Jakob Stoklund Olesen 22f37a1eb1 Fix typo. Thanks, Andy!
llvm-svn: 137023
2011-08-06 18:20:24 +00:00
Jakob Stoklund Olesen d4bb1d43e8 Reject RS_Spill ranges from local splitting as well.
All new local ranges are marked as RS_New now, so there is no need to
attempt splitting of RS_Spill ranges any more.

llvm-svn: 137002
2011-08-05 23:50:33 +00:00
Jakob Stoklund Olesen 02cf10bdfd Only mark remainder intervals as RS_Spill after per-block splitting.
The local ranges created get to stay in the RS_New stage, just like for
local and region splitting.

This gives tryLocalSplit a bit more freedom the first time it sees one
of these new local ranges.

llvm-svn: 137001
2011-08-05 23:50:31 +00:00
Jakob Stoklund Olesen 0de95ef7f5 Remember to update LiveDebugVariables after per-block splitting.
llvm-svn: 136996
2011-08-05 23:10:40 +00:00
Jakob Stoklund Olesen cef5d8ff77 Extract per-block splitting into its own method.
No functional change.

llvm-svn: 136994
2011-08-05 23:04:18 +00:00
Jakob Stoklund Olesen 58995bc551 Also use shouldSplitSingleBlock() in the fallback splitting mode.
Drop the use of SplitAnalysis::getMultiUseBlocks, there is no need to go
through a SmallPtrSet any more.

llvm-svn: 136992
2011-08-05 22:43:23 +00:00
Jakob Stoklund Olesen 8627ea91cb Split around single instructions to enable register class inflation.
Normally, we don't create a live range for a single instruction in a
basic block, the spiller does that anyway. However, when splitting a
live range that belongs to a proper register sub-class, inserting these
extra COPY instructions completely removes the constraints from the
remainder interval, and it may be allocated from the larger super-class.

The spiller will mop up these small live ranges if we end up spilling
anyway. It calls them snippets.

llvm-svn: 136989
2011-08-05 22:20:45 +00:00
Jakob Stoklund Olesen 11b788d5be Enable compact region splitting by default.
This helps generate better code in functions with high register
pressure.

The previous version of compact region splitting caused regressions
because the regions were a bit too large. A stronger negative bias
applied in r136832 fixed this problem.

llvm-svn: 136836
2011-08-03 23:16:09 +00:00