Commit Graph

528 Commits

Author SHA1 Message Date
Jakob Stoklund Olesen 3ff74d8e62 Keep track of register masks in LiveIntervalAnalysis.
Build an ordered vector of register mask operands (i.e., calls) when
computing live intervals. Provide a checkRegMaskInterference() function
that computes a bit mask of usable registers for a live range.

This is a quick way of determining whether a live range crosses any calls,
and of restricting it to the callee-saved registers if it does.
Previously, we had to discover call clobbers for each candidate register
independently.

llvm-svn: 150077
2012-02-08 17:33:45 +00:00
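
As a rough sketch of the idea in the entry above (hypothetical, simplified
types throughout; the real checkRegMaskInterference() in LiveIntervals has a
different signature), one pass over the sorted mask vector is enough, assuming
the live range's segments are also sorted:

  // Hypothetical stand-ins for LLVM's SlotIndex / LiveRange / register-mask
  // bookkeeping; this is not the real LiveIntervals API.
  #include <cstddef>
  #include <cstdint>
  #include <utility>
  #include <vector>

  using Slot = unsigned;                  // stand-in for a SlotIndex
  using Segment = std::pair<Slot, Slot>;  // half-open [start, end)

  // One entry per call site: the slot of the call and a bit mask where a set
  // bit means "this register is preserved across the call".
  struct RegMaskTable {
    std::vector<Slot> Slots;        // sorted ascending
    std::vector<uint32_t> Masks;    // parallel to Slots
  };

  // Returns false if the live range crosses no register mask; otherwise ANDs
  // every crossed mask into UsableRegs, mirroring the idea described above.
  bool checkRegMaskInterference(const std::vector<Segment> &LiveRange,
                                const RegMaskTable &RM, uint32_t &UsableRegs) {
    bool Found = false;
    std::size_t I = 0;
    for (const Segment &S : LiveRange) {
      while (I < RM.Slots.size() && RM.Slots[I] < S.first)
        ++I;                                       // mask lies before segment
      while (I < RM.Slots.size() && RM.Slots[I] < S.second) {
        if (!Found) {
          UsableRegs = ~0u;                        // start from "all usable"
          Found = true;
        }
        UsableRegs &= RM.Masks[I++];               // keep only preserved regs
      }
    }
    return Found;
  }
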
Andrew Trick 3bc0e0c651 Added MachineInstr::isBundled() to check if an instruction is part of a bundle.
llvm-svn: 150044
2012-02-08 02:17:25 +00:00
Jakob Stoklund Olesen abb26bae4e Drop the REDEF_BY_EC VNInfo flag.
A live range that has an early clobber tied redef now looks like a
normal tied redef, except the early clobber def uses the early clobber
slot.

This is enough to handle any strange interference problems.

llvm-svn: 149769
2012-02-04 05:51:25 +00:00
Jakob Stoklund Olesen e386578121 Correctly terminate a physreg redefined by an early clobber.
I don't have a test that fails because of this, but a test case like
CodeGen/X86/2009-12-01-EarlyClobberBug.ll exposes the problem.  EAX is
redefined by a tied early clobber operand on inline asm, and the live
range should look like this:

  %EAX,inf = [48r,64e:0)[64e,80r:1)  0@48r 1@64e

Previously, the two values got merged:

  %EAX,inf = [48r,80r:0)  0@48r

With this bug fixed, the REDEF_BY_EC VNInfo flag is no longer needed.

llvm-svn: 149768
2012-02-04 05:41:20 +00:00
Jakob Stoklund Olesen ad6b22eb16 Don't store COPY pointers in VNInfo.
If a value is defined by a COPY, that instruction can easily and cheaply
be found by getInstructionFromIndex(VNI->def).

This reduces the size of VNInfo from 24 to 16 bytes, and improves
llc compile time by 3%.

llvm-svn: 149763
2012-02-04 05:20:49 +00:00
Jakob Stoklund Olesen 22e490d908 Trim headers.
llvm-svn: 149722
2012-02-03 23:51:15 +00:00
Jakob Stoklund Olesen f798a0a0e6 Delete some dead code.
llvm-svn: 149717
2012-02-03 21:32:06 +00:00
Matt Beaumont-Gay 9cc6d524ea Here's a new one: GCC was complaining about an only-used-in-asserts
*function*. Wrap the function in #ifndef NDEBUG.

llvm-svn: 149259
2012-01-30 19:26:20 +00:00
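
A minimal, self-contained illustration of that pattern (the helper and its
body are invented for the example; only the #ifndef NDEBUG wrapping reflects
the fix):

  #include <cassert>

  #ifndef NDEBUG
  // Hypothetical asserts-only helper. In a release (NDEBUG) build the assert
  // below compiles away, so the function must be compiled out too or GCC
  // warns about an unused function.
  static bool isSorted(const int *Begin, const int *End) {
    if (Begin == End)
      return true;
    for (const int *I = Begin + 1; I != End; ++I)
      if (I[-1] > I[0])
        return false;
    return true;
  }
  #endif

  void consumeSorted(const int *Begin, const int *End) {
    assert(isSorted(Begin, End) && "input must be sorted");
    (void)Begin;   // silence unused-parameter warnings in NDEBUG builds
    (void)End;
  }
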
Lang Hames a6958c6c4e Silence warning about parens for && within ||
llvm-svn: 149152
2012-01-27 23:52:25 +00:00
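
For reference, the shape of the change that silences that warning (names and
condition are made up; the grouping is the point):

  // GCC's -Wparentheses fires when && appears inside || without explicit
  // grouping, even though && already binds tighter.
  static bool shouldVisit(bool IsDef, bool IsUse, bool ReadsReg) {
    return IsDef || (IsUse && ReadsReg);   // parentheses silence the warning
  }
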
Lang Hames ad33d5ace7 Add a "moveInstr" method to LiveIntervals. This can be used to move instructions
around within a basic block while maintaining live-intervals.

Updated ScheduleTopDownLive in MachineScheduler.cpp to use the moveInstr API
when reordering MIs.

llvm-svn: 149147
2012-01-27 22:36:19 +00:00
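
A toy analogue of what such an API has to guarantee (nothing here is the real
LiveIntervals::moveInstr; the container and index map are invented):

  // The point: moving an "instruction" and refreshing the position-keyed
  // bookkeeping happen in one operation, so liveness data keyed by the old
  // positions never goes stale.
  #include <algorithm>
  #include <cstddef>
  #include <string>
  #include <unordered_map>
  #include <vector>

  struct ToyBlock {
    std::vector<std::string> Instrs;                     // program order
    std::unordered_map<std::string, std::size_t> Index;  // liveness keys

    // To is the insertion position after MI has been removed from the block.
    void moveInstr(std::size_t From, std::size_t To) {
      std::string MI = Instrs[From];
      Instrs.erase(Instrs.begin() + From);
      Instrs.insert(Instrs.begin() + To, MI);
      // Re-key everything from the first affected position onward.
      for (std::size_t I = std::min(From, To); I < Instrs.size(); ++I)
        Index[Instrs[I]] = I;
    }
  };
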
Lang Hames f1508b78f9 Don't add live ranges for aliases of physregs that are live in to the
function. They don't appear to be used, and are inconsistent with handling of
other physreg intervals (i.e. intervals that are not live-in) where ranges are
not inserted for aliases.

llvm-svn: 148986
2012-01-25 22:11:06 +00:00
Lang Hames 19feb5f241 Always break upon finding a vreg operand (in Release as well as +Asserts). Remove assertion which can no longer trigger.
llvm-svn: 148984
2012-01-25 21:53:23 +00:00
Lang Hames 1997de0100 Fixed macro condition.
llvm-svn: 148408
2012-01-18 19:48:31 +00:00
Jakob Stoklund Olesen 67aec12409 Exclusively use SplitAnalysis::getLastSplitPoint().
Delete the alternative implementation in LiveIntervalAnalysis.

These functions computed the same thing, but SplitAnalysis caches the
result.

llvm-svn: 147911
2012-01-11 02:07:00 +00:00
Jakob Stoklund Olesen a8879087b5 Use the 'regalloc' debug tag for most register allocator tracing.
llvm-svn: 147725
2012-01-07 07:39:47 +00:00
Lang Hames c405ac4429 Clarified assert text.
llvm-svn: 147471
2012-01-03 20:05:57 +00:00
Evan Cheng 7f8e563a69 Add a bundle-aware API for querying instruction properties and switch the code
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.

For properties like mayLoad / mayStore, the API looks into the bundle and
returns true if any of the bundled instructions has the property.
For properties like isPredicable, it returns true only if *all* of the
bundled instructions have the property.
For properties like canFoldAsLoad and isCompare, it conservatively returns
false for bundles.

llvm-svn: 146026
2011-12-07 07:15:52 +00:00
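
A compact restatement of those three rules over a hypothetical bundle type
(the real implementation lives in MachineInstr and is not reproduced here):

  #include <vector>

  struct ToyInstr {
    bool MayLoad = false, MayStore = false, Predicable = false;
  };

  struct ToyBundle {
    std::vector<ToyInstr> Instrs;

    // "any of" semantics for side-effect-like properties.
    // mayStore() would mirror this over the MayStore flag.
    bool mayLoad() const {
      for (const ToyInstr &I : Instrs)
        if (I.MayLoad)
          return true;
      return false;
    }

    // "all of" semantics for properties that must hold for every member.
    bool isPredicable() const {
      if (Instrs.empty())
        return false;
      for (const ToyInstr &I : Instrs)
        if (!I.Predicable)
          return false;
      return true;
    }

    // Conservative "false" for properties like canFoldAsLoad / isCompare.
    bool canFoldAsLoad() const { return false; }
  };
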
Jakob Stoklund Olesen 7e6004a3c1 Fix early-clobber handling in shrinkToUses.
I broke this in r144515; it affected most ARM testers.

<rdar://problem/10441389>

llvm-svn: 144547
2011-11-14 18:45:38 +00:00
Jakob Stoklund Olesen 697979028f Use kill slots instead of the previous slot in shrinkToUses.
It's more natural to use the actual end points.

llvm-svn: 144515
2011-11-13 23:53:25 +00:00
Jakob Stoklund Olesen d8f2405e73 Terminate all dead defs at the dead slot instead of the 'next' slot.
This makes no difference for normal defs, but early clobber dead defs
now look like:

  [Slot_EarlyClobber; Slot_Dead)

instead of:

  [Slot_EarlyClobber; Slot_Register).

Live ranges for normal dead defs look like:

  [Slot_Register; Slot_Dead)

as before.

llvm-svn: 144512
2011-11-13 22:42:13 +00:00
Jakob Stoklund Olesen ce7cc08f3a Simplify early clobber slots a bit.
llvm-svn: 144507
2011-11-13 22:05:42 +00:00
Jakob Stoklund Olesen 90b5e565b6 Rename SlotIndexes to match how they are used.
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.

The load and store slots are not needed after the deferred spill code
insertion framework was deleted.

The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals.  In reality, these slots were used to distinguish
early-clobber defs from normal defs.

The new naming scheme also has 4 slots, but the names match how the
slots are really used.  This is a purely mechanical renaming, but some
of the code makes a lot more sense now.

llvm-svn: 144503
2011-11-13 20:45:27 +00:00
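
Read together with the early-clobber entries above, the four slots per
instruction can be sketched like this (a sketch only; Slot_Block is an
assumption by analogy, while the other three names appear verbatim in this
log):

  enum ToySlotKind {
    Slot_Block,         // block boundary, live-in values start here
    Slot_EarlyClobber,  // early-clobber defs
    Slot_Register,      // normal defs and uses
    Slot_Dead           // dead defs and kill points end here
  };

  // Examples from the entries above, as half-open ranges:
  //   normal dead def:         [Slot_Register,     Slot_Dead)
  //   early-clobber dead def:  [Slot_EarlyClobber, Slot_Dead)
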
Jakob Stoklund Olesen f61a6fe221 Delete the old spilling framework from LiveIntervalAnalysis.
This is dead code, all register allocators use InlineSpiller.

llvm-svn: 144478
2011-11-12 23:57:05 +00:00
Jakob Stoklund Olesen ccdfbfb5e5 Add a FIXME.
TwoAddressInstructionPass should annotate instructions with <undef>
flags when it lowers REG_SEQUENCE instructions.  LiveIntervals should not
be in the business of modifying code (except for kill flags, perhaps).

llvm-svn: 141187
2011-10-05 16:51:21 +00:00
Jakob Stoklund Olesen 10f2de3261 Allow <undef> flags on def operands as well as uses.
The <undef> flag says that a MachineOperand doesn't read its register,
or doesn't depend on the previous value of its register.

A full register def never depends on the previous register value.  A
partial register def may depend on the previous value if it is intended
to update part of a register.

For example:

  %vreg10:dsub_0<def,undef> = COPY %vreg1
  %vreg10:dsub_1<def> = COPY %vreg2

The first copy instruction defines the full %vreg10 register with the
bits not covered by dsub_0 defined as <undef>.  It is not considered a
read of %vreg10.

The second copy modifies part of %vreg10 while preserving the rest.  It
has an implicit read of %vreg10.

This patch adds a MachineOperand::readsReg() method to determine if an
operand reads its register.

Previously, this was modelled by adding a full-register <imp-def>
operand to the instruction.  This approach makes it possible to
determine directly from a MachineOperand if it reads its register.  No
scanning of MI operands is required.

llvm-svn: 141124
2011-10-04 21:49:33 +00:00
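
The rule the message states can be condensed into a toy predicate (an
illustration of the semantics, not the MachineOperand code):

  // A use reads its register unless marked <undef>; a def reads its register
  // only when it is a partial (sub-register) def that is not marked <undef>.
  struct ToyOperand {
    bool IsDef = false;
    bool IsUndef = false;
    unsigned SubReg = 0;   // 0 means "full register"

    bool readsReg() const {
      if (!IsDef)
        return !IsUndef;
      return SubReg != 0 && !IsUndef;
    }
  };
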
Jakob Stoklund Olesen b8b1d4c435 Speed up LiveIntervals::shrinkToUse with some caching.
Blocks with multiple PHI successors only need to go on the worklist
once.  Use a SmallPtrSet to track the live-out blocks that have already
been handled.  This is a lot faster than the two live range check we
would otherwise do.

Also stop recomputing hasPHIKill flags.  Like RenumberValues(), it is
conservatively correct to leave them in, and they are not used for
anything important.

llvm-svn: 139792
2011-09-15 15:24:16 +00:00
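
A sketch of that deduplication (toy types; the real code uses SmallPtrSet and
LLVM's block and worklist types):

  #include <unordered_set>
  #include <vector>

  struct ToyBlock;   // opaque for the sketch

  // Each live-out predecessor block goes on the worklist at most once.
  void enqueueLiveOutBlocks(const std::vector<ToyBlock *> &Preds,
                            std::vector<ToyBlock *> &WorkList,
                            std::unordered_set<ToyBlock *> &LiveOut) {
    for (ToyBlock *Pred : Preds)
      if (LiveOut.insert(Pred).second)   // true only on first insertion
        WorkList.push_back(Pred);        // enqueue exactly once
  }
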
Jakob Stoklund Olesen 0494c5c35d Switch extendInBlock() to take a kill slot instead of the last use slot.
Three out of four clients prefer this interface, which is consistent with
extendIntervalEndTo() and LiveRangeCalc::extend().

llvm-svn: 139604
2011-09-13 16:47:56 +00:00
Jakob Stoklund Olesen 38a0b94dce When a physreg is live-in and live through a basic block, make sure its live
range covers the entire block.

The live range can't be terminated at a random instruction.

llvm-svn: 130619
2011-04-30 19:12:33 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Jakob Stoklund Olesen f8beafe207 Don't add live ranges for sub-registers when clobbering a physical register.
Both coalescing and register allocation already check aliases for interference,
so these extra segments are only slowing us down.

This speeds up both linear scan and the greedy register allocator.

llvm-svn: 129283
2011-04-11 18:08:10 +00:00
Jakob Stoklund Olesen 64beb47783 Recompute hasPHIKill flags when shrinking live intervals.
PHI values may be deleted, causing the flags to be wrong. This fixes PR9616.

llvm-svn: 129092
2011-04-07 18:43:14 +00:00
Jakob Stoklund Olesen 2e85396509 Allow coalescing with reserved physregs in certain cases:
When a virtual register has a single value that is defined as a copy of a
reserved register, permit that copy to be joined. These virtual registers are
usually copies of the stack pointer:

  %vreg75<def> = COPY %ESP; GR32:%vreg75
  MOV32mr %vreg75, 1, %noreg, 0, %noreg, %vreg74<kill>
  MOV32mi %vreg75, 1, %noreg, 8, %noreg, 0
  MOV32mi %vreg75<kill>, 1, %noreg, 4, %noreg, 0
  CALLpcrel32 ...

Coalescing these virtual registers early decreases register pressure.
Previously, they were coalesced by RALinScan::attemptTrivialCoalescing after
register allocation was completed.

The lower register pressure causes the mcinst-lowering-cmp0.ll test case to fail
because it depends on linear scan spilling a particular register.

I am deleting 2008-08-05-SpillerBug.ll because it is counting the number of
instructions emitted, and its revision history shows the 'correct' count being
edited many times.

llvm-svn: 128845
2011-04-04 21:00:03 +00:00
NAKAMURA Takumi 41f32c7127 lib/CodeGen/LiveIntervalAnalysis.cpp: [PR9590] Don't use std::pow(float,float) here.
The real "powf()" may not be available on some hosts (while it is available
on others). For consistency, call std::pow(double,double) instead. Otherwise,
precision differences could make register allocation and stack coloring
unstable.

llvm-svn: 128629
2011-03-31 12:11:33 +00:00
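
The portable call pattern, sketched with placeholder constants (the actual
weighting code is not reproduced here; 0.9 and the loop-depth weighting are
invented for the example):

  #include <cmath>

  // Force the double overload, which every standard library provides, rather
  // than a float overload that may bottom out in a missing powf().
  inline float depthWeight(unsigned LoopDepth) {
    return static_cast<float>(std::pow(0.9, static_cast<double>(LoopDepth)));
  }
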
Jakob Stoklund Olesen fdc09941f2 Accept instructions that read undefined values.
This is not supposed to happen, but I have seen the x86 rematter getting
confused when rematerializing partial redefs.

llvm-svn: 127857
2011-03-18 03:06:04 +00:00
Jakob Stoklund Olesen 8630840c30 Dead code elimination may separate the live interval into multiple connected components.
I have convinced myself that it can only happen when a phi value dies. When it
happens, allocate new virtual registers for the components.

llvm-svn: 127827
2011-03-17 20:37:07 +00:00
Jakob Stoklund Olesen 557a82c099 Clarify debugging output.
llvm-svn: 127771
2011-03-16 22:56:08 +00:00
Jakob Stoklund Olesen c6cc485051 Make SpillIs an optional pointer. Avoid creating a bunch of temporary SmallVectors.
llvm-svn: 127388
2011-03-10 01:21:58 +00:00
Jakob Stoklund Olesen 71c380f6c7 Let shrinkToUses optionally return a list of now dead machine instructions.
llvm-svn: 127192
2011-03-07 23:29:10 +00:00
Jakob Stoklund Olesen ac32d8a691 Handle the special case of registers being redefined by early-clobber defs.
In this case, the value needs to be available at the load index instead of the
normal use index.

llvm-svn: 127167
2011-03-07 18:56:16 +00:00
Jakob Stoklund Olesen d58c8d12ab Fix PHI handling in LiveIntervals::shrinkToUses().
We need to wait until we meet a PHIDef in its defining block before resurrecting
PHIKills in the predecessors.

This should unbreak the llvm-gcc-build-x86_64-darwin10-x-mingw32-x-armeabi bot.

llvm-svn: 126905
2011-03-03 00:20:51 +00:00
Nick Lewycky 68faa2dbbe Quiet a compiler warning about unused variable 'ExtVNI'.
llvm-svn: 126815
2011-03-02 01:43:30 +00:00
Jakob Stoklund Olesen f3c6e9211c Simplify LiveIntervals::shrinkToUses() a bit by using the new extendInBlock().
llvm-svn: 126806
2011-03-02 00:33:03 +00:00
Jakob Stoklund Olesen 81eb18df34 Fix typo.
llvm-svn: 126805
2011-03-02 00:33:01 +00:00
Jakob Stoklund Olesen 1dd377d8c8 Move more fragments of spill weight calculation into CalcSpillWeights.h
Simplify the spill weight calculation a bit by bypassing
getApproximateInstructionCount() and using LiveInterval::getSize() directly.
This changes the computed spill weights, but only by a constant factor in each
function. It should not affect how spill weights compare against each other, and
so it shouldn't affect code generation.

llvm-svn: 125530
2011-02-14 23:15:38 +00:00
Jakob Stoklund Olesen b1b76adbd9 Move calcLiveBlockInfo() and the BlockInfo struct into SplitAnalysis.
No functional changes intended.

llvm-svn: 125231
2011-02-09 22:50:26 +00:00
Jakob Stoklund Olesen f2b16dc847 Add LiveIntervals::addKillFlags() to recompute kill flags after register allocation.
This is a lot easier than trying to get kill flags right during live range
splitting and rematerialization.

llvm-svn: 125113
2011-02-08 21:13:03 +00:00
Jakob Stoklund Olesen 55fc1d0b3e Add LiveIntervals::shrinkToUses().
After uses of a live range are removed, recompute the live range to only cover
the remaining uses. This is necessary after rematerializing the value before
some (but not all) uses.

llvm-svn: 125058
2011-02-08 00:03:05 +00:00
Jakob Stoklund Olesen e8ac8e93a1 Apparently, it is possible for a block with a landing pad successor to have no calls.
In that case we simply ignore the landing pad and split live ranges before the
first terminator.

llvm-svn: 124907
2011-02-04 23:11:13 +00:00
Jakob Stoklund Olesen 096bd8837f Add LiveIntervals::getLastSplitPoint().
A live range cannot be split everywhere in a basic block. A split must go before
the first terminator, and if the variable is live into a landing pad, the split
must happen before the call that can throw.

llvm-svn: 124894
2011-02-04 19:33:11 +00:00
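
Combining this entry with the one just above it, the rule can be sketched as
follows (toy types, not the SplitAnalysis/LiveIntervals interfaces):

  #include <cstddef>
  #include <vector>

  struct ToyMI {
    bool IsTerminator = false;
    bool IsCall = false;
  };

  // Returns an index into Block; Block.size() means "may split at the end".
  std::size_t getLastSplitPoint(const std::vector<ToyMI> &Block,
                                bool HasLandingPadSuccessor) {
    std::size_t FirstTerm = Block.size();
    for (std::size_t I = 0; I < Block.size(); ++I)
      if (Block[I].IsTerminator) {
        FirstTerm = I;
        break;
      }

    if (!HasLandingPadSuccessor)
      return FirstTerm;

    // Walk back from the first terminator to the call that can throw; if the
    // block has no call (see the entry just above), fall back to the first
    // terminator.
    for (std::size_t I = FirstTerm; I > 0; --I)
      if (Block[I - 1].IsCall)
        return I - 1;
    return FirstTerm;
  }
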
Jakob Stoklund Olesen 2fb5b31578 Simplify a bunch of isVirtualRegister() and isPhysicalRegister() logic.
These functions no longer assert when passed 0, but simply return false instead.

No functional change intended.

llvm-svn: 123155
2011-01-10 02:58:51 +00:00