Commit Graph

51 Commits

Author SHA1 Message Date
David Blaikie 70573dcd9f Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool>
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.

This led to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...

llvm-svn: 222334
2014-11-19 07:49:26 +00:00
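
To make the convention concrete, here is a small standalone sketch (plain standard-library code, not LLVM's SetVector/SmallSet themselves) of insert() returning pair<iterator, bool>, and of a wrapper that simply forwards that bool from its underlying set:

```cpp
#include <cassert>
#include <set>
#include <utility>
#include <vector>

int main() {
  std::set<int> S;
  auto R1 = S.insert(42);             // first insertion: element added
  assert(R1.second && *R1.first == 42);
  auto R2 = S.insert(42);             // duplicate: nothing added
  assert(!R2.second && *R2.first == 42);

  // A SetVector-like wrapper can forward the bool from its underlying
  // set while recording insertion order.
  std::vector<int> Order;
  std::set<int> Seen;
  auto InsertOrdered = [&](int V) -> std::pair<std::set<int>::iterator, bool> {
    auto R = Seen.insert(V);
    if (R.second)
      Order.push_back(V);
    return R;
  };
  InsertOrdered(1);
  InsertOrdered(1);
  assert(Order.size() == 1);
  return 0;
}
```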
Arnaud A. de Grandmaison 829dd81377 [PBQP] Tweak spill costs and coalescing benefits
This patch improves how the different costs (register, interference, spill
and coalescing) relate to each other. The assumption is now that:
 - coalescing benefits (or any other "side effect" of reg alloc) are negative,
   and instead of being derived from a spill cost, they use the block
   frequency info.
 - spill costs are in the [MinSpillCost, +inf) range
 - register or interference costs are in [0.0, MinSpillCost) or are +inf

The current MinSpillCost is set to 10.0, which is an arbitrary value high
enough that the current constraint builders do not need to worry about
it when setting costs. It would however be worth adding a normalization
step for register and interference costs as the last step in the
constraint builder chain to ensure they are not greater than MinSpillCost
(unless this makes sense for some architectures). This would work well
with the current builder pipeline, where all costs are tweaked relative
to each other, but could grow above MinSpillCost if the pipeline is
deep enough.

The current heuristic is tuned to depend on the number of uses of
a live interval rather than on the density of uses, as used by the greedy
allocator. This heuristic provides a few percent improvement on a number
of benchmarks (eembc, spec, ...) and will definitely need to change once
spill placement is implemented: the current spill placement is really
inefficient, so making the cost proportional to the number of uses is a
clear win.

llvm-svn: 221292
2014-11-04 20:51:24 +00:00
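
As an illustration of the cost ranges described above, a hypothetical normalization step at the end of a constraint-builder chain might look like the following (the names and the clamping step are assumptions, not the actual PBQP code):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// Spill costs live in [MinSpillCost, +inf); finite register and
// interference costs are expected to stay below MinSpillCost.
constexpr float MinSpillCost = 10.0f;
const float Infinity = std::numeric_limits<float>::infinity();

// Clamp finite register/interference costs below MinSpillCost so that
// stacked tweaks from a deep builder pipeline cannot accidentally
// outweigh spilling.
float normalizeRegisterCost(float Cost) {
  if (Cost == Infinity)
    return Cost;  // "never allocate here" stays infinite
  return std::min(Cost, std::nextafter(MinSpillCost, 0.0f));
}
```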
Eric Christopher 307c2cb26f Remove unnecessary TargetMachine.h includes.
llvm-svn: 219672
2014-10-14 07:22:08 +00:00
Eric Christopher fc6de428c8 Have MachineFunction cache a pointer to the subtarget to make lookups
shorter/easier and have the DAG use that to do the same lookup. This
can be used in the future for TargetMachine based caching lookups from
the MachineFunction easily.

Update the MIPS subtarget switching machinery to update this pointer
at the same time it runs.

llvm-svn: 214838
2014-08-05 02:39:49 +00:00
Eric Christopher d913448b38 Remove the TargetMachine forwards for TargetSubtargetInfo based
information and update all callers. No functional change.

llvm-svn: 214781
2014-08-04 21:25:23 +00:00
Chandler Carruth 1b9dde087e [Modules] Remove potential ODR violations by sinking the DEBUG_TYPE
define below all header includes in the lib/CodeGen/... tree. While the
current modules implementation doesn't check for this kind of ODR
violation yet, it is likely to grow support for it in the future. It
also removes one layer of macro pollution across all the included
headers.

Other sub-trees will follow.

llvm-svn: 206837
2014-04-22 02:02:50 +00:00
Duncan P. N. Exon Smith 7af3432e22 CalcSpillWeights: Hack to prevent x87 nonsense
This gross hack forces `hweight` into memory, preventing hidden
precision from making `1 > 1` occasionally equal `true`.

<rdar://problem/14292693>

llvm-svn: 206765
2014-04-21 17:57:01 +00:00
Craig Topper c0196b1b40 [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
llvm-svn: 206142
2014-04-14 00:51:57 +00:00
Owen Anderson abb90c9ddb Phase 1 of refactoring the MachineRegisterInfo iterators to make them suitable
for use with C++11 range-based for-loops.

The gist of phase 1 is to remove the skipInstruction() and skipBundle()
methods from these iterators, instead splitting each iterator into a version
that walks operands, a version that walks instructions, and a version that
walks bundles.  This has the result of making some "clever" loops in lib/CodeGen
more verbose, but also makes their iterator invalidation characteristics much
more obvious to the casual reader. (Making them concise again in the future is a
good motivating case for a pre-incrementing range adapter!)

Phase 2 of this undertaking will consist of removing the getOperand() method,
and changing operator*() of the operand-walker to return a MachineOperand&.  At
that point, it should be possible to add range views for them that work as one
might expect.

llvm-svn: 203757
2014-03-13 06:02:25 +00:00
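
As a rough illustration of the phase 1 split, a generic container (not LLVM's actual iterator classes) might expose separate views so that a range-based for loop states explicitly whether it walks operands or whole instructions, instead of relying on skip methods:

```cpp
#include <utility>
#include <vector>

struct Operand { int Idx; };
struct Instruction { std::vector<Operand> Ops; };

class UseList {
  std::vector<Instruction *> Users;
public:
  explicit UseList(std::vector<Instruction *> U) : Users(std::move(U)) {}

  // One view walks whole instructions...
  const std::vector<Instruction *> &instructions() const { return Users; }

  // ...another walks every operand of every user.
  std::vector<Operand *> operands() const {
    std::vector<Operand *> Result;
    for (Instruction *I : Users)
      for (Operand &Op : I->Ops)
        Result.push_back(&Op);
    return Result;
  }
};
```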
Benjamin Kramer d6f1f84f51 [C++11] Replace llvm::tie with std::tie.
The old implementation is no longer needed in C++11.

llvm-svn: 202644
2014-03-02 13:30:33 +00:00
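
For reference, the replacement pattern is the standard std::tie, which unpacks a pair or tuple into existing variables:

```cpp
#include <cassert>
#include <map>
#include <tuple>

int main() {
  std::map<int, const char *> M;
  std::map<int, const char *>::iterator It;
  bool Inserted;
  // std::tie assigns the members of the returned pair to It and Inserted.
  std::tie(It, Inserted) = M.insert({1, "one"});
  assert(Inserted && It->first == 1);
  return 0;
}
```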
Michael Gottesman 9f49d74413 [block-freq] Refactor LiveInterals::getSpillWeight to use the new MachineBlockFrequencyInfo methods.
This is slightly more interesting than the previous batch of changes.
Specifically:

1. We refactor getSpillWeight to take a MachineBlockFrequencyInfo (MBFI)
object. This enables us to completely encapsulate the manner in which we
use the MachineBlockFrequencyInfo to get our spill weights. This yields
cleaner code since one does not need to fetch the actual block frequency
before getting the spill weight if all one wants is the spill weight. It
also gives us access to the entry frequency, which we need for our
computation.

2. Instead of having getSpillWeight take a MachineBasicBlock (as one
might think) to look up the block frequency via the MBFI object, we
take in a MachineInstr object. The reason for this is that the
method is supposed to return the spill weight for an instruction
according to the comments around the function.

llvm-svn: 197296
2013-12-14 00:53:32 +00:00
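
A hedged sketch of the shape of this refactoring (the type and member names below are placeholders, not LLVM's actual API): the helper takes the block frequency analysis plus the instruction, so callers no longer fetch the block frequency themselves, and the entry frequency is available for scaling.

```cpp
#include <unordered_map>

struct BlockFrequencyInfoLike {
  std::unordered_map<int, double> Freq;  // block id -> execution frequency
  double EntryFreq = 1.0;
  double getBlockFreq(int BlockId) const { return Freq.at(BlockId); }
  double getEntryFreq() const { return EntryFreq; }
};

struct InstrLike {
  int ParentBlockId;  // the basic block containing this instruction
};

// Weight contributed by one use or def: how often its block executes
// relative to the function entry.
double getSpillWeight(const BlockFrequencyInfoLike &MBFI, const InstrLike &MI) {
  return MBFI.getBlockFreq(MI.ParentBlockId) / MBFI.getEntryFreq();
}
```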
Arnaud A. de Grandmaison f5f040fa1e CalcSpillWeights: allow overriding the spill weight normalizing function
This will enable the PBQP register allocator to provide its own normalizing function.

No functional change.

llvm-svn: 194417
2013-11-11 19:56:14 +00:00
Arnaud A. de Grandmaison ea3ac1612c CalcSpillWeights: give a better describing name to calculateSpillWeights
Besides, this relates it more obviously to VirtRegAuxInfo::calculateSpillWeightAndHint.

No functional change.

llvm-svn: 194404
2013-11-11 19:04:45 +00:00
Arnaud A. de Grandmaison 760c1e0b0a CalculateSpillWeights does not need to be a pass
Based on discussions with Lang Hames and Jakob Stoklund Olesen at the hacker's lab, and in light of upcoming work on the PBQP register allocator, it was thought that CalcSpillWeights does not need to be a pass. This change will make it possible to customize / tune the spill weight computation depending on the allocator.

Update the documentation style while there.

No functional change.

llvm-svn: 194356
2013-11-10 17:46:31 +00:00
Arnaud A. de Grandmaison f7a60a8e01 Revert "CalculateSpillWeights does not need to be a pass"
Temporarily revert my previous commit until I understand why it breaks 3 target tests.

llvm-svn: 194272
2013-11-08 18:19:19 +00:00
Arnaud A. de Grandmaison ed812f6590 CalculateSpillWeights does not need to be a pass
Based on discussions with Lang Hames and Jakob Stoklund Olesen at the hacker's lab, and in light of upcoming work on the PBQP register allocator, it was thought that CalcSpillWeights does not need to be a pass. This change will make it possible to customize / tune the spill weight computation depending on the allocator.

Update the documentation style while there.

No functional change.

llvm-svn: 194269
2013-11-08 17:56:29 +00:00
Arnaud A. de Grandmaison 3b52f0b135 CalculateSpillWeights cleanup: remove unneeded includes
llvm-svn: 194259
2013-11-08 15:13:05 +00:00
Benjamin Kramer e2a1d89e14 Switch spill weights from a basic loop depth estimation to BlockFrequencyInfo.
The main advantages here are way better heuristics, taking into account not
just loop depth but also __builtin_expect and other static heuristics and will
eventually learn how to use profile info. Most of the work in this patch is
pushing the MachineBlockFrequencyInfo analysis into the right places.

This is good for a 5% speedup on zlib's deflate (x86_64), there were some very
unfortunate spilling decisions in its hottest loop in longest_match(). Other
benchmarks I tried were mostly neutral.

This changes register allocation in subtle ways, update the tests for it.
2012-02-20-MachineCPBug.ll was deleted as it's very fragile and the instruction
it looked for was gone already (but the FileCheck pattern picked up unrelated
stuff).

llvm-svn: 184105
2013-06-17 19:00:36 +00:00
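
To illustrate the difference between the two heuristics (the formulas below are simplified assumptions for illustration, not the exact LLVM code): a depth-based weight grows by a fixed factor per loop nesting level regardless of how likely the block is to execute, while a frequency-based weight already folds in static branch heuristics and __builtin_expect hints.

```cpp
#include <cmath>

// Old style: scale by loop depth alone.
float loopDepthWeight(unsigned LoopDepth) {
  return std::pow(10.0f, static_cast<float>(LoopDepth));
}

// New style: scale by the block's estimated execution frequency
// relative to the function entry.
float frequencyWeight(double BlockFreq, double EntryFreq) {
  return static_cast<float>(BlockFreq / EntryFreq);
}
```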
Nadav Rotem c4bd84c1d5 typo
llvm-svn: 178949
2013-04-06 04:24:12 +00:00
Jakob Stoklund Olesen cea596acf7 Remove LIS::isAllocatable() and isReserved() helpers.
All callers can simply use the corresponding MRI functions.

llvm-svn: 165985
2012-10-15 22:14:34 +00:00
David Blaikie c8c2920a3f Tidy up a few more uses of MF.getFunction()->getName().
Based on CR feedback from r162301 and Craig Topper's refactoring in r162347
here are a few other places that could use the same API (& in one instance drop
a Function.h dependency).

llvm-svn: 162367
2012-08-22 17:18:53 +00:00
Craig Topper a538d831e6 Add a getName function to MachineFunction. Use it in places that previously did getFunction()->getName(). Remove includes of Function.h that are no longer needed.
llvm-svn: 162347
2012-08-22 06:07:19 +00:00
Jakob Stoklund Olesen a1f43dcdb8 Avoid iterating with LiveIntervals::iterator.
That is a DenseMap iterator keyed by pointers, so the iteration order is
nondeterministic.

I would like to replace the DenseMap with an IndexedMap which doesn't
allow iteration.

llvm-svn: 158856
2012-06-20 21:25:05 +00:00
Jakob Stoklund Olesen 9e27e2621a Stop using LiveIntervals::isReMaterializable().
It is an old function that does a lot more than required by
CalcSpillWeights, which was the only remaining caller.

The isRematerializable() function never actually sets the isLoad
argument, so don't try to compute that.

llvm-svn: 157973
2012-06-05 01:06:12 +00:00
Jakob Stoklund Olesen da96006975 Move CalculateRegClass to MRI::recomputeRegClass.
This function doesn't have anything to do with spill weights, and MRI
already has functions for manipulating the register class of a virtual
register.

llvm-svn: 137123
2011-08-09 16:46:27 +00:00
Jakob Stoklund Olesen 39af582c57 Don't inflate register classes used by inline asm.
The constraints are represented by the register class of the original
virtual register created for the inline asm. If the register class were
included in the operand descriptor, we might be able to do this.

For now, just give up on regclass inflation when inline asm is involved.

No test case, this bug hasn't happened yet.

llvm-svn: 134226
2011-07-01 01:24:25 +00:00
Evan Cheng 8d71a75777 More refactoring. Move getRegClass from TargetOperandInfo to TargetInstrInfo.
llvm-svn: 133944
2011-06-27 21:26:13 +00:00
Jakob Stoklund Olesen 4edf17d91f Teach LiveInterval::isZeroLength about null SlotIndexes.
When instructions are deleted, they leave tombstone SlotIndex entries.
The isZeroLength method should ignore these null indexes.

This causes RABasic to sometimes spill a callee-saved register in the
abi-isel.ll test, so don't run that test with -regalloc=basic.  Prioritizing
register allocation according to spill weight can cause more registers to be
used.

llvm-svn: 131436
2011-05-16 23:50:05 +00:00
Jakob Stoklund Olesen 3b7b7bcc7f Use the new TRI->getLargestLegalSuperClass hook to constrain register class inflation.
This has two effects: 1. We never inflate to a larger register class than what
the sub-target can handle. 2. Completely unconstrained virtual registers get the
largest possible register class.

llvm-svn: 130229
2011-04-26 18:52:36 +00:00
Jakob Stoklund Olesen e991f728d6 Recompute register class and hint for registers created during spilling.
The spill weight is not recomputed for an unspillable register - it stays infinite.

llvm-svn: 128490
2011-03-29 21:20:19 +00:00
Jakob Stoklund Olesen c6cc485051 Make SpillIs an optional pointer. Avoid creating a bunch of temporary SmallVectors.
llvm-svn: 127388
2011-03-10 01:21:58 +00:00
Jakob Stoklund Olesen 1dd377d8c8 Move more fragments of spill weight calculation into CalcSpillWeights.h
Simplify the spill weight calculation a bit by bypassing
getApproximateInstructionCount() and using LiveInterval::getSize() directly.
This changes the computed spill weights, but only by a constant factor in each
function. It should not affect how spill weights compare against each other, and
so it shouldn't affect code generation.

llvm-svn: 125530
2011-02-14 23:15:38 +00:00
Jakob Stoklund Olesen 1331a15b0c Replace TargetRegisterInfo::printReg with a PrintReg class that also works without a TRI instance.
Print virtual registers numbered from 0 instead of the arbitrary
FirstVirtualRegister. The first virtual register is printed as %vreg0.
TRI::NoRegister is printed as %noreg.

llvm-svn: 123107
2011-01-09 03:05:53 +00:00
Owen Anderson 8ac477ffb5 Begin adding static dependence information to passes, which will allow us to
perform initialization without static constructors AND without explicit initialization
by the client.  For the moment, passes are required to initialize both their
(potential) dependencies and any passes they preserve.  I hope to be able to relax
the latter requirement in the future.

llvm-svn: 116334
2010-10-12 19:48:12 +00:00
Owen Anderson df7a4f2515 Now with fewer extraneous semicolons!
llvm-svn: 115996
2010-10-07 22:25:06 +00:00
Jakob Stoklund Olesen fa3ea11ae6 Clean up debug output.
llvm-svn: 110940
2010-08-12 18:50:55 +00:00
Jakob Stoklund Olesen 57f3db6e2e Give up on register class recalculation when the register is used with subreg
operands. We don't currently have a hook to provide "the largest super class of
A where all registers' getSubReg(subidx) is valid and in B".

llvm-svn: 110730
2010-08-10 21:16:16 +00:00
Jakob Stoklund Olesen 53c5022040 Implement register class inflation.
When splitting a live range, the new registers have fewer uses and the
permissible register class may be less constrained. Recompute the register class
constraint from the uses of new registers created for a split. This may let them
be allocated from a larger set, possibly avoiding a spill.

llvm-svn: 110703
2010-08-10 18:37:40 +00:00
Jakob Stoklund Olesen e00c49da11 Transpose the calculation of spill weights such that we are calculating one
register at a time. This turns out to be slightly faster than iterating over
instructions, but more importantly, it allows us to compute spill weights for
new registers created after the spill weight pass has run.

Also compute the allocation hint at the same time as the spill weight. This
allows us to use the spill weight as a cost metric for copies, and choose the
most profitable hint if there is more than one possibility.

The new hints provide a very small (< 0.1%) but universal code size improvement.

llvm-svn: 110631
2010-08-10 00:02:26 +00:00
Owen Anderson a57b97e7e7 Fix batch of converting RegisterPass<> to INITIALIZE_PASS().
llvm-svn: 109045
2010-07-21 22:09:45 +00:00
Jakob Stoklund Olesen b15cbd343c Remove remaining calls to TII::isMoveInstr.
llvm-svn: 108556
2010-07-16 21:03:55 +00:00
Eric Christopher 128a0197bb Fix typo.
llvm-svn: 107556
2010-07-03 01:09:18 +00:00
Jakob Stoklund Olesen c953acbd7f Always normalize spill weights, also for intervals created by spilling.
Moderate the weight given to very small intervals.

The spill weight given to new intervals created when spilling was not
normalized in the same way as the original spill weights calculated by
CalcSpillWeights. That meant that restored registers would tend to hang around
because they had a much higher spill weight than unspilled registers.

This improves the runtime of a few tests by up to 10%, and there are no
significant regressions.

llvm-svn: 96613
2010-02-18 21:33:05 +00:00
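
A hedged sketch of the normalization idea (the constant is an assumption, not LLVM's actual value): dividing the accumulated use/def frequency by the interval's size keeps weights comparable between the original intervals and the short intervals created by spilling, while the extra term moderates very small intervals.

```cpp
// Normalize so intervals created by spilling do not end up with
// artificially high weights that crowd out other registers.
float normalizeSpillWeight(float UseDefFreq, unsigned SizeInSlots) {
  const unsigned SmallIntervalSlack = 25;  // hypothetical moderation term
  return UseDefFreq / (SizeInSlots + SmallIntervalSlack);
}
```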
Evan Cheng 0a75ceee38 Remove duplicated #include.
llvm-svn: 95747
2010-02-10 01:22:57 +00:00
Evan Cheng 3ebd551aac Emit an error for illegal inline asm constraint (which uses illegal type) rather than asserting.
llvm-svn: 95746
2010-02-10 01:21:02 +00:00
Chris Lattner c9505b68c8 fix missing #includes.
llvm-svn: 95745
2010-02-10 01:17:36 +00:00
Chris Lattner b06015aa69 move target-independent opcodes out of TargetInstrInfo
into TargetOpcodes.h.  #include the new TargetOpcodes.h
into MachineInstr.  Add new inline accessors (like isPHI())
to MachineInstr, and start using them throughout the 
codebase.

llvm-svn: 95687
2010-02-09 19:54:29 +00:00
Dale Johannesen c3adf44885 Skip DEBUG_VALUE in some places where it was affecting codegen.
llvm-svn: 95647
2010-02-09 02:01:46 +00:00
David Greene e40730d8c7 Change errs() to dbgs().
llvm-svn: 92099
2009-12-24 00:39:02 +00:00
Lang Hames 4c052261de Changed slot index ranges for MachineBasicBlocks to be exclusive of endpoint.
This fixes an in-place update bug where code inserted at the end of basic blocks may not be covered by existing intervals which were live across the entire block. It is also consistent with the way ranges are specified for live intervals.

llvm-svn: 91859
2009-12-22 00:11:50 +00:00