Commit Graph

207 Commits

Author | SHA1 | Message | Date
Duncan P. N. Exon Smith 3ac9cc6156 CodeGen: Take MachineInstr& in SlotIndexes and LiveIntervals, NFC
Take MachineInstr by reference instead of by pointer in SlotIndexes and
the SlotIndex wrappers in LiveIntervals.  The MachineInstrs here are
never null, so this cleans up the API a bit.  It also incidentally
removes a few implicit conversions from MachineInstrBundleIterator to
MachineInstr* (see PR26753).

At a couple of call sites it was convenient to convert to a range-based
for loop over MachineBasicBlock::instr_begin/instr_end, so I added
MachineBasicBlock::instrs.

llvm-svn: 262115
2016-02-27 06:40:41 +00:00
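
A minimal sketch of the post-change call style (illustrative; not part of the
commit, using present-day header paths):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/SlotIndexes.h"

  using namespace llvm;

  static void walkBlock(MachineBasicBlock &MBB, SlotIndexes &Indexes) {
    // New: range-based iteration over bundled instructions via instrs().
    unsigned NumRealInstrs = 0;
    for (MachineInstr &MI : MBB.instrs())
      if (!MI.isDebugValue())
        ++NumRealInstrs;

    // SlotIndexes now takes MachineInstr& rather than MachineInstr*.
    for (MachineInstr &MI : MBB) {
      SlotIndex Idx = Indexes.getInstructionIndex(MI);
      (void)Idx;
    }
    (void)NumRealInstrs;
  }
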
Wei Mi a62f058989 Some stack slots are allocated to vregs which have no real reference.
LiveRangeEdit::eliminateDeadDef is used to remove dead define instructions
after rematerialization. To remove a VNI for a vreg from its LiveInterval,
LiveIntervals::removeVRegDefAt is used. However, after the non-PHI VNIs are all
removed, PHI VNIs are still left in the LiveInterval. Such unused vregs will
be kept in RegsToSpill[] at the end of InlineSpiller::reMaterializeAll, and the
spiller will allocate stack slots for them.

The fix is to get rid of such an unused reg by checking whether it has a
non-dbg reference instead of whether it has a non-empty interval.

llvm-svn: 259895
2016-02-05 18:14:24 +00:00
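
A hedged sketch of the check described above (reg_nodbg_empty() is the real
MachineRegisterInfo query; the helper itself is illustrative and not the
actual InlineSpiller code):

  #include "llvm/CodeGen/MachineRegisterInfo.h"

  // Keep a register in the spill set only if it has real (non-debug) uses or
  // defs; an empty-interval check misses PHI value numbers left after remat.
  static bool shouldKeepForSpilling(const llvm::MachineRegisterInfo &MRI,
                                    unsigned VReg) {
    return !MRI.reg_nodbg_empty(VReg);
  }
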
Craig Topper 73275a2951 Use range-based for loops. NFC
llvm-svn: 256363
2015-12-24 05:20:40 +00:00
Matthias Braun 60d69e2865 CodeGen: Redo analyzePhysRegs() and computeRegisterLiveness()
computeRegisterLiveness() was broken in that it reported dead for a
register even if a subregister was alive. I assume this was because the
results of analyzePhysRegs() are hard to understand with respect to
subregisters.

This commit changes the results of analyzePhysRegs() (the struct
PhysRegInfo) to be clearly understandable, and also renames the fields to
avoid silent breakage of third-party code (and to improve the grammar).

It also fixes all (two) users of computeRegisterLiveness() in LLVM by
re-enabling it and removing the workarounds for the bug.

This fixes http://llvm.org/PR24535 and http://llvm.org/PR25033

Differential Revision: http://reviews.llvm.org/D15320

llvm-svn: 255362
2015-12-11 19:42:09 +00:00
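
A minimal usage sketch of the repaired query (present-day names; the helper is
illustrative, not from the commit):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/TargetRegisterInfo.h"

  using namespace llvm;

  // True if Reg (or any of its subregisters, after this fix) may be live just
  // before the given point; LQR_Unknown is treated conservatively as live.
  static bool isLiveBefore(MachineBasicBlock &MBB,
                           MachineBasicBlock::const_iterator Before,
                           unsigned Reg, const TargetRegisterInfo *TRI) {
    return MBB.computeRegisterLiveness(TRI, Reg, Before) !=
           MachineBasicBlock::LQR_Dead;
  }
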
Chandler Carruth 7b560d40bd [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible
with the new pass manager, and no longer relying on analysis groups.

This builds essentially a ground-up new AA infrastructure stack for
LLVM. The core ideas are the same that are used throughout the new pass
manager: type erased polymorphism and direct composition. The design is
as follows:

- FunctionAAResults is a type-erasing alias analysis results aggregation
  interface to walk a single query across a range of results from
  different alias analyses. Currently this is function-specific as we
  always assume that aliasing queries are *within* a function.

- AAResultBase is a CRTP utility providing stub implementations of
  various parts of the alias analysis result concept, notably in several
  cases in terms of other more general parts of the interface. This can
  be used to implement only a narrow part of the interface rather than
  the entire interface. This isn't really ideal, this logic should be
  hoisted into FunctionAAResults as currently it will cause
  a significant amount of redundant work, but it faithfully models the
  behavior of the prior infrastructure.

- All the alias analysis passes are ported to be wrapper passes for the
  legacy PM and new-style analysis passes for the new PM with a shared
  result object. In some cases (most notably CFL), this is an extremely
  naive approach that we should revisit when we can specialize for the
  new pass manager.

- BasicAA has been restructured to reflect that it is much more
  fundamentally a function analysis because it uses dominator trees and
  loop info that need to be constructed for each function.

All of the references to getting alias analysis results have been
updated to use the new aggregation interface. All the preservation and
other pass management code has been updated accordingly.

The way the FunctionAAResultsWrapperPass works is to detect the
available alias analyses when run, and add them to the results object.
This means that we should be able to continue to respect when various
passes are added to the pipeline; for example, adding the CFL or TBAA
passes should just cause their results to be available and to get folded
into this. The exception to this rule is BasicAA, which really needs to
be a function pass due to using dominator trees and loop info. As
a consequence, the FunctionAAResultsWrapperPass directly depends on
BasicAA and always includes it in the aggregation.

This has significant implications for preserving analyses. Generally,
most passes shouldn't bother preserving FunctionAAResultsWrapperPass
because rebuilding the results just updates the set of known AA passes.
The exception to this rule is LoopPass instances, which need to preserve
all the function analyses that the loop pass manager will end up
needing. This means preserving both BasicAAWrapperPass and the
aggregating FunctionAAResultsWrapperPass.

Now, when preserving an alias analysis, you do so by directly preserving
that analysis. This is only necessary for non-immutable-pass-provided
alias analyses though, and there are only three of interest: BasicAA,
GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is
preserved when needed because it (like DominatorTree and LoopInfo) is
marked as a CFG-only pass. I've expanded GlobalsAA into the preserved
set everywhere we previously were preserving all of AliasAnalysis, and
I've added SCEVAA in the intersection of that with where we preserve
SCEV itself.

One significant challenge to all of this is that the CGSCC passes were
actually using the alias analysis implementations by taking advantage of
a pretty amazing set of loopholes in the old pass manager's analysis
management code which allowed analysis groups to slide through in many
cases. Moving away from analysis groups makes this problem much more
obvious. To fix it, I've leveraged the flexibility the design of the new
PM components provides to just directly construct the relevant alias
analyses for the relevant functions in the IPO passes that need them.
This is a bit hacky, but should go away with the new pass manager, and
is already in many ways cleaner than the prior state.

Another significant challenge is that various facilities of the old
alias analysis infrastructure just don't fit any more. The most
significant of these is the alias analysis 'counter' pass. That pass
relied on the ability to snoop on AA queries at different points in the
analysis group chain. Instead, I'm planning to build printing
functionality directly into the aggregation layer. I've not included
that in this patch merely to keep it smaller.

Note that all of this needs a nearly complete rewrite of the AA
documentation. I'm planning to do that, but I'd like to make sure the
new design settles, and to flesh out a bit more of what it looks like in
the new pass manager first.

Differential Revision: http://reviews.llvm.org/D12080

llvm-svn: 247167
2015-09-09 17:55:00 +00:00
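
A hedged sketch of what a legacy-PM pass now does (present-day class names,
i.e. AAResults / AAResultsWrapperPass rather than the FunctionAAResults names
used in the message above; the pass itself is hypothetical):

  #include "llvm/Analysis/AliasAnalysis.h"
  #include "llvm/Analysis/BasicAliasAnalysis.h"
  #include "llvm/Analysis/GlobalsModRef.h"
  #include "llvm/IR/Function.h"
  #include "llvm/Pass.h"

  using namespace llvm;

  namespace {
  struct ExamplePass : public FunctionPass {
    static char ID;
    ExamplePass() : FunctionPass(ID) {}

    void getAnalysisUsage(AnalysisUsage &AU) const override {
      AU.addRequired<AAResultsWrapperPass>();
      // Preserve the concrete analyses, not just the aggregation pass.
      AU.addPreserved<BasicAAWrapperPass>();
      AU.addPreserved<GlobalsAAWrapperPass>();
      AU.setPreservesCFG();
    }

    bool runOnFunction(Function &F) override {
      AAResults &AA = getAnalysis<AAResultsWrapperPass>().getAAResults();
      (void)AA; // alias()/getModRefInfo() queries go through the aggregation.
      return false;
    }
  };
  } // end anonymous namespace

  char ExamplePass::ID = 0;
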
Alexander Kornienko f00654e31b Revert r240137 (Fixed/added namespace ending comments using clang-tidy. NFC)
Apparently, the style needs to be agreed upon first.

llvm-svn: 240390
2015-06-23 09:49:53 +00:00
Alexander Kornienko 70bc5f1398 Fixed/added namespace ending comments using clang-tidy. NFC
The patch is generated using this command:

tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
  -checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
  llvm/lib/


Thanks to Eugene Kosov for the original patch!

llvm-svn: 240137
2015-06-19 15:57:42 +00:00
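
An illustration of what the llvm-namespace-comment fix produces (hypothetical
namespaces, not a file from the patch):

  namespace llvm {
  namespace example {
  class Widget; // ... declarations ...
  } // end namespace example
  } // end namespace llvm
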
Quentin Colombet 7b73bfa67a [InlineSpiller] Fix rematerialization for bundles.
Prior to this patch, we could update the operand of another MI in the same
bundle.

Longer version:
Before InlineSpiller rematerializes a vreg, it iterates over operands of each MI
in a bundle, collecting all (MI, OpNo) pairs that reference that vreg.

Then, if it does rematerialize, it goes through the pair list and replaces the
operands with the new (rematerialized) vreg.  The problem is that it tries to
replace all of these operands in the main MI! This works fine for single MIs.
However, if we are processing a bundle of MIs and the list contains multiple
pairs, the rematerialization will either crash trying to access a non-existent
operand of the main MI, or silently corrupt one of the existing ones. It will
also ignore the other MIs in the bundle.

The obvious fix is to use the MI pointers saved in the collected (MI, OpNo)
pairs. This must have been the original intent of the pair list, but somehow
these pointers got lost.

Patch by Dmitri Shtilman <dshtilman@icloud.com>!

Differential revision: http://reviews.llvm.org/D9904

<rdar://problem/21002163>

llvm-svn: 237964
2015-05-21 21:41:55 +00:00
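
A hedged sketch of the fix (the container and helper names are hypothetical,
not the actual InlineSpiller code; the point is to rewrite each operand
through its own MI pointer rather than through the bundle's main MI):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include <utility>

  using MIAndOperand = std::pair<llvm::MachineInstr *, unsigned>;

  static void rewriteCollectedOperands(llvm::SmallVectorImpl<MIAndOperand> &Ops,
                                       unsigned NewVReg) {
    for (const MIAndOperand &P : Ops) {
      llvm::MachineInstr *MI = P.first;          // the MI owning the operand
      MI->getOperand(P.second).setReg(NewVReg);  // not the bundle head
    }
  }
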
Duncan P. N. Exon Smith a9308c49ef IR: Give 'DI' prefix to debug info metadata
Finish off PR23080 by renaming the debug info IR constructs from `MD*`
to `DI*`.  The last of the `DIDescriptor` classes were deleted in
r235356, and the last of the related typedefs removed in r235413, so
this has all baked for about a week.

Note: If you have out-of-tree code (like a frontend), I recommend that
you get everything compiling and tests passing with the *previous*
commit before updating to this one.  It'll be easier to keep track of
what code is using the `DIDescriptor` hierarchy and what you've already
updated, and I think you're extremely unlikely to insert bugs.  YMMV of
course.

Back to *this* commit: I did this using the rename-md-di-nodes.sh
upgrade script I've attached to PR23080 (both code and testcases) and
filtered through clang-format-diff.py.  I edited the tests for
test/Assembler/invalid-generic-debug-node-*.ll by hand since the columns
were off-by-three.  It should work on your out-of-tree testcases (and
code, if you've followed the advice in the previous paragraph).

Some of the tests are in badly named files now (e.g.,
test/Assembler/invalid-mdcompositetype-missing-tag.ll should be
'dicompositetype'); I'll come back and move the files in a follow-up
commit.

llvm-svn: 236120
2015-04-29 16:38:44 +00:00
Alexander Kornienko f817c1cb9a Use 'override/final' instead of 'virtual' for overridden methods
The patch is generated using clang-tidy misc-use-override check.

This command was used:

  tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py \
    -checks='-*,misc-use-override' -header-filter='llvm|clang' \
    -j=32 -fix -format

http://reviews.llvm.org/D8925

llvm-svn: 234679
2015-04-11 02:11:45 +00:00
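
An illustration of the misc-use-override transformation (hypothetical class):

  struct Base {
    virtual void anchor() {}
    virtual ~Base() = default;
  };

  struct Derived : Base {
    // before:  virtual void anchor();
    void anchor() override {}  // 'override' replaces the redundant 'virtual'
  };
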
Duncan P. N. Exon Smith e686f1591f CodeGen: Stop using DIDescriptor::is*() and auto-casting
Same as r234255, but for lib/CodeGen and lib/Target.

llvm-svn: 234258
2015-04-06 23:27:40 +00:00
Duncan P. N. Exon Smith 3bef6a3803 CodeGen: Assert that inlined-at locations agree
As a follow-up to r234021, assert that a debug info intrinsic variable's
`MDLocalVariable::getInlinedAt()` always matches the
`MDLocation::getInlinedAt()` of its `!dbg` attachment.

The goal here is to get rid of `MDLocalVariable::getInlinedAt()`
entirely (PR22778), but I'll let these assertions bake for a while
first.

If you have an out-of-tree backend that just broke, you're probably
attaching the wrong `DebugLoc` to a `DBG_VALUE` instruction.  The one
you want is the location that was attached to the corresponding
`@llvm.dbg.declare` or `@llvm.dbg.value` call that you started with.

llvm-svn: 234038
2015-04-03 19:20:26 +00:00
Benjamin Kramer 6cd780ff21 Prefer SmallVector::append/insert over push_back loops.
Same functionality, but hoists the vector growth out of the loop.

llvm-svn: 229500
2015-02-17 15:29:18 +00:00
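
A small illustration of the transformation (hypothetical variables): append()
grows the vector once instead of re-growing inside the loop.

  #include "llvm/ADT/SmallVector.h"

  static void copyAll(const llvm::SmallVectorImpl<int> &Src,
                      llvm::SmallVectorImpl<int> &Dst) {
    // before:
    //   for (int V : Src)
    //     Dst.push_back(V);
    Dst.append(Src.begin(), Src.end());
  }
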
Matthias Braun cfb8ad29b5 LiveIntervalAnalysis: Factor out code to update liveness on physreg def removal
This cleans up code and is more in line with the general philosophy of
modifying LiveIntervals through LiveIntervalAnalysis instead of changing
them directly.

llvm-svn: 226687
2015-01-21 18:50:21 +00:00
Patrik Hagglund cb06a36c9a Bugfix in InlineSpiller::traceSiblingValue().
Properly determine whether or not a phi was added by splitting.
Check against the current VNInfo of OrigLI instead of against the
OrigVNI argument.

Patch provided by Jonas Paulsson. Reviewed by Quentin Colombet.

llvm-svn: 224009
2014-12-11 10:40:17 +00:00
Philip Reames 0365f1a376 [Statepoints 2/4] Statepoint infrastructure for garbage collection: MI & x86-64 Backend
This is the second patch in a small series.  This patch contains the
MachineInstruction and x86-64 backend pieces required to lower Statepoints.  It
does not include the code to actually generate the STATEPOINT machine
instruction and, as a result, the entire patch is currently dead code.  I will
be submitting the SelectionDAG parts within the next 24-48 hours.  Since those
pieces are by far the most complicated, I wanted to minimize the size of that
patch.  That patch will include the tests which exercise the functionality in
this patch.  The entire series can be seen as one combined whole in
http://reviews.llvm.org/D5683.

The STATEPOINT pseudo node is generated after all gc values are explicitly
spilled to stack slots.  The purpose of this node is to wrap an actual call
instruction while recording the spill locations of the meta arguments used for
garbage collection and other purposes.  The STATEPOINT is modeled as modifying
all of those locations to prevent backend optimizations from forwarding the
value from before the STATEPOINT to after the STATEPOINT.  (Doing so would
break relocation semantics for collectors which wish to relocate roots.)

The implementation of STATEPOINT is closely modeled on PATCHPOINT.  Eventually,
much of the code in this patch will be removed.  The long term plan is to merge
the functionality provided by statepoints and patchpoints.  Merging their
implementations in the backend is likely to be a good starting point.

Reviewed by: atrick, ributzka

llvm-svn: 223085
2014-12-01 22:52:56 +00:00
David Blaikie 70573dcd9f Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool>
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.

This led to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...

llvm-svn: 222334
2014-11-19 07:49:26 +00:00
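
An illustration of the new insert() contract (hypothetical use): the returned
pair carries an iterator to the element and whether the insertion took place.

  #include "llvm/ADT/SmallPtrSet.h"

  static bool rememberValue(llvm::SmallPtrSet<void *, 8> &Seen, void *V) {
    auto Res = Seen.insert(V);
    return Res.second; // true only the first time V is seen
  }
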
Craig Topper cf0444ba2a Move register class name strings to a single array in MCRegisterInfo to reduce static table size and number of relocation entries.
Indices into the table are stored in each MCRegisterClass instead of a pointer. A new method, getRegClassName, is added to MCRegisterInfo and TargetRegisterInfo to lookup the string in the table.

llvm-svn: 222118
2014-11-17 05:50:14 +00:00
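
A minimal sketch of the new lookup (present-day header path; treat the helper
as illustrative):

  #include "llvm/CodeGen/TargetRegisterInfo.h"

  static const char *className(const llvm::TargetRegisterInfo &TRI,
                               const llvm::TargetRegisterClass *RC) {
    return TRI.getRegClassName(RC); // name comes from the shared string table
  }
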
Lang Hames cdd9077f3a [RegAlloc] Kill off the trivial spiller - nobody is using it any more.
llvm-svn: 221474
2014-11-06 19:12:38 +00:00
Eric Christopher 307c2cb26f Remove unnecessary TargetMachine.h includes.
llvm-svn: 219672
2014-10-14 07:22:08 +00:00
Adrian Prantl 87b7eb9d0f Move the complex address expression out of DIVariable and into an extra
argument of the llvm.dbg.declare/llvm.dbg.value intrinsics.

Previously, DIVariable was a variable-length field that has an optional
reference to a Metadata array consisting of a variable number of
complex address expressions. In the case of OpPiece expressions this is
wasting a lot of storage in IR, because when an aggregate type is, e.g.,
SROA'd into all of its n individual members, the IR will contain n copies
of the DIVariable, all alike, only differing in the complex address
reference at the end.

By making the complex address into an extra argument of the
dbg.value/dbg.declare intrinsics, all of the pieces can reference the
same variable and the complex address expressions can be uniqued across
the CU, too.
Down the road, this will allow us to move other flags, such as
"indirection" out of the DIVariable, too.

The new intrinsics look like this:
declare void @llvm.dbg.declare(metadata %storage, metadata %var, metadata %expr)
declare void @llvm.dbg.value(metadata %storage, i64 %offset, metadata %var, metadata %expr)

This patch adds a new LLVM-local tag to DIExpressions, so we can detect
and pretty-print DIExpression metadata nodes.

What this patch doesn't do:

This patch does not touch the "Indirect" field in DIVariable; but moving
that into the expression would be a natural next step.

http://reviews.llvm.org/D4919
rdar://problem/17994491

Thanks to dblaikie and dexonsmith for reviewing this patch!

Note: I accidentally committed a bogus older version of this patch previously.
llvm-svn: 218787
2014-10-01 18:55:02 +00:00
Adrian Prantl b458dc2eee Revert r218778 while investigating buildbot breakage.
"Move the complex address expression out of DIVariable and into an extra"

llvm-svn: 218782
2014-10-01 18:10:54 +00:00
Adrian Prantl 25a7174e7a Move the complex address expression out of DIVariable and into an extra
argument of the llvm.dbg.declare/llvm.dbg.value intrinsics.

Previously, DIVariable was a variable-length field that has an optional
reference to a Metadata array consisting of a variable number of
complex address expressions. In the case of OpPiece expressions this is
wasting a lot of storage in IR, because when an aggregate type is, e.g.,
SROA'd into all of its n individual members, the IR will contain n copies
of the DIVariable, all alike, only differing in the complex address
reference at the end.

By making the complex address into an extra argument of the
dbg.value/dbg.declare intrinsics, all of the pieces can reference the
same variable and the complex address expressions can be uniqued across
the CU, too.
Down the road, this will allow us to move other flags, such as
"indirection" out of the DIVariable, too.

The new intrinsics look like this:
declare void @llvm.dbg.declare(metadata %storage, metadata %var, metadata %expr)
declare void @llvm.dbg.value(metadata %storage, i64 %offset, metadata %var, metadata %expr)

This patch adds a new LLVM-local tag to DIExpressions, so we can detect
and pretty-print DIExpression metadata nodes.

What this patch doesn't do:

This patch does not touch the "Indirect" field in DIVariable; but moving
that into the expression would be a natural next step.

http://reviews.llvm.org/D4919
rdar://problem/17994491

Thanks to dblaikie and dexonsmith for reviewing this patch!

llvm-svn: 218778
2014-10-01 17:55:39 +00:00
Patrik Hagglund 296acbfe4f Fix in InlineSpiller to make the rematerialization loop also consider
implicit uses of the whole register when a subregister is defined.

Now the same iterator is used in the rematerialization loop as in the
spill loop later.

Patch provided by Mikael Holmen.

This fix was proposed and reviewed by Quentin Colombet,
http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-August/076135.html.

Unfortunately, this error in the rematerialization code has only been
seen in a large test case for an out-of-tree target, and is probably
hard to reproduce on an in-tree target. Therefore, no testcase is
provided.

llvm-svn: 216873
2014-09-01 11:04:07 +00:00
Eric Christopher fc6de428c8 Have MachineFunction cache a pointer to the subtarget to make lookups
shorter/easier and have the DAG use that to do the same lookup. This
can be used in the future for TargetMachine based caching lookups from
the MachineFunction easily.

Update the MIPS subtarget switching machinery to update this pointer
at the same time it runs.

llvm-svn: 214838
2014-08-05 02:39:49 +00:00
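
A minimal sketch of the accessor this caching enables (present-day header
paths; illustrative only):

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/TargetSubtargetInfo.h"

  static const llvm::TargetSubtargetInfo &
  subtargetOf(const llvm::MachineFunction &MF) {
    return MF.getSubtarget(); // served from the cached pointer, no TM lookup
  }
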
Eric Christopher d913448b38 Remove the TargetMachine forwards for TargetSubtargetInfo based
information and update all callers. No functional change.

llvm-svn: 214781
2014-08-04 21:25:23 +00:00
Chandler Carruth 1b9dde087e [Modules] Remove potential ODR violations by sinking the DEBUG_TYPE
define below all header includes in the lib/CodeGen/... tree. While the
current modules implementation doesn't check for this kind of ODR
violation yet, it is likely to grow support for it in the future. It
also removes one layer of macro pollution across all the included
headers.

Other sub-trees will follow.

llvm-svn: 206837
2014-04-22 02:02:50 +00:00
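
An illustration of the layout this change enforces (hypothetical file): the
DEBUG_TYPE macro is defined only after every #include, so no header is parsed
under a stray macro definition.

  #include "llvm/CodeGen/MachineFunctionPass.h"
  #include "llvm/Support/Debug.h"

  #define DEBUG_TYPE "regalloc"  // below all header includes
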
Craig Topper c0196b1b40 [C++11] More 'nullptr' conversion. In some cases just using a boolean check instead of comparing to nullptr.
llvm-svn: 206142
2014-04-14 00:51:57 +00:00
Manman Ren c935560568 Register allocator: add condition to hoist a spill to outer loop.
We make sure a spill is not hoisted to a hotter outer loop by adding
a condition. Hoist a spill to outer loop if there are multiple dependents
(it can be beneficial if more than one dependents are hoisted) or
if DepSV (the hoisting source) is hotter than SV (the hoisting destination).

rdar://16268194

llvm-svn: 204522
2014-03-21 21:46:24 +00:00
Owen Anderson ec5d480329 Revert r203883 (which was more of a bandaid) and fix the real underlying
issue in that the new MachineRegisterInfo bundle iterators didn't
dereference to the START of the bundle, while the old skipBundle()
method did.

llvm-svn: 203890
2014-03-14 05:02:18 +00:00
Pete Cooper 7360280e5c Fix issue with r203865. The old behaviour would get a MachineOperand and then find the MI for the bundle that MI was in. The new behaviour failed to get the parent bundle and instead just used the MI from the MachineOperand.
llvm-svn: 203883
2014-03-14 02:28:05 +00:00
Owen Anderson abb90c9ddb Phase 1 of refactoring the MachineRegisterInfo iterators to make them suitable
for use with C++11 range-based for-loops.

The gist of phase 1 is to remove the skipInstruction() and skipBundle()
methods from these iterators, instead splitting each iterator into a version
that walks operands, a version that walks instructions, and a version that
walks bundles.  This has the result of making some "clever" loops in lib/CodeGen
more verbose, but also makes their iterator invalidation characteristics much
more obvious to the casual reader. (Making them concise again in the future is a
good motivating case for a pre-incrementing range adapter!)

Phase 2 of this undertaking will consist of removing the getOperand() method,
and changing operator*() of the operand-walker to return a MachineOperand&.  At
that point, it should be possible to add range views for them that work as one
might expect.

llvm-svn: 203757
2014-03-13 06:02:25 +00:00
Craig Topper 4584cd54e3 [C++11] Add 'override' keyword to virtual methods that override their base class.
llvm-svn: 203220
2014-03-07 09:26:03 +00:00
Benjamin Kramer d6f1f84f51 [C++11] Replace llvm::tie with std::tie.
The old implementation is no longer needed in C++11.

llvm-svn: 202644
2014-03-02 13:30:33 +00:00
Benjamin Kramer b6d0bd48bd [C++11] Replace llvm::next and llvm::prior with std::next and std::prev.
Remove the old functions.

llvm-svn: 202636
2014-03-02 12:27:27 +00:00
Chandler Carruth 8a8cd2bab9 Re-sort all of the includes with ./utils/sort_includes.py so that
subsequent changes are easier to review. About to fix some layering
issues, and wanted to separate out the necessary churn.

Also comment and sink the include of "Windows.h" in three .inc files to
match the usage in Memory.inc.

llvm-svn: 198685
2014-01-07 11:48:04 +00:00
Andrew Trick dfacda3635 Fix for PR18396: Assertion: MO->isDead "Cannot fold physreg def".
InlineSpiller::foldMemoryOperand needs to handle undef call operands.

llvm-svn: 198679
2014-01-07 07:31:10 +00:00
Andrew Trick 10d5be4e6e Added a size field to the stack map record to handle subregister spills.
Implementing this on big-endian platforms could get strange. I added a
target hook, getStackSlotRange, per Jakob's recommendation to make
this as explicit as possible.

llvm-svn: 194942
2013-11-17 01:36:23 +00:00
Matthias Braun f6fe6bfffe Print register in LiveInterval::print()
llvm-svn: 192398
2013-10-10 21:29:05 +00:00
Matthias Braun 34e1be9451 Represent RegUnit liveness with LiveRange instance
Previously LiveInterval has been used, but having a spill weight and
register number is unnecessary for a register unit.

llvm-svn: 192397
2013-10-10 21:29:02 +00:00
Matthias Braun 88dd0abd2d Pass LiveQueryResult by value
This makes the API a bit more natural to use and makes it easier to make
LiveRange's implementation details private.

llvm-svn: 192394
2013-10-10 21:28:52 +00:00
Matthias Braun 13ddb7cd65 Rename LiveRange to LiveInterval::Segment
The Segment struct contains a single interval; multiple instances of this struct
are used to construct a live range, but the struct is not a live range by
itself.

llvm-svn: 192392
2013-10-10 21:28:43 +00:00
Adrian Prantl db3e26d193 Debug info: Fix PR16736 and rdar://problem/14990587.
A DBG_VALUE is register-indirect iff the first operand is a register
_and_ the second operand is an immediate.

llvm-svn: 190821
2013-09-16 23:29:03 +00:00
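
A hedged sketch of the predicate stated above (the MachineInstr accessors are
real; the helper itself is illustrative):

  #include "llvm/CodeGen/MachineInstr.h"

  static bool isRegisterIndirectDbgValue(const llvm::MachineInstr &MI) {
    return MI.isDebugValue() && MI.getOperand(0).isReg() &&
           MI.getOperand(1).isImm(); // register + immediate offset
  }
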
Mark Lacey 9d8103de7a Auto-compute live intervals on demand.
When new virtual registers are created during splitting/spilling, defer
creation of the live interval until we need to use the live interval.

Along with the recent commits to notify LiveRangeEdit when new virtual
registers are created, this makes it possible for functions like
TargetInstrInfo::loadRegFromStackSlot() and
TargetInstrInfo::storeRegToStackSlot() to create multiple virtual
registers as part of the process of generating loads/stores for
different register classes, and then have the live intervals for those
new registers computed when they are needed.

llvm-svn: 188437
2013-08-14 23:50:16 +00:00
Adrian Prantl c31ec1c948 Safeguard DBG_VALUE handling. Unbreaks the ASAN buildbot.
llvm-svn: 186014
2013-07-10 16:56:47 +00:00
Andrew Trick 5749b8be01 Update physreg live intervals during remat.
llvm-svn: 184574
2013-06-21 18:33:26 +00:00
Benjamin Kramer e2a1d89e14 Switch spill weights from a basic loop depth estimation to BlockFrequencyInfo.
The main advantages here are way better heuristics, taking into account not
just loop depth but also __builtin_expect and other static heuristics and will
eventually learn how to use profile info. Most of the work in this patch is
pushing the MachineBlockFrequencyInfo analysis into the right places.

This is good for a 5% speedup on zlib's deflate (x86_64); there were some very
unfortunate spilling decisions in its hottest loop in longest_match(). Other
benchmarks I tried were mostly neutral.

This changes register allocation in subtle ways, update the tests for it.
2012-02-20-MachineCPBug.ll was deleted as it's very fragile and the instruction
it looked for was gone already (but the FileCheck pattern picked up unrelated
stuff).

llvm-svn: 184105
2013-06-17 19:00:36 +00:00
David Blaikie 0252265be0 Debug Info: Simplify Frame Index handling in DBG_VALUE Machine Instructions
Rather than using the full power of target-specific addressing modes in
DBG_VALUEs with Frame Indices, simply use Frame Index + Offset. This
reduces the complexity of debug info handling down to two
representations of values (reg+offset and frame index+offset) rather
than three or four.

Ideally we could ensure that frame indices had been eliminated by the
time we reach assembly or DWARF generation, but I haven't spent the
time to figure out where the FIs are leaking through into that and whether
there's a good place to convert them. Some FI+offset=>reg+offset
conversion is done (see PrologEpilogInserter, for example), which is
necessary for some SelectionDAG assumptions about registers, I believe,
but it might be possible to make this a more thorough conversion and
ensure there are no remaining FIs no matter how instruction selection
is performed.

llvm-svn: 184066
2013-06-16 20:34:15 +00:00
Benjamin Kramer bc6666bedf InlineSpiller: Store bucket pointers instead of iterators.
Lets us use a SetVector instead of an explicit set + vector combination.

llvm-svn: 182586
2013-05-23 15:42:57 +00:00
Benjamin Kramer 391f5a6e21 InlineSpiller: Remove quadratic behavior.
No functionality change.

llvm-svn: 181149
2013-05-05 11:29:14 +00:00
Chandler Carruth ed0881b2a6 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

llvm-svn: 169131
2012-12-03 16:50:05 +00:00
Jakob Stoklund Olesen 26c9d70d28 Make the LiveRegMatrix analysis available to targets.
No functional change, just moved header files.

Targets can inject custom passes between register allocation and
rewriting. This makes it possible to tweak the register allocation
before rewriting, using the full global interference checking available
from LiveRegMatrix.

llvm-svn: 168806
2012-11-28 19:13:06 +00:00
James Molloy 381fab93d5 Add an analyzePhysReg() function to MachineOperandIteratorBase that analyses an instruction's use of a physical register, analogous to analyzeVirtReg.
Rename RegInfo to VirtRegInfo so as not to be confused with the new PhysRegInfo.

llvm-svn: 163694
2012-09-12 10:03:31 +00:00
Logan Chien 64f361e0e1 Fix typo.
llvm-svn: 163059
2012-09-01 12:11:41 +00:00
Jakob Stoklund Olesen 8f324a2cc8 Account for early-clobber reload instructions.
No test case; there are no in-tree targets that require this.

llvm-svn: 160219
2012-07-14 18:45:35 +00:00
Jakob Stoklund Olesen 45c1f9976c Print out register number in InlineSpiller.
llvm-svn: 158575
2012-06-15 23:47:09 +00:00
Benjamin Kramer 009b1c1cf1 Round 2 of dead private variable removal.
LLVM is now -Wunused-private-field clean except for
- lib/MC/MCDisassembler/Disassembler.h. Not sure why it keeps all those inaccessible fields.
- gtest.

llvm-svn: 158096
2012-06-06 19:47:08 +00:00
Jakob Stoklund Olesen 2aeead4bf6 Use LiveRangeQuery instead of getLiveRangeContaining().
llvm-svn: 157142
2012-05-20 02:44:33 +00:00
Pete Cooper 3ca96f9950 Moved LiveRangeEdit.h so that it can be called from other parts of the backend, not just libCodeGen
llvm-svn: 153906
2012-04-02 22:44:18 +00:00
Pete Cooper 2bde2f42b1 Refactored the LiveRangeEdit interface so that MachineFunction, TargetInstrInfo, MachineRegisterInfo, LiveIntervals, and VirtRegMap are all passed into the constructor and stored as members instead of passed in to each method.
llvm-svn: 153903
2012-04-02 22:22:53 +00:00
Jakob Stoklund Olesen abe8c09b20 Make InlineSpiller bundle-aware.
Simply treat bundles as instructions. Spill code is inserted between
bundles, never inside a bundle.  Rewrite all operands in a bundle at
once.

Don't attempt any memory operand folding inside bundles.

llvm-svn: 151787
2012-03-01 01:43:25 +00:00
Jakob Stoklund Olesen ad6b22eb16 Don't store COPY pointers in VNInfo.
If a value is defined by a COPY, that instruction can easily and cheaply
be found by getInstructionFromIndex(VNI->def).

This reduces the size of VNInfo from 24 to 16 bytes, and improves
llc compile time by 3%.

llvm-svn: 149763
2012-02-04 05:20:49 +00:00
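
A minimal sketch of the lookup the message refers to (getInstructionFromIndex
is the real LiveIntervals API, shown with the present-day header path; the
helper is illustrative):

  #include "llvm/CodeGen/LiveIntervals.h"
  #include "llvm/CodeGen/MachineInstr.h"

  static llvm::MachineInstr *definingCopy(llvm::LiveIntervals &LIS,
                                          const llvm::VNInfo *VNI) {
    llvm::MachineInstr *DefMI = LIS.getInstructionFromIndex(VNI->def);
    return (DefMI && DefMI->isCopy()) ? DefMI : nullptr;
  }
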
Pete Cooper 76e4bc4e26 Fixed register allocator splitting a live range on a spilling variable.
If we create new intervals for a variable that is being spilled, then those new intervals are not guaranteed to also be spilled.  This means that anything reading from the original spilled value might not get the correct value if spills were missed.

Fixes <rdar://problem/10546864>

llvm-svn: 146428
2011-12-12 22:16:27 +00:00
Evan Cheng 7f8e563a69 Add bundle aware API for querying instruction properties and switch the code
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.

For properties like mayLoad / mayStore, look into the bundle and if any of the
bundled instructions has the property it would return true.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad, isCompare, conservatively return false for
bundles.

llvm-svn: 146026
2011-12-07 07:15:52 +00:00
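
A hedged sketch of the bundle-aware queries (QueryType and the mayLoad /
mayStore overloads are from the present-day MachineInstr API; the helper is
illustrative):

  #include "llvm/CodeGen/MachineInstr.h"

  // "Any in bundle" semantics: true if any bundled instruction may touch
  // memory; for a non-bundle MI this is the ordinary mayLoad()/mayStore().
  static bool bundleTouchesMemory(const llvm::MachineInstr &Head) {
    return Head.mayLoad(llvm::MachineInstr::AnyInBundle) ||
           Head.mayStore(llvm::MachineInstr::AnyInBundle);
  }
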
Jakob Stoklund Olesen d7bcf43dc2 Use getVNInfoBefore() when it makes sense.
llvm-svn: 144517
2011-11-14 01:39:36 +00:00
Jakob Stoklund Olesen d8f2405e73 Terminate all dead defs at the dead slot instead of the 'next' slot.
This makes no difference for normal defs, but early clobber dead defs
now look like:

  [Slot_EarlyClobber; Slot_Dead)

instead of:

  [Slot_EarlyClobber; Slot_Register).

Live ranges for normal dead defs look like:

  [Slot_Register; Slot_Dead)

as before.

llvm-svn: 144512
2011-11-13 22:42:13 +00:00
Jakob Stoklund Olesen 90b5e565b6 Rename SlotIndexes to match how they are used.
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.

The load and store slots are not needed after the deferred spill code
insertion framework was deleted.

The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals.  In reality, these slots were used to distinguish
early-clobber defs from normal defs.

The new naming scheme also has 4 slots, but the names match how the
slots are really used.  This is a purely mechanical renaming, but some
of the code makes a lot more sense now.

llvm-svn: 144503
2011-11-13 20:45:27 +00:00
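
For reference, the four slots after this rename, paraphrased from SlotIndexes.h
(the comments are a summary, not the original text):

  enum Slot {
    Slot_Block,        // block boundary; values live in/out of the block
    Slot_EarlyClobber, // early-clobber defs
    Slot_Register,     // normal register defs
    Slot_Dead          // dead defs end here
  };
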
Jakob Stoklund Olesen 28df7ef8c9 Stop tracking spill slot uses in VirtRegMap.
Nobody cared; StackSlotColoring scans the instructions to find used stack
slots.

llvm-svn: 144485
2011-11-13 01:23:30 +00:00
Jakob Stoklund Olesen eef48b6938 Strip old implicit operands after foldMemoryOperand.
The TII.foldMemoryOperand hook preserves implicit operands from the
original instruction.  This is not what we want when those implicit
operands refer to the register being spilled.

Implicit operands referring to other registers are preserved.

This fixes PR11347.

llvm-svn: 144247
2011-11-10 00:17:03 +00:00
Jakob Stoklund Olesen 7fb5632e73 Add value numbers when spilling dead defs.
When spilling around an instruction with a dead def, remember to add a
value number for the def.

The missing value number wouldn't normally create problems since there
would be an incoming live range as well.  However, due to another bug
we could spill a dead V_SET0 instruction which doesn't read any values.

The missing value number caused an empty live range to be created which
is dangerous since it doesn't interfere with anything.

This fixes part of PR11125.

llvm-svn: 141923
2011-10-14 00:34:31 +00:00
Jakob Stoklund Olesen e8339b2e63 Disable local spill hoisting for non-killing copies.
If the source register is live after the copy being spilled, there is no
point to hoisting it.  Hoisting inside a basic block only serves to
resolve interferences by shortening the live range of the source.

llvm-svn: 139882
2011-09-16 00:03:33 +00:00
Jakob Stoklund Olesen bceb9e5c05 Add an option to disable spill hoisting.
When -split-spill-mode is enabled, spill hoisting is performed by
SplitKit instead of by InlineSpiller.  This hidden command line option
is for testing the splitter spill mode.

llvm-svn: 139845
2011-09-15 21:06:00 +00:00
Jakob Stoklund Olesen c94c967656 Count correctly when a COPY turns into a spill or reload.
The number of spills could go negative since a folded COPY is just a
spill, and it may be eliminated.

llvm-svn: 139815
2011-09-15 18:22:52 +00:00
Jakob Stoklund Olesen 37eb6962c6 Count inserted spills and reloads more accurately.
Adjust counters when removing spill and reload instructions.

We still don't account for reloads being removed by eliminateDeadDefs().

llvm-svn: 139806
2011-09-15 17:54:28 +00:00
Jakob Stoklund Olesen 07b3503f8b Trace through sibling PHIs in bulk.
When traceSiblingValue() encounters a PHI-def value created by live
range splitting, don't look at all the predecessor blocks.  That can be
very expensive in a complicated CFG.

Instead, consider that all the non-PHI defs jointly dominate all the
PHI-defs.  Tracing directly to all the non-PHI defs is much faster than
zipping around in the CFG when there are many PHIs with many
predecessors.

This significantly improves compile time for indirectbr interpreters.

llvm-svn: 139797
2011-09-15 16:41:12 +00:00
Jakob Stoklund Olesen 278bf02581 Reapply r139247: Cache intermediate results during traceSiblingValue.
In some cases such as interpreters using indirectbr, the CFG can be very
complicated, and live range splitting may be forced to insert a large
number of phi-defs.  When that happens, traceSiblingValue can spend a
lot of time zipping around in the CFG looking for defs and reloads.

This patch causes more information to be cached in SibValues, and the
cached values are used to terminate searches early.  This speeds up
spilling by 20x in one interpreter test case.  For more typical code,
this is just a 10% speedup of spilling.

The previous version had bugs that caused miscompilations. They have
been fixed.

llvm-svn: 139378
2011-09-09 18:11:41 +00:00
Jakob Stoklund Olesen 946e0a4665 Revert r139247 "Cache intermediate results during traceSiblingValue."
It broke the self host and clang-x86_64-darwin10-RA.

llvm-svn: 139259
2011-09-07 21:43:52 +00:00
Jakob Stoklund Olesen b77d5c1484 Cache intermediate results during traceSiblingValue.
In some cases such as interpreters using indirectbr, the CFG can be very
complicated, and live range splitting may be forced to insert a large
number of phi-defs.  When that happens, traceSiblingValue can spend a
lot of time zipping around in the CFG looking for defs and reloads.

This patch causes more information to be cached in SibValues, and the
cached values are used to terminate searches early.  This speeds up
spilling by 20x in one interpreter test case.  For more typical code,
this is just a 10% speedup of spilling.

llvm-svn: 139247
2011-09-07 19:07:31 +00:00
Jakob Stoklund Olesen e417273fce Revert r138794, "Do not try to rematerialize a value from a partial definition."
The problem is fixed for all register allocators by r138944, so this
patch is no longer necessary.

<rdar://problem/10032939>

llvm-svn: 138945
2011-09-01 17:25:18 +00:00
Bob Wilson 358a5f6a72 Do not try to rematerialize a value from a partial definition.
I don't currently have a good testcase for this; will try to get one
tomorrow.  <rdar://problem/10032939>

llvm-svn: 138794
2011-08-30 05:36:02 +00:00
Jakob Stoklund Olesen c0dd3da9c5 Fix PR10387.
When trying to rematerialize a value before an instruction that has an
early-clobber redefine of the virtual register, make sure to look up the
correct value number.

Early-clobber defs are moved one slot back, so getBaseIndex is needed to
find the used value number.

Bugpoint was unable to reduce the test case for this, see PR10388.

llvm-svn: 135378
2011-07-18 05:31:59 +00:00
Jakob Stoklund Olesen 780db902f7 Oops, didn't mean to commit that.
Spills should be hoisted out of loops, but we don't want to hoist them
to dominating blocks at the same loop depth. That could cause the spills
to be executed more often.

llvm-svn: 134782
2011-07-09 01:02:44 +00:00
Jakob Stoklund Olesen bf6afec312 Hoist spills within a basic block.
Try to move spills as early as possible in their basic block. This can
help eliminate interferences by shortening the live range being
spilled.

This fixes PR10221.

llvm-svn: 134776
2011-07-09 00:25:03 +00:00
Jakob Stoklund Olesen bbad3bceb7 Fix PR10277.
Remat during spilling triggers dead code elimination. If a phi-def
becomes unused, that may also cause live ranges to split into separate
connected components.

This type of splitting is different from normal live range splitting. In
particular, there may not be a common original interval.

When the split range is its own original, make sure that the new
siblings are also their own originals. The range being split cannot be
used as an original since it doesn't cover the new siblings.

llvm-svn: 134413
2011-07-05 15:38:41 +00:00
Rafael Espindola 070f96c567 Create a isFullCopy predicate.
llvm-svn: 134189
2011-06-30 21:15:52 +00:00
Jakob Stoklund Olesen 31a0b5e2f0 Avoid hoisting spills when looking at a copy from another register that is also
about to be spilled.

This can only happen when two extra snippet registers are included in the spill,
and there is a copy between them. Hoisting the spill creates problems because
the hoist will mark the copy for later dead code elimination, and spilling the
second register will turn the copy into a spill.

<rdar://problem/9420853>

llvm-svn: 131192
2011-05-11 18:25:10 +00:00
Jakob Stoklund Olesen c5a8c08dba Add some statistics to the splitting and spilling frameworks.
llvm-svn: 130931
2011-05-05 17:22:53 +00:00
Jakob Stoklund Olesen ec9b4a6b8b Avoid using stale entries from the sibling value map.
This could happen when trying to use a value that had been eliminated after dead
code elimination and folding loads.

llvm-svn: 130597
2011-04-30 06:42:21 +00:00
Jakob Stoklund Olesen 86e53ced08 Add debug output for rematerializable instructions.
llvm-svn: 129883
2011-04-20 22:14:20 +00:00
Jakob Stoklund Olesen 9f294a9e52 Handle spilling around an instruction that has an early-clobber re-definition of
the spilled register.

This is quite common on ARM now that some stores have early-clobber defines.

llvm-svn: 129714
2011-04-18 20:23:27 +00:00
Jakob Stoklund Olesen ae044c06bf Pick a conservative register class when creating a small live range for remat.
The rematerialized instruction may require a more constrained register class
than the register being spilled. In the test case, the spilled register has been
inflated to the DPR register class, but we are rematerializing a load of the
ssub_0 sub-register which only exists for DPR_VFP2 registers.

The register class is reinflated after spilling, so the conservative choice is
only temporary.

llvm-svn: 128610
2011-03-31 03:54:44 +00:00
Jakob Stoklund Olesen e991f728d6 Recompute register class and hint for registers created during spilling.
The spill weight is not recomputed for an unspillable register - it stays infinite.

llvm-svn: 128490
2011-03-29 21:20:19 +00:00
Jakob Stoklund Olesen 0ed9ebca58 Remember to use the correct register when rematerializing for snippets.
llvm-svn: 128469
2011-03-29 17:47:02 +00:00
Jakob Stoklund Olesen add79c6abf Run dead code elimination immediately after rematerialization.
This may eliminate some uses of the spilled registers, and we don't want to
insert reloads for that.

llvm-svn: 128468
2011-03-29 17:47:00 +00:00
Jakob Stoklund Olesen d8af5298d1 Properly enable rematerialization when spilling after live range splitting.
The instruction to be rematerialized may not be the one defining the register
that is being spilled. The traceSiblingValue() function sees through sibling
copies to find the remat candidate.

llvm-svn: 128449
2011-03-29 03:12:02 +00:00
Jakob Stoklund Olesen e466345675 Use individual register classes when spilling snippets.
The main register class may have been inflated by live range splitting, so that
register class is not necessarily valid for the snippet instructions.

Use the original register class for the stack slot interval.

llvm-svn: 128351
2011-03-26 22:16:41 +00:00
Jakob Stoklund Olesen e55003fb04 Also eliminate redundant spills downstream of inserted reloads.
This can happen when multiple sibling registers are spilled after live range
splitting.

llvm-svn: 127965
2011-03-20 05:44:58 +00:00
Jakob Stoklund Olesen 39488642d3 Change an argument to a LiveInterval instead of a register number to save some redundant lookups.
llvm-svn: 127964
2011-03-20 05:44:55 +00:00
Jakob Stoklund Olesen 8698507fe1 Add debug output.
llvm-svn: 127959
2011-03-19 23:02:47 +00:00
Jakob Stoklund Olesen 27320cb864 Hoist spills when the same value is known to be in less loopy sibling registers.
Stack slot real estate is virtually free compared to registers, so it is
advantageous to spill earlier even though the same value is now kept in both a
register and a stack slot.

Also eliminate redundant spills by extending the stack slot live range
underneath reloaded registers.

This can trigger a dead code elimination, removing copies and even reloads that
were only feeding spills.

llvm-svn: 127868
2011-03-18 04:23:06 +00:00