programs on targets with large register files. The root of the compile-time
overhead was the use of llvm::SmallVector to hold PhysRegEntries, which caused
a slowdown through calls to llvm::SmallVector::assign(N, 0). In contrast,
std::vector's assign uses the faster __platform_bzero to zero out primitive
buffers, while SmallVector's assign walks an iterator.
The fix for this was simply to replace the SmallVector with a dynamically
allocated buffer and to initialize or reinitialize that buffer based on the
total number of registers the target architecture requires. The changes
support cases where a pass manager may be reused for different targets. Note
that PhysRegEntries is allocated with calloc mainly for good form, and also to
quiet tools like Valgrind (see the comments for more information).
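A minimal sketch of the shape of the fix, assuming illustrative names rather
than the exact fields in InterferenceCache:

  #include <cstdlib>

  // Illustrative sketch; names are not the exact ones in the tree.
  unsigned char *PhysRegEntries = nullptr;
  unsigned PhysRegEntriesCount = 0;

  void reinitPhysRegEntries(unsigned NumRegs) {
    if (PhysRegEntriesCount == NumRegs)
      return;
    std::free(PhysRegEntries);
    // calloc zeroes the buffer with an optimized platform primitive and
    // keeps tools like Valgrind quiet about uninitialized reads.
    PhysRegEntries = static_cast<unsigned char *>(
        std::calloc(NumRegs, sizeof(unsigned char)));
    PhysRegEntriesCount = NumRegs;
  }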
There is an rdar tracking the fact that SmallVector doesn't have
platform-specific optimizations for cases like this, and I'll create a
bugzilla entry soon as well.
TL;DR: This fix replaces the expensive llvm::SmallVector<unsigned
char>::assign(N, 0) with a call to calloc for N bytes, which is much faster
because SmallVector's assign uses iterators.
llvm-svn: 200917
No functional change, just moved header files.
Targets can inject custom passes between register allocation and
rewriting. This makes it possible to tweak the register allocation
before rewriting, using the full global interference checking available
from LiveRegMatrix.
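For illustration, a target would opt in with something along these lines;
createMyRegTweakPass() is a hypothetical pass factory, and the sketch assumes
the usual TargetPassConfig override pattern:

  #include "llvm/CodeGen/TargetPassConfig.h"
  using namespace llvm;

  FunctionPass *createMyRegTweakPass(); // hypothetical pass factory

  class MyTargetPassConfig : public TargetPassConfig {
  public:
    using TargetPassConfig::TargetPassConfig;
    // Runs after the allocator has made its assignments but before
    // VirtRegRewriter replaces virtual registers with physical ones.
    bool addPreRewrite() override {
      addPass(createMyRegTweakPass());
      return true;
    }
  };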
llvm-svn: 168806
Stop depending on the LiveIntervalUnions in RegAllocBase, they are about
to be removed.
The changes mostly replace register alias iterators with regunit iterators
and query LiveRegMatrix instead of RegAllocBase.
InterferenceCache is converted to work with per-regunit
LiveIntervalUnions, and it checks fixed regunit interference separately,
using the fixed live intervals provided by LiveIntervalAnalysis.
The local splitting helper calcGapWeights() now also considers fixed regunit
interference, which is kept on the side.
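Roughly, the new query pattern looks like this sketch, where Matrix, VirtReg,
PhysReg, TRI, and LIS are assumed from the surrounding allocator context:

  // Per-regunit unions replace per-physreg unions.
  for (MCRegUnitIterator Units(PhysReg, TRI); Units.isValid(); ++Units) {
    LiveIntervalUnion &RU = Matrix->getLiveUnions()[*Units];
    // ... query RU for virtual register interference; fixed regunit
    // interference comes separately from LIS->getRegUnit(*Units) ...
  }

  // Or at a higher level, letting LiveRegMatrix check everything,
  // including the fixed regunit interference:
  if (Matrix->checkInterference(VirtReg, PhysReg) == LiveRegMatrix::IK_Free)
    Matrix->assign(VirtReg, PhysReg);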
llvm-svn: 158867
This makes global live range splitting behave identically with and
without register mask operands.
This is not necessarily the best way of using register masks for live
range splitting. It would be more efficient to first split global live
ranges around calls (i.e., register masks), and reserve the fine grained
per-physreg interference guidance for global live ranges that do not
cross calls.
For now the goal is to produce identical assembly when enabling register
masks.
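For reference, deciding whether a register mask interferes with a physreg is
a single bit test; the loop below is a sketch (handleMaskClobber() is
hypothetical), but MachineOperand::clobbersPhysReg() is the real helper:

  // A register mask operand lists preserved registers as a bit vector;
  // a clear bit means the call clobbers that physreg.
  for (const MachineOperand &MO : MI.operands())
    if (MO.isRegMask() &&
        MachineOperand::clobbersPhysReg(MO.getRegMask(), PhysReg))
      handleMaskClobber(MI, PhysReg); // hypothetical callback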
llvm-svn: 150259
Original commit message:
Count references to interference cache entries.
Each InterferenceCache::Cursor instance references a cache entry. A
non-zero reference count guarantees that the entry won't be reused for a
new register.
This makes it possible to have multiple live cursors examining
interference for different physregs.
The total number of live cursors into a cache must be kept below
InterferenceCache::getMaxCursors().
Code generation should be unaffected by this change, and it doesn't seem
to affect the cache replacement strategy either.
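The idea is the usual retain/release pattern; a simplified sketch of how a
cursor pins its entry (the real code lives in InterferenceCache.h, and the
details here are reduced to a plain counter):

  class Cursor {
    Entry *CacheEntry = nullptr;
    void setEntry(Entry *E) {
      // Releasing the old entry and pinning the new one keeps an entry
      // from being recycled for another physreg while a cursor uses it.
      if (CacheEntry)
        --CacheEntry->RefCount;
      CacheEntry = E;
      if (CacheEntry)
        ++CacheEntry->RefCount;
    }
  public:
    ~Cursor() { setEntry(nullptr); }
  };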
llvm-svn: 135130
Each InterferenceCache::Cursor instance references a cache entry. A
non-zero reference count guarantees that the entry won't be reused for a
new register.
This makes it possible to have multiple live cursors examining
interference for different physregs.
The total number of live cursors into a cache must be kept below
InterferenceCache::getMaxCursors().
Code generation should be unaffected by this change, and it doesn't seem
to affect the cache replacement strategy either.
llvm-svn: 135121
The cache entry referenced by the best split candidate could become
clobbered by an unsuccessful candidate.
The correct fix here is to use reference counts on the cache entries.
Coming up.
llvm-svn: 135113
When the greedy register allocator is splitting multiple global live ranges, it
tends to look at the same interference data many times. The InterferenceCache
class caches queries for unaltered LiveIntervalUnions.
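The validity check is essentially a version-tag comparison; simplified to a
single union, with Entry's PhysReg and Tag members assumed, it amounts to:

  // Each LiveIntervalUnion bumps an internal tag when it is modified, so
  // a cache entry is stale once a tag it recorded no longer matches.
  bool Entry::valid(LiveIntervalUnion *LIUArray) const {
    return !LIUArray[PhysReg].changedSince(Tag);
  }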
llvm-svn: 128764