has a reference to it. Unfortunately, that doesn't work for codegen passes
since we don't get notified of MBBs being deleted (the original BB stays).
Use that fact to our advantage: after printing a function, check if
any of the IL BBs corresponds to a symbol that was not printed. This fixes
PR11202.
llvm-svn: 144674
block sequence when recovering from unanalyzable control flow
constructs, *always* use the function sequence. I'm not sure why I ever
went down the path of trying to use the loop sequence; it is
fundamentally not the correct sequence to use. We're trying to preserve
the incoming layout in the cases of unreasonable control flow, and that
is only encoded at the function level. We already have a filter to
select *exactly* the sub-set of blocks within the function that we're
trying to form into a chain.
The resulting code layout is also significantly better because of this.
In several places we were ending up with completely unreasonable control
flow constructs due to the ordering chosen by the loop structure for its
internal storage. This change removes a completely wasteful vector of
basic blocks, saving memory allocation in the common case even though it
costs us CPU in the fairly rare case of unnatural loops. Finally, it
fixes the latest crasher reduced out of GCC's single source. Thanks
again to Benjamin Kramer for the reduction; my bugpoint skills failed at
it.
llvm-svn: 144627
Two new TargetInstrInfo hooks let the target tell ExecutionDepsFix
about instructions whose partial register updates cause unwanted false
dependencies.
The ExecutionDepsFix pass will break the false dependencies if the
updated register was written in the previous N instructions.
The small loop added to sse-domains.ll runs twice as fast with
dependency-breaking instructions inserted.
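A rough, standalone illustration of the clearance check (the names here are
invented for the sketch and are not the actual hook API):

    #include <cstdint>

    // Decide whether a dependency-breaking instruction (e.g. an xorps that
    // zeroes the register on x86) should be inserted before a partial update.
    bool shouldBreakPartialDep(uint64_t CurInstrNum, uint64_t LastDefNum,
                               uint64_t Clearance) {
      // A write within the last Clearance instructions means the partial
      // update would carry a false dependence on that write.
      return CurInstrNum - LastDefNum < Clearance;
    }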
llvm-svn: 144602
Keep track of the last instruction to define each register individually
instead of per DomainValue. This lets us track more accurately when a
register was last written.
Also track register ages across basic blocks. When entering a new
basic block, use the least stale predecessor def as a worst case
estimate for register age.
The register age is used to arbitrate between conflicting domains. The
most recently defined register wins.
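A minimal standalone sketch of the worst-case merge, assuming per-register
def counters are recorded as live-out data for each block (the container
layout is invented for the example):

    #include <algorithm>
    #include <climits>
    #include <vector>

    // Seed a block's per-register "last def" counters from its predecessors.
    // The most recent (least stale) def wins, as the worst-case estimate.
    std::vector<int> enterBlock(
        const std::vector<std::vector<int>> &PredLiveOutAges, unsigned NumRegs) {
      std::vector<int> Ages(NumRegs, INT_MIN);     // INT_MIN == never defined
      for (const auto &Pred : PredLiveOutAges)
        for (unsigned R = 0; R != NumRegs; ++R)
          Ages[R] = std::max(Ages[R], Pred[R]);
      return Ages;
    }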
llvm-svn: 144601
"kill". This looks like a bug upstream. Since that's going to take some time
to understand, loosen the assertion and disable the optimization when
multiple kills are seen.
llvm-svn: 144568
instructions of the two-address operands) in order to avoid inserting copies.
This fixes the few regressions introduced when the two-address hack was
disabled (without regressing the improvements).
rdar://10422688
llvm-svn: 144559
cleans up all the chains allocated during the processing of each
function so that for very large inputs we don't just grow memory usage
without bound.
llvm-svn: 144533
tests when I forcibly enabled block placement.
It is apparently possible for an unanalyzable block to fall through to
a non-loop block. I don't actually believe this is correct; I believe
that 'canFallThrough' is returning true needlessly for the code
construct, and I've left a bit of a FIXME on the verification code to
try to track down why this is coming up.
Anyways, removing the assert doesn't degrade the correctness of the algorithm.
llvm-svn: 144532
this pass. We're leaving already merged blocks on the worklist, and
scanning them again and again only to determine each time through that
indeed they aren't viable. We can instead remove them once we're going
to have to scan the worklist. This is the easy way to implement removing
them. If this remains on the profile (as I somewhat suspect it will), we
can get a lot more clever here, as the worklist's order is essentially
irrelevant. We can use swapping and fold the two loops to reduce
overhead even when there are many blocks on the worklist but only a few
of them are removed.
llvm-svn: 144531
time it is queried to compute the probability of a single successor.
This makes computing the probability of every successor of a block in
sequence... really really slow. ;] This switches to a linear walk of the
successors rather than a quadratic one. One of several quadratic
behaviors slowing this pass down.
I'm not really thrilled with moving the sum code into the public
interface of MBPI, but I don't (at the moment) have ideas for a better
interface. The direction I'm thinking of for a better interface is to
have MBPI actually retain much more state and make *all* of these
queries cheap. That's a lot of work, and would require invasive changes.
Until then, this seems like the least bad (i.e., least quadratic)
solution. Suggestions welcome.
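For illustration only (this is not the MBPI interface, just the shape of the
fix), the linear walk looks roughly like this:

    #include <cstdint>
    #include <vector>

    // Sum the successor weights once per block, then derive every successor's
    // probability from the cached sum, instead of re-summing the weights
    // inside each single-successor query.
    std::vector<double> successorProbs(const std::vector<uint32_t> &Weights) {
      uint64_t Sum = 0;
      for (uint32_t W : Weights)
        Sum += W;                                          // one pass
      std::vector<double> Probs;
      Probs.reserve(Weights.size());
      for (uint32_t W : Weights)
        Probs.push_back(Sum ? double(W) / double(Sum) : 0.0);
      return Probs;                                        // O(n), not O(n^2)
    }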
llvm-svn: 144530
correctly handle blocks whose successor weights sum to more than
UINT32_MAX. This is slightly less efficient, but the entire thing is
already linear in the number of successors. Calling it within any hot
routine is a mistake, and indeed no one is calling it. It also
simplifies the code.
llvm-svn: 144527
the sum of the edge weights not overflowing uint32, and crashed when
they did. This is generally safe as BranchProbabilityInfo tries to
provide this guarantee. However, the CFG can get modified during codegen
in a way that grows the *sum* of the edge weights. This doesn't seem
unreasonable (imagine just adding more blocks all with the default
weight of 16), but it is hard to come up with a case that actually
triggers 32-bit overflow. Fortunately, the single-source GCC build is
good at this. The solution isn't very pretty, but it's no worse than the
previous code. We're already summing all of the edge weights on each
query, we can sum them, check for an overflow, compute a scale, and sum
them again.
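A standalone sketch of that scheme (this is not the actual MBPI code; the
helper is invented for the example):

    #include <cstdint>
    #include <vector>

    // Sum the weights in 64 bits; if the sum would overflow uint32, derive a
    // scale factor and re-sum the scaled weights so the ratio still fits.
    void scaledEdgeProbability(const std::vector<uint32_t> &Weights,
                               unsigned TargetIdx, uint32_t &Num,
                               uint32_t &Denom) {
      uint64_t Sum = 0;
      for (uint32_t W : Weights)
        Sum += W;                                  // first summing pass
      uint32_t Scale =
          Sum > UINT32_MAX ? uint32_t(Sum / UINT32_MAX + 1) : 1;
      uint32_t ScaledSum = 0;
      for (uint32_t W : Weights)
        ScaledSum += W / Scale;                    // second summing pass
      Num = Weights[TargetIdx] / Scale;
      Denom = ScaledSum;
    }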
I've included a *greatly* reduced test case out of the GCC source that
triggers it. It's a pretty lame test, as it clearly is just barely
triggering the overflow. I'd like to have something that is much more
definitive, but I don't understand the fundamental pattern that triggers
an explosion in the edge weight sums.
The buggy code is duplicated within this file. I'll collapse them into
a single implementation in a subsequent commit.
llvm-svn: 144526
get loop info structures associated with them, and so we need some way
to make forward progress selecting and placing basic blocks. The
technique used here is pretty brutal -- it just scans the list of blocks
looking for the first unplaced candidate. It keeps placing blocks like
this until the CFG becomes tractable.
The cost is somewhat unfortunate: it requires allocating a vector of all
basic block pointers eagerly. I have some ideas about how to simplify
and optimize this, but I'm trying to get the logic correct first.
Thanks to Benjamin Kramer for the reduced test case out of GCC. Sadly
there are other bugs that GCC is tickling that I'm reducing and working
on now.
llvm-svn: 144516
This makes no difference for normal defs, but early clobber dead defs
now look like:
[Slot_EarlyClobber; Slot_Dead)
instead of:
[Slot_EarlyClobber; Slot_Register).
Live ranges for normal dead defs look like:
[Slot_Register; Slot_Dead)
as before.
llvm-svn: 144512
when we fail to place all the blocks of a loop. Currently this is
happening for unnatural loops, and this logic helps point to the
problem more immediately.
llvm-svn: 144504
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.
The load and store slots have not been needed since the deferred spill
code insertion framework was deleted.
The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals. In reality, these slots were used to distinguish
early-clobber defs from normal defs.
The new naming scheme also has 4 slots, but the names match how the
slots are really used. This is a purely mechanical renaming, but some
of the code makes a lot more sense now.
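For reference, the four slots look like this (the interval examples in the
entry above use three of the names; Slot_Block and the one-line summaries
are my additions here):

    // Approximate meaning of the four slots at each instruction position.
    enum Slot {
      Slot_Block,         // live-in values / block boundary
      Slot_EarlyClobber,  // early-clobber defs, and uses that read them
      Slot_Register,      // normal register defs and uses
      Slot_Dead           // end point of live ranges for dead defs
    };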
llvm-svn: 144503
branches that also may involve fallthrough. In the case of blocks with
no fallthrough, we can still re-order the blocks profitably. For example,
instruction decoding will in some cases continue past an indirect jump,
making it profitable to lay out its most likely successor there.
Note, no test case. I don't know how to write a test case that exercises
this logic, but it matches the described desired semantics in
discussions with Jakob and others. If anyone has a nice example of IR
that will trigger this, that would be lovely.
Also note, there are still assertion failures in real world code with
this. I'm digging into those next, now that I know this isn't the cause.
llvm-svn: 144499
second algorithm, but only loosely. It is more heavily based on the last
discussion I had with Andy. It continues to walk from the inner-most
loop outward, but there is a key difference. With this algorithm we
ensure that as we visit each loop, the entire loop is merged into
a single chain. At the end, the entire function is treated as a "loop",
and merged into a single chain. This chain forms the desired sequence of
blocks within the function. Switching to a single algorithm removes my
biggest problem with the previous approaches -- they had different
behavior depending on which system triggered the layout. Now there is
exactly one algorithm and one basis for the decision making.
The other key difference is how the chain is formed. This is based
heavily on the idea Andy mentioned of keeping a worklist of blocks that
are viable layout successors based on the CFG. Having this set allows us
to consistently select the best layout successor for each block. It is
expensive though.
The code here remains very rough. There is a lot that needs to be done
to clean up the code, and to make the runtime cost of this pass much
lower. Very much WIP, but this was a giant chunk of code and I'd rather
folks see it sooner than later. Everything remains behind a flag of
course.
I've added a couple of tests to exercise the issue that motivated this
iteration: loop structure preservation. I've also fixed one test
that was exhibiting the broken behavior of the previous version.
llvm-svn: 144495
It was off by default.
The new register allocators don't have the problems that made it
necessary to reallocate registers during stack slot coloring.
llvm-svn: 144481
It is worth noting that the old spiller would split live ranges around
basic blocks. The new spiller doesn't do that.
PBQP should do its own live range splitting with
SplitEditor::splitSingleBlock() if desired. See
RAGreedy::tryBlockSplit().
llvm-svn: 144476
RegAllocGreedy has been the default for six months now.
Deleting RegAllocLinearScan makes it possible to also delete
VirtRegRewriter and clean up the spiller code.
llvm-svn: 144475
instance and a concrete inlined instance are the use of DW_TAG_subprogram
instead of DW_TAG_inlined_subroutine and who owns the tree.
We were also omitting DW_AT_inline from the abstract roots. To fix this,
make sure we mark abstract instance roots with DW_AT_inline even when
we have only out-of-line instances referring to them with DW_AT_abstract_origin.
FileCheck is not a very good tool for tests like this; maybe we should add
a -verify mode to llvm-dwarfdump.
llvm-svn: 144441
instruction lower optimization" in the pre-RA scheduler.
The optimization, or rather the hack, was done before the MI use-list was available.
Now we should be able to implement it in a better way, perhaps in the
two-address pass until a MI scheduler is available.
Now that the scheduler has to backtrack to handle call sequences, adding
artificial scheduling constraints is just not safe. Furthermore, the hack
does not take all the other scheduling decisions into consideration, so it's
just as likely to pessimize code. So I view disabling this optimization as
a good thing regardless of PR11314.
llvm-svn: 144267
The TII.foldMemoryOperand hook preserves implicit operands from the
original instruction. This is not what we want when those implicit
operands refer to the register being spilled.
Implicit operands referring to other registers are preserved.
This fixes PR11347.
llvm-svn: 144247
dragonegg self-host buildbot will recover (it is complaining about object
files differing between different build stages). Original commit message:
Add a hack to the scheduler to disable pseudo-two-address dependencies in
basic blocks containing calls. This works around a problem in which
these artificial dependencies can get tied up in calling sequence
scheduling in a way that makes the graph unschedulable with the current
approach of using artificial physical register dependencies for calling
sequences. This fixes PR11314.
llvm-svn: 144188
During the initial RPO traversal of the basic blocks, remember the ones
that are incomplete because of back-edges from predecessors that haven't
been visited yet.
After the initial RPO, revisit all those loop headers so the incoming
DomainValues on the back-edges can be properly collapsed.
This will properly fix execution domains on software pipelined code,
like the included test case.
llvm-svn: 144151
When merging two uncollapsed DomainValues, place a link to the active
DomainValue from the passive DomainValue. This allows old stale
references to the passive DomainValue to be updated to point to the
active DomainValue.
The new resolve() function finds the active DomainValue and updates the
pointer.
This change makes old live-out lists more useful since they may contain
uncollapsed DomainValues that have since been merged into other
DomainValues.
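A minimal sketch of the forwarding idea (the member names are assumed for
the example; the real pass also adjusts reference counts when it rewrites
the pointer):

    struct DomainValue {
      DomainValue *Next = nullptr;   // set when this value is merged away
      // ... reference count, available-domain mask, instruction list ...
    };

    // Find the active DomainValue and update the stale reference in place.
    DomainValue *resolve(DomainValue *&Ref) {
      DomainValue *DV = Ref;
      if (!DV || !DV->Next)
        return DV;                   // already active (or null)
      while (DV->Next)
        DV = DV->Next;               // follow merge links to the active value
      Ref = DV;
      return DV;
    }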
llvm-svn: 144149
This new function will decrement the reference count, and collapse a
domain value when the last reference is gone.
This simplifies DomainValue reference counting, and decouples it from
the LiveRegs array.
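Roughly, and with names assumed for the sketch:

    struct DomainValue { unsigned Refs; /* domains, instructions, ... */ };
    void collapseToFirstDomain(DomainValue *DV);   // assumed collapse helper

    // Drop one reference; collapse the value when the last reference is gone.
    void release(DomainValue *DV) {
      if (DV && --DV->Refs == 0)
        collapseToFirstDomain(DV);
    }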
llvm-svn: 144131
basic blocks containing calls. This works around a problem in which
these artificial dependencies can get tied up in calling sequence
scheduling in a way that makes the graph unschedulable with the current
approach of using artificial physical register dependencies for calling
sequences. This fixes PR11314.
llvm-svn: 144124
The old value may still be referenced by some live-out list, and we
don't want to collapse those instructions twice.
This fixes the "Can only swizzle VMOVD" assertion in some armv7 SPEC
builds.
<rdar://problem/10413292>
llvm-svn: 144117
Add support for trimming constants to GetDemandedBits. This fixes some funky
constant generation that occurs when stores are expanded for targets that don't
support unaligned stores natively.
llvm-svn: 144102
When this field is true, it means that the load is from a constant
(run-time or compile-time constant) and so can be hoisted out of loops or
moved around other memory accesses.
llvm-svn: 144100
DomainValues that are only used by "don't care" instructions are now
collapsed to the first possible execution domain after all basic blocks
have been processed. This typically means the PS domain on x86.
For example, the vsel_i64 and vsel_double functions in sse2-blend.ll are
completely collapsed to the PS domain instead of containing a mix of
execution domains created by isel.
llvm-svn: 144037
The enterBasicBlock() function is combining live-out values from
predecessor blocks. The RPO traversal means that most predecessors
have been visited when that happens; only back-edges are missing.
llvm-svn: 144025
the pubnames and pubtypes tables. LLDB can currently use this format;
a full spec is forthcoming, and submission for standardization is planned.
A basic summary:
Each DWARF accelerator table is an indirect hash table optimized
for null lookup rather than access to known data. The tables are output into
an on-disk format that looks like this:
.-------------.
| HEADER |
|-------------|
| BUCKETS |
|-------------|
| HASHES |
|-------------|
| OFFSETS |
|-------------|
| DATA |
`-------------'
where the header contains a magic number, version, type of hash function,
the number of buckets, total number of hashes, and room for a special
struct of data and the length of that struct.
The buckets contain an index (e.g. 6) into the hashes array. The hashes
section contains all of the 32-bit hash values in contiguous memory, and
the offsets contain the offset into the data area for the particular
hash.
For a lookup example, we could hash a function name and take it modulo the
number of buckets, giving us our bucket. From there we take the bucket value
as an index into the hashes table and look at each successive hash as long
as the hash value is still the same modulo result (bucket value) as earlier.
If we have a match, we look at that same entry in the offsets table and
grab the offset in the data for our final match.
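A standalone sketch of that lookup (the struct layout and names below are
invented for the example, not the actual on-disk reader):

    #include <cstdint>
    #include <vector>

    struct AccelTable {
      uint32_t NumBuckets = 0, NumHashes = 0;
      std::vector<uint32_t> Buckets;   // index into Hashes for each bucket
      std::vector<uint32_t> Hashes;    // all 32-bit hash values, contiguous
      std::vector<uint32_t> Offsets;   // data-area offset for each hash
    };

    // Return the data-area offset for the hashed name, or UINT32_MAX if the
    // name is not present.
    uint32_t lookup(const AccelTable &T, uint32_t Hash) {
      uint32_t Bucket = Hash % T.NumBuckets;
      for (uint32_t I = T.Buckets[Bucket];
           I < T.NumHashes && T.Hashes[I] % T.NumBuckets == Bucket; ++I)
        if (T.Hashes[I] == Hash)
          return T.Offsets[I];         // match: entry lives at Offsets[I]
      return UINT32_MAX;
    }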
llvm-svn: 143921