properly account for the *global* probability of the edge being taken.
This manifested as a very large number of unconditional branches to
blocks being merged against the CFG even though they weren't
particularly hot within the CFG.
The fix is to check whether the edge being merged is both locally hot
relative to other successors for the source block, and globally hot
compared to other (unmerged) predecessors of the destination block.
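For illustration, here is a minimal sketch of that two-part hotness test,
using current LLVM APIs and a hypothetical helper name (the actual check
lives inside the block placement pass and also has to track which
predecessors are already merged):

#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineBlockFrequencyInfo.h"
#include "llvm/CodeGen/MachineBranchProbabilityInfo.h"
#include "llvm/Support/BranchProbability.h"
using namespace llvm;

// Hypothetical predicate: merge Src->Dst only if the edge is both the
// hottest way out of Src and the hottest way into Dst.
static bool shouldMergeEdge(MachineBasicBlock *Src, MachineBasicBlock *Dst,
                            const MachineBranchProbabilityInfo &MBPI,
                            const MachineBlockFrequencyInfo &MBFI) {
  // Locally hot: no other successor of Src is more probable.
  BranchProbability Prob = MBPI.getEdgeProbability(Src, Dst);
  for (MachineBasicBlock *Succ : Src->successors())
    if (MBPI.getEdgeProbability(Src, Succ) > Prob)
      return false;
  // Globally hot: no other predecessor of Dst contributes a higher
  // absolute edge frequency. (The real pass also excludes predecessors
  // that have already been merged elsewhere.)
  BlockFrequency Freq = MBFI.getBlockFreq(Src) * Prob;
  for (MachineBasicBlock *Pred : Dst->predecessors())
    if (Pred != Src &&
        MBFI.getBlockFreq(Pred) * MBPI.getEdgeProbability(Pred, Dst) > Freq)
      return false;
  return true;
}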
This introduces a new crasher on GCC single-source, but it's currently
behind a flag, and Ben has offered to work on the reduction. =]
llvm-svn: 145010
is actually being tested. Also add some FileCheck goodness to much more
carefully check that the output is exactly the result we want. Before this test
would only have failed through an assert failure if the underlying fix
were reverted.
Also, add some weight metadata and a comment explaining exactly what is
going on to a tricky section of the test case. Originally, we were
getting very unlucky and trying to form a block chain that isn't
actually profitable. I'm working on a fix to avoid forming these
unprofitable chains, and that would also have masked any failure from
this test case. The easy solution is to add some metadata that makes it
*really* profitable to form the bad chain here.
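For illustration, a minimal sketch of attaching that kind of weight
metadata programmatically, using the current MDBuilder API (which
postdates this commit); the weights here are made up to make one edge
overwhelmingly hot:

#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
using namespace llvm;

// Make the taken edge ~1000x hotter than the other, so the layout pass
// sees the "bad" chain as really profitable.
static void biasBranch(BranchInst *BI) {
  MDBuilder MDB(BI->getContext());
  BI->setMetadata(LLVMContext::MD_prof,
                  MDB.createBranchWeights(1000, 1));
}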
llvm-svn: 145006
formation phase and into the initial walk of the basic blocks. We
essentially pre-merge all blocks where unanalyzable fallthrough exists,
as we won't be able to update the terminators effectively after any
reorderings. This is quite a bit more principled as there may be CFGs
where the second half of the unanalyzable pair has some analyzable
predecessor that gets placed first. The second half may then be placed
next, implicitly breaking the unanalyzable branch even though we never even
looked at the part that isn't analyzable. I've included a test case that
triggers this (thanks Benjamin yet again!), and I'm hoping to synthesize
some more general ones as I dig into related issues.
Also, to make this new scheme work we have to be able to handle branches
into the middle of a chain, so add this check. We always fall back on the
incoming ordering.
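A minimal sketch of that check, with a hypothetical stand-in for the
pass's internal chain type (the real one lives inside
MachineBlockPlacement): a branch can only target the head of a chain, so
anything aimed at an interior block keeps the incoming ordering.

#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
using namespace llvm;

// Hypothetical stand-in for a chain of blocks laid out consecutively.
using BlockChain = SmallVector<MachineBasicBlock *, 4>;

// A branch into the middle of a chain cannot be honored by layout:
// only the chain's first block is a valid placement target.
static bool canBranchIntoChain(const BlockChain &DestChain,
                               const MachineBasicBlock *Dest) {
  return !DestChain.empty() && DestChain.front() == Dest;
}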
Finally, this starts to really underscore a known limitation of the
current implementation -- we don't consider broken predecessors when
merging successors. This can cause major missed opportunities, and is
something I'm planning on looking at next (modulo more bug reports).
llvm-svn: 144994
The loop tree's inclusive block lists are painful and expensive to
update. (I have no idea why they're inclusive.) The design was
supposed to handle this case but the implementation missed it and my
unit tests weren't thorough enough.
Fixes PR11335: loop unroll update.
llvm-svn: 144970
The right way to check for a binary operation is
cast<BinaryOperator>. The original check, cast<Instruction> &&
numOperands() == 2, would match phi "instructions", leading to an
infinite loop in an extreme corner case: a useless phi with operands
[self, constant] that prior optimization passes failed to remove,
being used in the loop by another useless phi, in turn being used by an
lshr or udiv.
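A minimal sketch of the difference, with the buggy predicate shown in
the comment:

#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Value.h"
#include "llvm/Support/Casting.h"
using namespace llvm;

// Correct: BinaryOperator covers exactly the two-operand arithmetic and
// logical instructions, and excludes PHI nodes.
static bool isBinaryOp(const Value *V) {
  // Buggy form: isa<Instruction>(V) &&
  //             cast<Instruction>(V)->getNumOperands() == 2
  // also matches a two-operand phi like %p = phi [ %p, ... ], [ 1, ... ],
  // which can send the traversal around the cycle forever.
  return isa<BinaryOperator>(V);
}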
Fixes PR11350: runaway iteration assertion.
llvm-svn: 144935
ADDs. MaxOffs is used as a threshold to limit the size of the offset. The
tradeoffs are: (1) If we can't materialize the large constant then we'll cause
fast-isel to bail. (2) Too large an offset can't be directly encoded in the ADD,
resulting in a MOV+ADD. Generally not a bad thing because otherwise we would
have had ADD+ADD, but on Thumb this turns into a MOVS+MOVT+ADD. Working on a fix
for that. (3) Conversely, with too low a threshold we'll miss opportunities to
coalesce ADDs.
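A minimal sketch, with hypothetical names and a made-up bound, of the
coalescing logic that the threshold controls:

#include <cstdint>
#include <vector>

// Hypothetical MaxOffs: assumed bound on what one ADD can encode. (A
// single offset larger than this would still need to be materialized
// separately.)
static const int64_t MaxOffs = 4096;

// Hypothetical emission hook: emit one ADD per flushed total.
void emitAdd(int64_t Offs);

void coalesceAdds(const std::vector<int64_t> &Offsets) {
  int64_t TotalOffs = 0;
  for (int64_t Off : Offsets) {
    if (TotalOffs + Off <= MaxOffs) {
      TotalOffs += Off;     // still encodable: keep coalescing
    } else {
      emitAdd(TotalOffs);   // flush what we have as a single ADD
      TotalOffs = Off;
    }
  }
  if (TotalOffs)
    emitAdd(TotalOffs);
}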
rdar://10412592
llvm-svn: 144886
properly quote strings when writing the CMakeFiles/Makefile.cmake output file
(which lists the dependencies). This shows up when using CMake with the
MSYS Makefiles generator.
llvm-svn: 144873
We don't (yet) have the granularity in the fixups to be specific about which
bitranges are affected. That's a future cleanup, but we're not there yet.
llvm-svn: 144852
for a single miss and not all predecessor instructions that get selected by
the selection DAG instruction selector. This is still not exact (e.g., it
overstates misses when folded/dead instructions are present), but it is a step in
the right direction.
llvm-svn: 144832
and code model. This eliminates the need to pass OptLevel flag all over the
place and makes it possible for any codegen pass to use this information.
llvm-svn: 144788
There may be many invokes that share one landing pad, and the previous code
would record the landing pad once for each invoke. Besides the wasted
effort, a pair of volatile loads gets inserted every time the landing pad is
processed. The rest of the code can get optimized away when a landing pad
is processed repeatedly, but the volatile loads remain, resulting in code like:
LBB35_18:
Ltmp483:
ldr r2, [r7, #-72]
ldr r2, [r7, #-68]
ldr r2, [r7, #-72]
ldr r2, [r7, #-68]
ldr r2, [r7, #-72]
ldr r2, [r7, #-68]
ldr r2, [r7, #-72]
ldr r2, [r7, #-68]
ldr r2, [r7, #-72]
ldr r2, [r7, #-68]
ldr r2, [r7, #-72]
ldr r2, [r7, #-68]
ldr r2, [r7, #-72]
ldr r2, [r7, #-68]
ldr r2, [r7, #-72]
ldr r2, [r7, #-68]
ldr r4, [r7, #-72]
ldr r2, [r7, #-68]
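A minimal sketch, with a hypothetical setup helper and current LLVM
container APIs, of the usual shape of the fix: record each landing pad
the first time it is seen, so pads shared by many invokes are processed
only once.

#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Hypothetical one-time setup for a landing pad (the work that used to
// re-insert the volatile loads on every visit).
void recordLandingPad(const BasicBlock *LPad);

void recordLandingPads(const SmallVectorImpl<const InvokeInst *> &Invokes) {
  SmallPtrSet<const BasicBlock *, 8> Seen;
  for (const InvokeInst *II : Invokes) {
    const BasicBlock *LPad = II->getUnwindDest();
    if (!Seen.insert(LPad).second)
      continue; // this landing pad was already recorded; skip the duplicate
    recordLandingPad(LPad);
  }
}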
llvm-svn: 144787
This same basic code was in the older version of the SjLj exception handling,
but it was removed in the recent revisions to that code. It needs to be there.
llvm-svn: 144782
The EmitBasePointerRecalculation function has two problems, one minor and one
fatal. The minor problem is that it inserts the code at the setjmp
instead of in the dispatch block. The fatal problem is that at the point
where this code runs, we don't know whether there will be a base pointer,
so the entire function is a no-op. The base pointer recalculation needs to
be handled as it was before, by inserting a pseudo instruction that gets
expanded late.
Most of the support for the old approach is still here, but it no longer
has any connection to the eh_sjlj_dispatchsetup intrinsic. Clean up the
parts related to the intrinsic and just generate the pseudo instruction
directly.
llvm-svn: 144781
This will widen 32-bit register vmov instructions to 64-bit when
possible. The 64-bit vmovd instructions can then be translated to NEON
vorr instructions by the execution dependency fix pass.
The copies are only widened if they are marked as clobbering the whole
D-register.
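A minimal sketch of that guard, with hypothetical predicates standing in
for the real operand and register-class inspection:

#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Hypothetical helpers; the real logic inspects the copy's implicit
// operands and register classes.
bool clobbersFullDReg(const MachineInstr &MI);
void rewriteAsVMOVD(MachineInstr &MI);

// Widening an S-register vmov to a full vmovd also reads and writes the
// sibling S-register lane, so it is only safe when the copy is already
// marked as clobbering the whole D-register.
bool widenVMOV(MachineInstr &MI) {
  if (!clobbersFullDReg(MI))
    return false;
  rewriteAsVMOVD(MI); // on NEON cores this can later become a vorr
  return true;
}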
llvm-svn: 144734