We were failing to match the output line, which led to us collecting no
stats at all, which led to a divide-by-zero error.
Fixes PR15510.
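For illustration only (hypothetical helper, not the actual patch): the divide-by-zero comes from dividing by the number of collected samples, so when no output lines match, the averaging step has to bail out instead of dividing by zero.

  #include <vector>

  // Illustrative only: average the collected samples without dividing by
  // zero when nothing was matched.
  double averageSample(const std::vector<double> &Samples) {
    if (Samples.empty())
      return 0.0;                  // no matched lines -> no stats -> bail out
    double Sum = 0.0;
    for (double S : Samples)
      Sum += S;
    return Sum / Samples.size();   // safe: Samples.size() != 0 here
  }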
llvm-svn: 177084
Fixed a crash in the new "DWARF in .o files" line table linking function, where back() could end up being called on an empty std::vector.
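As a hedged illustration of the failure mode (hypothetical helper, not the actual linking code): std::vector::back() has undefined behavior on an empty vector, so the row table must be checked for emptiness first.

  #include <vector>

  // Illustrative only: never call back() on an empty table.
  template <typename RowTy>
  const RowTy *lastRowOrNull(const std::vector<RowTy> &Rows) {
    if (Rows.empty())
      return nullptr;      // calling Rows.back() here was the crash
    return &Rows.back();
  }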
llvm-svn: 177082
This yields a log(#ast_nodes) worst-case improvement with matchers like
stmt(unless(hasAncestor(...))).
Also changed the visitation order for ancestor matches to BFS, as the most
common use cases (for example, finding the closest enclosing function
definition) rely on that.
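For reference, a small sketch of the matcher shapes this affects, written against the public clang::ast_matchers API (the concrete node matchers and binding names here are illustrative, not taken from the patch):

  #include "clang/ASTMatchers/ASTMatchers.h"
  using namespace clang::ast_matchers;

  // Benefits from the memoized hasAncestor() walk: statements with no
  // for-loop ancestor.
  auto NotInLoop = stmt(unless(hasAncestor(forStmt())));

  // Relies on the BFS visitation order: the *closest* enclosing function
  // definition is the one that gets bound.
  auto Enclosing =
      declRefExpr(hasAncestor(functionDecl(isDefinition()).bind("func")));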
llvm-svn: 177081
Summary:
Aligns continuation lines of multi-line comments to the base
indentation level +1:
  class A {
    /*
     * test
     */
    void f() {}
  };
The first revision is a work in progress; the implementation is not yet complete.
Reviewers: djasper
Reviewed By: djasper
CC: cfe-commits, klimek
Differential Revision: http://llvm-reviews.chandlerc.com/D541
llvm-svn: 177080
The stronger binding of a string literal ending in ':' or '=' does not
really make sense if that is the only character in the string.
Before:
llvm::outs() << aaaaaaaaaaaaaaaaaaaaaaaa
             << "=" << bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb;
After:
llvm::outs() << aaaaaaaaaaaaaaaaaaaaaaaa << "="
             << bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb;
llvm-svn: 177075
The fundamental problem is that SROA didn't allow for overly wide loads
where the bits past the end of the alloca were masked away and the load
was sufficiently aligned to ensure there is no risk of a page fault or
other trapping behavior. With such widened loads, SROA would delete the
load entirely rather than clamping it to the size of the alloca in order
to allow mem2reg to fire. This was exposed by a test case that neatly
arranged for GVN to run first, widening certain loads, followed by an
inline step, and then SROA, which miscompiled the code. However, I see no
reason why this hasn't been plaguing us in other contexts. It seems
deeply broken.
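A rough sketch of the clamping idea described above (hypothetical helper, not the actual SROA code): an access that runs past the end of the alloca is narrowed to the overlapping bytes instead of being dropped.

  #include <algorithm>
  #include <cstdint>

  struct ByteRange { uint64_t Offset, Size; };

  // Clamp a possibly widened load or store to the bytes actually backed by
  // the alloca; the bits past the end were masked away anyway.
  ByteRange clampToAlloca(ByteRange Access, uint64_t AllocaSize) {
    uint64_t Begin = std::min(Access.Offset, AllocaSize);
    uint64_t End = std::min(Access.Offset + Access.Size, AllocaSize);
    return {Begin, End - Begin};
  }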
Diagnosing all of the above took all of 10 minutes of debugging. The
really annoying aspect is that fixing this completely breaks the pass.
;] There was an implicit reliance on the fact that no loads or stores
extended past the alloca once we decided to rewrite them in the final
stage of SROA. This was used to encode information about whether the
loads and stores had been split across multiple partitions of the
original alloca. That required threading explicit tracking of whether
a *use* of a partition is split across multiple partitions.
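The extra bookkeeping amounts to something like the following (names are hypothetical, not the real SROA data structures):

  #include <cstdint>

  // Each use of a partition now carries an explicit flag recording whether
  // the originating load or store was split across multiple partitions,
  // instead of inferring that from accesses never extending past the alloca.
  struct PartitionUse {
    uint64_t BeginOffset = 0;
    uint64_t EndOffset = 0;
    bool IsSplit = false;
  };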
Once that was done, another problem arose: we allowed splitting of
integer loads and stores iff they were loads and stores to the entire
alloca. This is a really arbitrary limitation, and splitting at least
some integer loads and stores is crucial to maximize promotion
opportunities. My first attempt was to start removing the restriction
entirely, but currently that does Very Bad Things by causing *many*
common alloca patterns to be fully decomposed into i8 operations and
lots of or-ing together to produce larger integers on demand. The code
bloat is terrifying. That is still the right end-goal, but substantial
work must be done to either merge partitions or ensure that small i8
values are eagerly merged in some other pass. Sadly, figuring all this
out took essentially all the time and effort here.
So the end result is that we allow splitting only when the load or store
at least covers the alloca. That ensures widened loads and stores don't
hurt SROA, and that we don't rampantly decompose operations more than we
have previously.
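Stated as a hedged predicate sketch (hypothetical helper, with offsets relative to the start of the alloca):

  #include <cstdint>

  // Splitting an integer load or store is allowed only when the access
  // starts at the beginning of the alloca and at least covers it; it may
  // extend past the end after widening.
  bool isSplittableIntegerAccess(uint64_t AccessOffset, uint64_t AccessSize,
                                 uint64_t AllocaSize) {
    return AccessOffset == 0 && AccessSize >= AllocaSize;
  }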
All of this was already fairly well tested, and so I've just updated the
tests to cover the wide load behavior. I can add a test that crafts the
pass-ordering magic which caused the original PR, but that seems really
brittle and would provide little benefit. The fundamental problem is that
widened loads should Just Work.
llvm-svn: 177055
isa and a cast inside the assert. The efficiency concern isn't really
important here. The code should likely be cleaned up a bit more,
especially getting a message into the assert.
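For context, one possible shape of the pattern being referred to (illustrative only; the actual code may differ):

  #include <cassert>
  #include "llvm/IR/Instructions.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  void check(Value *V) {
    // An isa<> check and a cast<> both inside the assert: the cast<>
    // re-checks the type (the minor efficiency concern), and the assert
    // still has no message attached.
    assert(!isa<StoreInst>(V) || cast<StoreInst>(V)->isSimple());
    (void)V;   // avoid an unused-parameter warning when asserts are disabled
  }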
Please review, Rafael.
llvm-svn: 177053