I brazenly think this change is slightly simpler than r178793 because:
- no "state" in functor
- "OpndPtrs[i]" looks simpler than "&Opnds[OpndIndices[i]]"
While I can reproduce the problem under Valgrind, it is rather difficult to come up
with a standalone test case. The reason is that when an iterator is invalidated,
the stale invalidated elements are not yet clobbered with nonsense data, so the
optimizer can still proceed successfully.
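The failure mode is easy to see in a standalone sketch (hypothetical code, not the
pass's actual data structures): a pointer into a std::vector dangles after the
vector reallocates, yet the freed memory usually still holds the old values, so
only a tool like Valgrind notices.

  #include <vector>

  int main() {
    std::vector<int> Opnds = {1, 2, 3};
    int *OpndPtr = &Opnds[0]; // pointer into the vector's current storage
    Opnds.push_back(4);       // may reallocate; OpndPtr now dangles
    return *OpndPtr;          // use-after-free that often still "works",
                              // because the freed block is not clobbered yet
  }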
Thanks to Benjamin for fixing this bug and generously providing the test case.
llvm-svn: 179062
When two template decls with the same name are used in this diagnostic,
force them to print their qualified names. This changes the bad message of:
candidate template ignored: could not match 'array' against 'array'
to the better message of:
candidate template ignored: could not match 'NS2::array' against 'NS1::array'
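A hedged illustration (not the commit's actual test case) of code that triggers
this note:

  namespace NS1 { template <typename T> struct array {}; }
  namespace NS2 { template <typename T> struct array {}; }

  template <typename T> void f(NS2::array<T>);

  void g() {
    NS1::array<int> a;
    f(a); // note: candidate template ignored: could not match
          //       'NS2::array' against 'NS1::array'
  }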
llvm-svn: 179056
This was committed without tests and contains obvious bugs. That's not
acceptable. It broke address sanitizer for most programs using glob(3).
llvm-svn: 179054
The idea is to indent according to operator precedence, pretty much
identically to how the code would be indented with parentheses.
Before:
bool value = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +
             aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +
             aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa ==
             aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa *
             bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb +
             bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb &&
             aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa *
             aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa >
             ccccccccccccccccccccccccccccccccccccccccc;
After:
bool value = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +
                     aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +
                     aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa ==
                 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa *
                         bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb +
                     bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb &&
             aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa *
                     aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa >
                 ccccccccccccccccccccccccccccccccccccccccc;
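With explicit parentheses and shortened operand names (a sketch; these names are
not in the original), the grouping the new indentation mirrors is:

  bool value = ((aaa + aaa + aaa) == (aaa * bbb + bbb)) &&
               ((aaa * aaa) > ccc);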
llvm-svn: 179049
The costs are overfitted so that I can still use the legalization factor.
For example, the following kernel has about half the throughput when vectorized
compared to unvectorized when compiled with SSE2. Before this patch we would vectorize it.
unsigned short A[1024];
double B[1024];

void f() {
  int i;
  for (i = 0; i < 1024; ++i) {
    B[i] = (double) A[i];
  }
}
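For intuition, here is a sketch of the scalarization cost under SSE2 (hand-written
intrinsics, not code from the patch; the function name is made up): each 128-bit
register holds only two doubles, so converting eight unsigned shorts takes a
zero-extend plus four separate convert-and-store steps.

  #include <emmintrin.h>

  void convert8(const unsigned short *a, double *b) {
    __m128i v = _mm_loadu_si128((const __m128i *)a);  // 8 x u16
    __m128i zero = _mm_setzero_si128();
    __m128i lo = _mm_unpacklo_epi16(v, zero);         // low  4 values as u32
    __m128i hi = _mm_unpackhi_epi16(v, zero);         // high 4 values as u32
    _mm_storeu_pd(b + 0, _mm_cvtepi32_pd(lo));        // 2 doubles per convert
    _mm_storeu_pd(b + 2, _mm_cvtepi32_pd(_mm_srli_si128(lo, 8)));
    _mm_storeu_pd(b + 4, _mm_cvtepi32_pd(hi));
    _mm_storeu_pd(b + 6, _mm_cvtepi32_pd(_mm_srli_si128(hi, 8)));
  }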
radar://13599001
llvm-svn: 179033
Add a regression test for the case where such behavior helps TSan (a sketch
follows the list):
1. race is reported in the main module
2. new shared library is loaded
3. race is reported in the shared library
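A minimal sketch of that scenario (hypothetical names, not the actual lit test;
it assumes a libshared.so that exports an int named shared_global):

  #include <dlfcn.h>
  #include <pthread.h>

  int GlobalInMain;

  static void *Writer(void *Arg) {
    *(int *)Arg = 42;  // unsynchronized write from a second thread
    return 0;
  }

  static void Race(int *P) {
    pthread_t T;
    pthread_create(&T, 0, Writer, P);
    *P = 43;           // unsynchronized write from the main thread
    pthread_join(T, 0);
  }

  int main() {
    Race(&GlobalInMain);                           // 1. race in the main module
    void *Lib = dlopen("libshared.so", RTLD_NOW);  // 2. load a new shared library
    int *SharedGlobal = (int *)dlsym(Lib, "shared_global");
    Race(SharedGlobal);                            // 3. race in the shared library
    return 0;
  }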
llvm-svn: 179032
This slightly propagates an existing hack that delays when we provide
access specifiers for the visible conversion functions of a class by
copying the available access specifier early. The only client this
affects is LLDB, which tends to discover and add conversion functions
after the class is technically "complete". As such, the only
observable difference is in LLDB, so the testing will go there.
llvm-svn: 179029
PowerPC has a conditional branch to the link register (return) instruction: BCLR.
This should be used any time we'd otherwise have a conditional branch to a
return. This adds a small pass, PPCEarlyReturn, which runs just prior to the
branch selection pass (and, importantly, after block placement) to generate
these conditional returns when possible. It will also eliminate unconditional
branches to returns (these happen rarely; most of the time these have already
been tail duplicated by the time PPCEarlyReturn is invoked). This is a nice
optimization for small functions that do not maintain a stack frame.
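As an illustration (a hypothetical function, not from the commit), a small
frameless function like the following could previously lower to a conditional
branch whose only purpose is to reach a block ending in blr; with this pass
the branch itself can become a conditional return:

  int clamp_negative(int X) {
    if (X < 0)   // before: conditional branch to a block containing blr
      return 0;  // after:  a single conditional return (BCLR)
    return X;
  }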
llvm-svn: 179026