Summary:
TestCases/Linux/heavy_uar_test.cc was failing on my
PowerPC64 box with GCC 4.8.2, because the compiler recognised
a memset-like loop and turned it into a call to memset, which
got intercepted by __asan_memset, which got upset because it was
being called on an address in high shadow memory.
Use break_optimization to stop the compiler from doing this.
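For illustration, the pattern looks roughly like this (a hypothetical
sketch, not the actual test; break_optimization is an empty asm barrier
along the lines of the one in the sanitizer test utilities):

  #include <stddef.h>

  // The empty asm claims to read the pointer and clobber memory, so the
  // compiler can no longer prove the loop is equivalent to memset.
  static void break_optimization(void *arg) {
    __asm__ __volatile__("" : : "r"(arg) : "memory");
  }

  static void touch(char *p, size_t size) {
    // Without the barrier, GCC's loop idiom recognition rewrites this
    // loop as a call to memset, which ASan intercepts; in the failing
    // test the destination lay in high shadow memory, which
    // __asan_memset rejects.
    for (size_t i = 0; i < size; ++i) {
      p[i] = 0;
      break_optimization(p);
    }
  }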
Reviewers: kcc, samsonov
Reviewed By: kcc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6266
llvm-svn: 222572
GCC and ICC both reject this, and the 'Runtime-sized arrays with
automatic storage duration' (N3639) paper forbade it as well.
Previously, we would crash on our way to mangling.
This fixes PR21632.
llvm-svn: 222569
We can now use the ELF relocation .def files to create the mapping
of relocation numbers to names and avoid having to duplicate the
list of relocations.
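For illustration, the name table can now be expanded straight from the
.def file instead of being maintained by hand (a sketch following the
ELF_RELOC pattern introduced by this series; the include path is
illustrative):

  const char *getRelocationName(unsigned Type) {
    switch (Type) {
  #define ELF_RELOC(name, value)  \
    case value:                   \
      return #name;
  #include "llvm/Support/ELFRelocs/X86_64.def"
  #undef ELF_RELOC
    default:
      return "Unknown";
    }
  }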
Patch by Will Newton.
llvm-svn: 222567
We can now use the ELF relocation .def files to create the mapping
of relocation numbers to names and avoid having to duplicate the
list of relocations.
Patch by Will Newton.
llvm-svn: 222566
This should allow the list of relocations for a particular
architecture to be kept in a single header rather than duplicated
whenever we need to enumerate all the relocations.
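The underlying trick is a plain X-macro: the relocation list is written
once and every consumer expands it differently. A self-contained sketch
(the relocation names and values are real, the macro names are
illustrative):

  // The single list; in LLVM this lives in a per-architecture .def file.
  #define X86_64_RELOCS(ELF_RELOC) \
    ELF_RELOC(R_X86_64_NONE, 0)    \
    ELF_RELOC(R_X86_64_64, 1)      \
    ELF_RELOC(R_X86_64_PC32, 2)

  // One consumer: the enum of relocation types. The number-to-name
  // mapping shown earlier is just another expansion of the same list.
  #define ELF_RELOC(name, value) name = value,
  enum { X86_64_RELOCS(ELF_RELOC) };
  #undef ELF_RELOC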
Patch by Will Newton.
llvm-svn: 222565
We previously had support for char and wchar_t string literals. VS 2015
added support for char16_t and char32_t.
String literals must be mangled in the MS ABI in order for them to be
deduplicated across translation units: their linker has no notion of
mergeable sections. Instead, they use the mangled name to make a COMDAT
for the string literal; the COMDAT will merge with other COMDATs in
other object files.
This allows strings in object files generated by clang to get merged
with strings in object files generated by MSVC.
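As a hedged illustration (not the patch's test code), all four kinds of
literal now get mangled names and end up in COMDATs, so identical copies
from other object files fold into one at link time:

  const char     *a = "hi";   // mangled before this change
  const wchar_t  *b = L"hi";  // mangled before this change
  const char16_t *c = u"hi";  // newly mangled (VS 2015 added support)
  const char32_t *d = U"hi";  // newly mangled (VS 2015 added support)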
llvm-svn: 222564
This patch adds a feature flag to avoid unaligned 32-byte load/store AVX codegen
for Sandy Bridge and Ivy Bridge. There is no functionality change intended for
those chips. Previously, the absence of AVX2 was being used as a proxy to detect
this feature. But that hindered codegen for AVX-enabled AMD chips such as btver2
that do not have the 32-byte unaligned access slowdown.
Performance measurements are included in PR21541 ( http://llvm.org/bugs/show_bug.cgi?id=21541 ).
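A hypothetical sketch of the difference (the patch adds a subtarget
feature bit; the names below are assumed, not taken from it):

  struct Subtarget {
    bool HasAVX2;
    bool SlowUnalignedMem32; // the new, dedicated feature bit
  };

  // Before: AVX2 absence was the proxy, wrongly penalizing AVX-only AMD
  // chips like btver2 whose 32-byte unaligned accesses are fast.
  bool avoidUnaligned32ByteOpsOld(const Subtarget &ST) {
    return !ST.HasAVX2;
  }

  // After: set only for chips that actually have the slowdown
  // (Sandy Bridge and Ivy Bridge).
  bool avoidUnaligned32ByteOpsNew(const Subtarget &ST) {
    return ST.SlowUnalignedMem32;
  }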
Differential Revision: http://reviews.llvm.org/D6355
llvm-svn: 222544
shuffle lowering to allow much better blend matching.
Specifically, with the new structure the code seems clearer to me and we
can correctly hit the cases where merging two 128-bit lanes is a clear
win and can be shuffled cheaply afterward.
llvm-svn: 222539
offsets for code models other than small/medium. For JIT applications,
memory layout is less controlled and can otherwise result in
truncations.
Patch from Akos Kiss.
Differential Revision: http://reviews.llvm.org/D6079
llvm-svn: 222538
a bunch more improvements.
Non-lane-crossing is fine; the key is that lane merging only makes sense
for single-input shuffles. Not sure why I got so turned around here. The
code all works, I was just using the wrong model for it.
This only updates v4 and v8 lowering. The v16 and v32 lowering requires
restructuring the entire check sequence.
llvm-svn: 222537
Before this patch, the DAGCombiner only tried to convert build_vector dag nodes
into shuffles if all operands were either extract_vector_elt or undef.
This patch improves that logic and teaches the DAGCombiner how to deal with
build_vector dag nodes where one or more operands are zero. A build_vector
dag node with some zero operands is turned into a shuffle only if the resulting
shuffle mask is legal for the target.
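The effect can be illustrated with Clang's vector extensions (a sketch
of the semantics, not the DAGCombiner code itself):

  typedef int v4si __attribute__((vector_size(16)));

  // build_vector (extract V,0), 0, (extract V,2), 0 ...
  v4si build(v4si v) {
    v4si r = {v[0], 0, v[2], 0};
    return r;
  }

  // ...becomes a shuffle of V against an all-zero vector with mask
  // <0, 4, 2, 4>, where indices 4-7 select from the zero vector. The
  // combine only fires if this mask is legal for the target.
  v4si shuffled(v4si v) {
    v4si zero = {0, 0, 0, 0};
    return __builtin_shufflevector(v, zero, 0, 4, 2, 4);
  }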
llvm-svn: 222536
Let the users of SymbolizationLoop define a function that produces the list of .dSYM hints (possible paths to the .dSYM bundle) for a given binary.
Because the hints can't be added to an existing llvm-symbolizer process, we spawn a new symbolizer process each time a new hint appears.
Those can only appear for binaries that we haven't seen before.
llvm-svn: 222535
lanes.
By special casing these we can often either reduce the total number of
shuffles significantly or reduce the number of (high latency on Haswell)
AVX2 shuffles that potentially cross 128-bit lanes. Even when these
shuffles don't actually cross lanes, they pay the higher latency needed
to support lane crossing. Doing two of them and a blend is worse than
doing a single insert
across the 128-bit lanes to blend and then doing a single interleaved
shuffle.
While this seems like a narrow case, it kept cropping up on me and the
difference is *huge* as you can see in many of the test cases. I first
hit this trying to perfectly fix the interleaving shuffle patterns used
by Halide for AVX2.
llvm-svn: 222533
Previously this was only used for JavaScript.
Before:
functionCall({
  int i;
  int j;
},
             aaaa, bbbb, cccc);
After:
functionCall({
  int i;
  int j;
}, aaaa, bbbb, cccc);
llvm-svn: 222531
With AllowShortCaseLabelsOnASingleLine set to true:
This now gets left unchanged:
  case 1:
    // comment
    return;
Whereas before it was changed into:
  case 1: // comment return;
This fixes llvm.org/PR21630.
llvm-svn: 222529
Before:
public final<X> Foo foo() {
}
public abstract<X> Foo foo();
After:
public final <X> Foo foo() {
}
public abstract <X> Foo foo();
Patch by Harry Terkelsen. Thank you.
llvm-svn: 222527