We already add the align parameter attribute for function parameters that have
the align_value attribute (or those with a typedef type having that attribute),
which is an important special case, but does not handle pointers with value
alignment assumptions that come into scope in any other way. To handle the
general case, emit an @llvm.assume-based alignment assumption whenever we load
the pointer-typed lvalue of an align_value-attributed variable (except for
function parameters, which we already deal with at entry).
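To illustrate (a hypothetical sketch; the typedef and function names are
made up, not from the patch):

  // align_value on a typedef: the load of the attributed lvalue below
  // now emits an @llvm.assume alignment assumption, rather than only
  // getting the align attribute on function parameters.
  typedef double *aligned_double_ptr __attribute__((align_value(64)));

  struct Ctx { aligned_double_ptr data; };

  double first(Ctx *c) {
    aligned_double_ptr p = c->data; // load of an align_value lvalue
    return p[0];                    // optimizer may assume 64-byte alignment
  }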
I'll also note that this is more general than Intel's described support in:
https://software.intel.com/en-us/articles/data-alignment-to-assist-vectorization
which states that the compiler inserts __assume_aligned directives in response
to align_value-attributed variables only for function parameters and for the
initializers of local variables. I think that we can make the optimizer deal
with this more-general scheme (which could lead to a lot of calls to
@llvm.assume inside of loop bodies, for example), but if not, I'll rework this
to be less aggressive.
llvm-svn: 219052
Windows TLS relies on indexing through a tls_index in order to get at
the DLL's thread local variables. However, this index is not exported
along with the variable: it is assumed that all accesses to thread local
variables are inside the same module which created the variable in the
first place.
While there are several implementation techniques we could adopt to fix
this (notably, the Itanium ABI gets this for free), it is not worth the
heroics.
Instead, let's just ban this combination. We could revisit this in the
future if we need to.
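For illustration, the now-banned combination looks roughly like this
(a sketch; the variable name is made up):

  // In the DLL:
  __declspec(dllexport) __declspec(thread) int tls_counter; // now diagnosed

  // In a client of the DLL:
  extern __declspec(dllimport) __declspec(thread) int tls_counter; // likewise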
This fixes PR21111.
llvm-svn: 219049
Update the entire regression test suite for the new shuffles. Remove
most of the old testing which was devoted to the old shuffle lowering
path and is no longer really relevant. Also remove a few other random tests
that only exercised shuffles incidentally or without any interesting aspects
to them.
Benchmarking that I have done shows a few small regressions with this on
LNT, zero measurable regressions on real, large applications, and for
several benchmarks where the loop vectorizer fires in the hot path it
shows 5% to 40% improvements for SSE2 and SSE3 code running on Sandy
Bridge machines. Running on AMD machines shows even more dramatic
improvements.
When using newer ISA vector extensions the gains are much more modest,
but the code is still better on the whole. There are a few regressions
being tracked (PR21137, PR21138, PR21139) but by and large this is
expected to be a win for x86 generated code performance.
It is also more correct than the code it replaces. I have fuzz tested
this extensively with ISA extensions up through AVX2 and found no
crashes or miscompiles (yet...). The old lowering had a few miscompiles
and crashers after a somewhat smaller amount of fuzz testing.
There is one significant area where the new code path lags behind and
that is in AVX-512 support. However, there was *extremely little*
support for that already and so this isn't a significant step backwards
and the new framework will probably make it easier to implement lowering
that uses the full power of AVX-512's table-based shuffle+blend (IMO).
Many thanks to Quentin, Andrea, Robert, and others for benchmarking
assistance. Thanks to Adam and others for help with AVX-512. Thanks to
Hal, Eric, and *many* others for answering my incessant questions about
how the backend actually works. =]
I will leave the old code path in the tree until the 3 PRs above are at
least resolved to folks' satisfaction. Then I will rip it (and 1000s of
lines of code) out. =] I don't expect this flag to stay around for very
long. It may not survive next week.
llvm-svn: 219046
It turns out this combine was always somewhat flawed -- there are cases
where nested VZEXT nodes *can't* be combined: if their types have
a mismatch that can be observed in the result. While none of these show up
currently, once I switch to the new vector shuffle lowering a few test cases
actually form such nested VZEXT nodes. I've not come up with any IR pattern
that I can sensibly write to exercise this, but it will
be covered by tests once I flip the switch.
llvm-svn: 219044
This moves the combining of these VZEXT nodes from their lowering to the DAG
combining of them.
This will allow the combine to fire on both old vector shuffle lowering
and the new vector shuffle lowering and generally seems like a cleaner
design. I've trimmed down the code a bit and tried to make it and the
surrounding combine fairly clean while moving it around.
llvm-svn: 219042
The arm builtins converted into thumb in r213481 are not working
on darwin. On Apple platforms, the .thumb_func directive is required
to generate correct symbols for thumb functions.
<rdar://problem/18523605>
llvm-svn: 219040
This option is added by Xcode when it runs the linker. It produces a binary
file listing the files the linker used, and Xcode uses that info to
dynamically update its dependency tracking.
To check the contents of the binary file, the test case uses a python script
to dump it as text which FileCheck can then verify.
llvm-svn: 219039
When creating the graph edges of the atoms of an ELF file, special care must be
taken with atoms that represent weak symbols. They cannot be the target of any
Reference::kindLayoutAfter edge because they can be merged and point to other
code, screwing up the final layout of the atoms. ELFFile::createAtoms()
correctly handles this corner case. The problem is that createAtoms() assumed
that there can be no zero-sized weak symbols, which is not true. Consider:
my_weak_func1:
my_weak_func2:
my_weak_func3:
  code
In this case, we have two zero-sized weak symbols, my_weak_func1 and
my_weak_func2, and one non-zero weak symbol my_weak_func3. createAtoms() would
correctly handle my_weak_func3, but not the first two symbols. This problem
happens in the musl C library when a zero-sized weak symbol is merged and
screws up the file layout. Because the affected musl code lives in the
finalization hooks, any C program linked with LLD and musl would execute
correctly but then segfault at exit.
Reviewers: shankarke
http://reviews.llvm.org/D5606
llvm-svn: 219034
Add patterns to handle the various ways in which blends can be used to do
vector element
insertion for lowering with the scalar math instruction forms that
effectively re-blend with the high elements after performing the
operation.
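For reference, the scalar math semantics these forms implement can be
sketched with SSE intrinsics (the function name is made up):

  #include <immintrin.h>

  // addss adds only the low elements; the high elements of `a` pass
  // through unchanged -- the "re-blend with the high elements" behavior
  // described above.
  __m128 scalar_add_low(__m128 a, __m128 b) {
    return _mm_add_ss(a, b);
  }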
This then allows me to bail on the element insertion lowering path when
we have SSE4.1 and are going to be doing a normal blend, which in turn
restores the last of the blends lost from the new vector shuffle
lowering when I got it to prioritize insertion in other cases (for
example when we don't *have* a blend instruction).
Without the patterns, using blends here would have regressed
sse-scalar-fp-arith.ll *completely* with the new vector shuffle
lowering. For completeness, I've added RUN-lines with the new lowering
here. This is somewhat superfluous as I'm about to flip the default, but
hey, it shows that this actually significantly changed behavior.
The patterns I've added are just ridiculously repetitive. Suggestions on
making them better very much welcome. In particular, handling the
commuted form of the v2f64 patterns is somewhat obnoxious.
llvm-svn: 219033
This adds -nostdsysteminc to the %clang_cc1 expansion, which should
make it harder to accidentally write tests that depend on headers in
/usr/include. It also updates a few tests that used -isysroot <x> and a
darwin triple so that they omit the triple and use -isystem <x>/usr/include
instead, making them a little more general.
Incidentally, this fixes a test failure I'm seeing on darwin in
Modules/stddef.c, that happens because my system finds a stddef.h in
/usr/include.
llvm-svn: 219030
There are three copies of the IsCompleteType(...) function in CSA, and all
of them are incomplete (I experienced crashes in some of CSA's test cases).
I have replaced calls to these functions with Type::isIncompleteType() calls.
A patch by Aleksei Sidorin!
llvm-svn: 219026
The MallocChecker does not currently track the memory allocated by
if_nameindex. That memory is dynamically allocated and should be freed
by calling if_freenameindex. The attached patch teaches the checker
about these functions.
Memory allocated by if_nameindex is treated as a separate allocation
"family". That way the checker can verify it is freed by the correct
function.
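For example (a sketch of the POSIX API usage the checker now understands):

  #include <net/if.h>

  void list_interfaces() {
    struct if_nameindex *ifs = if_nameindex(); // tracked as its own family
    if (!ifs)
      return;
    // ... iterate over ifs ...
    if_freenameindex(ifs); // matching deallocator; plain free() would warn
  }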
A patch by Daniel Fahlgren!
llvm-svn: 219025
The modeled return value of mempcpy is only correct when the destination
type is one byte in size. This patch casts the argument to a char* so the
calculation is also correct for structs, ints, etc.
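For example (a sketch; the struct is illustrative):

  #include <string.h>

  struct Pair { int a, b; };

  void copy_pair(Pair *dst, const Pair *src) {
    // mempcpy (a GNU extension) returns dst advanced by n *bytes*:
    // end == (char *)dst + sizeof(Pair), not dst + sizeof(Pair).
    void *end = mempcpy(dst, src, sizeof(Pair));
    (void)end;
  }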
A patch by Daniel Fahlgren!
llvm-svn: 219024
Otherwise we're left with a half-initialized bag of variables that may or may
not explode later on. Should bring the MSVC buildbot back to life.
llvm-svn: 219023
Adjust the patterns for lowering vector element insertions which don't
perform a load to use blendps rather than movss when it is available.
For non-loads, blendps is *much* faster. It can execute on two ports in
Sandy Bridge and Ivy Bridge, and *three* ports on Haswell. This fixes
one of the "regressions" from aggressively taking the "insertion" path
in the new vector shuffle lowering.
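For illustration, the register-form insertion in question, written with
intrinsics (a sketch; the blend immediate 0x1 selects the low element from
the second operand, matching movss's register semantics):

  #include <immintrin.h>

  // With SSE4.1 this can lower to blendps, which issues on more ports
  // than movss.
  __m128 insert_low(__m128 v, __m128 s) {
    return _mm_blend_ps(v, s, 0x1);
  }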
This does highlight one problem with blendps -- it isn't commuted as
heavily as it should be. That's future work though.
llvm-svn: 219022
Update debug info testcases for the LLVM metadata schema change in
r219010 to fold metadata constant operands into a single `MDString`.
Part of PR17891.
llvm-svn: 219019
C++14 adds new builtin signatures for 'operator delete'. This change allows
new/delete pairs to be removed in C++14 onwards, as they were in C++11 and
before.
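For reference, the sized deallocation signatures C++14 introduces are:

  #include <cstddef>

  void operator delete(void *p, std::size_t size) noexcept;
  void operator delete[](void *p, std::size_t size) noexcept;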
llvm-svn: 219014
Also remove an extra extern "C" from a global variable redeclaration.
This allows building libcxxabi with GCC on my system.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D5604
llvm-svn: 219012
This reverts commit r218917, effectively reapplying r218913. Original
commit message follows.
--
Update debug info testcases for an LLVM metadata schema change to fold
metadata constant operands into a single `MDString`.
Part of PR17891.
llvm-svn: 219011
This reverts commit r218918, effectively reapplying r218914 after fixing
an OCaml bindings test and an ASan crash. The root cause of the latter
was a tightened-up check in `DILexicalBlock::Verify()`, so I'll file a
PR to investigate who requires the loose check (and why).
Original commit message follows.
--
This patch addresses the first stage of PR17891 by folding constant
arguments together into a single MDString. Integers are stringified and
a `\0` character is used as a separator.
Part of PR17891.
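A sketch of the described encoding (not LLVM's actual implementation; the
function name is made up):

  #include <cstddef>
  #include <string>
  #include <vector>

  // Stringify the integer operands and join them with '\0' separators.
  std::string foldConstants(const std::vector<long long> &ops) {
    std::string s;
    for (std::size_t i = 0; i < ops.size(); ++i) {
      if (i)
        s.push_back('\0');
      s += std::to_string(ops[i]);
    }
    return s;
  }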
Note: I've attached my testcase upgrade scripts to the PR. If I've
just broken your out-of-tree testcases, they might help.
llvm-svn: 219010
In the X86 backend, matching an address is initiated by the 'addr' complex
pattern and its friends. During this process we may reassociate and-of-shift
into shift-of-and (FoldMaskedShiftToScaledMask) to allow folding of the
shift into the scale of the address.
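A sketch of that reassociation (the exact legality conditions on the mask
are omitted; the two forms compute the same value because the left shift
already zeroes the low bits of the result that the dropped mask bits would
cover):

  #include <cstdint>

  // and-of-shift: the shift is buried under the mask.
  uint64_t before(uint64_t x, uint64_t m, unsigned c) {
    return (x << c) & m;
  }

  // shift-of-and: the outer shift can fold into the address scale,
  // e.g. [base + index*8] when c == 3.
  uint64_t after(uint64_t x, uint64_t m, unsigned c) {
    return (x & (m >> c)) << c;
  }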
However, as demonstrated by the testcase, this can trigger CSE not only of
the shift and the AND, which the code is prepared for, but also of the
underlying load node. In the testcase this node is sitting in the
RecordedNode and MatchScope data structures of the matcher and becomes a
deleted node upon CSE. Returning from the complex pattern function, we try
to access it again, hitting an assert because the node is no longer a load
even though this was checked before.
Now obviously changing the DAG this late is bending the rules but I think it
makes sense somewhat. Outside of addresses we prefer and-of-shift because it
may lead to smaller immediates (FoldMaskAndShiftToScale is an even better
example because it creates a non-canonical node). We currently don't recognize
addresses during DAGCombiner where arguably this canonicalization should be
performed. On the other hand, having this in the matcher allows us to cover
all the cases where an address can be used in an instruction.
I've also talked a bit on llvm-dev with Dan Gohman, who added the RAUW for
the new shift node in FoldMaskedShiftToScaledMask. This RAUW is responsible
for initiating the recursive CSE on users
(http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-September/076903.html) but it
is not strictly necessary since the shift is hooked into the visited user. Of
course it's safer to keep the DAG consistent at all times (e.g. for accurate
number of uses, etc.).
So rather than changing the fundamentals, I've decided to continue along the
previous patches and detect the CSE. This patch installs a very targeted
DAGUpdateListener for the duration of a complex-pattern match and updates the
matching state accordingly. (Previous patches used HandleSDNode to detect the
CSE but that's not practical here). The listener is only installed on X86.
I tested that there is no measurable overhead due to this while running
through the spec2k BC files with llc. The only thing we pay for is the
creation of the listener. The callback never ever triggers in spec2k since
this is a corner case.
Fixes rdar://problem/18206171
llvm-svn: 219009
This is a simple implementation which just copies data synchronously.
v2:
- Use size_t.
v3:
- Fix possible race condition by splitting the copy among multiple
  work items.
llvm-svn: 219008
+ Generalized function names and comments
+ Removed OpenMP (omp) from the names and comments
+ Used common (non-OpenMP-specific) names for the runtime library call
  creation methods
+ Commented the parallel code generator and all its member functions
+ Refactored some values and methods
Differential Revision: http://reviews.llvm.org/D4990
llvm-svn: 219003