The backend has to legalize i64 types by splitting them into two 32-bit pieces,
which leads to poor quality code. If we produce code for these intrinsics that
uses one-element vector types, which can live in Neon vector registers without
getting split up, then the generated code is much better. Radar 11998303.
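As an illustration (a sketch only, using the standard arm_neon.h vadd_s64
intrinsic rather than any particular intrinsic changed here), a 64-bit
operation expressed on the one-element int64x1_t type stays in a single
D register:

  #include <arm_neon.h>

  /* int64x1_t is a one-element vector that fits in one D register, so the
     64-bit add is not legalized into two 32-bit halves. */
  int64x1_t add64(int64x1_t a, int64x1_t b) {
    return vadd_s64(a, b);
  }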
llvm-svn: 161879
intrinsics. The second group of instructions to be handled comprises the
vector versions of count set bits (ctpop).
The changes here are to clang so that it generates a target-independent
vector ctpop when it sees an ARM-dependent vector count of set bits. The
changes in llvm are to match the target-independent vector ctpop and, in
VMCore/AutoUpgrade.cpp, to update any existing bc files containing
ARM-dependent vector pop counts with target-independent ctpops. There are also
changes to an existing test case in llvm for ARM vector count instructions and
to a test for the bitcode upgrade.
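For illustration only (assuming the standard arm_neon.h vcnt_u8 intrinsic;
the function name is made up), a per-byte population count written with the
Neon intrinsic now lowers through the generic ctpop intrinsic rather than an
ARM-specific one:

  #include <arm_neon.h>

  /* vcnt_u8 counts the set bits in each byte lane; clang now emits the
     target-independent llvm.ctpop for it. */
  uint8x8_t popcount_bytes(uint8x8_t v) {
    return vcnt_u8(v);
  }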
<rdar://problem/11892519>
There is deliberately no test for the change to clang: so far as I know, no
consensus has been reached regarding how to test Neon instructions in clang;
q.v. <rdar://problem/8762292>
llvm-svn: 160409
intrinsics with target-independent intrinsics. The first group of
instructions to be handled comprises the vector versions of count leading
zeros (ctlz).
The changes here are to clang so that it generates a target-independent
vector ctlz when it sees an ARM-dependent vector ctlz. The changes in llvm
are to match the target-independent vector ctlz and, in VMCore/AutoUpgrade.cpp,
to update any existing bc files containing ARM-dependent vector ctlzs with
target-independent ctlzs. There are also changes to an existing test case in
llvm for ARM vector count instructions and a new test for the bitcode upgrade.
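For illustration only (assuming the standard arm_neon.h vclz_s32 intrinsic;
the function name is made up), the kind of source that now goes through the
generic ctlz path:

  #include <arm_neon.h>

  /* vclz_s32 counts leading zeros in each 32-bit lane; clang now emits the
     target-independent llvm.ctlz for it. */
  int32x2_t clz_lanes(int32x2_t v) {
    return vclz_s32(v);
  }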
<rdar://problem/11831778>
There is deliberately no test for the change to clang: so far as I know, no
consensus has been reached regarding how to test Neon instructions in clang;
q.v. <rdar://problem/8762292>
llvm-svn: 160201
in the ABI arrangement, and leave a hook behind so that we can easily
tweak CCs on platforms that use different CCs by default for C++
instance methods.
llvm-svn: 159894
The tablegen'd code does the same thing without this egregious duplication.
In my limited testing everything seems to work; however, there can be
differences if the clang and llvm builtin definitions don't match.
llvm-svn: 159371
pointer, but such folding encounters side-effects, ignore the side-effects
rather than performing them at runtime: CodeGen generates wrong code for
__builtin_object_size in that case.
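A minimal sketch of the situation (the buffer, global, and function name
below are illustrative): the argument folds to a constant pointer into buf,
while the increment of g is discarded rather than emitted.

  #include <stddef.h>

  extern char buf[16];
  extern int g;

  size_t folded_size(void) {
    /* The pointer value folds to buf + 0, but evaluating the argument
       would increment g; that side effect is ignored. */
    return __builtin_object_size(buf + (g++, 0), 0);
  }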
llvm-svn: 157310
r155047. See the LLVM log for the primary motivation:
http://llvm.org/viewvc/llvm-project?rev=155047&view=rev
Primary commit r154828:
- Several issues were raised in review, and fixed in subsequent
commits.
- Follow-up commits were also reverted, and should be folded into the
original before reposting:
- r154837: Re-add the 'undef BUILTIN' thing to fix the build.
- r154928: Fix build warnings, re-add (and correct) header and
license
- r154937: Typo fix.
Please resubmit this patch with the relevant LLVM resubmission.
llvm-svn: 155048
__atomic_test_and_set, __atomic_clear, plus a pile of undocumented __GCC_*
predefined macros.
Implement library fallback for __atomic_is_lock_free and
__c11_atomic_is_lock_free, and implement __atomic_always_lock_free.
Contrary to their documentation, GCC's __atomic_fetch_add family of builtins
does not multiply the operand by sizeof(T) when operating on a pointer type.
libstdc++ relies on this quirk. Remove this handling for all but the
__c11_atomic_fetch_add and __c11_atomic_fetch_sub builtins.
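A sketch of the difference (the function names are made up; the behavior is
as described above): the GCC-style builtin takes a raw byte count on pointer
types, while the __c11_ form scales the addend by the pointee size.

  /* GCC-compatible builtin: the addend is a byte count, so stepping one
     element forward means passing sizeof(int).  libstdc++ depends on this. */
  int *bump_gcc(int **pp) {
    return __atomic_fetch_add(pp, sizeof(int), __ATOMIC_SEQ_CST);
  }

  /* C11-style builtin: the addend is an element count and is scaled by
     sizeof(int). */
  int *bump_c11(_Atomic(int *) *pp) {
    return __c11_atomic_fetch_add(pp, 1, __ATOMIC_SEQ_CST);
  }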
Contrary to their documentation, __atomic_test_and_set and __atomic_clear
take a first argument of type 'volatile void *', not 'void *' or 'bool *',
and __atomic_is_lock_free and __atomic_always_lock_free have an argument
of type 'const volatile void *', not 'void *'.
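A short usage sketch of the flag builtins with those argument types (the
flag object and function names are illustrative):

  #include <stdbool.h>

  static volatile char flag;  /* any byte-sized object; the builtins take
                                 'volatile void *' */

  bool try_lock(void) {
    /* Returns the previous value, so false means the flag was acquired. */
    return !__atomic_test_and_set(&flag, __ATOMIC_ACQUIRE);
  }

  void unlock(void) {
    __atomic_clear(&flag, __ATOMIC_RELEASE);
  }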
With this change, libstdc++4.7's <atomic> passes libc++'s atomic test suite,
except for a couple of libstdc++ bugs and some cases where libc++'s test
suite tests for properties which implementations have latitude to vary.
llvm-svn: 154640
<stdatomic.h> header.
In passing, fix LanguageExtensions to note that C11 and C++11 are no longer
"upcoming standards" but are now actually standardized.
llvm-svn: 154513
match the behavior of GCC. Also add a test for these intrinsics, which
apparently have *zero* tests. =[ Not surprisingly, Clang crashed when
compiling these.
Fix the bug in CodeGen where we failed to bitcast the argument type to
x86mmx prior to calling the LLVM intrinsic. This fixes an assert on the
new 3dnow-builtins.c test.
This is one issue impacting the efforts to get Clang to emulate the
Microsoft intrinsics headers -- 3dnow intrinsics are implicitly made
available there.
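For illustration (a sketch assuming the mm3dnow.h wrapper header and a target
compiled with -m3dnow; the function name is made up), one such intrinsic;
the __m64 arguments are what CodeGen now bitcasts to x86mmx before the LLVM
intrinsic call:

  #include <mm3dnow.h>

  /* _m_pfadd adds packed single-precision values held in MMX registers. */
  __m64 pfadd_pair(__m64 a, __m64 b) {
    return _m_pfadd(a, b);
  }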
llvm-svn: 150948
We had been generating load/store instructions with the default alignment
for the vector element type, even when the pointer argument had less alignment.
<rdar://problem/10538555>
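A sketch of the affected pattern (assuming the standard arm_neon.h
vld1q_lane_u32 intrinsic is among those involved; the packed struct and
function name are illustrative): the pointer to the packed field is only
1-byte aligned, and the emitted load must carry that alignment rather than
the element type's 4-byte default.

  #include <arm_neon.h>
  #include <stdint.h>

  struct __attribute__((packed)) header {
    uint8_t  tag;
    uint32_t value;   /* only 1-byte aligned inside the packed struct */
  };

  uint32x4_t insert_value(const struct header *h, uint32x4_t v) {
    /* The load of h->value must use alignment 1, not the uint32_t default. */
    return vld1q_lane_u32(&h->value, v, 0);
  }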
llvm-svn: 149794
ARM supports clz and ctz directly and both operations have well-defined
results for zero. There is no disadvantage in performance to using the
defined-at-zero versions of llvm.ctlz/cttz intrinsics. We're running into
ARM-specific code written with the assumption that __builtin_clz(0) == 32,
even though that value is technically undefined. The code is failing now
because of llvm optimizations that are taking advantage of the undef
behavior (specifically svn r147255). There's nothing wrong with that
optimization on x86 where any incorrect assumptions about __builtin_clz(0)
will quickly be exposed. For ARM, though, optimizations based on that undef
behavior are likely to cause subtle bugs. Other targets with defined-at-zero
clz/ctz support may want to override the default behavior as well.
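A hedged illustration of the kind of source involved (the function names are
made up): the first form relies on __builtin_clz(0) == 32, which the builtin's
documentation leaves undefined even though ARM's CLZ instruction produces 32;
the second form guards the zero case explicitly.

  /* Number of significant bits in x.  Relies on __builtin_clz(0) == 32 so
     that bit_width(0) == 0 -- true of the ARM CLZ instruction, but undefined
     for the builtin itself. */
  unsigned bit_width(unsigned x) {
    return 32u - (unsigned)__builtin_clz(x);
  }

  /* Portable form: handle zero before calling the builtin. */
  unsigned bit_width_safe(unsigned x) {
    return x ? 32u - (unsigned)__builtin_clz(x) : 0u;
  }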
llvm-svn: 149086
"use the new ConstantVector::getSplat method where it makes sense."
Also simplify a bunch of code to use Builder->getInt32 instead
of doing it the hard and ugly way. Much more progress could be made
here, but I don't plan to do it.
llvm-svn: 148926