[Clang part]
These patches rename the loop unrolling and loop vectorizer metadata
such that they have a common 'llvm.loop.' prefix. Metadata name
changes:
llvm.vectorizer.* => llvm.loop.vectorize.*
llvm.loopunroll.* => llvm.loop.unroll.*
This was a suggestion from an earlier review
(http://reviews.llvm.org/D4090) which added the loop unrolling
metadata.
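For illustration (the pragma spelling comes from the contemporaneous
loop-hint work, not from this patch), a loop hint in C now lowers to the
renamed metadata:

void scale(float *a, int n) {
#pragma clang loop vectorize(enable)
  for (int i = 0; i < n; ++i)
    a[i] *= 2.0f;
}

The loop's !llvm.loop node then carries the string
"llvm.loop.vectorize.enable" rather than the old "llvm.vectorizer.enable".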
Patch by Mark Heffernan.
llvm-svn: 211712
The < 8 instead of <= 8 meant that a bunch of vreinterprets were not available on v8 AArch32. Simplify the guard to just !defined(__aarch64__) while we're at it, and enable some v8 AArch32 testing.
llvm-svn: 211686
The C++ language requires that the address of a function be the same
across all translation units. To make __declspec(dllimport) useful,
this means that a dllimported function must also obey this rule. MSVC
implements this by dynamically querying the import address table located
in the linked executable. This means that the address of such a
function in C++ is not constant (which violates other rules).
However, the C language has no notion of ODR nor does it permit dynamic
initialization whatsoever. This requires implementations to _not_
dynamically query the import address table and instead utilize a wrapper
function that will be synthesized by the linker which will eventually
query the import address table. The effect this has is, to say the
least, perplexing.
Consider the following C program:
__declspec(dllimport) void f(void);
typedef void (*fp)(void);
static const fp var = &f;
const fp fun() { return &f; }
int main() { return fun() == var; }
MSVC will statically initialize "var" with the address of the wrapper
function and "fun" returns the address of the actual imported function.
This means that "main" will return false!
Note that LLVM's optimizers are strong enough to figure out that "main"
should return true. However, this result is dependent on having
optimizations enabled!
N.B. This change also permits the usage of dllimport declarators inside
of template arguments; they are sufficiently constant for such a
purpose. Add tests to make sure we don't regress here.
llvm-svn: 211677
According to the x86-64 ABI, structures with both floating point and
integer members are split between floating-point and general purpose
registers, and consecutive 32-bit floats can be packed into a single
floating point register.
In the case of variadic functions these are stored to memory and the position
recorded in the va_list. This was already correctly implemented in
llvm.va_start.
The problem is that the code in clang for implementing va_arg was reading
floating point registers from the wrong location.
Patch by Thomas Jablin.
Fixes PR20018.
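A minimal reproducer in the spirit of the bug (the names are mine, not from
the patch): a struct whose two consecutive floats are packed into one SSE
register must be recovered from the correct location by va_arg:

#include <stdarg.h>
#include <stdio.h>

struct fpair { float a, b; };  /* two consecutive floats: one SSE eightbyte */

static struct fpair pick(int n, ...) {
  va_list ap;
  va_start(ap, n);
  /* Before this fix, clang read the FP register image from the wrong
     location in the register save area. */
  struct fpair p = va_arg(ap, struct fpair);
  va_end(ap);
  return p;
}

int main(void) {
  struct fpair in = { 1.0f, 2.0f };
  struct fpair out = pick(1, in);
  printf("%g %g\n", out.a, out.b);
  return 0;
}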
llvm-svn: 211626
When small arguments (structures < 8 bytes or "float") are passed in a
stack slot in the ppc64 SVR4 ABI, they must reside in the least
significant part of that slot. On BE, this means that an offset needs
to be added to the stack address of the parameter, but on LE, the least
significant part of the slot has the same address as the slot itself.
For the most part, this is handled in the LLVM back-end, where I just
fixed the LE case in commit r211368.
However, there is one piece of the clang front-end that is also aware of
these stack-slot offsets: PPC64_SVR4_ABIInfo::EmitVAArg. This patch
updates that routine to take endianness into account.
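A sketch of the endian-dependent address computation (simplified for
illustration; the real logic lives in PPC64_SVR4_ABIInfo::EmitVAArg):

/* For a value smaller than its 8-byte stack slot, return the address of
   its least significant part.  Sketch only, not the actual code. */
char *small_arg_addr(char *slot, unsigned slot_size, unsigned value_size,
                     int big_endian) {
  /* On BE the least significant bytes sit at the high end of the slot;
     on LE they start at the slot address itself. */
  return big_endian ? slot + (slot_size - value_size) : slot;
}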
llvm-svn: 211370
Relax the tests to allow for differences between release and debug builds. This
should fix the buildbots.
Thanks to Benjamin Kramer and Eric Christopher for their invaluable tip that
this was a release-build-specific issue.
llvm-svn: 211227
Add support for _InterlockedCompareExchangePointer, _InterlockedExchangePointer,
and _InterlockedExchange. These are available as compiler intrinsics on ARM and
x86. They are used directly by the Windows SDK headers without including the
intrin.h header.
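A hedged usage sketch (the names g_handle and g_counter are mine; builds
under MSVC semantics, e.g. clang with -fms-extensions, and needs no header
since the intrinsics are compiler-provided):

void *g_handle;
long g_counter;

/* Atomically publish a new handle and obtain the previous one. */
void *publish(void *p) {
  return _InterlockedExchangePointer(&g_handle, p);
}

/* Atomically zero the counter and return its old value. */
long reset_counter(void) {
  return _InterlockedExchange(&g_counter, 0);
}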
llvm-svn: 211216
In the final phase of the merge, I managed to disable a bunch of Clang
tests accidentally. Fortunately none of them seem to have broken in
the interim.
llvm-svn: 211149
There comes a time in the life of any amateur code generator when dumb string
concatenation just won't cut it any more. For NeonEmitter.cpp, that time has
come.
There were a bunch of magic type codes which meant different things depending on
the context. There were a bunch of special cases that really had no reason to be
there but the whole thing was so creaky that removing them would cause something
weird to fall over. There was a 1000 line switch statement for code generation
involving string concatenation, which actually did lexical scoping to an extent
(!!) with a bunch of semi-repeated cases.
I tried to refactor this three times in three different ways without
success. The only way forward was to rewrite the entire thing. Luckily the
testing coverage on this stuff is absolutely massive, both with regression tests
and the "emperor" random test case generator.
The main change is that previously, in arm_neon.td a bunch of "Operation"s were
defined with special names. NeonEmitter.cpp knew about these Operations and
would emit code based on a huge switch. Actually this doesn't make much sense -
the type information was held as strings, so type checking was impossible. Also
TableGen's DAG type actually suits this sort of code generation very well
(surprising that...)
So now every operation is defined in terms of TableGen DAGs. There are a bunch
of operators to use, including "op" (a generic unary or binary operator), "call"
(to call other intrinsics) and "shuffle" (take a guess...). One of the main
advantages of this apart from making it more obvious what is going on, is that
we have proper type inference. This has two obvious advantages:
1) TableGen can error on bad intrinsic definitions more easily, instead of
just generating wrong code.
2) Calls to other intrinsics are typechecked too. So
we no longer need to work out whether the thing we call needs to be the Q-lane
version or the D-lane version - TableGen knows that itself!
Here's an example: before:
case OpAbdl: {
  std::string abd = MangleName("vabd", typestr, ClassS) + "(__a, __b)";
  if (typestr[0] != 'U') {
    // vabd results are always unsigned and must be zero-extended.
    std::string utype = "U" + typestr.str();
    s += "(" + TypeString(proto[0], typestr) + ")";
    abd = "(" + TypeString('d', utype) + ")" + abd;
    s += Extend(utype, abd) + ";";
  } else {
    s += Extend(typestr, abd) + ";";
  }
  break;
}
after:
def OP_ABDL : Op<(cast "R", (call "vmovl", (cast $p0, "U",
                                            (call "vabd", $p0, $p1))))>;
As an example of what happens if you do something wrong now, here's what happens
if you make $p0 unsigned before the call to "vabd" - that is, $p0 -> (cast "U",
$p0):
arm_neon.td:574:1: error: No compatible intrinsic found - looking up intrinsic 'vabd(uint8x8_t, int8x8_t)'
Available overloads:
- float64x2_t vabdq_v(float64x2_t, float64x2_t)
- float64x1_t vabd_v(float64x1_t, float64x1_t)
- float64_t vabdd_f64(float64_t, float64_t)
- float32_t vabds_f32(float32_t, float32_t)
... snip ...
This makes it seriously easy to work out what you've done wrong in fairly nasty
intrinsics.
As part of this I've massively beefed up the documentation in arm_neon.td too.
Things still to do / on the radar:
- Testcase generation. This was implemented in the previous version and not
  in the new one, because:
    - Autogenerated tests are not being run. The testcase in test/ differs
      from the autogenerated version.
    - There were a whole slew of special cases in the testcase generation
      that just felt (and looked) like hacks.
  If someone really feels strongly about this, I can try and reimplement it
  too.
- Big endian. That's coming soon and should be a very small diff on top of
  this one.
llvm-svn: 211101
Most builtins date from before "cmpxchg weak" was a gleam in the
C++ committee's eye, so fortunately not much needs to change. But a
few of them *do* acknowledge that failure is possible.
For these, we'll emit the usual Cartesian product of cmpxchg
operations if we can't statically determine weakness. CodeGen can
sort it out later if the function gets inlined.
The only other non-trivial aspect of this is (I think) that we emit
the scalar expression for "IsWeak" once, at the beginning, and
propagate its value through the successive blocks. There's not much in
it, but it's slightly more consistent with the existing handling of
FailureOrder.
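As a hedged illustration, here is a GNU-style builtin where weakness is only
known at run time; Clang now emits both the strong and weak cmpxchg forms
and selects between them on the IsWeak value:

#include <stdbool.h>

bool cas(int *p, int *expected, int desired, bool weak) {
  /* 'weak' is not a compile-time constant, so both cmpxchg variants are
     emitted and branched on. */
  return __atomic_compare_exchange_n(p, expected, desired, weak,
                                     __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}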
llvm-svn: 210932
Init-order and use-after-return modes can currently be enabled
by runtime flags. The use-after-scope mode is not really working at the
moment.
The only problem I see is that users won't be able to disable extra
instrumentation for init-order and use-after-scope by a top-level Clang flag.
But this instrumentation was implicitly enabled for quite a while and
we didn't hear from users hurt by it.
llvm-svn: 210924
This is a minimal fix for clang. I'll soon add support for generating
weak variants when requested, but that's not really necessary for the
LLVM change in isolation.
llvm-svn: 210907
The vec_sld and vec_vsldoi interfaces perform a left-shift on vector
arguments for both big and little endian. However, because they rely
on the vec_perm interface which is endian-dependent, the permutation
vector needs to be reversed for LE to get the proper shift direction.
I've added some extra testing for these interfaces for LE in
builtins-ppc-altivec.c.
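A small usage sketch (my example, not from the patch):

#include <altivec.h>

vector signed int shift4(vector signed int a, vector signed int b) {
  /* Shift the 32-byte concatenation a:b left by 4 bytes; altivec.h now
     reverses the permute mask on LE so the direction matches BE. */
  return vec_sld(a, b, 4);
}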
llvm-svn: 210657
Instructions from __nodebug__ functions don't have file:line
information even when inlined into non-nodebug functions. As a result,
intrinsics (SSE and other) from <*intrin.h> clang headers _never_
have file:line information.
With this change, an instruction without !dbg metadata gets one from
the call instruction when inlined.
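For example (hedged; any <*intrin.h> wrapper behaves the same way):

#include <xmmintrin.h>

__m128 add4(__m128 a, __m128 b) {
  /* _mm_add_ps is a __nodebug__ always-inline wrapper; its inlined
     instructions now inherit this call site's !dbg location. */
  return _mm_add_ps(a, b);
}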
Fixes PR19001.
llvm-svn: 210459
The PowerPC vsumsws instruction, accessed via vec_sums, is defined
architecturally with a big-endian bias, in that the second input vector
and the result always reference big-endian element 3 (little-endian
element 0). For ease of porting, the programmer wants element 3 in
both cases.
To provide this semantics, for little endian we generate a permute for
the second input vector prior to the vsumsws instruction, and generate
a permute for the result vector following the vsumsws instruction.
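A usage sketch (my example, not from the patch):

#include <altivec.h>

vector signed int sum_across(vector signed int a, vector signed int b) {
  /* Saturated sum of all four elements of a plus element 3 of b, result in
     element 3; on LE the permutes described above keep that numbering
     consistent across endiannesses. */
  return vec_sums(a, b);
}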
The correctness of this code is tested by the new sums.c test added in
a previous patch, as well as the modifications to
builtins-ppc-altivec.c in the present patch.
llvm-svn: 210449
This uncovered something strange. Diagnostics for InlineAsm have source locations
that don't really map to where they are within the .c source file.
llvm-svn: 210440
The PowerPC vector-unpack-high and vector-unpack-low instructions
are defined architecturally with a big-endian bias, in that the vector
element numbering is assumed to be "left to right" regardless of
whether the processor is in big-endian or little-endian mode. This
effectively reverses the meaning of "high" and "low." Such a
definition is unnatural for little-endian code generation.
To facilitate ease of porting, the vec_unpackh and vec_unpackl
interfaces are designed to use natural element ordering, so that
elements are numbered according to little-endian design principles
when code is generated for a little-endian target. The desired
semantics can be achieved by using the opposite instruction for
little-endian mode. That is, when a call to vec_unpackh appears in
the code, a vector-unpack-low is generated, and when a call to
vec_unpackl appears in the code, a vector-unpack-high is generated.
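For instance (a sketch, not from the patch):

#include <altivec.h>

vector signed int unpack_high(vector signed short v) {
  /* With natural element ordering, this compiles to vupkhsh on BE but to
     vupklsh on LE. */
  return vec_unpackh(v);
}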
The correctness of this code is tested by the new unpack.c test
added in a previous patch, as well as the modifications to
builtins-ppc-altivec.c in the present patch.
Note that these interfaces were originally incorrectly implemented
when they take a vector pixel argument. This patch corrects this
implementation for both big- and little-endian code generation.
llvm-svn: 210391
Commit r210384 prematurely included changes to the little-endian
implementation of the vec_sum2s interface. This patch modifies
test/CodeGen/builtins-ppc-altivec.c to test those changes.
llvm-svn: 210389
The Altivec builtin test case test/CodeGen/builtins-ppc-altivec.c has
always been executed only for 32-bit PowerPC. These tests are equally
valid for 64-bit PowerPC. This patch updates the test to be run for
three targets: powerpc-unknown-unknown, powerpc64-unknown-unknown,
and powerpc64le-unknown-unknown. The expected code generation changes
for some of the Altivec builtins for little endian, so this patch adds
new CHECK-LE variants to the test for the powerpc64le target.
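The RUN lines follow the usual lit pattern, roughly like this (exact flags
may differ from the committed test):

// RUN: %clang_cc1 -faltivec -triple powerpc-unknown-unknown -emit-llvm %s -o - | FileCheck %s
// RUN: %clang_cc1 -faltivec -triple powerpc64-unknown-unknown -emit-llvm %s -o - | FileCheck %s
// RUN: %clang_cc1 -faltivec -triple powerpc64le-unknown-unknown -emit-llvm %s -o - | FileCheck %s -check-prefix=CHECK-LE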
These tests satisfy the testing requirements for some previous patches
committed over the last couple of days for lib/Headers/altivec.h:
r210279 for vec_perm, r210337 for vec_mul[eo], and r210340 for
vec_pack.
llvm-svn: 210384
This patch adds support for pointer types in global named register variables.
It'll be lowered as a pair of read/write_register and inttoptr/ptrtoint calls.
Also adds some early type checks in SemaDecl to avoid the assert.
Tests changed accordingly. (PR19837)
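A hedged sketch of the feature (the register name is target-specific; this
assumes an ARM/AArch64 target where "sp" is a valid named register):

/* Global named-register variable of pointer type: reads lower to
   llvm.read_register + inttoptr, writes to ptrtoint + llvm.write_register. */
register char *stack_pointer __asm__("sp");

char *current_sp(void) { return stack_pointer; }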
llvm-svn: 210274
These intrinsics are special because they directly take a memory operand (AVX2
adds the register counterparts). Typically, other non-memop intrinsics take
registers and then it's left to isel to fold memory operands.
In order to LICM intrinsics directly reading memory, we require that no stores
are in the loop (LICM) or that the folded load accesses constant memory
(MachineLICM). When neither is the case we fail to hoist a loop-invariant
broadcast.
We can work around this limitation if we expose the load as a regular load and
then just implement the broadcast using the vector initializer syntax. This
exposes the load to LICM and other optimizations.
At the IR level this is translated into a series of insertelements. The
sequence is already recognized as a broadcast so there is no impact on the
quality of codegen.
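For example (hedged; _mm256_broadcast_ss is one of the intrinsics updated
here):

#include <immintrin.h>

__m256 splat(const float *p) {
  /* Now emitted as a scalar load plus an insertelement splat; the backend
     still matches this to a single broadcast instruction, but LICM can
     hoist the load when it is loop-invariant. */
  return _mm256_broadcast_ss(p);
}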
_mm256_broadcast_pd and _mm256_broadcast_ps are not updated by this patch
because right now we lack the DAG-combiner smartness to recover the broadcast
instructions. This will be tackled in a follow-on.
There will be complementary changes on the LLVM side to remove the LLVM
intrinsics and to auto-upgrade bitcode files.
Fixes <rdar://problem/16494520>
llvm-svn: 209846
Clang knows about the sanitizer blacklist, and it makes no sense to
add a global to the list of llvm.asan.dynamically_initialized_globals if it
will be blacklisted in the instrumentation pass anyway. Instead, we should
do as much blacklisting as possible (if not all) in the frontend.
llvm-svn: 209789
I opened a discussion on cfe-commits. Ideally we've got a few things
that need to happen. CompilerRT should probably have blacklist tests.
Asan should probably not depend on that specific field.
llvm-svn: 209766
That small change, although it looked harmless, caused the LValue on the PHI
node to be emitted without the proper cast. Reverting it fixes PR19841.
llvm-svn: 209663
A few (mostly CodeGen) parts of Clang were tightly coupled to the
AArch64 backend. Now that it's gone, they will not even compile.
I've also deduplicated RUN lines in many of the AArch64 tests. This
might improve "make check-all" time noticeably: some of those NEON
tests were monsters.
llvm-svn: 209578
I forgot to fix this one in r209145. We use these flags on dllimport tests
to make sure we emit code for available_externally functions and don't inline
the IR.
llvm-svn: 209564