We can use the 'H' typespec modifier to get 128-bit vectors directly
in the only two users of this special case: the vcvt f16 intrinsics.
This also lets us use more meaningful prototype modifiers.
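For reference, these are the conversions in question as they appear to users
of arm_neon.h; the "high" variants are exactly the ones that want a 128-bit
half-precision vector (prototypes per ACLE, shown for illustration):
  float32x4_t vcvt_high_f32_f16(float16x8_t a);                // 128-bit operand
  float16x8_t vcvt_high_f16_f32(float16x4_t r, float32x4_t a); // 128-bit result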
llvm-svn: 245778
We had "vcvt_f16" and "VCVT_HIGH_F16": for other FP types, this naming
is used for intrinsics with integer overloads. The FP->FP conversions,
on the other hand, use the full "vcvt_f32_f64" name instead.
Use the same naming convention for the f16<->f32 conversions.
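That is, following the full-name convention already used for the other FP->FP
conversions (illustrative prototypes, per ACLE):
  float32x2_t vcvt_f32_f64(float64x2_t a); // existing convention: f64 -> f32
  float32x4_t vcvt_f32_f16(float16x4_t a); // same convention for f16 -> f32
  float16x4_t vcvt_f16_f32(float32x4_t a); // and for f32 -> f16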
While there, reorder the definitions a little bit.
llvm-svn: 245763
Improvement to the memory leak fix in 244196.
Address validity is required for the Intrinsic objects, but since the
collections only ever grow (no elements are removed), deque provides
sufficient guarantees (that the objects will never be reallocated/moved
around) for this use case.
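A minimal sketch of the guarantee being relied on (with a stand-in Intrinsic
type; not code from the patch):
  #include <deque>

  struct Intrinsic { int id; };
  std::deque<Intrinsic> defs;

  void example() {
    defs.push_back({0});
    Intrinsic *stable = &defs.front(); // address taken and kept
    for (int i = 1; i < 10000; ++i)
      defs.push_back({i}); // deque growth never moves existing elements, so
                           // 'stable' stays valid; a std::vector would have
                           // reallocated and invalidated it
  }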
llvm-svn: 244241
The patch is generated using this command:
$ tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
work/llvm/tools/clang
To reduce churn, namespaces spanning fewer than 10 lines are left untouched.
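For reference, the kind of closing comment llvm-namespace-comment adds
(illustrative):
  namespace clang {
  namespace driver {
  // ... more than 10 lines of declarations ...
  } // namespace driver
  } // namespace clang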
llvm-svn: 240270
On ARM/AArch64, we currently always use EmitScalarExpr for the immediate
builtin arguments, instead of directly emitting the constant. When the
overflow sanitizer is enabled, this generates overflow intrinsics
instead of constants, breaking assumptions in various places.
Instead, use the knowledge of "immediates" to directly emit a constant (a
sketch follows the list):
- teach the TableGen backend to emit the "immediate" modifiers
- use those modifiers in the NEON CodeGen, on ARM and AArch64
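A hedged sketch of the approach against 2015-era clang APIs (the helper and
its name are approximate, not the literal patch):
  // For an argument the builtin declares as an immediate, fold it in the
  // frontend so the overflow sanitizer never instruments the computation.
  llvm::Value *emitNeonArg(clang::CodeGen::CodeGenFunction &CGF,
                           const clang::CallExpr *E, unsigned i,
                           bool IsImmediate) {
    if (IsImmediate) {
      llvm::APSInt Value;
      bool IsConst =
          E->getArg(i)->isIntegerConstantExpr(Value, CGF.getContext());
      assert(IsConst && "NEON immediate must be a constant expression");
      (void)IsConst;
      return llvm::ConstantInt::get(CGF.getLLVMContext(), Value);
    }
    // Normal path: subject to -fsanitize instrumentation.
    return CGF.EmitScalarExpr(E->getArg(i));
  }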
Fixes PR23517.
Differential Revision: http://reviews.llvm.org/D10045
llvm-svn: 239002
If the type isn't trivially movable, emplace can skip a potentially
expensive move. It also saves a couple of characters.
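For illustration (not one of the actual call sites):
  #include <string>
  #include <utility>
  #include <vector>

  void example() {
    std::vector<std::pair<int, std::string>> v;
    v.push_back(std::make_pair(1, "foo")); // temporary pair built, then moved
    v.emplace_back(1, "foo");              // pair constructed in place
  }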
Call sites were found with the ASTMatcher + some semi-automated cleanup.
memberCallExpr(
    argumentCountIs(1), callee(methodDecl(hasName("push_back"))),
    on(hasType(recordDecl(has(namedDecl(hasName("emplace_back")))))),
    hasArgument(0, bindTemporaryExpr(
                       hasType(recordDecl(hasNonTrivialDestructor())),
                       has(constructExpr()))),
    unless(isInTemplateInstantiation()))
No functional change intended.
llvm-svn: 238601
The NEON intrinsics in arm_neon.h are designed to work on vectors
"as-if" loaded by (V)LDR. We load vectors "as-if" (V)LD1, so the
intrinsics are currently incorrect.
This patch adds big-endian versions of the intrinsics that do the
"obvious but dumb" thing of reversing all vector inputs and all
vector outputs. This will produce extra REVs, but we trust the
optimizer to remove them.
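Illustratively, a big-endian definition comes out shaped like this (a sketch
of what the generated header emits, where __ai is arm_neon.h's always_inline
macro; not a verbatim excerpt):
  __ai int32x2_t vadd_s32(int32x2_t __p0, int32x2_t __p1) {
    int32x2_t __rev0 = __builtin_shufflevector(__p0, __p0, 1, 0); // reverse input
    int32x2_t __rev1 = __builtin_shufflevector(__p1, __p1, 1, 0); // reverse input
    int32x2_t __ret = __rev0 + __rev1;
    return __builtin_shufflevector(__ret, __ret, 1, 0);           // reverse output
  }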
llvm-svn: 211893
There comes a time in the life of any amateur code generator when dumb string
concatenation just won't cut it any more. For NeonEmitter.cpp, that time has
come.
There were a bunch of magic type codes which meant different things depending on
the context. There were a bunch of special cases that really had no reason to be
there but the whole thing was so creaky that removing them would cause something
weird to fall over. There was a 1000-line switch statement for code generation
involving string concatenation, which actually did lexical scoping to an extent
(!!) with a bunch of semi-repeated cases.
I tried to refactor this three times in three different ways without
success. The only way forward was to rewrite the entire thing. Luckily the
testing coverage on this stuff is absolutely massive, both with regression tests
and the "emperor" random test case generator.
The main change is that previously, in arm_neon.td a bunch of "Operation"s were
defined with special names. NeonEmitter.cpp knew about these Operations and
would emit code based on a huge switch. Actually this doesn't make much sense -
the type information was held as strings, so type checking was impossible. Also
TableGen's DAG type actually suits this sort of code generation very well
(surprising that...)
So now every operation is defined in terms of TableGen DAGs. There are a bunch
of operators to use, including "op" (a generic unary or binary operator), "call"
(to call other intrinsics) and "shuffle" (take a guess...). One of the main
advantages of this, apart from making it more obvious what is going on, is that
we have proper type inference. This has two obvious advantages:
1) TableGen can error on bad intrinsic definitions more easily, instead of
just generating wrong code.
2) Calls to other intrinsics are typechecked too. So
we no longer need to work out whether the thing we call needs to be the Q-lane
version or the D-lane version - TableGen knows that itself!
Here's an example: before:
case OpAbdl: {
  std::string abd = MangleName("vabd", typestr, ClassS) + "(__a, __b)";
  if (typestr[0] != 'U') {
    // vabd results are always unsigned and must be zero-extended.
    std::string utype = "U" + typestr.str();
    s += "(" + TypeString(proto[0], typestr) + ")";
    abd = "(" + TypeString('d', utype) + ")" + abd;
    s += Extend(utype, abd) + ";";
  } else {
    s += Extend(typestr, abd) + ";";
  }
  break;
}
after:
def OP_ABDL : Op<(cast "R", (call "vmovl", (cast $p0, "U",
                                            (call "vabd", $p0, $p1))))>;
As an example of what happens if you do something wrong now, here's what happens
if you make $p0 unsigned before the call to "vabd" - that is, $p0 -> (cast "U",
$p0):
arm_neon.td:574:1: error: No compatible intrinsic found - looking up intrinsic 'vabd(uint8x8_t, int8x8_t)'
Available overloads:
- float64x2_t vabdq_v(float64x2_t, float64x2_t)
- float64x1_t vabd_v(float64x1_t, float64x1_t)
- float64_t vabdd_f64(float64_t, float64_t)
- float32_t vabds_f32(float32_t, float32_t)
... snip ...
This makes it seriously easy to work out what you've done wrong in fairly nasty
intrinsics.
As part of this I've massively beefed up the documentation in arm_neon.td too.
Things still to do / on the radar:
- Testcase generation. This was implemented in the previous version and not
  in the new one, because:
    - Autogenerated tests are not being run. The testcase in test/ differs
      from the autogenerated version.
    - There were a whole slew of special cases in the testcase generation
      that just felt (and looked) like hacks.
  If someone really feels strongly about this, I can try and reimplement it too.
- Big endian. That's coming soon and should be a very small diff on top of
  this one.
llvm-svn: 211101
Most 64-bit targets define int64_t as long int, and AArch64 should use the
same definition to follow the LP64 model. The GNU toolchain defines int64_t
as long int on 64-bit targets, so changing int64_t from 'long long int' to
'long int' keeps us consistent with GNU; otherwise clang produces a different
name-mangling suffix than g++.
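The mangling difference is concrete: under the Itanium C++ ABI, long mangles
as 'l' and long long as 'x'. For example:
  #include <cstdint>
  void f(int64_t);
  // int64_t == long int      -> _Z1fl (matches g++ on AArch64)
  // int64_t == long long int -> _Z1fx (mismatch)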
llvm-svn: 202004
This fixes one immediate bug where an expression with side-effects
could be emitted twice during a NEON call.
It also prepares the way for folding CodeGen for many of the SISD
intrinsics into a table, reducing code size and hopefully increasing
performance eventually ("binary search + few switch cases" should be
better than "lots of switch cases").
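A hedged sketch of the intended table shape (illustrative, with made-up field
names and entries; not the committed code):
  #include <algorithm>
  #include <iterator>

  struct NeonIntrinsicInfo {
    unsigned BuiltinID;     // sort key
    unsigned LLVMIntrinsic; // what to emit for it
  };

  // Must be kept sorted by BuiltinID so it can be binary-searched.
  static const NeonIntrinsicInfo SISDMap[] = {
      {100, 1}, {205, 2}, {317, 3}, // placeholder entries
  };

  const NeonIntrinsicInfo *findIntrinsicInfo(unsigned BuiltinID) {
    auto It = std::lower_bound(std::begin(SISDMap), std::end(SISDMap),
                               BuiltinID,
                               [](const NeonIntrinsicInfo &Info, unsigned ID) {
                                 return Info.BuiltinID < ID;
                               });
    if (It != std::end(SISDMap) && It->BuiltinID == BuiltinID)
      return It;
    return nullptr; // fall back to the remaining switch cases
  }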
llvm-svn: 201667
We used to have special handling for the isCrypto and isA64 bits in
NeonEmitter.cpp (it knew the former was predicated on __ARM_FEATURE_CRYPTO
and the latter on __aarch64__) and went through various contortions to make
sure the correct intrinsics were emitted under the correct guard.
This is ugly and has obvious scalability problems (e.g. the vcvtX intrinsics,
which are ARMv8-only but available on both architectures, would need yet
another category). This
patch moves the #if predicate into the arm_neon.td file directly and makes
NeonEmitter.cpp agnostic about what goes in there.
It also deduplicates arm_neon.td so that each desired intrinsic is mentioned in
just one place (necessary because of the new mechanism for creating
arm_neon.h).
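The net effect on the generated header is guards of this shape (an
illustrative sketch, not a verbatim excerpt):
  #if defined(__ARM_FEATURE_CRYPTO)
  uint8x16_t vaeseq_u8(uint8x16_t, uint8x16_t);    // crypto-only
  #endif

  #ifdef __aarch64__
  float64x2_t vaddq_f64(float64x2_t, float64x2_t); // AArch64-only
  #endif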
rdar://problem/16035743
llvm-svn: 201660
There are two kinds of automatically generated tests for NEON intrinsics, both
of which can be merged without adversely affecting users.
1. We check that a valid kind of __builtin_neon_XYZ overload is requested (e.g.
we're not asking for a float32x4_t version when it only accepts integers). Since
the __builtin_neon_XYZ intrinsics should only be used in arm_neon.h, relaxing
this test and permitting AArch64 types for AArch32 should not cause a problem.
The extra arm_neon.h definitions should be #ifdefed out anyway.
2. We check that intrinsics which take immediates are actually given
compile-time constants within range. Since all NEON intrinsics should be
backwards compatible, these tests should be identical on AArch64 and AArch32
anyway.
This patch, therefore, merges the separate AArch64 and 32-bit checks.
rdar://problem/16035743
llvm-svn: 201659
Previously, range checking on the __builtin_neon_XYZ_v Clang intrinsics didn't
take account of the type actually passed to the call, which meant a request
like "vext_s16(a, b, 7)" was allowed through (TableGen was conservative and
allowed 0-7 for all types). This caused an assert in the backend because the
lane doesn't make sense.
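For instance, vext_s16 operates on int16x4_t, so only lane values 0-3 make
sense; 4-7 only fit the 8-lane vextq_s16 (illustrative):
  #include <arm_neon.h>

  int16x4_t f(int16x4_t a, int16x4_t b) {
    return vext_s16(a, b, 3); // in range for a 4-lane vector
    // vext_s16(a, b, 7);     // previously accepted; now a range error
  }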
llvm-svn: 201232
This is a duplicate implementation.
E.g. this patch defines:
float64_t vabd_f64(float64_t a, float64_t b)
But there is already a similar intrinsic "vabdd_f64" with the same types.
Also, this intrinsic conflicts with the following vector-type intrinsic
(which I have implemented and will commit to trunk):
float64x1_t vabd_f64(float64x1_t a, float64x1_t b).
Two functions shouldn't have the same name in arm_neon.h.
According to the ARM ACLE document, no such vabd_f64 taking float64_t exists.
So I am reverting this commit.
llvm-svn: 196205
There seem to be quite a few references to the old macro __ARM_NEON__ on the
internet, so I don't think it's a good idea to remove it entirely (at least
yet), but the canonical name does not have the trailing underscores so we
should use that ourselves.
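User code should therefore prefer the canonical spelling (illustrative):
  #if defined(__ARM_NEON) // canonical ACLE macro, no trailing underscores
  #include <arm_neon.h>
  #endif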
llvm-svn: 195353