This adds the family of `vshlq_n` and `vshrq_n` ACLE intrinsics, which
shift every lane of a vector left or right by a compile-time
immediate. They mostly work by expanding to the IR `shl`, `lshr` and
`ashr` operations, with their second operand being a vector splat of
the immediate.
There's a fiddly special case, though. ACLE specifies that the
immediate in `vshrq_n` can take values up to //and including// the bit
size of the vector lane. But LLVM IR thinks that shifting right by the
full size of the lane is UB, and feels free to replace the `lshr` with
an `undef` half way through the optimization pipeline. Hence, to keep
this legal in source code, I have to detect it at codegen time.
Logical (unsigned) right shifts by the element size are handled by
simply emitting the zero vector; arithmetic ones are converted into a
shift of one bit less, which will always give the same output.
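For example (a usage sketch, not taken from this patch's tests), a
logical right shift by the full 16-bit lane width is legal ACLE source
and must yield zero:

  #include <arm_mve.h>

  uint16x8_t shift_out_everything(uint16x8_t v)
  {
      // ACLE allows the immediate to equal the lane size, so this is
      // valid; codegen emits a zero vector rather than an IR lshr by 16.
      return vshrq_n_u16(v, 16);
  }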
In order to do that check, I also had to enhance the tablegen
MveEmitter so that it can cope with converting a builtin function's
operand into a bare integer to pass to a code-generating subfunction.
Previously the only bare integers it knew how to handle were flags
generated from within `arm_mve.td`.
Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard
Reviewed By: dmgreen, MarkMurrayARM
Subscribers: echristo, hokein, rdhindsa, kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D71065
Summary:
This adds the family of `vshlq_n` and `vshrq_n` ACLE intrinsics, which
shift every lane of a vector left or right by a compile-time
immediate. They mostly work by expanding to the IR `shl`, `lshr` and
`ashr` operations, with their second operand being a vector splat of
the immediate.
There's a fiddly special case, though. ACLE specifies that the
immediate in `vshrq_n` can take values up to //and including// the bit
size of the vector lane. But LLVM IR thinks that shifting right by the
full size of the lane is UB, and feels free to replace the `lshr` with
an `undef` half way through the optimization pipeline. Hence, to keep
this legal in source code, I have to detect it at codegen time.
Logical (unsigned) right shifts by the element size are handled by
simply emitting the zero vector; arithmetic ones are converted into a
shift of one bit less, which will always give the same output.
In order to do that check, I also had to enhance the tablegen
MveEmitter so that it can cope with converting a builtin function's
operand into a bare integer to pass to a code-generating subfunction.
Previously the only bare integers it knew how to handle were flags
generated from within `arm_mve.td`.
Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard
Reviewed By: MarkMurrayARM
Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D71065
Summary:
First, call os.path.normpath on the filename argument. I passed in
./foo-asdf.cpp, which meant that the script failed to find the
filename, and bad things happened.
Second, call os.path.abspath on binaries. CReduce runs the
interestingness test in a temp dir, so relative paths will not work.
Reviewers: akhuang
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D71098
Summary:
This commit adds the `vpselq` intrinsics which take an MVE predicate
word and select lanes from two vectors; the `vctp` intrinsics which
create a tail predicate word suitable for processing the first m
elements of a vector (e.g. in the last iteration of a loop); and
`vpnot`, which simply complements a predicate word and is just
syntactic sugar for the `~` operator.
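For example (a usage sketch, not code from this patch's tests), a loop
tail might combine two of these to keep only the first few lanes of a
freshly computed vector:

  #include <arm_mve.h>

  uint32x4_t merge_tail(uint32x4_t computed, uint32x4_t previous,
                        uint32_t remaining)
  {
      // Predicate with only the first 'remaining' lanes set.
      mve_pred16_t p = vctp32q(remaining);
      // Keep 'computed' in the active lanes and 'previous' elsewhere.
      return vpselq_u32(computed, previous, p);
  }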
The `vctp` ACLE intrinsics are lowered to the IR intrinsics we've
already added (and which D70592 just reorganized). I've filled in the
missing isel rule for VCTP64, and added another set of rules to
generate the predicated forms.
I needed one small tweak in MveEmitter to allow the `unpromoted` type
modifier to apply to predicates as well as integers, so that `vpnot`
doesn't pointlessly convert its input integer to an `<n x i1>` before
complementing it.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D70485
The modifier system used to mutate types on NEON intrinsic definitions had a
separate letter for all kinds of transformations that might be needed, and we
were quite quickly running out of letters to use. This patch converts to a much
smaller set of orthogonal modifiers that can be applied together to achieve the
desired effect.
When merging with downstream it is likely to cause a conflict with any local
modifications to the .td files. There is a new script in
utils/convert_arm_neon.py that was used to convert all .td definitions and I
would suggest running it on the last downstream version of those files before
this commit rather than resolving conflicts manually.
The original version broke vcreate_* because it became a macro and didn't
apply the normal integer promotion rules before bitcasting to a vector.
This adds a temporary variable so that the promotion happens before the
cast.
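A rough sketch of the shape of that fix (hypothetical macro name; the
real definition is generated into <arm_neon.h> by NeonEmitter):

  #include <arm_neon.h>

  // Going through a uint64_t temporary promotes an 'int' argument such
  // as the literal 0 to 64 bits, so the scalar-to-vector cast no longer
  // changes size.
  #define my_vcreate_u64(__p0) __extension__({ \
    uint64_t __promoted = (__p0);              \
    (uint64x1_t)__promoted;                    \
  })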
This reverts commit 3f76260dc0.
Breaks at least these tests on Windows:
Clang :: Driver/clang-offload-bundler.c
Clang :: Driver/clang-offload-wrapper.c
This broke the vcreate_u64 intrinsic. Example:
$ cat /tmp/a.cc
#include <arm_neon.h>

void g() {
  auto v = vcreate_u64(0);
}
$ bin/clang -c /tmp/a.cc --target=arm-linux-androideabi16 -march=armv7-a
/tmp/a.cc:4:12: error: C-style cast from scalar 'int' to vector 'uint64x1_t' (vector of 1 'uint64_t' value) of different size
  auto v = vcreate_u64(0);
           ^~~~~~~~~~~~~~
/work/llvm.monorepo/build.release/lib/clang/10.0.0/include/arm_neon.h:4144:11: note: expanded from macro 'vcreate_u64'
  __ret = (uint64x1_t)(__p0); \
          ^~~~~~~~~~~~~~~~~~
Reverting until this can be investigated.
> The modifier system used to mutate types on NEON intrinsic definitions had a
> separate letter for all kinds of transformations that might be needed, and we
> were quite quickly running out of letters to use. This patch converts to a much
> smaller set of orthogonal modifiers that can be applied together to achieve the
> desired effect.
>
> When merging with downstream it is likely to cause a conflict with any local
> modifications to the .td files. There is a new script in
> utils/convert_arm_neon.py that was used to convert all .td definitions and I
> would suggest running it on the last downstream version of those files before
> this commit rather than resolving conflicts manually.
The modifier system used to mutate types on NEON intrinsic definitions had a
separate letter for all kinds of transformations that might be needed, and we
were quite quickly running out of letters to use. This patch converts to a much
smaller set of orthogonal modifiers that can be applied together to achieve the
desired effect.
When merging with downstream it is likely to cause a conflict with any local
modifications to the .td files. There is a new script in
utils/convert_arm_neon.py that was used to convert all .td definitions and I
would suggest running it on the last downstream version of those files before
this commit rather than resolving conflicts manually.
For some reason we were not casting a fairly obscure class of builtin calls we
expected to be polymorphic to vectors of char. It worked because the only
affected intrinsics weren't actually polymorphic after all, but it was
unnecessarily complicated.
This adds the `vgetq_lane` and `vsetq_lane` families, to copy between
a scalar and a specified lane of a vector.
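For example (a usage sketch, not taken from the tests added here):

  #include <arm_mve.h>

  int16_t read_lane_2(int16x8_t v)
  {
      // Extract lane 2 of the vector as a scalar.
      return vgetq_lane_s16(v, 2);
  }

  int16x8_t write_lane_2(int16x8_t v, int16_t x)
  {
      // Return a copy of v with lane 2 replaced by x.
      return vsetq_lane_s16(x, v, 2);
  }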
One of the new `vgetq_lane` intrinsics returns a `float16_t`, which
causes a compile error if `%clang_cc1` doesn't get the option
`-fallow-half-arguments-and-returns`. The driver passes that option to
cc1 already, but I've had to edit all the explicit cc1 command lines
in the existing MVE intrinsics tests.
A couple of fixes are included for the code I wrote up front in
MveEmitter to support lane-index immediates (and which nothing has
tested until now): the type was wrong (`uint32_t` instead of `int`)
and the range was off by one.
I've also added a method of bypassing the default promotion to `i32`
that is done by the MveEmitter code generation: it's sensible to
promote short scalars like `i16` to `i32` if they're going to be
passed to custom IR intrinsics representing a machine instruction
operating on GPRs, but not if they're going to be passed to standard
IR operations like `insertelement` which expect the exact type.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Reviewed By: dmgreen
Subscribers: kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D70188
This batch of intrinsics includes lots of things that move vector data
around or change its type without really affecting its value very
much. It includes the `vreinterpretq` family (cast one vector type to
another); `vuninitializedq` (create a vector of a given type with
don't-care contents); and `vcreateq` (make a 128-bit vector out of two
`uint64_t` halves).
These are all implemented using completely standard IR that's already
tested in existing LLVM unit tests, so I've just written a clang test
to check the IR is correct, and left it at that.
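For example (a usage sketch, not the clang test itself):

  #include <arm_mve.h>

  uint32x4_t combine_and_retype(uint64_t lo, uint64_t hi)
  {
      // Build a 128-bit vector from two 64-bit halves...
      uint64x2_t v = vcreateq_u64(lo, hi);
      // ...then view the same bits as four 32-bit lanes.
      return vreinterpretq_u32_u64(v);
  }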
I've also added some richer infrastructure to the MveEmitter Tablegen
backend, to make it specify the exact integer type of integer
arguments passed to IR construction functions, and wrap those
arguments in a `static_cast` in the autogenerated C++. That was
necessary to prevent an overloading ambiguity when passing the integer
literal `0` to `IRBuilder::CreateInsertElement`, because otherwise, it
could mean either a null pointer `llvm::Value *` or a zero `uint64_t`.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Subscribers: kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D70133
This patch adds the ACLE intrinsics for all the MVE load and store
instructions not already handled by D69791. These ones don't need new
IR intrinsics, because they can be implemented in terms of standard
LLVM IR constructions.
Some of the load and store instructions access less than 128 bits of
memory, sign/zero extending each value to a wider vector lane on load
or truncating it on store. These are represented in IR by a load of a
shorter vector followed by a zext/sext, and conversely, a trunc
followed by a short store. Existing ISel patterns already recognize
those combinations and turn them into the right MVE instructions.
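For example (a usage sketch rather than one of this patch's tests), a
widening byte load:

  #include <arm_mve.h>

  int16x8_t load_and_widen(const int8_t *p)
  {
      // Reads eight 8-bit values and sign-extends each into a 16-bit
      // lane: in IR, a load of <8 x i8> followed by a sext to <8 x i16>.
      return vldrbq_s16(p);
  }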
The predicated forms of all these instructions are represented in the
same way, except that the ordinary load/store operation is replaced
with the existing intrinsics @llvm.masked.{load,store}. These are
currently only code-generated as predicated MVE load/store
instructions if you give LLVM the `-enable-arm-maskedldst` option; so
I've done that in the LLVM codegen test. When we make that the
default, that option can be removed.
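For example (again just a usage sketch), the zeroing predicated form of
the same load:

  #include <arm_mve.h>

  int16x8_t load_first_n(const int8_t *p, uint32_t n)
  {
      // Lanes beyond the first n are filled with zero; in IR the load
      // becomes an @llvm.masked.load of the narrow vector, then a sext.
      return vldrbq_z_s16(p, vctp16q(n));
  }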
In the Tablegen backend, I've had to add a handful of extra support
features:
* We need to be able to make clang::Address objects out of a
pointer and an alignment (previously we only needed these when the
user passed us an existing one).
* We can now specify vector types that aren't 128 bits wide (for use
in those intermediate values in IR); the parametrized type system
can make one starting from two existing vector types (using the lane
count of one and the element type of the other).
* I've added support for code generation of pointer casts, and for
specifying LLVM types as operands to IRBuilder operations (for zext
and sext, though I think they'll come in useful again).
* Now not all IR construction operations need to be specified as
Builder.CreateFoo; some don't involve a Builder at all, and one
passes it as a parameter to a tiny static helper function in
CGBuiltin.cpp.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Subscribers: kristof.beyls, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D70088
'a' used to implement a splat in C++ code in NeonEmitter.cpp, but this
can be done directly from .td expansions now (and most ops already did so).
So removing it simplifies the overall code.
https://reviews.llvm.org/D69716
Previously we had a handful of bools (Signed, Floating, ...) that could
easily end up in an inconsistent state. This adds an enum Kind which
holds the mutually exclusive states a type might be in, retaining some
of the bools that modified an underlying type.
https://reviews.llvm.org/D69715
This patch adds two new families of intrinsics, both of which are
memory accesses taking a vector of locations to load from / store to.
The vldrq_gather_base / vstrq_scatter_base intrinsics take a vector of
base addresses, and an immediate offset to be added consistently to
each one. vldrq_gather_offset / vstrq_scatter_offset take a scalar
base address, and a vector of offsets to add to it. The
'shifted_offset' variants also multiply each offset by the element
size, so that the vector effectively contains array indices.
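For example (a usage sketch, not taken from this patch's tests), a
gather load using shifted offsets:

  #include <arm_mve.h>

  uint32x4_t gather_elements(const uint32_t *base, uint32x4_t indices)
  {
      // Each offset is scaled by the 4-byte element size, so 'indices'
      // behaves like a vector of array indices into 'base'.
      return vldrwq_gather_shifted_offset_u32(base, indices);
  }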
At the IR level, these operations are represented by a single set of
four IR intrinsics: {gather,scatter} × {base,offset}. The other
details (signed/unsigned, shift, and memory element size as opposed to
vector element size) are all specified by IR intrinsic polymorphism
and immediate operands, because that made the selection job easier
than making a huge family of similarly named intrinsics.
I considered using the standard IR representations such as
llvm.masked.gather, but they're not a good fit. In order to use
llvm.masked.gather to represent a gather_offset load with element size
smaller than a pointer, you'd have to expand the <8 x i16> vector of
offsets into an <8 x i16*> vector of pointers, which would be split up
during legalization, so you'd spend most of your time undoing the mess
it had made. Also, ISel support for llvm.masked.gather would be easy
enough in a trivial way (you can expand it into a gather-base load
with a zero immediate offset), but instruction-selecting lots of
fiddly idioms back into all the _other_ MVE load instructions would be
much more work. So I think dedicated IR intrinsics are the more
sensible approach, at least for the moment.
On the clang tablegen side, I've added two new features to the
Tablegen source accepted by MveEmitter: a 'CopyKind' type node for
defining a type that varies with the parameter type (it lets you ask
for an unsigned integer type of the same width as the parameter), and
an 'unsignedflag' value node for passing an immediate IR operand which
is 0 for a signed integer type or 1 for an unsigned one. That lets me
write each kind of intrinsic just once and get all its subtypes and
immediate arguments generated automatically.
Also I've tweaked the handling of pointer-typed values in the code
generation part of MveEmitter: they're generated as Address rather
than Value (i.e. including an alignment) so that they can be given to
the ordinary IR load and store operations, but I'd omitted the code to
convert them back to Value when they're going to be used as an
argument to an IR intrinsic.
On the MC side, I've enhanced MVEVectorVTInfo so that it can tell you
not only the full assembly-language suffix for a given vector type
(like 's32' or 'u16') but also the numeric-only one used by store
instructions (just '32' or '16').
Reviewers: dmgreen
Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D69791
A few integer types in the ACLE definitions of MVE intrinsics are
given as 'int' or 'unsigned' instead of <stdint.h> fixed-size types
like uint32_t. Usually these are the ones where the size isn't that
important, such as immediate offsets in loads (which have a range
limited by the instruction encoding) or the carry flag in vadcq which
can only be 0 or 1 anyway.
With this change, <arm_mve.h> follows that exact type naming, so that
the function prototypes look identical to the ones in ACLE, instead of
replacing int and unsigned with int32_t and uint32_t.
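For example, the carry parameter of `vadcq_u32` stays as a plain
`unsigned *` (shown here as a sketch of the prototype, not copied
verbatim from the generated header):

  uint32x4_t vadcq_u32(uint32x4_t a, uint32x4_t b, unsigned *carry);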
Reviewers: dmgreen
Subscribers: kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D69790
In the code that generates Sema range checks on constant arguments, I
had a piece of code that checks the bounds specified in the Tablegen
intrinsic description against the range of the integer type being
tested. If the bounds are large enough to permit any value of the
integer type, you can omit the compile-time range check. (This case is
expected to come up in some of the bitwise operation intrinsics.)
But somehow I got my signed/unsigned check backwards (asking for the
signed min/max of an unsigned type and vice versa), and also made a
sign extension error in which a signed negative value gets
zero-extended. Now rewritten more sensibly, and it should get its
first sensible test from the next batch of intrinsics I'm planning to
add in D69791.
Reviewers: dmgreen
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D69789
The TableGen-generated file containing the function definitions can be
reorganized to save some memory in the Clang binary. Functions having
the same prototype(s) will point to a shared list of prototype(s).
Patch by Pierre Gondois and Sven van Haastregt.
Differential Revision: https://reviews.llvm.org/D63557
Add handling for the "pure", "const" and "convergent" function
attributes for OpenCL builtin functions.
Patch by Pierre Gondois and Sven van Haastregt.
Differential Revision: https://reviews.llvm.org/D64319
This commit sets up the infrastructure for auto-generating <arm_mve.h>
and doing clang-side code generation for the builtins it relies on,
and demonstrates that it works by implementing a representative sample
of the ACLE intrinsics, more or less matching the ones introduced in
LLVM IR by D67158,D68699,D68700.
Like NEON, that header file will provide a set of vector types like
uint16x8_t and C functions with names like vaddq_u32(). Unlike NEON,
the ACLE spec for <arm_mve.h> includes a polymorphism system, so that
you can write plain vaddq() and disambiguate by the vector types you
pass to it.
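For example (a usage sketch), both of these calls spell the intrinsic
the same way and resolve to different overloads:

  #include <arm_mve.h>

  uint32x4_t add_u32(uint32x4_t a, uint32x4_t b)
  {
      // Resolves to vaddq_u32.
      return vaddq(a, b);
  }

  float32x4_t add_f32(float32x4_t a, float32x4_t b)
  {
      // Same spelling, different overload: resolves to vaddq_f32.
      return vaddq(a, b);
  }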
Unlike the corresponding NEON code, I've arranged to make every user-
facing ACLE intrinsic into a clang builtin, and implement all the code
generation inside clang. So <arm_mve.h> itself contains nothing but
typedefs and function declarations, with the latter all using the new
`__attribute__((__clang_builtin))` system to arrange that the user-
facing function names correspond to the right internal BuiltinIDs.
So the new MveEmitter tablegen system specifies the full sequence of
IRBuilder operations that each user-facing ACLE intrinsic should
translate into. Where possible, the ACLE intrinsics map to standard IR
operations such as vector-typed `add` and `fadd`; where no standard
representation exists, I call down to the sample IR intrinsics
introduced in an earlier commit.
Doing it like this means that you get the polymorphism for free just
by using __attribute__((overloadable)): the clang overload resolution
decides which function declaration is the relevant one, and _then_ its
BuiltinID is looked up, so by the time we're doing code generation,
that's all been resolved by the standard system. It also means that
you get really nice error messages if the user passes the wrong
combination of types: clang will show the declarations from the header
file and explain why each one doesn't match.
(The obvious alternative approach would be to have wrapper functions
in <arm_mve.h> which pass their arguments to the underlying builtins.
But that doesn't work in the case where one of the arguments has to be
a constant integer: the wrapper function can't pass the constantness
through. So you'd have to do that case using a macro instead, and then
use C11 `_Generic` to handle the polymorphism. Then you have to add
horrible workarounds because `_Generic` requires even the untaken
branches to type-check successfully, and //then// if the user gets the
types wrong, the error message is totally unreadable!)
Reviewers: dmgreen, miyuki, ostannard
Subscribers: mgorny, javed.absar, kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D67161
ExplodedGraph nodes will now have a numeric identifier stored in them
which will keep track of the order in which the nodes were created
and it will be fully deterministic both across runs and across machines.
This is extremely useful for debugging as it allows reliably setting
conditional breakpoints by node IDs.
llvm-svn: 375186
Because cast expressions have their own hierarchy, it's extremely useful
to have some information about what kind of casts we are dealing with.
llvm-svn: 375185
It's completely impossible to check that I've actually found all the
issues, due to the use of macros in arm_neon.h, but hopefully this time
it'll take more than a few hours for someone to find another issue.
I have no idea why, but apparently there's a rule that some, but not
all, builtins which should take an fp16 vector actually take an int8
vector as an argument. Fix this, and add test coverage.
Differential Revision: https://reviews.llvm.org/D68838
llvm-svn: 375179
Just running -fsyntax-only over arm_neon.h doesn't cover some intrinsics
which are defined using macros. Add more test coverage for that.
arm-neon-header.c wasn't checking the full set of available NEON target
features; change the target architecture of the test to account for
that.
Fix the generator for arm_neon.h to generate casts in more cases where
they are necessary.
Fix VFMLAL_LOW etc. to express their signatures differently, so the
builtins have the expected type. Maybe the TableGen backend should
detect intrinsics that are defined the wrong way, and produce an error.
The rules here are sort of strange.
Differential Revision: https://reviews.llvm.org/D68743
llvm-svn: 374419
Really, we were already 99% of the way there; just needed a couple minor
fixes that affected 64-bit-only builtins. Based on D61717.
Note that the change to builtin_str changes the type of a few
__builtin_neon_* intrinsics that had the "wrong" type.
Fixes https://bugs.llvm.org/show_bug.cgi?id=43341
Differential Revision: https://reviews.llvm.org/D68683
llvm-svn: 374191
Add install targets as necessary to install bash-autocomplete,
scan-build and scan-view via LLVM_DISTRIBUTION_TARGETS.
Differential Revision: https://reviews.llvm.org/D68413
llvm-svn: 373695
The primary goal here is to make the type node hierarchy available to
other tblgen backends, although it should also make it easier to generate
more selective x-macros in the future.
Because tblgen doesn't seem to allow backends to preserve the source
order of defs, this is not NFC because it significantly re-orders IDs.
I've fixed the one (fortunately obvious) place where we relied on
the old order. Unfortunately, I wasn't able to share code with the
existing AST-node x-macro generators because the x-macro schema we use
for types is different in a number of ways. The main loss is that
subclasses aren't ordered together, which doesn't seem important for
types because the hierarchy is generally very shallow with little
clustering.
llvm-svn: 373407
Allow setting a MinVersion, stating from which OpenCL version a
builtin function is available, and a MaxVersion, stating from which
OpenCL version a builtin function should not be available anymore.
Guard some definitions of the "work-item" builtin functions according
to the OpenCL versions from which they are available.
Add the "vector data load and store" builtin functions (e.g.
vload/vstore), whose signatures differ before and after OpenCL 2.0 in
the pointer argument address spaces.
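For illustration (a sketch of the idea, not the exact declarations the
TableGen backend emits), the float variant of vload4 needs one
declaration per named address space before OpenCL 2.0, whereas from 2.0
onwards the generic address space covers global, local and private:

  float4 vload4(size_t offset, const __global float *p);
  float4 vload4(size_t offset, const __local float *p);
  float4 vload4(size_t offset, const __private float *p);
  float4 vload4(size_t offset, const __constant float *p);
  // OpenCL 2.0+: one generic-address-space declaration replaces the
  // global/local/private ones; __constant still needs its own.
  float4 vload4(size_t offset, const float *p);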
Patch by Pierre Gondois and Sven van Haastregt.
Differential Revision: https://reviews.llvm.org/D63504
llvm-svn: 372321
r371875 moved some functionality around to a Basic header file, but
didn't move its definitions as well. This patch moves some things
around so that shared library building can work.
llvm-svn: 371985
Apparently Clang complains about the name hiding here in a way that my
GCC build does not, so a shocking number of buildbots decided to tell me
about it. Change the name of the variable to prevent the name hiding
and hope we don't have to fix this again.
llvm-svn: 371876
In order to enable future improvements to our attribute diagnostics,
this moves info from ParsedAttr into CommonAttributeInfo, then makes
this type the base of the *Attr and ParsedAttr types. Quite a bit of
refactoring took place, including removing a bunch of redundant Spelling
Index propagation.
Differential Revision: https://reviews.llvm.org/D67368
llvm-svn: 371875
Summary:
This patch introduces the skeleton of the constexpr interpreter,
capable of evaluating simple constexpr functions consisting of
if statements. The interpreter is described in more detail in the
RFC. Further patches will add more features.
Reviewers: Bigcheese, jfb, rsmith
Subscribers: bruno, uenoku, ldionne, Tyker, thegameg, tschuett, dexonsmith, mgorny, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64146
llvm-svn: 371834
Image types were previously available, but not working. This patch
adds image type handling.
Rename the image type definitions in the .td file to make them
consistent with other type names. Use abstract types to represent the
unqualified types. Instantiate access-qualified image types at the
point of use, e.g. using `ImageType<Image2d, "RO">`.
Add/update TableGen definitions for the read_image/write_image
builtin functions.
Patch by Pierre Gondois and Sven van Haastregt.
Differential Revision: https://reviews.llvm.org/D63480
llvm-svn: 371046
Breaks BUILD_SHARED_LIBS build, introduces cycles in library dependency
graphs. (clangInterp depends on clangAST which depends on clangInterp)
This reverts r370839, which is yet another recommit of D64146.
llvm-svn: 370874
Summary:
This patch introduces the skeleton of the constexpr interpreter,
capable of evaluating simple constexpr functions consisting of
if statements. The interpreter is described in more detail in the
RFC. Further patches will add more features.
Reviewers: Bigcheese, jfb, rsmith
Subscribers: bruno, uenoku, ldionne, Tyker, thegameg, tschuett, dexonsmith, mgorny, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64146
llvm-svn: 370839