This attribute is applied to the vector types defined in <arm_mve.h> for use
with the intrinsics for the ARM MVE vector architecture.
Its purpose is to inhibit lax vector conversions, but only in the
context of overload resolution of the MVE polymorphic intrinsic
functions. This solves an ambiguity problem with polymorphic MVE
intrinsics that take a vector and a scalar argument: the scalar
argument can often have the wrong integer type due to default integer
promotions or unsuffixed literals, so the vector argument's type
should be the one trusted when resolving MVE polymorphism.
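A minimal illustration of the ambiguity (hedged sketch; an MVE-enabled
target is assumed, and the resolved name is shown in the comment):
```
#include <arm_mve.h>

int16x8_t add_one(int16x8_t v) {
  // The literal 1 has type int, not int16_t, after default promotions.
  // Overload resolution must trust the vector operand's type, so this
  // resolves to the s16 scalar variant (vaddq_n_s16) rather than
  // being ambiguous.
  return vaddq(v, 1);
}
```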
As part of the same change, I've added the new attribute to the
declarations generated by the MveEmitter Tablegen backend (and
corrected a namespace issue with the other attribute while I was
there).
Reviewers: aaron.ballman, dmgreen
Reviewed By: aaron.ballman
Subscribers: kristof.beyls, JDevlieghere, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D72518
The -Werror clang build was broken:
tools/clang/lib/Sema/OpenCLBuiltins.inc:11824:5: error: default label in
switch which covers all enumeration values
[-Werror,-Wcovered-switch-default]
default:
We no longer need the `default` label, since all enumeration values are covered.
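For illustration, the warning fires on patterns like this (a
hypothetical reduction, not the generated file itself):
```
enum Fruit { Apple, Banana };

int toInt(Fruit F) {
  switch (F) {
  case Apple:  return 0;
  case Banana: return 1;
  default:     return -1; // -Wcovered-switch-default: every enumerator
                          // is already handled, so this label is dead
  }
}
```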
Reviewed By: svenvh
Differential Revision: https://reviews.llvm.org/D72707
This batch of intrinsics covers two sets of immediate shift
instructions, which have in common that they only overwrite part of
their output register and so they need an extra input giving its
previous value.
The VSLI and VSRI instructions shift each lane of the input vector
left or right just as if they were normal immediate VSHL/VSHR, but
then they only overwrite the output bits that correspond to actual
shifted bits of the input. So VSLI will leave the low n bits of each
output lane unchanged, and VSRI the same with the top n bits.
The V[Q][R]SHR[U]N family are all narrowing shifts: they take an input
vector of 2n-bit integers, shift each lane right by a constant, and
then narrow the shifted result to only n bits. So they only
overwrite half of the n-bit lanes in the output register, and the B/T
suffix indicates whether it's the bottom or top half of each 2n-bit
lane.
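A hedged usage sketch of both families (intrinsic spellings per ACLE;
an MVE-enabled target is assumed):
```
#include <arm_mve.h>

int16x8_t shift_insert(int16x8_t dst, int16x8_t v) {
  // VSRI: shift each 16-bit lane of v right by 3 and insert; the top
  // 3 bits of each lane of dst are left unchanged.
  return vsriq_n_s16(dst, v, 3);
}

int8x16_t shift_narrow(int8x16_t dst, int16x8_t v) {
  // VSHRNB: shift each 16-bit lane of v right by 4, narrow to 8 bits,
  // and overwrite only the bottom byte of each 16-bit lane of dst.
  return vshrnbq_n_s16(dst, v, 4);
}
```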
I've implemented the whole of the latter family using a single IR
intrinsic `vshrn`, which takes a lot of i32 parameters indicating
which instruction it expands to (by specifying signedness of the input
and output types, whether it saturates and/or rounds, etc.).
Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D72328
Summary:
The ACLE intrinsics with `gather_base` or `scatter_base` in the name
are wrappers on the MVE load/store instructions that take a vector of
base addresses and an immediate offset. The immediate offset can be up
to 127 times the alignment unit, and it can be positive or negative.
At the MC layer, we got that right. But in the Sema error checking for
the wrapping intrinsics, the offset was erroneously constrained to be
positive.
To fix this I've adjusted the `imm_mem7bit` class in the Tablegen that
defines the intrinsics. But that causes integer literals like
`0xfffffffffffffe04` to appear in the autogenerated calls to
`SemaBuiltinConstantArgRange`, which provokes a compiler warning
because that's out of the non-overflowing range of an `int64_t`. So
I've also tweaked `MveEmitter` to emit that as `-0x1fc` instead.
Updated the tests of the Sema checks themselves, and also adjusted a
random sample of the CodeGen tests to actually use negative offsets
and prove they get all the way through code generation without causing
a crash.
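For example, a previously rejected negative offset should now pass
Sema and survive codegen (hedged sketch assuming the u32 variant):
```
#include <arm_mve.h>

uint32x4_t load_below(uint32x4_t addrs) {
  // Offsets for 32-bit gathers are multiples of 4 in [-508, 508],
  // i.e. up to 127 times the alignment unit in either direction.
  return vldrwq_gather_base_u32(addrs, -508);
}
```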
Reviewers: dmgreen, miyuki, MarkMurrayARM
Reviewed By: dmgreen
Subscribers: kristof.beyls, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D72268
Provide a mechanism to attach OpenCL extension information to builtin
functions, so that their use can be restricted according to the
extension(s) the builtin is part of.
Patch by Pierre Gondois and Sven van Haastregt.
Differential Revision: https://reviews.llvm.org/D71476
Summary:
With %p, each test file that we're using to generate profile data will
make its own profraw file, which is around 60 MB in size. If we have a
lot of test files, that quickly uses a lot of space. Use %4m instead to
share the profraw files used to store the profile data. We use 4 here
based on the default value in
https://reviews.llvm.org/source/llvm-github/browse/master/llvm/CMakeLists.txt$604
Reviewers: beanz, phosek, xiaobai, smeenai, vsk
Reviewed By: vsk
Subscribers: vsk, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D71585
We may wish to keep the PGO training data outside the repository. Add a
CMake variable to allow referencing an external lit testsuite.
Differential Revision: https://reviews.llvm.org/D71507
The basic technical design here is that we have three levels
of readers and writers:
- At the lowest level, there's a `Basic{Reader,Writer}` that knows
how to emit the basic structures of the AST. CRTP allows this to
be metaprogrammed so that the client only needs to support a handful
of primitive types (e.g. `uint64_t` and `IdentifierInfo*`) and more
complicated "inline" structures such as `DeclarationName` can just
be emitted in terms of those primitives (a sketch follows this list).
In Clang's binary-serialization code, these are
`ASTRecord{Reader,Writer}`. For now, a large number of basic
structures are still emitted explicitly by code on those classes
rather than by either TableGen or CRTP metaprogramming, but I
expect to move more of these over.
- In the middle, there's a `Property{Reader,Writer}` which is
responsible for processing the properties of a larger object. The
object-level reader/writer asks the property-level reader/writer to
project out a particular property, yielding a basic reader/writer
which will be used to read/write the property's value, like so:
```
propertyWriter.find("count").writeUInt32(node->getCount());
```
Clang's binary-serialization code ignores this level (it uses
the basic reader/writer as the property reader/writer and has the
projection methods just return `*this`) and simply relies on the
properties being read/written in a stable order.
- At the highest level, there's an object reader/writer (e.g.
`Type{Reader,Writer}`), which emits a logical object with properties.
Think of this as writing something like a JSON dictionary literal.
I haven't introduced support for bitcode abbreviations yet --- it
turns out that there aren't any operative abbreviations for types
besides the QualType one --- but I do have some ideas of how they
should work. At any rate, they'll be necessary in order to handle
statements.
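Roughly the CRTP shape described in the first bullet (all names here
are hypothetical, not Clang's actual classes):
```
#include <cstdint>

template <class Impl> class BasicWriterBase {
  Impl &asImpl() { return static_cast<Impl &>(*this); }

public:
  // A compound structure is written purely in terms of the primitive
  // writers that the derived class provides.
  void writeKindAndPayload(uint64_t Kind, uint64_t Payload) {
    asImpl().writeUInt64(Kind);
    asImpl().writeUInt64(Payload);
  }
};

class RecordWriter : public BasicWriterBase<RecordWriter> {
public:
  void writeUInt64(uint64_t Value) { /* append Value to the record */ }
};
```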
I'm sorry for not disentangling the patches that added basic and type
reader/writers; I made some effort to, but I ran out of energy after
disentangling a number of other patches from the work.
Negligible impact on module size, time to build a set of about 20
fairly large modules, or time to read a few declarations out of them.
There are three significant changes here:
- Most of the methods to read various embedded structures (`APInt`,
`NestedNameSpecifier`, `DeclarationName`, etc.) have been moved
from `ASTReader` to `ASTRecordReader`. This cleans up quite a
bit of code which was passing around `(F, Record, Idx)` arguments
everywhere or doing explicit indexing, and it nicely parallels
how it works on the writer side. It also sets us up to then move
most of these methods into the `BasicReader`s that I'm introducing
as part of abstract serialization.
As part of this, several of the top-level reader methods (e.g.
`readTypeRecord`) have been converted to use `ASTRecordReader`
internally, which is a nice readability improvement.
- I've standardized most of these method names on `readFoo` rather
than `ReadFoo` (used in some of the helper structures) or `GetFoo`
(used for some specific types for no apparent reason).
- I've changed a few of these methods to return their result instead
of reading into an argument passed by reference. This is partly
for general consistency and partly because it will make the
metaprogramming easier with abstract serialization.
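An approximate before/after of the last two bullets (the signatures
here are illustrative, not exact):
```
// Before: out-parameter on ASTReader, with explicit index bookkeeping.
//   void ASTReader::ReadQualifierInfo(ModuleFile &F, QualifierInfo &Info,
//                                     const RecordData &Record,
//                                     unsigned &Idx);
// After: value return on ASTRecordReader, which tracks its own cursor.
//   QualifierInfo ASTRecordReader::readQualifierInfo();
```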
the tblgen AST node hierarchies.
Not totally NFC because both of the emitters now emit in a different
order. The type-nodes emitter now visits nodes in hierarchy order,
which means we could use range checks in classof if we had any types
that would benefit from that; currently we do not. The AST-nodes
emitter now uses a multimap keyed by the name of the record; previously
it was using `Record*`, which of course isn't stable across processes
and may have led to non-reproducible builds in some circumstances.
This adds the family of `vshlq_n` and `vshrq_n` ACLE intrinsics, which
shift every lane of a vector left or right by a compile-time
immediate. They mostly work by expanding to the IR `shl`, `lshr` and
`ashr` operations, with their second operand being a vector splat of
the immediate.
There's a fiddly special case, though. ACLE specifies that the
immediate in `vshrq_n` can take values up to //and including// the bit
size of the vector lane. But LLVM IR thinks that shifting right by the
full size of the lane is UB, and feels free to replace the `lshr` with
an `undef` halfway through the optimization pipeline. Hence, to keep
this legal in source code, I have to detect it at codegen time.
Logical (unsigned) right shifts by the element size are handled by
simply emitting the zero vector; arithmetic ones are converted into a
shift of one bit less, which will always give the same output.
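A hedged sketch of that codegen-time check (not the actual
CGBuiltin.cpp code; the helper name and parameters are invented):
```
#include "llvm/IR/IRBuilder.h"

llvm::Value *emitVShrImm(llvm::IRBuilder<> &Builder, llvm::Value *Vec,
                         uint64_t ShiftAmt, unsigned LaneBits,
                         bool Unsigned) {
  if (ShiftAmt == LaneBits) {
    if (Unsigned) // lshr by the full lane width always yields zero
      return llvm::Constant::getNullValue(Vec->getType());
    ShiftAmt = LaneBits - 1; // ashr by LaneBits-1 gives the same result
  }
  // ConstantInt::get on a vector type produces a splat constant.
  llvm::Value *Splat = llvm::ConstantInt::get(Vec->getType(), ShiftAmt);
  return Unsigned ? Builder.CreateLShr(Vec, Splat)
                  : Builder.CreateAShr(Vec, Splat);
}
```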
In order to do that check, I also had to enhance the tablegen
MveEmitter so that it can cope with converting a builtin function's
operand into a bare integer to pass to a code-generating subfunction.
Previously the only bare integers it knew how to handle were flags
generated from within `arm_mve.td`.
Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard
Reviewed By: dmgreen, MarkMurrayARM
Subscribers: echristo, hokein, rdhindsa, kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D71065
Summary:
This adds the family of `vshlq_n` and `vshrq_n` ACLE intrinsics, which
shift every lane of a vector left or right by a compile-time
immediate. They mostly work by expanding to the IR `shl`, `lshr` and
`ashr` operations, with their second operand being a vector splat of
the immediate.
There's a fiddly special case, though. ACLE specifies that the
immediate in `vshrq_n` can take values up to //and including// the bit
size of the vector lane. But LLVM IR thinks that shifting right by the
full size of the lane is UB, and feels free to replace the `lshr` with
an `undef` halfway through the optimization pipeline. Hence, to keep
this legal in source code, I have to detect it at codegen time.
Logical (unsigned) right shifts by the element size are handled by
simply emitting the zero vector; arithmetic ones are converted into a
shift of one bit less, which will always give the same output.
In order to do that check, I also had to enhance the tablegen
MveEmitter so that it can cope with converting a builtin function's
operand into a bare integer to pass to a code-generating subfunction.
Previously the only bare integers it knew how to handle were flags
generated from within `arm_mve.td`.
Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard
Reviewed By: MarkMurrayARM
Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D71065
Summary:
First, call os.path.normpath on the filename argument. I passed in
./foo-asdf.cpp, and this meant that the script failed to find the
filename, and bad things happened.
Second, call os.path.abspath on binaries. CReduce runs the
interestingness test in a temp dir, so relative paths will not work.
Reviewers: akhuang
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D71098
Summary:
This commit adds the `vpselq` intrinsics which take an MVE predicate
word and select lanes from two vectors; the `vctp` intrinsics which
create a tail predicate word suitable for processing the first m
elements of a vector (e.g. in the last iteration of a loop); and
`vpnot`, which simply complements a predicate word and is just
syntactic sugar for the `~` operator.
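Hedged usage sketch (assuming the u32 variants; an MVE-enabled target
is required):
```
#include <arm_mve.h>

uint32x4_t select_tail(uint32x4_t a, uint32x4_t b, uint32_t m) {
  mve_pred16_t p = vctp32q(m); // predicate covering the first m lanes
  return vpselq_u32(a, b, p);  // a in predicated lanes, b elsewhere
}
```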
The `vctp` ACLE intrinsics are lowered to the IR intrinsics we've
already added (and which D70592 just reorganized). I've filled in the
missing isel rule for VCTP64, and added another set of rules to
generate the predicated forms.
I needed one small tweak in MveEmitter to allow the `unpromoted` type
modifier to apply to predicates as well as integers, so that `vpnot`
doesn't pointlessly convert its input integer to an `<n x i1>` before
complementing it.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D70485
The modifier system used to mutate types on NEON intrinsic definitions had a
separate letter for all kinds of transformations that might be needed, and we
were quite quickly running out of letters to use. This patch converts to a much
smaller set of orthogonal modifiers that can be applied together to achieve the
desired effect.
When merging with downstream it is likely to cause a conflict with any local
modifications to the .td files. There is a new script in
utils/convert_arm_neon.py that was used to convert all .td definitions and I
would suggest running it on the last downstream version of those files before
this commit rather than resolving conflicts manually.
The original version broke vcreate_* because it became a macro and didn't
apply the normal integer promotion rules before bitcasting to a vector.
This adds a temporary variable, so that those promotions are applied before the cast.
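A hedged reconstruction of the shape of the fix (not the literal
<arm_neon.h> text):
```
// Routing the argument through a uint64_t temporary applies the usual
// integer promotions/conversions before the bitcast to a vector type.
#define vcreate_u64(__p0)             \
  __extension__({                     \
    uint64_t __promoted = (__p0);     \
    (uint64x1_t)(__promoted);         \
  })
```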
This reverts commit 3f76260dc0.
Breaks at least these tests on Windows:
Clang :: Driver/clang-offload-bundler.c
Clang :: Driver/clang-offload-wrapper.c
This broke the vcreate_u64 intrinsic. Example:
$ cat /tmp/a.cc
#include <arm_neon.h>
void g() {
  auto v = vcreate_u64(0);
}
$ bin/clang -c /tmp/a.cc --target=arm-linux-androideabi16 -march=armv7-a
/tmp/a.cc:4:12: error: C-style cast from scalar 'int' to vector 'uint64x1_t' (vector of 1 'uint64_t' value) of different size
  auto v = vcreate_u64(0);
           ^~~~~~~~~~~~~~
/work/llvm.monorepo/build.release/lib/clang/10.0.0/include/arm_neon.h:4144:11: note: expanded from macro 'vcreate_u64'
  __ret = (uint64x1_t)(__p0); \
          ^~~~~~~~~~~~~~~~~~
Reverting until this can be investigated.
> The modifier system used to mutate types on NEON intrinsic definitions had a
> separate letter for all kinds of transformations that might be needed, and we
> were quite quickly running out of letters to use. This patch converts to a much
> smaller set of orthogonal modifiers that can be applied together to achieve the
> desired effect.
>
> When merging with downstream it is likely to cause a conflict with any local
> modifications to the .td files. There is a new script in
> utils/convert_arm_neon.py that was used to convert all .td definitions and I
> would suggest running it on the last downstream version of those files before
> this commit rather than resolving conflicts manually.
The modifier system used to mutate types on NEON intrinsic definitions had a
separate letter for all kinds of transformations that might be needed, and we
were quite quickly running out of letters to use. This patch converts to a much
smaller set of orthogonal modifiers that can be applied together to achieve the
desired effect.
When merging with downstream it is likely to cause a conflict with any local
modifications to the .td files. There is a new script in
utils/convert_arm_neon.py that was used to convert all .td definitions and I
would suggest running it on the last downstream version of those files before
this commit rather than resolving conflicts manually.
For some reason we were not casting a fairly obscure class of builtin calls we
expected to be polymorphic to vectors of char. It worked because the only
affected intrinsics weren't actually polymorphic after all, but it was
unnecessarily complicated.
This adds the `vgetq_lane` and `vsetq_lane` families, to copy between
a scalar and a specified lane of a vector.
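Hedged usage sketch (intrinsic spellings per ACLE; an MVE-enabled
target is assumed):
```
#include <arm_mve.h>

float16_t read_lane(float16x8_t v) {
  return vgetq_lane_f16(v, 3); // extract lane 3
}

int16x8_t write_lane(int16x8_t v, int16_t x) {
  return vsetq_lane_s16(x, v, 7); // replace lane 7 (valid lanes: 0..7)
}
```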
One of the new `vgetq_lane` intrinsics returns a `float16_t`, which
causes a compile error if `%clang_cc1` doesn't get the option
`-fallow-half-arguments-and-returns`. The driver passes that option to
cc1 already, but I've had to edit all the explicit cc1 command lines
in the existing MVE intrinsics tests.
A couple of fixes are included for the code I wrote up front in
MveEmitter to support lane-index immediates (and which nothing has
tested until now): the type was wrong (`uint32_t` instead of `int`)
and the range was off by one.
I've also added a method of bypassing the default promotion to `i32`
that is done by the MveEmitter code generation: it's sensible to
promote short scalars like `i16` to `i32` if they're going to be
passed to custom IR intrinsics representing a machine instruction
operating on GPRs, but not if they're going to be passed to standard
IR operations like `insertelement` which expect the exact type.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Reviewed By: dmgreen
Subscribers: kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D70188
This batch of intrinsics includes lots of things that move vector data
around or change its type without really affecting its value very
much. It includes the `vreinterpretq` family (cast one vector type to
another); `vuninitializedq` (create a vector of a given type with
don't-care contents); and `vcreateq` (make a 128-bit vector out of two
`uint64_t` halves).
These are all implemented using completely standard IR that's already
tested in existing LLVM unit tests, so I've just written a clang test
to check the IR is correct, and left it at that.
I've also added some richer infrastructure to the MveEmitter Tablegen
backend, to make it specify the exact integer type of integer
arguments passed to IR construction functions, and wrap those
arguments in a `static_cast` in the autogenerated C++. That was
necessary to prevent an overloading ambiguity when passing the integer
literal `0` to `IRBuilder::CreateInsertElement`, because otherwise, it
could mean either a null pointer `llvm::Value *` or a zero `uint64_t`.
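The ambiguity, in miniature (a hypothetical fragment, not the
generated code itself):
```
#include "llvm/IR/IRBuilder.h"

llvm::Value *insertLane0(llvm::IRBuilder<> &Builder, llvm::Value *Vec,
                         llvm::Value *Elt) {
  // A bare literal 0 matches both the overload taking a uint64_t lane
  // index and the one taking a Value *; the cast disambiguates.
  return Builder.CreateInsertElement(Vec, Elt, static_cast<uint64_t>(0));
}
```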
Reviewers: ostannard, MarkMurrayARM, dmgreen
Subscribers: kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D70133
This patch adds the ACLE intrinsics for all the MVE load and store
instructions not already handled by D69791. These ones don't need new
IR intrinsics, because they can be implemented in terms of standard
LLVM IR constructions.
Some of the load and store instructions access less than 128 bits of
memory, sign/zero extending each value to a wider vector lane on load
or truncating it on store. These are represented in IR by a load of a
shorter vector followed by a zext/sext, and conversely, a trunc
followed by a short store. Existing ISel patterns already recognize
those combinations and turn them into the right MVE instructions.
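For instance (hedged sketch, assuming the vldrbq_s16 spelling from
ACLE and an MVE-enabled target):
```
#include <arm_mve.h>

int16x8_t widening_load(const int8_t *p) {
  // Loads 8 bytes and sign-extends each to a 16-bit lane; in IR this
  // becomes a load of <8 x i8> followed by a sext to <8 x i16>.
  return vldrbq_s16(p);
}
```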
The predicated forms of all these instructions are represented in the
same way, except that the ordinary load/store operation is replaced
with the existing intrinsics @llvm.masked.{load,store}. These are
currently only code-generated as predicated MVE load/store
instructions if you give LLVM the `-enable-arm-maskedldst` option; so
I've done that in the LLVM codegen test. When we make that the
default, that option can be removed.
In the Tablegen backend, I've had to add a handful of extra support
features:
* We need to be able to make clang::Address objects out of a
pointer and an alignment (previously we only needed these when the
user passed us an existing one).
* We can now specify vector types that aren't 128 bits wide (for use
in those intermediate values in IR); the parametrized type system
can make one starting from two existing vector types (using the lane
count of one and the element type of the other).
* I've added support for code generation of pointer casts, and for
specifying LLVM types as operands to IRBuilder operations (for zext
and sext, though I think they'll come in useful again).
* Now not all IR construction operations need to be specified as
Builder.CreateFoo; some don't involve a Builder at all, and one
passes it as a parameter to a tiny static helper function in
CGBuiltin.cpp.
Reviewers: ostannard, MarkMurrayARM, dmgreen
Subscribers: kristof.beyls, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D70088
'a' used to implement a splat in C++ code in NeonEmitter.cpp, but this
can be done directly from .td expansions now (and most ops already did so).
So removing it simplifies the overall code.
https://reviews.llvm.org/D69716
Previously we had a handful of bools (Signed, Floating, ...) that could
easily end up in an inconsistent state. This adds an enum Kind which
holds the mutually exclusive states a type might be in, retaining some
of the bools that modified an underlying type.
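Roughly this shape (the enumerator names here are hypothetical, not
the actual ones in NeonEmitter.cpp):
```
class Type {
public:
  // One mutually exclusive Kind replaces several bools (Signed,
  // Floating, ...) that could previously disagree with each other.
  enum class Kind { Void, SInt, UInt, Float, Poly };

private:
  Kind TypeKind = Kind::Void;
  // A few orthogonal modifiers of the underlying type remain as bools.
  bool Immediate = false;
  bool Pointer = false;
};
```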
https://reviews.llvm.org/D69715
This patch adds two new families of intrinsics, both of which are
memory accesses taking a vector of locations to load from / store to.
The vldrq_gather_base / vstrq_scatter_base intrinsics take a vector of
base addresses, and an immediate offset to be added consistently to
each one. vldrq_gather_offset / vstrq_scatter_offset take a scalar
base address, and a vector of offsets to add to it. The
'shifted_offset' variants also multiply each offset by the element
size, so that the vector effectively holds array indices.
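Hedged usage sketch of the two addressing styles (u32 variants
assumed; MVE-enabled target required):
```
#include <arm_mve.h>

uint32x4_t gather_bases(uint32x4_t addrs) {
  // A vector of base addresses plus a consistent immediate offset.
  return vldrwq_gather_base_u32(addrs, 8);
}

uint32x4_t gather_indices(const uint32_t *base, uint32x4_t idx) {
  // Each offset is scaled by the 4-byte element size, so idx holds
  // array indices rather than byte offsets.
  return vldrwq_gather_shifted_offset_u32(base, idx);
}
```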
At the IR level, these operations are represented by a single set of
four IR intrinsics: {gather,scatter} × {base,offset}. The other
details (signed/unsigned, shift, and memory element size as opposed to
vector element size) are all specified by IR intrinsic polymorphism
and immediate operands, because that made the selection job easier
than making a huge family of similarly named intrinsics.
I considered using the standard IR representations such as
llvm.masked.gather, but they're not a good fit. In order to use
llvm.masked.gather to represent a gather_offset load with element size
smaller than a pointer, you'd have to expand the <8 x i16> vector of
offsets into an <8 x i16*> vector of pointers, which would be split up
during legalization, so you'd spend most of your time undoing the mess
it had made. Also, ISel support for llvm.masked.gather would be easy
enough in a trivial way (you can expand it into a gather-base load
with a zero immediate offset), but instruction-selecting lots of
fiddly idioms back into all the _other_ MVE load instructions would be
much more work. So I think dedicated IR intrinsics are the more
sensible approach, at least for the moment.
On the clang tablegen side, I've added two new features to the
Tablegen source accepted by MveEmitter: a 'CopyKind' type node for
defining a type that varies with the parameter type (it lets you ask
for an unsigned integer type of the same width as the parameter), and
an 'unsignedflag' value node for passing an immediate IR operand which
is 0 for a signed integer type or 1 for an unsigned one. That lets me
write each kind of intrinsic just once and get all its subtypes and
immediate arguments generated automatically.
Also I've tweaked the handling of pointer-typed values in the code
generation part of MveEmitter: they're generated as Address rather
than Value (i.e. including an alignment) so that they can be given to
the ordinary IR load and store operations, but I'd omitted the code to
convert them back to Value when they're going to be used as an
argument to an IR intrinsic.
On the MC side, I've enhanced MVEVectorVTInfo so that it can tell you
not only the full assembly-language suffix for a given vector type
(like 's32' or 'u16') but also the numeric-only one used by store
instructions (just '32' or '16').
Reviewers: dmgreen
Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D69791
A few integer types in the ACLE definitions of MVE intrinsics are
given as 'int' or 'unsigned' instead of <stdint.h> fixed-size types
like uint32_t. Usually these are the ones where the size isn't that
important, such as immediate offsets in loads (which have a range
limited by the instruction encoding) or the carry flag in vadcq which
can only be 0 or 1 anyway.
With this change, <arm_mve.h> follows that exact type naming, so that
the function prototypes look identical to the ones in ACLE, instead of
replacing int and unsigned with int32_t and uint32_t.
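For example, the carry in/out parameter of vadcq is spelled with a
plain C type (prototype per ACLE; MVE-enabled target assumed):
```
#include <arm_mve.h>

uint32x4_t add_with_carry(uint32x4_t a, uint32x4_t b, unsigned *carry) {
  // ACLE declares the carry as 'unsigned', not uint32_t, and
  // <arm_mve.h> now matches that spelling exactly.
  return vadcq_u32(a, b, carry);
}
```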
Reviewers: dmgreen
Subscribers: kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D69790
In the code that generates Sema range checks on constant arguments, I
had a piece of code that checks the bounds specified in the Tablegen
intrinsic description against the range of the integer type being
tested. If the bounds are large enough to permit any value of the
integer type, you can omit the compile-time range check. (This case is
expected to come up in some of the bitwise operation intrinsics.)
But somehow I got my signed/unsigned check backwards (asking for the
signed min/max of an unsigned type and vice versa), and also made a
sign extension error in which a signed negative value gets
zero-extended. Now rewritten more sensibly, and it should get its
first sensible test from the next batch of intrinsics I'm planning to
add in D69791.
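A hedged sketch of the corrected logic (standalone, with invented
names; the real code works on the Tablegen intrinsic descriptions):
```
#include <cassert>
#include <cstdint>

// Decide whether a compile-time range check [lo, hi] on a constant
// argument can be omitted: only if it covers every value of the type.
bool rangeCheckNeeded(int64_t lo, int64_t hi, unsigned bits,
                      bool isSigned) {
  assert(bits > 0 && bits < 64 && "wider types need APInt arithmetic");
  // Use the bounds of the argument's *own* signedness; the bug was
  // asking for the signed min/max of an unsigned type and vice versa.
  int64_t typeMin = isSigned ? -(int64_t{1} << (bits - 1)) : 0;
  int64_t typeMax = isSigned ? (int64_t{1} << (bits - 1)) - 1
                             : (int64_t{1} << bits) - 1;
  // lo and hi are sign-extended 64-bit values, so a negative bound
  // compares correctly (the second bug zero-extended it).
  return lo > typeMin || hi < typeMax;
}
```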
Reviewers: dmgreen
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D69789
The TableGen-generated file containing the function definitions can be
reorganized to save some memory in the Clang binary. Functions with
the same prototypes now point to a shared list of prototypes.
Patch by Pierre Gondois and Sven van Haastregt.
Differential Revision: https://reviews.llvm.org/D63557
Add handling for the "pure", "const" and "convergent" function
attributes for OpenCL builtin functions.
Patch by Pierre Gondois and Sven van Haastregt.
Differential Revision: https://reviews.llvm.org/D64319