None of these have any reordering issues, and they still emit the same reduction intrinsics without any change in the existing test coverage:
llvm-project\clang\test\CodeGen\X86\avx512-reduceIntrin.c
llvm-project\clang\test\CodeGen\X86\avx512-reduceMinMaxIntrin.c
Differential Revision: https://reviews.llvm.org/D117881
D111985 added the generic `__builtin_elementwise_max` and `__builtin_elementwise_min` intrinsics with the same integer behaviour as the SSE/AVX instructions
This patch removes the `__builtin_ia32_pmax/min` intrinsics and just uses `__builtin_elementwise_max/min` - the existing tests see no changes:
```
__m256i test_mm256_max_epu32(__m256i a, __m256i b) {
// CHECK-LABEL: test_mm256_max_epu32
// CHECK: call <8 x i32> @llvm.umax.v8i32(<8 x i32> %{{.*}}, <8 x i32> %{{.*}})
return _mm256_max_epu32(a, b);
}
```
This requires us to add a `__v64qs` explicitly signed char vector type (we already have `__v16qs` and `__v32qs`).
Sibling patch to D117791
Differential Revision: https://reviews.llvm.org/D117798
D111986 added the generic `__builtin_elementwise_abs()` intrinsic with the same integer absolute behaviour as the SSE/AVX instructions (abs(INT_MIN) == INT_MIN)
This patch removes the `__builtin_ia32_pabs*` intrinsics and just uses `__builtin_elementwise_abs` - the existing tests see no changes:
```
__m256i test_mm256_abs_epi8(__m256i a) {
// CHECK-LABEL: test_mm256_abs_epi8
// CHECK: [[ABS:%.*]] = call <32 x i8> @llvm.abs.v32i8(<32 x i8> %{{.*}}, i1 false)
return _mm256_abs_epi8(a);
}
```
This requires us to add a `__v64qs` explicitly signed char vector type (we already have `__v16qs` and `__v32qs`).
Differential Revision: https://reviews.llvm.org/D117791
C17 deprecated ATOMIC_VAR_INIT with the resolution of DR 485. C++
followed suit when adopting P0883R2 for C++20, but additionally chose
to deprecate ATOMIC_FLAG_INIT at the same time despite the macro still
being required in C. This patch marks both macros as deprecated when
appropriate to do so.
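For illustration, a minimal sketch of what the deprecation covers (assuming a C17-or-later compile; the variable names are made up):
```
#include <stdatomic.h>

atomic_int counter  = ATOMIC_VAR_INIT(0); /* deprecated in C17 and later   */
atomic_int counter2 = 0;                  /* direct initialization instead */
atomic_flag ready   = ATOMIC_FLAG_INIT;   /* still required when compiling
                                             as C, so not deprecated there */
```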
This completes the implementation of
WG14 N2412 (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2412.pdf),
which standardizes C on a two's complement representation for integer
types. The only work that remained there was to define the correct
macros in the standard headers, which this patch does.
ROCm 4.5 device library introduced __ockl_dm_alloc and __ockl_dm_dealloc
for supporting device side malloc/free.
This patch redefines device malloc/free to use these functions.
It also fixes a bug in the wrapper header which incorrectly defines free
with return type void* instead of void.
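A hedged sketch of the wrapper-header shape this moves to (HIP device code; the `__ockl_*` signatures below are assumptions based on the description, not copied from the headers):
```
// Assumed OCKL entry points from the ROCm 4.5 device library.
extern "C" __device__ unsigned long long __ockl_dm_alloc(unsigned long long __size);
extern "C" __device__ void __ockl_dm_dealloc(unsigned long long __addr);

extern "C" __device__ void *malloc(size_t __size) {
  return (void *)__ockl_dm_alloc(__size);
}

// Note: free returns void; the previous wrapper incorrectly declared void*.
extern "C" __device__ void free(void *__ptr) {
  __ockl_dm_dealloc((unsigned long long)__ptr);
}
```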
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D116967
HVX does not have load/store instructions for vector predicates (i.e. bool
vectors). Because of that, vector predicates need to be converted to another
type before being stored, and the most convenient representation is an HVX
vector.
As a consequence, in C/C++, source-level builtins that either take or
produce vector predicates take or return regular vectors instead. On the
other hand, the corresponding LLVM intrinsics do take or return boolean
(vector predicate) types, and so a conversion of the operand or the return
value was necessary.
This conversion would happen inside clang's codegen, but was somewhat
fragile.
This patch changes the strategy: a builtin that takes a vector predicate
now really expects a vector predicate. Since such a predicate cannot be
provided via a variable, this builtin must be composed with other builtins
that either convert vector to a predicate (V6_vandvrt) or predicate to a
vector (V6_vandqrt).
For users using builtins defined in hvx_hexagon_protos.h there is no impact:
the conversions were added to that file. Other users will need to insert
- __builtin_HEXAGON_V6_vandvrt[_128B](V, -1) to convert vector V to a
vector predicate, or
- __builtin_HEXAGON_V6_vandqrt[_128B](Q, -1) to convert vector predicate Q
to a vector.
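A hedged sketch of the composition this implies (the `HVX_Vector` typedef and the assumption of 128-byte HVX mode are illustrative; the builtin names and the `-1` mask argument follow the description above). Since a vector predicate cannot be held in a variable, the vector-to-predicate conversion is nested directly inside the predicate-to-vector conversion:
```
typedef int HVX_Vector __attribute__((__vector_size__(128)));

static HVX_Vector round_trip(HVX_Vector v) {
  // V6_vandvrt: vector -> vector predicate; V6_vandqrt: predicate -> vector.
  return __builtin_HEXAGON_V6_vandqrt_128B(
      __builtin_HEXAGON_V6_vandvrt_128B(v, -1), -1);
}
```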
Builtins __builtin_HEXAGON_V6_vmaskedstore.* are a temporary exception to
that, but they are deprecated and should not be used anyway. In the future
they will either follow the same rule, or be removed.
Use the "pure" attribute (or "readonly") for the vload, vload_half and
vloada_half builtins.
Includes test changes to SemaOpenCL/fdeclare-opencl-builtins.cl to avoid
triggering unused-result warnings.
Reviewed By: svenvh
Differential Revision: https://reviews.llvm.org/D110742
Use the "pure" attribute (or "readonly") for the vload, vload_half and
vloada_half builtins.
Reviewed By: svenvh
Differential Revision: https://reviews.llvm.org/D110742
This patch implements the following:
- Emit PACBTI-M build attributes in libunwind asm files
- Authenticate LR in DWARF32 using PACBTI
Use Armv8.1-M.Main PACBTI extension to authenticate the return address
(stored in the LR register) before moving it to the PC (IP) register.
The AUTG instruction is used with the candidate return address, the CFA,
and the authentication code that is retrieved from the saved
pseudo-register RA_AUTH_CODE.
- Authenticate LR in EHABI using PACBTI
Authenticate the contents of the LR register using Armv8.1-M.Main PACBTI
extension.
A new frame unwinding instruction is introduced (0xb4). This instruction
pops the return address authentication code off the stack; that code is
then used in conjunction with the SP and the next-to-be instruction
pointer to perform authentication.
This authentication code is popped into a new register,
UNW_ARM_PSEUDO_PAC, which is a pseudo-register.
This patch is part of a series that adds support for the PACBTI-M extension of
the Armv8.1-M architecture, as detailed here:
https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/armv8-1-m-pointer-authentication-and-branch-target-identification-extension
The PACBTI-M specification can be found in the Armv8-M Architecture Reference
Manual:
https://developer.arm.com/documentation/ddi0553/latest
The following people contributed to this patch:
- Momchil Velikov
- Victor Campos
- Ties Stuij
Reviewed By: #libunwind, danielkiss, mstorsjo
Differential Revision: https://reviews.llvm.org/D112430
The 14.31.30818 toolset has the following in its `stdatomic.h`:
~~~
#ifndef __cplusplus
#error <stdatomic.h> is not yet supported when compiling as C, but this is planned for a future release.
#endif
~~~
This results in clang failing to build existing code which relied on
`stdatomic.h` in C mode on Windows. Simply fallback to the clang header
until that header is available as a complete implementation.
The clang portion of c933c2eb33 was missed as I made
some kind of mistake squashing the commits with git.
This patch just adds those.
The original review: https://reviews.llvm.org/D114088
The XL implementation of vec_round for vector double uses
"round-to-nearest, ties to even" just as the vector float
version does. However, clang and gcc use "round-to-nearest-away"
for vector double and "round-to-nearest, ties to even"
for vector float.
The XL behaviour is implemented under the __XL_COMPAT_ALTIVEC__
macro similarly to other instances of incompatibility.
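A sketch of the semantic difference for a tie value such as 2.5 (assumes a PowerPC target with VSX and `<altivec.h>` declaring vec_round for vector double):
```
#include <altivec.h>

vector double round_ties(vector double v) {
  // With __XL_COMPAT_ALTIVEC__ defined: "round-to-nearest, ties to even",
  // so 2.5 -> 2.0 (matches XL).
  // Without it (default clang/gcc):     "round-to-nearest-away",
  // so 2.5 -> 3.0.
  return vec_round(v);
}
```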
Differential revision: https://reviews.llvm.org/D113642
This enables Intel intrinsics support on FreeBSD.
Thanks to @pkubaj who noticed this feature was missing
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D113451
With this, `void f() { __asm__("mov eax, ebx"); }` now compiles with clang
with -masm=intel.
This matches gcc.
The flag is not accepted in clang-cl mode. It has no effect on
MSVC-style `__asm {}` blocks, which are unconditionally in intel
mode both before and after this change.
One difference to gcc is that in clang, inline asm strings are
"local" while they're "global" in gcc. Building the following with
-masm=intel works with clang, but not with gcc where the ".att_syntax"
from the 2nd __asm__() is in effect until file end (or until a
".intel_syntax" somewhere later in the file):
__asm__("mov eax, ebx");
__asm__(".att_syntax\nmovl %ebx, %eax");
__asm__("mov eax, ebx");
This also updates clang's intrinsic headers to work both in
-masm=att (the default) and -masm=intel modes.
The official solution for this according to "Multiple assembler dialects in asm
templates" in gcc docs->Extensions->Inline Assembly->Extended Asm
is to write every inline asm snippet twice:
```
bt{l %[Offset],%[Base] | %[Base],%[Offset]}
```
This works in LLVM after D113932 and D113894, so use that.
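A hedged sketch of that dual-dialect style (the helper name and constraints here are illustrative, not the actual header code); the `{att | intel}` alternatives let the same snippet assemble under both -masm=att and -masm=intel:
```
static inline unsigned char bittest32(const int *base, int offset) {
  unsigned char carry;
  __asm__("bt{l %[Offset],%[Base] | %[Base],%[Offset]}"
          : "=@ccc"(carry)                  // CF holds the tested bit
          : [Offset] "r"(offset), [Base] "rm"(*base));
  return carry;
}
```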
(Just putting `.att_syntax` at the start of the snippet works in some but not
all cases: When LLVM interpolates in parameters like `%0`, it uses at&t or
intel syntax according to the inline asm snippet's flavor, so the `.att_syntax`
within the snippet happens too late: the interpolated-in parameter is already
in intel style, and then won't parse in the switched `.att_syntax`.)
It might be nice to invent a `#pragma clang asm_dialect push "att"` /
`#pragma clang asm_dialect pop` to be able to force asm style per snippet,
so that the inline asm string doesn't contain the same code in two variants,
but let's leave that for a follow-up.
Fixes PR21401 and PR20241.
Differential Revision: https://reviews.llvm.org/D113707
This splits out the generated headers and conditionalises them upon the
target being enabled.
The motivation here is that the RISCV header alone added 10MB to the
resource directory, which was previously at 10MB, increasing the build
size and time. This header is contributing ~50% of the size of the
resource headers (~10MB).
The ARM generated headers are contributing about ~10% or 1MB.
This could be extended further adding only the static resource headers
for the targets that the LLVM build supports.
The changes to the tests for ARM mirror what the RISCV target already
did and rnk identified as a possible issue.
Testing:
```
cmake -G Ninja -D LLVM_TARGETS_TO_BUILD=X86 -D LLVM_ENABLE_PROJECTS="clang;lld" ../clang
ninja check-clang
```
Differential Revision: https://reviews.llvm.org/D112890
Reviewed By: craig.topper
Clang builtin utility `__remove_address_space` now works if generic
address space is not supported in C++ for OpenCL 2021.
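A hedged C++ for OpenCL sketch of the trait's use (the kernel and names are illustrative; trait usage is assumed from clang's OpenCL support headers):
```
kernel void scale(__global int *buf) {
  // __remove_address_space strips the __global qualifier from the type,
  // leaving plain `int`.
  using elem_t = __remove_address_space<__global int>::type;
  elem_t tmp = buf[0];
  buf[0] = tmp * 2;
}
```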
Differential Revision: https://reviews.llvm.org/D110155
Add new triple and target info for 'spirv32' and 'spirv64', thus enabling
clang (LLVM IR) code emission for the SPIR-V target.
The target for SPIR-V is mostly reused from SPIR by derivation
from a common base class since IR output for SPIR-V is mostly
the same as SPIR. Some refactoring is made accordingly.
Added and updated tests for parts that are different between
SPIR and SPIR-V.
Patch by linjamaki (Henry Linjamäki)!
Differential Revision: https://reviews.llvm.org/D109144
- Add the missing NVVM predicate builtins for address space checking
- Redefine them as pure functions so that they could be used in
__builtin_assume.
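A hedged sketch of the intended use in CUDA device code (the specific predicate name `__nvvm_isspacep_shared` is assumed here as one of the address-space-check builtins):
```
__device__ void store_to_shared(int *p, int v) {
  // Being pure, the predicate call is legal inside __builtin_assume and is
  // not discarded as having side effects.
  __builtin_assume(__nvvm_isspacep_shared(p));
  *p = v;
}
```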
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D112053
CUDA-11 headers rely on these NVCC builtins.
Despite having `__nv` prefix, those are *not* provided by libdevice.
Differential Revision: https://reviews.llvm.org/D111665
Add atomic_half types and builtins operating on the types from the
cl_ext_float_atomics extension.
Patch by Haonan Yang.
Differential Revision: https://reviews.llvm.org/D109740
The SSE4 header (smmintrin.h) should include SSSE3 (tmmintrin.h) instead
of SSE2 (emmintrin.h).
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D111482
This patch updates the vec_extract builtins to take a signed int as the second
parameter, as defined by the Power Vector Intrinsics Programming Reference.
This patch is NFC and all existing tests pass.
Differential Revision: https://reviews.llvm.org/D110935
This patch updates the vec_popcnt builtins to return vector unsigned,
as defined by the Power Vector Intrinsics Programming Reference.
This patch is NFC and all existing tests pass.
Differential Revision: https://reviews.llvm.org/D110934
The patch implements header-only support for texture lookups.
The patch has been tested on a source file with all possible combinations of
argument types supported by CUDA headers, compiled and verified that the
generated instructions and their parameters match the code generated by NVCC.
Unfortunately, compiling texture code requires CUDA headers and can't be tested
in clang itself. The test will need to be added to the test-suite later.
While the generated code compiles and seems to match NVCC, I do not have any
texture-using code with which I could test the correctness of the implementation. Hence
the experimental status.
Differential Revision: https://reviews.llvm.org/D110089
It's true that docs.microsoft.com says:
"""The _ReadBarrier, _WriteBarrier, and _ReadWriteBarrier compiler
intrinsics and the MemoryBarrier macro are all deprecated and should not
be used. For inter-thread communication, use mechanisms such as
atomic_thread_fence and std::atomic<T>, which are defined in the C++
Standard Library. For hardware access, use the /volatile:iso compiler
option together with the volatile keyword."""
And these attributes have been here since these builtins were added in
r192860.
However:
- cl.exe does not warn on them even with /Wall
- none of the replacements are useful for C code
- we don't add __attribute__((__deprecated__())) to any other
declarations in intrin.h
- intrin0.h in the MSVC headers declares _ReadWriteBarrier() (but
without the deprecation attribute), so you get inconsistent
deprecation warnings depending on if you include intrin.h or intrin0.h
The motivation is that compiling sqlite.h with clang-cl produces a
deprecation warning for _ReadWriteBarrier(), while cl.exe does not warn.
Differential Revision: https://reviews.llvm.org/D111232
The builtin for vec_orc has support for the following two signatures,
but currently the compiler marks it ambiguous:
vector float vec_orc(vector float, vector float)
vector double vec_orc(vector double, vector double)
This patch implements these two builtins.
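A sketch of the two overloads this enables (assumes a PowerPC target with -mcpu=pwr8 or newer and `<altivec.h>`):
```
#include <altivec.h>

vector float  orc_f(vector float a, vector float b)   { return vec_orc(a, b); }
vector double orc_d(vector double a, vector double b) { return vec_orc(a, b); }
```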
Differential revision: https://reviews.llvm.org/D110858
These builtins produce inefficient code for CPUs prior to Power8
due to vcmpequd being unavailable. The predicate forms can actually
leverage the available vcmpequw along with xxlxor to produce a better
sequence.
When a user specifies an out-of-range index for vec_insert, we
just produce IR that has undefined behaviour even though the
documentation states that modulo arithmetic is used. This patch
just truncates the value to a valid index.
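A sketch of the documented modulo behaviour (assumes a PowerPC target with `<altivec.h>`; the index value is hypothetical):
```
#include <altivec.h>

vector signed int set_lane(vector signed int v, signed int x) {
  // vector signed int has 4 elements, so an out-of-range index such as 5
  // wraps to 5 % 4 == 1 instead of producing undefined behaviour.
  return vec_insert(x, v, 5);
}
```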