The rounding mode is checked in CGBuiltin.cpp to generate the correct intrinsic call.
This switch changes the masking to use the i8-bitcast-to-<8 x i1>-plus-extract-i1 form of the IR for the mask. Previously we ended up with a scalar 'and' plus an icmp.
llvm-svn: 336637
This will convert the i8 mask argument to <8 x i1>, extract an i1, and then emit a select instruction. This replaces the '(__U & 1)' and ternary operator used in some of the intrinsics. The old sequence was lowered to a scalar 'and' and a compare. The new sequence uses an i1 vector that will interoperate better with other mask intrinsics.
This removes the need to handle div_ss/sd specially in CGBuiltin.cpp. A follow up patch will add the GCCBuiltin name back in llvm and remove the custom handling.
I made some adjustments to the legacy move_ss/sd intrinsics, which we reused here, to do a simpler extract and insert instead of two extracts and two inserts or a shuffle.
llvm-svn: 336622
This is part of an ongoing attempt at making 512-bit vectors illegal in the X86 backend type legalizer due to CPU frequency penalties associated with wide vectors on Skylake Server CPUs. We want the loop vectorizer to be able to emit IR containing wide vectors as intermediate operations in vectorized code and allow these wide vectors to be legalized to 256 bits by the X86 backend even though we are targeting a CPU that supports 512-bit vectors. This is similar to what happens with an AVX2 CPU: the vectorizer can emit wide vectors and the backend will split them. We want this splitting behavior, but still be able to use new Skylake instructions that work on 256-bit vectors and support things like masking and gather/scatter.
Of course if the user uses explicit vector code in their source code we need to not split those operations. Especially if they have used any of the 512-bit vector intrinsics from immintrin.h. And we need to make it so that merely using the intrinsics produces the expected code in order to be backwards compatible.
To support this goal, this patch adds a new IR function attribute "min-legal-vector-width" that can indicate the need for a minimum vector width to be legal in the backend. We need to ensure this attribute is set to the largest vector width needed by any intrinsics from immintrin.h that the function uses. The inliner will be responsible for merging this attribute when a function is inlined. We may also need a way to limit inlining, but we can discuss that later.
To make things more complicated, there are two different ways intrinsics are implemented in immintrin.h: either as an always_inline function containing calls to builtins (which can be target specific or target independent) or vector extension code, or as a macro wrapper around a target specific builtin. I believe I've removed all cases where the macro was around a target independent builtin.
To support the always_inline function case, this patch adds the __attribute__((min_vector_width(128))) attribute that can be used to tag these functions with their vector width. All x86 intrinsic functions that operate on vectors have been tagged with this attribute.
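As a rough sketch of that case (names here are illustrative; in the shipped headers the attribute is spelled __min_vector_width__ inside the usual DEFAULT_FN_ATTRS macros):
```c
#include <immintrin.h>

/* Sketch only: a header-style always_inline wrapper carrying its vector width. */
#define MY_FN_ATTRS512                                                     \
  __attribute__((__always_inline__, __nodebug__, __target__("avx512f"),   \
                 __min_vector_width__(512)))

static __inline__ __m512i MY_FN_ATTRS512 my_add_epi32(__m512i __A, __m512i __B) {
  return (__m512i)((__v16si)__A + (__v16si)__B);
}
```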
To support the macro case, all x86 specific builtins have also been tagged with the vector width that they require. Use of any builtin with this property will implicitly increase the min_vector_width of the function that calls it. I've done this as a new property in the attribute string for the builtin rather than basing it on the type string so that we can opt into it on a per builtin basis and avoid any impact to target independent builtins.
There will be future work to support vectors passed as function arguments and supporting inline assembly. And whatever else we can find that isn't covered by this patch.
Special thanks to Chandler who suggested this direction and reviewed a preview version of this patch. And thanks to Eric Christopher who has had many conversations with me about this issue.
Differential Revision: https://reviews.llvm.org/D48617
llvm-svn: 336583
I believe these have been broken since their introduction into clang.
I've enhanced the tests for these intrinsics to use a real rounding mode and check all the intrinsic arguments instead of just the name.
llvm-svn: 336498
We had the masked versions of the rounding intrinsics, but not unmasked ones.
Also change the rounding tests to not use the CUR_DIRECTION rounding mode.
llvm-svn: 336470
All of these found by grepping through IR from the builtin tests for extra trunc and zext/sext instructions that shouldn't have been there.
Some of these were real bugs where we lost bits from the user input:
_mm512_mask_broadcast_f32x8
_mm512_maskz_broadcast_f32x8
_mm512_mask_broadcast_i32x8
_mm512_maskz_broadcast_i32x8
_mm256_mask_cvtusepi16_storeu_epi8
llvm-svn: 336042
Summary: Tests in a separate change to the test-suite.
Reviewers: rsmith, tra
Subscribers: lahwaacz, sanjoy, cfe-commits
Differential Revision: https://reviews.llvm.org/D48151
llvm-svn: 336026
Summary:
Fixes PR37753: min/max can't be called from __host__ __device__
functions in C++14 mode.
Testcase in a separate test-suite commit.
Reviewers: rsmith
Subscribers: sanjoy, lahwaacz, cfe-commits
Differential Revision: https://reviews.llvm.org/D48036
llvm-svn: 336025
ud2 and int2c were missing declarations entirely. And the bitscans were only under x86_64, but they seem to be in BuiltinsARM.def as well and are tested by ms_intrinsics.c
Differential Revision: https://reviews.llvm.org/D48187
llvm-svn: 335259
Similar to what was done to max/min recently.
These already reduce the vector width to 256 and 128 bits as we go, unlike the original max/min code.
Differential Revision: https://reviews.llvm.org/D48346
llvm-svn: 335253
We only need to use 512-bit vectors all the way through for v8i64 reductions, since those max instructions are new in avx512f and only available in 512-bit form until SKX.
For v16i32 and floating point we have legacy 128/256 bit instructions we can use.
I've tried to use other intrinsics to reduce the verbosity of the code and avoid having to mention all the shuffles. I've also removed all the -1 shuffle indices so the output sequence is fully specified and not left to backend optimization.
Differential Revision: https://reviews.llvm.org/D47401
llvm-svn: 335070
The previous names took the shift amount in bits to match gcc and required a multiply by 8 in the header. This creates a misleading error message when we check the range of the immediate to the builtin since the allowed range also got multiplied by 8.
This commit changes the builtins to use a byte shift amount to match the underlying instruction and the Intel intrinsic.
Fixes the remaining issue from PR37795.
llvm-svn: 334773
Clang/LLVM doesn't have a way to pass an HLE hint through to the X86 backend to emit HLE prefixed instructions. So this is a good short term fix.
Differential Revision: https://reviews.llvm.org/D47672
llvm-svn: 334751
I'd like to make the select builtins require the avx512f, avx512bw, or avx512vl feature to match what is normally required to get masking. Truncate is special in that there are instructions with a 128/256-bit masked result even without avx512vl.
By using special builtins we can emit a select without using the 128/256-bit select builtins.
llvm-svn: 334331
I'm looking into making the select builtins require avx512f, avx512bw, or avx512vl since masking operations generally require those features.
The extract builtins are funny because the 512-bit versions return a 128 or 256 bit vector with masking even when avx512vl is not supported.
llvm-svn: 334330
These builtins are all handled by CGBuiltin.cpp so it doesn't much matter what the immediate type is, but int matches the intrinsic spec.
llvm-svn: 334310
Test changes are due to differences in how we generate undef elements now. We also changed the types used for extractf128_si256/insertf128_si256 to match the signature of the builtin that previously existed which this patch resurrects. This also matches gcc.
llvm-svn: 334261
Adds support for these intrinsics, which are ARM and ARM64 only:
_interlockedbittestandreset_acq
_interlockedbittestandreset_rel
_interlockedbittestandreset_nf
_interlockedbittestandset_acq
_interlockedbittestandset_rel
_interlockedbittestandset_nf
Refactor the bittest intrinsic handling to decompose each intrinsic into
its action, its width, and its atomicity.
llvm-svn: 334239
We still emit shufflevector instructions we just do it from CGBuiltin.cpp now. This ensures the intrinsics that use this are only available on CPUs that support the feature.
I also added range checking to the immediate, but only checked it is 8 bits or smaller. We should maybe be stricter since we never use all 8 bits, but gcc doesn't seem to do that.
llvm-svn: 334237
We still lower them to native shuffle IR, but we do it in CGBuiltin.cpp now. This allows us to check the target feature and ensure the immediate fits in 8 bits.
This also improves our -O0 codegen slightly because we're able to see the zeroinitializer in the shuffle. It looks like it got lost behind a store+load previously.
llvm-svn: 334208
Summary:
We recently switched to using selects in the intrinsics header files for FMA instructions. But the 512-bit versions support flavors with rounding mode, which must be an Integer Constant Expression. This has forced those intrinsics to be implemented as macros. As it stands now the mask and mask3 intrinsics evaluate one of their macro arguments twice. If that argument itself is another intrinsic macro, we can end up over-expanding macros. Or if it's something we can CSE later, it would show up multiple times when it shouldn't.
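To illustrate the double evaluation, a simplified sketch of the pre-patch shape (not the exact in-tree macro):
```c
#include <immintrin.h>

/* The A argument appears twice: once as an FMA operand and once as the
   select passthru, so whatever macro A expands to is expanded twice. */
#define mask_fmadd_round_pd_sketch(A, U, B, C, R)                 \
  ((__m512d)__builtin_ia32_selectpd_512(                          \
      (__mmask8)(U),                                              \
      (__v8df)_mm512_fmadd_round_pd((A), (B), (C), (R)),          \
      (__v8df)(__m512d)(A)))
```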
I tried adding __extension__ around the macro and making it an expression statement and declaring a local variable. But whatever name you choose for the local variable can never be used as the name of an input to the macro in user code. If that happens you would end up with the same name on the LHS and RHS of an assignment after expansion. We might be safe if we use __ in front of the variable names because those names are reserved and user code shouldn't use that, but I wasn't sure I wanted to make that claim.
The other option which I've chosen here, is to add back _mask, _maskz, and _mask3 flavors of the builtin which we will expand in CGBuiltin.cpp to replicate the argument as needed and insert any fneg needed on the third operand to make a subtract. The _maskz isn't truly necessary if we have an unmasked version or if we use the masked version with a -1 mask and wrap a select around it. But I've chosen to make things more uniform.
I separated out the scalar builtin handling to avoid too many things going on in EmitX86FMAExpr. It was different enough due to the extract and insert that the minor duplication of the CreateCall was probably worth it.
Reviewers: tkrupa, RKSimon, spatel, GBuella
Reviewed By: tkrupa
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D47724
llvm-svn: 334159
Previously we were just using extended vector operations in the header file.
This unfortunately allowed non-constant indices to be used with the intrinsics. This is incompatible with gcc, icc, and MSVC. It also introduces a different performance characteristic because a non-constant index gets lowered to a vector store and an element-sized load.
By adding the builtins we can check that the index is a constant and ensure it's in range of the vector element count.
User code still has the option to use extended vector operations themselves if they need non-constant indexing.
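For illustration (the extract builtin shown follows the usual __builtin_ia32_vec_ext naming; treat it as an example rather than an exhaustive list):
```c
#include <immintrin.h>

/* Extended vector syntax accepts a non-constant index and lowers it through a
   stack store plus an element-sized load: */
static inline int extract_any(__m128i a, int i) { return ((__v4si)a)[i]; }

/* A dedicated builtin lets clang require an integer constant and range-check
   it against the element count: */
#define extract_const(X, N) \
  ((int)__builtin_ia32_vec_ext_v4si((__v4si)(__m128i)(X), (int)(N)))
```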
llvm-svn: 334057
Previously we only checked the sse feature, but this means that if you passed -mno-mmx, the builtins/intrinsics wouldn't be disabled in the frontend and would instead fail backend isel.
llvm-svn: 333980
We need to implement _interlockedbittestandset as a builtin for
windows.h, so we might as well do the whole family. It reduces code
duplication anyway.
Fixes PR33188, a long standing bug in our bittest implementation
encountered by Chakra.
llvm-svn: 333978
Adding __attribute__((aligned(32))) to __m256 breaks the implementation
of _mm256_loadu_ps on Windows. On Windows, alignment attributes have
higher precedence than packing attributes.
We also might want to carefully consider the consequences of changing
our vector typedefs, since many users copy them and invent their own
new, non-Intel specific vector type names.
llvm-svn: 333958
This is more consistent with other usages of builtin_shufflevector. Later optimization passes or codegen will detect the duplicate vector and replace it with undef. Using _mm_undefined just puts a zeroinitializer that still needs to be optimized out later.
llvm-svn: 333944
One of the macro arguments was being used as the passthru operand even though the passthru is unused when the mask is all 1s. But in that case the actual value doesn't matter, so we should use undef instead to avoid expanding the macro argument unnecessarily.
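Schematically (op512_mask stands in for a masked builtin; the names are hypothetical):
```c
#include <immintrin.h>

/* Placeholder for a masked 512-bit builtin: op(a, b, passthru, mask, rounding). */
__m512 op512_mask(__m512 a, __m512 b, __m512 passthru, __mmask16 u, int r);

/* Before: the A argument expands a second time as the never-used passthru. */
#define op_round_before(A, B, R) op512_mask((A), (B), (A), (__mmask16)-1, (R))

/* After: with an all-ones mask the passthru is never read, so pass undefined. */
#define op_round_after(A, B, R) \
  op512_mask((A), (B), _mm512_undefined_ps(), (__mmask16)-1, (R))
```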
llvm-svn: 333865
This fixes two major problems:
- We were not capping vector alignment as desired on 32-bit ARM.
- We were using different alignments based on the AVX settings on
Intel, so we did not have a consistent ABI.
This is an ABI break, but we think we can get away with it because
vectors tend to be used mostly in inline code (which is why not having
a consistent ABI has not proven disastrous on Intel).
Intel's AVX types are specified as having 32-byte / 64-byte alignment,
so align them explicitly instead of relying on the base ABI rule.
Note that this sort of attribute is stripped from template arguments
in template substitution, so there's a possibility that code templated
over vectors will produce inadequately-aligned objects. The right
long-term solution for this is for alignment attributes to be
interpreted as true qualifiers and thus preserved in the canonical type.
llvm-svn: 333791
This is more consistent with all of our other avx512 macro intrinsics.
It also fixes a bad cast where an argument was cast to mmask8 when it should have been mmask16.
llvm-svn: 333778
The majority of the cases were correct. This fixes the few that weren't.
I also removed some superfluous parentheses in non-macros that confused my attempts at grepping for missing casts.
llvm-svn: 333615
I think this is a holdover from when we used to declare variables inside the macros. And then it's been copied and pasted forward for years every time a new macro intrinsic gets added.
Interestingly this caused some tests for IRGen to be slightly more optimized. We now return a zeroinitializer directly instead of going through a store+load.
It also removed a bogus error message on another test.
llvm-svn: 333613
Most of the original comments used C style /* */ comments, but some C++ // comments had snuck in over time.
Still need to convert all the doxygen comments. Which is much harder to do.
llvm-svn: 333603
We don't need the insertion back into the original vector at the end. The builtin already understands that.
This is different than _mm_sqrt_sd which takes two arguments and we do need to insert.
llvm-svn: 333572
We had quite a few for different element sizes of integers, sometimes with strange target features attached to them.
We only need a single version for each of __m128i, __m256i, and __m512i with the target feature that first introduced those types.
llvm-svn: 333568
This patch replaces all packed (and scalar without rounding
mode) fused intrinsics with fmadd/fmaddsub variations.
Then fmadd/fmaddsub are lowered to native IR.
Patch by tkrupa
Reviewers: craig.topper, sroland, spatel, RKSimon
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D47444
llvm-svn: 333555
Mostly this fixes the names of all the 128-bit intrinsics to start with _mm_ instead of _mm128_ as is the convention and what the Intel docs say.
This also fixes the name of the bitshuffle intrinsics to say epi64 for 128 and 256 bit versions.
llvm-svn: 333497
Summary:
We only need to use 512-bit vectors all the way through for v8i64 reductions, since those max instructions are new in avx512f and only available in 512-bit form until SKX.
For v16i32 and floating point we have legacy 128/256 bit instructions we can use.
I've tried to use other intrinsics to reduce the verbosity of the code and avoid having to mention all the shuffles. I've also removed all the -1 shuffle indices so the output sequence is fully specified and not left to backend optimization.
Reviewers: RKSimon, spatel, GBuella
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D47401
llvm-svn: 333347
An intrinsic for an old instruction, as described in the Intel SDM.
Reviewers: craig.topper, rnk
Reviewed By: craig.topper, rnk
Differential Revision: https://reviews.llvm.org/D47142
llvm-svn: 333256
Summary:
Since clang r332929 these two headers throw errors when included from somewhere else than their wrapper header. It seems marking them as textual is the best way to fix the builds.
Fixes this new module build error:
While building module '_Builtin_intrinsics' imported from ...:
In file included from <module-includes>:2:
In file included from lib/clang/7.0.0/include/immintrin.h:54:
In file included from lib/clang/7.0.0/include/wmmintrin.h:29:
lib/clang/7.0.0/include/__wmmintrin_aes.h:25:2: error: "Never use <__wmmintrin_aes.h> directly; include <wmmintrin.h> instead."
#error "Never use <__wmmintrin_aes.h> directly; include <wmmintrin.h> instead."
Reviewers: rsmith, v.g.vassilev, craig.topper
Reviewed By: craig.topper
Subscribers: craig.topper, cfe-commits
Differential Revision: https://reviews.llvm.org/D47277
llvm-svn: 333123
This matches the Intel documentation which shows them available by importing immintrin.h. x86intrin.h also includes immintrin.h so anyone including x86intrin.h will still get them.
This is different from gcc, but I don't think we were a perfect match there already. I'm unclear on gcc's policy about which header they choose to add things to.
Differential Revision: https://reviews.llvm.org/D47182
llvm-svn: 333110
(1) I added some \see cross-references to a few select intrinsics that are related (and have the same or similar semantics).
(2) pmmintrin.h, smmintrin.h, and xmmintrin.h have a few minor formatting changes that make the rendering of our intrinsics documentation better.
llvm-svn: 333065
Previously we negated the whole vector after splatting infinity. But it's better to negate the infinity before splatting. This generates IR with the negate already folded into the infinity constant.
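Generic sketch of the difference (not tied to a specific intrinsic):
```c
#include <immintrin.h>

__attribute__((target("avx512f"))) static __m512 splat_then_negate(void) {
  return -_mm512_set1_ps(__builtin_inff());  /* negate applied to the splatted vector */
}

__attribute__((target("avx512f"))) static __m512 negate_then_splat(void) {
  return _mm512_set1_ps(-__builtin_inff());  /* constant is already -inf before the splat */
}
```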
llvm-svn: 333062
These were included in emmintrin.h to match Intel Intrinsics Guide documentation. But this is because icc is capable of emulating them on targets that don't support F16C using library calls. Clang/LLVM doesn't have this emulation support. So it makes more sense to include them in immintrin.h instead.
I've left a comment behind to hopefully deter someone from trying to move them again in the future.
llvm-svn: 333033
Intel documents the 128-bit versions as being in emmintrin.h and the 256-bit version as being in immintrin.h.
This patch makes a new __emmtrin_f16c.h to hold the 128-bit versions to be included from emmintrin.h. And it makes the existing f16cintrin.h contain the 256-bit versions, included from immintrin.h, with an error if it's included directly.
Differential Revision: https://reviews.llvm.org/D47174
llvm-svn: 333014
I believe this is safe assuming the default FP environment. The conversion might be inexact, but it can never overflow the FP type so this shouldn't be undefined behavior for the uitofp/sitofp instructions.
We already do something similar for scalar conversions.
Differential Revision: https://reviews.llvm.org/D46863
llvm-svn: 332882
Summary:
These look to be a couple things that weren't removed when we switched to target attribute.
The popcnt makes including just smmintrin.h also include popcntintrin.h. The popcnt file itself already contains target attributes.
The prefetch ones are just wrappers around __builtin_prefetch which we have graceful fallbacks for in the backend if the exact instruction isn't available. So there's no reason to hide them. And it makes them available in functions that have the right target attribute but not a -march command line flag.
Reviewers: echristo, RKSimon, spatel, DavidKreitzer
Reviewed By: echristo
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D47029
llvm-svn: 332830
As long as the destination type is a 256- or 128-bit vector with the same number of elements, we can use __builtin_convertvector to directly generate a trunc IR instruction, which will be handled natively by the backend.
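Sketch of the header shape this enables (attributes omitted):
```c
#include <immintrin.h>

/* __builtin_convertvector from <8 x i64> to <8 x i32> emits a trunc. */
__attribute__((target("avx512f"))) static __m256i cvtepi64_epi32_sketch(__m512i a) {
  return (__m256i)__builtin_convertvector((__v8di)a, __v8si);
}
```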
Differential Revision: https://reviews.llvm.org/D46742
llvm-svn: 332266
If we're using the default rounding mode we can let __builtin_convertvector generate an fpext. This matches the 128- and 256-bit versions.
If we're using the version that takes an explicit rounding mode argument we would need to look at the immediate to see if it's CUR_DIRECTION.
llvm-svn: 332210
We can use direct C code for these that will use uitofp and insertelement instructions.
For the versions that take an explicit rounding mode we can't do this.
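Sketch of the kind of header code this allows (attributes omitted, function name illustrative):
```c
#include <immintrin.h>

/* The unsigned-to-float conversion becomes uitofp and the element assignment
   becomes an insertelement. */
static inline __m128 cvtu32_ss_sketch(__m128 a, unsigned int b) {
  a[0] = b;
  return a;
}
```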
llvm-svn: 332203
This is unnecessary for AVX512VL supporting CPUs like SKX. We can just emit a 128-bit masked load/store here no matter what. The backend will widen it to 512-bits on KNL CPUs.
Fixes the frontend portion of PR37386. Need to fix the backend to optimize the new sequences well.
llvm-svn: 331958
This is similar to the LLVM change https://reviews.llvm.org/D46290.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers into our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
Differential Revision: https://reviews.llvm.org/D46320
llvm-svn: 331834
Without this we throw an error on the header file instead of the user code when the right features aren't enabled in clang.
Rename the other DEFAULT_FN_ATTRS defines to _Z for 512-bit since I used _Y for this case.
llvm-svn: 331682
It reverts r331378 as it caused test failures
ThreadSanitizer-x86_64 :: Darwin/gcd-groups-destructor.mm
ThreadSanitizer-x86_64 :: Darwin/libcxx-shared-ptr-stress.mm
ThreadSanitizer-x86_64 :: Darwin/xpc-race.mm
Only the clang part of the change is reverted; the libc++ part remains as-is because it
emits the error less aggressively.
llvm-svn: 331392
Atomics in C and C++ are incompatible at the moment and mixing the
headers can result in confusing error messages.
Emit an error explicitly describing the incompatibility. Introduce
the macro `__ALLOW_STDC_ATOMICS_IN_CXX__` that allows choosing in C++
between C atomics and C++ atomics.
rdar://problem/27435938
Reviewers: rsmith, EricWF, mclow.lists
Reviewed By: mclow.lists
Subscribers: jkorous-apple, christof, bumblebritches57, JonChesterfield, smeenai, cfe-commits
Differential Revision: https://reviews.llvm.org/D45470
llvm-svn: 331378
On AVX512F targets we'll produce an emulated sequence using 3 pmuludqs with shifts and adds. On AVX512DQ we'll use vpmulld.
Fixes PR37140.
llvm-svn: 330923
The unmasked versions already didn't have this restriction. I don't think gcc or icc limit these to 64-bit mode so we shouldn't either.
llvm-svn: 330681
A previously missing intrinsic for an old instruction.
Reviewers: craig.topper, echristo
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D45311
llvm-svn: 329937
The WBNOINVD instruction writes back all modified
cache lines in the processor’s internal cache to main memory
but does not invalidate (flush) the internal caches.
Reviewers: craig.topper, zvi, ashlykov
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D43817
llvm-svn: 329848
Found via codespell -q 3 -I ../clang-whitelist.txt
Where whitelist consists of:
archtype
cas
classs
checkk
compres
definit
frome
iff
inteval
ith
lod
methode
nd
optin
ot
pres
statics
te
thru
Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)
Differential revision: https://reviews.llvm.org/D44188
llvm-svn: 329399
- Fix instruction mappings/listings for various intrinsics
This patch was made by Craig Flores
Differential Revision: https://reviews.llvm.org/D41517
llvm-svn: 327090
Summary:
The _get_ssp intrinsic can be used to retrieve the
shadow stack pointer, independent of the current arch -- in
contrast with the rdsspd and the rdsspq intrinsics.
Also, this intrinsic returns zero on CPUs which don't
support CET. The rdssp[d|q] instruction is decoded as nop,
essentially just returning the input operand, which is zero.
Example result of compilation:
```
xorl %eax, %eax
movl %eax, %ecx
rdsspq %rcx # NOP when CET is not supported
movq %rcx, %rax # return zero
```
Reviewers: craig.topper
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D43814
llvm-svn: 326689
Initial commit missed sincos(float), llabs() and a few atomics that we
used to pull in from device_functions.hpp, which we no longer include.
Differential Revision: https://reviews.llvm.org/D43602
llvm-svn: 325814
Clang can use CUDA-9.1 now, though new APIs are not implemented yet.
The major change is that headers in CUDA-9.1 went through substantial
changes that started in CUDA-9.0 which required substantial changes
in the cuda compatibility headers provided by clang.
There are two major issues:
* CUDA SDK no longer provides declarations for libdevice functions.
* A lot of device-side functions have become nvcc's builtins and
CUDA headers no longer contain their implementations.
This patch changes the way CUDA headers are handled if we compile
with CUDA 9.x. Both 9.0 and 9.1 are affected.
* Clang provides its own declarations of libdevice functions.
* For CUDA-9.x clang now provides implementation of device-side
'standard library' functions using libdevice.
This patch should not affect compilation with CUDA-8. There may be
some observable differences for CUDA-9.0, though they are not expected
to affect functionality.
Tested: CUDA test-suite tests for all supported combinations of:
CUDA: 7.0,7.5,8.0,9.0,9.1
GPU: sm_20, sm_35, sm_60, sm_70
Differential Revision: https://reviews.llvm.org/D42513
llvm-svn: 323713
- Fix inaccurate instruction listings.
- Fix small issues in _mm_getcsr and _mm_setcsr.
- Fix description of NaN handling in comparison intrinsics.
- Fix inaccurate description of _mm_movemask_pi8.
- Fix inaccurate instruction mappings.
- Fix typos.
- Clarify wording on some descriptions.
- Fix bit ranges in return value.
- Fix typo in the _mm_move_ss intrinsic's instruction since it operates on single-precision values, not double.
- This patch was made by Craig Flores
Differential Revision: https://reviews.llvm.org/D41523
llvm-svn: 322778
Summary:
kunpck intrinsics were removed in favor of native IR a few months ago. The implementation lowers them by operating on the integer types passed to the intrinsic and then just shifting, masking, and oring them together. A special X86 DAG combine was added to recognize this pattern and turn it into a concat_vector operation.
I think it makes more sense to keep the IR implementation closer to vector operations on vXi1, given that we expect these builtins to be used around other builtins that operate on k-registers, which we try to represent in IR with vXi1. InstCombine should be able to get rid of the bitcasts between integers and vXi1, leaving only the vector operations.
Reviewers: RKSimon, spatel, zvi, jina.nahias
Reviewed By: RKSimon
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D42016
llvm-svn: 322461
- Fix incorrect wording in various intrinsic descriptions. Previously the descriptions used "low-order" and "high-order" when the intended meaning was "even-indexed" and "odd-indexed".
- Fix a few typos and errors found during review.
- Restore new line endings.
This patch was made by Craig Flores
llvm-svn: 322027
- Fix formatting issue due to hyphenated terms at line breaks.
- Fix typo
This patch was made by Craig Flores
Differential Revision: https://reviews.llvm.org/D41520
llvm-svn: 321671
- Fix incorrect wording in various intrinsic descriptions. Previously the descriptions used "low-order" and "high-order" when the intended meaning was "even-indexed" and "odd-indexed".
This patch was made by Craig Flores
Differential Revision: https://reviews.llvm.org/D41518
llvm-svn: 321670
- Fixed inaccurate instruction mappings for various intrinsics.
- Fixed description of NaN handling in comparison intrinsics.
- Unify description of _mm_store_pd1 to match _mm_store1_pd.
- Fix incorrect wording in various intrinsic descriptions. Previously the descriptions used "low-order" and "high-order" when the intended meaning was "even-indexed" and "odd-indexed".
- Fix typos.
- Add missing italics command (\a) for params and fixed some parameter spellings.
This patch was made by Craig Flores
Differential Revision: https://reviews.llvm.org/D41516
llvm-svn: 321669
added vbmi2 feature recognition
added intrinsics support for vbmi2 instructions
_mm[128,256,512]_mask[z]_compress_epi[16,32]
_mm[128,256,512]_mask_compressstoreu_epi[16,32]
_mm[128,256,512]_mask[z]_expand_epi[16,32]
_mm[128,256,512]_mask[z]_expandloadu_epi[16,32]
_mm[128,256,512]_mask[z]_sh[l,r]di_epi[16,32,64]
_mm[128,256,512]_mask_sh[l,r]dv_epi[16,32,64]
matching a similar work on the backend (D40206)
Differential Revision: https://reviews.llvm.org/D41557
llvm-svn: 321487
added vpclmulqdq feature recognition
added intrinsics support for vpclmulqdq instructions
_mm256_clmulepi64_epi128
_mm512_clmulepi64_epi128
matching a similar work on the backend (D40101)
Differential Revision: https://reviews.llvm.org/D41573
llvm-svn: 321480
added vaes feature recognition
added intrinsics support for vaes instructions, matching a similar work on the backend (D40078)
_mm256_aesenc_epi128
_mm512_aesenc_epi128
_mm256_aesenclast_epi128
_mm512_aesenclast_epi128
_mm256_aesdec_epi128
_mm512_aesdec_epi128
_mm256_aesdeclast_epi128
_mm512_aesdeclast_epi128
llvm-svn: 321474
* __shfl_{up,down}* uses unsigned int for the third parameter.
* added [unsigned] long overloads for non-sync shuffles.
Differential Revision: https://reviews.llvm.org/D41521
llvm-svn: 321326
This patch, together with a matching llvm patch (https://reviews.llvm.org/D39720), implements the lowering of X86 kunpack intrinsics to IR.
Differential Revision: https://reviews.llvm.org/D39719
Change-Id: Id5d3cb394ad33b98be79a6783d1d15569e2b798d
llvm-svn: 319777
Use this function to create the install targets rather than doing so
manually, which gains us the `-stripped` install targets to perform
stripped installations.
Differential Revision: https://reviews.llvm.org/D40675
llvm-svn: 319489
CUDA-9 headers check for a specific libc++ version and ifdef out
some of the definitions we need if _LIBCPP_VERSION >= 3800.
Differential Revision: https://reviews.llvm.org/D40198
llvm-svn: 319485
Shadow stack solution introduces a new stack for return addresses only.
The stack has a Shadow Stack Pointer (SSP) that points to the last address to which we expect to return.
If we return to a different address an exception is triggered.
This patch includes shadow stack intrinsics as well as the corresponding CET header.
It includes CET clang flags for shadow stack and Indirect Branch Tracking.
For more information, please see the following:
https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf
Differential Revision: https://reviews.llvm.org/D40224
Change-Id: I79ad0925a028bbc94c8ecad75f6daa2f214171f1
llvm-svn: 318995
fma4 instructions zero the upper bits of the xmm register. fma3 instructions leave the bits unmodified. This requires separate builtins for the different semantics.
While we're cleaning up the scalar builtins this also removes the fma3 fmsub/fnmadd/fnmsub builtins by using negates in the header file.
llvm-svn: 318985
Summary:
__builtin_nexttoward lowers to a libcall, e.g. nexttowardf(), that CUDA
does not have.
Rather than try to implement it, we simply remove these functions --
nvcc doesn't support them either, and nextafter, which does work, does
essentially the same thing on GPUs, because GPUs don't have long double.
Reviewers: tra
Subscribers: cfe-commits, sanjoy
Differential Revision: https://reviews.llvm.org/D40152
llvm-svn: 318494
Changed the intrinsics header files to lower the test and testn intrinsics to IR code.
Removed the test and testn builtins from clang
Differential Revision: https://reviews.llvm.org/D38737
llvm-svn: 318035
This patch, together with a matching llvm patch (https://reviews.llvm.org/D38671), implements the lowering of X86 shuffle i/f intrinsics to IR.
Differential Revision: https://reviews.llvm.org/D38672
Change-Id: I9b3c2f2b34323bd9ccb21d0c1832f848b88ec047
llvm-svn: 318025
Summary:
How embarrassing.
This is tested in the test-suite -- fix to come there in a separate
patch.
Reviewers: tra
Subscribers: sanjoy, cfe-commits
Differential Revision: https://reviews.llvm.org/D39817
llvm-svn: 317961
The backend should be able to combine the negates to create fmsub, fnmadd, and fnmsub. faddsub converting to fsubadd still needs work I think, but should be very doable.
This matches what we already do for the masked builtins.
This only covers the packed builtins. Scalar builtins will be done after FMA4 is fixed.
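The header-level idea, sketched for one packed flavor (attributes simplified):
```c
#include <immintrin.h>

/* fmsub(a, b, c) == fmadd(a, b, -c), so the header negates the third operand
   and relies on the backend to form fmsub from the fneg. */
__attribute__((target("fma"))) static __m128 fmsub_ps_sketch(__m128 a, __m128 b, __m128 c) {
  return (__m128)__builtin_ia32_vfmaddps((__v4sf)a, (__v4sf)b, -(__v4sf)c);
}
```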
llvm-svn: 317873
I think we need to use different builtins for the FMA4 instructions since those instructions zero the upper bits and FMA3 instructions pass the bits through.
So this moves the existing builtins to be the FMA3 versions. New versions will be added for FMA4.
llvm-svn: 317766
Summary: According to Intel docs this should take void const *. We had char*. The lack of const is the main issue.
Reviewers: RKSimon, zvi, igorb
Reviewed By: igorb
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38782
llvm-svn: 315470
The __builtin_ia32_pbroadcastq512_mem_mask we were previously trying to use in 32-bit mode is not implemented in the x86 backend and causes isel to fail in release builds. In debug builds it fails even earlier during legalization with an llvm_unreachable.
While there, add the missing test case for this intrinsic for 64-bit mode.
This fixes PR34631. D37668 should be able to recover this for 32-bit mode soon. But I wanted to fix the crash ahead of that.
llvm-svn: 313392
In CUDA-9 some of the device-side math functions that we need are conditionally
defined within '#if _GLIBCXX_MATH_H'. We need to temporarily undo the guard
around inclusion of math_functions.hpp.
Differential Revision: https://reviews.llvm.org/D37906
llvm-svn: 313369
For now CUDA-9 is not included in the list of CUDA versions clang
searches for, so the path to CUDA-9 must be explicitly passed
via --cuda-path=.
On LLVM side NVPTX added sm_70 GPU type which bumps required
PTX version to 6.0, but otherwise is equivalent to sm_62 at the moment.
Differential Revision: https://reviews.llvm.org/D37576
llvm-svn: 312734
Summary:
Tests have to live in the test-suite, and so will come in a separate
patch.
Fixes PR34360.
Reviewers: tra
Subscribers: llvm-commits, sanjoy
Differential Revision: https://reviews.llvm.org/D37539
llvm-svn: 312681
Based off the Intel Intrinsics guide, we should expect a void const* argument.
Prevents 'passing 'const void *' to parameter of type 'void *' discards qualifiers' warnings.
Differential Revision: https://reviews.llvm.org/D37449
llvm-svn: 312523
This patch implements the broadcastf32x2/broadcasti32x2 intrinsics using __builtin_shufflevector.
Differential Revision: https://reviews.llvm.org/D37287
llvm-svn: 312135
GCC will interpret `__attribute__((__aligned__))` as 8-byte alignment on
ARM, but clang will not. Explicitly specify the alignment. This
mirrors the declaration in libunwind.
llvm-svn: 311576
The C++ ABI requires that the exception object (which under AEABI is the
`_Unwind_Control_Block`) is double-word aligned. The attribute was
applied to the `_Unwind_Exception` type, but not the
`_Unwind_Control_Block`. This should fix the libunwind test for the
alignment of the exception type.
llvm-svn: 311563
OpenCL spec v2.0 s6.13.6:
gentype select (gentype a, gentype b, igentype c)
gentype select (gentype a, gentype b, ugentype c)
igentype and ugentype must have the same number
of elements and bits as gentype.
Differential Revision: https://reviews.llvm.org/D36259
llvm-svn: 310160
OpenCL 2.0 atomic builtin functions have a scope argument which is ideally
represented as synchronization scope argument in LLVM atomic instructions.
Clang supports translating Clang atomic builtin functions to LLVM atomic
instructions. However it currently does not support synchronization scope
of LLVM atomic instructions. Without this, users have to use LLVM assembly
code to implement OpenCL atomic builtin functions.
This patch adds OpenCL 2.0 atomic builtin functions as Clang builtin
functions, which supports generating LLVM atomic instructions with
synchronization scope operand.
Currently only constant memory scope argument is supported. Support of
non-constant memory scope argument will be added later.
Differential Revision: https://reviews.llvm.org/D28691
llvm-svn: 310082
Clang specifies a max type alignment of 16 bytes on darwin targets (annoyingly in the driver not via cc1), meaning that the builtin nontemporal stores don't correctly align the loads/stores to 32 or 64 bytes when required, resulting in lowering to temporal unaligned loads/stores.
This patch casts the vectors to explicitly aligned types prior to the load/store to ensure that the required alignment is respected.
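Roughly, the fix stamps the required alignment onto the pointee type before the store (typedef name illustrative):
```c
#include <immintrin.h>

typedef __v4df __v4df_aligned __attribute__((aligned(32)));

__attribute__((target("avx"))) static void stream_pd_sketch(double *p, __m256d a) {
  /* Casting through the explicitly aligned type keeps the nontemporal store
     32-byte aligned even when the target caps max type alignment at 16 bytes. */
  __builtin_nontemporal_store((__v4df_aligned)a, (__v4df_aligned *)p);
}
```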
Differential Revision: https://reviews.llvm.org/D35996
llvm-svn: 309488
The EHABI definition was being inlined into the users even when EHABI
was not in use. Adjust the condition to ensure that the right version
is defined.
llvm-svn: 309327
Ensure that we define the `_Unwind_Control_Block` structure used on ARM
EHABI targets. This is needed for building libc++abi with the unwind.h
from the resource dir. A minor fallout of this is that we needed to
create a typedef for _Unwind_Exception to work across ARM EHABI and
non-EHABI targets. The structure definitions here are based originally
on the documentation from ARM under the "Exception Handling ABI for the
ARM® Architecture" Section 7.2. They are then adjusted to more closely
reflect the definition in libunwind from LLVM. Those changes are
compatible in layout but permit easier use in libc++abi and help
maintain compatibility between libunwind and the compiler provided
definition.
llvm-svn: 309226
This patch updates the vecintrin.h header file to provide the new
set of high-level vector built-in functions. This matches the
updated definition implemented by other compilers for the platform,
indicated by the pre-defined macro __VEC__ == 10302.
Note that some of the new functions (notably those involving the
vector float data type) are only available with -march=z14
(indicated by __ARCH__ == 12).
llvm-svn: 308199
Corrected several typos and incorrect parameter descriptions that Sony's
technical writer found during review.
I got an OK from Eric Christopher to commit doxygen comments without prior code
review upstream.
llvm-svn: 307838
Summary:
The _bit_scan_forward and _bit_scan_reverse intrinsics were accidentally
masked under the preprocessor checks that prune intrinsics definitions for the
benefit of faster compile-time on Windows. This patch moves the
definitions out of that region.
Fixes pr33722
Reviewers: craig.topper, aaboud, thakis
Reviewed By: craig.topper
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D35184
llvm-svn: 307524
The second argument must be a constant, otherwise instruction selection
will fail. always_inline is not enough for isel to always fold
everything away at -O0.
Sadly the overloading turned this into a big macro mess. Fixes PR33212.
llvm-svn: 304205
AVX512_VPOPCNTDQ is a new feature set that was published by Intel.
The patch represents the Clang side of the addition of six intrinsics for two new machine instructions (vpopcntd and vpopcntq).
It also includes the addition of the new feature set.
Differential Revision: https://reviews.llvm.org/D33170
llvm-svn: 303857
(2) Removed \c commands that are no longer necessary, since the same effect is achieved by the <c> ... </c> sequence.
I got an OK from Eric Christopher to commit doxygen comments without prior code
review upstream.
llvm-svn: 303228
Separated very long brief sections into two sections.
I got an OK from Eric Christopher to commit doxygen comments without prior code
review upstream.
llvm-svn: 303031
This patch adds support for the LightWeight Profiling (LWP) instructions which are available on all AMD Bulldozer class CPUs (bdver1 to bdver4).
Differential Revision: https://reviews.llvm.org/D32770
llvm-svn: 302418
Implemented the remaining integer data processing intrinsics from
the ARM ACLE v2.1 spec, such as parallel arithmetic and DSP style
multiplications.
Differential Revision: https://reviews.llvm.org/D32282
llvm-svn: 302131
- I removed doxygen comments for the intrinsics that "alias" the other existing documented intrinsics and that only slightly differ in spelling (single underscores vs. double underscores).
#define _tzcnt_u16(a) (__tzcnt_u16((a)))
It will be very hard to keep the documentation for these "aliases" in sync with the documentation for the intrinsics they alias to. Out of sync documentation will be more confusing than no documentation.
I got an OK from Eric Christopher to commit doxygen comments without prior code
review upstream.
llvm-svn: 301652
size_t is usually defined as unsigned long, but on 64-bit platforms,
stdint.h currently defines SIZE_MAX using "ull" (unsigned long long).
Although this is the same width, it doesn't necessarily have the same
alignment or calling convention. It also triggers printf warnings when
using the format flag "%zu" to print SIZE_MAX.
This changes SIZE_MAX to reuse the compiler-provided __SIZE_MAX__, and
provides similar fixes for the other integers:
- INTPTR_MIN
- INTPTR_MAX
- UINTPTR_MAX
- PTRDIFF_MIN
- PTRDIFF_MAX
- INTMAX_MIN
- INTMAX_MAX
- UINTMAX_MAX
- INTMAX_C()
- UINTMAX_C()
... and fixes the typedefs for intptr_t and uintptr_t to use
__INTPTR_TYPE__ and __UINTPTR_TYPE__ instead of int32_t, effectively
reverting r89224, r89226, and r89237 (r89221 already having been
effectively reverted).
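For an LP64 target the SIZE_MAX part of the change amounts to roughly this (macro names here are only for contrast):
```c
/* Before: a hard-coded suffix that need not match the target's size_t. */
#define SIZE_MAX_OLD 18446744073709551615ULL

/* After: reuse the compiler-provided constant, which always has the exact
   type and width of size_t on the target. */
#define SIZE_MAX_NEW __SIZE_MAX__
```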
We can probably also kill __INTPTR_WIDTH__, __INTMAX_WIDTH__, and
__UINTMAX_WIDTH__ in a follow-up, but I was hesitant to delete all the
per-target CHECK lines in this commit since those might serve their own
purpose.
rdar://problem/11811377
llvm-svn: 301593
Summary: This patch makes the header `stdatomic.h` work when `-fms-compatibility` is specified.
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D32322
llvm-svn: 300919
- To be consistent with the rest of the intrinsics headers, I removed the tags <i> .. </i> for marking instruction names in italics in smmintrin.h.
- Formatting changes to fit into 80 characters.
I got an OK from Eric Christopher to commit doxygen comments without prior code
review upstream.
llvm-svn: 300578
MOVNTDQA non-temporal aligned vector loads can be correctly represented using generic builtin loads, allowing us to remove the existing x86 intrinsics.
LLVM companion patch: D31767.
Differential Revision: https://reviews.llvm.org/D31766
llvm-svn: 300326
It's used by MS headers in VS 2017 without including intrin.h, so we
can't implement it in the header anymore.
Differential Revision: https://reviews.llvm.org/D31736
llvm-svn: 299782
It seems MS headers have started using __readgsqword, and since it's
used in a header that doesn't include intrin.h, we can't implement it as
an inline function anymore.
That was already the case for __readfsdword, which Saleem added support
for in r220859. This patch reuses that codegen to implement all of
__read[fg]s{byte,word,dword,qword}.
Differential Revision: https://reviews.llvm.org/D31248
llvm-svn: 298538
I made some small changes in smmintrin.h and emmintrin.h intrinsics.
- changed some regular comments '//' into doxygen-style comments '///' where necessary
- removed some trailing spaces in doxygen comments.
I got an OK from Eric Christopher to commit doxygen comments without prior code
review upstream.
llvm-svn: 298371
- Fix a variable naming mismatch
- Fix gcc extension pointer arithmetic on void to cast to char *.
- Test that the header (and htmintrin.h) parse.
llvm-svn: 298318
Reapply r289181 but rename the include guard to avoid
conflict with the one from Darwin.
Allow darwin to provide additional definitions and implementation
specifc values for tgmath.h on Apple platforms.
rdar://problem/19019845
llvm-svn: 298013
The DAZ feature introduces denormals-are-zero support for x86.
Currently the definitions are located under the SSE3 header, however there are some SSE2 targets that support the feature as well.
Differential Revision: https://reviews.llvm.org/D30194
llvm-svn: 296296
Note: The doxygen comments are automatically generated based on Sony's intrinsics document.
I got an OK from Eric Christopher to commit doxygen comments without prior code
review upstream.
llvm-svn: 295404
Removed ndrange_t as a Clang builtin type and added it
as a struct type in the OpenCL header.
Use type name to do the Sema checking in enqueue_kernel
and modify IR generation accordingly.
Review: D28058
Patch by Dmitry Borisenkov!
llvm-svn: 295311
__fastfail terminates the process immediately with a special system
call. It does not run any process shutdown code or exception recovery
logic.
Fixes PR31854
llvm-svn: 294606
1. Adds the command line flag for clzero.
2. Includes the clzero flag under znver1.
3. Defines the macro for clzero.
4. Adds a new file which has the intrinsic definition for clzero instruction.
Patch by Ganesh Gopalasubramanian with some additional tests from me.
Differential revision: https://reviews.llvm.org/D29386
llvm-svn: 294559
Added doxygen comments to prfchwintrin.h's intrinsics.
Note: The doxygen comments are automatically generated based on Sony's intrinsics document.
I got an OK from Eric Christopher to commit doxygen comments without prior code
review upstream.
llvm-svn: 293745
Prior to OpenCL 2.0, image3d_t can only be used with the write_only
access qualifier when the cl_khr_3d_image_writes extension is enabled,
see e.g. OpenCL 1.1 s6.8b.
Require the extension for write_only image3d_t types and guard uses of
write_only image3d_t in the OpenCL header.
Patch by Sven van Haastregt!
Review: https://reviews.llvm.org/D28860
llvm-svn: 293050
For a << b (as the original vec_sl does), if b >= sizeof(a) * 8, the
behavior is undefined. However, Power instructions do define the
behavior, which is equivalent to a << (b % (sizeof(a) * 8)).
This patch changes altivec.h to use a << (b % (sizeof(a) * 8)), to
ensure the consistent semantic of the instructions. Then it combines
the generated multiple instructions back to a single shift.
This patch handles left shift only. Right shift, on the other hand, is
more complicated, considering arithmetic/logical right shift.
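In header terms, one overload of the change looks roughly like this (the real vec_sl covers all element types):
```c
#include <altivec.h>

static inline vector unsigned int
vec_sl_sketch(vector unsigned int a, vector unsigned int b) {
  /* The modulo matches the hardware semantics and avoids UB for oversized
     shift amounts; the backend folds it back into a single vector shift. */
  return a << (b % (vector unsigned int)(sizeof(unsigned int) * __CHAR_BIT__));
}
```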
Differential Revision: https://reviews.llvm.org/D28037
llvm-svn: 292659
Added doxygen comments for the newly added intrinsics in avxintrin.h, namely _mm256_cvtsd_f64, _mm256_cvtsi256_si32 and _mm256_cvtss_f32
Added doxygen comments for the new intrinsics in emmintrin.h, namely _mm_loadu_si64 and _mm_load_sd.
Explicit parameter names were added for _mm_clflush and _mm_setcsr
The rest of the changes are editorial, removing trailing spaces at the end of the lines.
Differential Revision: https://reviews.llvm.org/D28503
llvm-svn: 291876
Add builtins for the functions and custom codegen mapping the builtins to their
corresponding intrinsics and handling the endian related swapping.
https://reviews.llvm.org/D26546
llvm-svn: 291179
Summary:
MSVC seems to use "__in" and "__out" for its own purposes, so we have to
pick different names in this macro.
Reviewers: tra
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D28325
llvm-svn: 291138
Summary:
These duplicate declarations cause a problem for CUDA compiles on
Windows. All implicitly-defined functions are host+device, and this
applies to the declarations in Builtin.def. But then when we see the
declarations in intrin.h, they have no attributes, so are host-only
functions. This is an error.
(A better fix might be to make these builtins host-only, but that is a
much bigger change.)
Reviewers: rnk
Subscribers: cfe-commits, echristo
Differential Revision: https://reviews.llvm.org/D28317
llvm-svn: 291128
CUDA-8.0 comes with new headers which nvcc pre-includes via cuda_runtime.h
Clang now makes them available as well.
Differential Revision: https://reviews.llvm.org/D28301
llvm-svn: 290982
Added \n commands to insert line breaks where necessary, since one long line of documentation is nearly unreadable.
Formatted comments to fit into 80 chars.
In some cases added \a command in front of the parameter names to display them in italics.
llvm-svn: 290619
Improved doxygen comments for the following intrinsics headers: __wmmintrin_pclmul.h, bmiintrin.h, emmintrin.h, f16cintrin.h, immintrin.h, mmintrin.h, pmmintrin.h, tmmintrin.h
Added \n commands to insert line breaks where necessary, since one long line of documentation is nearly unreadable.
Formatted comments to fit into 80 chars.
In some cases added \a command in front of the parameter names to display them in italics.
llvm-svn: 290561
According to extended asm syntax, a case where the clobber list includes a variable from the inputs or outputs should be an error - a conflict.
for example:
const long double a = 0.0;
int main()
{
    char b;
    double t1 = a;
    __asm__ ("fucompp": "=a" (b) : "u" (t1), "t" (t1) : "cc", "st", "st(1)");
    return 0;
}
This should be flagged as a conflict: the operand t1 is bound to st (via the "t" constraint), and st appears in the clobber list as well.
The patch fixes it.
Commit on behalf of Ziv Izhar.
Differential Revision: https://reviews.llvm.org/D15075
llvm-svn: 290539
Added \n commands to insert line breaks where necessary to make the documentation more readable.
Formatted comments to fit into 80 chars.
llvm-svn: 290458
Tagged parameter names with \a doxygen command to display parameters in italics.
Added \n commands to insert a line break to make the documentation more readable.
Formatted comments to fit into 80 chars.
llvm-svn: 290455
Added a map to associate types and declarations with extensions.
Refactored existing diagnostic for disabled types associated with extensions and extended it to declarations for generic situation.
Fixed some bugs for types associated with extensions.
Allow users to use pragma to declare types and functions for supported extensions, e.g.
#pragma OPENCL EXTENSION the_new_extension_name : begin
// declare types and functions associated with the extension here
#pragma OPENCL EXTENSION the_new_extension_name : end
Differential Revision: https://reviews.llvm.org/D21698
llvm-svn: 289979
Reverts r289181: it's currently breaking modules using simd.h in
10.12 SDK.
This reverts commit 6e73e3464e96a4e00492c24aa790d36e1adb5702.
llvm-svn: 289487
This will allow the backend to constant fold these to generic shuffle vectors like 128-bit and 256-bit without having to worry about handling masking.
llvm-svn: 289351
This will allow the backend to constant fold these to generic shuffle vectors like 128-bit and 256-bit without having to worry about handling masking.
llvm-svn: 289345
Tagged instruction names with <c> INSTR_NAME </c> to display them in typewriter font.
In the past, \c command was used, unfortunately it applied to only one word.
<c> .. </c> has the same meaning, but applies to all words in between the tags.
llvm-svn: 289249
Allow darwin to provide additional definitions and implementation
specifc values for tgmath.h on Apple platforms.
rdar://problem/19019845
llvm-svn: 289181
Improved doxygen comments for fxsrintrin.h and mmintrin.h intrinsics by tagging parameter names with the \a doxygen command to display parameters in italics.
Formatted comments to fit into 80 chars.
llvm-svn: 289154
Improved doxygen comments for __wmmintrin_pclmul.h and ammintrin.h intrinsics by tagging parameter names with the \a doxygen command to display parameters in italics.
Formatted comments to fit into 80 chars.
llvm-svn: 289083
Documentation for some of avxintrin.h's intrinsics erroneously said that
non-VEX-prefixed instructions could be generated. This was fixed.
I tried several different solutions to achieve pretty printing of unordered lists (nested and non-nested) in param sections in doxygen.
llvm-svn: 287990
(commit again after fixing the buildbot failures)
This adds various overloads of the following builtins to altivec.h:
vec_neg
vec_nabs
vec_adde
vec_addec
vec_sube
vec_subec
vec_subc
Note that for vec_sub builtins on 32 bit integers, the semantics is similar to
what ISA describes for instructions like vsubecuq that work on quadwords: the
first operand is added to the one's complement of the second operand. (As
opposed to two's complement which I expected).
llvm-svn: 287872
(commit again after fixing the buildbot failures)
This adds various overloads of the following builtins to altivec.h:
vec_neg
vec_nabs
vec_adde
vec_addec
vec_sube
vec_subec
vec_subc
Note that for vec_sub builtins on 32 bit integers, the semantics is similar to
what ISA describes for instructions like vsubecuq that work on quadwords: the
first operand is added to the one's complement of the second operand. (As
opposed to two's complement which I expected).
llvm-svn: 287795
This adds various overloads of the following builtins to altivec.h:
vec_neg
vec_nabs
vec_adde
vec_addec
vec_sube
vec_subec
vec_subc
Note that for vec_sub builtins on 32 bit integers, the semantics is similar to
what ISA describes for instructions like vsubecuq that work on quadwords: the
first operand is added to the one's complement of the second operand. (As
opposed to two's complement which I expected).
llvm-svn: 287772
The doxygen comments are automatically generated based on Sony's intrinsics document.
I got an OK from Eric Christopher to commit doxygen comments without prior code
review upstream. This patch was internally reviewed by Charles Li.
llvm-svn: 287483