Commit Graph

44 Commits

Chandler Carruth 4cf5743b77 Move the builtin headers to use the new license file header.
Summary:
These all had somewhat custom file headers with different text from the
ones I searched for previously, and so I missed them. Thanks to Hal and
Kristina and others who prompted me to fix this, and sorry it took so
long.

Reviewers: hfinkel

Subscribers: mcrosier, javed.absar, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D60406

llvm-svn: 357941
2019-04-08 20:51:30 +00:00
Craig Topper d88f76a891 [X86] Add ktest intrinsics to match gcc and icc.
These aren't documented in the Intel Intrinsics Guide, but are supported by gcc and icc.

Includes these intrinsics:
_ktestc_mask8_u8, _ktestz_mask8_u8, _ktest_mask8_u8
_ktestc_mask16_u8, _ktestz_mask16_u8, _ktest_mask16_u8
_ktestc_mask32_u8, _ktestz_mask32_u8, _ktest_mask32_u8
_ktestc_mask64_u8, _ktestz_mask64_u8, _ktest_mask64_u8
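
A minimal usage sketch, assuming the gcc/icc-compatible signatures (mask inputs, unsigned char results, and an output carry flag on the combined form); masks_overlap is a made-up helper name:

#include <immintrin.h>

/* Compile with -mavx512dq for the 8/16-bit forms (-mavx512bw for 32/64-bit). */
int masks_overlap(__mmask8 a, __mmask8 b) {
  unsigned char cf;                               /* carry result from KTESTB */
  unsigned char zf = _ktest_mask8_u8(a, b, &cf);  /* zf != 0 when (a & b) == 0 */
  (void)cf;
  return !zf;                                     /* nonzero when the masks share a set bit */
}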

llvm-svn: 341265
2018-08-31 22:29:56 +00:00
Craig Topper 42a4d0822e [X86] Add k-mask conversion and load/store instrinsics to match gcc and icc.
This adds:
_cvtmask8_u32, _cvtmask16_u32, _cvtmask32_u32, _cvtmask64_u64
_cvtu32_mask8, _cvtu32_mask16, _cvtu32_mask32, _cvtu64_mask64
_load_mask8, _load_mask16, _load_mask32, _load_mask64
_store_mask8, _store_mask16, _store_mask32, _store_mask64

These are currently missing from the Intel Intrinsics Guide webpage.
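
A minimal round-trip sketch, assuming the gcc/icc-compatible value and pointer forms; spill_mask is a made-up helper name:

#include <immintrin.h>

/* Compile with -mavx512dq for the 8-bit forms. */
void spill_mask(__mmask8 m, __mmask8 *slot) {
  unsigned int bits = _cvtmask8_u32(m);     /* mask register -> scalar integer */
  __mmask8 round_trip = _cvtu32_mask8(bits);
  _store_mask8(slot, round_trip);           /* write the mask back through a pointer */
}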

llvm-svn: 341251
2018-08-31 20:41:06 +00:00
Craig Topper 2aa8efc820 [X86] Add kshift intrinsics to match gcc and icc.
This adds the following intrinsics:
_kshiftli_mask8
_kshiftli_mask16
_kshiftli_mask32
_kshiftli_mask64
_kshiftri_mask8
_kshiftri_mask16
_kshiftri_mask32
_kshiftri_mask64
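
A small usage sketch, assuming the gcc/icc-compatible forms where the shift count must be a compile-time constant; rotate_mask_left_one is a made-up helper name:

#include <immintrin.h>

/* Compile with -mavx512dq for the 8-bit forms. */
__mmask8 rotate_mask_left_one(__mmask8 m) {
  __mmask8 hi = _kshiftli_mask8(m, 1);  /* shift the mask bits left by one */
  __mmask8 lo = _kshiftri_mask8(m, 7);  /* bring the former top bit around */
  return _kor_mask8(hi, lo);
}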

llvm-svn: 341234
2018-08-31 18:22:52 +00:00
Craig Topper a65bf65e0b [X86] Add kadd intrinsics to match gcc and icc.
This adds the following intrinsics:
_kadd_mask64
_kadd_mask32
_kadd_mask16
_kadd_mask8

These are missing from the Intel Intrinsics Guide, but are implemented by both gcc and icc.
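
A one-line usage sketch, assuming the gcc/icc-compatible signature (two mask inputs, a mask result); add_masks is a made-up helper name:

#include <immintrin.h>

/* Compile with -mavx512dq for the 8/16-bit forms (-mavx512bw for 32/64-bit). */
__mmask16 add_masks(__mmask16 a, __mmask16 b) {
  return _kadd_mask16(a, b);  /* KADDW: integer addition of two mask registers */
}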

llvm-svn: 340879
2018-08-28 22:32:14 +00:00
Craig Topper cb5fd56c7f [X86] Add kortest intrinsics for 8, 32, and 64 bit masks. Add new intrinsic names for 16 bit masks.
This matches gcc and icc despite not being documented in the Intel Intrinsics Guide.
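
A one-line usage sketch of the zero-test form, assuming the gcc/icc-compatible signature; no_lane_selected is a made-up helper name:

#include <immintrin.h>

/* Compile with -mavx512dq for the 8-bit forms. */
int no_lane_selected(__mmask8 a, __mmask8 b) {
  return _kortestz_mask8_u8(a, b);  /* nonzero when (a | b) == 0 */
}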

llvm-svn: 340798
2018-08-28 06:28:25 +00:00
Craig Topper c330ca8611 [X86] Add intrinsics for kand/kandn/knot/kor/kxnor/kxor with 8, 32, and 64-bit mask registers.
This also adds a second intrinsic name for the 16-bit mask versions.

These intrinsics match gcc and icc. They just aren't published in the Intel Intrinsics Guide so I only recently found they existed.
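
A small usage sketch combining two of the new logical ops, assuming the gcc/icc-compatible signatures; bits_only_in_a is a made-up helper name:

#include <immintrin.h>

/* Compile with -mavx512dq for the 8-bit forms. */
__mmask8 bits_only_in_a(__mmask8 a, __mmask8 b) {
  return _kand_mask8(a, _knot_mask8(b));  /* a & ~b, i.e. _kandn_mask8(b, a) */
}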

llvm-svn: 340719
2018-08-27 06:20:22 +00:00
Craig Topper e0b5d4cd9d [X86] Rename __DEFAULT_FN_ATTRS to __DEFAULT_FN_ATTRS512 in avx512dqintrin.h and avx512bwintrin.h.
This is preparation for removing min_vector_width 512 from some intrinsics.

llvm-svn: 340717
2018-08-27 06:20:19 +00:00
Eric Christopher 83225b4075 Remove unnecessary trailing ; in macro intrinsic definition.
llvm-svn: 337321
2018-07-17 20:22:17 +00:00
Craig Topper 74c10e3236 [Builtins][Attributes][X86] Tag all X86 builtins with their required vector width. Add a min_vector_width function attribute and tag all x86 intrinsics with it
This is part of an ongoing attempt at making 512-bit vectors illegal in the X86 backend type legalizer due to CPU frequency penalties associated with wide vectors on Skylake Server CPUs. We want the loop vectorizer to be able to emit IR containing wide vectors as intermediate operations in vectorized code and allow these wide vectors to be legalized to 256 bits by the X86 backend even though we are targeting a CPU that supports 512-bit vectors. This is similar to what happens with an AVX2 CPU: the vectorizer can emit wide vectors and the backend will split them. We want this splitting behavior, but still be able to use new Skylake instructions that work on 256-bit vectors and support things like masking and gather/scatter.

Of course if the user uses explicit vector code in their source code we need to not split those operations. Especially if they have used any of the 512-bit vector intrinsics from immintrin.h. And we need to make it so that merely using the intrinsics produces the expected code in order to be backwards compatible.

To support this goal, this patch adds a new IR function attribute "min-legal-vector-width" that can indicate the need for a minimum vector width to be legal in the backend. We need to ensure this attribute is set to the largest vector width needed by any intrinsics from immintrin.h that the function uses. The inliner will be responsible for merging this attribute when a function is inlined. We may also need a way to limit inlining as well, but we can discuss that in the future.

To make things more complicated, there are two different ways intrinsics are implemented in immintrin.h: either as an always_inline function containing calls to builtins (which can be target specific or target independent) or vector extension code, or as a macro wrapper around a target-specific builtin. I believe I've removed all cases where the macro was around a target independent builtin.

To support the always_inline function case, this patch adds __attribute__((min_vector_width(128))), which can be used to tag these functions with their vector width. All x86 intrinsic functions that operate on vectors have been tagged with this attribute.
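
A simplified sketch of how a tagged wrapper ends up looking; the real headers fold the attributes into a __DEFAULT_FN_ATTRS-style macro, and sketch_add_pd is a made-up name for illustration:

#include <immintrin.h>

/* Compiles without special flags; the __target__ attribute carries the feature. */
static __inline__ __m512d
    __attribute__((__always_inline__, __nodebug__,
                   __target__("avx512f"), __min_vector_width__(512)))
sketch_add_pd(__m512d a, __m512d b) {
  return (__m512d)((__v8df)a + (__v8df)b);  /* callers inherit min-legal-vector-width >= 512 */
}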

To support the macro case, all x86 specific builtins have also been tagged with the vector width that they require. Use of any builtin with this property will implicitly increase the min_vector_width of the function that calls it. I've done this as a new property in the attribute string for the builtin rather than basing it on the type string so that we can opt into it on a per builtin basis and avoid any impact to target independent builtins.

There will be future work to support vectors passed as function arguments and supporting inline assembly. And whatever else we can find that isn't covered by this patch.

Special thanks to Chandler who suggested this direction and reviewed a preview version of this patch. And thanks to Eric Christopher who has had many conversations with me about this issue.

Differential Revision: https://reviews.llvm.org/D48617

llvm-svn: 336583
2018-07-09 19:00:16 +00:00
Craig Topper 0029470dde [X86] Correct the width of mask arguments in intrinsic headers and tests.
All of these were found by grepping through IR from the builtin tests for extra trunc and zext/sext instructions that shouldn't have been there.

Some of these were real bugs where we lost bits from the user input:
_mm512_mask_broadcast_f32x8
_mm512_maskz_broadcast_f32x8
_mm512_mask_broadcast_i32x8
_mm512_maskz_broadcast_i32x8
_mm256_mask_cvtusepi16_storeu_epi8

llvm-svn: 336042
2018-06-30 06:05:17 +00:00
Craig Topper 5f50f33806 [X86] Fold masking into subvector extract builtins.
I'm looking into making the select builtins require avx512f, avx512bw, or avx512vl since masking operations generally require those features.

The extract builtins are funny because the 512-bit versions return a 128- or 256-bit vector with masking even when avx512vl is not supported.

llvm-svn: 334330
2018-06-08 21:50:07 +00:00
Craig Topper 3428beeb2f [X86] Add subvector insert and extract builtins to enable target feature checking and immediate range checking.
Test changes are due to differences in how we generate undef elements now. We also changed the types used for extractf128_si256/insertf128_si256 to match the signature of the builtin that previously existed, which this patch resurrects. This also matches gcc.

llvm-svn: 334261
2018-06-08 03:24:47 +00:00
Craig Topper 95ed88a1a9 [X86] Avoid passing _mm_undefined* to __builtin_shufflevector if we are able to pass the first input a second time.
This is more consistent with other usages of __builtin_shufflevector. Later optimization passes or codegen will detect the duplicate vector and replace it with undef. Using _mm_undefined just puts a zeroinitializer that still needs to be optimized out later.
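
A sketch of the pattern: when only lanes of the first operand are needed, the operand is passed twice instead of being paired with _mm_undefined_ps(). dup_odd_lanes is a made-up helper mirroring the _mm_movehdup_ps lane pattern:

#include <immintrin.h>

static __inline__ __m128 dup_odd_lanes(__m128 a) {
  return __builtin_shufflevector(a, a, 1, 1, 3, 3);  /* second operand repeats the first */
}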

llvm-svn: 333944
2018-06-04 19:28:09 +00:00
Craig Topper cbf3929bc9 [X86] Fix some places where macro arguments to intrinsics weren't cast to __m512(i|d)/__m256(i|d)/__m128(i|d) first.
The majority of the cases were correct. This fixes the few that weren't.

I also removed some superfluous parentheses in non-macros that confused my attempts at grepping for missing casts.
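
A hypothetical macro (_sketch_xor_si128 is not a real intrinsic) illustrating the convention being enforced: every macro argument is cast to the concrete __m128i/__m256i/__m512i (or d) type before use, so the operation cannot silently change if a caller passes a different vector type:

#include <immintrin.h>

#define _sketch_xor_si128(A, B) \
  ((__m128i)((__v2du)(__m128i)(A) ^ (__v2du)(__m128i)(B)))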

llvm-svn: 333615
2018-05-31 01:24:40 +00:00
Craig Topper c633867944 [X86] Remove __extension__ from macro intrinsics when it's not needed.
I think this is a holdover from when we used to declare variables inside the macros, and it's been copied and pasted forward for years every time a new macro intrinsic gets added.

Interestingly this caused some tests for IRGen to be slightly more optimized. We now return a zeroinitializer directly instead of going through a store+load.

It also removed a bogus error message on another test.

llvm-svn: 333613
2018-05-31 00:51:20 +00:00
Craig Topper dff5b311af [X86] Reduce the number of setzero intrinsics to just the set defined by the Intel Intrinsics Guide.
We had quite a few for different element sizes of integers, sometimes with strange target features attached to them.

We only need a single version for each of __m128i, __m256i, and __m512i with the target feature that first introduced those types.
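
A sketch of the shape of the single retained definition; the real header spells the attributes through its __DEFAULT_FN_ATTRS macro and uses the real name _mm512_setzero_si512, while sketch_setzero_si512 is a made-up name here:

#include <immintrin.h>

static __inline__ __m512i __attribute__((__always_inline__, __target__("avx512f")))
sketch_setzero_si512(void) {
  return __extension__ (__m512i)(__v8di){ 0, 0, 0, 0, 0, 0, 0, 0 };  /* plain zeroinitializer */
}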

llvm-svn: 333568
2018-05-30 18:02:11 +00:00
Craig Topper 842171de36 [X86] Use __builtin_convertvector to implement some of the packed integer to packed float conversion intrinsics.
I believe this is safe assuming the default FP environment. The conversion might be inexact, but it can never overflow the FP type so this shouldn't be undefined behavior for the uitofp/sitofp instructions.

We already do something similar for scalar conversions.
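
A sketch of the pattern: the element-wise unsigned-int-to-float conversion is expressed with __builtin_convertvector (which lowers to uitofp) instead of a target-specific builtin; sketch_cvtepu32_ps is a made-up name for illustration:

#include <immintrin.h>

static __inline__ __m512 __attribute__((__always_inline__, __target__("avx512f")))
sketch_cvtepu32_ps(__m512i a) {
  return (__m512)__builtin_convertvector((__v16su)a, __v16sf);  /* unsigned int -> float, lane-wise */
}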

Differential Revision: https://reviews.llvm.org/D46863

llvm-svn: 332882
2018-05-21 20:19:17 +00:00
Craig Topper 5ece4cfe1e [X86] Implement broadcastf32x2 and broadcasti32x2 intrinsics using __builtin_shufflevector instead of builtins
This patch implements the broadcastf32x2/broadcasti32x2 intrinsics using __builtin_shufflevector.
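
A sketch of the pattern: the low two float lanes of the __m128 source are repeated across all sixteen lanes of the __m512 result with a generic shuffle; sketch_broadcast_f32x2 is a made-up name for illustration:

#include <immintrin.h>

static __inline__ __m512 __attribute__((__always_inline__, __target__("avx512dq")))
sketch_broadcast_f32x2(__m128 a) {
  return (__m512)__builtin_shufflevector((__v4sf)a, (__v4sf)a,
                                         0, 1, 0, 1, 0, 1, 0, 1,
                                         0, 1, 0, 1, 0, 1, 0, 1);  /* repeat lanes 0 and 1 */
}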

Differential Revision: https://reviews.llvm.org/D37287

llvm-svn: 312135
2017-08-30 16:15:12 +00:00
Craig Topper 367c86ddbe [AVX-512] Replace subvector broadcast builtins with shufflevectors and selects.
Verified that the backend codegens this equally well.

llvm-svn: 292329
2017-01-18 02:17:10 +00:00
Craig Topper 08bf53ffda [AVX-512] Remove masked vector insert builtins and replace with native shufflevectors and selects.
Unfortunately, the backend currently doesn't fold masks into the instructions correctly when they come from these shufflevectors. I'll work on that in a future commit.

llvm-svn: 285667
2016-11-01 05:47:56 +00:00
Craig Topper 93ffabd28d [AVX-512] Remove masked vector extract builtins and replace with native shufflevectors and selects.
Unfortunately, the backend currently doesn't fold masks into the instructions correctly when they come from these shufflevectors. I'll work on that in a future commit.

llvm-svn: 285540
2016-10-31 04:30:56 +00:00
Craig Topper f43e4a1728 [AVX-512] Remove masked integer mullo builtins and replace with native IR.
llvm-svn: 280597
2016-09-03 19:19:49 +00:00
Craig Topper a815f488d5 [AVX-512] Implement masked floating point logical operations with native IR and remove the builtins.
llvm-svn: 280197
2016-08-31 05:38:58 +00:00
Lama Saba 5d01f224cf [X86][AVX512] lower _mm512_andnot_ps/_mm512_andnot_pd to IR
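
A sketch of the lowering: bitwise ops on an integer view of the vector, which the backend sees as plain IR rather than a target-specific intrinsic call; sketch_andnot_ps is a made-up name mirroring _mm512_andnot_ps:

#include <immintrin.h>

static __inline__ __m512 __attribute__((__always_inline__, __target__("avx512dq")))
sketch_andnot_ps(__m512 a, __m512 b) {
  return (__m512)(~(__v16su)a & (__v16su)b);  /* (~a) & b, element-wise */
}
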
Differential Revision: https://reviews.llvm.org/D23262

llvm-svn: 278209
2016-08-10 10:34:45 +00:00
Michael Zuckerman 3f316abdce [Clang][Intrinsics][AVX512][BuiltIn] adding intrinsics for vrangesd instruction set
Differential Revision: http://reviews.llvm.org/D21734

llvm-svn: 274218
2016-06-30 08:05:46 +00:00
Michael Zuckerman c4ae8537cf [Clang][AVX512][BUILTIN]Adding intrinsics for range_round_{sd|ss}
Differential Revision: http://reviews.llvm.org/D21002

llvm-svn: 272123
2016-06-08 08:19:27 +00:00
Craig Topper f3efec65bb [AVX512] Reformat macro intrinsics, ensure arguments have proper typecasts, ensure result is typecasted back to the generic types.
llvm-svn: 272119
2016-06-08 06:08:07 +00:00
Michael Zuckerman 96d0399658 [clang][AVX512][Intrinsics] Adding intrinsics reduce_[round]_{ss|sd} to clang
Differential Revision: http://reviews.llvm.org/D21014

llvm-svn: 272012
2016-06-07 14:00:20 +00:00
Craig Topper 6a77b62640 [X86] Use unsigned types for vector arithmetic in intrinsics to avoid undefined behavior for signed integer overflow.
This is really only needed for addition, subtraction, and multiplication, but I did the bitwise ops too for overall consistency. Clang currently doesn't set NSW for signed vector operations so the undefined behavior shouldn't happen today.
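
A sketch of the pattern: arithmetic is done on the unsigned element type so wraparound is well defined, and the result is cast back to the generic type; sketch_add_epi32 is a made-up name mirroring _mm256_add_epi32:

#include <immintrin.h>

static __inline__ __m256i __attribute__((__always_inline__, __target__("avx2")))
sketch_add_epi32(__m256i a, __m256i b) {
  return (__m256i)((__v8su)a + (__v8su)b);  /* unsigned add: no signed-overflow UB */
}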

llvm-svn: 271778
2016-06-04 05:43:41 +00:00
Craig Topper 406d5cdf7c [AVX512] Remove space in -1 constants. NFC
llvm-svn: 271777
2016-06-04 05:43:37 +00:00
Craig Topper 41ad25a0f9 [AVX512] Add parentheses around macro arguments in AVX512DQ intrinsics. Remove leading underscores from macro argument names. Add explicit typecasts to all macro arguments and return values. And finally reformat after all the adjustments.
This is a mostly mechanical change accomplished with a script. I tried to split out any changes to the typecasts that already existed into separate commits.

llvm-svn: 269740
2016-05-17 04:41:36 +00:00
Craig Topper dca1f230ae [AVX512] Add intrinsics for 512-bit insertf32x8/insertf32x4/inserti32x4.
llvm-svn: 269617
2016-05-15 21:26:20 +00:00
Michael Zuckerman edc82fe3ef [Clang][Builtin][AVX512]Adding intrinsics for vfpclass{sd|ss} vfpclass{pd|ps} instruction set
Differential Revision: http://reviews.llvm.org/D19476

llvm-svn: 267414
2016-04-25 14:48:23 +00:00
Michael Zuckerman ef2979af50 [Clang][AVX512][BUILTIN] Adding intrinsics support for the VEXTRACT{I|F} and VINSERT{I|F} instruction set
Differential Revision: http://reviews.llvm.org/D19097

llvm-svn: 266745
2016-04-19 15:18:23 +00:00
Michael Zuckerman c2b6128a8f [Clang][AVX512][Builtin] Adding support for VBROADCAST and VPBROADCASTB/W/D/Q instruction set
Differential Revision: http://reviews.llvm.org/D19012

llvm-svn: 266195
2016-04-13 12:58:01 +00:00
Michael Zuckerman 074edd7c1e [Clang][AVX512][Builtin] Adding support for the cvt{b|d|q}2mask{128|256|512} and cvtmask2{b|d|q}{128|256|512} intrinsics.
Differential Revision: http://reviews.llvm.org/D19009

llvm-svn: 266188
2016-04-13 10:49:37 +00:00
Asaf Badouh 2718051dd7 re-apply r.247881
fixed the tests.

llvm-svn: 247892
2015-09-17 14:53:37 +00:00
Asaf Badouh 8a61250709 revert r.247881 due to test failures
llvm-svn: 247883
2015-09-17 13:09:33 +00:00
Asaf Badouh a0e5e71ef1 [X86][AVX512DQ] add new intrinsics
convert i64 to FP and vice versa
reduceps & reducepd
rangeps & rangepd
all in their 512-bit versions

Differential Revision: http://reviews.llvm.org/D11716

llvm-svn: 247881
2015-09-17 11:56:04 +00:00
Michael Kuperstein e45af54cdb [X86] Rename the DEFAULT_FN_ATTRS macro to __DEFAULT_FN_ATTRS
llvm-svn: 241065
2015-06-30 13:36:19 +00:00
Eric Christopher 9fc7fb274e Update the intel intrinsic headers to use the target attribute support.
This involved removing the conditional inclusion and replacing them
with target attributes matching the original conditional inclusion
and checks. The testcase update removes the macro checks for each
file and replaces them with usage of the __target__ attribute, e.g.:

int __attribute__((__target__("sse3"))) foo(int a) {
  _mm_mwait(0, 0);
  return 4;
}

This usage does require that the enclosing function have the requisite
__target__ attribute for inlining and code generation, and the same applies
to any macro intrinsic uses in the enclosing function. There's no change
for existing uses of the intrinsic headers.

llvm-svn: 239883
2015-06-17 07:09:32 +00:00
Eric Christopher 4d185168e9 Use a define for per-file function attributes for the Intel intrinsic headers.
This is a precursor to changing them to use the new target attribute
code.
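
A sketch of the kind of per-file define this refers to; the __target__ part arrives with the follow-up commit above, a later commit renames the macro with leading underscores, and sketch_id is a made-up placeholder wrapper:

#define DEFAULT_FN_ATTRS __attribute__((__always_inline__, __nodebug__))

static __inline__ int DEFAULT_FN_ATTRS sketch_id(int a) {
  return a;  /* every wrapper in the file shares one attribute list */
}

#undef DEFAULT_FN_ATTRS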

llvm-svn: 239882
2015-06-17 07:09:20 +00:00
Elena Demikhovsky e7d4c2e229 AVX-512: Added AVX-512 intrinsics and tests
by Asaf Badouh (asaf.badouh@intel.com)

llvm-svn: 236218
2015-04-30 09:24:29 +00:00