Commit Graph

8 Commits

Chandler Carruth 4cf5743b77 Move the builtin headers to use the new license file header.
Summary:
These all had somewhat custom file headers with different text from the
ones I searched for previously, and so I missed them. Thanks to Hal and
Kristina and others who prompted me to fix this, and sorry it took so
long.

Reviewers: hfinkel

Subscribers: mcrosier, javed.absar, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D60406

llvm-svn: 357941
2019-04-08 20:51:30 +00:00
Craig Topper 74c10e3236 [Builtins][Attributes][X86] Tag all X86 builtins with their required vector width. Add a min_vector_width function attribute and tag all x86 intrinsics with it
This is part of an ongoing attempt at making 512-bit vectors illegal in the X86 backend type legalizer due to CPU frequency penalties associated with wide vectors on Skylake Server CPUs. We want the loop vectorizer to be able to emit IR containing wide vectors as intermediate operations in vectorized code, and to allow these wide vectors to be legalized to 256 bits by the X86 backend even though we are targeting a CPU that supports 512-bit vectors. This is similar to what happens with an AVX2 CPU: the vectorizer can emit wide vectors and the backend will split them. We want this splitting behavior, but still want to be able to use new Skylake instructions that work on 256-bit vectors and support things like masking and gather/scatter.

Of course, if the user writes explicit vector code in their source, we need to not split those operations, especially if they have used any of the 512-bit vector intrinsics from immintrin.h. And we need to make it so that merely using the intrinsics produces the expected code, in order to be backwards compatible.

To support this goal, this patch adds a new IR function attribute "min-legal-vector-width" that can indicate the need for a minimum vector width to be legal in the backend. We need to ensure this attribute is set to the largest vector width needed by any intrinsics from immintrin.h that the function uses. The inliner will be responsible for merging this attribute when a function is inlined. We may also need a way to limit inlining, but we can discuss that in the future.

To make things more complicated, there are two different ways intrinsics are implemented in immintrin.h: either as an always_inline function containing calls to builtins (which can be target-specific or target-independent) or vector extension code, or as a macro wrapper around a target-specific builtin. I believe I've removed all cases where the macro was around a target-independent builtin.
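For reference, the macro-wrapper style looks roughly like the following sketch, modeled on _mm_shuffle_epi32 from emmintrin.h (the shipped definition may differ in detail):

  /* The immediate must stay a compile-time constant, so the intrinsic
     is a macro around a target-specific builtin, not a real function. */
  #define _mm_shuffle_epi32(a, imm) \
    ((__m128i)__builtin_ia32_pshufd((__v4si)(__m128i)(a), (int)(imm)))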

To support the always_inline function case, this patch adds __attribute__((min_vector_width(128))), which can be used to tag these functions with their vector width. All x86 intrinsic functions that operate on vectors have been tagged with this attribute.
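In the headers, that tagging happens through shared attribute macros. Here is a minimal sketch of the pattern; the macro and function names are illustrative, not the exact ones in the shipped headers:

  #include <immintrin.h> /* for __m128 / __v4sf in this standalone sketch */

  /* Bundle always_inline with the new width tag, in the style of the
     real __DEFAULT_FN_ATTRS-family macros.                             */
  #define __MY_FN_ATTRS128                                               \
    __attribute__((__always_inline__, __nodebug__, __target__("avx"),    \
                   __min_vector_width__(128)))

  static __inline__ __m128 __MY_FN_ATTRS128
  _mm_example_add(__m128 __a, __m128 __b) {
    return (__m128)((__v4sf)__a + (__v4sf)__b); /* vector extension code */
  }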

To support the macro case, all x86-specific builtins have also been tagged with the vector width that they require. Use of any builtin with this property will implicitly increase the min_vector_width of the function that calls it. I've done this as a new property in the attribute string for the builtin, rather than basing it on the type string, so that we can opt into it on a per-builtin basis and avoid any impact to target-independent builtins.
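Concretely, the width rides along in the builtin's attribute string. A hedged sketch of what an entry in clang's x86 builtins .def file looks like with this property (the "V:256:" marker is the new part; the exact signature shown is an assumption):

  /* "V:256:" records the minimum legal vector width implied for any
     function that calls this builtin.                                  */
  TARGET_BUILTIN(__builtin_ia32_addps256, "V8fV8fV8f", "ncV:256:", "avx")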

There will be future work to support vectors passed as function arguments and to support inline assembly, and whatever else we find that isn't covered by this patch.

Special thanks to Chandler who suggested this direction and reviewed a preview version of this patch. And thanks to Eric Christopher who has had many conversations with me about this issue.

Differential Revision: https://reviews.llvm.org/D48617

llvm-svn: 336583
2018-07-09 19:00:16 +00:00
Martin Storsjo cad7a5f1aa [X86] Remove leftover semicolons at end of macros
This was missed in a few places in SVN r333613, causing compilation
errors if these macros are used e.g. as a parameter to a function.
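To illustrate the failure mode with hypothetical macros (the real headers define many macros of this shape):

  #include <immintrin.h>

  #define _BROKEN_ZERO() ((__m128i)(__v2di){0, 0});  /* leftover ';' */
  #define _FIXED_ZERO()  ((__m128i)(__v2di){0, 0})

  extern void use_vec(__m128i __v);

  void demo(void) {
    /* use_vec(_BROKEN_ZERO());  expands to use_vec((...);); -> error */
    use_vec(_FIXED_ZERO());      /* expands cleanly */
  }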

llvm-svn: 333734
2018-06-01 09:40:50 +00:00
Craig Topper c633867944 [X86] Remove __extension__ from macro intrinsics when it's not needed.
I think this is a holdover from when we used to declare variables inside the macros, and since then it's been copied and pasted forward for years every time a new macro intrinsic gets added.
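A hypothetical before/after sketch of the pattern (macro names invented for illustration):

  #include <immintrin.h>

  /* Old style: a GNU statement expression let the macro declare a
     local, and __extension__ silenced the -pedantic warning that
     statement expressions trigger.                                     */
  #define _OLD_SPLAT4(x) __extension__ ({           \
    int __v = (x);                                  \
    (__m128i)(__v4si){__v, __v, __v, __v};          \
  })

  /* New style: with a plain expression body, __extension__ buys nothing. */
  #define _NEW_SPLAT4(x) ((__m128i)(__v4si){(x), (x), (x), (x)})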

Interestingly, this caused some IRGen tests to be slightly more optimized. We now return a zeroinitializer directly instead of going through a store+load.

It also removed a bogus error message on another test.

llvm-svn: 333613
2018-05-31 00:51:20 +00:00
Craig Topper 73d1d403e2 [X86] Use C style comments in intrinsic headers for overall consistency.
Most of the original comments used C style /* */ comments, but some C++ // comments had snuck in over time.

We still need to convert all the doxygen comments, which is much harder to do.

llvm-svn: 333603
2018-05-30 22:33:21 +00:00
Craig Topper dff5b311af [X86] Reduce the number of setzero intrinsics to just the set defined by the Intel Intrinsics Guide.
We had quite a few for different integer element sizes, sometimes with strange target features attached to them.

We only need a single version for each of __m128i, __m256i, and __m512i, with the target feature that first introduced those types.
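The surviving form follows a pattern like this sketch, modeled on _mm_setzero_si128 from emmintrin.h (typedefs inlined so the snippet stands alone):

  typedef long long __m128i __attribute__((__vector_size__(16)));
  typedef long long __v2di  __attribute__((__vector_size__(16)));

  /* One zeroing intrinsic per vector type, guarded only by the feature
     that introduced the type (SSE2 for __m128i).                       */
  static __inline__ __m128i
      __attribute__((__always_inline__, __nodebug__, __target__("sse2")))
  _mm_setzero_si128(void) {
    return __extension__ (__m128i)(__v2di){0LL, 0LL};
  }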

llvm-svn: 333568
2018-05-30 18:02:11 +00:00
Craig Topper 66ef4185bc [X86] Make _mm256_gf2p8mul_epi8 require avx features since it's 256 bits.
Without this, we throw an error in the header file instead of in the user code when the right features aren't enabled in clang.

Rename the other DEFAULT_FN_ATTRS defines to _Z for 512-bit, since I used _Y for this case.
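A sketch of the resulting attribute sets (macro names abbreviated; the exact feature strings here are an assumption):

  /* _Y: 256-bit forms need AVX, so the error surfaces in user code
     rather than in the header. _Z: 512-bit forms.                     */
  #define __MY_FN_ATTRS_Y \
    __attribute__((__always_inline__, __nodebug__, __target__("avx,gfni")))
  #define __MY_FN_ATTRS_Z \
    __attribute__((__always_inline__, __nodebug__, __target__("avx512f,gfni")))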

llvm-svn: 331682
2018-05-07 21:47:11 +00:00
Coby Tayree f4811ebc39 [x86][icelake][gfni]
added GFNI feature recognition
added intrinsics support for the following GFNI instructions:
  _mm_gf2p8affineinv_epi64_epi8
  _mm_mask_gf2p8affineinv_epi64_epi8
  _mm_maskz_gf2p8affineinv_epi64_epi8
  _mm256_gf2p8affineinv_epi64_epi8
  _mm256_mask_gf2p8affineinv_epi64_epi8
  _mm256_maskz_gf2p8affineinv_epi64_epi8
  _mm512_gf2p8affineinv_epi64_epi8
  _mm512_mask_gf2p8affineinv_epi64_epi8
  _mm512_maskz_gf2p8affineinv_epi64_epi8
  _mm_gf2p8affine_epi64_epi8
  _mm_mask_gf2p8affine_epi64_epi8
  _mm_maskz_gf2p8affine_epi64_epi8
  _mm256_gf2p8affine_epi64_epi8
  _mm256_mask_gf2p8affine_epi64_epi8
  _mm256_maskz_gf2p8affine_epi64_epi8
  _mm512_gf2p8affine_epi64_epi8
  _mm512_mask_gf2p8affine_epi64_epi8
  _mm512_maskz_gf2p8affine_epi64_epi8
  _mm_gf2p8mul_epi8
  _mm_mask_gf2p8mul_epi8
  _mm_maskz_gf2p8mul_epi8
  _mm256_gf2p8mul_epi8
  _mm256_mask_gf2p8mul_epi8
  _mm256_maskz_gf2p8mul_epi8
  _mm512_gf2p8mul_epi8
  _mm512_mask_gf2p8mul_epi8
  _mm512_maskz_gf2p8mul_epi8
matching similar work on the backend (D40373)
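A minimal usage sketch of one of these intrinsics (wrapper name is hypothetical; build with GFNI enabled, e.g. clang -mgfni):

  #include <immintrin.h>

  /* Multiply two vectors of bytes in GF(2^8). */
  static __m128i gf_mul_bytes(__m128i __a, __m128i __b) {
    return _mm_gf2p8mul_epi8(__a, __b);
  }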
Differential Revision: https://reviews.llvm.org/D41582

llvm-svn: 321477
2017-12-27 08:37:47 +00:00