Commit Graph

24 Commits

Author SHA1 Message Date
Elad Cohen b107a22afb [X86] Remove the mm_malloc.h include guard hack from the X86 builtins tests
The X86 clang/test/CodeGen/*builtins.c tests define the mm_malloc.h include
guard as a hack to avoid including it (mm_malloc.h requires a hosted
environment, since it expects stdlib.h to be available - which is not the
case in these internal clang codegen tests).
This patch removes this hack and instead passes -ffreestanding to clang cc1.
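
For illustration, a minimal before/after sketch (the guard macro name
__MM_MALLOC_H and the exact RUN flags are assumptions, not quoted from the
patch):

  // Old hack: predefine mm_malloc.h's include guard so it is never included.
  #define __MM_MALLOC_H
  #include <x86intrin.h>

  // New approach: drop the #define and mark the test freestanding instead:
  // RUN: %clang_cc1 -ffreestanding -emit-llvm -o - %s
  #include <x86intrin.h>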

Differential Revision: https://reviews.llvm.org/D24825

llvm-svn: 282581
2016-09-28 11:59:09 +00:00
Craig Topper d0681d528d [X86] Use v2i64 vectors to implement _mm_and/andn/or/xor_pd.
These will be reused when removing some builtins from avx512vldqintrin.h, and using v2i64 here will make the tests for that change show a more meaningful number of vector elements.
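
A minimal sketch of the pattern, with placeholder names (my_mm_and_pd and the
typedefs are illustrative, not the header text): bitcast the doubles to a
v2i64 vector, apply the bitwise operator, and bitcast back.

  typedef double v2f64 __attribute__((__vector_size__(16)));
  typedef long long v2i64 __attribute__((__vector_size__(16)));

  // Bitwise AND of two double vectors via their integer representation.
  static __inline__ v2f64 my_mm_and_pd(v2f64 __a, v2f64 __b) {
    return (v2f64)((v2i64)__a & (v2i64)__b);
  }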

llvm-svn: 280196
2016-08-31 05:38:55 +00:00
Eric Christopher abb2b54ad3 After PR28761, use -Wall with -Werror in builtins tests to identify
possible problems in headers.
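
An illustrative RUN line for such a test (the flags besides -Wall -Werror are
placeholders, not quoted from the patch):

  // RUN: %clang_cc1 -ffreestanding -triple=x86_64-apple-darwin -emit-llvm \
  // RUN:   -Wall -Werror -o - %s | FileCheck %s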

llvm-svn: 277696
2016-08-04 06:02:50 +00:00
Simon Pilgrim e3b9ee0645 [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR.

It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NAN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding (the folder converts them to either zero or UNDEF). This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).

This patch changes both scalar and packed versions back to using x86-specific builtins.

It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.
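
A small example of the hardware behaviour the builtin preserves (function
name is illustrative):

  #include <immintrin.h>

  // With the x86 builtin, a NaN/INF/out-of-range lane converts to exactly
  // 0x80000000 (INT_MIN); generic fptosi IR leaves such conversions
  // undefined, so constant folding could legally turn them into 0 or undef.
  int truncated_lane0(void) {
    __m128 v = _mm_set_ps(0.0f, 0.0f, 0.0f, 1e30f); // lane 0 out of i32 range
    __m128i r = _mm_cvttps_epi32(v);
    return _mm_cvtsi128_si32(r); // 0x80000000 on x86 hardware
  }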

Differential Revision: https://reviews.llvm.org/D22105

llvm-svn: 276102
2016-07-20 10:18:01 +00:00
Simon Pilgrim 6350054017 [X86][SSE2] Updated tests to match llvm/test/CodeGen/X86/sse2-intrinsics-fast-isel-x86_64.ll
llvm-svn: 274126
2016-06-29 14:04:08 +00:00
Asaf Badouh 57819aa185 [X86] add _mm_loadu_si64
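
A sketch of the likely shape, with a placeholder name (my_mm_loadu_si64 is
illustrative, not the header text): load 64 bits through a packed, may_alias
struct so no alignment is required, then place them in the low half of the
vector.

  #include <emmintrin.h>

  static __inline__ __m128i my_mm_loadu_si64(void const *__a) {
    struct __loadu_si64 { long long __v; }
        __attribute__((__packed__, __may_alias__));
    long long __u = ((const struct __loadu_si64 *)__a)->__v;
    return _mm_set_epi64x(0, __u); // zero upper half, 64 bits in lane 0
  }
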
Differential Revision: http://reviews.llvm.org/D21504

llvm-svn: 273812
2016-06-26 13:51:54 +00:00
Craig Topper 50e3dfe9d0 [X86] Fix pslldq/psrldq intrinsics to not fail compilation with immediates larger than 16. This was accidentally broken in r272246.
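
For example, a byte-shift count of 16 or more is defined to produce zero, so
a call like the following must be accepted rather than rejected (function
name is illustrative):

  #include <emmintrin.h>

  __m128i shift_all_out(__m128i x) {
    return _mm_srli_si128(x, 17); // must compile; result is all zeroes
  }
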
llvm-svn: 273775
2016-06-25 07:31:14 +00:00
Sanjay Patel 280cfd1a69 [x86] translate SSE packed FP comparison builtins to IR
As noted in the code comment, a potential follow-on would be to remove
the builtins themselves. Other than ord/unord, this already works as 
expected. E.g.:

  typedef float v4sf __attribute__((__vector_size__(16)));
  v4sf fcmpgt(v4sf a, v4sf b) { return a > b; }

Differential Revision: http://reviews.llvm.org/D21268

llvm-svn: 272840
2016-06-15 21:20:04 +00:00
Sanjay Patel 7495ec026e [x86] generate IR for SSE integer min/max builtins
Sibling patch to r272806:
http://reviews.llvm.org/rL272806
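
A minimal example (function name is illustrative): after this change a call
like the one below is emitted as generic compare+select IR (icmp sgt +
select for the signed case) rather than a target-specific intrinsic; the
backend still matches it back to PMAXSW.

  #include <emmintrin.h>

  __m128i max16(__m128i a, __m128i b) {
    return _mm_max_epi16(a, b);
  }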

llvm-svn: 272807
2016-06-15 17:18:50 +00:00
Simon Pilgrim 00880511b1 [X86][SSE] Replace (V)CVTTPS2DQ and VCVTTPD2DQ truncating (round to zero) f32/f64 to i32 with generic IR (clang)
The 'cvtt' truncation (round to zero) conversions can be safely represented as generic __builtin_convertvector (fptosi) calls instead of x86 intrinsics. We already do this (implicitly) for the scalar equivalents.

Note: I looked at updating _mm_cvttpd_epi32 as well but this still requires a lot more backend work to correctly lower (both for debug and optimized builds).
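
A sketch of the header pattern this enables, with placeholder typedef and
function names (the real headers use __v4sf/__v4si):

  typedef float v4f32 __attribute__((__vector_size__(16)));
  typedef int v4i32 __attribute__((__vector_size__(16)));

  static __inline__ v4i32 my_cvttps_epi32(v4f32 __a) {
    return __builtin_convertvector(__a, v4i32); // fptosi: rounds toward zero
  }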

Differential Revision: http://reviews.llvm.org/D20859

llvm-svn: 271436
2016-06-01 21:46:51 +00:00
Simon Pilgrim 0e90936fea [X86] Ensure load/store tests unaligned pointers really are align 1
llvm-svn: 271227
2016-05-30 19:20:55 +00:00
Simon Pilgrim 645e1ad33a [X86][SSE] _mm_store1_ps/_mm_store1_pd should require an aligned pointer
According to the gcc headers, the Intel intrinsics docs, and MSDN codegen, _mm_store1_pd (and its _mm_store_pd1 equivalent) should use an aligned pointer - the clang headers are the only implementation I can find that assumes unaligned stores (by storing with _mm_storeu_pd).

Additionally, according to the Intel intrinsics docs and MSDN codegen, _mm_store1_ps (_mm_store_ps1) requires a similarly aligned pointer.

This patch raises the alignment requirements to match the other implementations by calling _mm_store_ps/_mm_store_pd instead.

I've also added the missing _mm_store_pd1 intrinsic (which maps to _mm_store1_pd like _mm_store_ps1 does to _mm_store1_ps).

As a follow-up I'll update the llvm fast-isel tests to match this codegen.
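
A sketch of the new shape, with a placeholder name (my_mm_store1_pd is
illustrative, not the header text): splat the low element, then use the
aligned store instead of _mm_storeu_pd.

  #include <emmintrin.h>

  static __inline__ void my_mm_store1_pd(double *__p, __m128d __a) {
    __a = __builtin_shufflevector(__a, __a, 0, 0); // duplicate element 0
    _mm_store_pd(__p, __a); // __p must be 16-byte aligned
  }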

Differential Revision: http://reviews.llvm.org/D20617

llvm-svn: 271218
2016-05-30 17:55:25 +00:00
Craig Topper 09175dab31 [X86] Replace unaligned store builtins in SSE/AVX intrinsic files with code that will compile to a native unaligned store. Remove the builtins since they are no longer used.
Intrinsics will be removed from llvm in a future commit.
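
A sketch of the replacement pattern, assumed from the description rather
than quoted from the patch (my_mm_storeu_pd is a placeholder name): storing
through a packed, may_alias struct compiles to a plain store with alignment
1, so no builtin is needed.

  #include <emmintrin.h>

  static __inline__ void my_mm_storeu_pd(double *__p, __m128d __a) {
    struct __storeu_pd { __m128d __v; }
        __attribute__((__packed__, __may_alias__));
    ((struct __storeu_pd *)__p)->__v = __a;
  }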

llvm-svn: 271214
2016-05-30 17:10:30 +00:00
Craig Topper f70a61ff3f [X86] Update test cases to make sure storeu builtins use the storeu intrinsics. We were previously matching other stores in the IR that were only present because this is an -O0 test.
We should probably look into making the storeu builtins just emit a normal store with an alignment of 1.

llvm-svn: 270664
2016-05-25 05:26:23 +00:00
Simon Pilgrim 90770c7c76 [X86][SSE] Replace lossless i32/f32 to f64 conversion intrinsics with generic IR
Both the (V)CVTDQ2PD(Y) (i32 to f64) and (V)CVTPS2PD(Y) (f32 to f64) conversion instructions are lossless and can be safely represented as generic __builtin_convertvector calls instead of x86 intrinsics without affecting final codegen.

This patch removes the clang builtins and their use in the sse2/avx headers - a future patch will deal with removing the llvm intrinsics, but that will require a bit more work.
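
A sketch of the resulting pattern for the f32 case, with a placeholder name
(the shuffle step is an assumption about the header shape): shuffle out the
two low float lanes, then widen them losslessly.

  #include <emmintrin.h>

  static __inline__ __m128d my_mm_cvtps_pd(__m128 __a) {
    typedef double v2f64 __attribute__((__vector_size__(16)));
    return (__m128d)__builtin_convertvector(
        __builtin_shufflevector(__a, __a, 0, 1), v2f64); // fpext, exact
  }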

Differential Revision: http://reviews.llvm.org/D20528

llvm-svn: 270499
2016-05-23 22:13:02 +00:00
Simon Pilgrim bcf8846be5 [X86][SSE2] Fixed shuffle of results in _mm_cmpnge_sd/_mm_cmpngt_sd tests
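
For context, a sketch of why the result shuffle matters (placeholder name;
not the exact header text): there is no "nge" compare predicate, so the
operands are swapped and "nle" is used; the upper lane must then be taken
from the original first operand, not from the swapped compare result.

  #include <emmintrin.h>

  static __inline__ __m128d my_mm_cmpnge_sd(__m128d __a, __m128d __b) {
    __m128d __c = _mm_cmpnle_sd(__b, __a); // swapped operands
    return (__m128d){ __c[0], __a[1] };    // keep __a's upper element
  }
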
llvm-svn: 270079
2016-05-19 16:48:59 +00:00
Simon Pilgrim 97728dfb39 [X86][SSE2] Added _mm_move_* tests
llvm-svn: 270043
2016-05-19 11:18:49 +00:00
Simon Pilgrim cddcd2bd45 [X86][SSE2] Added _mm_cast* and _mm_set* tests
llvm-svn: 270042
2016-05-19 11:03:48 +00:00
Simon Pilgrim 3f64bb9618 [X86][SSE2] Sync with llvm/test/CodeGen/X86/sse2-intrinsics-fast-isel.ll
llvm-svn: 270034
2016-05-19 09:52:59 +00:00
Simon Pilgrim 063c57c1f9 Revert r269967 (SSE2 builtin checks) due to failed buildbots
llvm-svn: 269970
2016-05-18 18:22:20 +00:00
Simon Pilgrim 8beed747ce [X86][SSE2] Sync with llvm/test/CodeGen/X86/sse2-intrinsics-fast-isel.ll
llvm-svn: 269967
2016-05-18 18:12:34 +00:00
Simon Pilgrim 2d1decf7cb [X86][SSE] Tidied up MMX/SSE/SSE2 builtin tests, moving them into the correct test files
llvm-svn: 269852
2016-05-17 22:03:31 +00:00
Simon Pilgrim 068c2ce836 [X86] Stripped backend codegen tests
As discussed on the mailing list, backend tests need to be put in llvm/test/CodeGen/X86 as fast-isel tests, using IR that is as close as possible to what is generated here.

The llvm tests will be (re)added in a future commit.

I will update PR24580 with this new plan.

llvm-svn: 254594
2015-12-03 08:45:21 +00:00
Simon Pilgrim a5c0493ddb [X86][SSE2] Added SSE2 IR + assembly codegen builtin tests
Improved tests as discussed in PR24580

llvm-svn: 254262
2015-11-29 20:23:00 +00:00