Commit Graph

48 Commits

Simon Pilgrim 12919f7e49 [X86][SSE] Replace 128-bit SSE41 PMOVSX intrinsics with native IR
128-bit vector integer sign extensions correctly lower to the pmovsx instructions even for debug builds.

This patch removes the builtins and reimplements the _mm_cvtepi*_epi* intrinsics using __builtin_shufflevector (to extract the bottom-most subvector) and __builtin_convertvector (to actually perform the sign extension).
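
For illustration, a sign extension from 8-bit to 16-bit lanes might be expressed with those two builtins roughly as follows; this is a minimal sketch with hypothetical sketch_* names and typedefs, not the actual smmintrin.h code.

#include <emmintrin.h>

typedef signed char __sketch_v16qs __attribute__((__vector_size__(16)));
typedef short       __sketch_v8hi  __attribute__((__vector_size__(16)));

/* Sketch of a pmovsxbw-style intrinsic: extract the bottom-most eight 8-bit
   elements, then sign-extend each one to 16 bits. */
static __inline__ __m128i sketch_cvtepi8_epi16(__m128i __V) {
  return (__m128i)__builtin_convertvector(
      __builtin_shufflevector((__sketch_v16qs)__V, (__sketch_v16qs)__V,
                              0, 1, 2, 3, 4, 5, 6, 7),
      __sketch_v8hi);
}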

Differential Revision: http://reviews.llvm.org/D12835

llvm-svn: 248092
2015-09-19 15:12:38 +00:00
Simon Pilgrim a79dfb3223 [X86] Add __builtin_ia32_undef* intrinsics to test
Minor tweak to rL246083

llvm-svn: 246200
2015-08-27 20:29:13 +00:00
Michael Kuperstein a3c7b74208 [X86] Add FXSR intrinsics
Add intrinsics for the FXSR instructions (FXSAVE/FXSAVE64/FXRSTOR/FXRSTOR64)

These were previously declared in Intrin.h for MSVC compatibility, but now
that we have them implemented, these declarations can be removed.
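
As a minimal sketch, assuming the new __builtin_ia32_fxsave builtin, such a wrapper could look roughly like this (sketch_fxsave is a hypothetical name, not the actual fxsrintrin.h definition):

/* Save the x87 FPU and SSE state into the 512-byte buffer at __p. */
static __inline__ void sketch_fxsave(void *__p) {
  __builtin_ia32_fxsave(__p);
}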

llvm-svn: 241053
2015-06-30 09:45:38 +00:00
Sanjay Patel 0c351aba25 [X86, AVX] replace vextractf128 intrinsics with generic shuffles
This is very much like D8088 (checked in at r231792).

Now that we've replaced the vinsertf128 intrinsics,
do the same for their extract twins.
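
For illustration, extracting a 128-bit half of a 256-bit vector can be written as a generic shuffle along these lines; this is a fixed-index sketch with hypothetical names, not the actual avxintrin.h macro, which selects the half from an immediate.

#include <immintrin.h>

/* Low and high 128-bit halves of a 256-bit float vector via generic shuffles. */
static __inline__ __m128 sketch_extractf128_lo_ps(__m256 __a) {
  return __builtin_shufflevector(__a, __a, 0, 1, 2, 3);
}
static __inline__ __m128 sketch_extractf128_hi_ps(__m256 __a) {
  return __builtin_shufflevector(__a, __a, 4, 5, 6, 7);
}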

Differential Revision: http://reviews.llvm.org/D8275

llvm-svn: 232052
2015-03-12 15:50:36 +00:00
Sanjay Patel 7f6aa52e93 [X86, AVX] Replace vinsertf128 intrinsics with generic shuffles.
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.

This is the sibling patch for the LLVM half of this change:
http://reviews.llvm.org/D8086

Differential Revision: http://reviews.llvm.org/D8088

llvm-svn: 231792
2015-03-10 15:19:26 +00:00
Craig Topper b1bc5cf4bc [X86] Remove pblendw and pblendd builtins that aren't being used by the intrinsic headers.
llvm-svn: 230738
2015-02-27 06:54:25 +00:00
Craig Topper ac0d58bc4c [X86] Remove the blendps/blendpd builtins. They aren't used by the intrinsic headers. We use an appropriate shuffle vector instead.
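
As an illustration of "an appropriate shuffle vector", a blendps with a fixed 0b0101 mask can be written as a generic shuffle like this (hypothetical name, fixed mask only; a real header would handle the immediate generically):

#include <xmmintrin.h>

/* Lanes 0 and 2 come from __b (indices 4 and 6 in the concatenated <a, b>
   vector); lanes 1 and 3 come from __a. */
static __inline__ __m128 sketch_blend_0101_ps(__m128 __a, __m128 __b) {
  return __builtin_shufflevector(__a, __b, 4, 1, 6, 3);
}
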
llvm-svn: 230616
2015-02-26 08:09:05 +00:00
Craig Topper 1601525f1c [X86] Add range checking to the immediate arguments of many of the SSE/AVX builtins.
llvm-svn: 227674
2015-01-31 06:31:23 +00:00
Chandler Carruth 2949e548f4 [x86] Clean up the x86 builtin specs to reflect r217310 in LLVM which
made the 8-bit masks actually 8-bit arguments to these intrinsics.

These builtins are a mess. Many were missing the I qualifier which
I added where obviously correct. Most aren't tested, but I've updated
the relevant tests. I've tried to catch all the things that should
become 'c' in this round.

It's also frustrating because the set of these is really ad-hoc and
doesn't really map that cleanly to the set supported by either GCC or
LLVM. Oh well...

llvm-svn: 217311
2014-09-06 10:30:51 +00:00
Andrea Di Biagio eb606a3c27 [x86] Add Clang support for intrinsic __rdpmc.
This patch adds intrinsic __rdpmc to header file 'ia32intrin.h'.
Intrinsic __rdpmc can be used to read performance monitoring counters. It is
implemented as a direct call to __builtin_ia32_rdpmc.

It takes as input a value representing the index of the performance counter to
read. The value of the performance counter is then returned as an unsigned
64-bit quantity.
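
A minimal sketch of such a wrapper (sketch_rdpmc is a hypothetical name; the real definition lives in ia32intrin.h):

/* Read the performance-monitoring counter selected by __counter. */
static __inline__ unsigned long long sketch_rdpmc(int __counter) {
  return __builtin_ia32_rdpmc(__counter);
}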

llvm-svn: 212053
2014-06-30 18:23:58 +00:00
Adam Nemet 286ae08e7d Implement AVX1 vbroadcast intrinsics with vector initializers
These intrinsics are special because they directly take a memory operand (AVX2
adds the register counterparts).  Typically, other non-memop intrinsics take
registers and then it's left to isel to fold memory operands.

In order to LICM intrinsics directly reading memory, we require that no stores
are in the loop (LICM) or that the folded load accesses constant memory
(MachineLICM).  When neither is the case we fail to hoist a loop-invariant
broadcast.

We can work around this limitation if we expose the load as a regular load and
then just implement the broadcast using the vector initializer syntax.  This
exposes the load to LICM and other optimizations.

At the IR level this is translated into a series of insertelements.  The
sequence is already recognized as a broadcast so there is no impact on the
quality of codegen.
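
A minimal sketch of the idea, with a hypothetical name rather than the actual avxintrin.h code:

#include <immintrin.h>

/* Load the scalar with a plain, hoistable load, then build the vector with an
   initializer; the resulting insertelement sequence is still recognized as a
   broadcast by the backend. */
static __inline__ __m256d sketch_broadcast_sd(double const *__a) {
  double __d = *__a;
  return (__m256d){ __d, __d, __d, __d };
}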

_mm256_broadcast_pd and _mm256_broadcast_ps are not updated by this patch
because right now we lack the DAG-combiner smartness to recover the broadcast
instructions.  This will be tackled in a follow-on.

There will be corresponding changes on the LLVM side to remove the LLVM
intrinsics and to auto-upgrade bitcode files.

Fixes <rdar://problem/16494520>

llvm-svn: 209846
2014-05-29 20:47:29 +00:00
Andrea Di Biagio 7ceec07cf6 [X86] Add Clang support for intrinsics __rdtsc and __rdtscp.
This patch:
 1. Adds a definition for two new GCCBuiltins in BuiltinsX86.def:
   __builtin_ia32_rdtsc;
   __builtin_ia32_rdtscp;

 2. Replaces the already existing definition of intrinsic __rdtsc in
    ia32intrin.h with a simple call to the new GCC builtin __builtin_ia32_rdtsc.

 3. Adds a definition for the new intrinsic __rdtscp in ia32intrin.h (both wrappers are sketched below).
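
A minimal sketch of what the two wrappers might look like (sketch_* names are hypothetical; the real definitions live in ia32intrin.h):

/* Read the time-stamp counter. */
static __inline__ unsigned long long sketch_rdtsc(void) {
  return __builtin_ia32_rdtsc();
}

/* Read the time-stamp counter and store the processor signature in *__aux. */
static __inline__ unsigned long long sketch_rdtscp(unsigned int *__aux) {
  return __builtin_ia32_rdtscp(__aux);
}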

llvm-svn: 207132
2014-04-24 18:26:35 +00:00
Eli Friedman f9d8c6cebb Add _mm_stream_si64 intrinsic.
While I'm here, also fix the alignment computation for the whole family of
intrinsics.

PR17298.

llvm-svn: 191243
2013-09-23 23:38:39 +00:00
Ben Langmuir 58078d0103 Add C intrinsics for Intel SHA Extensions
The intrinsics are added in shaintrin.h, which is included from x86intrin.h if __SHA__ is
enabled. SHA implies SSE2, which is needed for the __m128i type.

Also add the -msha/-mno-sha option.

llvm-svn: 190999
2013-09-19 13:22:04 +00:00
Chad Rosier 87622b8b84 Get rid of storelv4si builtin as it can be expressed directly. This is general
goodness because it provides opportunities to clean up things.  For example,

#include <emmintrin.h>  /* __m128i, _mm_storel_epi64 */
#include <stdint.h>     /* uint64_t */

uint64_t t1(__m128i vA)
{
  uint64_t Alo;
  /* Store the low 64 bits of vA to Alo and return it. */
  _mm_storel_epi64((__m128i*)&Alo, vA);
  return Alo;
}

was generating 

	movq	%xmm0, -8(%rbp)
	movq	-8(%rbp), %rax

and now generates

	movd	%xmm0, %rax

rdar://11282581

llvm-svn: 155924
2012-05-01 18:11:51 +00:00
Craig Topper 26e74e50b6 Convert vperm2f128 and vperm2i128 intrinsics back to using llvm intrinsics. Unfortunately, these instructions have behavior that can't be modeled with shuffle vector.
llvm-svn: 154906
2012-04-17 05:16:56 +00:00
Craig Topper e5ea3b0239 Remove vperm2f* and vperm2i builtins. Same effect can be achieved with __builtin_shufflevector.
llvm-svn: 150064
2012-02-08 07:33:36 +00:00
Craig Topper fec9f8edb7 Remove vpermilp* builtins. Same effect can be achieved with __builtin_shufflevector.
llvm-svn: 150056
2012-02-08 05:16:54 +00:00
Craig Topper d6d3a05b4f Cleanup 3dnow builtin handling. Most of them were already handled by LLVM connecting intrinsics and builtins in IntrinsicsX86.td.
llvm-svn: 149233
2012-01-30 08:18:19 +00:00
Craig Topper 80df922f2f Re-enable test that was broken by r148919
llvm-svn: 148932
2012-01-25 06:23:23 +00:00
Chris Lattner f07b612313 disable this test for now.
llvm-svn: 148928
2012-01-25 05:38:06 +00:00
Craig Topper a89747dd1e Add AVX2 intrinsics for pavg, pblend, and pcmp instructions. Also remove unneeded builtins for SSE pcmp. Change SSE pcmpeqq and pcmpgtq to not use builtins and just use vector == and >.
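
For illustration, the pcmpeqq case can be written with the generic vector operator roughly as follows (hypothetical name and typedef, not the exact smmintrin.h code):

#include <emmintrin.h>

typedef long long __sketch_v2di __attribute__((__vector_size__(16)));

/* Each 64-bit lane becomes all-ones on a match, all-zeros otherwise. */
static __inline__ __m128i sketch_cmpeq_epi64(__m128i __a, __m128i __b) {
  return (__m128i)((__sketch_v2di)__a == (__sketch_v2di)__b);
}
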
llvm-svn: 146969
2011-12-20 09:55:26 +00:00
Chad Rosier 060d03be1c Fix _mm256_round_pd, _mm256_round_ps, _mm_permute_pd and _mm256_permute_pd AVX
intrinsics to use "I" (ICE) markings.  Fix avxintrin.h to take them into 
account.
Part of rdar://10595450

llvm-svn: 146791
2011-12-17 00:15:26 +00:00
Bill Wendling bb455154a1 Remove the 'unaligned load' builtins now that they're no longer used in the *mmintrin.h files.
llvm-svn: 131300
2011-05-13 18:52:28 +00:00
Bill Wendling e106c34817 LLVM doesn't always optimize away the four loads from this:
(__m128){ p[0], p[1], p[2], p[3] }

which produces really bad code. This could be done in instcombine, but it's
probably better to do it in the front-end instead.
<rdar://problem/9424836>
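
One common front-end technique for this, shown here only as an illustration and not necessarily the exact change made in this commit, is to load through a packed, may_alias struct so a single unaligned vector load is emitted:

#include <xmmintrin.h>

struct __sketch_loadu_ps {
  __m128 __v;
} __attribute__((__packed__, __may_alias__));

/* One 16-byte unaligned load instead of four scalar loads. */
static __inline__ __m128 sketch_loadu_ps(const float *__p) {
  return ((const struct __sketch_loadu_ps *)__p)->__v;
}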

llvm-svn: 131237
2011-05-12 19:02:15 +00:00
Michael J. Spencer 6826eb816a Add 3DNow! Intrinsics.
llvm-svn: 129570
2011-04-15 15:07:13 +00:00
Bill Wendling a865185ad6 Removing the unaligned load tests from builtins-x86.c since they're generated by a regular 'load' now.
llvm-svn: 129464
2011-04-13 20:17:22 +00:00
Argyrios Kyrtzidis 073c9cb592 Implement __builtin_ia32_vec_ext_v2si function (required by Qt).
llvm-svn: 116162
2010-10-10 03:19:11 +00:00
Bruno Cardoso Lopes 762e401911 Remove rsqrtps_nr256 and sqrtps_nr256 builtins, at least until we need them
llvm-svn: 110844
2010-08-11 19:18:36 +00:00
Bruno Cardoso Lopes 65954ffc69 Remove 256-bit cast built-ins and make the AVX intrinsic call llvm __builtin_shufflevector with the appropriate arguments
llvm-svn: 110771
2010-08-11 02:14:38 +00:00
Bruno Cardoso Lopes a4f1930b75 Remove 256-bit unpack built-ins and make the AVX intrinsic call llvm __builtin_shufflevector with the appropriate arguments
llvm-svn: 110768
2010-08-11 01:43:24 +00:00
Bruno Cardoso Lopes e712a135b7 Remove 256-bit shuffle built-ins and make the AVX intrinsic call llvm __builtin_shufflevector with the appropriate arguments
llvm-svn: 110766
2010-08-11 01:17:34 +00:00
Bruno Cardoso Lopes 3d3fc1d075 Make replicate intrinsics use shufflevector instead of dup builtins; also remove the dup builtins.
llvm-svn: 110646
2010-08-10 02:23:54 +00:00
Bruno Cardoso Lopes e2538c4ecf We don't want to support built-ins which aren't needed by the intrinsics. Remove them.
llvm-svn: 110399
2010-08-05 23:47:43 +00:00
Bruno Cardoso Lopes 6586724f71 Add more AVX 256-bit intrinsics and test cases for them
llvm-svn: 110178
2010-08-04 01:11:26 +00:00
Bruno Cardoso Lopes 1f927ccaa2 Support x86 AVX 256-bit instruction built-ins. Right now we support all of them;
as soon as we properly codegen the simple vector operations, the unnecessary
built-ins/intrinsics will be removed from clang and llvm. Also add tests for the
new built-ins.

llvm-svn: 110096
2010-08-03 01:57:18 +00:00
Eric Christopher 45fa7e60fb Fix __builtin_ia32_roundss and __builtin_ia32_roundsd definitions.
Re-enable test.

llvm-svn: 97707
2010-03-04 01:34:19 +00:00
Daniel Dunbar 8fbe78f6fc Update tests to use %clang_cc1 instead of 'clang-cc' or 'clang -cc1'.
- This is designed to make it obvious that %clang_cc1 is a "test variable"
   which is substituted. It is '%clang_cc1' instead of '%clang -cc1' because it
   can be useful to redefine what gets run as 'clang -cc1' (for example, to set
   a default target).

llvm-svn: 91446
2009-12-15 20:14:24 +00:00
Daniel Dunbar 8b57697954 Eliminate &&s in tests.
- 'for i in $(find . -type f); do sed -e 's#\(RUN:.*[^ ]\) *&& *$#\1#g' $i | FileUpdate $i; done', for the curious.

llvm-svn: 86430
2009-11-08 01:45:36 +00:00
Eli Friedman e9ff191459 Remove a few more vector builtins.
llvm-svn: 73022
2009-06-07 09:32:56 +00:00
Eli Friedman 5a996fc0fc Now that LLVM CodeGen can handle the generic variations a bit better,
get rid of a few more clang vector builtins.

llvm-svn: 73015
2009-06-07 07:12:56 +00:00
Eli Friedman 5f75ff84b7 Test changes to account for removed builtins.
llvm-svn: 73004
2009-06-06 18:15:42 +00:00
Anders Carlsson 8dd2947696 Remove an unused builtin.
llvm-svn: 72033
2009-05-18 19:25:54 +00:00
Anders Carlsson 2081200b8c Add 'cmp' SSE builtins and get rid of a bunch of other builtins.
llvm-svn: 72032
2009-05-18 19:16:46 +00:00
Daniel Dunbar a45cf5b6b0 Rename clang to clang-cc.
Tests and drivers updated, still need to shuffle dirs.

llvm-svn: 67602
2009-03-24 02:24:46 +00:00
Daniel Dunbar 7a88d1fcc4 Fix definition of __builtin_ia32_vec_set_v2di and de-XFAIL
builtins-x86.c.

llvm-svn: 63069
2009-01-26 23:43:02 +00:00
Mon P Wang 179a72f000 Added vec_set intrinsics
llvm-svn: 57749
2008-10-18 02:43:25 +00:00
Daniel Dunbar 93bc49d882 Add X86 builtin code generation test case.
llvm-svn: 57104
2008-10-05 06:38:36 +00:00