According to the Intel documentation, the mask operand of the maskload and
maskstore intrinsics is always a vector of packed integer/long integer values.
This patch introduces the following two changes:
1. It fixes the avx maskload/store intrinsic definitions in avxintrin.h.
2. It changes BuiltinsX86.def to match the correct gcc definitions for avx
maskload/store (see D13861 for more details).
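For illustration only, a minimal usage sketch of the fixed signature (compile with -mavx; the wrapper name is made up here):
#include <immintrin.h>
/* With the corrected definition the mask is a packed-integer vector.
   Load lanes 0 and 2; lanes whose mask sign bit is clear are zeroed. */
__m256d load_even_lanes(double const *p) {
  __m256i mask = _mm256_set_epi64x(0, -1, 0, -1);  /* arguments run high lane to low lane */
  return _mm256_maskload_pd(p, mask);
}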
Differential Revision: http://reviews.llvm.org/D13861
llvm-svn: 250816
128-bit vector integer sign extensions correctly lower to the pmovsx instructions even for debug builds.
This patch removes the builtins and reimplements the _mm_cvtepi*_epi* intrinsics using __builtin_shufflevector (to extract the bottommost subvector) and __builtin_convertvector (to actually perform the sign extension).
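A rough sketch of the pattern, using hypothetical helper type and function names rather than the exact header code:
#include <emmintrin.h>
typedef signed char v16qs_t __attribute__((__vector_size__(16)));
typedef short       v8hi_t  __attribute__((__vector_size__(16)));
static inline __m128i cvtepi8_epi16_sketch(__m128i v) {
  /* take the low 8 bytes, then sign-extend each element to 16 bits */
  return (__m128i)__builtin_convertvector(
      __builtin_shufflevector((v16qs_t)v, (v16qs_t)v, 0, 1, 2, 3, 4, 5, 6, 7),
      v8hi_t);
}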
Differential Revision: http://reviews.llvm.org/D12835
llvm-svn: 248092
Add intrinsics for the FXSR instructions (FXSAVE/FXSAVE64/FXRSTOR/FXRSTOR64)
These were previously declared in Intrin.h for MSVC compatibility, but now
that we have them implemented, these declarations can be removed.
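For reference, a minimal usage sketch (buffer and function names are made up; compile with -mfxsr if it is not already enabled):
#include <immintrin.h>
/* FXSAVE/FXRSTOR operate on a 512-byte, 16-byte-aligned save area. */
static unsigned char fx_area[512] __attribute__((aligned(16)));
void save_fp_state(void)    { _fxsave(fx_area); }
void restore_fp_state(void) { _fxrstor(fx_area); }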
llvm-svn: 241053
This is very much like D8088 (checked in at r231792).
Now that we've replaced the vinsertf128 intrinsics,
do the same for their extract twins.
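As a sketch of the replacement pattern (the wrapper name is hypothetical; compile with -mavx):
#include <immintrin.h>
/* extract the upper 128-bit half of a 256-bit vector with a generic shuffle */
static inline __m128 extract_hi128(__m256 v) {
  return __builtin_shufflevector(v, v, 4, 5, 6, 7);  /* elements 4..7 */
}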
Differential Revision: http://reviews.llvm.org/D8275
llvm-svn: 232052
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.
This is the sibling patch for the LLVM half of this change:
http://reviews.llvm.org/D8086
Differential Revision: http://reviews.llvm.org/D8088
llvm-svn: 231792
made the 8-bit masks actually 8-bit arguments to these intrinsics.
These builtins are a mess. Many were missing the I qualifier, which
I added where obviously correct. Most aren't tested, but I've updated
the relevant tests. I've tried to catch all the things that should
become 'c' in this round.
It's also frustrating because the set of these is really ad-hoc and
doesn't really map that cleanly to the set supported by either GCC or
LLVM. Oh well...
llvm-svn: 217311
This patch adds intrinsic __rdpmc to header file 'ia32intrin.h'.
Intrinsic __rdpmc can be used to read performance monitoring counters. It is
implemented as a direct call to __builtin_ia32_rdpmc.
It takes as input a value representing the index of the performance counter to
read. The value of the performance counter is then returned as an unsigned
64-bit quantity.
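For illustration, a small usage sketch (the wrapper name is made up; user-mode RDPMC must be enabled by the OS or this faults):
#include <x86intrin.h>
unsigned long long read_counter0(void) {
  return __rdpmc(0);  /* read performance counter with index 0 */
}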
llvm-svn: 212053
These intrinsics are special because they directly take a memory operand (AVX2
adds the register counterparts). Typically, other non-memop intrinsics take
registers and then it's left to isel to fold memory operands.
In order to hoist intrinsics that directly read memory, we require that there
are no stores in the loop (LICM) or that the folded load accesses constant
memory (MachineLICM). When neither is the case, we fail to hoist a loop-invariant
broadcast.
We can work around this limitation if we expose the load as a regular load and
then just implement the broadcast using the vector initializer syntax. This
exposes the load to LICM and other optimizations.
At the IR level this is translated into a series of insertelements. The
sequence is already recognized as a broadcast so there is no impact on the
quality of codegen.
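A rough sketch of the new pattern (not the exact header code; compile with -mavx):
#include <immintrin.h>
static inline __m256 broadcast_ss_sketch(float const *p) {
  float f = *p;  /* ordinary load, visible to LICM */
  /* vector initializer lowers to insertelements, recognized as a broadcast */
  return (__m256){ f, f, f, f, f, f, f, f };
}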
_mm256_broadcast_pd and _mm256_broadcast_ps are not updated by this patch
because right now we lack the DAG-combiner smartness to recover the broadcast
instructions. This will be tackled in a follow-on.
There will be complementary changes on the LLVM side to remove the LLVM
intrinsics and to auto-upgrade bitcode files.
Fixes <rdar://problem/16494520>
llvm-svn: 209846
This patch:
1. Adds a definition for two new GCCBuiltins in BuiltinsX86.def:
__builtin_ia32_rdtsc;
__builtin_ia32_rdtscp;
2. Replaces the already existing definition of intrinsic __rdtsc in
ia32intrin.h with a simple call to the new GCC builtin __builtin_ia32_rdtsc.
3. Adds a definition for the new intrinsic __rdtscp in ia32intrin.h.
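For illustration, a small usage sketch (the timing helper is hypothetical):
#include <x86intrin.h>
unsigned long long cycles_elapsed(unsigned int *tsc_aux) {
  unsigned long long start = __rdtsc();  /* maps to __builtin_ia32_rdtsc */
  /* ... work being timed ... */
  return __rdtscp(tsc_aux) - start;      /* also stores IA32_TSC_AUX into *tsc_aux */
}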
llvm-svn: 207132
Intrinsics are added to shaintrin.h, which is included from x86intrin.h if __SHA__ is
enabled. SHA implies SSE2, which is needed for the __m128i type.
Also add the -msha/-mno-sha option.
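For example, a minimal sketch (compile with -msha; the wrapper name is made up):
#include <immintrin.h>
/* two SHA-256 rounds: state halves in s0/s1, message words plus round
   constants in the low lanes of wk */
__m128i sha256_two_rounds(__m128i s0, __m128i s1, __m128i wk) {
  return _mm_sha256rnds2_epu32(s0, s1, wk);
}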
llvm-svn: 190999
goodness because it provides opportunities to clean things up. For example,
uint64_t t1(__m128i vA)
{
  uint64_t Alo;
  _mm_storel_epi64((__m128i*)&Alo, vA);
  return Alo;
}
was generating
  movq %xmm0, -8(%rbp)
  movq -8(%rbp), %rax
and now generates
  movd %xmm0, %rax
rdar://11282581
llvm-svn: 155924
(__m128){ p[0], p[1], p[2], p[3] }
which produces really bad code. This could be done in instcombine, but it's
probably better to do it in the front-end instead.
<rdar://problem/9424836>
llvm-svn: 131237
as soon as we properly codegen the simple vector operations, remove the
unnecessary built-ins/intrinsics from clang and llvm. Also add tests for the new
built-ins.
llvm-svn: 110096
- This is designed to make it obvious that %clang_cc1 is a "test variable"
which is substituted. It is '%clang_cc1' instead of '%clang -cc1' because it
can be useful to redefine what gets run as 'clang -cc1' (for example, to set
a default target).
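For example, a clang test typically invokes the substitution from a RUN line such as the following (the exact flags here are illustrative):
// RUN: %clang_cc1 -triple x86_64-unknown-unknown -emit-llvm %s -o - | FileCheck %s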
llvm-svn: 91446