Commit Graph

526 Commits

Author SHA1 Message Date
Hao Liu 8a0099e02c Fix the overly wide range check for the scalar narrow shift immediate.
E.g. the immediate of vshrns_n_s16 was checked against [1,16], but the valid range is [1,8].

llvm-svn: 195942
2013-11-29 02:13:17 +00:00
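For context, the vector counterpart vshrn_n_s16 illustrates the same bound; a minimal sketch, assuming an arm_neon.h target (narrow_by_eight is an illustrative name):

    #include <arm_neon.h>

    /* Narrowing shift: 16-bit lanes shrink to 8 bits, so the shift
       immediate must lie in [1, 8]; 8 is the largest legal value. */
    int8x8_t narrow_by_eight(int16x8_t v) {
        return vshrn_n_s16(v, 8);
    }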
Chad Rosier 9e59285cc8 [AArch64] Add support for NEON scalar floating-point absolute difference.
llvm-svn: 195804
2013-11-27 01:46:19 +00:00
Chad Rosier 52e31b20cb [AArch64] Add support for NEON scalar floating-point to integer convert
instructions.

llvm-svn: 195789
2013-11-26 22:17:51 +00:00
Ana Pazos dbd1a22496 Implemented Neon scalar vdup_lane intrinsics.
Fixed scalar dup alias and added test case.

llvm-svn: 195329
2013-11-21 08:15:01 +00:00
Ana Pazos 2b02688fd9 Implemented Neon scalar by element intrinsics.
Intrinsics implemented: vqdmull_lane, vqdmulh_lane, vqrdmulh_lane,
vqdmlal_lane, vqdmlsl_lane scalar Neon intrinsics.

llvm-svn: 195326
2013-11-21 07:36:33 +00:00
Hao Liu 171cedf61e Implement the AArch64 NEON instruction classes SIMD lsone and SIMD lsone-post.
llvm-svn: 195079
2013-11-19 02:17:31 +00:00
Hao Liu 5e4ce1ae9d Implement the newly added AArch64 ACLE functions for ld1/st1 with 2/3/4 vectors.
The functions are like: vst1_s8_x2 ...

llvm-svn: 194991
2013-11-18 06:33:43 +00:00
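A minimal usage sketch for these multi-vector ld1/st1 functions, assuming an AArch64 target with arm_neon.h (load_pair and store_pair are illustrative names):

    #include <arm_neon.h>

    /* vld1_s8_x2 loads two consecutive 8x8-bit vectors starting at p;
       vst1_s8_x2 stores them back in the same layout. */
    int8x8x2_t load_pair(const int8_t *p) {
        return vld1_s8_x2(p);
    }

    void store_pair(int8_t *p, int8x8x2_t v) {
        vst1_s8_x2(p, v);
    }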
Benjamin Kramer 847c1d90e1 Remove unused but set variable.
llvm-svn: 194920
2013-11-16 11:47:52 +00:00
Ana Pazos 6f2a47a9e5 Implemented aarch64 Neon scalar vmulx_lane intrinsics
Implemented aarch64 Neon scalar vfma_lane intrinsics
Implemented aarch64 Neon scalar vfms_lane intrinsics

Implemented legacy vmul_n_f64, vmul_lane_f64, vmul_laneq_f64
intrinsics (v1f64 parameter type) using Neon scalar instructions.

Implemented legacy vfma_lane_f64, vfms_lane_f64,
vfma_laneq_f64, vfms_laneq_f64 intrinsics (v1f64 parameter type)
using Neon scalar instructions.

llvm-svn: 194889
2013-11-15 23:33:31 +00:00
Chad Rosier 7aaee48bf0 [AArch64] Add support for legacy AArch32 NEON scalar shift right by immediate
and accumulate instructions.

llvm-svn: 194732
2013-11-14 22:02:24 +00:00
Kevin Qin caac85e612 [AArch64 neon] support poly64 and relevant intrinsic functions.
llvm-svn: 194660
2013-11-14 03:29:16 +00:00
Kevin Qin 1718af6f0a Implement aarch64 neon instruction class misc.
llvm-svn: 194657
2013-11-14 02:45:18 +00:00
Jiangning Liu 18b707cb3f Implement AArch64 NEON instruction set AdvSIMD (table).
llvm-svn: 194649
2013-11-14 01:57:55 +00:00
Reid Kleckner 59e4a6f5e2 -fms-extensions: Recognize _alloca as an alias for the alloca builtin
Differential Revision: http://llvm-reviews.chandlerc.com/D1989

llvm-svn: 194617
2013-11-13 22:58:53 +00:00
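A small sketch of what the alias enables, assuming the code is compiled with -fms-extensions (fill_on_stack is an illustrative name):

    #include <stddef.h>
    #include <string.h>

    void fill_on_stack(size_t n) {
        /* With -fms-extensions, _alloca is accepted as an alias for the
           alloca builtin: it allocates n bytes in the current stack frame. */
        char *buf = _alloca(n);
        memset(buf, 0, n);
    }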
Chad Rosier e714a962b5 [AArch64] Tests for legacy AArch32 NEON scalar shift by immediate instructions.
A number of non-overloaded intrinsics have been replaced by their overloaded
counterparts.

llvm-svn: 194599
2013-11-13 20:05:44 +00:00
Chad Rosier 249c714bb4 [AArch64] Add support for NEON scalar floating-point convert to fixed-point instructions.
llvm-svn: 194395
2013-11-11 18:04:22 +00:00
Jiangning Liu c628af66c7 Implement AArch64 Neon instruction set Perm.
llvm-svn: 194124
2013-11-06 03:35:53 +00:00
Jiangning Liu 37f5bb1b28 Implement AArch64 Neon instruction set Bitwise Extract.
llvm-svn: 194119
2013-11-06 02:26:12 +00:00
Jiangning Liu 34a7109b47 Implement AArch64 Neon Crypto instruction classes AES, two-register SHA, and three-register SHA.
llvm-svn: 194086
2013-11-05 17:42:24 +00:00
Kevin Qin 9eece7b5e0 Implemented aarch64 neon intrinsic vcopy_lane with float type.
llvm-svn: 194042
2013-11-05 02:05:44 +00:00
Chad Rosier 74329d6cff [AArch64] Add support for NEON scalar fixed-point convert to floating-point instructions.
llvm-svn: 193817
2013-10-31 22:37:08 +00:00
Chad Rosier bdca387884 [AArch64] Add support for NEON scalar shift immediate instructions.
llvm-svn: 193791
2013-10-31 19:29:05 +00:00
Mark Lacey a8e7df3602 Add CodeGenABITypes.h for use in LLDB.
CodeGenABITypes is a wrapper built on top of CodeGenModule that exposes
some of the functionality of CodeGenTypes (held by CodeGenModule),
specifically methods that determine the LLVM types appropriate for
function argument and return values.

In addition to CodeGenABITypes.h, CGFunctionInfo.h is introduced, and the
definitions of ABIArgInfo, RequiredArgs, and CGFunctionInfo are moved
into this new header from the private headers ABIInfo.h and CGCall.h.

Exposing this functionality is one part of making it possible for LLDB
to determine the actual ABI locations of function arguments and return
values for any supported target without hard-coding ABI knowledge in
the LLDB code.

llvm-svn: 193717
2013-10-30 21:53:58 +00:00
Chad Rosier 4d55e6e0a4 [AArch64] Add support for NEON scalar floating-point compare instructions.
llvm-svn: 193692
2013-10-30 15:20:07 +00:00
Peter Collingbourne b453cd64a7 Implement function type checker for the undefined behavior sanitizer.
This uses function prefix data to store function type information at the
function pointer.

Differential Revision: http://llvm-reviews.chandlerc.com/D1338

llvm-svn: 193058
2013-10-20 21:29:19 +00:00
Chad Rosier 3c03dee1d1 [AArch64] Add support for NEON scalar extract narrow instructions.
llvm-svn: 192971
2013-10-18 14:03:36 +00:00
Chad Rosier e7465644c6 [AArch64] Add support for NEON scalar three register different instruction
class.  The instruction class includes the signed saturating doubling
multiply-add long, signed saturating doubling multiply-subtract long, and
the signed saturating doubling multiply long instructions.

llvm-svn: 192909
2013-10-17 18:12:50 +00:00
Chad Rosier 00eef17dbe [AArch64] Add support for NEON scalar negate instruction.
llvm-svn: 192845
2013-10-16 21:04:53 +00:00
Chad Rosier e904137c01 [AArch64] Add support for NEON scalar absolute value instruction.
llvm-svn: 192844
2013-10-16 21:04:49 +00:00
Chad Rosier 2681b3fb61 Update comment.
llvm-svn: 192807
2013-10-16 16:30:39 +00:00
Chad Rosier 069b90463d [AArch64] Add support for NEON scalar signed saturating accumulated of unsigned
value and unsigned saturating accumulate of signed value instructions.

llvm-svn: 192801
2013-10-16 16:09:16 +00:00
Chad Rosier a70fb7b716 [AArch64] Add support for NEON scalar signed saturating absolute value and
scalar signed saturating negate instructions.

llvm-svn: 192734
2013-10-15 21:19:02 +00:00
Chad Rosier 193573ec89 [AArch64] Add support for NEON scalar integer compare instructions.
llvm-svn: 192597
2013-10-14 14:37:40 +00:00
Kevin Qin f22bf50443 Implemented aarch64 SIMD copy related ACLE intrinsics:
vget_lane, vset_lane, vcopy_lane, vcreate, vdup_n, vdup_lane, vmov_n.

llvm-svn: 192411
2013-10-11 02:34:30 +00:00
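A brief sketch of how a few of these copy intrinsics compose, assuming arm_neon.h (pick_lane is an illustrative name):

    #include <arm_neon.h>

    int32_t pick_lane(void) {
        int32x2_t v = vdup_n_s32(7);   /* broadcast 7 into both lanes */
        v = vset_lane_s32(9, v, 1);    /* overwrite lane 1 with 9 */
        return vget_lane_s32(v, 1);    /* read lane 1 back: returns 9 */
    }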
Hao Liu 1eade6d927 Implement AArch64 vector load/store multiple N-element structure class SIMD(lselem).
Including the following 14 instructions:
4 ld1 insts: load multiple 1-element structure to sequential 1/2/3/4 registers.
ld2/ld3/ld4: load multiple N-element structure to sequential N registers (N=2,3,4).
4 st1 insts: store multiple 1-element structure from sequential 1/2/3/4 registers.
st2/st3/st4: store multiple N-element structure from sequential N registers (N = 2,3,4).

llvm-svn: 192362
2013-10-10 17:01:49 +00:00
Tim Northover 72ace5cf12 Revert "Implement AArch64 vector load/store multiple N-element structure class SIMD(lselem). "
This reverts commit r192351. The LLVM side broke the build and the Clang tests
will inevitably fail without it.

llvm-svn: 192356
2013-10-10 16:00:08 +00:00
Hao Liu c319193636 Implement AArch64 vector load/store multiple N-element structure class SIMD(lselem).
Including the following 14 instructions:
4 ld1 insts: load multiple 1-element structure to sequential 1/2/3/4 registers.
ld2/ld3/ld4: load multiple N-element structure to sequential N registers (N=2,3,4).
4 st1 insts: store multiple 1-element structure from sequential 1/2/3/4 registers.
st2/st3/st4: store multiple N-element structure from sequential N registers (N = 2,3,4).

E.g. ld1 (3-register version) loads 32-bit elements {A, B, C, D, E, F} sequentially into a list of three 64-bit vectors {BA, DC, FE}.
E.g. ld3 loads 32-bit elements {A, B, C, D, E, F} into a list of three 64-bit vectors {DA, EB, FC}.

llvm-svn: 192351
2013-10-10 14:59:36 +00:00
Chad Rosier 0a903478c6 [AArch64] Add support for NEON scalar floating-point reciprocal estimate,
reciprocal exponent, and reciprocal square root estimate instructions.

llvm-svn: 192243
2013-10-08 22:09:29 +00:00
Chad Rosier 0babda4b9c [AArch64] Add support for NEON scalar signed/unsigned integer to floating-point
convert instructions.

llvm-svn: 192232
2013-10-08 20:43:46 +00:00
Matt Arsenault 2f15263807 Fix objectsize tests after r192117
llvm-svn: 192120
2013-10-07 19:00:18 +00:00
Chad Rosier 027dfade54 [AArch64] Add support for NEON scalar arithmetic instructions:
SQDMULH, SQRDMULH, FMULX, FRECPS, and FRSQRTS.

llvm-svn: 192112
2013-10-07 17:07:17 +00:00
Jiangning Liu b96ebac02b Implement aarch64 neon instruction set AdvSIMD (Across).
llvm-svn: 192029
2013-10-05 08:22:55 +00:00
Amaury de la Vieuville 21bf6ed730 Do not emit undefined lshr/ashr for NEON shifts
These IR instructions are undefined when the amount is equal to operand
size, but NEON right shifts support such shifts. Work around that by
emitting a different IR in these cases.

llvm-svn: 191953
2013-10-04 13:13:15 +00:00
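A sketch of the case the commit describes, assuming arm_neon.h (shift_all is an illustrative name): the intrinsic permits a shift equal to the element width, which a plain IR ashr/lshr cannot express directly.

    #include <arm_neon.h>

    int32x2_t shift_all(int32x2_t v) {
        /* A shift of 32 is legal for the intrinsic (each signed lane
           becomes all sign bits), but an IR ashr of an i32 by 32 is
           undefined, so clang must emit different IR for this case. */
        return vshr_n_s32(v, 32);
    }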
Jiangning Liu 4617e9dc85 Implement aarch64 neon instruction set AdvSIMD (3V elem).
llvm-svn: 191945
2013-10-04 09:21:17 +00:00
Joey Gouly 75987a65f3 [ARM] Add a builtin to allow you to use the 'sevl' instruction.
llvm-svn: 191816
2013-10-02 10:00:18 +00:00
Benjamin Kramer 9b1dfe8b56 Mark an impossible path as unreachable to pacify GCC.
llvm-svn: 191436
2013-09-26 16:36:08 +00:00
Benjamin Kramer 39c4924db9 Remove tabs.
llvm-svn: 191427
2013-09-26 12:16:47 +00:00
NAKAMURA Takumi 788af10a8a CGBuiltin.cpp: Prune a stray default: label. [-Wcovered-switch-default]
llvm-svn: 191277
2013-09-24 04:37:50 +00:00
Jiangning Liu 036f16dc8c Initial support for Neon scalar instructions.
Patch by Ana Pazos.

1. Added support for v1ix and v1fx types.
2. Added Scalar Pairwise Reduce instructions.
3. Added initial implementation of Scalar Arithmetic instructions.

llvm-svn: 191264
2013-09-24 02:48:06 +00:00
Eli Friedman f9d8c6cebb Add _mm_stream_si64 intrinsic.
While I'm here, also fix the alignment computation for the whole family of
intrinsics.

PR17298.

llvm-svn: 191243
2013-09-23 23:38:39 +00:00
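A usage sketch, assuming an x86-64 target with SSE2 (stream_store is an illustrative name):

    #include <immintrin.h>

    /* _mm_stream_si64 performs a non-temporal 64-bit store,
       bypassing the cache hierarchy. */
    void stream_store(long long *dst, long long value) {
        _mm_stream_si64(dst, value);
    }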
Joey Gouly 1e8637b259 [ARMv8] Add builtins for CRC instructions.
Patch by Bradley Smith!

llvm-svn: 190931
2013-09-18 10:07:09 +00:00
Hal Finkel 28b2ae3692 Restore the sqrt -> llvm.sqrt mapping in fast-math mode
This restores the sqrt -> llvm.sqrt mapping, but only in fast-math mode
(specifically, when the UnsafeFPMath or NoNaNsFPMath CodeGen options are
enabled). The @llvm.sqrt* intrinsics have slightly different semantics from the
libm call, specifically, they are undefined when given a non-zero negative
number (the libm calls will always return NaN for any negative number).

This mapping was removed in r100613, and replaced with a TODO, but at that time
the fast-math flags were not yet implemented. Now that we have these, restoring
this mapping is important because it will enable autovectorization of sqrt
calls in loops (at least in fast-math mode).

llvm-svn: 190646
2013-09-12 23:57:55 +00:00
Jiangning Liu 1bda93a252 Implement aarch64 neon instruction set AdvSIMD (3V Diff), covering the following 26 instructions,
SADDL, UADDL, SADDW, UADDW, SSUBL, USUBL, SSUBW, USUBW, ADDHN, RADDHN, SABAL, UABAL, SUBHN, RSUBHN, SABDL, UABDL, SMLAL, UMLAL, SMLSL, UMLSL, SQDMLAL, SQDMLSL, SMULL, UMULL, SQDMULL, PMULL

llvm-svn: 190289
2013-09-09 02:21:08 +00:00
Hao Liu b1852eed38 Implement aarch64 neon instructions in AdvSIMD(shift). About 24 shift instructions:
sshr, ushr, ssra, usra, srshr, urshr, srsra, ursra, sri, shl, sli, sqshlu, sqshl, uqshl, shrn, sqrshr…
and 4 convert instructions:
scvtf, ucvtf, fcvtzs, fcvtzu

llvm-svn: 189926
2013-09-04 09:29:13 +00:00
Tim Northover 550ce58312 ARM: comment on why vmull intrinsic has to exist for now.
llvm-svn: 189464
2013-08-28 09:46:40 +00:00
Tim Northover 4ae9812283 ARM: Emit normal IR for vaddhn/vsubhn NEON intrinsics
These operations "vector add high-half narrow" actually correspond to the
sequence:

    %sum = add <4 x i32> %lhs, %rhs
    %high = lshr <4 x i32> %sum, <i32 16, i32 16, i32 16, i32 16>
    %res = trunc <4 x i32> %high to <4 x i16>

Now that LLVM can spot this, Clang should emit the corresponding LLVM IR.

llvm-svn: 189463
2013-08-28 09:46:37 +00:00
Tim Northover 4e423f724a ARM: use vqdmull and vqadds/vqsubs to implement vqdmlal/vqdmlsl
The NEON intrinsics vqdmlal and vqdmlsl are really just combinations of a
saturating-doubling-multiply (vqdmull) and a saturating add/sub, so now that
LLVM can spot those patterns Clang should emit them instead of specialised
intrinsics.

Feature already tested by existing ARM NEON intrinsics tests.

llvm-svn: 189462
2013-08-28 09:46:34 +00:00
Juergen Ributzka 53e2f275d2 Fix last commit.
llvm-svn: 188724
2013-08-19 23:08:53 +00:00
Juergen Ributzka c6ab1f8bfd Simplify code by using CreateMemTemp. No functional change intended.
Reviewer: Eli
llvm-svn: 188722
2013-08-19 22:20:37 +00:00
Juergen Ributzka 2c2dbf4542 Fix the name and the type of the argument for the intrinsic
_mm256_broadcastsi128_si256 to align with the Intel documentation.

This fixes bug PR 16581 and rdar:14747994.

llvm-svn: 188609
2013-08-17 16:40:09 +00:00
Hao Liu 0e9837a385 Fix the build failure of the Release version
llvm-svn: 188456
2013-08-15 11:38:54 +00:00
Hao Liu 4efa1402fe Clang and AArch64 backend patches to support shll/shl and vmovl instructions and ACLE functions
llvm-svn: 188452
2013-08-15 08:26:30 +00:00
Tim Northover 2fe823a6c3 AArch64: initial NEON support
Patch by Ana Pazos

- Completed implementation of instruction formats:
AdvSIMD three same
AdvSIMD modified immediate
AdvSIMD scalar pairwise

- Completed implementation of instruction classes
(some of the instructions in these classes
belong to yet unfinished instruction formats):
Vector Arithmetic
Vector Immediate
Vector Pairwise Arithmetic

- Initial implementation of instruction formats:
AdvSIMD scalar two-reg misc
AdvSIMD scalar three same

- Initial implementation of instruction class:
Scalar Arithmetic

- Initial clang changes to support arm v8 intrinsics.
Note: no clang changes for scalar intrinsics function name mangling yet.

- Comprehensive test cases for added instructions
To verify auto codegen, encoding, decoding, diagnosis, intrinsics.

llvm-svn: 187568
2013-08-01 09:23:19 +00:00
Bill Schmidt 778d387684 [PowerPC] Support powerpc64le as a syntax-checking target.
This patch provides basic support for powerpc64le as an LLVM target.
However, use of this target will not actually generate little-endian
code.  Instead, use of the target will cause the correct little-endian
built-in defines to be generated, so that code that tests for
__LITTLE_ENDIAN__, for example, will be correctly parsed for
syntax-only testing.  Code generation will otherwise be the same as
powerpc64 (big-endian), for now.

The patch leaves open the possibility of creating a little-endian
PowerPC64 back end, but there is no immediate intent to create such a
thing.

The new test case variant ensures that correct built-in defines for
little-endian code are generated.

llvm-svn: 187180
2013-07-26 01:36:11 +00:00
Eli Bendersky c3496b0643 Partial revert of r185568.
r186899 and r187061 added a preferred way for some architectures not to get
intrinsic generation for math builtins. So the code changes in r185568 can
now be undone (the test remains).

llvm-svn: 187079
2013-07-24 21:22:01 +00:00
Tim Northover 6aacd49094 ARM: implement low-level intrinsics for the atomic exclusive operations.
This adds three overloaded intrinsics to Clang:
    T __builtin_arm_ldrex(const volatile T *addr)
    int __builtin_arm_strex(T val, volatile T *addr)
    void __builtin_arm_clrex()

The intent is that these do what users would expect when given most sensible
types. Currently, "sensible" translates to ints, floats and pointers.

llvm-svn: 186394
2013-07-16 09:47:53 +00:00
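Using the signatures above, a minimal retry loop might look like this; a sketch for a 32-bit ARM target (atomic_increment is an illustrative name):

    /* Atomically increment *p using the exclusive-access builtins.
       __builtin_arm_strex returns 0 on success, non-zero if the
       exclusive monitor was lost and the store must be retried. */
    void atomic_increment(volatile int *p) {
        int old;
        do {
            old = __builtin_arm_ldrex(p);
        } while (__builtin_arm_strex(old + 1, p));
    }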
Richard Smith 6cbd65d84d Add a __builtin_addressof that performs the same functionality as the built-in
& operator (ignoring any overloaded operator& for the type). The purpose of
this builtin is for use in std::addressof, to allow it to be made constexpr;
the existing implementation technique (reinterpret_cast to some reference type,
take address, reinterpret_cast back) does not permit this because
reinterpret_cast between reference types is not permitted in a constant
expression in C++11 onwards.

llvm-svn: 186053
2013-07-11 02:27:57 +00:00
Eli Bendersky 9b64ec18c1 Add target hook CodeGen queries when generating builtin pow*.
Without -fmath-errno, Clang currently generates calls to @llvm.pow.* intrinsics
when it sees pow*(). This may not be suitable for all targets (for
example le32/PNaCl), so the attached patch adds a target hook that CodeGen
queries. The target can state its preference for having or not having the
intrinsic generated. Non-PNaCl behavior remains unchanged;
PNaCl-specific test added.

llvm-svn: 185568
2013-07-03 19:19:12 +00:00
Eli Bendersky 099888eccd Remove misplaced comment
llvm-svn: 184862
2013-06-25 17:07:56 +00:00
Michael Gottesman 930ecdb77b [checked-arithmetic builtins] Added builtins to enable users to perform checked arithmetic in C.
This will enable users in security-critical applications to perform
checked arithmetic in a fast, safe manner that is amenable to C.

Tests/an update to Language Extensions is included as well.

rdar://13421498.

llvm-svn: 184497
2013-06-20 23:28:10 +00:00
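A short sketch of the pattern these builtins enable, shown with the signed-int member of the family, __builtin_sadd_overflow (safe_add is an illustrative name):

    #include <stdbool.h>
    #include <stdio.h>

    /* __builtin_sadd_overflow computes a + b into *sum and returns
       true if the signed addition overflowed. */
    bool safe_add(int a, int b, int *sum) {
        return !__builtin_sadd_overflow(a, b, sum);
    }

    int main(void) {
        int s;
        if (safe_add(2147483647, 1, &s))
            printf("sum = %d\n", s);
        else
            printf("overflow\n");
        return 0;
    }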
Michael Gottesman 1534399059 [multiprecision-builtins] Added missing builtin __builtin_{add,sub}cb for {add,sub} with carry for bytes.
I have had several people ask me about why this builtin was not available in
clang (since it seems like a logical conclusion). This patch implements said
builtins.

Relevant tests are included as well. I also updated the Clang language extension reference.

rdar://14192664.

llvm-svn: 184227
2013-06-18 20:40:40 +00:00
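A sketch of the byte-wide carry chain the new builtin supports (add16_bytewise is an illustrative helper that adds two 16-bit values one byte at a time):

    /* __builtin_addcb adds two bytes plus an incoming carry and writes
       the outgoing carry to *carry_out. Chaining two calls adds a
       16-bit value byte by byte. */
    unsigned short add16_bytewise(unsigned char lo1, unsigned char hi1,
                                  unsigned char lo2, unsigned char hi2) {
        unsigned char carry;
        unsigned char lo = __builtin_addcb(lo1, lo2, 0, &carry);
        unsigned char hi = __builtin_addcb(hi1, hi2, carry, &carry);
        return (unsigned short)((hi << 8) | lo);
    }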
Rafael Espindola 2219fc5821 Fix __clear_cache on ARM.
Current versions of GCC produce an error if __clear_cache is anything but

__clear_cache(char *a, char *b);

It looks like we had just implemented a gcc bug that is now fixed.

llvm-svn: 181784
2013-05-14 12:45:47 +00:00
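In clang the corresponding builtin is __builtin___clear_cache; a minimal sketch of its intended use after patching executable memory (flush_code is an illustrative name, and the buffer handling is elided):

    /* After writing freshly generated instructions into buf, the
       instruction cache for that range must be invalidated before
       the code is executed. */
    void flush_code(char *buf, unsigned len) {
        __builtin___clear_cache(buf, buf + len);
    }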
Benjamin Kramer 4757d0aadf Revert accidental commit.
llvm-svn: 181782
2013-05-14 12:23:08 +00:00
Benjamin Kramer 324bf7a159 Take a stab at trying to unbreak the makefile build.
There is no clangRewrite.a.

llvm-svn: 181781
2013-05-14 12:21:21 +00:00
Tim Northover 8ec8c4bf89 AArch64: teach Clang about __clear_cache intrinsic
libgcc provides a __clear_cache intrinsic on AArch64, much like it
does on 32-bit ARM.

llvm-svn: 181111
2013-05-04 07:15:13 +00:00
John McCall c8e0170578 Standardize accesses to the TargetInfo in IR-gen.
Patch by Stephen Lin!

llvm-svn: 179638
2013-04-16 22:48:15 +00:00
Michael Liao ffaae3511a Add RDSEED intrinsic support defined in AVX2 extension
llvm-svn: 178331
2013-03-29 05:17:55 +00:00
John McCall 47fb950871 Change hasAggregateLLVMType, which conflates complex and
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.

Also, normalize the API for loading and storing complexes.

I'm working on a larger patch and wanted to pull these
changes out, but it would have been annoying to detangle
them from each other.

llvm-svn: 176656
2013-03-07 21:37:08 +00:00
John McCall 882987f30c Use the actual ABI-determined C calling convention for runtime
calls and declarations.

LLVM has a default CC determined by the target triple.  This is
not always the actual default CC for the ABI we've been asked to
target, and so we sometimes find ourselves annotating all user
functions with an explicit calling convention.  Since these
calling conventions usually agree for the simple set of argument
types passed to most runtime functions, using the LLVM-default CC
in principle has no effect.  However, the LLVM optimizer goes
into histrionics if it sees this kind of formal CC mismatch,
since it has no concept of CC compatibility.  Therefore, if this
module happens to define the "runtime" function, or got LTO'ed
with such a definition, we can miscompile;  so it's quite
important to get this right.

Defining runtime functions locally is quite common in embedded
applications.

llvm-svn: 176286
2013-02-28 19:01:20 +00:00
Will Dietz f54319c891 [ubsan] Add support for -fsanitize-blacklist
llvm-svn: 172808
2013-01-18 11:30:38 +00:00
Tim Northover 4ef746768b Correct order of operands forwarding NEON vfma to LLVM fma
llvm-svn: 172650
2013-01-16 20:13:15 +00:00
Michael Gottesman a2b5c4ba6a Multiprecision subtraction builtins.
We lower these into 2x chained usub.with.overflow intrinsics.

llvm-svn: 172476
2013-01-14 21:44:30 +00:00
NAKAMURA Takumi 7ab4fbf5c2 CGBuiltin.cpp: Fix abuse of ArrayRef in EmitOverflowIntrinsic().
In ArrayRef<T>(X), X should not be a temporary value. It could be rewritten more verbosely as:

  llvm::Type *XTy = X->getType();
  ArrayRef<llvm::Type *> Ty(XTy);
  llvm::Value *Callee = CGF.CGM.getIntrinsic(IntrinsicID, Ty);

Since it is safe for both XTy and Ty to be temporaries within one statement, it can be shortened to:

  llvm::Value *Callee = CGF.CGM.getIntrinsic(IntrinsicID, ArrayRef<llvm::Type*>(X->getType()));

ArrayRef<T> has an implicit constructor that creates a single-element array from a T, so this becomes:

  llvm::Value *Callee = CGF.CGM.getIntrinsic(IntrinsicID, X->getType());

Without this fix, an MSVC-built clang.exe crashed.

llvm-svn: 172352
2013-01-13 11:26:44 +00:00
Michael Gottesman 54398015bf Added builtins for multiprecision adds.
We lower all of these intrinsics into a 2x chained usage of
uadd.with.overflow.

llvm-svn: 172341
2013-01-13 02:22:39 +00:00
Dmitri Gribenko f857950d39 Remove useless 'llvm::' qualifier from names like StringRef and others that are
brought into 'clang' namespace by clang/Basic/LLVM.h

llvm-svn: 172323
2013-01-12 19:30:44 +00:00
Chandler Carruth ffd5551bc7 Rewrite #includes for llvm/Foo.h to llvm/IR/Foo.h as appropriate to
reflect the migration in r171366.

Re-sort the #include lines to reflect the new paths.

llvm-svn: 171369
2013-01-02 11:45:17 +00:00
Meador Inge b97878a235 CodeGen: Expand creal and cimag into complex field loads
PR 14529 was opened because neither Clang nor LLVM was expanding
calls to creal* or cimag* into instructions that just load the
respective complex field.  After some discussion, it was not
considered realistic to do this in LLVM because of the platform
specific way complex types are expanded.  Thus a way to solve
this in Clang was pursued.  GCC does a similar expansion.

This patch adds the feature to Clang by making the creal* and
cimag* functions library builtins and modifying the builtin code
generator to look for the new builtin types.

llvm-svn: 170455
2012-12-18 20:58:04 +00:00
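The user-visible effect, sketched with the standard <complex.h> calls; with this change clang expands them to loads of the complex value's fields rather than library calls:

    #include <complex.h>
    #include <stdio.h>

    int main(void) {
        double _Complex z = 3.0 + 4.0 * I;
        /* creal/cimag are now library builtins: each expands to a
           direct load of the corresponding field of z. */
        printf("re=%f im=%f\n", creal(z), cimag(z));
        return 0;
    }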
Chandler Carruth 3a02247dc9 Sort all of Clang's files under 'lib', and fix up the broken headers
uncovered.

This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.

I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.

llvm-svn: 169237
2012-12-04 09:13:33 +00:00
Will Dietz 88e0233ff4 [ubsan] Add flag to enable recovery from checks when possible.
llvm-svn: 169114
2012-12-02 19:50:33 +00:00
Richard Smith b1b0ab41e7 Use the individual -fsanitize=<...> arguments to control which of the UBSan
checks to enable. Remove frontend support for -fcatch-undefined-behavior,
-faddress-sanitizer and -fthread-sanitizer now that they don't do anything.

llvm-svn: 167413
2012-11-05 22:21:05 +00:00
Micah Villmow ea2fea2a60 Cleanup some clang code to use new type functions instead of using cast<>.
llvm-svn: 166684
2012-10-25 15:39:14 +00:00
Nico Weber 636fc09960 "Implement" codegen support for __noop().
Eli discovered that __noop's sema behavior also needs some love. I filed
PR14081 for that and intend to improve it.

llvm-svn: 165886
2012-10-13 22:30:41 +00:00
Richard Smith e30752c93b -fcatch-undefined-behavior: emit calls to the runtime library whenever one of the checks fails.
llvm-svn: 165536
2012-10-09 19:52:38 +00:00
Micah Villmow dd31ca10ef Move TargetData to DataLayout.
llvm-svn: 165395
2012-10-08 16:25:52 +00:00
Benjamin Kramer a801f4a81d Expose __builtin_bswap16.
GCC has always supported this on PowerPC and 4.8 supports it on all platforms,
so it's a good idea to expose it in clang too. LLVM supports this on all targets.

llvm-svn: 165362
2012-10-06 14:42:22 +00:00
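A one-line usage sketch:

    #include <stdio.h>

    int main(void) {
        /* __builtin_bswap16 reverses the two bytes of a 16-bit value. */
        unsigned short x = __builtin_bswap16(0x1234);   /* 0x3412 */
        printf("%#x\n", x);
        return 0;
    }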
Bob Wilson 39d8a132df Add an FMA intrinsic for ARM Neon.
llvm-svn: 164904
2012-09-29 23:52:48 +00:00
Sylvestre Ledru 33b5baf189 Revert 'Fix a typo 'iff' => 'if''. iff is an abbreviation of if and only if. See: http://en.wikipedia.org/wiki/If_and_only_if Commit 164766
llvm-svn: 164769
2012-09-27 10:16:10 +00:00
Sylvestre Ledru a876013dc9 Fix a typo 'iff' => 'if'
llvm-svn: 164766
2012-09-27 09:57:10 +00:00
Nico Weber ca496f34a2 Add codegen support for the __debugbreak intrinsic.
llvm-svn: 164660
2012-09-26 05:40:16 +00:00
Jim Grosbach 11b6fe5e9c ARM: Use a dedicated intrinsic for vector bitwise select.
The expression based expansion too often results in IR level optimizations
splitting the intermediate values into separate basic blocks, preventing
the formation of the VBSL instruction as the code author intended. In
particular, LICM would often hoist part of the computation out of a loop.

rdar://11011471

llvm-svn: 164342
2012-09-21 00:18:30 +00:00
Jim Grosbach d3608f433a Tidy up. Trailing whitespace and 80 columns.
llvm-svn: 164341
2012-09-21 00:18:27 +00:00
Richard Smith 4d1458ed38 -fcatch-undefined-behavior: Factor emission of the creation of, and branch to,
the trap BB out of the individual checks and into a common function, to prepare
for making this code call into a runtime library. Rename the existing EmitCheck
to EmitTypeCheck to clarify it and to move it out of the way of the new
EmitCheck.

llvm-svn: 163451
2012-09-08 02:08:36 +00:00
Eli Friedman 504f9a2872 Make alignment computation for pointer values for builtins handle
non-pointer types with a pointer representation correctly. PR13660.

llvm-svn: 162862
2012-08-29 21:21:11 +00:00
Eli Friedman 5d14c48dbb Attempt to fix clang bootstrap (broken by r162425).
llvm-svn: 162440
2012-08-23 11:27:56 +00:00
Eli Friedman a5dd5684dc Use the alignment from lvalue emission to more accurately compute the alignment
of a pointer for builtin emission, instead of just depending on the type of the
pointee.  <rdar://problem/11314941>.

llvm-svn: 162425
2012-08-23 03:10:17 +00:00
Fariborz Jahanian 1ac111989d irgen: inline code for several of the complex builtin
calls. // rdar://8315199

llvm-svn: 161891
2012-08-14 20:09:28 +00:00
Bob Wilson 2605fef7db Avoid using i64 types for vld1q_lane/vst1q_lane intrinsics.
The backend has to legalize i64 types by splitting them into two 32-bit pieces,
which leads to poor quality code.  If we produce code for these intrinsics that
uses one-element vector types, which can live in Neon vector registers without
getting split up, then the generated code is much better.  Radar 11998303.

llvm-svn: 161879
2012-08-14 17:27:04 +00:00
Hal Finkel 3fadbb54fd Add __builtin_readcyclecounter() to produce the @llvm.readcyclecounter() intrinsic.
llvm-svn: 161310
2012-08-05 22:03:08 +00:00
Joel Jones 682150364a More replacing of target-dependent intrinsics with target-independent
intrinsics.  The second group of instructions to be handled is the vector
versions of count set bits (ctpop).

The changes here are to clang so that it generates a target independent 
vector ctpop when it sees an ARM dependent vector bits set count.  The changes 
in llvm are to match the target independent vector ctpop and in 
VMCore/AutoUpgrade.cpp to update any existing bc files containing ARM 
dependent vector pop counts with target-independent ctpops.  There are also 
changes to an existing test case in llvm for ARM vector count instructions and 
to a test for the bitcode upgrade.

<rdar://problem/11892519>

There is deliberately no test for the change to clang, as so far as I know, no
consensus has been reached regarding how to test neon instructions in clang;
q.v. <rdar://problem/8762292>

llvm-svn: 160409
2012-07-18 00:01:03 +00:00
Simon Atanasyan 94a6d863a9 Revert commit r160308. We decide to move builtins selection to the backend.
llvm-svn: 160353
2012-07-17 08:15:06 +00:00
Simon Atanasyan a06d06b660 MIPS: Implement __builtin_mips_shll_qb builtin function overloading.
This function has two versions. The first one is used for a register operand.
The second one is used for an immediate number.

llvm-svn: 160308
2012-07-16 18:52:02 +00:00
Eric Christopher 934a1c0231 Capitalize comment.
llvm-svn: 160220
2012-07-14 19:29:12 +00:00
Joel Jones 3e00e9d5c1 This is one of the first steps toward replacing target-dependent
intrinsics with target-independent intrinsics.  The first group of instructions
to be handled is the vector versions of count leading zeros (ctlz).

The changes here are to clang so that it generates a target independent 
vector ctlz when it sees an ARM dependent vector ctlz.  The changes in llvm 
are to match the target independent vector ctlz and in VMCore/AutoUpgrade.cpp 
to update any existing bc files containing ARM dependent vector ctlzs with 
target-independent ctlzs.  There are also changes to an existing test case in 
llvm for ARM vector count instructions and a new test for the bitcode upgrade.

<rdar://problem/11831778>

There is deliberately no test for the change to clang, as so far as I know, no
consensus has been reached regarding how to test neon instructions in clang;
q.v. <rdar://problem/8762292>

llvm-svn: 160201
2012-07-13 23:26:27 +00:00
Benjamin Kramer a43b6999ff Add _rdrand{16,32,64}_step intrinsics to immintrin.h
llvm-svn: 160118
2012-07-12 09:33:03 +00:00
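A usage sketch for one of the new intrinsics; assumes a target with RDRAND, e.g. compiling with -mrdrnd:

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        unsigned int value;
        /* _rdrand32_step returns 1 if a hardware random value was
           delivered into 'value', 0 if none was available. */
        if (_rdrand32_step(&value))
            printf("random: %u\n", value);
        else
            printf("rdrand not ready, retry\n");
        return 0;
    }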
John McCall 8dda7b27ee Distinguish more carefully between free functions and C++ instance methods
in the ABI arrangement, and leave a hook behind so that we can easily
tweak CCs on platforms that use different CCs by default for C++
instance methods.

llvm-svn: 159894
2012-07-07 06:41:13 +00:00
Benjamin Kramer 46a72fb741 Dead code eliminate the massive hexagon builtin intrinsic supporting code.
The tablegen'd code does the same thing without this egregious duplication.
In my limited testing everything seems to work; however, there can be
differences if the clang and llvm builtin definitions don't match.

llvm-svn: 159371
2012-06-28 20:08:55 +00:00
Benjamin Kramer 8652ca8a6a Now that we use the GCC builtin <-> LLVM intrinsic mapping, dead code eliminate the handwritten emitter.
The generated code uncovered an invalid prototype for __builtin_mips_shilo; fix it along the way.

llvm-svn: 159368
2012-06-28 19:10:01 +00:00
Simon Atanasyan 07ce7d8fb5 Support MIPS DSP Rev1 intrinsics.
This patch was reviewed in the llvm-commits list by Jim Grosbach.

llvm-svn: 159366
2012-06-28 18:23:16 +00:00
Richard Smith 01ade177e9 If the first argument of __builtin_object_size can be folded to a constant
pointer, but such folding encounters side-effects, ignore the side-effects
rather than performing them at runtime: CodeGen generates wrong code for
__builtin_object_size in that case.

llvm-svn: 157310
2012-05-23 04:13:20 +00:00
Nuno Lopes 2b1ff46ed1 revert the usage of the objectsize intrinsic with 3 parameters (to match LLVM r157255)
llvm-svn: 157256
2012-05-22 15:26:48 +00:00
Sirish Pande 84dce5d0c2 Hexagon V5 intrinsics support in clang.
llvm-svn: 156630
2012-05-11 19:39:08 +00:00
Nuno Lopes ddcce0bb90 update calls to objectsize intrinsic to match LLVM r156473
add a test for -fbounds-checking code generation

llvm-svn: 156474
2012-05-09 15:53:34 +00:00
Craig Topper c83dff0993 Convert AVX non-temporal store builtins to LLVM-native IR. This was previously done for SSE builtins.
llvm-svn: 156296
2012-05-07 06:25:45 +00:00
Chandler Carruth 70ac923ebc Revert r155363, due to the underlying patches in LLVM causing regression
test suite failures.

llvm-svn: 155371
2012-04-23 18:25:40 +00:00
Sirish Pande 7039d0eaee Hexagon V5 (floating point) support in cfe.
llvm-svn: 155363
2012-04-23 17:48:57 +00:00
Chandler Carruth b8ae76037a Revert some Hexagon builtin commits to match reverts done to LLVM in
r155047. See the LLVM log for the primary motivation:
  http://llvm.org/viewvc/llvm-project?rev=155047&view=rev

Primary commit r154828:
  - Several issues were raised in review, and fixed in subsequent
    commits.
  - Follow-up commits also reverted, and which should be folded into the
    original before reposting:
    - r154837: Re-add the 'undef BUILTIN' thing to fix the build.
    - r154928: Fix build warnings, re-add (and correct) header and
      license
    - r154937: Typo fix.

Please resubmit this patch with the relevant LLVM resubmission.

llvm-svn: 155048
2012-04-18 21:32:25 +00:00
Sirish Pande f02eebef2a Hexagon V5(Floating Point) support.
llvm-svn: 154828
2012-04-16 17:04:05 +00:00
Richard Smith 01ba47d7b6 Implement the missing pieces needed to support libstdc++4.7's <atomic>:
__atomic_test_and_set, __atomic_clear, plus a pile of undocumented __GCC_*
predefined macros.

Implement library fallback for __atomic_is_lock_free and
__c11_atomic_is_lock_free, and implement __atomic_always_lock_free.

Contrary to their documentation, GCC's __atomic_fetch_add family don't
multiply the operand by sizeof(T) when operating on a pointer type.
libstdc++ relies on this quirk. Remove this handling for all but the
__c11_atomic_fetch_add and __c11_atomic_fetch_sub builtins.

Contrary to their documentation, __atomic_test_and_set and __atomic_clear
take a first argument of type 'volatile void *', not 'void *' or 'bool *',
and __atomic_is_lock_free and __atomic_always_lock_free have an argument
of type 'const volatile void *', not 'void *'.

With this change, libstdc++4.7's <atomic> passes libc++'s atomic test suite,
except for a couple of libstdc++ bugs and some cases where libc++'s test
suite tests for properties which implementations have latitude to vary.

llvm-svn: 154640
2012-04-13 00:45:38 +00:00
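A minimal spinlock sketch using the two new builtins; lock_flag, spin_lock, and spin_unlock are illustrative names, and note the 'volatile void *' argument type described above:

    static volatile char lock_flag;   /* byte used as the lock word */

    void spin_lock(void) {
        /* __atomic_test_and_set returns the previous state: spin while
           the flag was already set. */
        while (__atomic_test_and_set(&lock_flag, __ATOMIC_ACQUIRE))
            ;
    }

    void spin_unlock(void) {
        __atomic_clear(&lock_flag, __ATOMIC_RELEASE);
    }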
Richard Smith b1e36c662b Provide, and document, a set of __c11_atomic_* intrinsics to implement C11's
<stdatomic.h> header.

In passing, fix LanguageExtensions to note that C11 and C++11 are no longer
"upcoming standards" but are now actually standardized.

llvm-svn: 154513
2012-04-11 17:55:32 +00:00
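A brief sketch of how the C11-style intrinsics look in use (counter and bump are illustrative names; the memory-order macro is predefined by the compiler):

    /* __c11_atomic_fetch_add operates on an _Atomic object and returns
       the value the object held before the addition. */
    _Atomic(int) counter;

    int bump(void) {
        return __c11_atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST);
    }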
Eli Friedman fefe0d07ea Don't try to create "store atomic" instructions of non-integer types; they aren't supported at the moment. PR12040.
llvm-svn: 152891
2012-03-16 01:48:04 +00:00
James Molloy 4813fc8ed6 Fix codegen for vld{3,4}_dup intrinsics.
Patch by Silviu Baranga!

llvm-svn: 152788
2012-03-15 09:12:01 +00:00
Chris Lattner aaa18fad7d add a testcase for PR12094 and fix a crash on pointer to incomplete type,
reported by Richard Smith.

llvm-svn: 151993
2012-03-04 00:52:12 +00:00
Jay Foad b0f3344b10 PR12094: Set the alignment of memory intrinsic instructions based on the
types of the pointer arguments.

llvm-svn: 151927
2012-03-02 18:34:30 +00:00
Bill Wendling f1a3fcac0d Use an ArrayRef when we can instead of passing in a SmallVectorImpl reference.
llvm-svn: 151150
2012-02-22 09:30:11 +00:00
Chandler Carruth a2a5410e6d Add 3dNOW intrinsic header to x86intrin.h, conditioned on __3dNOW__ to
match the behavior of GCC. Also add a test for these intrinsics, which
apparently have *zero* tests. =[ Not surprisingly, Clang crashed when
compiling these.

Fix the bug in CodeGen where we failed to bitcast the argument type to
x86mmx prior to calling the LLVM intrinsic. This fixes an assert on the
new 3dnow-builtins.c test.

This is one issue impacting the efforts to get Clang to emulate the
Microsoft intrinsics headers -- 3dnow intrinsics are implicitly made
available there.

llvm-svn: 150948
2012-02-20 07:35:45 +00:00
Chris Lattner ece0409a1a simplify a bunch of code to use the well-known LLVM IR types computed by CodeGenModule.
llvm-svn: 149943
2012-02-07 00:39:47 +00:00
Bob Wilson 49708d41a6 Preserve alignment for Neon vld1_lane/dup and vst1_lane intrinsics.
We had been generating load/store instructions with the default alignment
for the vector element type, even when the pointer argument had less alignment.
<rdar://problem/10538555>

llvm-svn: 149794
2012-02-04 23:58:08 +00:00
Craig Topper 2962e7b656 Remove long dead code for handling vector shift by immediate builtins.
llvm-svn: 149237
2012-01-30 08:51:36 +00:00
Craig Topper 2c82f9ecf8 Remove custom handling for cmpsd/cmpss/cmppd/cmpps builtins. The builtins are now in IntrinsicsX86.td.
llvm-svn: 149235
2012-01-30 08:38:42 +00:00
Craig Topper d6d3a05b4f Cleanup 3dnow builtin handling. Most of them were already handled by LLVM connecting intrinsics and builtins in IntrinsicsX86.td.
llvm-svn: 149233
2012-01-30 08:18:19 +00:00
Benjamin Kramer 1412816686 Make the __builtin_c[lt]zs builtins target independent.
There is really no reason to have these only available on x86. It's
just __builtin_c[tl]z for shorts.

Modernize the test while at it.

llvm-svn: 149183
2012-01-28 18:42:57 +00:00
Bob Wilson a7a61e2701 Make clz/ctz builtins defined for zero on ARM targets. rdar://10732455
ARM supports clz and ctz directly and both operations have well-defined
results for zero.  There is no disadvantage in performance to using the
defined-at-zero versions of llvm.ctlz/cttz intrinsics.  We're running into
ARM-specific code written with the assumption that __builtin_clz(0) == 32,
even though that value is technically undefined.  The code is failing now
because of llvm optimizations that are taking advantage of the undef
behavior (specifically svn r147255).  There's nothing wrong with that
optimization on x86 where any incorrect assumptions about __builtin_clz(0)
will quickly be exposed.  For ARM, though, optimizations based on that undef
behavior are likely to cause subtle bugs.  Other targets with defined-at-zero
clz/ctz support may want to override the default behavior as well.

llvm-svn: 149086
2012-01-26 22:14:27 +00:00
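The portable guard the commit alludes to, sketched (count_leading_zeros is an illustrative name):

    unsigned count_leading_zeros(unsigned x) {
        /* __builtin_clz(0) is undefined in general; ARM's clz instruction
           happens to return 32 for zero, which is the behaviour some code
           had come to rely on. Guarding keeps the result well defined
           on every target. */
        return x ? (unsigned)__builtin_clz(x) : 32u;
    }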
Chris Lattner 2d6b7b91b9 reapply r148902:
"use the new ConstantVector::getSplat method where it makes sense."

Also simplify a bunch of code to use the Builder->getInt32 instead
of doing it the hard and ugly way.  Much more progress could be made
here, but I don't plan to do it.

llvm-svn: 148926
2012-01-25 05:34:41 +00:00
Argyrios Kyrtzidis 5a25297c5e Revert 148902 which was part of 148901 which was reverted in r148906.
Original log:
 use the new ConstantVector::getSplat method where it makes sense.

llvm-svn: 148907
2012-01-25 02:58:12 +00:00
Chris Lattner c558d7d176 use the new ConstantVector::getSplat method where it makes sense.
llvm-svn: 148902
2012-01-25 02:06:10 +00:00
David Blaikie e4d798f078 More dead code removal (using -Wunreachable-code)
llvm-svn: 148577
2012-01-20 21:50:17 +00:00
Eli Friedman 65499b45f0 Add __builtin_labs and __builtin_llabs, to complete the set of __builtin_*abs. Patch by Ruben Van Boxem.
llvm-svn: 148340
2012-01-17 22:11:30 +00:00
David Blaikie f47fa304a4 Remove unnecessary default cases in switches over enums.
This allows -Wswitch-enum to find switches that need updating when these enums are modified.

llvm-svn: 148281
2012-01-17 02:30:50 +00:00
Eli Friedman df88c54f8d Revert r147655; it's breaking the compiler_rt build on OSX.
llvm-svn: 147677
2012-01-06 20:03:09 +00:00
David Chisnall 9217435a33 If we are compiling with -fno-builtin then don't do constant folding of
builtins.

This fixes PR11711.

llvm-svn: 147655
2012-01-06 12:20:19 +00:00