Commit Graph

516 Commits

Author SHA1 Message Date
Saleem Abdulrasool ece7217f70 ARM: rename ARM builtins to use __builtin_arm prefix
This corrects SVN r212196's naming change to use the proper prefix of
`__builtin_arm_` instead of `__builtin_`.

Thanks to Yi Kong for pointing out the incorrect naming!

llvm-svn: 212253
2014-07-03 02:43:20 +00:00
Saleem Abdulrasool 4bddd9d400 CodeGen: make target builtins support languages
This extends the target builtin support to allow language specific annotations
(i.e. LANGBUILTIN).  This is to allow MSVC compatibility whilst retaining the
ability to have EABI targets use a __builtin_ prefix.  This is merely to allow
uniformity in the EABI case where the unprefixed name is provided as an alias in
the header.

llvm-svn: 212196
2014-07-02 17:41:27 +00:00
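For illustration, a hypothetical pair of BuiltinsARM.def entries in the shape this change enables; the type/attribute strings and the language-mask name are assumptions, not taken from the commit:

  // The LANGBUILTIN entry carries a language mask, so the unprefixed MSVC
  // spelling is only visible in MS-compatible modes, while the
  // __builtin_arm_-prefixed form stays available everywhere.
  LANGBUILTIN(__yield, "v", "", ALL_MS_LANGUAGES)
  BUILTIN(__builtin_arm_yield, "v", "")
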
Tim Northover 3acd6bd0b6 ARM: add support for v8 ldaex/stlex builtins.
ARMv8 adds (to both AArch32 and AArch64) acquiring and releasing
variants of the exclusive operations, in line with the C++11 memory
model.

This adds support for two new intrinsics to expose them to C & C++
developers directly: __builtin_arm_ldaex and __builtin_arm_stlex, in
direct analogy with the versions with no implicit barrier.

rdar://problem/15885451

llvm-svn: 212175
2014-07-02 12:56:02 +00:00
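A minimal usage sketch built around the new builtins; the atomic-increment wrapper itself is illustrative, not part of the commit:

  // Atomically increment *addr with acquire/release semantics (ARMv8).
  // __builtin_arm_ldaex returns the loaded value; __builtin_arm_stlex
  // returns 0 on success, like strex.
  static inline void atomic_increment(int *addr) {
    int value;
    do {
      value = __builtin_arm_ldaex(addr);            // load-acquire exclusive
    } while (__builtin_arm_stlex(value + 1, addr)); // store-release exclusive
  }
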
Craig Topper 00bbdcf9b3 Remove llvm:: from uses of ArrayRef.
llvm-svn: 211987
2014-06-28 23:22:23 +00:00
Matt Arsenault 56f008d538 Add R600 builtin codegen.
llvm-svn: 211631
2014-06-24 20:45:01 +00:00
Tim Northover 6ea28bdef5 ARM: remove dead CodeGen functions.
These two are no longer being used by NEON codegen.

llvm-svn: 211586
2014-06-24 12:07:44 +00:00
Jim Grosbach e59c43dc21 Fix spelling. s/overloaed/overloaded/
llvm-svn: 211530
2014-06-23 20:28:43 +00:00
Saleem Abdulrasool 114efe0dc8 CodeGen: improve MS intrinsics support
Add support for _InterlockedCompareExchangePointer, _InterlockedExchangePointer,
and _InterlockedExchange.  These are available as compiler intrinsics on ARM and
x86.  They are used directly by the Windows SDK headers without including the
intrin header.

llvm-svn: 211216
2014-06-18 20:51:10 +00:00
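A short sketch of the kind of Windows-SDK-style code this enables when compiled in an MS-compatible mode; the lock-free list push is an illustrative assumption:

  // Push 'node' onto a singly linked list head using only the compiler
  // intrinsic (no intrin.h include needed, per the commit above).
  struct Node { Node *next; };

  void push(Node *volatile *head, Node *node) {
    Node *old;
    do {
      old = *head;
      node->next = old;
    } while (_InterlockedCompareExchangePointer(
                 (void *volatile *)head, node, old) != (void *)old);
  }
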
Jim Grosbach 79140826bc AArch64: Support for __builtin_arm_rbit() and __builtin_arm_rbit64().
These reverse the bits of a 32-bit and a 64-bit value, respectively.

rdar://9283021

llvm-svn: 211060
2014-06-16 21:56:02 +00:00
Jim Grosbach 171ec34544 ARM: Support for __builtin_arm_rbit() intrinsic.
Reverse the bits in a word. Maps to the RBIT instruction.

rdar://9283021

llvm-svn: 211059
2014-06-16 21:55:58 +00:00
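For example, a thin wrapper (the function itself is illustrative):

  // Reverses the bit order of a 32-bit value; compiles to a single RBIT,
  // e.g. reverse_bits(0x1u) == 0x80000000u.
  unsigned int reverse_bits(unsigned int x) {
    return __builtin_arm_rbit(x);
  }
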
Tim Northover b49b04bbe0 IR-change: cmpxchg operations now return { iN, i1 }.
This is a minimal fix for clang. I'll soon add support for generating
weak variants when requested, but that's not really necessary for the
LLVM change in isolation.

llvm-svn: 210907
2014-06-13 14:24:59 +00:00
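To relate the IR change to the source level: Clang's GCC-style atomic builtins already surface both pieces of the new { iN, i1 } result, the original value (written back to *expected on failure) and the success flag. The sketch below is illustrative, not from the commit:

  // CodeGen now extracts both fields of the { i32, i1 } cmpxchg result:
  // the i1 becomes the boolean return, the i32 refreshes *expected.
  bool try_update(int *p, int *expected, int desired) {
    return __atomic_compare_exchange_n(p, expected, desired,
                                       /*weak=*/false,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  }
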
Richard Smith 760520bcb7 Add __builtin_operator_new and __builtin_operator_delete, which act like calls
to the normal non-placement ::operator new and ::operator delete, but allow
optimizations in the same way that new-expressions and delete-expressions do.

llvm-svn: 210137
2014-06-03 23:27:44 +00:00
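A small sketch of the intended use (the wrapper functions are illustrative):

  #include <cstddef>

  // Calls lower to the ordinary ::operator new / ::operator delete, but the
  // compiler may elide or merge them, just as it may for new-expressions and
  // delete-expressions.
  inline void *allocate_bytes(std::size_t n) { return __builtin_operator_new(n); }
  inline void deallocate_bytes(void *p) { __builtin_operator_delete(p); }
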
Michael J. Spencer 5ce26687f2 [CodeGen] Don't use SizeTy for EmitNeonSplat.
llvm-svn: 210042
2014-06-02 19:48:59 +00:00
Michael J. Spencer dd59775f06 [CodeGen] Don't cast and use SizeTy instead of Int32Ty when constructing {extract,insert} vector element instructions.
llvm-svn: 209942
2014-05-31 00:22:12 +00:00
Tim Northover 573cbee543 AArch64/ARM64: rename ARM64 components to AArch64
This keeps Clang consistent with backend naming conventions.

llvm-svn: 209579
2014-05-24 12:52:07 +00:00
Tim Northover 25e8a6754e AArch64/ARM64: update Clang after AArch64 removal.
A few (mostly CodeGen) parts of Clang were tightly coupled to the
AArch64 backend. Now that it's gone, they will not even compile.

I've also deduplicated RUN lines in many of the AArch64 tests. This
might improve "make check-all" time noticably: some of those NEON
tests were monsters.

llvm-svn: 209578
2014-05-24 12:51:25 +00:00
Craig Topper 8a13c4180e [C++11] Use 'nullptr'. CodeGen edition.
llvm-svn: 209272
2014-05-21 05:09:00 +00:00
Hao Liu 9f9492b657 [ARM64] Fix a bug where right-shifting a uint64_t by 64 generated an incorrect result.
llvm-svn: 208761
2014-05-14 08:59:30 +00:00
Saleem Abdulrasool 956c2ec532 CodeGen: complete ARM ACLE hint 8.4 support
Add support for the remaining hints from the ACLE.  Although __dbg is listed as
a hint, it is handled differently, so it is not covered by this change.

llvm-svn: 207930
2014-05-04 02:52:25 +00:00
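An illustrative wait/signal pair using ACLE hint intrinsics of the kind this change completes; the flag protocol is an assumption, and the unprefixed names may require arm_acle.h or an MS-compatible mode depending on configuration:

  extern volatile int ready;

  void wait_for_ready(void) {
    while (!ready)
      __wfe();   // hint: wait for an event before re-checking the flag
  }

  void signal_ready(void) {
    ready = 1;
    __sev();     // hint: send an event to any waiting cores
  }
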
Saleem Abdulrasool 38ed6de3a0 CodeGen: rename __builtin_arm_sevl to __sevl
ACLE adds the __sevl() extension.  Rename the hint from a custom name to the
ACLE-specified name.

llvm-svn: 207829
2014-05-02 06:53:57 +00:00
James Molloy fa40368d9d [ARM64] Add arm64_be where it was accidentally missed from a bunch of if-conditions.
I think this is the last commit for ARM64 big endian in clang. This commit makes
arm_neon.h compile correctly.

llvm-svn: 207624
2014-04-30 10:11:40 +00:00
Hao Liu a19a2e2da6 [ARM64] Fix a bug where UQSHL/SQSHL with a constant i64 shift amount could not be selected.
llvm-svn: 207401
2014-04-28 07:36:12 +00:00
Saleem Abdulrasool 2b99f2f7a4 CodeGen: remove an unused variable
llvm-svn: 207390
2014-04-28 02:29:11 +00:00
Sylvestre Ledru 464902589e remove useless code
llvm-svn: 207360
2014-04-27 14:57:31 +00:00
Saleem Abdulrasool b9f07e3dbc CodeGen: add __yield intrinsic for ARM
The __yield intrinsic generates a hint instruction to indicate that the thread
is not performing any useful operations at the moment.  This is for
compatibility with MSVC, although the intrinsic is also part of the ACLE and
is enabled globally as a result.

llvm-svn: 207275
2014-04-25 21:13:29 +00:00
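A minimal illustration (the spin loop is an assumption; __yield is the intrinsic described above):

  // Busy-wait politely: hint to the core that this thread has no useful
  // work right now, so SMT/heterogeneous designs can deprioritise it.
  void spin_until_set(volatile int *flag) {
    while (*flag == 0)
      __yield();
  }
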
Saleem Abdulrasool 0fd930e86c CodeGen: replace use of @llvm.arm.sevl with @llvm.arm.hint
Use the new generic @llvm.arm.hint intrinsic rather than the specialised
@llvm.arm.sevl intrinsic.

llvm-svn: 207243
2014-04-25 17:25:46 +00:00
Tim Northover b17f9a4609 ARM64: add a few bits of polynomial intrinsic codegen.
llvm-svn: 205303
2014-04-01 12:23:08 +00:00
Tim Northover 74b2def0c5 ARM64: add missing ldN/stN intrinsics and enable tests.
llvm-svn: 205296
2014-04-01 10:37:47 +00:00
Tim Northover 0c68faa455 ARM64: enable aarch64-neon-intrinsics.c test
This adds support for the various NEON intrinsics used by
aarch64-neon-intrinsics.c (originally written for AArch64) and enables the
test.

My implementations are designed to be semantically correct; the actual code
quality looks like it's a wash between the two backends, and is frequently
different (hence the large number of CHECK changes).

llvm-svn: 205210
2014-03-31 15:47:09 +00:00
Dmitri Gribenko f461da5306 Remove unused variable
llvm-svn: 205169
2014-03-31 07:52:35 +00:00
Tim Northover 6166ec21be ARM64: remove currently trivial switch statement
llvm-svn: 205167
2014-03-31 07:20:13 +00:00
Tim Northover 97606edd75 ARM64: Fix GCC warning in CGBuiltin.cpp
llvm-svn: 205104
2014-03-29 15:26:07 +00:00
Tim Northover a2ee433c8d ARM64: initial clang support commit.
This adds Clang support for the ARM64 backend. There are definitely
still some rough edges, so please bring up any issues you see with
this patch.

As with the LLVM commit though, we think it'll be more useful for
merging with AArch64 from within the tree.

llvm-svn: 205100
2014-03-29 15:09:45 +00:00
Christian Pirker f01cd6f57b Add ARM big endian Target (armeb, thumbeb)
Reviewed at http://llvm-reviews.chandlerc.com/D3096

llvm-svn: 205008
2014-03-28 14:40:46 +00:00
Reid Kleckner 597e81dea1 -fms-extensions: Add __va_start builtin, which is used for x64
The main difference between __va_start and __builtin_va_start is that
the address of the va_list has already been taken, and the va_list is
always a char*.

__va_end and __va_arg are not needed.

llvm-svn: 204821
2014-03-26 15:38:33 +00:00
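A rough sketch of the expected consumer, assuming an MS-style va_start that expands to the builtin and a char* va_list; the expansion shown is an assumption, not taken from the commit or the SDK headers:

  #include <stdarg.h>

  int first_vararg(int last, ...) {
    va_list ap;                 // a char* under the MS x64 ABI
    __va_start(&ap, last);      // note: takes the *address* of the va_list
    int value = va_arg(ap, int);
    va_end(ap);
    return value;
  }
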
Renato Golin c491a8d457 Add support for __builtin___clear_cache in Clang
Add the mapping from __builtin___clear_cache to @llvm.clear_cache

llvm-svn: 204820
2014-03-26 15:36:05 +00:00
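For instance (the JIT-style copy is illustrative; the builtin and intrinsic names are from the commit):

  #include <string.h>

  // Copy freshly generated machine code into place, then flush the range so
  // the instruction fetch side observes it; the call lowers to
  // @llvm.clear_cache.
  void publish_code(char *dst, const char *src, unsigned long n) {
    memcpy(dst, src, n);
    __builtin___clear_cache(dst, dst + n);
  }
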
Timur Iskhodzhanov f7af2e6de8 Fix a compile-time warning
lib/CodeGen/CGBuiltin.cpp:3136:12: warning: variable ‘TblPos’ set but not used [-Wunused-but-set-variable]

llvm-svn: 204599
2014-03-24 11:09:01 +00:00
Arnaud A. de Grandmaison 6756a497a1 Cleanup dead assignments reported by scan-build
llvm-svn: 204569
2014-03-23 20:28:07 +00:00
Tim Northover 0622b3a67a Update for IR: add a second AtomicOrdering to cmpxchg insts.
rdar://problem/15996804

llvm-svn: 203560
2014-03-11 10:49:03 +00:00
Ted Kremenek 90097491ed Remove 'break' dominated by 'return' in 'EmitBuiltinExpr'.
llvm-svn: 203080
2014-03-06 05:37:38 +00:00
Tim Northover b44e080dbb AArch64: use less cluttered intrinsic for vtbl/vtbx
The table is always 128-bit so there's no reason to specify it every time we
want the intrinsic.

llvm-svn: 202259
2014-02-26 11:55:15 +00:00
Tim Northover 2df47cedeb AArch64: use different type modifier in arm_neon.td
The 'f' modifier is really designed for integer type arguments (according to
its documentation). It's better to use the "half width, same number" modifier.

Should be no user-visible change.

llvm-svn: 202152
2014-02-25 13:53:01 +00:00
Christian Pirker 9b019ae899 Add AArch64 big endian Target (aarch64_be)
llvm-svn: 202151
2014-02-25 13:51:00 +00:00
Warren Hunt 20e4a5d2af Reapply r201734, but with appropriate GCC compatibility
Because GCC incorrectly defines _mm_prefetch to take anything that casts 
to void*, people have started using that behavior.  The previous patch 
that made _mm_prefetch actually take a const char * broke compatibility 
with existing code.  This update to the patch leaves the macro that 
defines _mm_prefetch with the (void*) cast when _MSC_VER is not defined.

llvm-svn: 201901
2014-02-21 23:08:53 +00:00
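A sketch of the resulting header shape; everything beyond what the commit message states (exact macro text, guard placement) is an assumption:

  /* Non-MSVC mode keeps the permissive macro with its (void *) cast, so
     existing code that passes arbitrary pointer types keeps compiling; under
     _MSC_VER the builtin with the const char * prototype is used instead. */
  #ifndef _MSC_VER
  #define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a), 0, (sel)))
  #endif
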
Tim Northover a0c95eb2d6 Remove commas at the end of lists (C++11 again)
llvm-svn: 201849
2014-02-21 12:16:59 +00:00
Tim Northover 8fe03d6111 ARM & AArch64: use table for EmitCommonNeonBuiltinExpr
This extends the intrinsic lookup table format slightly, and adds
entries for use by the shared ARM/AArch64 definitions. The benefit is
currently smaller than for the SISD intrinsics (there's more custom
code implementing this set), but a few lines are saved and there's
scope for future expansion.

llvm-svn: 201848
2014-02-21 11:57:24 +00:00
Tim Northover 2d83796860 AArch64: refactor table-driven NEON lookup.
This extracts the table-driven intrinsic lookup phase into a separate
function, to be used by EmitCommonNeonBuiltinExpr soon.

It also simplifies the logic used in that lookup, since VectorCastArgN
and ScalarArgN were actually identical.

llvm-svn: 201847
2014-02-21 11:57:20 +00:00
Daniel Jasper 2f0f297bdb Revert r201734 and r201742.
This breaks backwards compatibility with existing code. Previously, this
was defined as

  #define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a), 0, (sel)))

Which basically accepts any pointer. Changing this to char* simply
breaks a lot of existing code. I have tried changing char* to
"const void*", which seems to be the right thing as per Intel
specification this should work on basically any pointer. However,
apparently this breaks windows compatibility (because of a conflicting
declaration in windows.h).

So, we probably need to #ifdef this based on whether clang is compiling
for Windows. According to Chandler, this might be done by introducing an
additional symbol to a fake type in BuiltinsX86.def and then condition
the type expansion on the platform.

llvm-svn: 201775
2014-02-20 11:10:48 +00:00
Warren Hunt 40d6f29ad8 Add _mm_prefetch and some others as MS builtins
This patch adds several built-ins that are required for MS
compatibility. _mm_prefetch must be a built-in because it takes a
compile-time constant argument, and our prior approach of using a #define
to the current built-in doesn't work in the presence of re-declaration
of _mm_prefetch. The others can be obtained by including the Windows
system headers. If a user includes the Windows system headers but not
intrin.h, they still need to work, and therefore must be built-in because
we don't get a chance to implement them in intrin.h in this case.

llvm-svn: 201734
2014-02-19 23:20:20 +00:00
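Typical use looks like the sketch below (the list traversal is an assumption; _MM_HINT_T0 is the usual xmmintrin.h hint constant):

  #include <xmmintrin.h>

  struct Node { Node *next; int payload; };

  // Prefetch the next node while processing the current one; the hint
  // selector must be a compile-time constant, which is why _mm_prefetch needs
  // real builtin support once headers may re-declare it.
  int sum(const Node *n) {
    int total = 0;
    for (; n; n = n->next) {
      if (n->next)
        _mm_prefetch((const char *)n->next, _MM_HINT_T0);
      total += n->payload;
    }
    return total;
  }
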
Tim Northover db3e5e2408 AArch64: look up EmitAArch64Scalar support before calling.
This fixes one immediate bug where an expression with side-effects
could be emitted twice during a NEON call.

It also prepares the way for folding CodeGen for many of the SISD
intrinsics into a table, reducing code size and hopefully increasing
performance eventually ("binary search + few switch cases" should be
better than "lots of switch cases").

llvm-svn: 201667
2014-02-19 11:55:06 +00:00