Commit Graph

26 Commits

Author SHA1 Message Date
Aaron Ballman ed509fe296 Use functions with prototypes when appropriate; NFC
A significant number of our tests in C accidentally use functions
without prototypes. This patch converts the function signatures to have
a prototype for the situations where the test is not specific to K&R C
declarations. e.g.,

  void func();

becomes

  void func(void);

This is the tenth batch of tests being updated (there are a
significant number of other tests left to be updated).
2022-02-15 09:28:02 -05:00
Stelios Ioannou ab86edbc88 [AArch64] Implement __rndr, __rndrrs intrinsics
This patch implements the __rndr and __rndrrs intrinsics to provide access to the random
number instructions introduced in Armv8.5-A. They are only defined for the AArch64
execution state and are available when __ARM_FEATURE_RNG is defined.

These intrinsics store the random number in their pointer argument and return a status
code indicating whether the generation succeeded. The difference between __rndr and
__rndrrs is that the latter intrinsic reseeds the random number generator.

The instructions write the NZCV flags to indicate whether the operation succeeded, which
we can then read with a CSET.

[1] https://developer.arm.com/docs/101028/latest/data-processing-intrinsics
[2] https://bugs.llvm.org/show_bug.cgi?id=47838

Differential Revision: https://reviews.llvm.org/D98264

Change-Id: I8f92e7bf5b450e5da3e59943b53482edf0df6efc
2021-03-15 17:51:48 +00:00
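
A minimal usage sketch for the __rndr/__rndrrs intrinsics above, assuming an Armv8.5-A
toolchain with the rng extension enabled (e.g. -march=armv8.5-a+rng):

  #include <arm_acle.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
  #if defined(__ARM_FEATURE_RNG)
    uint64_t value;
    /* Both return 0 on success and store the random number in *value;
       __rndrrs additionally reseeds the generator first. */
    if (__rndr(&value) == 0)
      printf("random:   %llu\n", (unsigned long long)value);
    if (__rndrrs(&value) == 0)
      printf("reseeded: %llu\n", (unsigned long long)value);
  #endif
    return 0;
  }
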
Tim Northover 5165b2b5fd AArch64+ARM: make LLVM consider system registers volatile.
Some of the system registers readable on AArch64 and ARM platforms
return different values with each read (for example, a timer counter).
Such reads shouldn't be hoisted outside loops or otherwise interfered
with, but the normal @llvm.read_register intrinsic is only considered
to read memory.

This introduces a separate @llvm.read_volatile_register intrinsic and
maps all system registers on ARM platforms to use it for the
__builtin_arm_rsr calls. Registers declared with asm("r9") or similar
are unaffected.
2020-07-15 09:47:36 +01:00
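
A sketch of the kind of code this affects, assuming an AArch64 target; the register
name is illustrative:

  #include <stdint.h>

  /* CNTVCT_EL0 (the virtual counter) changes on every read, so the read must
     stay inside the loop; routing it through @llvm.read_volatile_register
     keeps the optimizer from hoisting it. */
  uint64_t spin_until(uint64_t deadline) {
    uint64_t now;
    do {
      now = __builtin_arm_rsr64("CNTVCT_EL0");
    } while (now < deadline);
    return now;
  }
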
Tim Northover 44e5879f0f AArch64: add arm64_32 support to Clang. 2019-11-12 12:45:18 +00:00
vhscampos f6e11a36c4 [ARM][AArch64] Implement __cls, __clsl and __clsll intrinsics from ACLE
Summary:
Writing support for three ACLE functions:
  unsigned int __cls(uint32_t x)
  unsigned int __clsl(unsigned long x)
  unsigned int __clsll(uint64_t x)

CLS stands for "Count number of leading sign bits".

In AArch64, these intrinsics can be translated into the 'cls'
instruction directly. In AArch32, on the other hand, this functionality
is achieved by implementing it in terms of clz (count number of leading
zeros).

Reviewers: compnerd

Reviewed By: compnerd

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D69250
2019-10-28 11:06:58 +00:00
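
A minimal sketch of the three functions, assuming <arm_acle.h> on an Arm target;
the wrapper names are illustrative:

  #include <arm_acle.h>
  #include <stdint.h>

  /* CLS counts leading bits equal to the sign bit, excluding the sign bit
     itself, e.g. __cls(0xFFFFFFF0u) == 27 and __cls(0u) == 31. */
  unsigned leading_sign_bits32(int32_t x) {
    return __cls((uint32_t)x);
  }

  unsigned leading_sign_bits64(int64_t x) {
    return __clsll((uint64_t)x);
  }
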
JF Bastien dbc0a5df8d Allow prefetching from non-zero address spaces
Summary:
This is useful for targets which have prefetch instructions for non-default address spaces.

<rdar://problem/42662136>

Subscribers: nemanjai, javed.absar, hiraditya, kbarton, jkorous, dexonsmith, cfe-commits, llvm-commits, RKSimon, hfinkel, t.p.northover, craig.topper, anemet

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D65254

llvm-svn: 367032
2019-07-25 16:11:57 +00:00
Kyrylo Tkachov eb72138340 [AArch64] Implement __jcvt intrinsic from Armv8.3-A
The __jcvt intrinsic defined in ACLE [1] is available when __ARM_FEATURE_JCVT is defined.

This change introduces the AArch64 intrinsic, wires it up to the instruction, and adds a new clang builtin function.
The __ARM_FEATURE_JCVT macro is now defined when an Armv8.3-A or higher target is used.
I've implemented the target detection logic in Clang so that this feature is enabled for architectures from armv8.3-a onwards (so -march=armv8.4-a also enables this, for example).

make check-all didn't show any new failures.

[1] https://developer.arm.com/docs/101028/latest/data-processing-intrinsics

Differential Revision: https://reviews.llvm.org/D64495

llvm-svn: 366197
2019-07-16 09:27:39 +00:00
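
A minimal usage sketch, assuming a toolchain targeting Armv8.3-A or later
(e.g. -march=armv8.3-a) so that __ARM_FEATURE_JCVT is defined:

  #include <arm_acle.h>
  #include <stdint.h>

  int32_t js_to_int32(double d) {
  #if defined(__ARM_FEATURE_JCVT)
    /* Lowered to a single FJCVTZS instruction, giving the wrap-around
       conversion semantics JavaScript's ToInt32 requires. */
    return __jcvt(d);
  #else
    return (int32_t)(int64_t)d;  /* illustrative fallback, not fully equivalent */
  #endif
  }
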
Sam Parker 015f97db8b [AArch64] Update int64_t ACLE builtin arguments
Re-applying r351740 with fixes (changing LL to W).

Differential Revision: https://reviews.llvm.org/D56852

llvm-svn: 352463
2019-01-29 09:04:03 +00:00
Petr Hosek f16e834dab [AArch64] Make the test for rsr and rsr64 stricter
ACLE specifies that the return types for rsr and rsr64 are uint32_t and
uint64_t respectively. D56852 changed the return type of rsr64 from
unsigned long to unsigned long long, which at least on Linux doesn't
match uint64_t. The test isn't strict enough to detect that, because
the compiler implicitly converts unsigned long long to uint64_t, but it
breaks other uses such as printf with the PRIx64 format specifier.
This change makes the test stricter, enforcing that the return types
of the rsr and rsr64 builtins are what is actually specified in ACLE.

Differential Revision: https://reviews.llvm.org/D57210

llvm-svn: 352156
2019-01-25 02:42:30 +00:00
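
A sketch of the kind of usage the stricter test protects, assuming an LP64 Linux
AArch64 target where uint64_t is unsigned long; the register name is illustrative:

  #include <inttypes.h>
  #include <stdio.h>

  /* With the ACLE-specified uint64_t return type this matches PRIx64.
     If the builtin instead returned unsigned long long, -Wformat would
     warn here because PRIx64 expands to "lx" on this target. */
  void dump_counter(void) {
    printf("CNTVCT_EL0 = %" PRIx64 "\n", __builtin_arm_rsr64("CNTVCT_EL0"));
  }
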
Petr Hosek 63bd4e9cd1 Revert "[AArch64] Use LL for 64-bit intrinsic arguments"
This reverts commit r351740: this broke on platforms where unsigned long
long isn't the same as uint64_t, which is what ACLE specifies for the
return value of rsr64.

Differential Revision: https://reviews.llvm.org/D57209

llvm-svn: 352153
2019-01-25 02:16:29 +00:00
Sam Parker a96f8461e7 [AArch64] Use LL for 64-bit intrinsic arguments
The ACLE states that 64-bit crc32, wsr, rsr and rbit operands are
uint64_t, so we should have the clang builtins match this description,
which is what we already do for AArch32.

Differential Revision: https://reviews.llvm.org/D56852

llvm-svn: 351740
2019-01-21 11:01:05 +00:00
Mehdi Amini 6aa9e9b41a IRGen: Add optnone attribute on function during O0
Amongst other things, this will help LTO correctly handle/honor files
compiled with -O0, which helps when debugging failures.
It also seems in line with how we handle other options, like how
-fno-inline adds the appropriate attribute as well.

Differential Revision: https://reviews.llvm.org/D28404

llvm-svn: 304127
2017-05-29 05:38:20 +00:00
Chad Rosier 5a4a1be690 [AArch64] Use generic bitreverse intrinsic, rather than AArch64 specific.
Differential Revision: https://reviews.llvm.org/D28400

llvm-svn: 291574
2017-01-10 17:20:28 +00:00
Marcin Koscielnicki a46fade624 [Builtin] Make __builtin_thread_pointer target-independent.
This is now supported for ARM, AArch64, PowerPC, SystemZ, SPARC, Mips.

Differential Revision: http://reviews.llvm.org/D19589

llvm-svn: 272893
2016-06-16 13:41:54 +00:00
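
A minimal sketch of the builtin, assuming one of the supported targets:

  /* Returns the TLS thread pointer (e.g. TPIDR_EL0 on AArch64) as a void *. */
  void *current_thread_pointer(void) {
    return __builtin_thread_pointer();
  }
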
Marcin Koscielnicki 4005070e1b [AArch64] Fix D19098 fallout.
The intrinsic is now called llvm.thread.pointer, not
llvm.aarch64.thread.pointer.  Also, the code handling it in CGBuiltin.cpp
is dead - it's already covered by GCCBuiltin.  Remove it.

Differential Revision: http://reviews.llvm.org/D19099

llvm-svn: 266817
2016-04-19 20:51:00 +00:00
Tim Northover 58672974a9 ARM & AArch64: convert asm tests to LLVM IR and restrict optimizations.
This is mostly a one-time autoconversion of tests that checked assembly after
"-Owhatever" compiles; they now only run "opt -mem2reg" and check the IR. This
should make them much more stable to changes in LLVM, so they won't break on
unrelated changes.

"opt -mem2reg" is a compromise designed to increase the readability of tests
that check dataflow, while minimizing dependency on LLVM. Hopefully mem2reg is
stable enough that no surprises will come along.

Should address http://llvm.org/PR26815.

llvm-svn: 263048
2016-03-09 18:54:42 +00:00
Adhemerval Zanella 3916c910d1 [AArch64] Implement __builtin_thread_pointer
This patch adds AArch64 __builtin_thread_pointer support. It will be
lowered to llvm.aarch64.thread.pointer.

llvm-svn: 243413
2015-07-28 13:10:10 +00:00
Ranjeet Singh e8accef866 [ARM] Replace hard coded metadata arguments in tests with a regex.
Differential Revision: http://reviews.llvm.org/D10507

llvm-svn: 239932
2015-06-17 19:56:30 +00:00
Luke Cheeseman 59b2d83909 This patch implements clang support for the ACLE special register intrinsics
in section 10.1, __arm_{w,r}sr{,p,64}.

This includes arm_acle.h definitions with builtins and codegen to support
these. The intrinsics are implemented by generating read/write_register calls,
which get appropriately lowered in the backend based on the register string
provided. Sema checking is also implemented to diagnose invalid parameters.

Differential Revision: http://reviews.llvm.org/D9697

llvm-svn: 239737
2015-06-15 17:51:01 +00:00
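
A minimal sketch of the section 10.1 intrinsics, assuming <arm_acle.h> on an
AArch64 target; the register names are illustrative:

  #include <arm_acle.h>
  #include <stdint.h>

  uint64_t read_virtual_counter(void) {
    return __arm_rsr64("CNTVCT_EL0");   /* read a 64-bit system register */
  }

  void set_thread_pointer(uint64_t v) {
    __arm_wsr64("TPIDR_EL0", v);        /* write a 64-bit system register */
  }
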
Yi Kong a5548431a5 AArch64: Prefetch intrinsic
llvm-svn: 215569
2014-08-13 19:18:20 +00:00
Yi Kong 19a29ac0d0 Port memory barriers intrinsics to AArch64
Memory barrier __builtin_arm_[dmb, dsb, isb] intrinsics are required to
implement their corresponding ACLE and MSVC intrinsics.

This patch ports the ARM dmb, dsb and isb intrinsics to AArch64.

Requires LLVM r213247.

Differential Revision: http://reviews.llvm.org/D4521

llvm-svn: 213250
2014-07-17 10:52:06 +00:00
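
A minimal sketch of the barrier builtins; the constant 15 (SY) selects the
full-system domain:

  void full_barriers(void) {
    __builtin_arm_dmb(15);  /* DMB SY: data memory barrier */
    __builtin_arm_dsb(15);  /* DSB SY: data synchronization barrier */
    __builtin_arm_isb(15);  /* ISB SY: instruction synchronization barrier */
  }
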
Yi Kong 4d5e23f53a ARM: Implement __builtin_arm_nop intrinsic
This patch implements the __builtin_arm_nop intrinsic for AArch32 and AArch64,
which generates hint 0x0, the alias of the NOP instruction.

This intrinsic is necessary to implement the ACLE __nop intrinsic.

Differential Revision: http://reviews.llvm.org/D4495

llvm-svn: 212947
2014-07-14 15:20:09 +00:00
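
A minimal sketch of the builtin:

  void tiny_delay(void) {
    __builtin_arm_nop();  /* emits hint #0, i.e. a NOP */
  }
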
Yi Kong 19222dcb4c Add test cases for AArch64 hints codegen
llvm-svn: 212909
2014-07-13 16:17:30 +00:00
Jim Grosbach 8ddd66928c AArch64: Fix silly think-o in tests.
rdar://9283021

llvm-svn: 211064
2014-06-16 22:18:26 +00:00
Jim Grosbach 79140826bc AArch64: Support for __builtin_arm_rbit() and __builtin_arm_rbit64().

rdar://9283021

llvm-svn: 211060
2014-06-16 21:56:02 +00:00
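
A minimal sketch of the two builtins on AArch64:

  #include <stdint.h>

  uint32_t reverse_bits32(uint32_t x) {
    return __builtin_arm_rbit(x);     /* e.g. 0x00000001 -> 0x80000000 */
  }

  uint64_t reverse_bits64(uint64_t x) {
    return __builtin_arm_rbit64(x);
  }
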
Tim Northover a2ee433c8d ARM64: initial clang support commit.
This adds Clang support for the ARM64 backend. There are definitely
still some rough edges, so please bring up any issues you see with
this patch.

As with the LLVM commit though, we think it'll be more useful for
merging with AArch64 from within the tree.

llvm-svn: 205100
2014-03-29 15:09:45 +00:00