Commit Graph

2618 Commits

Kevin Qin 110db6f2ad [AArch64] Implement Clang CLI interface proposal about "-march".
1. Revert "Add default feature for CPUs on AArch64 target in Clang"
at r210625. Then, all enabled features will be passed explicitly via
-target-feature in the -cc1 options.

2. Deprecate "-mfpu".

3. Implement support for "-march". Usage is:
    -march=armv8-a+[no]feature
  For instance, "-march=armv8-a+neon+crc+nocrypto". Here "armv8-a" is
  necessary, and CPU names are not acceptable. Candidate features are
  fp, neon, crc and crypto. Where conflicting feature modifiers are
  specified, the right-most feature is used.

4. Implement support for "-mtune". Usage is:
    -mtune=CPU_NAME
  For instance, "-mtune=cortex-a57". This option will ONLY enable the
  micro-architectural features specific to the target CPU,
  like "+zcm" and "+zcz" for cyclone. Architectural features
  WON'T be modified.

5. Change usage of "-mcpu" to "-mcpu=CPU_NAME+[no]feature", which is
  an alias to "-march={feature of CPU_NAME}+[no]feature" and
  "-mtune=CPU_NAME" together. Where this option is used in conjunction
  with -march or -mtune, those options take precedence over the
  appropriate part of this option.

llvm-svn: 213353
2014-07-18 07:03:22 +00:00
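As an illustration of the options described above, here is a minimal sketch; the target triple, feature set, and intrinsic choice are assumptions, not part of the patch:

    /* crc32.c - assumes an AArch64 target and the ACLE CRC32 intrinsics */
    #include <arm_acle.h>
    #include <stdint.h>

    uint32_t checksum(uint32_t crc, uint32_t data) {
    #ifdef __ARM_FEATURE_CRC32
      return __crc32w(crc, data);   /* only used when +crc is enabled */
    #else
      return crc ^ data;            /* fallback when the feature is disabled */
    #endif
    }

    /* Illustrative invocations:
     *   clang --target=aarch64-linux-gnu -march=armv8-a+crc+nocrypto -c crc32.c
     *   clang --target=aarch64-linux-gnu -mcpu=cortex-a57+nocrypto   -c crc32.c
     *   clang --target=aarch64-linux-gnu -march=armv8-a -mtune=cortex-a57 -c crc32.c
     */
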
Alexey Samsonov c993933e78 Check-labelize ubsan tests
llvm-svn: 213334
2014-07-17 23:53:44 +00:00
NAKAMURA Takumi 0c5f4edba4 clang/test/CodeGen/ms-inline-asm.c: Fix for -Asserts.
llvm-svn: 213329
2014-07-17 22:51:49 +00:00
Nico Weber 9a08847e6d Add a test for PR20343 after llvm r213303.
llvm-svn: 213305
2014-07-17 20:25:36 +00:00
Alexey Samsonov 24cad99307 [UBSan] Add !nosanitize metadata to the code generated by UBSan.
This is used to mark the instructions emitted by Clang to implement a
variety of UBSan checks. Generally, we don't want to instrument these
instructions with other sanitizers (like ASan).

Reviewed in http://reviews.llvm.org/D4544

llvm-svn: 213291
2014-07-17 18:46:27 +00:00
Yi Kong 28d7b02687 ARM: Add ACLE memory barrier intrinsic mapping
llvm-svn: 213261
2014-07-17 12:45:17 +00:00
Ehsan Akhgari d86ca7a9c7 Upstream an MS inline assembly test from Mozilla's inline assembly code
Summary:
I'm planning on upstreaming some test cases for the inline assembly
usage in the Mozilla code base.  A lot of these test cases test the
recent fixes to this code.

Reviewers: rnk

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D4508

llvm-svn: 213255
2014-07-17 11:38:22 +00:00
Yi Kong 19a29ac0d0 Port memory barrier intrinsics to AArch64
Memory barrier __builtin_arm_[dmb, dsb, isb] intrinsics are required to
implement their corresponding ACLE and MSVC intrinsics.

This patch ports the ARM dmb, dsb and isb intrinsics to AArch64.

Requires LLVM r213247.

Differential Revision: http://reviews.llvm.org/D4521

llvm-svn: 213250
2014-07-17 10:52:06 +00:00
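A rough usage sketch of the ported barrier builtins; the argument value 0xF (full-system SY) and the publishing pattern are illustrative assumptions:

    #include <stdint.h>

    void publish(volatile uint32_t *flag) {
      __builtin_arm_dmb(0xF);   /* data memory barrier, full system */
      *flag = 1;
      __builtin_arm_dsb(0xF);   /* data synchronization barrier */
      __builtin_arm_isb(0xF);   /* instruction synchronization barrier */
    }
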
Tim Northover 6dbcbac98b IR: update Clang to use polymorphic __fp16 conversion intrinsics.
There should be no change in semantics at this stage.

llvm-svn: 213249
2014-07-17 10:51:31 +00:00
Hal Finkel 3e49fda0d4 Add basic (noop) CodeGen support for __assume
Clang supports __assume, at least at the semantic level, when MS extensions are
enabled. Unfortunately, trying to actually compile code using __assume would
result in this error:

  error: cannot compile this builtin function yet

__assume is an optimizer hint, and can be ignored at the IR level. Until LLVM
supports assumptions at the IR level, a noop lowering is valid, and that is
what is done here.

llvm-svn: 213206
2014-07-16 22:44:54 +00:00
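A minimal sketch of code that now compiles instead of hitting the error above; it assumes -fms-extensions is enabled:

    int divide(int n) {
      __assume(n != 0);   /* optimizer hint; currently lowered to a no-op */
      return 100 / n;
    }
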
David Majnemer ade4bee761 CodeGen: Let arrays be inputs to inline asm
An array showing up in an inline assembly input is accepted in ICC and
GCC 4.8

This fixes PR20201.

Differential Revision: http://reviews.llvm.org/D4382

llvm-svn: 212954
2014-07-14 16:27:53 +00:00
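A hedged sketch of the newly accepted form; the empty asm body is purely illustrative:

    void consume(void) {
      static const char msg[] = "hello";
      /* an array may now appear directly as an inline asm input operand */
      __asm__ volatile("" : : "m"(msg));
    }
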
Yi Kong 472e521cec ARM: Add NOP intrinsic mapping in arm_acle.h
llvm-svn: 212950
2014-07-14 15:32:29 +00:00
Yi Kong 4d5e23f53a ARM: Implement __builtin_arm_nop intrinsic
This patch implements the __builtin_arm_nop intrinsic for AArch32 and AArch64,
which generates hint 0x0, the alias of the NOP instruction.

This intrinsic is necessary to implement the ACLE __nop intrinsic.

Differential Revision: http://reviews.llvm.org/D4495

llvm-svn: 212947
2014-07-14 15:20:09 +00:00
Yi Kong 19222dcb4c Add test cases for AArch64 hints codegen
llvm-svn: 212909
2014-07-13 16:17:30 +00:00
Saleem Abdulrasool 3b165e7dbb tests: use a more precise target for tests
llvm-svn: 212892
2014-07-12 23:40:53 +00:00
Saleem Abdulrasool 572250d60a CodeGen: support hint intrinsics from ACLE on AArch64
This adds support for the ACLE hint intrinsics on AArch64 similar to ARM.  This
is required to properly support ACLE on AArch64.

llvm-svn: 212890
2014-07-12 23:27:22 +00:00
Yi Kong 4e00ce7d0c Improve comments of ARM ACLE header file and tests
Include section number in ARM ACLE specification for easier navigation.

llvm-svn: 212887
2014-07-12 22:48:13 +00:00
Hal Finkel d8442b1b21 Add nonnull in CodeGen for __attribute__((returns_nonnull))
As a follow-up to r212835, also add the LLVM nonnull function attribute when
__attribute__((returns_nonnull)) is provided.

llvm-svn: 212874
2014-07-12 04:51:04 +00:00
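A small sketch of a function that now picks up the LLVM-level nonnull return attribute; the xmalloc wrapper is an assumed example, not from the patch:

    #include <stdlib.h>

    __attribute__((returns_nonnull))
    void *xmalloc(size_t n) {
      void *p = malloc(n);
      if (!p) abort();          /* never returns NULL, so the attribute is sound */
      return p;
    }
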
Alexey Samsonov 15c9669615 [ASan] Collect unmangled names of global variables in Clang to print them in error reports.
Currently ASan instrumentation pass creates a string with global name
for each instrumented global (to include global names in the error report). Global
name is already mangled at this point, and we may not be able to demangle it
at runtime (e.g. there is no __cxa_demangle on Android).

Instead, create a string with fully qualified global name in Clang, and pass it
to ASan instrumentation pass in llvm.asan.globals metadata. If there is no metadata
for some global, ASan will use the original algorithm.

This fixes https://code.google.com/p/address-sanitizer/issues/detail?id=264.

llvm-svn: 212872
2014-07-12 00:42:52 +00:00
Reid Kleckner f392ec6ecc Form a CallExpr from __noop without parens
MSVC accepts __noop without any trailing parens and treats it like a
literal zero.  We don't treat __noop as an integer literal, but now at
least we can parse a naked __noop expression.

Reviewers: rsmith

Differential Revision: http://reviews.llvm.org/D4476

llvm-svn: 212860
2014-07-11 23:54:29 +00:00
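A minimal sketch of both __noop forms; it assumes -fms-extensions:

    int probe(int x) {
      __noop;                  /* naked form, now parsed as a call with no arguments */
      return __noop(x) + 1;    /* arguments are parsed but not evaluated; __noop yields zero */
    }
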
Reid Kleckner ed5d4adb36 MS extension: Make __noop be the integer zero, not void
We still don't accept '__noop;', and we don't consider __noop to be the
integer literal zero.  More work is needed.

llvm-svn: 212839
2014-07-11 20:22:55 +00:00
Hal Finkel 82504f03ce Add nonnull in CodeGen for __attribute__((nonnull))
We now have an LLVM-level nonnull attribute that can be applied to function
parameters, and we emit it for reference types (as of r209723), but did not
emit it when an __attribute__((nonnull)) was provided. Now we will.

llvm-svn: 212835
2014-07-11 17:35:21 +00:00
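For illustration, a parameter-level use of the attribute that now emits nonnull in the IR; the function itself is an assumption:

    #include <string.h>

    __attribute__((nonnull(1, 2)))
    void copy_name(char *dst, const char *src) {
      /* both pointer parameters are marked nonnull in the emitted IR */
      strcpy(dst, src);
    }
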
Alexey Samsonov 848560125d [UBSan] Introduce type-based blacklisting.
Teach the UBSan vptr checker to ignore technically invalid down-casts on
blacklisted types.

Based on http://reviews.llvm.org/D4407 by Byoungyoung Lee!

llvm-svn: 212770
2014-07-10 22:34:19 +00:00
Ulrich Weigand b4153254b7 Fix (and reenable) ppc64-align-struct.c test for non-assert builds.
llvm-svn: 212757
2014-07-10 19:19:03 +00:00
David Blaikie cceed090d2 Quick (attempted) fix for non-asserts builds for a test introduced in r212743.
llvm-svn: 212752
2014-07-10 18:40:54 +00:00
Ulrich Weigand 581badce4b [PowerPC] ABI support for aligned by-value aggregates
This patch adds support for respecting the ABI and type alignment
of aggregates passed by value.  Currently, all aggregates are aligned
at 8 bytes in the parameter save area.  This is incorrect for two
reasons:

- Aggregates that need alignment of 16 bytes or more should be aligned
  at 16 bytes in the parameter save area.  This is implemented by
  using an appropriate "byval align" attribute in the IR.

- Aggregates that need alignment beyond 16 bytes need to be dynamically
  realigned by the caller.  This is implemented by setting the Realign
  flag of the ABIArgInfo::getIndirect call.

In addition, when expanding a va_arg call accessing a type that is
aligned at 16 bytes in the argument save area (either one of the
aggregate types as above, or a vector type which is already aligned
at 16 bytes), code needs to align the va_list pointer accordingly.

Reviewed by Hal Finkel.

llvm-svn: 212743
2014-07-10 17:20:07 +00:00
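A hedged sketch of the aggregates affected; the struct layouts are assumptions chosen only to hit the 16-byte and beyond-16-byte cases described above:

    typedef struct { double d[2]; } __attribute__((aligned(16))) Aligned16;
    typedef struct { double d[4]; } __attribute__((aligned(32))) Aligned32;

    /* Aligned16 is placed at a 16-byte slot in the parameter save area ("byval align 16");
       Aligned32 needs dynamic realignment, per the description above. */
    double f16(Aligned16 a) { return a.d[0]; }
    double f32(Aligned32 a) { return a.d[0]; }
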
Ulrich Weigand f4eba98853 [PowerPC] ABI support for non-Altivec vector types
This patch adds support for passing arguments of non-Altivec vector type
(i.e. defined via attribute ((vector_size (...)))) on powerpc64-linux.

While such types are not mentioned in the formal ABI document, this
patch implements a calling convention compatible with GCC:

- Vectors of size < 16 bytes are passed in a GPR
- Vectors of size > 16 bytes are passed via reference

Note that vector types with a number of elements that is not a power
of 2 are not supported by GCC, so there is no pre-existing ABI to
follow.  We choose to pass those (of size < 16) as if widened to the
next power of two, so they might end up in a vector register or
in a GPR.  (Sizes > 16 are always passed via reference as well.)

Reviewed by Hal Finkel.

llvm-svn: 212734
2014-07-10 16:39:01 +00:00
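An illustrative sketch of such non-Altivec vector types, with sizes chosen to match the two cases above:

    typedef float v2f __attribute__((vector_size(8)));    /* 8 bytes: passed in a GPR */
    typedef float v8f __attribute__((vector_size(32)));   /* 32 bytes: passed via reference */

    float sum2(v2f v)   { return v[0] + v[1]; }
    float first8(v8f v) { return v[0]; }
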
Daniel Sanders cfbb71dfb6 [mips] clz is defined to give 32 for zero. Similarly, dclz gives 64.
Summary:
While debugging another issue, I noticed that Mips currently specifies that the
count leading zero builtins are undefined when the input is zero. The
architecture specifications say that the clz and dclz instructions write 32 or
64 respectively when given zero.

This doesn't fix any bugs that I'm aware of but it may improve optimisation in
some cases.

Differential Revision: http://reviews.llvm.org/D4431

llvm-svn: 212618
2014-07-09 13:43:19 +00:00
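A short sketch of the builtin in question; whether a zero input is defined remains target-specific, so the comment only restates the MIPS behaviour described above:

    int leading_zeros(unsigned x) {
      /* on MIPS, now defined to return 32 when x == 0 (dclz: 64 for the 64-bit variant) */
      return __builtin_clz(x);
    }
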
Tim Northover e8c3721165 ARM: use LLVM's atomicrmw instructions when ldrex/strex are available.
Having some kind of weird kernel-assisted ABI for these when the
native instructions are available appears to be (and should be) the
exception; OSs have been gradually opting in for years and the code
was getting silly.

So let LLVM decide whether it's possible/profitable to inline them by
default.

Patch by Phoebe Buckheister.

llvm-svn: 212598
2014-07-09 09:24:43 +00:00
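As a sketch, an atomic builtin that can now lower to an inline ldrex/strex loop instead of a library or kernel-assisted call when the instructions are available:

    int bump(int *counter) {
      return __atomic_fetch_add(counter, 1, __ATOMIC_SEQ_CST);
    }
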
Alexey Samsonov e7a8ccfaad [Sanitizer] Reduce the usage of sanitizer blacklist in CodeGenModule
Get rid of cached CodeGenModule::SanOpts, which was used to turn off
sanitizer codegen options if current LLVM Module is blacklisted, and use
plain LangOpts.Sanitize instead.

1) Some codegen decisions (turning TBAA or writable strings on/off)
   shouldn't depend on the contents of blacklist.

2) llvm.asan.globals should *always* be created, even if the module
   is blacklisted - soon Clang's CodeGen will be where we read sanitizer
   blacklist files, so we should properly report which globals are
   blacklisted to the backend.

llvm-svn: 212499
2014-07-07 23:34:34 +00:00
Nico Weber 9b982078e9 Add an AST node for __leave statements, hook it up.
Codegen is still missing (and I won't work on that), but __leave is now
as implemented as __try and friends.

llvm-svn: 212425
2014-07-07 00:12:30 +00:00
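A minimal sketch of the construct; it assumes -fms-extensions and a target with SEH support, and per the note above CodeGen for __leave itself is still pending:

    void probe(int *p) {
      __try {
        if (!p)
          __leave;            /* jumps past the rest of the guarded block */
        *p = 1;
      } __finally {
        /* cleanup runs whether or not __leave was taken */
      }
    }
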
Ehsan Akhgari 0f89fac7a5 Add support for nested blocks in Microsoft inline assembly
This fixes http://llvm.org/PR20204.

llvm-svn: 212389
2014-07-06 05:26:54 +00:00
Saleem Abdulrasool e700cab4e9 CodeGen: add support for a few MSVC ARM intrinsics
This adds support for simple MSVC compatibility mode intrinsics.  These
intrinsics are simple in that they are either directly passed through to the
annotated MSBuiltin intrinsic or they mirror existing GCC builtins.

llvm-svn: 212378
2014-07-05 20:10:05 +00:00
Ehsan Akhgari 3e0dd89adf Add a test case for the tilde operator in Microsoft inline assembly
llvm-svn: 212373
2014-07-05 15:04:06 +00:00
Alexey Bataev b75daeecbe Fixed test CodeGen/captured-statements.c for powerpc64-linux.
llvm-svn: 212311
2014-07-04 04:06:11 +00:00
Gerolf Hoflehner 2344cfc335 Restore global static array in test case
llvm-svn: 212285
2014-07-03 19:30:33 +00:00
Yi Kong 4efadfb0b0 [ARM] Implement ISB memory barrier intrinsic
Adds support for __builtin_arm_isb. Also corrects the modelling of the DSB and
ISB instructions by adding the has-side-effects property.

llvm-svn: 212277
2014-07-03 16:01:25 +00:00
Renato Golin 47843efcf6 Add the __qdbl intrinsic to the arm_acle.h header
Patch by: Moritz Roth

llvm-svn: 212264
2014-07-03 10:14:52 +00:00
Robert Lytton 57765d5347 Move the call to emitTargetMD() later.
Summary:
Because a global created by GetOrCreateLLVMGlobal() is not finalised until later viz:
  extern char a[];
  char f(){ return a[5];}
  char a[10];

Change MangledDeclNames to use a MapVector rather than a DenseMap so that the
Metadata is output in order of original declaration, making the output
deterministic and improving human readability.

Differential Revision: http://reviews.llvm.org/D4176

llvm-svn: 212263
2014-07-03 09:30:33 +00:00
Christian Pirker c3d3217525 ARMEB: Fix function result return for composite types
Reviewed at http://reviews.llvm.org/D4364

llvm-svn: 212261
2014-07-03 09:28:12 +00:00
Saleem Abdulrasool ece7217f70 ARM: rename ARM builtins to use __builtin_arm prefix
This corrects SVN r212196's naming change to use the proper prefix of
`__builtin_arm_` instead of `__builtin_`.

Thanks to Yi Kong for pointing out the incorrect naming!

llvm-svn: 212253
2014-07-03 02:43:20 +00:00
Saleem Abdulrasool 4bddd9d400 CodeGen: make target builtins support languages
This extends the target builtin support to allow language specific annotations
(i.e. LANGBUILTIN).  This is to allow MSVC compatibility whilst retaining the
ability to have EABI targets use a __builtin_ prefix.  This is merely to allow
uniformity in the EABI case where the unprefixed name is provided as an alias in
the header.

llvm-svn: 212196
2014-07-02 17:41:27 +00:00
Alexey Samsonov 4f319cca42 [ASan] Print exact source location of global variables in error reports.
See https://code.google.com/p/address-sanitizer/issues/detail?id=299 for the
original feature request.

Introduce llvm.asan.globals metadata, which Clang (or any other frontend)
may use to report extra information about global variables to ASan
instrumentation pass in the backend. This metadata replaces
llvm.asan.dynamically_initialized_globals that was used to detect init-order
bugs. llvm.asan.globals contains the following data for each global:
  1) source location (file/line/column info);
  2) whether it is dynamically initialized;
  3) whether it is blacklisted (shouldn't be instrumented).

Source location data is then emitted in the binary and can be picked up
by ASan runtime in case it needs to print error report involving some global.
For example:

  0x... is located 4 bytes to the right of global variable 'C::array' defined in '/path/to/file:17:8' (0x...) of size 40

These source locations are printed even if the binary doesn't have any
debug info.

This is an ABI-breaking change. ASan initialization is renamed to
__asan_init_v4(). Pre-built libraries compiled with older Clang will not work
with the fresh runtime.

llvm-svn: 212188
2014-07-02 16:54:41 +00:00
Tim Northover 3acd6bd0b6 ARM: add support for v8 ldaex/stlex builtins.
ARMv8 adds (to both AArch32 and AArch64) acquiring and releasing
variants of the exclusive operations, in line with the C++11 memory
model.

This adds support for two new intrinsics to expose them to C & C++
developers directly: __builtin_arm_ldaex and __builtin_arm_stlex, in
direct analogy with the versions with no implicit barrier.

rdar://problem/15885451

llvm-svn: 212175
2014-07-02 12:56:02 +00:00
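A hedged sketch of the new builtins; the spin-lock shape is an assumption, and the signatures mirror the existing __builtin_arm_ldrex/strex forms:

    int try_lock(int *lock) {
      if (__builtin_arm_ldaex(lock) != 0)         /* load-acquire exclusive */
        return 0;
      return __builtin_arm_stlex(1, lock) == 0;   /* store-release exclusive; 0 means success */
    }
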
Tim Northover 1471cb17ae X86: inline all atomic operations up to 128-bits.
The backend *can* cope with all of these now, so Clang should give it the
chance. On CPUs without cmpxchg16b (e.g. the original athlon64) LLVM can reform
the libcalls.

rdar://problem/13496295

llvm-svn: 212173
2014-07-02 10:25:45 +00:00
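An illustrative 16-byte atomic operation of the kind that can now be inlined on x86-64 when cmpxchg16b is available:

    __int128 swap128(__int128 *p, __int128 v) {
      return __atomic_exchange_n(p, v, __ATOMIC_SEQ_CST);
    }
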
Alexey Bataev f94baeb363 Added test for capturing VLA types if the captured variable is a function parameter.
llvm-svn: 212170
2014-07-02 07:05:22 +00:00
Gerolf Hoflehner 012dff0b23 Enable test/CodeGen/indirect-goto.c in 64b for local arrays
In 32b mode the reference count for block addresses
is not zero. This prevents inlining and constant
folding and causes the test to fail. Changing
the triple allows running the test in 64b mode.

The array in foo2 is now local instead of static, until the
interprocedural constant propagator is invoked before the global
optimizer at lower optimization levels.

llvm-svn: 212092
2014-07-01 05:10:06 +00:00
Bob Wilson 84941b92e6 Temporarily disable the indirect-goto.c test.
llvm r212077 causes this test to fail. We need to reorder some passes and
possibly make other changes to reenable the optimization being tested here.

llvm-svn: 212091
2014-07-01 04:56:06 +00:00
Andrea Di Biagio eb606a3c27 [x86] Add Clang support for intrinsic __rdpmc.
This patch adds intrinsic __rdpmc to header file 'ia32intrin.h'.
Intrinsic __rdpmc can be used to read performance monitoring counters. It is
implemented as a direct call to __builtin_ia32_rdpmc.

It takes as input a value representing the index of the performance counter to
read. The value of the performance counter is then returned as an unsigned
64-bit quantity.

llvm-svn: 212053
2014-06-30 18:23:58 +00:00
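A short usage sketch; counter index 0 is illustrative and must correspond to a counter the OS has configured:

    #include <x86intrin.h>   /* pulls in ia32intrin.h */

    unsigned long long read_counter0(void) {
      return __rdpmc(0);     /* read performance-monitoring counter 0 */
    }
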
Alexey Bataev 06812bc98b Second part of fix in CodeGen/captured-statements-nested.c
llvm-svn: 212028
2014-06-30 09:14:10 +00:00
Alexey Bataev e686c1d7ef Test fix
llvm-svn: 212026
2014-06-30 09:05:08 +00:00
Alexey Bataev 41ff27e9a1 Fixed incompatibility in CodeGen/captured-statements-nested.c with MSVC
llvm-svn: 212025
2014-06-30 08:37:48 +00:00
Alexey Bataev be5af7b9c7 Fixed CodeGen/captured-statements-nested.c test
llvm-svn: 212024
2014-06-30 08:17:11 +00:00
Alexey Bataev 6de13e86e9 Disable CodeGen/captured-statements-nested.c
llvm-svn: 212018
2014-06-30 05:07:42 +00:00
Alexey Bataev 83222d6109 Fixed CodeGen/captured-statements-nested.c test
llvm-svn: 212016
2014-06-30 05:02:50 +00:00
Alexey Bataev 8dfca43296 Disable CodeGen/captured-statements-nested.c
llvm-svn: 212014
2014-06-30 03:30:41 +00:00
Alexey Bataev 18da16cab5 Temp XFAIL CodeGen/captured-statements-nested.c to fix the test
llvm-svn: 212013
2014-06-30 03:14:43 +00:00
Alexey Bataev aca7fcf276 Using of variable length arrays in captured statements and OpenMP constructs.
Differential Revision: http://reviews.llvm.org/D4067

llvm-svn: 212010
2014-06-30 02:55:54 +00:00
Alp Toker f082e73696 Remove some incorrect test suppressions
These don't actually require any registered backend to run.

This commit tests the water with a handful of fixes for what is a more
widespread problem.

llvm-svn: 212008
2014-06-30 01:34:09 +00:00
Saleem Abdulrasool 24bd7da2d2 Basic: fix handling for Windows Itanium environment
This corrects the handling for i686-windows-itanium.  This environment is nearly
identical to Windows MSVC, except it uses the itanium ABI for C++.

llvm-svn: 211991
2014-06-28 23:34:11 +00:00
Alp Toker f76e6d8e6b Get arm_acle tests from r211962 working
llvm-svn: 211979
2014-06-28 06:51:27 +00:00
Yi Kong a44c4d7173 Introduce arm_acle.h supporting existing LLVM builtin intrinsics
Summary: This patch introduces the ACLE header file, implementing extensions that can be directly mapped to existing Clang intrinsics. It covers both AArch32 and AArch64.

Reviewers: t.p.northover, compnerd, rengolin

Reviewed By: compnerd, rengolin

Subscribers: rnk, echristo, compnerd, aemerson, mroth, cfe-commits

Differential Revision: http://reviews.llvm.org/D4296

llvm-svn: 211962
2014-06-27 21:25:42 +00:00
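A minimal sketch of the header in use; the particular intrinsics shown are assumptions picked from those that map directly onto existing builtins:

    #include <arm_acle.h>
    #include <stdint.h>

    uint32_t swap_bytes(uint32_t x) {
      __nop();              /* hint intrinsic, maps to __builtin_arm_nop */
      return __rev(x);      /* byte reversal */
    }
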
Oliver Stannard 3f32b9be7f [ARM] Fix AAPCS non-compliance caused by very large structs
This is a fix to the code in clang which inserts padding arguments to
ensure that the ARM backend can emit AAPCS-VFP compliant code. This code
needs to track the number of registers which have been allocated in order
to do this. When passing a very large struct (>64 bytes) by value, clang
emits IR which takes a pointer to the struct, but the backend converts this
back to passing the struct in registers and on the stack. The bug was that
this was being considered by clang to only use one register, meaning that
there were situations in which padding arguments were incorrectly emitted
by clang.

llvm-svn: 211898
2014-06-27 13:59:27 +00:00
James Molloy b452f78ad2 [ARM-BE] Generate correct NEON intrinsics for big endian systems.
The NEON intrinsics in arm_neon.h are designed to work on vectors
"as-if" loaded by (V)LDR. We load vectors "as-if" (V)LD1, so the
intrinsics are currently incorrect.

This patch adds big-endian versions of the intrinsics that do the
"obvious but dumb" thing of reversing all vector inputs and all
vector outputs. This will produce extra REVs, but we trust the
optimizer to remove them.

llvm-svn: 211893
2014-06-27 11:53:35 +00:00
Eli Bendersky b198b4e864 Rename loop unrolling and loop vectorizer metadata to have a common prefix.
[Clang part]

These patches rename the loop unrolling and loop vectorizer metadata
such that they have a common 'llvm.loop.' prefix.  Metadata name
changes:

llvm.vectorizer.* => llvm.loop.vectorizer.*
llvm.loopunroll.* => llvm.loop.unroll.*

This was a suggestion from an earlier review
(http://reviews.llvm.org/D4090) which added the loop unrolling
metadata. 

Patch by Mark Heffernan.

llvm-svn: 211712
2014-06-25 15:42:16 +00:00
James Molloy b8fd41926c CHECK-LABEL'ify this test.
llvm-svn: 211687
2014-06-25 11:50:56 +00:00
James Molloy 7d64a0eec4 [AArch32] Fix a stupid error in an architectural guard
The < 8 instead of <= 8 meant that a bunch of vreinterprets were not available on v8 AArch32. Simplify the guard to just !defined(aarch64) while we're at it, and enable some v8 AArch32 testing.

llvm-svn: 211686
2014-06-25 11:46:24 +00:00
David Majnemer 0c43d8077e AST: Initialization with dllimport functions in C
The C++ language requires that the address of a function be the same
across all translation units.  To make __declspec(dllimport) useful,
this means that a dllimported function must also obey this rule.  MSVC
implements this by dynamically querying the import address table located
in the linked executable.  This means that the address of such a
function in C++ is not constant (which violates other rules).

However, the C language has no notion of ODR nor does it permit dynamic
initialization whatsoever.  This requires implementations to _not_
dynamically query the import address table and instead utilize a wrapper
function that will be synthesized by the linker which will eventually
query the import address table.  The effect this has is, to say the
least, perplexing.

Consider the following C program:
__declspec(dllimport) void f(void);

typedef void (*fp)(void);

static const fp var = &f;

const fp fun() { return &f; }

int main() { return fun() == var; }

MSVC will statically initialize "var" with the address of the wrapper
function and "fun" returns the address of the actual imported function.
This means that "main" will return false!

Note that LLVM's optimizers are strong enough to figure out that "main"
should return true.  However, this result is dependent on having
optimizations enabled!

N.B.  This change also permits the usage of dllimport declarators inside
of template arguments; they are sufficiently constant for such a
purpose.  Add tests to make sure we don't regress here.

llvm-svn: 211677
2014-06-25 08:15:07 +00:00
Rafael Espindola 0a500af186 Correctly Load Mixed FP-GP Variadic Arguments for x86-64.
According to the x86-64 ABI, structures with both floating point and
integer members are split between floating-point and general purpose
registers, and consecutive 32-bit floats can be packed into a single
floating point register.

In the case of variadic functions these are stored to memory and the position
recorded in the va_list. This was already correctly implemented in
llvm.va_start.

The problem is that the code in clang for implementing va_arg was reading
floating point registers from the wrong location.

Patch by Thomas Jablin.

Fixes PR20018.

llvm-svn: 211626
2014-06-24 20:01:50 +00:00
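An illustrative sketch of the case being fixed: a variadic callee reading a struct with both floating-point and integer eightbytes through va_arg:

    #include <stdarg.h>

    struct Mixed { double d; int i; };   /* one SSE eightbyte plus one GP eightbyte */

    int sum_mixed(int count, ...) {
      va_list ap;
      va_start(ap, count);
      int total = 0;
      for (int n = 0; n < count; ++n) {
        struct Mixed m = va_arg(ap, struct Mixed);
        total += (int)m.d + m.i;
      }
      va_end(ap);
      return total;
    }
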
Ulrich Weigand bebc55b13b [PowerPC] Fix small argument stack slot offset for LE
When small arguments (structures < 8 bytes or "float") are passed in a
stack slot in the ppc64 SVR4 ABI, they must reside in the least
significant part of that slot.  On BE, this means that an offset needs
to be added to the stack address of the parameter, but on LE, the least
significant part of the slot has the same address as the slot itself.

For the most part, this is handled in the LLVM back-end, where I just
fixed the LE case in commit r211368.

However, there is one piece of the clang front-end that is also aware of
these stack-slot offsets: PPC64_SVR4_ABIInfo::EmitVAArg.  This patch
updates that routine to take endianness into account.

llvm-svn: 211370
2014-06-20 16:37:40 +00:00
Oliver Stannard e3a4fb6512 Add module flags metadata to record the settings for enum and wchar width
Add module flags metadata to record the settings for enum and wchar width,
to allow correct ARM build attribute generation

llvm-svn: 211354
2014-06-20 12:43:07 +00:00
Oliver Stannard c8e3b5f849 Improve robustness of tests for module flags metadata
Fix clang tests to not break if the ID numbers of module flags metadata
nodes change.

llvm-svn: 211276
2014-06-19 16:10:21 +00:00
Saleem Abdulrasool 11415c6120 tests: relax ms-intrinsics test
Relax the tests to allow for differences between release and debug builds.  This
should fix the buildbots.

Thanks to Benjamin Kramer and Eric Christo for their invaluable tip that this
was release build specific issue.

llvm-svn: 211227
2014-06-18 21:48:44 +00:00
Saleem Abdulrasool 114efe0dc8 CodeGen: improve MS intrinsics support
Add support for _InterlockedCompareExchangePointer, _InterlockedExchangePointer,
_InterlockedExchange.  These are available as compiler intrinsics on ARM and x86.
These are used directly by the Windows SDK headers without use of the intrin
header.

llvm-svn: 211216
2014-06-18 20:51:10 +00:00
Tim Northover 831d728f9a AArch64: re-enable tests that were looking for a non-existent backend.
In the final phase of the merge, I managed to disable a bunch of Clang
tests accidentally. Fortunately none of them seem to have broken in
the interim.

llvm-svn: 211149
2014-06-18 08:37:28 +00:00
James Molloy dee4ab08ba Rewrite ARM NEON intrinsic emission completely.
There comes a time in the life of any amateur code generator when dumb string
concatenation just won't cut it any more. For NeonEmitter.cpp, that time has
come.

There were a bunch of magic type codes which meant different things depending on
the context. There were a bunch of special cases that really had no reason to be
there but the whole thing was so creaky that removing them would cause something
weird to fall over. There was a 1000 line switch statement for code generation
involving string concatenation, which actually did lexical scoping to an extent
(!!) with a bunch of semi-repeated cases.

I tried to refactor this three times in three different ways without
success. The only way forward was to rewrite the entire thing. Luckily the
testing coverage on this stuff is absolutely massive, both with regression tests
and the "emperor" random test case generator.

The main change is that previously, in arm_neon.td a bunch of "Operation"s were
defined with special names. NeonEmitter.cpp knew about these Operations and
would emit code based on a huge switch. Actually this doesn't make much sense -
the type information was held as strings, so type checking was impossible. Also
TableGen's DAG type actually suits this sort of code generation very well
(surprising that...)

So now every operation is defined in terms of TableGen DAGs. There are a bunch
of operators to use, including "op" (a generic unary or binary operator), "call"
(to call other intrinsics) and "shuffle" (take a guess...). One of the main
advantages of this apart from making it more obvious what is going on, is that
we have proper type inference. This has two obvious advantages:

  1) TableGen can error on bad intrinsic definitions easier, instead of just
     generating wrong code.
  2) Calls to other intrinsics are typechecked too. So
     we no longer need to work out whether the thing we call needs to be the Q-lane
     version or the D-lane version - TableGen knows that itself!

Here's an example: before:

  case OpAbdl: {
    std::string abd = MangleName("vabd", typestr, ClassS) + "(__a, __b)";
    if (typestr[0] != 'U') {
      // vabd results are always unsigned and must be zero-extended.
      std::string utype = "U" + typestr.str();
      s += "(" + TypeString(proto[0], typestr) + ")";
      abd = "(" + TypeString('d', utype) + ")" + abd;
      s += Extend(utype, abd) + ";";
    } else {
      s += Extend(typestr, abd) + ";";
    }
    break;
  }

after:

  def OP_ABDL     : Op<(cast "R", (call "vmovl", (cast $p0, "U",
                                                       (call "vabd", $p0, $p1))))>;

As an example of what happens if you do something wrong now, here's what happens
if you make $p0 unsigned before the call to "vabd" - that is, $p0 -> (cast "U",
$p0):

arm_neon.td:574:1: error: No compatible intrinsic found - looking up intrinsic 'vabd(uint8x8_t, int8x8_t)'
Available overloads:
  - float64x2_t vabdq_v(float64x2_t, float64x2_t)
  - float64x1_t vabd_v(float64x1_t, float64x1_t)
  - float64_t vabdd_f64(float64_t, float64_t)
  - float32_t vabds_f32(float32_t, float32_t)
... snip ...

This makes it seriously easy to work out what you've done wrong in fairly nasty
intrinsics.

As part of this I've massively beefed up the documentation in arm_neon.td too.

Things still to do / on the radar:
  - Testcase generation. This was implemented in the previous version and not in
    the new one, because
    - Autogenerated tests are not being run. The testcase in test/ differs from
      the autogenerated version.
    - There were a whole slew of special cases in the testcase generation that just
      felt (and looked) like hacks.
    If someone really feels strongly about this, I can try and reimplement it too.
  - Big endian. That's coming soon and should be a very small diff on top of this one.

llvm-svn: 211101
2014-06-17 13:11:27 +00:00
Jim Grosbach 8ddd66928c AArch64: Fix silly think-o in tests.
rdar://9283021

llvm-svn: 211064
2014-06-16 22:18:26 +00:00
Jim Grosbach 79140826bc AArch64: Support for __builtin_arm_rbit() and __builtin_arm_rbit64().
__builtin_arm_rbit() and __builtin_arm_rbit64().

rdar://9283021

llvm-svn: 211060
2014-06-16 21:56:02 +00:00
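A tiny sketch of the two builtins; the 64-bit variant is AArch64-only:

    #include <stdint.h>

    uint32_t r32(uint32_t x) { return __builtin_arm_rbit(x); }
    uint64_t r64(uint64_t x) { return __builtin_arm_rbit64(x); }   /* AArch64 only */
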
Jim Grosbach 171ec34544 ARM: Support for __builtin_arm_rbit() intrinsic.
Reverse the bits in a word. Maps to the RBIT instruction.

rdar://9283021

llvm-svn: 211059
2014-06-16 21:55:58 +00:00
Tim Northover ba2b33b4fe Fix test for release builds.
llvm-svn: 210934
2014-06-13 20:00:38 +00:00
Tim Northover cadbbe1537 Atomics: emit "cmpxchg weak" where possible
Most builtins date from before the "cmpxchg weak" was a gleam in the
C++ committee's eye, so fortunately not much needs to change. But a
few of them *do* acknowledge that failure is possible.

For these, we'll emit the usual cartesian product of cmpxchg
operations if we can't statically determine weakness.  CodeGen can
sort it out later if the function gets inlined.

The only other non-trivial aspect of this is (I think) that we emit
the scalar expression for "IsWeak" once, at the beginning, and
propagate its value through the successive blocks. There's not much in
it, but it's slightly more consistent with the existing handling of
FailureOrder.

llvm-svn: 210932
2014-06-13 19:43:04 +00:00
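A hedged example of a builtin that exposes the failure case and can therefore produce a weak cmpxchg; the memory orders are illustrative:

    _Bool try_swap(int *p, int expected, int desired) {
      /* the 1 requests a weak exchange; spurious failure is permitted */
      return __atomic_compare_exchange_n(p, &expected, desired, /*weak=*/1,
                                         __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
    }
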
Alexey Samsonov e595e1ade0 Remove top-level Clang -fsanitize= flags for optional ASan features.
Init-order and use-after-return modes can currently be enabled
by runtime flags. use-after-scope mode is not really working at the
moment.

The only problem I see is that users won't be able to disable extra
instrumentation for init-order and use-after-scope by a top-level Clang flag.
But this instrumentation was implicitly enabled for quite a while and
we didn't hear from users hurt by it.

llvm-svn: 210924
2014-06-13 17:53:44 +00:00
Tim Northover b49b04bbe0 IR-change: cmpxchg operations now return { iN, i1 }.
This is a minimal fix for clang. I'll soon add support for generating
weak variants when requested, but that's not really necessary for the
LLVM change in isolation.

llvm-svn: 210907
2014-06-13 14:24:59 +00:00
Tim Northover d7756c5a68 Tests: use CHECK-LABEL to help debugging failures
llvm-svn: 210906
2014-06-13 14:24:48 +00:00
Brad Smith 378e7f9b78 Use dwarf-2 by default on OpenBSD and FreeBSD.
The Tools.cpp part of the patch partially based on a patch from
FreeBSD's LLVM tree.

llvm-svn: 210883
2014-06-13 03:35:37 +00:00
Eli Bendersky 86483b3a0c Add loop unroll pragma support
http://reviews.llvm.org/D4089

Patch by Mark Heffernan.

llvm-svn: 210667
2014-06-11 17:56:26 +00:00
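A sketch using the loop-hint spelling Clang eventually settled on; treat the exact option name as an assumption for this particular revision:

    void scale(float *a, int n) {
    #pragma clang loop unroll_count(4)
      for (int i = 0; i < n; ++i)
        a[i] *= 2.0f;
    }
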
Bill Schmidt 56a6967000 [PPC64LE] Fix vec_sld and vec_vsldoi for little endian
The vec_sld and vec_vsldoi interfaces perform a left-shift on vector
arguments for both big and little endian.  However, because they rely
on the vec_perm interface which is endian-dependent, the permutation
vector needs to be reversed for LE to get the proper shift direction.

I've added some extra testing for these interfaces for LE in the
builtins-ppc-altivec.c.

llvm-svn: 210657
2014-06-11 15:48:46 +00:00
Reid Kleckner 4173f6aff9 *Really* fix DOS newlines introduced in r210330
r210369 didn't quite catch all of them.

llvm-svn: 210593
2014-06-10 21:35:24 +00:00
Evgeniy Stepanov 2be29929be Fix line numbers for code inlined from __nodebug__ functions.
Instructions from __nodebug__ functions don't have file:line
information even when inlined into non-nodebug functions. As a result,
intrinsics (SSE and other) from <*intrin.h> clang headers _never_
have file:line information.

With this change, an instruction without !dbg metadata gets one from
the call instruction when inlined.

Fixes PR19001.

llvm-svn: 210459
2014-06-09 09:09:19 +00:00
Bill Schmidt 7f6596bb13 [PPC64LE] Implement little-endian semantics for vec_sums
The PowerPC vsumsws instruction, accessed via vec_sums, is defined
architecturally with a big-endian bias, in that the second input vector
and the result always reference big-endian element 3 (little-endian
element 0).  For ease of porting, the programmer wants element 3 in
both cases.

To provide this semantics, for little endian we generate a permute for
the second input vector prior to the vsumsws instruction, and generate
a permute for the result vector following the vsumsws instruction.

The correctness of this code is tested by the new sums.c test added in
a previous patch, as well as the modifications to
builtins-ppc-altivec.c in the present patch.

llvm-svn: 210449
2014-06-09 03:31:47 +00:00
Joey Gouly 41181d140c Convert tests I recently add to use -verify instead of FileCheck.
This uncovered something strange. Diagnostics for InlineAsm have source locations
that don't really map to where they are within the .c source file.

llvm-svn: 210440
2014-06-08 21:28:54 +00:00
Bill Schmidt d7c53a91df [PPC64LE] Implement little-endian semantics for vec_unpack[hl]
The PowerPC vector-unpack-high and vector-unpack-low instructions
are defined architecturally with a big-endian bias, in that the vector
element numbering is assumed to be "left to right" regardless of
whether the processor is in big-endian or little-endian mode.  This
effectively reverses the meaning of "high" and "low."  Such a
definition is unnatural for little-endian code generation.

To facilitate ease of porting, the vec_unpackh and vec_unpackl
interfaces are designed to use natural element ordering, so that
elements are numbered according to little-endian design principles
when code is generated for a little-endian target.  The desired
semantics can be achieved by using the opposite instruction for
little-endian mode.  That is, when a call to vec_unpackh appears in
the code, a vector-unpack-low is generated, and when a call to
vec_unpackl appears in the code, a vector-unpack-high is generated.

The correctness of this code is tested by the new unpack.c test
added in a previous patch, as well as the modifications to
builtins-ppc-altivec.c in the present patch.

Note that these interfaces were originally incorrectly implemented
when they take a vector pixel argument.  This patch corrects this
implementation for both big- and little-endian code generation.

llvm-svn: 210391
2014-06-07 02:20:52 +00:00
Bill Schmidt 86f673a005 [PPC64LE] Update test for vec_sum2s interface
Commit r210384 prematurely included changes to the little-endian
implementation of the vec_sum2s interface.  This patch modifies
test/CodeGen/builtins-ppc-altivec.c to test those changes.

llvm-svn: 210389
2014-06-07 01:47:42 +00:00
Bill Schmidt 7f0a5c5141 [PPC64LE] Update builtins-ppc-altivec.c for PPC64 and PPC64LE
The Altivec builtin test case test/CodeGen/builtins-ppc-altivec.c has
always been executed only for 32-bit PowerPC.  These tests are equally
valid for 64-bit PowerPC.  This patch updates the test to be run for
three targets:  powerpc-unknown-unknown, powerpc64-unknown-unknown,
and powerpc64le-unknown-unknown.  The expected code generation changes
for some of the Altivec builtins for little endian, so this patch adds
new CHECK-LE variants to the test for the powerpc64le target.

These tests satisfy the testing requirements for some previous patches
committed over the last couple of days for lib/Headers/altivec.h:
r210279 for vec_perm, r210337 for vec_mul[eo], and r210340 for
vec_pack.

llvm-svn: 210384
2014-06-06 23:12:00 +00:00
Aaron Ballman b06b15aa28 Adding a new #pragma for the vectorize and interleave optimization hints.
Patch thanks to Tyler Nowicki!

llvm-svn: 210330
2014-06-06 12:40:24 +00:00
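An illustrative use of the new hints; the option names follow the '#pragma clang loop' spelling and the loop body is an assumption:

    void saxpy(float *x, float *y, float a, int n) {
    #pragma clang loop vectorize(enable) interleave(enable)
      for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
    }
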
Joey Gouly 5798b26c65 When an inline-asm diagnostic is reported by the backend, report it with the
correct severity.

Previously all inline-asm diagnostics were reported as errors.

llvm-svn: 210286
2014-06-05 21:23:42 +00:00
Renato Golin 0d2f580200 Fix bot for named register test
llvm-svn: 210275
2014-06-05 16:52:20 +00:00
Renato Golin 2e31e4e47b Add pointer types to global named register
This patch adds support for pointer types in global named registers variables.
It'll be lowered as a pair of read/write_register and inttoptr/ptrtoint calls.
Also adds some early checks on types on SemaDecl to avoid the assert.

Tests changed accordingly. (PR19837)

llvm-svn: 210274
2014-06-05 16:45:22 +00:00
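A hedged sketch of a global named register variable with pointer type; the register name "sp" and the AArch64 target are assumptions:

    register char *stack_ptr __asm__("sp");   /* global named register, now allowed to be a pointer */

    char *current_stack(void) {
      return stack_ptr;   /* lowered via llvm.read_register plus inttoptr */
    }
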
Robert Lytton 6adb20f720 XCore target: Fix 'typestring' binding qualifier to the array and not the type
Differential Revision: http://reviews.llvm.org/D3949

llvm-svn: 210250
2014-06-05 09:06:21 +00:00
Rafael Espindola 27c60b512a Update for llvm API change.
Aliases in llvm now hold an arbitrary expression.

llvm-svn: 210063
2014-06-03 02:42:01 +00:00