Several of these tests (the two deleted, and the one removal edit) were
relying on the optimizer to collapse things to test some frontend
feature. The tests were really old, and the features seemed amply covered
by other parts of the test suite, so I just removed them.
If anyone thinks they're valuable enough to keep/fix, we can play around
with that, for sure.
(inspired by r252872)
llvm-svn: 253114
The ``disable_tail_calls`` attribute instructs the backend to not
perform tail call optimization inside the marked function.
For example,
int callee(int);

int foo(int a) __attribute__((disable_tail_calls)) {
  return callee(a); // This call is not tail-call optimized.
}
Note that this attribute is different from 'not_tail_called', which
prevents tail-call optimization of calls to the marked function.
rdar://problem/8973573
Differential Revision: http://reviews.llvm.org/D12547
llvm-svn: 252986
In r244063, I had caused these builtins to call the same-named library
functions, __atomic_*_fetch_SIZE. However, this was incorrect: while
those functions are in fact supported by GCC's libatomic, they're not
documented by the spec (and gcc doesn't ever call them).
Instead, you're /supposed/ to call the __atomic_fetch_* builtins and
then redo the operation inline to return the final value.
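For illustration, __atomic_add_fetch should now behave as if written
like this (a sketch; the memory order is illustrative):

unsigned add_fetch(unsigned *p, unsigned v) {
  // Call the builtin the spec actually documents (which may lower to
  // the __atomic_fetch_add libcall)...
  unsigned old = __atomic_fetch_add(p, v, __ATOMIC_SEQ_CST);
  // ...then redo the operation inline to produce the post-op value.
  return old + v;
}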
Differential Revision: http://reviews.llvm.org/D14385
llvm-svn: 252920
target features that the caller function doesn't provide. This matches
the backend's existing refusal to inline functions whose target features
don't match the caller's, and diagnoses the problem earlier in the
always_inline case.
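For example (a hypothetical reproducer; the exact diagnostic text may
differ):

#include <immintrin.h>

__attribute__((target("avx"), always_inline))
static __m256d add256(__m256d a, __m256d b) {
  return a + b;
}

__m256d caller(__m256d a, __m256d b) {
  // Without -mavx, this is now rejected in the frontend instead of
  // failing in the backend when the inline is attempted.
  return add256(a, b);
}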
Fix up a few test cases that were, in fact, invalid if you tried
to generate code from the backend with the specified target features
and add a couple of tests to illustrate what's going on.
This should fix PR25246.
llvm-svn: 252834
When a struct's size is not a power of 2, the corresponding _Atomic() type is
promoted to the nearest power-of-2 size. We already correctly handled normal
C++ expressions of
this form, but direct calls to the __c11_atomic_whatever builtins ended up
performing dodgy operations on the smaller non-atomic types (e.g. memcpy too
much). Later optimisations removed this as undefined behaviour.
This patch converts EmitAtomicExpr to allocate its temporaries at the full
atomic width, sidestepping the issue.
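For example (a hypothetical 6-byte struct, padded to 8 bytes in its
_Atomic form):

struct S { char data[6]; };

void set(_Atomic(struct S) *addr, struct S val) {
  // The temporary for 'val' is now allocated at the full 8-byte atomic
  // width, so we no longer copy past the end of a 6-byte object.
  __c11_atomic_store(addr, val, __ATOMIC_SEQ_CST);
}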
llvm-svn: 252507
Add the -meabi flag to control the LLVM EABI version.
Without '-meabi', or with '-meabi default', the target triple's default is used.
'-meabi gnu' selects the GNU EABI.
'-meabi 4' and '-meabi 5' select EABI versions 4 and 5, respectively.
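For example (invocations are illustrative):

clang -target arm-none-eabi -meabi gnu -S test.c  # GNU EABI
clang -target arm-none-eabi -meabi 5 -S test.c    # EABI version 5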
A similar patch was introduced in LLVM.
Patch by Vinicius Tinti.
llvm-svn: 252463
This attribute is used to prevent tail-call optimizations to the marked
function. For example, in the following piece of code, foo1 will not be
tail-call optimized:
int __attribute__((not_tail_called)) foo1(int);

int foo2(int a) {
  return foo1(a); // Tail-call optimization is not performed.
}
The attribute has an effect only on statically bound calls; it has no
effect on indirect calls. Also, virtual functions and Objective-C
methods cannot be marked as 'not_tail_called'.
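For instance (an illustrative sketch), a call through a function pointer
is unaffected:

int __attribute__((not_tail_called)) foo1(int);

int foo3(int a) {
  int (*fp)(int) = foo1;
  return fp(a); // Indirect call; may still be tail-call optimized.
}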
rdar://problem/22667622
Differential Revision: http://reviews.llvm.org/D12922
llvm-svn: 252369
This patch fixes one more thing in MCU psABI support: LongDoubleWidth should be set to 64.
Differential Revision: http://reviews.llvm.org/D14285
llvm-svn: 252156
This patch implements two things in front-end for MCU psABI support:
1) "long double type is the same as double."
2) "New predefined C/C++ pre-processor symbols: iamcu and iamcu__.
Differential Revision: http://reviews.llvm.org/D14205
llvm-svn: 251786
GCC uses the x87DoubleExtended model for long doubles, and passes them
indirectly by address through function calls.
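A sketch of what the new IR-level test checks (the exact IR is
illustrative):

long double add_ld(long double a, long double b) { return a + b; }
// For an x86_64 MinGW target this should now lower to something like:
//   define x86_fp80 @add_ld(x86_fp80* %a, x86_fp80* %b)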
Also replace the existing mingw-long-double assembly emitting test with
an IR-level test.
llvm-svn: 251567
Linking options for a particular file depend on the option that specifies the file.
Currently there are two:
* -mlink-bitcode-file links in complete content of the specified file.
* -mlink-cuda-bitcode links in only the symbols needed by current TU.
Linked symbols are internalized. This bitcode linking mode is used to
link device-specific bitcode provided by CUDA.
Files are linked in the order they are specified on the command line.
-mlink-cuda-bitcode replaces the -fcuda-uses-libdevice flag.
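For example (a hypothetical -cc1 invocation; file names are illustrative):

clang -cc1 -triple nvptx64-nvidia-cuda -fcuda-is-device \
  -mlink-cuda-bitcode libdevice.compute_35.10.bc \
  -mlink-bitcode-file extra.bc -emit-llvm device.cu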
Differential Revision: http://reviews.llvm.org/D13913
llvm-svn: 251427
only one of a group of possibilities.
This changes the syntax in the builtin files to represent:
  ',' as the AND operator
  '|' as the OR operator
The former syntax matches how the backend tablegen files represent
multiple subtarget features being required.
Updated the builtin and intrinsic headers accordingly for the new
syntax.
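For example (hypothetical entries showing the new feature strings):

TARGET_BUILTIN(__builtin_ia32_foo, "V4fV4f", "", "sse4.1,sse4.2") // requires both
TARGET_BUILTIN(__builtin_ia32_bar, "V4fV4f", "", "sse4.1|sse4a")  // requires either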
llvm-svn: 251388
The MCU psABI calling convention is somewhat, but not quite, like -mregparm 3.
In particular, the rules involving structs are different.
Differential Revision: http://reviews.llvm.org/D13978
llvm-svn: 251224
According to the Intel documentation, the mask operand of the maskload and
maskstore intrinsics is always a vector of packed integer/long integer values.
This patch introduces the following two changes:
1. It fixes the avx maskload/store intrinsic definitions in avxintrin.h.
2. It changes BuiltinsX86.def to match the correct gcc definitions for avx
maskload/store (see D13861 for more details).
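With the fix, code like the following type-checks with a packed-integer
mask (a sketch; compile with -mavx):

#include <immintrin.h>

__m256d load_masked(const double *p, __m256i mask) {
  // The mask parameter of _mm256_maskload_pd is now __m256i, matching
  // the Intel documentation, instead of a __m256d.
  return _mm256_maskload_pd(p, mask);
}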
Differential Revision: http://reviews.llvm.org/D13861
llvm-svn: 250816
The Intel MCU psABI requires floating-point values to be passed in-reg.
This makes the x86-32 ABI code respect "-mfloat-abi soft" and generate float inreg arguments.
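A sketch of the effect (the IR in the comment is illustrative):

// With -mfloat-abi soft on the MCU target, float arguments are now
// passed inreg, e.g.:
//   define float @addf(float inreg %a, float inreg %b)
float addf(float a, float b) { return a + b; }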
Differential Revision: http://reviews.llvm.org/D13554
llvm-svn: 250689
r246877 made __builtin_object_size substantially more aggressive with
unknown bases if Type=1 or Type=3, which causes issues when we encounter
code like this:
struct Foo {
  int a;
  char str[1];
};

const char str[] = "Hello, World!";
struct Foo *f = (struct Foo *)malloc(sizeof(*f) + strlen(str));
strcpy(f->str, str);
__builtin_object_size(&f->str, 1) would hand back 1, which is
technically correct given the type of Foo, but the type of Foo lies to
us about how many bytes are available in this case.
This patch adds support for this "writing off the end" idiom -- we now
answer conservatively when we're given the address of the very last
member in a struct.
Differential Revision: http://reviews.llvm.org/D12169
llvm-svn: 250488
Previously, our logic when taking the address of an overloaded function
would not consider enable_if attributes, so long as all of the enable_if
conditions on a given candidate were true. So, two functions with
identical signatures (one with enable_if attributes, the other without),
would be considered equally good overloads. If we were calling the
function instead of taking its address, then the function with enable_if
attributes would be preferred.
This patch makes us prefer the candidate with enable_if regardless of
whether we're calling the function or taking its address.
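For example (a hypothetical overload set):

void f(int) __attribute__((enable_if(1, "preferred")));
void f(int);

void (*fp)(int) = f; // now picks the enable_if candidate, just as a
                     // direct call f(0) would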
Differential Revision: http://reviews.llvm.org/D13795
llvm-svn: 250486
match the feature set of the function that they're being called from.
This ensures that we can effectively diagnose some[1] code that would
instead ICE in the backend with a failure to select message.
Example:
__m128 foo(__m128 a, __m128 b) {
  return __builtin_ia32_addsubps(b, a);
}
compiled for normal x86_64 via:
clang -target x86_64-linux-gnu -c
would fail to compile in the back end because the normal subtarget
features for x86_64 only include sse2 and the builtin requires sse3.
[1] We're still not erroring on:
__m128i bar(__m128i const *p) { return _mm_lddqu_si128(p); }
where we should fail and error on an always_inline function being
inlined into a function that doesn't support the subtarget features
required.
llvm-svn: 250473