target attribute, don't include "negative" subtarget features in the
list of required features. Builtins are positive by default so don't
need this change, but we pull the default list of features from the
command line, and so need to make sure that the error only lists features
that are turned on for code generation.
llvm-svn: 253242
These two intrinsics are defined in arm_acle.h.
__rev16l needs to rotate by 16 bits, but it was actually rotating by 2 bits.
For AArch64, where long is 64 bits, this would still be wrong.
__rev16ll was incorrect, it reversed the bytes in each 32-bit word, rather than
each 16-bit halfword. The correct implementation is to apply __rev16 to the top
and bottom words of the 64-bit value.
For AArch32 targets, these get compiled down to the hardware rev16 instruction
at -O1 and above. For AArch64 targets, the 64-bit ones get compiled to two
32-bit rev16 instructions, because there is not currently a pattern for the
64-bit rev16 instruction.
Differential Revision: http://reviews.llvm.org/D14609
llvm-svn: 253211
Several of these tests (the two deleted, and the one removal edit) were
relying on the optimizer to collapse things to test some frontend
feature. The tests were really old and features seemed amply covered by
other parts of the test suite, so I just removed them.
If anyone thinks they're valuable enough to keep/fix, we can play around
with that, for sure.
(inspired by r252872)
llvm-svn: 253114
The ``disable_tail_calls`` attribute instructs the backend to not
perform tail call optimization inside the marked function.
For example,
int callee(int);
int __attribute__((disable_tail_calls)) foo(int a) {
return callee(a); // This call is not tail-call optimized.
}
Note that this attribute is different from 'not_tail_called', which
prevents tail-call optimization to the marked function.
rdar://problem/8973573
Differential Revision: http://reviews.llvm.org/D12547
llvm-svn: 252986
In r244063, I had caused these builtins to call the same-named library
functions, __atomic_*_fetch_SIZE. However, this was incorrect: while
those functions are in fact supported by GCC's libatomic, they're not
documented by the spec (and gcc doesn't ever call them).
Instead, you're /supposed/ to call the __atomic_fetch_* builtins and
then redo the operation inline to return the final value.
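A hedged sketch of the corrected lowering (illustrative C, not the exact
emitted IR), using __atomic_add_fetch as the example:
int add_fetch(int *p, int v) {
  // Call the documented builtin, which returns the old value...
  int old = __atomic_fetch_add(p, v, __ATOMIC_SEQ_CST);
  // ...then redo the operation inline to produce the final value.
  return old + v;
}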
Differential Revision: http://reviews.llvm.org/D14385
llvm-svn: 252920
target features that the caller function doesn't provide. This matches
the existing backend failure to inline functions that don't have
matching target features - and diagnoses earlier in the case of
always_inline.
Fix up a few test cases that were, in fact, invalid if you tried
to generate code from the backend with the specified target features
and add a couple of tests to illustrate what's going on.
This should fix PR25246.
llvm-svn: 252834
When a struct's size is not a power of 2, the corresponding _Atomic() type is
promoted to the nearest power of 2. We already correctly handled normal C++
expressions of this form, but direct calls to the __c11_atomic_whatever
builtins ended up
performing dodgy operations on the smaller non-atomic types (e.g. memcpy too
much). Later optimisations removed this as undefined behaviour.
This patch converts EmitAtomicExpr to allocate its temporaries at the full
atomic width, sidestepping the issue.
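For illustration, a minimal sketch of the pattern that used to go wrong
(assuming a 3-byte struct, whose atomic width is padded to 4 bytes):
struct S { char c[3]; };
_Atomic(struct S) g;
struct S load_s(void) {
  // Before this fix, the temporary here was allocated at only 3 bytes,
  // while the underlying atomic operation was 4 bytes wide.
  return __c11_atomic_load(&g, __ATOMIC_SEQ_CST);
}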
llvm-svn: 252507
The -meabi flag controls the LLVM EABI version.
Without '-meabi', or with '-meabi default', the target triple's default is used.
'-meabi gnu' selects the GNU EABI.
'-meabi 4' or '-meabi 5' selects EABI version 4 or 5, respectively.
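An illustrative invocation (the target triple here is arbitrary):
clang -target arm-linux-gnueabi -meabi gnu -S test.c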
A similar patch was introduced in LLVM.
Patch by Vinicius Tinti.
llvm-svn: 252463
This attribute is used to prevent tail-call optimizations to the marked
function. For example, in the following piece of code, foo1 will not be
tail-call optimized:
int __attribute__((not_tail_called)) foo1(int);
int foo2(int a) {
return foo1(a); // Tail-call optimization is not performed.
}
The attribute has effect only on statically bound calls. It has no
effect on indirect calls. Also, virtual functions and Objective-C
methods cannot be marked 'not_tail_called'.
rdar://problem/22667622
Differential Revision: http://reviews.llvm.org/D12922
llvm-svn: 252369
This patch fixes one more thing in MCU psABI support: LongDoubleWidth should be set to 64.
Differential Revision: http://reviews.llvm.org/D14285
llvm-svn: 252156
This patch implements two things in front-end for MCU psABI support:
1) "long double type is the same as double."
2) "New predefined C/C++ pre-processor symbols: iamcu and iamcu__.
Differential Revision: http://reviews.llvm.org/D14205
llvm-svn: 251786
GCC uses the x87DoubleExtended model for long doubles, and passes them
indirectly by address through function calls.
Also replace the existing mingw-long-double assembly emitting test with
an IR-level test.
llvm-svn: 251567
Linking options for a particular file depend on the option that specifies the file.
Currently there are two:
* -mlink-bitcode-file links in complete content of the specified file.
* -mlink-cuda-bitcode links in only the symbols needed by current TU.
Linked symbols are internalized. This bitcode linking mode is used to
link device-specific bitcode provided by CUDA.
Files are linked in the order they are specified on the command line.
-mlink-cuda-bitcode replaces -fcuda-uses-libdevice flag.
Differential Revision: http://reviews.llvm.org/D13913
llvm-svn: 251427
only one of a group of possibilities.
This changes the syntax in the builtin files to represent:
',' as the and operator
'|' as the or operator
The former syntax matches how the backend tablegen files represent
multiple subtarget features being required.
Updated the builtin and intrinsic headers accordingly for the new
syntax.
llvm-svn: 251388
The MCU psABI calling convention is somewhat, but not quite, like -mregparm 3.
In particular, the rules involving structs are different.
Differential Revision: http://reviews.llvm.org/D13978
llvm-svn: 251224
According to the Intel documentation, the mask operand of the maskload and
maskstore intrinsics is always a vector of packed integer/long integer values.
This patch introduces the following two changes:
1. It fixes the avx maskload/store intrinsic definitions in avxintrin.h.
2. It changes BuiltinsX86.def to match the correct gcc definitions for avx
maskload/store (see D13861 for more details).
Differential Revision: http://reviews.llvm.org/D13861
llvm-svn: 250816
The Intel MCU psABI requires floating-point values to be passed in-reg.
This makes the x86-32 ABI code respect "-mfloat-abi soft" and generate float inreg arguments.
Differential Revision: http://reviews.llvm.org/D13554
llvm-svn: 250689
r246877 made __builtin_object_size substantially more aggressive with
unknown bases if Type=1 or Type=3, which causes issues when we encounter
code like this:
struct Foo {
int a;
char str[1];
};
const char str[] = "Hello, World!";
struct Foo *f = (struct Foo *)malloc(sizeof(*f) + strlen(str));
strcpy(&f->str, str);
__builtin_object_size(&f->str, 1) would hand back 1, which is
technically correct given the type of Foo, but the type of Foo lies to
us about how many bytes are available in this case.
This patch adds support for this "writing off the end" idiom -- we now
answer conservatively when we're given the address of the very last
member in a struct.
Differential Revision: http://reviews.llvm.org/D12169
llvm-svn: 250488
Previously, our logic when taking the address of an overloaded function
would not consider enable_if attributes, so long as all of the enable_if
conditions on a given candidate were true. So, two functions with
identical signatures (one with enable_if attributes, the other without),
would be considered equally good overloads. If we were calling the
function instead of taking its address, then the function with enable_if
attributes would be preferred.
This patch makes us prefer the candidate with enable_if regardless of
whether we're calling or taking the address of an overloaded function.
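A hedged sketch of the scenario (the enable_if condition is always true,
so taking the address remains allowed):
void f(int) __attribute__((enable_if(1, "")));
void f(int);
void (*p)(int) = &f; // now resolves to the enable_if candidate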
Differential Revision: http://reviews.llvm.org/D13795
llvm-svn: 250486
match the feature set of the function that they're being called from.
This ensures that we can effectively diagnose some[1] code that would
instead ICE in the backend with a failure to select message.
Example:
__m128d foo(__m128d a, __m128d b) {
return __builtin_ia32_addsubps(b, a);
}
compiled for normal x86_64 via:
clang -target x86_64-linux-gnu -c
would fail to compile in the back end because the normal subtarget
features for x86_64 only include sse2 and the builtin requires sse3.
[1] We're still not erroring on:
__m128i bar(__m128i const *p) { return _mm_lddqu_si128(p); }
where we should fail and error on an always_inline function being
inlined into a function that doesn't support the subtarget features
required.
llvm-svn: 250473
Add support for the `-fdebug-prefix-map=` option as in GCC. The syntax is
`-fdebug-prefix-map=OLD=NEW`. When compiling files from a path beginning with
OLD, change the debug info to indicate the path as starting with NEW. This is
particularly helpful if you are preprocessing in one path and compiling in
another (e.g. for a build cluster with distcc).
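An illustrative invocation (the paths here are hypothetical):
clang -g -fdebug-prefix-map=/distcc/build/src=/home/user/src -c foo.c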
Note that the linearity of the implementation is not as terrible as it may seem.
This is normally done once per file, with the expectation that the map will be
small (1-2 entries), making this roughly linear in the number of input paths.
Addresses PR24619.
llvm-svn: 250094
This fixes a bug where one can take the address of a conditionally
enabled function to drop its enable_if guards. For example:
int foo(int a) __attribute__((enable_if(a > 0, "")));
int (*p)(int) = &foo;
int result = p(-1); // compilation succeeds; calls foo(-1)
Overloading logic has been updated to reflect this change, as well.
Functions with enable_if attributes that are always true are still
allowed to have their address taken.
Differential Revision: http://reviews.llvm.org/D13607
llvm-svn: 250090
Rationale:
// sse3
__m128d test_mm_addsub_pd(__m128d A, __m128d B) {
return _mm_addsub_pd(A, B);
}
// mmx
void shift(__m64 a, __m64 b, int c) {
_mm_slli_pi16(a, c);
_mm_slli_pi32(a, c);
_mm_slli_si64(a, c);
_mm_srli_pi16(a, c);
_mm_srli_pi32(a, c);
_mm_srli_si64(a, c);
_mm_srai_pi16(a, c);
_mm_srai_pi32(a, c);
}
clang -msse3 -mno-mmx file.c -c
For this code we should be able to explicitly turn off MMX
without affecting the compilation of the SSE3 function, and then
diagnose an error when compiling the MMX function.
This is a preparatory patch to the actual diagnosis code which is
coming in a future patch. This sets us up to have the correct information
where we need it and verifies that it's being emitted for the backend
to handle.
llvm-svn: 249733
that we can build up an accurate set of features rather than relying on
TargetInfo initialization via handleTargetFeatures to munge the list
of features.
llvm-svn: 249732
Enums without an explicit, fixed, underlying type are implicitly given a
fixed 'int' type for ABI compatibility with MSVC. However, we can
enforce the standard-mandated rules on these types as if we didn't know
this fact when the tag is not part of a definition.
llvm-svn: 249667
These test updates are almost exclusively about the change in behavior
for enums: enums without a definition are considered incomplete except
when targeting MSVC ABIs. Since these tests are interested in the
'incomplete-enum' behavior, restrict them to %itanium_abi_triple.
llvm-svn: 249660
No ABI for C++ currently makes it possible to implement the standard
100% perfectly. We wrongly hid some of our compatible behavior behind
-fms-compatibility instead of tying it to the compiler ABI.
llvm-svn: 249656
With this change, most 'g' options are rejected by CompilerInvocation.
They remain only as Driver options. The new way to request debug info
from cc1 is with "-debug-info-kind={line-tables-only|limited|standalone}"
and "-dwarf-version={2|3|4}". In the absence of a command-line option
to specify Dwarf version, the Toolchain decides it, rather than placing
Toolchain-specific logic in CompilerInvocation.
Also fix a bug in the Windows compatibility argument parsing
in which the "rightmost argument wins" principle failed.
Differential Revision: http://reviews.llvm.org/D13221
llvm-svn: 249655
Currently FastISel doesn't know how to select vector bitcasts.
During instruction selection, fast-isel always falls back to SelectionDAG
every time it encounters a vector bitcast.
As a consequence of this, all the 'packed vector shift by immediate count'
test cases in avx2-builtins.c are optimized by the DAGCombiner.
In particular, the DAGCombiner would always fold trivial stack loads of
constant shift counts into the operands of packed shift builtins.
This behavior would start changing as soon as I reapply revision 249121.
That revision would teach x86 fast-isel how to select bitcasts between vector
types of the same size.
As a consequence of that change, fast-isel would less often fall back to
SelectionDAG. More importantly, DAGCombiner would no longer be able to
simplify the code by folding the stack reload of a constant.
No functional change.
llvm-svn: 249142
test that our intrinsics behave the same under -fsigned-char and
-funsigned-char.
This further testing uncovered that AVX-2 has a broken cmpgt for 8-bit
elements, and has for a long time. This is fixed in the same way as
SSE4 handles the case.
The other ISA extensions currently work correctly because they use
specific instruction intrinsics. As soon as they are rewritten in terms
of generic IR, they will need to add these special casts. I've added the
necessary testing to catch this however, so we shouldn't have to chase
it down again.
I considered changing the core typedef to be signed, but that seems like
a bad idea. Notably, it would be an ABI break if anyone is reaching into
the innards of the intrinsic headers and passing __v16qi on an API
boundary. I can't be completely confident that this wouldn't happen due
to a macro expanding in a lambda, etc., so it seems much better to leave
it alone. It also matches GCC's behavior exactly.
A fun side note is that for both GCC and Clang, -funsigned-char really
does change the semantics of __v16qi. To observe this, consider:
% cat x.cc
#include <smmintrin.h>
#include <iostream>
int main() {
__v16qi a = { 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
__v16qi b = _mm_set1_epi8(-1);
std::cout << (int)(a / b)[0] << ", " << (int)(a / b)[1] << '\n';
}
% clang++ -o x x.cc && ./x
-1, 1
% clang++ -funsigned-char -o x x.cc && ./x
0, 1
However, while this may be surprising, both Clang and GCC agree.
Differential Revision: http://reviews.llvm.org/D13324
llvm-svn: 249097
recently when we started using direct conversion to model sign
extension. The __v16qi type we use for SSE v16i8 vectors is defined in
terms of 'char' which may or may not be signed! This causes us to
generate pmovsx and pmovzx depending on the setting of -funsigned-char.
This patch just forms an explicitly signed type and uses that to
formulate the sign extension. While this gets the correct behavior
(which we now verify with the enhanced test) this is just the tip of the
iceberg. Now that I know what to look for, I have found errors of this
sort *throughout* our vector code. Fortunately, this is the only
specific place where I know of users actively having their code
miscompiled by Clang due to this, so I'm keeping the fix for those users
minimal and targeted.
I'll be sending a proper email for discussion of how to fix these
systematically, what the implications are, and just how widely broken
this is... From what I can tell, we have never shipped a correct set of
builtin headers for x86 when users rely on -funsigned-char. Oops.
llvm-svn: 248980
Summary: __nvvm_atom_cas_* returns the old value instead of whether the swap succeeds.
Reviewers: eliben, tra
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D13306
llvm-svn: 248951
This is the clang commit associated with llvm r248887.
This commit changes the interface of the vld[1234], vld[234]lane, and vst[1234],
vst[234]lane ARM neon intrinsics and associates an address space with the
pointer that these intrinsics take. This changes, e.g.,
<2 x i32> @llvm.arm.neon.vld1.v2i32(i8*, i32)
to
<2 x i32> @llvm.arm.neon.vld1.v2i32.p0i8(i8*, i32)
This change ensures that address spaces are fully taken into account in the ARM
target during lowering of interleaved loads and stores.
Differential Revision: http://reviews.llvm.org/D13127
llvm-svn: 248888
This patch corresponds to review:
http://reviews.llvm.org/D13190
Implemented the following interfaces to conform to ELF V2 ABI version 1.1.
vector signed __int128 vec_adde (vector signed __int128, vector signed __int128, vector signed __int128);
vector unsigned __int128 vec_adde (vector unsigned __int128, vector unsigned __int128, vector unsigned __int128);
vector signed __int128 vec_addec (vector signed __int128, vector signed __int128, vector signed __int128);
vector unsigned __int128 vec_addec (vector unsigned __int128, vector unsigned __int128, vector unsigned __int128);
vector signed int vec_addc(vector signed int __a, vector signed int __b);
vector bool char vec_cmpge (vector signed char __a, vector signed char __b);
vector bool char vec_cmpge (vector unsigned char __a, vector unsigned char __b);
vector bool short vec_cmpge (vector signed short __a, vector signed short __b);
vector bool short vec_cmpge (vector unsigned short __a, vector unsigned short __b);
vector bool int vec_cmpge (vector signed int __a, vector signed int __b);
vector bool int vec_cmpge (vector unsigned int __a, vector unsigned int __b);
vector bool char vec_cmple (vector signed char __a, vector signed char __b);
vector bool char vec_cmple (vector unsigned char __a, vector unsigned char __b);
vector bool short vec_cmple (vector signed short __a, vector signed short __b);
vector bool short vec_cmple (vector unsigned short __a, vector unsigned short __b);
vector bool int vec_cmple (vector signed int __a, vector signed int __b);
vector bool int vec_cmple (vector unsigned int __a, vector unsigned int __b);
vector double vec_double (vector signed long long __a);
vector double vec_double (vector unsigned long long __a);
vector bool char vec_eqv(vector bool char __a, vector bool char __b);
vector bool short vec_eqv(vector bool short __a, vector bool short __b);
vector bool int vec_eqv(vector bool int __a, vector bool int __b);
vector bool long long vec_eqv(vector bool long long __a, vector bool long long __b);
vector signed short vec_madd(vector signed short __a, vector signed short __b, vector signed short __c);
vector signed short vec_madd(vector signed short __a, vector unsigned short __b, vector unsigned short __c);
vector signed short vec_madd(vector unsigned short __a, vector signed short __b, vector signed short __c);
vector unsigned short vec_madd(vector unsigned short __a, vector unsigned short __b, vector unsigned short __c);
vector bool long long vec_mergeh(vector bool long long __a, vector bool long long __b);
vector bool long long vec_mergel(vector bool long long __a, vector bool long long __b);
vector bool char vec_nand(vector bool char __a, vector bool char __b);
vector bool short vec_nand(vector bool short __a, vector bool short __b);
vector bool int vec_nand(vector bool int __a, vector bool int __b);
vector bool long long vec_nand(vector bool long long __a, vector bool long long __b);
vector bool char vec_orc(vector bool char __a, vector bool char __b);
vector bool short vec_orc(vector bool short __a, vector bool short __b);
vector bool int vec_orc(vector bool int __a, vector bool int __b);
vector bool long long vec_orc(vector bool long long __a, vector bool long long __b);
vector signed long long vec_sub(vector signed long long __a, vector signed long long __b);
vector signed long long vec_sub(vector bool long long __a, vector signed long long __b);
vector signed long long vec_sub(vector signed long long __a, vector bool long long __b);
vector unsigned long long vec_sub(vector unsigned long long __a, vector unsigned long long __b);
vector unsigned long long vec_sub(vector bool long long __a, vector unsigned long long __b);
vector unsigned long long vec_sub(vector unsigned long long __a, vector bool long long __b);
vector float vec_sub(vector float __a, vector float __b);
unsigned char vec_extract(vector bool char __a, int __b);
signed short vec_extract(vector signed short __a, int __b);
unsigned short vec_extract(vector bool short __a, int __b);
signed int vec_extract(vector signed int __a, int __b);
unsigned int vec_extract(vector bool int __a, int __b);
signed long long vec_extract(vector signed long long __a, int __b);
unsigned long long vec_extract(vector unsigned long long __a, int __b);
unsigned long long vec_extract(vector bool long long __a, int __b);
double vec_extract(vector double __a, int __b);
vector bool char vec_insert(unsigned char __a, vector bool char __b, int __c);
vector signed short vec_insert(signed short __a, vector signed short __b, int __c);
vector bool short vec_insert(unsigned short __a, vector bool short __b, int __c);
vector signed int vec_insert(signed int __a, vector signed int __b, int __c);
vector bool int vec_insert(unsigned int __a, vector bool int __b, int __c);
vector signed long long vec_insert(signed long long __a, vector signed long long __b, int __c);
vector unsigned long long vec_insert(unsigned long long __a, vector unsigned long long __b, int __c);
vector bool long long vec_insert(unsigned long long __a, vector bool long long __b, int __c);
vector double vec_insert(double __a, vector double __b, int __c);
vector signed long long vec_splats(signed long long __a);
vector unsigned long long vec_splats(unsigned long long __a);
vector signed __int128 vec_splats(signed __int128 __a);
vector unsigned __int128 vec_splats(unsigned __int128 __a);
vector double vec_splats(double __a);
int vec_all_eq(vector double __a, vector double __b);
int vec_all_ge(vector double __a, vector double __b);
int vec_all_gt(vector double __a, vector double __b);
int vec_all_le(vector double __a, vector double __b);
int vec_all_lt(vector double __a, vector double __b);
int vec_all_nan(vector double __a);
int vec_all_ne(vector double __a, vector double __b);
int vec_all_nge(vector double __a, vector double __b);
int vec_all_ngt(vector double __a, vector double __b);
int vec_any_eq(vector double __a, vector double __b);
int vec_any_ge(vector double __a, vector double __b);
int vec_any_gt(vector double __a, vector double __b);
int vec_any_le(vector double __a, vector double __b);
int vec_any_lt(vector double __a, vector double __b);
int vec_any_ne(vector double __a, vector double __b);
vector unsigned char vec_sbox_be (vector unsigned char);
vector unsigned char vec_cipher_be (vector unsigned char, vector unsigned char);
vector unsigned char vec_cipherlast_be (vector unsigned char, vector unsigned char);
vector unsigned char vec_ncipher_be (vector unsigned char, vector unsigned char);
vector unsigned char vec_ncipherlast_be (vector unsigned char, vector unsigned char);
vector unsigned int vec_shasigma_be (vector unsigned int, const int, const int);
vector unsigned long long vec_shasigma_be (vector unsigned long long, const int, const int);
vector unsigned short vec_pmsum_be (vector unsigned char, vector unsigned char);
vector unsigned int vec_pmsum_be (vector unsigned short, vector unsigned short);
vector unsigned long long vec_pmsum_be (vector unsigned int, vector unsigned int);
vector unsigned __int128 vec_pmsum_be (vector unsigned long long, vector unsigned long long);
vector unsigned char vec_gb (vector unsigned char);
vector unsigned long long vec_bperm (vector unsigned __int128 __a, vector unsigned char __b);
Removed the following interfaces either because their signatures have changed
in version 1.1 of the ABI or because they were implemented for ELF V2 ABI but
have actually been deprecated in version 1.1.
vector signed char vec_eqv(vector bool char __a, vector signed char __b);
vector signed char vec_eqv(vector signed char __a, vector bool char __b);
vector unsigned char vec_eqv(vector bool char __a, vector unsigned char __b);
vector unsigned char vec_eqv(vector unsigned char __a, vector bool char __b);
vector signed short vec_eqv(vector bool short __a, vector signed short __b);
vector signed short vec_eqv(vector signed short __a, vector bool short __b);
vector unsigned short vec_eqv(vector bool short __a, vector unsigned short __b);
vector unsigned short vec_eqv(vector unsigned short __a, vector bool short __b);
vector signed int vec_eqv(vector bool int __a, vector signed int __b);
vector signed int vec_eqv(vector signed int __a, vector bool int __b);
vector unsigned int vec_eqv(vector bool int __a, vector unsigned int __b);
vector unsigned int vec_eqv(vector unsigned int __a, vector bool int __b);
vector signed long long vec_eqv(vector bool long long __a, vector signed long long __b);
vector signed long long vec_eqv(vector signed long long __a, vector bool long long __b);
vector unsigned long long vec_eqv(vector bool long long __a, vector unsigned long long __b);
vector unsigned long long vec_eqv(vector unsigned long long __a, vector bool long long __b);
vector float vec_eqv(vector bool int __a, vector float __b);
vector float vec_eqv(vector float __a, vector bool int __b);
vector double vec_eqv(vector bool long long __a, vector double __b);
vector double vec_eqv(vector double __a, vector bool long long __b);
vector unsigned short vec_nand(vector bool short __a, vector unsigned short __b);
llvm-svn: 248813
Currently it's 64-bit, which will lead to a mismatch between host and
device code if we compile for i386.
Differential Revision: http://reviews.llvm.org/D13181
llvm-svn: 248753
Currently, the availability of DSP instructions (ACLE 6.4.7) is handled in
a hand-rolled tricky condition block in lib/Basic/Targets.cpp, with a FIXME:
attached.
http://reviews.llvm.org/D12937 moved the handling of the DSP feature over to
ARMTargetParser.def in LLVM, to be in line with other architecture extensions.
This is the corresponding patch to clang, to clear the FIXME: and update
the tests.
Differential Revision: http://reviews.llvm.org/D12938
llvm-svn: 248521
Summary:
Strictly speaking, the MIPS*R2 ISAs should not permit -mnan=2008, since this
feature was added in MIPS*R3. However, other toolchains permit this and we
should do the same.
Reviewers: atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D13057
llvm-svn: 248481
This commit fixes an assert that is triggered when optnone is being
added to an IR function that is already marked with minsize and optsize.
rdar://problem/22723716
Differential Revision: http://reviews.llvm.org/D13004
llvm-svn: 248191
Currently, the availability of DSP instructions (ACLE 6.4.7) is handled in
a hand-rolled tricky condition block in lib/Basic/Targets.cpp, with a FIXME:
attached.
http://reviews.llvm.org/D12937 moved the handling of +t2dsp over to
ARMTargetParser.def in LLVM, to be in line with other architecture extensions.
This is the corresponding patch to clang, to clear the FIXME: and update
the tests.
Differential Revision: http://reviews.llvm.org/D12938
llvm-svn: 248154
128-bit vector integer sign extensions correctly lower to the pmovsx instructions even for debug builds.
This patch removes the builtins and reimplements the _mm_cvtepi*_epi* intrinsics using __builtin_shufflevector (to extract the bottommost subvector) and __builtin_convertvector (to actually perform the sign extension).
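A hedged sketch of the pattern (simplified, with illustrative typedef and
function names; not the exact header code):
typedef signed char __v16qs __attribute__((__vector_size__(16)));
typedef short __v8hi_t __attribute__((__vector_size__(16)));
typedef long long __m128i_t __attribute__((__vector_size__(16)));
static inline __m128i_t my_cvtepi8_epi16(__m128i_t __V) {
  // Extract the bottommost 8 elements, then convert; the conversion
  // sign-extends because the element type is explicitly signed.
  return (__m128i_t)__builtin_convertvector(
      __builtin_shufflevector((__v16qs)__V, (__v16qs)__V,
                              0, 1, 2, 3, 4, 5, 6, 7),
      __v8hi_t);
}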
Differential Revision: http://reviews.llvm.org/D12835
llvm-svn: 248092
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
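A hedged usage sketch (function and variable names are illustrative):
int __attribute__((ms_abi)) sum_ms(int count, ...) {
  __builtin_ms_va_list ap;
  int total = 0;
  __builtin_ms_va_start(ap, count);
  for (int i = 0; i < count; ++i)
    total += __builtin_ms_va_arg(ap, int);
  __builtin_ms_va_end(ap);
  return total;
}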
Depends on D1622.
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
Mingw generally wraps an old copy of msvcrt.dll which has these
personalities, so things should work out, or so I hear. I haven't tested
it.
llvm-svn: 247902
convert i64 to FP and vice versa
reduceps & reducepd
rangeps & rangepd
all in their 512-bit versions
Differential Revision: http://reviews.llvm.org/D11716
llvm-svn: 247881
Current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such undefined reference.
With this patch, we emit a pair of
1. internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.alwaysinline is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such function to produce an undefined symbol reference.
This patch is based on ideas by Chandler Carruth and Richard Smith.
llvm-svn: 247494
Current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such undefined reference.
With this patch, we emit a pair of
1. internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.alwaysinline is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such function to produce an undefined symbol reference.
This patch is based on ideas by Chandler Carruth and Richard Smith.
llvm-svn: 247465
-force-align-stack.
Also, make changes to the driver so that -mno-stack-realign is no longer
exposed to the end-user as an option to disallow stack realignment in
the backend.
Differential Revision: http://reviews.llvm.org/D11815
llvm-svn: 247451
It seems that there is a small bug, and we can't generate assume loads
when some virtual functions have internal visibility.
This reverts commit 982bb7d966947812d216489b3c519c9825cacbf2.
llvm-svn: 247332
This flag causes the compiler to emit bit set entries for functions as well
as runtime bitset checks at indirect call sites. Depends on the new function
bitset mechanism.
Differential Revision: http://reviews.llvm.org/D11857
llvm-svn: 247238
Generating call assume(icmp %vtable, %global_vtable) after constructor
call for devirtualization purposes.
For more info go to:
http://lists.llvm.org/pipermail/cfe-dev/2015-July/044227.html
Edit:
Fixed version because of PR24479.
After this patch got reverted because of a ScalarEvolution bug (D12719),
it was merged again after John McCall's big patch (which added Address).
http://reviews.llvm.org/D11859
llvm-svn: 247199
The tests in test/CodeGen/arm-target-features.c are currently
passing but warning messages are suppressed. These tests are now
synchronized with the corresponding changes in Target Parser.
This patch will fix the regressions in clang caused by r247136
Differential Revision: http://reviews.llvm.org/D12722
llvm-svn: 247138
Summary:
Currently clang provides no general way to generate nontemporal loads/stores.
There are some architecture-specific builtins for doing so (e.g. on x86), but
there is no way to generate a non-temporal store on, e.g., AArch64. This patch
adds generic builtins which are expanded to a simple store with the
'!nontemporal' attribute in IR.
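A hedged sketch of the new builtins in use (names here are illustrative):
void stream_copy(float *dst, const float *src, int n) {
  for (int i = 0; i < n; ++i) {
    float v = __builtin_nontemporal_load(&src[i]);
    __builtin_nontemporal_store(v, &dst[i]); // value first, then pointer
  }
}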
Differential Revision: http://reviews.llvm.org/D12313
llvm-svn: 247104
instruction used the ReturnValue as pointer operand or value operand. This
led to wrong code gen - in later stages (load-store elision code) the found
store and its operand would be erased, causing ReturnValue to become a <badref>.
The patch adds a check that makes sure that ReturnValue is a pointer operand of
store instruction. Regression test is also added.
This fixes PR24386.
Differential Revision: http://reviews.llvm.org/D12400
llvm-svn: 247003
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
Apparently there are many cast kinds that may cause implicit pointer
arithmetic to happen. In light of this, the cast ignoring logic
introduced in r246877 has been changed to only ignore a small set of
cast kinds, and a test for this behavior has been added.
Thanks to Richard for catching this before it became a bug report. :)
llvm-svn: 246890
Improvements:
- For all types, we would give up in a case such as:
__builtin_object_size((char*)&foo, N);
even if we could provide an answer to
__builtin_object_size(&foo, N);
We now provide the same answer for both of the above examples in all
cases.
- For type=1|3, we now support subobjects with unknown bases, as long
as the designator is valid.
Thanks to Richard Smith for the review + design planning.
Review: http://reviews.llvm.org/D12169
llvm-svn: 246877
This implements basic support for compiling (though not yet assembling
or linking) for a WebAssembly target. Note that ABI details are not yet
finalized, and may change.
Differential Revision: http://reviews.llvm.org/D12002
llvm-svn: 246814
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246764
Original commit message:
[ARM] Allow passing/returning of __fp16 arguments
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246760
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246755
This patch depends on r246688 (D12341).
The goal is to make LLVM generate different code for these functions for a target that
has cheap branches (see PR23827 for more details):
int foo();
int normal(int x, int y, int z) {
if (x != 0 && y != 0) return foo();
return 1;
}
int crazy(int x, int y) {
if (__builtin_unpredictable(x != 0 && y != 0)) return foo();
return 1;
}
Differential Revision: http://reviews.llvm.org/D12458
llvm-svn: 246699
GCC 4.8+ has a PowerPC-specific intrinsic, __builtin_ppc_get_timebase, to do
what Clang's __builtin_readcyclecounter does. For compatibility with code that
uses GCC's spelling (including glibc), support it as well.
Partially fixes PR23681.
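A hedged compatibility sketch:
unsigned long long read_timebase(void) {
#ifdef __powerpc__
  return __builtin_ppc_get_timebase(); // GCC's spelling, now accepted
#else
  return __builtin_readcyclecounter(); // Clang's generic spelling
#endif
}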
llvm-svn: 246510
In release builds labels are numbers. Matching just the number may result
in false matches where the label is contained in other numbers, such as
14 inside [114 x i8]. A stricter match requiring start of line or > character
before the label avoids these false matches.
llvm-svn: 246385
Without this, 64-byte vector types (__m512), specified to be 64-byte
aligned in the AVX512 draft SysV ABI, will only be 32-byte aligned.
This is analogous to AVX, for which we accept 32-byte max alignment.
Differential Revision: http://reviews.llvm.org/D10724
llvm-svn: 246230
There's no point in using a larger alignment if we have no instructions
that would benefit from it.
Differential Revision: http://reviews.llvm.org/D12389
llvm-svn: 246229
A couple of changes here:
a) Do less work in the case where we don't have a target attribute on the
function. We've already canonicalized the attributes for the function -
no need to do more work.
b) Use the newer canonicalized feature adding functions from TargetInfo
to do the work when we do have a target attribute. This enables us to
warn in the case of conflicting written attributes (only ppc does this
today) and also makes sure to get all of the features for a cpu that's
listed rather than just changing the cpu.
Updated all testcases accordingly and added a new testcase to verify that
we'll error out on ppc if we have incompatible options, using the existing
diagnostics framework there.
llvm-svn: 246195
We agreed for r245605 that, as long as we don't affect -O0 codegen
too much, it's OK to use native constructs rather than intrinsics.
Let's test that, starting with AVX2 here.
See PR24580.
Differential Revision: http://reviews.llvm.org/D12212
llvm-svn: 245987
As discussed in PR23648, the intrinsics _m_from_int, _m_to_int and _m_prefetch are defined in mmintrin.h and prfchwintrin.h, so we don't need to redefine them in Intrin.h.
Added tests for _m_from_int and _m_to_int
D11338 already added a test for _m_prefetch
Differential Revision: http://reviews.llvm.org/D12272
llvm-svn: 245975
_rotl, _rotwl and _lrotl (and their right-shift counterparts) are official x86
intrinsics, and should be supported regardless of environment. This is in contrast
to _rotl8, _rotl16, and _rotl64 which are MS-specific.
Note that the MS documentation for _lrotl is different from the Intel
documentation. Intel explicitly documents it as a 64-bit rotate, while for MS,
since sizeof(unsigned long) for MSVC is always 4, a 32-bit rotate is implied.
Differential Revision: http://reviews.llvm.org/D12271
llvm-svn: 245923
This is important in the case that the LLVM-inferred llvm-struct
alignment is not the same as the clang-known C-struct alignment.
Differential Revision: http://reviews.llvm.org/D12243
llvm-svn: 245719
This lets us optimize them better. We agreed to remove the intrinsics,
instead of combining them later, as, at -O0, we generate the expected
instructions. Plus, it's a nice cleanup.
Differential Revision: http://reviews.llvm.org/D10556
llvm-svn: 245605
alignment is ignored, and they always allocate a complete
storage unit.
Also, change the dumping of AST record layouts: use the more
readable C++-style dumping even in C, include bitfield offset
information in the dump, and don't print sizeof/alignof
information for fields of record type, since we don't do so
for bases or other kinds of field.
rdar://22275433
llvm-svn: 245514
__builtin_object_size would return incorrect answers for many uses where
type=3. This fixes the inaccuracy by making us emit 0 instead of LLVM's
objectsize intrinsic.
Additionally, there are many cases where we would emit suboptimal (but
correct) answers, such as when arrays are involved. This patch fixes
some of these cases (please see new tests in test/CodeGen/object-size.c
for specifics on which cases are improved)
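A hedged illustration (exact results depend on what is statically known):
unsigned long min_size(char *p) {
  // With type=3 (minimum object size), an unknown base now folds to 0
  // instead of being deferred to LLVM's objectsize intrinsic.
  return __builtin_object_size(p, 3);
}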
Resubmit of r245323 with PR24493 fixed.
Patch mostly by Richard Smith.
Differential Revision: http://reviews.llvm.org/D12000
This fixes PR15212.
llvm-svn: 245403
__builtin_object_size would return incorrect answers for many uses where
type=3. This fixes the inaccuracy by making us emit 0 instead of LLVM's
objectsize intrinsic.
Additionally, there are many cases where we would emit suboptimal (but
correct) answers, such as when arrays are involved. This patch fixes
some of these cases (please see new tests in test/CodeGen/object-size.c
for specifics on which cases are improved)
Patch mostly by Richard Smith.
Differential Revision: http://reviews.llvm.org/D12000
This fixes PR15212.
llvm-svn: 245323
The fix for this is in LLVM but it depends on how clang handles the alias
attribute, so add a test to the clang tests to make sure everything works
together as expected.
Differential Revision: http://reviews.llvm.org/D11980
llvm-svn: 244756
Summary:
float_cast_overflow is the only UBSan check without a source location attached.
This patch propagates SourceLocations where necessary to get them to the
EmitCheck() call.
Reviewers: rsmith, ABataev, rjmccall
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11757
llvm-svn: 244568
Summary:
NaCl is a platform where long double is the same as double.
Its mangling is spelled with "long double" but its ABI lowering is the same
as double.
Reviewers: rnk, chh
Subscribers: jfb, cfe-commits, dschuff
Differential Revision: http://reviews.llvm.org/D11922
llvm-svn: 244541
A test was recently (r244468) added to cover long double calling convention
codegen, distinguishing between Android and GNU conventions (where long doubles
are fp128 and x86_fp80, respectively). Native Client is a target where long
doubles are the same as doubles. This change augments the test to cover
that case.
Also rename the test to test/CodeGen/X86_64-longdouble.c
Differential Revision: http://reviews.llvm.org/D11921
llvm-svn: 244524
When clang is built with -DLLVM_ENABLE_ASSERTIONS=Off,
it does not create names for IR values.
Differential Revision: http://reviews.llvm.org/D11437
llvm-svn: 244502
These changes are for Android x86_64 targets to be compatible
with current Android g++ and conform to AMD64 ABI.
https://llvm.org/bugs/show_bug.cgi?id=23897
* Return type of long double (fp128) should be fp128, not x86_fp80.
* Vararg of long double (fp128) could be in register and overflowed to memory.
https://llvm.org/bugs/show_bug.cgi?id=24111
* Return value of long double (fp128) _Complex should be in memory like a structure of {fp128,fp128}.
Differential Revision: http://reviews.llvm.org/D11437
llvm-svn: 244468
Function types without prototypes can arise when mangling a function type
within an overloadable function in C. We mangle these as the absence of
any parameter types (not even an empty parameter list).
Differential Revision: http://reviews.llvm.org/D11848
llvm-svn: 244374
Summary:
By default, 'clang' emits dwarf and 'clang-cl' emits codeview. You can
force emission of one or both by passing -gcodeview and -gdwarf to
either driver.
Reviewers: dblaikie, hans
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11742
llvm-svn: 244097
Support for emitting libcalls for __atomic_fetch_nand and
__atomic_{add,sub,and,or,xor,nand}_fetch was missing; add it, and some
test cases.
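A hedged example of an operation that now lowers to a libcall when no
native lock-free lowering exists:
int nand_fetch(int *p, int v) {
  // Atomically computes and returns ~(*p & v).
  return __atomic_nand_fetch(p, v, __ATOMIC_SEQ_CST);
}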
Differential Revision: http://reviews.llvm.org/D10847
llvm-svn: 244063
Update testcases after LLVM change r243774.
Most of these had no need to check the `tag:` field, but did so as a way of
getting to the `name:` field. In a few cases I've converted the `tag:`
checks to `arg:` or `CHECK-NOT: arg:`.
llvm-svn: 243775
This patch adds support for the System Z vector built-in functions.
The API-defined header file has the name vecintrin.h.
The user-level functions are defined in the same style as the clang
version of altivec.h, making heavy use of the __overloadable__ and
__always_inline__ attributes. Where possible the functions expand to
generic operations rather than specific built-in functions, in the hope
that that form can be optimised better.
Where a built-in routine is specified to require an immediate integer
argument, the __enable_if__ attribute is used to verify the argument is
in fact constant and in the appropriate range.
Based on a patch by Richard Sandiford.
llvm-svn: 243643
The z13 vector facility has an associated language extension,
closely modeled on AltiVec/VSX. The main differences are:
- vector long, vector float and vector pixel are not supported
- vector long long and vector double are supported (like VSX)
- comparison operators return a vector rather than a scalar integer
- shift operators behave like the OpenCL shift operators
- vector bool is only supported as argument to certain operators;
some operators allow mixing a bool with a non-bool vector
This patch adds clang support for the extension. It is closely modelled
on the AltiVec support. Similarly to the -faltivec option, there's a
new -fzvector option to enable the extensions (as well as an -mzvector
alias for compatibility with GCC). There's also a separate LangOpt.
The extension as implemented here is intended to be compatible with
the -mzvector extension recently implemented by GCC.
Based on a patch by Richard Sandiford.
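A hedged sketch of the extension in use (compile with something like
'clang -target s390x-linux-gnu -march=z13 -fzvector'):
vector signed int add4(vector signed int a, vector signed int b) {
  return a + b; // element-wise; a comparison would likewise yield a vector
}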
Differential Revision: http://reviews.llvm.org/D11001
llvm-svn: 243642