Clang doesn't support the use of the "this" pointer inside inline asm.
When compiling a class or struct (see the example below) whose inline asm
references "this", Clang reports an error:
error: expected unqualified-id
This patch fixes that.
For example:
struct A {
  void f() {
    __asm mov eax, this
    // error: expected unqualified-id
  }
};
Differential Revision: http://reviews.llvm.org/D15115
llvm-svn: 255645
The issue seems to be that the .ll file may use either the numeric register
value or the %numUsedRegs alias, so the check needs to cover both cases.
This will hopefully fix the last regression introduced by r255515.
llvm-svn: 255539
This patch enables soft float support for ppc32 architecture and fixes
the ABI for variadic functions. This is the first in a set of patches
for soft float support in LLVM.
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D13351
llvm-svn: 255515
- Removed support for hexagonv3 and earlier.
- Added handling of hexagonv55 and hexagonv60.
- Added handling of target features (hvx, hvx-double).
- Updated paths to reflect current directory layout.
llvm-svn: 255502
This sets the maximum entry count among all functions in the program to the
module using module flags. This allows the optimizer to use this information.
Differential Revision: http://reviews.llvm.org/D15163
llvm-svn: 255397
As discussed on the mailing list, backend tests need to be put in llvm/test/CodeGen/X86 as fast-isel tests using IR that is as close as possible to what is generated here.
The LLVM tests will be (re)added in a future commit.
llvm-svn: 255050
variables in C, in the cases where we can constant-fold it to a value
regardless (such as floating-point division by zero and signed integer
overflow). Strictly enforcing this rule breaks too much code.
llvm-svn: 254992
Hopefully fix the remaining bot failure from r254927. Remove
target specification since it shouldn't be needed, and this causes
an error when trying to check the pass execution structure in
test/CodeGen/thinlto_backend.c on non-x86 arches.
llvm-svn: 254940
Summary:
Adds new option -fthinlto-index=<file> to invoke the LTO pipeline
along with function importing via clang using the supplied function
summary index file. This supports invoking the parallel ThinLTO
backend processes in a distributed build environment via clang.
Additionally, this causes the module linker to be invoked on the bitcode
file being compiled to perform any necessary promotion and renaming of
locals that are exported via the function summary index file.
Add a couple tests that confirm we get expected errors when we try to
use the new option on a file that isn't bitcode, or specify an invalid
index file. The tests also confirm that we trigger the expected function
import pass.
Depends on D15024
Reviewers: joker.eph, dexonsmith
Subscribers: joker.eph, davidxl, cfe-commits
Differential Revision: http://reviews.llvm.org/D15025
llvm-svn: 254927
As discussed on the mailing list, backend tests need to be put in llvm/test/CodeGen/X86 as fast-isel tests using IR that is as close as possible to what is generated here.
The LLVM tests will be (re)added in a future commit.
llvm-svn: 254849
As discussed on the mailing list, backend tests need to be put in llvm/test/CodeGen/X86 as fast-isel tests using IR that is as close as possible to what is generated here.
The LLVM tests will be (re)added in a future commit.
I will update PR24580 with this new plan.
llvm-svn: 254847
Summary:
Looking into some recent issues with LLDB's expression parser highlighted that upstream clang passes vector types differently from the Android Open Source Project's clang for Arm Android targets.
This patch reflects the changes present in the AOSP and allows LLDB's JIT expression evaluation to work correctly for Arm Android targets when passing vectors.
This is submitted with consent of the original author Stephen Hines.
Reviewers: asl, rsmith, ADodds, rnk
Subscribers: rnk, aemerson, tberghammer, danalbert, srhines, cfe-commits, pirama
Differential Revision: http://reviews.llvm.org/D14639
llvm-svn: 254682
Fix calculation of the address of arguments larger than 32 bits on the stack
for variadic functions (rounding the address up to the alignment) on the ppc32 architecture.
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D14871
llvm-svn: 254670
These additions were meant to go in as a part of r254554; while it's
certainly nice to have new functionality, it's nicer if we have tests to
go with it. :)
llvm-svn: 254632
This reverts commit r254143 which introduces a crash on the following input:
f(char *);
g(char *);
#pragma weak f = g
int g(char *p) {}
llvm-svn: 254605
As discussed on the mailing list, backend tests need to be put in llvm/test/CodeGen/X86 as fast-isel tests using IR that is as close as possible to what is generated here.
The LLVM tests will be (re)added in a future commit.
I will update PR24580 with this new plan.
llvm-svn: 254594
side-effect, so that we don't allow speculative evaluation of such expressions
during code generation.
This caused a diagnostic quality regression, so fix constant expression
diagnostics to prefer either the first "can't be constant folded" diagnostic or
the first "not a constant expression" diagnostic depending on the kind of
evaluation we're doing. This was always the intent, but didn't quite work
correctly before.
This results in certain initializers that used to be constant initializers
no longer being so; in particular, things like:
float f = 1e100;
are no longer accepted in C. This seems appropriate, as such constructs would
lead to code being executed if sanitizers are enabled.
llvm-svn: 254574
Add/Subtract.
Add missing tests that accidentally were not committed in rL254250.
Differential Revision: http://reviews.llvm.org/D14982
llvm-svn: 254251
Summary: This patch adds support for the interrupt attribute for mips32r2+.
Patch by Simon Dardis.
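For illustration, a minimal sketch of how a handler might be marked (the handler name and body are made up, and whether the attribute needs an argument here is an assumption; see the review for the exact forms accepted):
void __attribute__((interrupt)) timer_isr(void) {
  /* handler body; the attribute tells the compiler to emit an interrupt prologue/epilogue */
}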
Reviewers: dsanders, aaron.ballman
Subscribers: aaron.ballman, cfe-commits
Differential Revision: http://reviews.llvm.org/D10802
llvm-svn: 254205
Summary: This patch adds support for the interrupt attribute for mips32r2+.
Reviewers: dsanders, aaron.ballman
Subscribers: aaron.ballman, cfe-commits
Differential Revision: http://reviews.llvm.org/D10802
llvm-svn: 254203
(Re-apply patch after bug fixing)
This diff makes sure that the driver does not pass
-fomit-frame-pointer or -momit-leaf-frame-pointer to
the frontend when -pg is used. Currently, clang gives
an error if -fomit-frame-pointer is used in combination
with -pg, but -momit-leaf-frame-pointer was forgotten.
Also, disable frame pointer elimination in the frontend
when -pg is set.
Patch by Stefan Kempf.
llvm-svn: 253886
This diff makes sure that the driver does not pass
-fomit-frame-pointer or -momit-leaf-frame-pointer to
the frontend when -pg is used. Currently, clang gives
an error if -fomit-frame-pointer is used in combination
with -pg, but -momit-leaf-frame-pointer was forgotten.
Also, disable frame pointer elimination in the frontend
when -pg is set.
Patch by Stefan Kempf.
llvm-svn: 253846
Specifying a fixed triple is not possible because that target may not
even be built. Go for a simpler fix by using a _? regex for the
prefix.
llvm-svn: 253758
This is similar to the earlier fix I did, r253702, except that here it
is function names that are being searched for. If the function name
matches part of the directory name it can cause an apparent test
case failure.
llvm-svn: 253706
Summary: The frontend debug info tests should not invoke LLVM passes such as add-discriminators, which would change the debug info generated by the FE.
Reviewers: dblaikie
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D14848
llvm-svn: 253686
Summary: The discriminator change in http://reviews.llvm.org/D14738 will cause these clang tests to fail. Update the tests to accommodate the discriminator change.
Reviewers: dblaikie, davidxl, dnovillo
Differential Revision: http://reviews.llvm.org/D14836
llvm-svn: 253595
This is a follow on from a similar LLVM commit: r253511.
Note, this was reviewed (and more details are in) http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html
These intrinsics currently have an explicit alignment argument which is
required to be a constant integer. It represents the alignment of the
source and dest, and so must be the minimum of those.
This change allows source and dest to each have their own alignments
by using the alignment attribute on their arguments. The alignment
argument itself is removed.
The only code change to clang is hidden in CGBuilder.h which now passes
both dest and source alignment to IRBuilder, instead of taking the minimum of
dest and source alignments.
Reviewed by Hal Finkel.
llvm-svn: 253512
Currently, when there is a global register variable in a program that
is bound to an invalid register, clang/llvm prints an error message that
is not very user-friendly.
This commit improves the diagnostic and moves the check that used to be
in the backend to Sema. In addition, it makes changes to error out if
the size of the register doesn't match the declared variable size.
e.g., volatile register int B asm ("rbp");
rdar://problem/23084219
Differential Revision: http://reviews.llvm.org/D13834
llvm-svn: 253405
This reverts commit r253269.
This leads to an assert / segfault being triggered on the following reduced example:
float foo(float U, float base, float cell) { return (U = 2 * base) - cell; }
llvm-svn: 253337
target attribute, don't include "negative" subtarget features in the
list of required features. Builtins are positive by default so don't
need this change, but we pull the default list of features from the
command line and so need to make sure that we only include features
that are turned on for code generation in our error.
llvm-svn: 253242
These two intrinsics are defined in arm_acle.h.
__rev16l needs to rotate by 16 bits, but it was actually rotating by 2 bits.
For AArch64, where long is 64 bits, this would still be wrong.
__rev16ll was incorrect, it reversed the bytes in each 32-bit word, rather than
each 16-bit halfword. The correct implementation is to apply __rev16 to the top
and bottom words of the 64-bit value.
For AArch32 targets, these get compiled down to the hardware rev16 instruction
at -O1 and above. For AArch64 targets, the 64-bit ones get compiled to two
32-bit rev16 instructions, because there is not currently a pattern for the
64-bit rev16 instruction.
Differential Revision: http://reviews.llvm.org/D14609
llvm-svn: 253211
Several of these tests (the two deleted, and the one removal edit) were
relying on the optimizer to collapse things to test some frontend
feature. The tests were really old and features seemed amply covered by
other parts of the test suite, so I just removed them.
If anyone thinks they're valuable enough to keep/fix, we can play around
with that, for sure.
(inspired by r252872)
llvm-svn: 253114
The ``disable_tail_calls`` attribute instructs the backend to not
perform tail call optimization inside the marked function.
For example,
int callee(int);
int foo(int a) __attribute__((disable_tail_calls)) {
return callee(a); // This call is not tail-call optimized.
}
Note that this attribute is different from 'not_tail_called', which
prevents tail-call optimization to the marked function.
rdar://problem/8973573
Differential Revision: http://reviews.llvm.org/D12547
llvm-svn: 252986
In r244063, I had caused these builtins to call the same-named library
functions, __atomic_*_fetch_SIZE. However, this was incorrect: while
those functions are in fact supported by GCC's libatomic, they're not
documented by the spec (and gcc doesn't ever call them).
Instead, you're /supposed/ to call the __atomic_fetch_* builtins and
then redo the operation inline to return the final value.
Differential Revision: http://reviews.llvm.org/D14385
llvm-svn: 252920
target features that the caller function doesn't provide. This matches
the existing backend failure to inline functions that don't have
matching target features - and diagnoses earlier in the case of
always_inline.
Fix up a few test cases that were, in fact, invalid if you tried
to generate code from the backend with the specified target features
and add a couple of tests to illustrate what's going on.
This should fix PR25246.
llvm-svn: 252834
When a struct's size is not a power of 2, the corresponding _Atomic() type is
promoted to the nearest power of 2. We already correctly handled normal C++ expressions of
this form, but direct calls to the __c11_atomic_whatever builtins ended up
performing dodgy operations on the smaller non-atomic types (e.g. memcpy too
much). Later optimisations removed this as undefined behaviour.
This patch converts EmitAtomicExpr to allocate its temporaries at the full
atomic width, sidestepping the issue.
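A rough sketch of the affected pattern (types, names, and values here are illustrative, not taken from the patch):
struct S { char bytes[3]; };          /* size 3: the _Atomic version is padded to 4 bytes */
_Atomic(struct S) shared;
void publish(struct S s) {
  /* temporaries backing this builtin call are now allocated at the full atomic width */
  __c11_atomic_store(&shared, s, __ATOMIC_SEQ_CST);
}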
llvm-svn: 252507
Add the -meabi flag to control the LLVM EABI version.
Without '-meabi', or with '-meabi default', the LLVM triple default is implied.
'-meabi gnu' selects the GNU EABI.
'-meabi 4' and '-meabi 5' select EABI versions 4 and 5, respectively.
A similar patch was introduced in LLVM.
Patch by Vinicius Tinti.
llvm-svn: 252463
This attribute is used to prevent tail-call optimizations to the marked
function. For example, in the following piece of code, foo1 will not be
tail-call optimized:
int __attribute__((not_tail_called)) foo1(int);
int foo2(int a) {
return foo1(a); // Tail-call optimization is not performed.
}
The attribute has effect only on statically bound calls. It has no
effect on indirect calls. Also, virtual functions and objective-c
methods cannot be marked as 'not_tail_called'.
rdar://problem/22667622
Differential Revision: http://reviews.llvm.org/D12922
llvm-svn: 252369
This patch fixes one more thing in MCU psABI support: LongDoubleWidth should be set to 64.
Differential Revision: http://reviews.llvm.org/D14285
llvm-svn: 252156
This patch implements two things in front-end for MCU psABI support:
1) "long double type is the same as double."
2) "New predefined C/C++ pre-processor symbols: iamcu and iamcu__.
Differential Revision: http://reviews.llvm.org/D14205
llvm-svn: 251786
GCC uses the x87DoubleExtended model for long doubles, and passes them
indirectly by address through function calls.
Also replace the existing mingw-long-double assembly emitting test with
an IR-level test.
llvm-svn: 251567
Linking options for a particular file depend on the option that specifies the file.
Currently there are two:
* -mlink-bitcode-file links in complete content of the specified file.
* -mlink-cuda-bitcode links in only the symbols needed by current TU.
Linked symbols are internalized. This bitcode linking mode is used to
link device-specific bitcode provided by CUDA.
Files are linked in the order they are specified on the command line.
-mlink-cuda-bitcode replaces -fcuda-uses-libdevice flag.
Differential Revision: http://reviews.llvm.org/D13913
llvm-svn: 251427
only one of a group of possibilities.
This changes the syntax in the builtin files to represent:
',' as the and operator
'|' as the or operator
The former syntax matches how the backend tablegen files represent
multiple subtarget features being required.
Updated the builtin and intrinsic headers accordingly for the new
syntax.
llvm-svn: 251388
The MCU psABI calling convention is somewhat, but not quite, like -mregparm 3.
In particular, the rules involving structs are different.
Differential Revision: http://reviews.llvm.org/D13978
llvm-svn: 251224
According to the Intel documentation, the mask operand of the maskload and
maskstore intrinsics is always a vector of packed integer/long integer values.
This patch introduces the following two changes:
1. It fixes the avx maskload/store intrinsic definitions in avxintrin.h.
2. It changes BuiltinsX86.def to match the correct gcc definitions for avx
maskload/store (see D13861 for more details).
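As a small illustration of the corrected shape (compiled with -mavx; the wrapper name is made up), the mask parameter is now an integer vector:
#include <immintrin.h>
__m256d load_lanes(const double *p, __m256i mask) {
  return _mm256_maskload_pd(p, mask);   /* mask is __m256i, matching the gcc definition */
}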
Differential Revision: http://reviews.llvm.org/D13861
llvm-svn: 250816
The Intel MCU psABI requires floating-point values to be passed in-reg.
This makes the x86-32 ABI code respect "-mfloat-abi soft" and generate float inreg arguments.
Differential Revision: http://reviews.llvm.org/D13554
llvm-svn: 250689
r246877 made __builtin_object_size substantially more aggressive with
unknown bases if Type=1 or Type=3, which causes issues when we encounter
code like this:
struct Foo {
int a;
char str[1];
};
const char str[] = "Hello, World!";
struct Foo *f = (struct Foo *)malloc(sizeof(*f) + strlen(str));
strcpy(&f->str, str);
__builtin_object_size(&f->str, 1) would hand back 1, which is
technically correct given the type of Foo, but the type of Foo lies to
us about how many bytes are available in this case.
This patch adds support for this "writing off the end" idiom -- we now
answer conservatively when we're given the address of the very last
member in a struct.
Differential Revision: http://reviews.llvm.org/D12169
llvm-svn: 250488
Previously, our logic when taking the address of an overloaded function
would not consider enable_if attributes, so long as all of the enable_if
conditions on a given candidate were true. So, two functions with
identical signatures (one with enable_if attributes, the other without),
would be considered equally good overloads. If we were calling the
function instead of taking its address, then the function with enable_if
attributes would be preferred.
This patch makes us prefer the candidate with enable_if regardless of whether
we're calling or taking the address of an overloaded function.
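A hedged sketch of the behavior change (C++; the names are illustrative):
int pick(int n) __attribute__((enable_if(true, "")));
int pick(int n);
// Previously both candidates were equally good here; now the enable_if
// candidate is preferred, matching what a direct call would pick.
int (*fp)(int) = &pick;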
Differential Revision: http://reviews.llvm.org/D13795
llvm-svn: 250486
match the feature set of the function that they're being called from.
This ensures that we can effectively diagnose some[1] code that would
instead ICE in the backend with a failure to select message.
Example:
__m128d foo(__m128d a, __m128d b) {
return __builtin_ia32_addsubps(b, a);
}
compiled for normal x86_64 via:
clang -target x86_64-linux-gnu -c
would fail to compile in the back end because the normal subtarget
features for x86_64 only include sse2 and the builtin requires sse3.
[1] We're still not erroring on:
__m128i bar(__m128i const *p) { return _mm_lddqu_si128(p); }
where we should fail and error on an always_inline function being
inlined into a function that doesn't support the subtarget features
required.
llvm-svn: 250473
Add support for the `-fdebug-prefix-map=` option as in GCC. The syntax is
`-fdebug-prefix-map=OLD=NEW`. When compiling files from a path beginning with
OLD, change the debug info to indicate the path as starting with NEW. This is
particularly helpful if you are preprocessing in one path and compiling in
another (e.g. for a build cluster with distcc).
Note that the linearity of the implementation is not as terrible as it may seem.
This is normally done once per file with an expectation that the map will be
small (1-2) entries, making this roughly linear in the number of input paths.
Addresses PR24619.
llvm-svn: 250094
This fixes a bug where one can take the address of a conditionally
enabled function to drop its enable_if guards. For example:
int foo(int a) __attribute__((enable_if(a > 0, "")));
int (*p)(int) = &foo;
int result = p(-1); // compilation succeeds; calls foo(-1)
Overloading logic has been updated to reflect this change, as well.
Functions with enable_if attributes that are always true are still
allowed to have their address taken.
Differential Revision: http://reviews.llvm.org/D13607
llvm-svn: 250090
Rationale:
// sse3
__m128d test_mm_addsub_pd(__m128d A, __m128d B) {
return _mm_addsub_pd(A, B);
}
// mmx
void shift(__m64 a, __m64 b, int c) {
_mm_slli_pi16(a, c);
_mm_slli_pi32(a, c);
_mm_slli_si64(a, c);
_mm_srli_pi16(a, c);
_mm_srli_pi32(a, c);
_mm_srli_si64(a, c);
_mm_srai_pi16(a, c);
_mm_srai_pi32(a, c);
}
clang -msse3 -mno-mmx file.c -c
For this code we should be able to explicitly turn off MMX
without affecting the compilation of the SSE3 function and then
diagnose and error on compiling the MMX function.
This is a preparatory patch to the actual diagnosis code which is
coming in a future patch. This sets us up to have the correct information
where we need it and verifies that it's being emitted for the backend
to handle.
llvm-svn: 249733
that we can build up an accurate set of features rather than relying on
TargetInfo initialization via handleTargetFeatures to munge the list
of features.
llvm-svn: 249732
Enums without an explicit, fixed, underlying type are implicitly given a
fixed 'int' type for ABI compatibility with MSVC. However, we can
enforce the standard-mandated rules on these types as-if we didn't know
this fact if the tag is not part of a definition.
llvm-svn: 249667
These test updates are almost exclusively around the change in behavior
for enums: enums without a definition are considered incomplete except
when targeting MSVC ABIs. Since these tests are interested in the
'incomplete-enum' behavior, restrict them to %itanium_abi_triple.
llvm-svn: 249660
No ABI for C++ currently makes it possible to implement the standard
100% perfectly. We wrongly hid some of our compatible behavior behind
-fms-compatibility instead of tying it to the compiler ABI.
llvm-svn: 249656
With this change, most 'g' options are rejected by CompilerInvocation.
They remain only as Driver options. The new way to request debug info
from cc1 is with "-debug-info-kind={line-tables-only|limited|standalone}"
and "-dwarf-version={2|3|4}". In the absence of a command-line option
to specify Dwarf version, the Toolchain decides it, rather than placing
Toolchain-specific logic in CompilerInvocation.
Also fix a bug in the Windows compatibility argument parsing
in which the "rightmost argument wins" principle failed.
Differential Revision: http://reviews.llvm.org/D13221
llvm-svn: 249655
Currently FastISel doesn't know how to select vector bitcasts.
During instruction selection, fast-isel always falls back to SelectionDAG
every time it encounters a vector bitcast.
As a consequence of this, all the 'packed vector shift by immediate count'
test cases in avx2-builtins.c are optimized by the DAGCombiner.
In particular, the DAGCombiner would always fold trivial stack loads of
constant shift counts into the operands of packed shift builtins.
This behavior would start changing as soon as I reapply revision 249121.
That revision would teach x86 fast-isel how to select bitcasts between vector
types of the same size.
As a consequence of that change, fast-isel would less often fall back to
SelectionDAG. More importantly, DAGCombiner would no longer be able to
simplify the code by folding the stack reload of a constant.
No functional change.
llvm-svn: 249142
test that our intrinsics behave the same under -fsigned-char and
-funsigned-char.
This further testing uncovered that AVX-2 has a broken cmpgt for 8-bit
elements, and has for a long time. This is fixed in the same way as
SSE4 handles the case.
The other ISA extensions currently work correctly because they use
specific instruction intrinsics. As soon as they are rewritten in terms
of generic IR, they will need to add these special casts. I've added the
necessary testing to catch this however, so we shouldn't have to chase
it down again.
I considered changing the core typedef to be signed, but that seems like
a bad idea. Notably, it would be an ABI break if anyone is reaching into
the innards of the intrinsic headers and passing __v16qi on an API
boundary. I can't be completely confident that this wouldn't happen due
to a macro expanding in a lambda, etc., so it seems much better to leave
it alone. It also matches GCC's behavior exactly.
A fun side note is that for both GCC and Clang, -funsigned-char really
does change the semantics of __v16qi. To observe this, consider:
% cat x.cc
#include <smmintrin.h>
#include <iostream>
int main() {
__v16qi a = { 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
__v16qi b = _mm_set1_epi8(-1);
std::cout << (int)(a / b)[0] << ", " << (int)(a / b)[1] << '\n';
}
% clang++ -o x x.cc && ./x
-1, 1
% clang++ -funsigned-char -o x x.cc && ./x
0, 1
However, while this may be surprising, both Clang and GCC agree.
Differential Revision: http://reviews.llvm.org/D13324
llvm-svn: 249097
recently when we started using direct conversion to model sign
extension. The __v16qi type we use for SSE v16i8 vectors is defined in
terms of 'char' which may or may not be signed! This causes us to
generate pmovsx and pmovzx depending on the setting of -funsigned-char.
This patch just forms an explicitly signed type and uses that to
formulate the sign extension. While this gets the correct behavior
(which we now verify with the enhanced test) this is just the tip of the
iceberg. Now that I know what to look for, I have found errors of this
sort *throughout* our vector code. Fortunately, this is the only
specific place where I know of users actively having their code
miscompiled by Clang due to this, so I'm keeping the fix for those users
minimal and targeted.
I'll be sending a proper email for discussion of how to fix these
systematically, what the implications are, and just how widely broken
this is... From what I can tell, we have never shipped a correct set of
builtin headers for x86 when users rely on -funsigned-char. Oops.
llvm-svn: 248980
Summary: __nvvm_atom_cas_* returns the old value instead of whether the swap succeeds.
Reviewers: eliben, tra
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D13306
llvm-svn: 248951
This is the clang commit associated with llvm r248887.
This commit changes the interface of the vld[1234], vld[234]lane, and vst[1234],
vst[234]lane ARM neon intrinsics and associates an address space with the
pointer that these intrinsics take. This changes, e.g.,
<2 x i32> @llvm.arm.neon.vld1.v2i32(i8*, i32)
to
<2 x i32> @llvm.arm.neon.vld1.v2i32.p0i8(i8*, i32)
This change ensures that address spaces are fully taken into account in the ARM
target during lowering of interleaved loads and stores.
Differential Revision: http://reviews.llvm.org/D13127
llvm-svn: 248888
This patch corresponds to review:
http://reviews.llvm.org/D13190
Implemented the following interfaces to conform to ELF V2 ABI version 1.1.
vector signed __int128 vec_adde (vector signed __int128, vector signed __int128, vector signed __int128);
vector unsigned __int128 vec_adde (vector unsigned __int128, vector unsigned __int128, vector unsigned __int128);
vector signed __int128 vec_addec (vector signed __int128, vector signed __int128, vector signed __int128);
vector unsigned __int128 vec_addec (vector unsigned __int128, vector unsigned __int128, vector unsigned __int128);
vector signed int vec_addc(vector signed int __a, vector signed int __b);
vector bool char vec_cmpge (vector signed char __a, vector signed char __b);
vector bool char vec_cmpge (vector unsigned char __a, vector unsigned char __b);
vector bool short vec_cmpge (vector signed short __a, vector signed short __b);
vector bool short vec_cmpge (vector unsigned short __a, vector unsigned short __b);
vector bool int vec_cmpge (vector signed int __a, vector signed int __b);
vector bool int vec_cmpge (vector unsigned int __a, vector unsigned int __b);
vector bool char vec_cmple (vector signed char __a, vector signed char __b);
vector bool char vec_cmple (vector unsigned char __a, vector unsigned char __b);
vector bool short vec_cmple (vector signed short __a, vector signed short __b);
vector bool short vec_cmple (vector unsigned short __a, vector unsigned short __b);
vector bool int vec_cmple (vector signed int __a, vector signed int __b);
vector bool int vec_cmple (vector unsigned int __a, vector unsigned int __b);
vector double vec_double (vector signed long long __a);
vector double vec_double (vector unsigned long long __a);
vector bool char vec_eqv(vector bool char __a, vector bool char __b);
vector bool short vec_eqv(vector bool short __a, vector bool short __b);
vector bool int vec_eqv(vector bool int __a, vector bool int __b);
vector bool long long vec_eqv(vector bool long long __a, vector bool long long __b);
vector signed short vec_madd(vector signed short __a, vector signed short __b, vector signed short __c);
vector signed short vec_madd(vector signed short __a, vector unsigned short __b, vector unsigned short __c);
vector signed short vec_madd(vector unsigned short __a, vector signed short __b, vector signed short __c);
vector unsigned short vec_madd(vector unsigned short __a, vector unsigned short __b, vector unsigned short __c);
vector bool long long vec_mergeh(vector bool long long __a, vector bool long long __b);
vector bool long long vec_mergel(vector bool long long __a, vector bool long long __b);
vector bool char vec_nand(vector bool char __a, vector bool char __b);
vector bool short vec_nand(vector bool short __a, vector bool short __b);
vector bool int vec_nand(vector bool int __a, vector bool int __b);
vector bool long long vec_nand(vector bool long long __a, vector bool long long __b);
vector bool char vec_orc(vector bool char __a, vector bool char __b);
vector bool short vec_orc(vector bool short __a, vector bool short __b);
vector bool int vec_orc(vector bool int __a, vector bool int __b);
vector bool long long vec_orc(vector bool long long __a, vector bool long long __b);
vector signed long long vec_sub(vector signed long long __a, vector signed long long __b);
vector signed long long vec_sub(vector bool long long __a, vector signed long long __b);
vector signed long long vec_sub(vector signed long long __a, vector bool long long __b);
vector unsigned long long vec_sub(vector unsigned long long __a, vector unsigned long long __b);
vector unsigned long long vec_sub(vector bool long long __a, vector unsigned long long __b);
vector unsigned long long vec_sub(vector unsigned long long __a, vector bool long long __b);
vector float vec_sub(vector float __a, vector float __b);
unsigned char vec_extract(vector bool char __a, int __b);
signed short vec_extract(vector signed short __a, int __b);
unsigned short vec_extract(vector bool short __a, int __b);
signed int vec_extract(vector signed int __a, int __b);
unsigned int vec_extract(vector bool int __a, int __b);
signed long long vec_extract(vector signed long long __a, int __b);
unsigned long long vec_extract(vector unsigned long long __a, int __b);
unsigned long long vec_extract(vector bool long long __a, int __b);
double vec_extract(vector double __a, int __b);
vector bool char vec_insert(unsigned char __a, vector bool char __b, int __c);
vector signed short vec_insert(signed short __a, vector signed short __b, int __c);
vector bool short vec_insert(unsigned short __a, vector bool short __b, int __c);
vector signed int vec_insert(signed int __a, vector signed int __b, int __c);
vector bool int vec_insert(unsigned int __a, vector bool int __b, int __c);
vector signed long long vec_insert(signed long long __a, vector signed long long __b, int __c);
vector unsigned long long vec_insert(unsigned long long __a, vector unsigned long long __b, int __c);
vector bool long long vec_insert(unsigned long long __a, vector bool long long __b, int __c);
vector double vec_insert(double __a, vector double __b, int __c);
vector signed long long vec_splats(signed long long __a);
vector unsigned long long vec_splats(unsigned long long __a);
vector signed __int128 vec_splats(signed __int128 __a);
vector unsigned __int128 vec_splats(unsigned __int128 __a);
vector double vec_splats(double __a);
int vec_all_eq(vector double __a, vector double __b);
int vec_all_ge(vector double __a, vector double __b);
int vec_all_gt(vector double __a, vector double __b);
int vec_all_le(vector double __a, vector double __b);
int vec_all_lt(vector double __a, vector double __b);
int vec_all_nan(vector double __a);
int vec_all_ne(vector double __a, vector double __b);
int vec_all_nge(vector double __a, vector double __b);
int vec_all_ngt(vector double __a, vector double __b);
int vec_any_eq(vector double __a, vector double __b);
int vec_any_ge(vector double __a, vector double __b);
int vec_any_gt(vector double __a, vector double __b);
int vec_any_le(vector double __a, vector double __b);
int vec_any_lt(vector double __a, vector double __b);
int vec_any_ne(vector double __a, vector double __b);
vector unsigned char vec_sbox_be (vector unsigned char);
vector unsigned char vec_cipher_be (vector unsigned char, vector unsigned char);
vector unsigned char vec_cipherlast_be (vector unsigned char, vector unsigned char);
vector unsigned char vec_ncipher_be (vector unsigned char, vector unsigned char);
vector unsigned char vec_ncipherlast_be (vector unsigned char, vector unsigned char);
vector unsigned int vec_shasigma_be (vector unsigned int, const int, const int);
vector unsigned long long vec_shasigma_be (vector unsigned long long, const int, const int);
vector unsigned short vec_pmsum_be (vector unsigned char, vector unsigned char);
vector unsigned int vec_pmsum_be (vector unsigned short, vector unsigned short);
vector unsigned long long vec_pmsum_be (vector unsigned int, vector unsigned int);
vector unsigned __int128 vec_pmsum_be (vector unsigned long long, vector unsigned long long);
vector unsigned char vec_gb (vector unsigned char);
vector unsigned long long vec_bperm (vector unsigned __int128 __a, vector unsigned char __b);
Removed the following interfaces either because their signatures have changed
in version 1.1 of the ABI or because they were implemented for ELF V2 ABI but
have actually been deprecated in version 1.1.
vector signed char vec_eqv(vector bool char __a, vector signed char __b);
vector signed char vec_eqv(vector signed char __a, vector bool char __b);
vector unsigned char vec_eqv(vector bool char __a, vector unsigned char __b);
vector unsigned char vec_eqv(vector unsigned char __a, vector bool char __b);
vector signed short vec_eqv(vector bool short __a, vector signed short __b);
vector signed short vec_eqv(vector signed short __a, vector bool short __b);
vector unsigned short vec_eqv(vector bool short __a, vector unsigned short __b);
vector unsigned short vec_eqv(vector unsigned short __a, vector bool short __b);
vector signed int vec_eqv(vector bool int __a, vector signed int __b);
vector signed int vec_eqv(vector signed int __a, vector bool int __b);
vector unsigned int vec_eqv(vector bool int __a, vector unsigned int __b);
vector unsigned int vec_eqv(vector unsigned int __a, vector bool int __b);
vector signed long long vec_eqv(vector bool long long __a, vector signed long long __b);
vector signed long long vec_eqv(vector signed long long __a, vector bool long long __b);
vector unsigned long long vec_eqv(vector bool long long __a, vector unsigned long long __b);
vector unsigned long long vec_eqv(vector unsigned long long __a, vector bool long long __b);
vector float vec_eqv(vector bool int __a, vector float __b);
vector float vec_eqv(vector float __a, vector bool int __b);
vector double vec_eqv(vector bool long long __a, vector double __b);
vector double vec_eqv(vector double __a, vector bool long long __b);
vector unsigned short vec_nand(vector bool short __a, vector unsigned short __b);
llvm-svn: 248813
Currently it's 64-bit, which will lead to a mismatch between host and
device code if we compile for i386.
Differential Revision: http://reviews.llvm.org/D13181
llvm-svn: 248753
Currently, the availability of DSP instructions (ACLE 6.4.7) is handled in
a hand-rolled tricky condition block in lib/Basic/Targets.cpp, with a FIXME:
attached.
http://reviews.llvm.org/D12937 moved the handling of the DSP feature over to
ARMTargetParser.def in LLVM, to be in line with other architecture extensions.
This is the corresponding patch to clang, to clear the FIXME: and update
the tests.
Differential Revision: http://reviews.llvm.org/D12938
llvm-svn: 248521
Summary:
Strictly speaking, the MIPS*R2 ISAs should not permit -mnan=2008 since this
feature was added in MIPS*R3. However, other toolchains permit this and we
should do the same.
Reviewers: atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D13057
llvm-svn: 248481
This commit fixes an assert that is triggered when optnone is being
added to an IR function that is already marked with minsize and optsize.
rdar://problem/22723716
Differential Revision: http://reviews.llvm.org/D13004
llvm-svn: 248191
Currently, the availability of DSP instructions (ACLE 6.4.7) is handled in
a hand-rolled tricky condition block in lib/Basic/Targets.cpp, with a FIXME:
attached.
http://reviews.llvm.org/D12937 moved the handling of +t2dsp over to
ARMTargetParser.def in LLVM, to be in line with other architecture extensions.
This is the corresponding patch to clang, to clear the FIXME: and update
the tests.
Differential Revision: http://reviews.llvm.org/D12938
llvm-svn: 248154
128-bit vector integer sign extensions correctly lower to the pmovsx instructions even for debug builds.
This patch removes the builtins and reimplements the _mm_cvtepi*_epi* intrinsics using __builtin_shufflevector (to extract the bottom-most subvector) and __builtin_convertvector (to actually perform the sign extension).
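Roughly the shape of the new expansion (a standalone sketch, not the exact header code; the typedef and function names are made up):
#include <emmintrin.h>
typedef signed char sv16qi __attribute__((__vector_size__(16)));
typedef short sv8hi __attribute__((__vector_size__(16)));
static inline __m128i cvtepi8_epi16_sketch(__m128i v) {
  /* take the low 8 bytes, then sign-extend each element to 16 bits */
  return (__m128i)__builtin_convertvector(
      __builtin_shufflevector((sv16qi)v, (sv16qi)v, 0, 1, 2, 3, 4, 5, 6, 7),
      sv8hi);
}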
Differential Revision: http://reviews.llvm.org/D12835
llvm-svn: 248092
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
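A minimal sketch of the extension in use (assuming an x86-64 target; the function name is illustrative):
int __attribute__((ms_abi)) sum_ints(int count, ...) {
  __builtin_ms_va_list ap;
  __builtin_ms_va_start(ap, count);
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += __builtin_va_arg(ap, int);
  __builtin_ms_va_end(ap);
  return total;
}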
Depends on D1622.
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
Mingw generally wraps an old copy of msvcrt.dll which has these
personalities, so things should work out, or so I hear. I haven't tested
it.
llvm-svn: 247902
convert i64 to FP and vice versa
reduceps & reducepd
rangeps & rangepd
all in their 512bit versions
Differential Revision: http://reviews.llvm.org/D11716
llvm-svn: 247881
Current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such an undefined reference.
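Roughly the shape of the problematic pattern (a contrived sketch, not taken from the patch):
/* an inline, always_inline function with no out-of-line definition anywhere */
inline __attribute__((__always_inline__)) int helper(int x) { return x + 1; }
int caller(int x) {
  if (0)
    return helper(x);  /* dead call: the SCC inliner never visits it, so a reference can survive */
  return x;
}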
With this patch, we emit a pair of
1. internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.inlinefunction is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such function to produce an undefined symbol reference.
This patch is based on ideas by Chandler Carruth and Richard Smith.
llvm-svn: 247494
Current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such an undefined reference.
With this patch, we emit a pair of
1. internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.inlinefunction is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such function to produce an undefined symbol reference.
This patch is based on ideas by Chandler Carruth and Richard Smith.
llvm-svn: 247465
-force-align-stack.
Also, make changes to the driver so that -mno-stack-realign is no longer
an option exposed to the end-user that disallows stack realignment in
the backend.
Differential Revision: http://reviews.llvm.org/D11815
llvm-svn: 247451
It seems that there is a small bug, and we can't generate assume loads
when some virtual functions have internal visibility.
This reverts commit 982bb7d966947812d216489b3c519c9825cacbf2.
llvm-svn: 247332
This flag causes the compiler to emit bit set entries for functions as well
as runtime bitset checks at indirect call sites. Depends on the new function
bitset mechanism.
Differential Revision: http://reviews.llvm.org/D11857
llvm-svn: 247238
Generating call assume(icmp %vtable, %global_vtable) after constructor
call for devirtualization purposes.
For more info go to:
http://lists.llvm.org/pipermail/cfe-dev/2015-July/044227.html
Edit:
Fixed version because of PR24479.
After this patch got reverted because of ScalarEvolution bug (D12719)
Merged after John McCall big patch (Added Address).
http://reviews.llvm.org/D11859
llvm-svn: 247199
The tests in test/CodeGen/arm-target-features.c are currently
passing but warning messages are suppressed. These tests are now
synchronized with the corresponding changes in Target Parser.
This patch will fix the regressions in clang caused by r247136
Differential Revision: http://reviews.llvm.org/D12722
llvm-svn: 247138
Summary:
Currently clang provides no general way to generate nontemporal loads/stores.
There are some architecture specific builtins for doing so (e.g. in x86), but
there is no way to generate non-temporal store on, e.g. AArch64. This patch adds
generic builtins which are expanded to a simple store with '!nontemporal'
attribute in IR.
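A brief sketch of the new builtins (the matching load form also exists; the wrapper function names are made up):
void stream_store(int *dst, int value) {
  __builtin_nontemporal_store(value, dst);  /* emits a store tagged with !nontemporal */
}
int stream_load(int *src) {
  return __builtin_nontemporal_load(src);   /* likewise for loads */
}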
Differential Revision: http://reviews.llvm.org/D12313
llvm-svn: 247104
instruction used the ReturnValue as pointer operand or value operand. This
led to wrong code gen - in later stages (load-store elision code) the found
store and its operand would be erased, causing ReturnValue to become a <badref>.
The patch adds a check that makes sure that ReturnValue is a pointer operand of
store instruction. Regression test is also added.
This fixes PR24386.
Differential Revision: http://reviews.llvm.org/D12400
llvm-svn: 247003
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
Apparently there are many cast kinds that may cause implicit pointer
arithmetic to happen. In light of this, the cast ignoring logic
introduced in r246877 has been changed to only ignore a small set of
cast kinds, and a test for this behavior has been added.
Thanks to Richard for catching this before it became a bug report. :)
llvm-svn: 246890
Improvements:
- For all types, we would give up in a case such as:
__builtin_object_size((char*)&foo, N);
even if we could provide an answer to
__builtin_object_size(&foo, N);
We now provide the same answer for both of the above examples in all
cases.
- For type=1|3, we now support subobjects with unknown bases, as long
as the designator is valid.
Thanks to Richard Smith for the review + design planning.
Review: http://reviews.llvm.org/D12169
llvm-svn: 246877
This implements basic support for compiling (though not yet assembling
or linking) for a WebAssembly target. Note that ABI details are not yet
finalized, and may change.
Differential Revision: http://reviews.llvm.org/D12002
llvm-svn: 246814
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
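For example, the following trivial sketch is now accepted when compiling for an ARM target (the function name is made up):
__fp16 double_it(__fp16 x) {
  return x + x;   /* __fp16 arguments and return values are allowed under ACLE 2.0 */
}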
llvm-svn: 246764
Original commit message:
[ARM] Allow passing/returning of __fp16 arguments
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246760
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246755
This patch depends on r246688 (D12341).
The goal is to make LLVM generate different code for these functions for a target that
has cheap branches (see PR23827 for more details):
int foo();
int normal(int x, int y, int z) {
if (x != 0 && y != 0) return foo();
return 1;
}
int crazy(int x, int y) {
if (__builtin_unpredictable(x != 0 && y != 0)) return foo();
return 1;
}
Differential Revision: http://reviews.llvm.org/D12458
llvm-svn: 246699
GCC 4.8+ has a PowerPC-specific intrinsic, __builtin_ppc_get_timebase, to do
what Clang's __builtin_readcyclecounter does. For compatibility with code that
uses GCC's spelling (including glibc), support it as well.
Partially fixes PR23681.
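A minimal sketch (PowerPC targets only; the wrapper name is made up):
unsigned long long read_timebase(void) {
  /* same information as __builtin_readcyclecounter() on PowerPC */
  return __builtin_ppc_get_timebase();
}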
llvm-svn: 246510
In release builds labels are numbers. Matching just the number may result
in false matches where the label is contained in other numbers, such as
14 inside [114 x i8]. A stricter match requiring start of line or > character
before the label avoids these false matches.
llvm-svn: 246385
Without this, 64-byte vector types (__m512), specified to be 64-byte
aligned in the AVX512 draft SysV ABI, will only be 32-byte aligned.
This is analogous to AVX, for which we accept 32-byte max alignment.
Differential Revision: http://reviews.llvm.org/D10724
llvm-svn: 246230
There's no point in using a larger alignment if we have no instructions
that would benefit from it.
Differential Revision: http://reviews.llvm.org/D12389
llvm-svn: 246229
A couple of changes here:
a) Do less work in the case where we don't have a target attribute on the
function. We've already canonicalized the attributes for the function -
no need to do more work.
b) Use the newer canonicalized feature adding functions from TargetInfo
to do the work when we do have a target attribute. This enables us to diagnose
some warnings in the case of conflicting written attributes (only ppc does
this today) and also make sure to get all of the features for a cpu that's
listed rather than just change the cpu.
Updated all testcases accordingly and added a new testcase to verify that we'll
error out on ppc if we have some incompatible options using the existing diagnosis
framework there.
llvm-svn: 246195
We agreed for r245605 that, as long as we don't affect -O0 codegen
too much, it's OK to use native constructs rather than intrinsics.
Let's test that, starting with AVX2 here.
See PR24580.
Differential Revision: http://reviews.llvm.org/D12212
llvm-svn: 245987
As discussed in PR23648 - the intrinsics _m_from_int, _m_to_int and _m_prefetch are defined in mmintrin.h and prfchwintrin.h, so we don't need to define them in Intrin.h.
Added tests for _m_from_int and _m_to_int
D11338 already added a test for _m_prefetch
Differential Revision: http://reviews.llvm.org/D12272
llvm-svn: 245975
_rotl, _rotwl and _lrotl (and their right-shift counterparts) are official x86
intrinsics, and should be supported regardless of environment. This is in contrast
to _rotl8, _rotl16, and _rotl64 which are MS-specific.
Note that the MS documentation for _lrotl is different from the Intel
documentation. Intel explicitly documents it as a 64-bit rotate, while for MS,
since sizeof(unsigned long) for MSVC is always 4, a 32-bit rotate is implied.
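A small sketch (assuming the usual x86 intrinsic headers provide the declarations; which header is the right one is not stated in this message):
#include <immintrin.h>   /* assumption: pulls in the rotate intrinsics on x86 targets */
unsigned int rotate_demo(void) {
  return _rotl(0x80000001u, 1);   /* rotate left by one bit: 0x00000003 */
}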
Differential Revision: http://reviews.llvm.org/D12271
llvm-svn: 245923
This is important in the case that the LLVM-inferred llvm-struct
alignment is not the same as the clang-known C-struct alignment.
Differential Revision: http://reviews.llvm.org/D12243
llvm-svn: 245719
This lets us optimize them better. We agreed to remove the intrinsics,
instead of combining them later, as, at -O0, we generate the expected
instructions. Plus, it's a nice cleanup.
Differential Revision: http://reviews.llvm.org/D10556
llvm-svn: 245605
alignment is ignored, and they always allocate a complete
storage unit.
Also, change the dumping of AST record layouts: use the more
readable C++-style dumping even in C, include bitfield offset
information in the dump, and don't print sizeof/alignof
information for fields of record type, since we don't do so
for bases or other kinds of field.
rdar://22275433
llvm-svn: 245514
__builtin_object_size would return incorrect answers for many uses where
type=3. This fixes the inaccuracy by making us emit 0 instead of LLVM's
objectsize intrinsic.
Additionally, there are many cases where we would emit suboptimal (but
correct) answers, such as when arrays are involved. This patch fixes
some of these cases (please see new tests in test/CodeGen/object-size.c
for specifics on which cases are improved)
Resubmit of r245323 with PR24493 fixed.
Patch mostly by Richard Smith.
Differential Revision: http://reviews.llvm.org/D12000
This fixes PR15212.
llvm-svn: 245403
__builtin_object_size would return incorrect answers for many uses where
type=3. This fixes the inaccuracy by making us emit 0 instead of LLVM's
objectsize intrinsic.
Additionally, there are many cases where we would emit suboptimal (but
correct) answers, such as when arrays are involved. This patch fixes
some of these cases (please see new tests in test/CodeGen/object-size.c
for specifics on which cases are improved)
Patch mostly by Richard Smith.
Differential Revision: http://reviews.llvm.org/D12000
This fixes PR15212.
llvm-svn: 245323
The fix for this is in LLVM but it depends on how clang handles the alias
attribute, so add a test to the clang tests to make sure everything works
together as expected.
Differential Revision: http://reviews.llvm.org/D11980
llvm-svn: 244756
Summary:
float_cast_overflow is the only UBSan check without a source location attached.
This patch propagates SourceLocations where necessary to get them to the
EmitCheck() call.
Reviewers: rsmith, ABataev, rjmccall
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11757
llvm-svn: 244568
Summary:
NaCl is a platform where long double is the same as double.
Its mangling is spelled with "long double" but its ABI lowering is the same
as double.
Reviewers: rnk, chh
Subscribers: jfb, cfe-commits, dschuff
Differential Revision: http://reviews.llvm.org/D11922
llvm-svn: 244541
A test was recently (r244468) added to cover long double calling convention
codegen, distinguishing between Android and GNU conventions (where long doubles
are fp128 and x86_fp80, respectively). Native Client is a target where long
doubles are the same as doubles. This change augments the test to cover
that case.
Also rename the test to test/CodeGen/X86_64-longdouble.c
Differential Revision: http://reviews.llvm.org/D11921
llvm-svn: 244524
When clang is built with -DLLVM_ENABLE_ASSERTIONS=Off,
it does not create names for IR values.
Differential Revision: http://reviews.llvm.org/D11437
llvm-svn: 244502
These changes are for Android x86_64 targets to be compatible
with current Android g++ and conform to AMD64 ABI.
https://llvm.org/bugs/show_bug.cgi?id=23897
* Return type of long double (fp128) should be fp128, not x86_fp80.
* Vararg of long double (fp128) could be in register and overflowed to memory.
https://llvm.org/bugs/show_bug.cgi?id=24111
* Return value of long double (fp128) _Complex should be in memory like a structure of {fp128,fp128}.
Differential Revision: http://reviews.llvm.org/D11437
llvm-svn: 244468
Function types without prototypes can arise when mangling a function type
within an overloadable function in C. We mangle these as the absence of
any parameter types (not even an empty parameter list).
Differential Revision: http://reviews.llvm.org/D11848
llvm-svn: 244374
Summary:
By default, 'clang' emits dwarf and 'clang-cl' emits codeview. You can
force emission of one or both by passing -gcodeview and -gdwarf to
either driver.
Reviewers: dblaikie, hans
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11742
llvm-svn: 244097
Support for emitting libcalls for __atomic_fetch_nand and
__atomic_{add,sub,and,or,xor,nand}_fetch was missing; add it, and some
test cases.
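A brief sketch of the two builtin flavors (the wrapper names are made up; for types the target cannot handle inline, these now lower to the corresponding libcalls):
int fetch_then_nand(int *p) {
  /* returns the old value */
  return __atomic_fetch_nand(p, 0xFF, __ATOMIC_SEQ_CST);
}
int nand_then_fetch(int *p) {
  /* returns the new value */
  return __atomic_nand_fetch(p, 0xFF, __ATOMIC_SEQ_CST);
}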
Differential Revision: http://reviews.llvm.org/D10847
llvm-svn: 244063
Update testcases after LLVM change r243774.
Most of these had no need to check `tag:` field, but did so as a way of
getting to the `name:` field. In a few cases I've converted the `tag:`
checks to `arg:` or `CHECK-NOT: arg:`.
llvm-svn: 243775
This patch adds support for the System Z vector built-in functions.
The API-defined header file has the name vecintrin.h.
The user-level functions are defined in the same style as the clang
version of altivec.h, making heavy use of the __overloadable__ and
__always_inline__ attributes. Where possible the functions expand to
generic operations rather than specific built-in functions, in the hope
that that form can be optimised better.
Where a built-in routine is specified to require an immediate integer
argument, the __enable_if__ attribute is used to verify the argument is
in fact constant and in the appropriate range.
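As a rough, hypothetical illustration of that style (not taken from
vecintrin.h; assumes the SystemZ vector extension is enabled):
'''
static inline vector signed int
my_vec_rot(vector signed int v, int n)
    __attribute__((__overloadable__))
    __attribute__((__always_inline__))
    // enable_if only admits calls where n is a constant in the valid range
    __attribute__((enable_if(n >= 0 && n <= 31,
                             "rotate count must be a constant between 0 and 31")));
'''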
Based on a patch by Richard Sandiford.
llvm-svn: 243643
The z13 vector facility has an associated language extension,
closely modeled on AltiVec/VSX. The main differences are:
- vector long, vector float and vector pixel are not supported
- vector long long and vector double are supported (like VSX)
- comparison operators return a vector rather than a scalar integer
- shift operators behave like the OpenCL shift operators
- vector bool is only supported as argument to certain operators;
some operators allow mixing a bool with a non-bool vector
This patch adds clang support for the extension. It is closely modelled
on the AltiVec support. Similarly to the -faltivec option, there's a
new -fzvector option to enable the extensions (as well as an -mzvector
alias for compatibility with GCC). There's also a separate LangOpt.
The extension as implemented here is intended to be compatible with
the -mzvector extension recently implemented by GCC.
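A small sketch of two of the differences above (hypothetical functions;
compile with -fzvector on a z13 target):
'''
vector bool int equal(vector signed int a, vector signed int b) {
  return a == b;   // element-wise comparison yields a vector, not a scalar
}
vector signed int shift(vector signed int a, vector unsigned int n) {
  return a << n;   // OpenCL-style: each element shifted by its count modulo 32
}
'''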
Based on a patch by Richard Sandiford.
Differential Revision: http://reviews.llvm.org/D11001
llvm-svn: 243642
This will be used for old targets like Android that do not
support ELF TLS models.
Differential Revision: http://reviews.llvm.org/D10524
llvm-svn: 243441
The 3DNOW/PRFCHW cpu targets define both the PREFETCHW (set cache line modified) and PREFETCH (set cache line exclusive) instructions but only the _m_prefetchw (PREFETCHW) intrinsic is included in the header. This patch adds the missing _m_prefetch intrinsic.
I'm basing this off AMD documentation - the Intel docs on PREFETCHW support aren't clear about whether Silvermont/Broadwell properly support PREFETCH, but given that the intrinsic implementation is a default __builtin_prefetch call, it is safe either way.
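A minimal usage sketch of the pair (hypothetical function; assumes a target
with the 3DNOW/PRFCHW features enabled, e.g. -m3dnow):
'''
#include <mm3dnow.h>

void warm_line(void *p) {
  _m_prefetch(p);  // PREFETCH: fetch the cache line (exclusive)
  _m_prefetchw(p); // PREFETCHW: fetch the line in anticipation of a write (modified)
}
'''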
Fix for PR23648
Differential Revision: http://reviews.llvm.org/D11338
llvm-svn: 243305
__builtin_frame_address requires its argument to be a constant
expression, which already implies that it cannot have undefined behavior.
However, we used EmitScalarExpr to emit the argument, causing UBSan to
try to check for overflow.
Instead, use the constant expression emission system.
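For context, a minimal sketch of the builtin's use; the argument is an
integer constant expression, so it now goes through the constant-expression
emitter and is left alone by UBSan:
'''
void *current_frame(void) {
  return __builtin_frame_address(0); // 0 selects the current frame
}
'''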
This fixes PR24256.
llvm-svn: 243206
This is the PS4 counterpart to r229376, which quotes the library name if the
name contains a space. It was discovered that if a library name contains both
double-quote and space characters, quoting the name might produce unexpected
results, but we are mostly concerned with a Windows host environment, which
does not allow double quotes or slashes in file/folder names.
Differential Revision: http://reviews.llvm.org/D11275
llvm-svn: 242689
Currently, -save-temps will cause the ObjCARC optimization to be dropped, the
sanitizer passes to run early in the pipeline, and the profiling
instrumentation to run twice.
Fix the issue by properly disabling all passes in the optimization
pipeline when generating bitcode output, and by parsing some of the language
options even when the input is bitcode so the passes can be set up
correctly.
llvm-svn: 242565
"-arm-use-movt=0".
This change is needed since backend options do not make it to the backend
when doing LTO and are not capable of changing the behavior of code-gen
passes on a per-function basis.
rdar://problem/21529937
Differential Revision: http://reviews.llvm.org/D11025
llvm-svn: 242368
Revision 224297 modified the behavior of vec_sld for little endian so
that LLVM will generate the correct corresponding vsldoi instruction.
I neglected to update the existing tests, which continued to pass
because they were not specific enough. This patch adds enough
specificity to the tests to make them useful for BE and LE testing of
vec_sld.
llvm-svn: 242313
This patch corresponds to review:
http://reviews.llvm.org/D11184
A number of new interfaces for altivec.h (as mandated by the ABI):
vector float vec_cpsgn(vector float, vector float)
vector double vec_cpsgn(vector double, vector double)
vector double vec_or(vector bool long long, vector double)
vector double vec_or(vector double, vector bool long long)
vector double vec_re(vector double)
vector signed char vec_cntlz(vector signed char)
vector unsigned char vec_cntlz(vector unsigned char)
vector short vec_cntlz(vector short)
vector unsigned short vec_cntlz(vector unsigned short)
vector int vec_cntlz(vector int)
vector unsigned int vec_cntlz(vector unsigned int)
vector signed long long vec_cntlz(vector signed long long)
vector unsigned long long vec_cntlz(vector unsigned long long)
vector signed char vec_nand(vector bool signed char, vector signed char)
vector signed char vec_nand(vector signed char, vector bool signed char)
vector signed char vec_nand(vector signed char, vector signed char)
vector unsigned char vec_nand(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector unsigned char)
vector short vec_nand(vector bool short, vector short)
vector short vec_nand(vector short, vector bool short)
vector short vec_nand(vector short, vector short)
vector unsigned short vec_nand(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector unsigned short)
vector int vec_nand(vector bool int, vector int)
vector int vec_nand(vector int, vector bool int)
vector int vec_nand(vector int, vector int)
vector unsigned int vec_nand(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector unsigned int)
vector signed long long vec_nand(vector bool long long, vector signed long long)
vector signed long long vec_nand(vector signed long long, vector bool long long)
vector signed long long vec_nand(vector signed long long, vector signed long long)
vector unsigned long long vec_nand(vector bool long long, vector unsigned long long)
vector unsigned long long vec_nand(vector unsigned long long, vector bool long long)
vector unsigned long long vec_nand(vector unsigned long long, vector unsigned long long)
vector signed char vec_orc(vector bool signed char, vector signed char)
vector signed char vec_orc(vector signed char, vector bool signed char)
vector signed char vec_orc(vector signed char, vector signed char)
vector unsigned char vec_orc(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector unsigned char)
vector short vec_orc(vector bool short, vector short)
vector short vec_orc(vector short, vector bool short)
vector short vec_orc(vector short, vector short)
vector unsigned short vec_orc(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector unsigned short)
vector int vec_orc(vector bool int, vector int)
vector int vec_orc(vector int, vector bool int)
vector int vec_orc(vector int, vector int)
vector unsigned int vec_orc(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector unsigned int)
vector signed long long vec_orc(vector bool long long, vector signed long long)
vector signed long long vec_orc(vector signed long long, vector bool long long)
vector signed long long vec_orc(vector signed long long, vector signed long long)
vector unsigned long long vec_orc(vector bool long long, vector unsigned long long)
vector unsigned long long vec_orc(vector unsigned long long, vector bool long long)
vector unsigned long long vec_orc(vector unsigned long long, vector unsigned long long)
vector signed char vec_div(vector signed char, vector signed char)
vector unsigned char vec_div(vector unsigned char, vector unsigned char)
vector signed short vec_div(vector signed short, vector signed short)
vector unsigned short vec_div(vector unsigned short, vector unsigned short)
vector signed int vec_div(vector signed int, vector signed int)
vector unsigned int vec_div(vector unsigned int, vector unsigned int)
vector signed long long vec_div(vector signed long long, vector signed long long)
vector unsigned long long vec_div(vector unsigned long long, vector unsigned long long)
vector unsigned char vec_mul(vector unsigned char, vector unsigned char)
vector unsigned int vec_mul(vector unsigned int, vector unsigned int)
vector unsigned long long vec_mul(vector unsigned long long, vector unsigned long long)
vector unsigned short vec_mul(vector unsigned short, vector unsigned short)
vector signed char vec_mul(vector signed char, vector signed char)
vector signed int vec_mul(vector signed int, vector signed int)
vector signed long long vec_mul(vector signed long long, vector signed long long)
vector signed short vec_mul(vector signed short, vector signed short)
vector signed long long vec_mergeh(vector signed long long, vector signed long long)
vector signed long long vec_mergeh(vector signed long long, vector bool long long)
vector signed long long vec_mergeh(vector bool long long, vector signed long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergeh(vector bool long long, vector unsigned long long)
vector double vec_mergeh(vector double, vector double)
vector double vec_mergeh(vector double, vector bool long long)
vector double vec_mergeh(vector bool long long, vector double)
vector signed long long vec_mergel(vector signed long long, vector signed long long)
vector signed long long vec_mergel(vector signed long long, vector bool long long)
vector signed long long vec_mergel(vector bool long long, vector signed long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergel(vector bool long long, vector unsigned long long)
vector double vec_mergel(vector double, vector double)
vector double vec_mergel(vector double, vector bool long long)
vector double vec_mergel(vector bool long long, vector double)
vector signed int vec_pack(vector signed long long, vector signed long long)
vector unsigned int vec_pack(vector unsigned long long, vector unsigned long long)
vector bool int vec_pack(vector bool long long, vector bool long long)
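A hedged usage sketch of two of the interfaces above (hypothetical functions;
assumes something like -mcpu=pwr8 -maltivec):
'''
#include <altivec.h>

vector signed int nand_demo(vector signed int a, vector signed int b) {
  return vec_nand(a, b);   // element-wise ~(a & b)
}
vector double mergeh_demo(vector double a, vector double b) {
  return vec_mergeh(a, b); // { a[0], b[0] }
}
'''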
llvm-svn: 242171
add 2 bits to ObjCOrBuiltinID (changed from 11 bits to 13 bits), see discussion in
Add support for new intrinsics that are already covered by the BE.
All the intrinsics are covered by tests.
Differential Revision: http://reviews.llvm.org/D10893
llvm-svn: 242144
Previously, clang/llvm treated inline-asm instructions conservatively,
choosing not to eliminate the instructions or hoist them out of a loop
even when it was safe to do so. This commit makes changes to attach a
readonly or readnone attribute to an inline-asm instruction, which enables
passes such as LICM and EarlyCSE to move or optimize away the instruction.
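As a rough example (x86 assumed, hypothetical helper), inline asm like the
following has no side effects - it is not volatile and declares no "memory"
clobber - so it can now be tagged readnone and moved or removed by those passes:
'''
static inline int asm_add(int a, int b) {
  int result;
  __asm__("addl %2, %0" : "=r"(result) : "0"(a), "r"(b));
  return result;
}
'''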
rdar://problem/11358192
Differential Revision: http://reviews.llvm.org/D10546
llvm-svn: 241930
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:
struct XBitfield {
  unsigned b1 : 10;
  unsigned b2 : 12;
  unsigned b3 : 10;
};
struct YBitfield {
  char x;
  struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;

unsigned test7() {
  // CHECK: @test7
  // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
  return gbitfield.y.b2;
}
The "align 4" is actually wrong. Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.
This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp. Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.
Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).
In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage. This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.
Differential Revision: http://reviews.llvm.org/D11034
llvm-svn: 241916
This patch corresponds to review:
http://reviews.llvm.org/D10972
Fix for the handling of dependent features that are enabled by default
on some CPU's (such as -mvsx, -mpower8-vector).
Also provides a number of new interfaces or fixes existing ones in
altivec.h.
Changed signatures to conform to ABI:
vector short vec_perm(vector signed short, vector signed short, vector unsigned char)
vector int vec_perm(vector signed int, vector signed int, vector unsigned char)
vector long long vec_perm(vector signed long long, vector signed long long, vector unsigned char)
vector signed char vec_sld(vector signed char, vector signed char, const int)
vector unsigned char vec_sld(vector unsigned char, vector unsigned char, const int)
vector bool char vec_sld(vector bool char, vector bool char, const int)
vector unsigned short vec_sld(vector unsigned short, vector unsigned short, const int)
vector signed short vec_sld(vector signed short, vector signed short, const int)
vector signed int vec_sld(vector signed int, vector signed int, const int)
vector unsigned int vec_sld(vector unsigned int, vector unsigned int, const int)
vector float vec_sld(vector float, vector float, const int)
vector signed char vec_splat(vector signed char, const int)
vector unsigned char vec_splat(vector unsigned char, const int)
vector bool char vec_splat(vector bool char, const int)
vector signed short vec_splat(vector signed short, const int)
vector unsigned short vec_splat(vector unsigned short, const int)
vector bool short vec_splat(vector bool short, const int)
vector pixel vec_splat(vector pixel, const int)
vector signed int vec_splat(vector signed int, const int)
vector unsigned int vec_splat(vector unsigned int, const int)
vector bool int vec_splat(vector bool int, const int)
vector float vec_splat(vector float, const int)
Added a VSX path to:
vector float vec_round(vector float)
Added interfaces:
vector signed char vec_eqv(vector signed char, vector signed char)
vector signed char vec_eqv(vector bool char, vector signed char)
vector signed char vec_eqv(vector signed char, vector bool char)
vector unsigned char vec_eqv(vector unsigned char, vector unsigned char)
vector unsigned char vec_eqv(vector bool char, vector unsigned char)
vector unsigned char vec_eqv(vector unsigned char, vector bool char)
vector signed short vec_eqv(vector signed short, vector signed short)
vector signed short vec_eqv(vector bool short, vector signed short)
vector signed short vec_eqv(vector signed short, vector bool short)
vector unsigned short vec_eqv(vector unsigned short, vector unsigned short)
vector unsigned short vec_eqv(vector bool short, vector unsigned short)
vector unsigned short vec_eqv(vector unsigned short, vector bool short)
vector signed int vec_eqv(vector signed int, vector signed int)
vector signed int vec_eqv(vector bool int, vector signed int)
vector signed int vec_eqv(vector signed int, vector bool int)
vector unsigned int vec_eqv(vector unsigned int, vector unsigned int)
vector unsigned int vec_eqv(vector bool int, vector unsigned int)
vector unsigned int vec_eqv(vector unsigned int, vector bool int)
vector signed long long vec_eqv(vector signed long long, vector signed long long)
vector signed long long vec_eqv(vector bool long long, vector signed long long)
vector signed long long vec_eqv(vector signed long long, vector bool long long)
vector unsigned long long vec_eqv(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_eqv(vector bool long long, vector unsigned long long)
vector unsigned long long vec_eqv(vector unsigned long long, vector bool long long)
vector float vec_eqv(vector float, vector float)
vector float vec_eqv(vector bool int, vector float)
vector float vec_eqv(vector float, vector bool int)
vector double vec_eqv(vector double, vector double)
vector double vec_eqv(vector bool long long, vector double)
vector double vec_eqv(vector double, vector bool long long)
vector bool long long vec_perm(vector bool long long, vector bool long long, vector unsigned char)
vector double vec_round(vector double)
vector double vec_splat(vector double, const int)
vector bool long long vec_splat(vector bool long long, const int)
vector signed long long vec_splat(vector signed long long, const int)
vector unsigned long long vec_splat(vector unsigned long long, const int)
vector bool int vec_sld(vector bool int, vector bool int, const int)
vector bool short vec_sld(vector bool short, vector bool short, const int)
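A hedged usage sketch of two of the interfaces above (hypothetical functions;
assumes something like -mcpu=pwr8 -mvsx):
'''
#include <altivec.h>

vector float eqv_demo(vector float a, vector float b) {
  return vec_eqv(a, b);   // bitwise ~(a ^ b)
}
vector double splat_demo(vector double v) {
  return vec_splat(v, 1); // both elements take the value of v[1]
}
'''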
llvm-svn: 241904
Code in CGCall.cpp that loads up function arguments that need to be
coerced to a different type may in some cases ignore the fact that
the source of the argument is not naturally aligned. This may cause
incorrect code to be generated. In some places in CreateCoercedLoad,
we already have setAlignment calls to address this, but I ran into one
where it was missing, causing wrong code generation on SystemZ.
However, in that location, we do not actually know what alignment of
the source location we can rely on; the callers do not pass anything
to this routine. This is already an issue in other places in
CreateCoercedLoad; and the same problem exists for CreateCoercedStore.
To avoid pessimising code, and to fix the FIXMEs already in place,
this patch also adds an alignment argument to the CreateCoerced*
routines and uses it instead of forcing an alignment of 1. The
callers are changed to pass in the best information they have.
This actually requires changes in a number of existing test cases
since we now get better alignment in many places.
Differential Revision: http://reviews.llvm.org/D11033
llvm-svn: 241898
Move the diagnostic back to codegen so that we can compile ATL on the
self-host bot. We don't actually end up emitting code for the __try, so
the diagnostic won't be hit.
llvm-svn: 241761
This patch adds ObjectFilePCHContainerOperations, which uses the LLVM backend
to put the contents of a PCH into a __clangast section inside a COFF, ELF,
or Mach-O object file container.
This is done to facilitate module debugging by making it possible to
store the debug info for the types defined by a module alongside the AST.
rdar://problem/20091852
llvm-svn: 241620
"-arm-long-calls".
This change allows using -mlong-calls/-mno-long-calls for LTO and enabling or
disabling long calls on a per-function basis.
rdar://problem/21529937
Differential Revision: http://reviews.llvm.org/D9414
llvm-svn: 241565
different function signatures. (Previously clang would emit all block
pointer types with the type of the first block pointer in the compile
unit.)
rdar://problem/21602473
llvm-svn: 241534
This reverts commit r241244, but restricts SEH support to Win64.
This way, Chromium builds will still fall back on TUs with SEH, and
Clang developers can work on this incrementally upstream while patching
this small predicate locally. It'll also make it easier to review small
fixes.
llvm-svn: 241533
The patch is the same except for the addition of a new test for the
issue that required reverting the dependent llvm commit.
--Original Commit Message--
Pass down the -flto option to the -cc1 job, and from there into the
CodeGenOptions and onto the PassManagerBuilder. This enables gating
the new EliminateAvailableExternally module pass on whether we are
preparing for LTO.
If we are preparing for LTO (e.g. a -flto -c compile), the new pass is not
included as we want to preserve available_externally functions for possible
link time inlining.
llvm-svn: 241467