MS ignores the keyword "short" when it follows a jc/jz instruction; LLVM ought to do the same.
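For illustration, a minimal MS-style asm block of the kind affected (the function and label names are mine, not from the patch); "short" after the conditional jump is now accepted and ignored, matching MSVC:

void f(void) {
  __asm {
      xor eax, eax
      jz short done    // 'short' is parsed and ignored
    done:
      nop
  }
}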
llvm: D35892
Differential Revision: https://reviews.llvm.org/D35893
llvm-svn: 309510
Allows the incorporation of legitimate (x86) control registers within inline asm statements.
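A hedged sketch of the kind of statement this permits, assuming MS-style inline asm is the context (the surrounding function and variable are mine, not from the patch):

unsigned long cr0_value;

void capture_cr0(void) {
  __asm {
    mov eax, cr0        /* control register operand; privileged, kernel-mode only */
    mov cr0_value, eax
  }
}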
Differential Revision: https://reviews.llvm.org/D35903
llvm-svn: 309508
Clang specifies a max type alignment of 16 bytes on darwin targets (annoyingly in the driver not via cc1), meaning that the builtin nontemporal stores don't correctly align the loads/stores to 32 or 64 bytes when required, resulting in lowering to temporal unaligned loads/stores.
This patch casts the vectors to explicitly aligned types prior to the load/store to ensure that the required alignment is respected.
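As a point of reference, a minimal sketch of the builtin whose lowering is affected (not the header change itself; the typedef and function are mine):

typedef float v8sf __attribute__((vector_size(32)));

void stream_store(v8sf *dst, v8sf v) {
  /* A 32-byte vector store: it must be 32-byte aligned to lower to a
     non-temporal (streaming) store rather than a plain unaligned store. */
  __builtin_nontemporal_store(v, dst);
}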
Differential Revision: https://reviews.llvm.org/D35996
llvm-svn: 309488
On some targets, passing zero to the clz() or ctz() builtins has undefined
behavior. I ran into this issue while debugging UB in __hash_table from libcxx:
the bug I was seeing manifested itself differently under -O0 vs -Os, due to a
UB call to clz() (see: libcxx/r304617).
This patch introduces a check which can detect UB calls to builtins.
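A minimal sketch of the kind of call the new check diagnoses (assuming the check is exposed through UBSan, e.g. -fsanitize=builtin; the wrapper function is mine):

unsigned count_leading_zeros(unsigned x) {
  /* Undefined behavior when x == 0; the new check flags this at runtime. */
  return __builtin_clz(x);
}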
llvm.org/PR26979
Differential Revision: https://reviews.llvm.org/D34590
llvm-svn: 309459
Clang specifies a max type alignment of 16 bytes on darwin targets, meaning that the builtin nontemporal stores don't correctly align the loads/stores to 32 or 64 bytes when required, resulting in lowering to temporal unaligned loads/stores.
llvm-svn: 309382
The ARM Runtime ABI document (IHI0043) defines the AEABI floating point
helper functions in 4.1.2 The floating-point helper functions. These
functions always use the base PCS (soft-fp). However, helper functions
defined outside of this document, such as the complex-number multiply and
divide helpers, are not covered by this requirement and should use the
hard-float PCS when the target is hard-float, since both compiler-rt and
libgcc for a hard-float sysroot implement these functions with a hard-float PCS.
All of the floating point helper functions that are explicitly soft float
are expanded in the llvm ARM backend. This change makes clang not force the
BuiltinCC to AAPCS for AAPCS_VFP. With this change the ARM compiler-rt
tests involving _Complex pass with both hard-fp and soft-fp targets.
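For context, a minimal example that exercises one of these helpers (the function name is mine; __mulsc3 is the usual compiler-rt/libgcc name for the single-precision complex multiply helper):

_Complex float cmulf(_Complex float a, _Complex float b) {
  /* May end up calling the __mulsc3 helper; with this change that call
     uses the hard-float PCS on a hard-float target. */
  return a * b;
}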
Differential Revision: https://reviews.llvm.org/D35538
llvm-svn: 309257
Notifying the author via Diffusion did not yield any answer. Therefore, I'm
adding the missing triple. I have no idea if this is the intended triple, but
it seems to fit the bill and should turn the bots back to green.
If the intended triple is a different one, please feel free to change it, but I
need to make this change now to turn the bots back to green.
llvm-svn: 308985
This reverts r308867 and r308866.
It broke the sanitizer-windows buildbot on C++ code similar to the
following:
namespace cl { }
void f() {
  __asm {
    mov al, cl
  }
}
t.cpp(4,13): error: unexpected namespace name 'cl': expected expression
    mov al, cl
            ^
In this case, MSVC parses 'cl' as a register, not a namespace.
llvm-svn: 308926
In MS-style inline assembly, the following snippet:
int eax;
__asm mov eax, ebx
should store the value of ebx into the memory location of the variable eax.
This patch sees to it.
Previously, a register-to-register move would have been emitted.
llvm: D34739
Differential Revision: https://reviews.llvm.org/D34740
llvm-svn: 308867
MIPS gcc supports the `long_call`/`far`/`near` attributes only, but other
targets have the `short_call` attribute, so let's support it for MIPS
for consistency.
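A minimal usage sketch (the function name is mine; I'm assuming short_call acts as the counterpart of long_call, like near):

/* Force a short, PC-relative call even under -mlong-calls. */
void helper(void) __attribute__((short_call));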
llvm-svn: 308719
The patch adds support for lowering i128 parameters. The changes are quite
trivial, treating i128 as a "special case" of integer type. With this patch, we
lower i128 params the same way as 16-byte aggregates: .param .b8 _ [16].
Without this patch, NVPTX can't deal with 128-bit integers:
* in some cases because of failed assertions like
ValVTs.size() == OutVals.size() && "Bad return value decomposition"
* in other cases emitting PTX with .i128 or .u128 types (which are not valid [1])
[1] http://docs.nvidia.com/cuda/parallel-thread-execution/index.html#fundamental-types
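For illustration, a source-level sketch of what now lowers correctly (the function name is mine, not from the patch):

__int128 add128(__int128 a, __int128 b) {
  /* Each __int128 parameter is now passed as .param .b8 _[16],
     like a 16-byte aggregate. */
  return a + b;
}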
Differential Revision: https://reviews.llvm.org/D34555
Patch by: Denys Zariaiev (denys.zariaiev@gmail.com)
llvm-svn: 308675
This patch adds support for the `long_call`, `far`, and `near` attributes
for MIPS targets. The `long_call` and `far` attributes are synonyms. All
these attributes override `-mlong-calls` / `-mno-long-calls` command
line options for a particular function.
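A minimal usage sketch (the function names are mine):

/* Always use a long (absolute) call sequence, even under -mno-long-calls. */
void far_callee(void) __attribute__((long_call));
void also_far(void) __attribute__((far));      /* synonym for long_call */

/* Always use a short (PC-relative) call, even under -mlong-calls. */
void near_callee(void) __attribute__((near));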
Differential revision: https://reviews.llvm.org/D35479
llvm-svn: 308667
Summary: COFF ARM64 is an LLP64 platform, so int is 4 bytes, long is 4 bytes, and long long is 8 bytes.
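Expressed as a quick sanity check (this snippet is mine, not from the patch):

/* LLP64 data model on COFF ARM64: */
_Static_assert(sizeof(int) == 4, "int is 4 bytes");
_Static_assert(sizeof(long) == 4, "long is 4 bytes");
_Static_assert(sizeof(long long) == 8, "long long is 8 bytes");
_Static_assert(sizeof(void *) == 8, "pointers are 8 bytes");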
Reviewers: compnerd, ruiu, rnk, efriedma
Reviewed By: compnerd, efriedma
Subscribers: efriedma, javed.absar, cfe-commits, aemerson, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D34859
llvm-svn: 308222
Move builtins from the x86-specific scope into the global
scope. Their use is still limited to x86_64 and aarch64, though.
This allows wine on aarch64 to properly handle variadic functions.
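A hedged sketch of the kind of code this enables (I'm assuming the builtins in question are the ms_abi va_list ones; the commit does not name them, and the function here is mine):

__attribute__((ms_abi)) int sum_ms(int count, ...) {
  __builtin_ms_va_list ap;
  __builtin_ms_va_start(ap, count);
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += __builtin_va_arg(ap, int);
  __builtin_ms_va_end(ap);
  return total;
}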
Differential Revision: https://reviews.llvm.org/D34475
llvm-svn: 308218
This patch updates the vecintrin.h header file to provide the new
set of high-level vector built-in functions. This matches the
updated definition implemented by other compilers for the platform,
indicated by the pre-defined macro __VEC__ == 10302.
Note that some of the new functions (notably those involving the
vector float data type) are only available with -march=z14
(indicated by __ARCH__ == 12).
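For illustration, a small use of the header with the vector float type (the function name is mine, and I'm assuming vec_add gained a vector float overload with this update; requires -march=z14 with -fzvector):

#include <vecintrin.h>

vector float vaddf(vector float a, vector float b) {
  return vec_add(a, b);
}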
llvm-svn: 308199
This patch extends the -fzvector language feature to enable the new
"vector float" data type when compiling at -march=z14. This matches
the updated extension definition implemented by other compilers for
the platform, which is indicated to applications by pre-defining
__VEC__ to 10302 (instead of 10301).
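A minimal sketch of code that can now key off the new macro value (the guard is mine):

#if defined(__VEC__) && __VEC__ >= 10302
vector float vf;   /* only valid with -fzvector at -march=z14 */
#endif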
llvm-svn: 308198
This patch series adds support for the IBM z14 processor. This part includes:
- Basic support for the new processor and its features.
- Support for low-level builtins mapped to new LLVM intrinsics.
Support for the -fzvector extension to vector float and the new
high-level vector intrinsics is provided by separate patches.
llvm-svn: 308197
This is the clang part, adding support for
void __builtin_HEXAGON_Y2_dccleana(void*);
void __builtin_HEXAGON_Y2_dccleaninva(void*);
void __builtin_HEXAGON_Y2_dcinva(void*);
void __builtin_HEXAGON_Y2_dczeroa(void*);
void __builtin_HEXAGON_Y4_l2fetch(void*, unsigned);
void __builtin_HEXAGON_Y5_l2fetch(void*, unsigned long long);
Requires r308032.
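A small usage sketch (the wrapper function is mine; dccleana cleans the data-cache line containing the given address):

static inline void cache_clean_line(void *p) {
  __builtin_HEXAGON_Y2_dccleana(p);
}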
llvm-svn: 308035
The pointer overflow check gives false negatives when dealing with
expressions in which an unsigned value is subtracted from a pointer.
This is summarized in PR33430 [1]: ubsan permits the result of the
subtraction to be greater than "p", but it should not.
To fix the issue, we should track whether or not the pointer expression
is a subtraction. If it is, and the indices are unsigned, we know to
expect "p - <unsigned> <= p".
I've tested this by running check-{llvm,clang} with a stage2
ubsan-enabled build. I've also added some tests to compiler-rt, which
are in D34122.
[1] https://bugs.llvm.org/show_bug.cgi?id=33430
Differential Revision: https://reviews.llvm.org/D34121
llvm-svn: 307955
devirtualized.
The code to detect devirtualized calls is already in IRGen, so move the
code to lib/AST and make it a shared utility between Sema and IRGen.
This commit fixes a linkage error I was seeing when compiling the
following code:
$ cat test1.cpp
struct Base {
  virtual void operator()() {}
};

template<class T>
struct Derived final : Base {
  void operator()() override {}
};

Derived<int> *d;

int main() {
  if (d)
    (*d)();
  return 0;
}
rdar://problem/33195657
Differential Revision: https://reviews.llvm.org/D34301
llvm-svn: 307883
Summary:
The _bit_scan_forward and _bit_scan_reverse intrinsics were accidentally
masked under the preprocessor checks that prune intrinsic definitions for the
benefit of faster compile times on Windows. This patch moves the
definitions out of that region.
Fixes PR33722.
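For reference, a small use of one of the affected intrinsics (the wrapper is mine):

#include <x86intrin.h>

int lowest_set_bit(int mask) {
  return _bit_scan_forward(mask);   /* undefined if mask == 0 */
}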
Reviewers: craig.topper, aaboud, thakis
Reviewed By: craig.topper
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D35184
llvm-svn: 307524
Certain targets (e.g. amdgcn) require global variables to stay in the global or constant
address space. In C or C++, global variables are emitted in the default (generic) address
space. This patch introduces the virtual functions TargetCodeGenInfo::getGlobalVarAddressSpace
and TargetInfo::getConstantAddressSpace to handle this in a general way.
It only affects IR generated for amdgcn target.
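For illustration (the variable is mine, and I'm assuming AMDGPU's usual numbering, where the global address space is 1):

/* For amdgcn, a C global like this is now emitted in the target's global
   address space (addrspace(1) in the IR) rather than the generic one. */
int device_counter = 0;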
Differential Revision: https://reviews.llvm.org/D33842
llvm-svn: 307470
Summary: The patch makes the integration test cover major sample PGO components.
Reviewers: davidxl
Reviewed By: davidxl
Subscribers: sanjoy, cfe-commits
Differential Revision: https://reviews.llvm.org/D34725
llvm-svn: 307445
problems in testing, see comments in D34161 for some more details.
A fix is in progress in D35011, but a revert seems better now as the fix will
probably take some more time to land.
llvm-svn: 307277
Summary: This implements the clang bits of https://reviews.llvm.org/D34720, and adds a corresponding test to verify that it works.
Reviewers: chandlerc, davidxl, davide, tejohnson
Reviewed By: chandlerc, tejohnson
Subscribers: tejohnson, sanjoy, mehdi_amini, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D34721
llvm-svn: 306764
This patch extends the `overloadable` attribute to allow for one
function with a given name to not be marked with the `overloadable`
attribute. The overload without the `overloadable` attribute will not
have its name mangled.
So, the following code is now legal:
void foo(void) __attribute__((overloadable));
void foo(int);
void foo(float) __attribute__((overloadable));
In addition, this patch fixes a bug where we'd accept code with
`__attribute__((overloadable))` inconsistently applied. In other words,
we used to accept:
void foo(void);
void foo(void) __attribute__((overloadable));
But we will do this no longer, since it defeats the original purpose of
requiring `__attribute__((overloadable))` on all redeclarations of a
function.
This breakage seems to not be an issue in practice, since the only code
I could find that had this pattern often looked like:
void foo(void);
void foo(void) __attribute__((overloadable)) __asm__("foo");
void foo(int) __attribute__((overloadable));
...which can now be simplified by removing the asm label and
overloadable attribute from the redeclaration of `void foo(void);`.
Differential Revision: https://reviews.llvm.org/D32332
llvm-svn: 306467