Summary:
Clang currently provides no general way to generate non-temporal loads/stores.
There are some architecture-specific builtins for doing so (e.g. on x86), but
there is no way to generate a non-temporal store on, e.g., AArch64. This patch
adds generic builtins which are expanded to a plain load/store carrying
'!nontemporal' metadata in the IR.
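For illustration, a minimal usage sketch, assuming the new generic builtins
are spelled __builtin_nontemporal_load and __builtin_nontemporal_store:
/* Streaming copy: hint that the data should not be kept in the caches. */
void copy_stream(float *dst, const float *src, unsigned n) {
  for (unsigned i = 0; i != n; ++i) {
    float v = __builtin_nontemporal_load(&src[i]);  /* load with !nontemporal */
    __builtin_nontemporal_store(v, &dst[i]);        /* store with !nontemporal */
  }
}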
Differential Revision: http://reviews.llvm.org/D12313
llvm-svn: 247104
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
This implements basic support for compiling (though not yet assembling
or linking) for a WebAssembly target. Note that ABI details are not yet
finalized, and may change.
Differential Revision: http://reviews.llvm.org/D12002
llvm-svn: 246814
This patch depends on r246688 (D12341).
The goal is to make LLVM generate different code for these functions for a target that
has cheap branches (see PR23827 for more details):
int foo();
int normal(int x, int y, int z) {
  if (x != 0 && y != 0) return foo();
  return 1;
}
int crazy(int x, int y) {
  if (__builtin_unpredictable(x != 0 && y != 0)) return foo();
  return 1;
}
Differential Revision: http://reviews.llvm.org/D12458
llvm-svn: 246699
GCC 4.8+ has a PowerPC-specific intrinsic, __builtin_ppc_get_timebase, to do
what Clang's __builtin_readcyclecounter does. For compatibility with code that
uses GCC's spelling (including glibc), support it as well.
Partially fixes PR23681.
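A minimal sketch of the two spellings, which are assumed here to be
interchangeable on PowerPC:
unsigned long long read_tb(void) {
  return __builtin_ppc_get_timebase();  /* same value as __builtin_readcyclecounter() */
}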
llvm-svn: 246510
We had "vcvt_f16" and "VCVT_HIGH_F16": for other FP types, this naming
is used for intrinsics with integer overloads. The FP->FP conversions,
on the other hand, use the full "vcvt_f32_f64" name instead.
Use the same naming convention for the f16<->f32 conversions.
While there, reorder the definitions a little bit.
llvm-svn: 245763
- Use cached LLVM types
- Turn SmallVectors into Arrays/ArrayRef if the size is static
- Use ConstantInt::get's implicit splatting for vector types
No functionality change intended.
llvm-svn: 243425
__builtin_frame_address requires its argument to be a constant expression,
which already implies that it cannot have undefined behavior. However, we
used EmitScalarExpr to emit the argument, causing UBSan to try to check it
for overflow.
Instead, use the constant expression emission system.
This fixes PR24256.
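A sketch of the kind of call affected; the argument is a constant
expression, so no sanitizer checks should be emitted for it:
void *current_frame(void) {
  return __builtin_frame_address(0);  /* constant argument, emitted as a constant */
}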
llvm-svn: 243206
This patch corresponds to review:
http://reviews.llvm.org/D11184
A number of new interfaces for altivec.h (as mandated by the ABI):
vector float vec_cpsgn(vector float, vector float)
vector double vec_cpsgn(vector double, vector double)
vector double vec_or(vector bool long long, vector double)
vector double vec_or(vector double, vector bool long long)
vector double vec_re(vector double)
vector signed char vec_cntlz(vector signed char)
vector unsigned char vec_cntlz(vector unsigned char)
vector short vec_cntlz(vector short)
vector unsigned short vec_cntlz(vector unsigned short)
vector int vec_cntlz(vector int)
vector unsigned int vec_cntlz(vector unsigned int)
vector signed long long vec_cntlz(vector signed long long)
vector unsigned long long vec_cntlz(vector unsigned long long)
vector signed char vec_nand(vector bool signed char, vector signed char)
vector signed char vec_nand(vector signed char, vector bool signed char)
vector signed char vec_nand(vector signed char, vector signed char)
vector unsigned char vec_nand(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector unsigned char)
vector short vec_nand(vector bool short, vector short)
vector short vec_nand(vector short, vector bool short)
vector short vec_nand(vector short, vector short)
vector unsigned short vec_nand(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector unsigned short)
vector int vec_nand(vector bool int, vector int)
vector int vec_nand(vector int, vector bool int)
vector int vec_nand(vector int, vector int)
vector unsigned int vec_nand(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector unsigned int)
vector signed long long vec_nand(vector bool long long, vector signed long long)
vector signed long long vec_nand(vector signed long long, vector bool long long)
vector signed long long vec_nand(vector signed long long, vector signed long long)
vector unsigned long long vec_nand(vector bool long long, vector unsigned long long)
vector unsigned long long vec_nand(vector unsigned long long, vector bool long long)
vector unsigned long long vec_nand(vector unsigned long long, vector unsigned long long)
vector signed char vec_orc(vector bool signed char, vector signed char)
vector signed char vec_orc(vector signed char, vector bool signed char)
vector signed char vec_orc(vector signed char, vector signed char)
vector unsigned char vec_orc(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector unsigned char)
vector short vec_orc(vector bool short, vector short)
vector short vec_orc(vector short, vector bool short)
vector short vec_orc(vector short, vector short)
vector unsigned short vec_orc(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector unsigned short)
vector int vec_orc(vector bool int, vector int)
vector int vec_orc(vector int, vector bool int)
vector int vec_orc(vector int, vector int)
vector unsigned int vec_orc(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector unsigned int)
vector signed long long vec_orc(vector bool long long, vector signed long long)
vector signed long long vec_orc(vector signed long long, vector bool long long)
vector signed long long vec_orc(vector signed long long, vector signed long long)
vector unsigned long long vec_orc(vector bool long long, vector unsigned long long)
vector unsigned long long vec_orc(vector unsigned long long, vector bool long long)
vector unsigned long long vec_orc(vector unsigned long long, vector unsigned long long)
vector signed char vec_div(vector signed char, vector signed char)
vector unsigned char vec_div(vector unsigned char, vector unsigned char)
vector signed short vec_div(vector signed short, vector signed short)
vector unsigned short vec_div(vector unsigned short, vector unsigned short)
vector signed int vec_div(vector signed int, vector signed int)
vector unsigned int vec_div(vector unsigned int, vector unsigned int)
vector signed long long vec_div(vector signed long long, vector signed long long)
vector unsigned long long vec_div(vector unsigned long long, vector unsigned long long)
vector unsigned char vec_mul(vector unsigned char, vector unsigned char)
vector unsigned int vec_mul(vector unsigned int, vector unsigned int)
vector unsigned long long vec_mul(vector unsigned long long, vector unsigned long long)
vector unsigned short vec_mul(vector unsigned short, vector unsigned short)
vector signed char vec_mul(vector signed char, vector signed char)
vector signed int vec_mul(vector signed int, vector signed int)
vector signed long long vec_mul(vector signed long long, vector signed long long)
vector signed short vec_mul(vector signed short, vector signed short)
vector signed long long vec_mergeh(vector signed long long, vector signed long long)
vector signed long long vec_mergeh(vector signed long long, vector bool long long)
vector signed long long vec_mergeh(vector bool long long, vector signed long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergeh(vector bool long long, vector unsigned long long)
vector double vec_mergeh(vector double, vector double)
vector double vec_mergeh(vector double, vector bool long long)
vector double vec_mergeh(vector bool long long, vector double)
vector signed long long vec_mergel(vector signed long long, vector signed long long)
vector signed long long vec_mergel(vector signed long long, vector bool long long)
vector signed long long vec_mergel(vector bool long long, vector signed long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergel(vector bool long long, vector unsigned long long)
vector double vec_mergel(vector double, vector double)
vector double vec_mergel(vector double, vector bool long long)
vector double vec_mergel(vector bool long long, vector double)
vector signed int vec_pack(vector signed long long, vector signed long long)
vector unsigned int vec_pack(vector unsigned long long, vector unsigned long long)
vector bool int vec_pack(vector bool long long, vector bool long long)
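For illustration, a brief usage sketch assuming a POWER8 target
(e.g. -mcpu=power8 -maltivec) and the interfaces listed above:
#include <altivec.h>
vector unsigned int leading_zeros(vector unsigned int v) {
  return vec_cntlz(v);
}
vector double merged_reciprocal(vector double a, vector double b) {
  return vec_re(vec_mergeh(a, b));  /* reciprocal estimate of the merged high halves */
}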
llvm-svn: 242171
This is needed to use clang's command line option "-ftrap-function" for LTO and
enable changing the trap function name on a per-call-site basis.
rdar://problem/21225723
Differential Revision: http://reviews.llvm.org/D10831
llvm-svn: 241306
This matches the implementation of the gcc support for the same
feature, including checking the values set up by libgcc at runtime.
The structure looks like this:
  unsigned int __cpu_vendor;
  unsigned int __cpu_type;
  unsigned int __cpu_subtype;
  unsigned int __cpu_features[1];
with a set of enums to match the various fields that are filled out after
parsing the output of the cpuid instruction.
This also adds a set of error checks for valid input (and CPU).
compiler-rt support for this and the other builtins in this family
(__builtin_cpu_init and __builtin_cpu_is) is forthcoming.
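A minimal usage sketch, assuming the GCC-compatible __builtin_cpu_* family
is exposed as described:
int have_avx2(void) {
  __builtin_cpu_init();                  /* fill in the libgcc cpu model data */
  return __builtin_cpu_supports("avx2"); /* checks the feature bits parsed from cpuid */
}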
llvm-svn: 240994
This patch corresponds to review:
http://reviews.llvm.org/D10637
This is the first round of additions of missing builtins listed in the ABI document. More to come (this builds on what seurer already added). This patch adds:
vector signed long long vec_abs(vector signed long long)
vector double vec_abs(vector double)
vector signed long long vec_add(vector signed long long, vector signed long long)
vector unsigned long long vec_add(vector unsigned long long, vector unsigned long long)
vector double vec_add(vector double, vector double)
vector double vec_and(vector bool long long, vector double)
vector double vec_and(vector double, vector bool long long)
vector double vec_and(vector double, vector double)
vector signed long long vec_and(vector signed long long, vector signed long long)
vector double vec_andc(vector bool long long, vector double)
vector double vec_andc(vector double, vector bool long long)
vector double vec_andc(vector double, vector double)
vector signed long long vec_andc(vector signed long long, vector signed long long)
vector double vec_ceil(vector double)
vector bool long long vec_cmpeq(vector double, vector double)
vector bool long long vec_cmpge(vector double, vector double)
vector bool long long vec_cmpge(vector signed long long, vector signed long long)
vector bool long long vec_cmpge(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmpgt(vector double, vector double)
vector bool long long vec_cmple(vector double, vector double)
vector bool long long vec_cmple(vector signed long long, vector signed long long)
vector bool long long vec_cmple(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmplt(vector double, vector double)
vector bool long long vec_cmplt(vector signed long long, vector signed long long)
vector bool long long vec_cmplt(vector unsigned long long, vector unsigned long long)
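A brief sketch of the new 64-bit/double comparisons (again assuming
-mcpu=power8 -maltivec):
#include <altivec.h>
vector bool long long greater(vector double a, vector double b) {
  return vec_cmpgt(a, b);  /* vector bool long long result, as listed above */
}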
llvm-svn: 240821
Integer variants are implemented as atomicrmw or cmpxchg instructions.
Atomic add for floating point (__nvvm_atom_add_gen_f()) is implemented
as a call to an overloaded @llvm.nvvm.atomic.load.add.f32.* LLVM
intrinsic.
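A sketch of the floating-point form for an NVPTX target (the builtin name is
from the text above; the surrounding code is illustrative):
float atomic_add_sample(float *acc, float v) {
  return __nvvm_atom_add_gen_f(acc, v);  /* emitted as @llvm.nvvm.atomic.load.add.f32.* */
}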
Differential Revision: http://reviews.llvm.org/D10666
llvm-svn: 240669
This fixes a serious bug in r240462: checking the BuiltinID for
ARM::BI_MoveToCoprocessor* in EmitBuiltinExpr() ignores the fact that
each target has an overlapping range of the BuiltinID values. That check
can trigger for builtins from other targets, leading to very bad
behavior.
Part of the reason I did not implement r240462 this way to begin with is
the special handling of the last argument for Neon builtins. In this
change, I have factored out the check to see which builtins have that
extra argument into a new HasExtraNeonArgument() function. There is still
some awkwardness in having to check for those builtins in two separate
places, i.e., once to see if the extra argument is present and once to
generate the appropriate IR, but this seems much cleaner than my previous
patch.
llvm-svn: 240522
The Microsoft-extension _MoveToCoprocessor and _MoveToCoprocessor2
builtins take the register value to be moved as the first argument,
but the corresponding mcr and mcr2 LLVM intrinsics expect that value
to be the third argument. Handle this as a special case, while still
leaving those intrinsics as generic MSBuiltins. I considered the
alternative of handling these in EmitARMBuiltinExpr, but that does
not work well for the follow-up change that I'm going to make to improve
the error handling for PR22560 -- we need the GetBuiltinType() checks
for ICEArguments, and the ARM version of that code is only used for
Neon intrinsics where the last argument is special and not
checked in the normal way.
llvm-svn: 240462
This implements the special register intrinsics described in ACLE section
10.1: __arm_{w,r}sr{,p,64}.
This includes arm_acle.h definitions with builtins and codegen to support
them; the intrinsics are implemented by generating read/write_register calls,
which are lowered appropriately in the backend based on the register string
provided. Sema checking is also implemented to reject invalid parameters.
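A minimal sketch, assuming an AArch64 target and a register string the
target accepts:
#include <arm_acle.h>
unsigned long long read_virtual_counter(void) {
  return __arm_rsr64("cntvct_el0");  /* expands to a read_register call */
}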
Differential Revision: http://reviews.llvm.org/D9697
llvm-svn: 239737
On ARM/AArch64, we currently always use EmitScalarExpr for the immediate
builtin arguments, instead of directly emitting the constant. When the
overflow sanitizer is enabled, this generates overflow intrinsics
instead of constants, breaking assumptions in various places.
Instead, use the knowledge of "immediates" to directly emit a constant:
- teach the tablegen backend to emit the "immediate" modifiers
- use those modifiers in the NEON CodeGen, on ARM and AArch64.
Fixes PR23517.
Differential Revision: http://reviews.llvm.org/D10045
llvm-svn: 239002
This adds low-level builtins to allow access to all of the z13 vector
instructions. Note that instructions whose semantics can be described
by standard C (including clang extensions) do not get any builtins.
For each instruction whose semantics *cannot* (fully) be described, we
define a builtin named __builtin_s390_<insn> that directly maps to this
instruction. These are intended to be compatible with GCC.
For instructions that also set the condition code, the builtin will take
an extra argument of type "int *" at the end. The integer pointed to by
this argument will be set to the post-instruction CC value.
For many instructions, the low-level builtin is mapped to the corresponding
LLVM IR intrinsic. However, a number of instructions can be represented
in standard LLVM IR without requiring use of a target intrinsic.
Some instructions require immediate integer operands within a certain
range. Those are verified at the Sema level.
Based on a patch by Richard Sandiford.
llvm-svn: 236532
The zEC12 provides the transactional-execution facility. This is exposed
to users via a set of builtin routines on other compilers. This patch
adds clang support to enable those builtins. In particular, the patch:
- enables the transactional-execution feature by default on zEC12
- allows overriding the presence of that feature via the -mhtm/-mno-htm options
- adds a predefined macro __HTM__ if the feature is enabled
- adds support for the transactional-execution GCC builtins
- adds Sema checking to verify the __builtin_tabort abort code
- adds the s390intrin.h header file (for GCC compatibility)
- adds s390 sections to the htmintrin.h and htmxlintrin.h header files
Since this is the first use of target-specific intrinsics on the platform,
the patch creates the include/clang/Basic/BuiltinsSystemZ.def file and
hooks it up in TargetBuiltins.h and lib/Basic/Targets.cpp.
An associated LLVM patch adds the required LLVM IR intrinsics.
For reference, the transactional-execution instructions are documented
in the z/Architecture Principles of Operation for the zEC12:
http://publibfp.boulder.ibm.com/cgi-bin/bookmgr/download/DZ9ZR009.pdf
The associated builtins are documented in the GCC manual:
http://gcc.gnu.org/onlinedocs/gcc/S_002f390-System-z-Built-in-Functions.html
The htmxlintrin.h intrinsics provided for compatibility with the IBM XL
compiler are documented in the "z/OS XL C/C++ Programming Guide".
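For illustration, a minimal sketch using the GCC-compatible builtins
(assuming -march=zEC12, where the feature is on by default):
int add_transactionally(int *p) {
  if (__builtin_tbegin((void *)0) == 0) {  /* CC 0: transaction started */
    *p += 1;
    __builtin_tend();
    return 1;
  }
  return 0;  /* transaction could not start or was aborted */
}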
llvm-svn: 233804
The argument range checks for the HTM and Crypto builtins were implemented in
CGBuiltin.cpp, not in Sema. This change moves them to the appropriate location
in SemaChecking.cpp. It requires the creation of a new method in the Sema class
to do checks for PPC-specific builtins.
http://reviews.llvm.org/D8672
llvm-svn: 233586
This patch adds Hardware Transactional Memory (HTM) support as provided by ISA 2.07
(POWER8). The intrinsic support is based on the GCC one [1], with both 'PowerPC HTM
Low Level Built-in Functions' and 'PowerPC HTM High Level Inline Functions'
implemented.
Along with the builtins, a new driver switch is added to enable/disable HTM
instruction support (-mhtm), as well as a header with common definitions (mostly to
parse the TFHAR register value). The HTM switch also defines a preprocessor macro,
__HTM__.
Using HTM requires a reasonably recent kernel with PPC HTM enabled. Tested on
powerpc64 and powerpc64le.
This is sent along with an LLVM patch that enables the builtins and the option switch.
[1]
https://gcc.gnu.org/onlinedocs/gcc/PowerPC-Hardware-Transactional-Memory-Built-in-Functions.html
Phabricator Review: http://reviews.llvm.org/D8248
llvm-svn: 233205
Somehow, we never managed to implement this fully. We could constant
fold it like crazy, including constant folding complex arguments, etc.
But if you actually needed to generate code for it, error.
I've implemented it using the somewhat obvious lowering. Happy for
suggestions on a more clever way to lower this.
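For reference, the semantics being lowered (a trivial sketch, not the
emitted IR):
int inf_sign(double x) {
  return __builtin_isinf_sign(x);  /* -1 for -inf, +1 for +inf, 0 otherwise */
}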
Now, what you might ask does this have to do with modules? Fun story. So
it turns out that libstdc++ actually uses __builtin_isinf_sign to
implement std::isinf when in C++98 mode, but only inside of a template.
So if we're lucky, and we never instantiate that, everything is good.
But once we try to instantiate that template function, we need this
builtin. All of my customers at least are using C++11 and so they never
hit this code path.
But what does that have to do with modules? Fun story. So it turns out
that with modules we actually observe a bunch of bugs in libstdc++ where
their <cmath> header clobbers things exposed by <math.h>. To fix these,
we have to provide global function definitions to replace the macros
that C99 would have used. And it turns out that ::isinf needs to be
implemented using the exact semantics used by the C++98 variant of
std::isinf. And so I started to fix this bug in libstdc++ and ceased to
be able to compile libstdc++ with Clang.
The yaks are legion.
llvm-svn: 232778
std::make_exception_ptr calls std::__GetExceptionInfo in order to figure
out how to properly copy the exception object.
Differential Revision: http://reviews.llvm.org/D8280
llvm-svn: 232188
Originally we used the same GCC builtins to lower this AVX2 vector
intrinsic. Instead, we now lower it directly to a vector shuffle.
This will not only allow LLVM to generate better code, but will also allow us
to remove the GCC intrinsics.
Reviewed by Andrea
This is related to rdar://problem/18742778.
llvm-svn: 231081
Only lower __builtin_setjmp / __builtin_longjmp to llvm.eh.sjlj.setjmp /
llvm.eh.sjlj.longjmp if the backend is known to support them outside the
exception-handling context. The default handling in LLVM codegen doesn't
work and will create incorrect code. The ARM backend, on the other hand,
will assert if the intrinsics are used.
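For context, a sketch of the builtins involved (GCC semantics: a
five-pointer buffer, and the value passed to __builtin_longjmp must be 1):
void *env[5];
int run(void) {
  if (__builtin_setjmp(env))
    return 1;                 /* reached via __builtin_longjmp */
  __builtin_longjmp(env, 1);  /* lowered to llvm.eh.sjlj.longjmp where supported */
  return 0;
}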
llvm-svn: 230255