This adds support for copy-constructor closures. These are generated
when the C++ runtime has to call a copy-constructor with a particular
calling convention or with default arguments substituted into the call.
Because the runtime has no mechanism to call the function with a
different calling convention and no way to evaluate the default
arguments at run time, we create a thunk which does all the
appropriate work and package it in a way the runtime can use.
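As an illustration (a hypothetical example, not taken from the patch),
consider a copy constructor that needs such a closure because of a
default argument:

  int defaultFlags();  // hypothetical helper evaluated by the thunk
  struct S {
    S(const S &other, int flags = defaultFlags());
  };
  // The runtime only knows how to invoke a closure of the form
  // void(S *dst, S *src), so the generated thunk evaluates
  // defaultFlags() and then calls the real copy constructor.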
Differential Revision: http://reviews.llvm.org/D8225
llvm-svn: 231952
It's slightly cheaper than copying: if the DebugLoc points to replaceable
metadata, every copy is recorded in a DenseMap, so moving reduces the peak
size of that map.
llvm-svn: 228492
__declspec(restrict) and __attribute__((malloc)) are both handled
identically by clang: they are lowered to the noalias LLVM attribute.
Seeing as noalias models the C99 notion of 'restrict', rename the
internal clang attribute from Malloc to Restrict.
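For reference, the two source-level spellings look like this
(hypothetical allocator names):

  #include <stddef.h>
  __declspec(restrict) void *pool_alloc(size_t n);      // MSVC spelling
  __attribute__((malloc)) void *arena_alloc(size_t n);  // GNU spelling
  // Both mark the returned pointer noalias in the generated LLVM IR.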
llvm-svn: 228120
Summary:
It was used for interoperability with PNaCl's calling conventions, but
it's no longer needed.
Also remove NaCl*ABIInfo, which just existed to delegate to either the portable
or native ABIInfo, and remove checkCallingConvention, which had become a no-op
override.
Reviewers: jvoung
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D7206
llvm-svn: 227362
This workaround was to provide unique call sites to ensure LLVM's inline
debug info handling would properly unique two calls to the same function
on the same line. Instead, this has now been fixed in LLVM (r226736) and
the workaround here can be removed.
Originally committed in r176895, but this isn't a straight revert due to
all the changes since then. I just searched for anything ForcedColumn*
related and removed them.
We could test this, but it didn't strike me as terribly valuable: once
we're no longer adding this workaround, everything just works as expected,
and it's no longer a special case to test for.
llvm-svn: 226738
The test was fixed after a discussion with the revision author: the check
pattern was made more flexible, as the "%call" part is not something we
actually want to match strictly there.
The original patch description:
===
Introduce SPIR calling conventions.
This implements Section 3.7 from the SPIR 1.2 spec:
SPIR kernels should use "spir_kernel" calling convention.
Non-kernel functions use "spir_func" calling convention. All
other calling conventions are disallowed.
The patch works only for OpenCL source. Any other uses will need
to ensure that kernels are assigned the spir_kernel calling
convention correctly.
===
llvm-svn: 226561
This implements Section 3.7 from the SPIR 1.2 spec:
SPIR kernels should use "spir_kernel" calling convention.
Non-kernel functions use "spir_func" calling convention. All
other calling conventions are disallowed.
The patch works only for OpenCL source. Any other uses will need
to ensure that kernels are assigned the spir_kernel calling
convention correctly.
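As a rough illustration of the intended mapping (a sketch, not taken
from the patch's tests):

  // OpenCL source:
  //   int helper(int x) { return x + 1; }
  //   kernel void k(global int *p) { *p = helper(*p); }
  //
  // Expected calling conventions in the generated IR:
  //   define spir_func i32 @helper(i32 %x) ...
  //   define spir_kernel void @k(i32 addrspace(1)* %p) ...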
llvm-svn: 226548
The code setting the debug location being removed here was accidentally
leaking a location into the call to the non-static data member's ctor.
Without it the call had no location and could cause assertion
failures if it was inlined. Now that it has a location (and a correct
one at that), this code should hopefully no longer be needed.
It's possible of course that other parts of the debug info are also
relying on the debug locations being set here to leak to where they're
needed - so we might see the same assertions again & will have to
investigate what the dependence was/is. But the chances are good that
any of those are debug info line table quality bugs we've just not found
yet anyway - so it'll be good to flush them out.
llvm-svn: 226383
PR22096 has several test cases that trigger the assertion and look fairly
different. I'm adding one of those as an automated test, but when relanding,
the other cases should probably be checked as well.
llvm-svn: 225361
r225000 generalized debug info line info handling for expressions such
that this code is no longer necessary.
This removes the last use of CGDebugInfo::getLocation, but not all the
uses of CGDebugInfo::CurLoc, which is still used internally in
CGDebugInfo. I'd like to do away with all of that & might succeed after
a few more patches.
llvm-svn: 225085
The extension has the following syntax:
__builtin_call_with_static_chain(Call, Chain)
where Call must be a function call expression and Chain must be of pointer type.
This extension performs a function call Call with a static chain pointer
Chain passed to the callee in a designated register. This is useful for
calling foreign language functions whose ABI uses static chain pointers
(e.g. to implement closures).
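A minimal sketch of a use (hypothetical names; the callee is assumed to be
generated by a compiler whose ABI passes the environment in the static
chain register):

  extern int foreign_entry(int x);  // foreign-ABI function expecting a static chain
  int call_closure(void *env, int x) {
    return __builtin_call_with_static_chain(foreign_entry(x), env);
  }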
Differential Revision: http://reviews.llvm.org/D6332
llvm-svn: 224167
Rather than adding attributes such as MinSize and then having OptimizeNone
remove them again, just don't add them in the
first place if the function already has OptimizeNone.
Note that MinSize can still appear due to attributes on different
declarations; a future patch will address that.
llvm-svn: 224047
Summary:
This change makes CodeGenFunction::EmitCheck() take several
conditions that need to be checked (all of them need to be true),
together with the sanitizer kinds these checks are for. This will allow
splitting one call into the UBSan runtime into several calls in case
different sanitizer kinds have different recoverability
settings.
Tests should be fixed accordingly; I'm working on it.
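The new shape of the interface is roughly the following (an
approximation, not the exact declaration):

  void CodeGenFunction::EmitCheck(
      ArrayRef<std::pair<llvm::Value *, SanitizerKind>> Checked,
      StringRef CheckName, ArrayRef<llvm::Constant *> StaticArgs,
      ArrayRef<llvm::Value *> DynamicArgs);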
Test Plan: regression test suite.
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6219
llvm-svn: 221716
Make sure CodeGenFunction::EmitCheck() knows which sanitizer
it emits a check for. Make the CheckRecoverableKind enum an
implementation detail and move it out of the header.
Currently CheckRecoverableKind is determined by the kind of
sanitizer ("unreachable" and "return" are unrecoverable,
"vptr" is always-recoverable, all the rest are recoverable).
This will change in the future if we allow specifying which sanitizers
are recoverable and which are not via the -fsanitize-recover= flag.
No functionality change.
llvm-svn: 221635
Use a bitmask to store the set of enabled sanitizers instead of a
bitfield. On the negative side, it makes the syntax for querying the
set of enabled sanitizers a bit clunkier. On the positive side, we
will be able to use SanitizerKind to eventually implement the
new semantics for the -fsanitize-recover= flag, which would allow us
to make some sanitizers recoverable and others non-recoverable.
No functionality change.
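For illustration, the query style changes roughly like this (approximate,
not the exact API):

  // before: one bitfield member per sanitizer
  if (SanOpts.Null) { ... }
  // after: a bitmask queried by SanitizerKind
  if (SanOpts.has(SanitizerKind::Null)) { ... }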
llvm-svn: 221558
The most complex aspect of the convention is the handling of homogeneous
vector and floating point aggregates. Reuse the homogeneous aggregate
classification code that we use on PPC64 and ARM for this.
This convention also has a C mangling, and we apparently implement that
in both Clang and LLVM.
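For instance, a homogeneous vector aggregate under __vectorcall might look
like this (an illustrative type, not taken from the patch):

  #include <immintrin.h>
  struct HVA2 { __m128 x, y; };  // two identical vector elements
  __m128 __vectorcall combine(struct HVA2 a, __m128 b);  // 'a' is eligible
                                                         // for vector registers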
Reviewed By: majnemer
Differential Revision: http://reviews.llvm.org/D6063
llvm-svn: 221006
Summary:
The Itanium ABI approach of using offset-to-top isn't possible with the
MS ABI; it doesn't have that kind of information lying around.
Instead, we do the following:
- Call the virtual deleting destructor with the "don't delete the object"
flag set. The virtual deleting destructor will return a pointer to
'this' adjusted to the most derived class.
- Call the global delete using the adjusted 'this' pointer.
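In pseudocode, the lowering looks roughly like this (a sketch, with a
hypothetical name for the compiler-generated destructor):

  // delete p;  where p is a Base * with a virtual destructor, becomes:
  void *most_derived = p->virtual_deleting_dtor(/*should_call_delete=*/0);
  ::operator delete(most_derived);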
Reviewers: rnk
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D5996
llvm-svn: 220993
Reuse the PPC64 HVA detection algorithm for ARM and AArch64. This is a
nice code deduplication, since they are roughly identical. A few virtual
method extension points are needed to understand how big an HVA can be
and what element types it can have for a given architecture.
Also make the record expansion code work in the presence of non-virtual
bases.
Reviewed By: uweigand, asl
Differential Revision: http://reviews.llvm.org/D6045
llvm-svn: 220972
SanitizerOptions is not even a POD now, so having a global variable of
this type is not nice. Instead, provide a regular constructor and clear()
method, and let each CodeGenFunction have its own copy of the
SanitizerOptions it uses.
llvm-svn: 220920
Wire it through everywhere we have support for fastcall, essentially.
This allows us to parse the MSVC "14" CTP headers, but we will
miscompile them because LLVM doesn't support __vectorcall yet.
Reviewed By: Aaron Ballman
Differential Revision: http://reviews.llvm.org/D5808
llvm-svn: 220573
Make it possible to pass NULL through variadic functions on 64-bit
Windows targets. The Visual C++ headers define NULL to 0, when they
should define it to 0LL on Win64 so that NULL is a pointer-sized
integer.
Fixes PR20949.
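The canonical problem case is the usual one (illustrative):

  #include <stdio.h>
  void f(void) {
    printf("%p\n", NULL);  // NULL expands to plain 0; the callee reads a
                           // 64-bit pointer slot, so the int must be widened
  }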
Reviewers: thakis, rsmith
Differential Revision: http://reviews.llvm.org/D5480
llvm-svn: 219456
This adds support for the align_value attribute. This attribute is supported by
Intel's compiler (versions 14.0+), and several of my HPC users have requested
support in Clang. It specifies an alignment assumption on the values to which a
pointer points, and is used by numerical libraries to encourage efficient
generation of vector code.
Of course, we already have an aligned attribute that can specify enhanced
alignment for a type, so why is this additional attribute important? The
problem is that if you want to specify that an input array of T is, say,
64-byte aligned, you could try this:
  typedef double aligned_double __attribute__((aligned(64)));

  void foo(aligned_double *P) {
    double x = P[0]; // This is fine.
    double y = P[1]; // What alignment did those doubles have again?
  }
The access here to P[1] causes problems. P was specified as a pointer to type
aligned_double, and any object of type aligned_double must be 64-byte aligned.
But if P[0] is 64-byte aligned, then P[1] cannot be, and this access causes
undefined behavior. Getting round this problem requires a lot of awkward
casting and hand-unrolling of loops, all of which is bad.
With the align_value attribute, we can accomplish what we'd like in a well
defined way:
  typedef double *aligned_double_ptr __attribute__((align_value(64)));

  void foo(aligned_double_ptr P) {
    double x = P[0]; // This is fine.
    double y = P[1]; // This is fine too.
  }
This attribute does not create a new type (and so is not part of the type
system), and so will only "propagate" through templates, auto, etc. by
optimizer deduction after inlining. This seems consistent with Intel's
implementation (thanks to Alexey for confirming the various Intel-compiler
behaviors).
As a final note, I would have chosen to call this aligned_value, not
align_value, for better naming consistency with the aligned attribute, but I
think it would be more useful to users to adopt Intel's name.
llvm-svn: 218910
This is the last piece of CGCall code that had implicit assumptions about
the order in which Clang arguments are translated to LLVM ones (positions
of the inalloca argument, sret, this, padding arguments, etc.). Now all of
this data is encapsulated in ClangToLLVMArgMapping. If this information
is ever required somewhere else, this class can be moved to a separate
header or pulled into CGFunctionInfo.
llvm-svn: 218634
Add a method to calculate the number of arguments a given QualType
expands to. Use this method in the ClangToLLVMArgMapping calculation.
This number may be cached in CodeGenTypes for efficiency, if needed.
llvm-svn: 218623
Hoist the logic which determines the way QualType is expanded
into a separate method. Remove a bunch of copy-paste and simplify
getTypesFromArgs() / ExpandTypeFromArgs() / ExpandTypeToArgs() methods.
llvm-svn: 218615
In addition to __builtin_assume_aligned, GCC also supports an assume_aligned
attribute which specifies the alignment (and optional offset) of a function's
return value. Here we implement support for the assume_aligned attribute by making
use of the @llvm.assume intrinsic.
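For example (hypothetical declarations):

  // The returned pointer is assumed to be 64-byte aligned:
  void *pool_alloc(int n) __attribute__((assume_aligned(64)));
  // ...or 64-byte aligned after adding an 8-byte offset:
  void *pool_alloc_hdr(int n) __attribute__((assume_aligned(64, 8)));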
llvm-svn: 218500
Summary:
This patch implements a new UBSan check, which verifies
that function arguments declared to be nonnull with __attribute__((nonnull))
are actually nonnull at runtime.
To implement this check, we pass the FunctionDecl to CodeGenFunction::EmitCallArgs
(where applicable), and if the function declaration has the nonnull attribute
specified for a certain formal parameter, we compare the corresponding RValue
to null as soon as it's calculated.
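A minimal sketch of the checked pattern (hypothetical function; the check
fires when the corresponding -fsanitize option is enabled):

  __attribute__((nonnull(1))) void consume(int *p);
  void caller(void) {
    consume(0);  // the argument is compared against null as soon as
                 // it is evaluated, before the call
  }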
Test Plan: regression test suite
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits, rnk
Differential Revision: http://reviews.llvm.org/D5082
llvm-svn: 217389
There were code paths that were duplicated for constructors and destructors just
because we have both CXXCtorType and CXXDtorType.
This patch introduces a unified enum and reduces code duplication a bit.
llvm-svn: 217383
For naked functions with parameters, Clang would still emit stores in the prologue
that would clobber the stack, because LLVM doesn't set up a stack frame. (This
shows up in -O0 compiles, because the stores are optimized away otherwise.)
For example:
  __attribute__((naked)) int f(int x) {
    asm("movl $42, %eax");
    asm("retl");
  }
Would result in:
  _Z1fi:
          movl    12(%esp), %eax
          movl    %eax, (%esp)    <--- Oops.
          movl    $42, %eax
          retl
Differential Revision: http://reviews.llvm.org/D5183
llvm-svn: 217198
This avoids encoding information about the function prototype into the
thunk at the cost of some function prototype bitcast gymnastics.
Fixes PR20653.
llvm-svn: 216782
Previously, EnterStructPointerForCoercedAccess used the alloc size when
determining how to convert. This was problematic because there were
situations where the alloc size was larger than the store size. For example,
if the first element of a structure were i24 and the destination type were
i32, the old code would generate a GEP and an i24 load. The code should
compare store sizes to ensure the whole object is loaded. I have attached
a test case.
This patch modifies the output of the arm64-be-bitfield.c test case, but the
new IR seems to be equivalent, and after -O3 the compiler generates identical
ARM assembly (asr x0, x0, #54).
Patch by Thomas Jablin!
llvm-svn: 216722
This tidies up some ARM-specific code added by r208417 to move it out
of the target-independent parts of clang into TargetInfo.cpp. This
also has the advantage that we can now flatten struct arguments to
variadic AAPCS functions.
llvm-svn: 216535
Summary:
This refactoring introduces the ClangToLLVMArgMapping class, which
encapsulates the information about the order in which function arguments listed
in CGFunctionInfo should be passed to the actual LLVM IR function, such as:
1) position of sret, if there is one;
2) position of the inalloca argument, if there is one;
3) position of the helper padding argument for each call argument;
4) positions of regular arguments (there can be many if the argument is expanded).
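A rough sketch of the resulting interface (approximate, not the exact
declaration):

  class ClangToLLVMArgMapping {
  public:
    unsigned getSRetArgNo() const;      // IR position of sret, if present
    unsigned getInallocaArgNo() const;  // IR position of inalloca, if present
    // First IR argument used by the ArgNo-th Clang argument, and how
    // many consecutive IR arguments it occupies:
    std::pair<unsigned, unsigned> getIRArgs(unsigned ArgNo) const;
  };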
Simplify several related methods (ConstructAttributeList, EmitFunctionProlog
and EmitCall): now they don't have to maintain iterators over the list
of LLVM IR function arguments or deal with all the sret/inalloca/this
complexities; they just use the expected positions of LLVM IR arguments
stored in ClangToLLVMArgMapping.
This may increase the running time of EmitFunctionProlog, as we have to traverse
expandable arguments twice, but in further refactoring we will be able
to speed up EmitCall by passing the already-calculated CallArgsToIRArgsMapping to
ConstructAttributeList, thus avoiding traversing expandable arguments there.
No functionality change.
Test Plan: regression test suite
Reviewers: majnemer, rnk
Reviewed By: rnk
Subscribers: cfe-commits, rjmccall, timurrrr
Differential Revision: http://reviews.llvm.org/D4938
llvm-svn: 216251