Instead of always storing all source locations for the selector identifiers,
we check whether all the identifiers are in a "standard" position; a "standard" position is one of the following:
- Immediately before the arguments: -(id)first:(int)x second:(int)y;
- With a space between the arguments: -(id)first: (int)x second: (int)y;
- For nullary selectors, immediately before ';': -(void)release;
In such cases we infer the locations instead of storing them.
llvm-svn: 140989
builtin types (when requested). This is another step toward making
ASTUnit build the ASTContext as needed when loading an AST file,
rather than doing so after the fact. No actual functionality change (yet).
llvm-svn: 138985
emit call results into potentially aliased slots. This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into a l-value. It also brings us into compliance with
the x86-64 ABI.
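A hedged C-style sketch of the pattern in question (hypothetical names): an aggregate call result assigned into an existing l-value, where the result is now emitted into a fresh temporary that can safely be marked noalias and then copied, instead of passing the destination itself as the indirect return slot.
struct Payload { long words[8]; };     // returned indirectly (sret) on x86-64
struct Payload make_payload(void);
struct Payload global_payload;
void update(void) {
  // The call result is built in a temporary noalias sret slot and then
  // memcpy'd into global_payload, rather than writing through a pointer to
  // global_payload that might alias memory the callee also touches.
  global_payload = make_payload();
}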
llvm-svn: 138599
A homogeneous aggregate is an aggregate data structure where after flattening
any nesting there are 1 to 4 elements of the same base type that is either a
float, double, or Neon vector. All Neon vectors of the same size, either 64
or 128 bits, are treated as equivalent for this purpose. When using the
AAPCS-VFP ABI, check for homogeneous aggregates and pass them as arguments by
expanding them into a sequence of their base types. This requires extending
the existing support for expanded arguments to handle not only structs, but
also constant arrays and complex types.
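A hedged sketch of the kind of type described (hypothetical names): Quad flattens to four floats, so it is a homogeneous aggregate and, under AAPCS-VFP, is passed expanded into its base elements rather than as an opaque block.
struct Pair { float lo, hi; };
struct Quad { struct Pair first, second; };  // flattens to four floats
// Under AAPCS-VFP this is passed roughly as sum(float, float, float, float),
// with the elements landing in VFP registers when available.
float sum(struct Quad q);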
llvm-svn: 136767
This is something of a hack; the problem is as follows:
1. We instantiate both copies of RetainPtr with two different argument types
(an id and a protocol-qualified id).
2. We refer to the ctor of one of the instantiations when introducing global "x";
this causes us to emit an llvm::Function for a prototype whose "this" has type
"RetainPtr<id<bork> >*".
3. We refer to the ctor of the other instantiation when introducing global "y",
however, because it *mangles to the same name as the other ctor* we just use
a bitcasted version of the llvm::Function we previously emitted.
4. We emit deferred declarations, causing us to emit the body of the ctor; however,
the body we emit is for RetainPtr<id>, which expects its 'this' to have an IR
type of "RetainPtr<id>*".
Because of the mangling collision, we don't have this case, and explode.
This is really some sort of weird AST invariant violation or something, but hey
a bitcast makes the pain go away.
llvm-svn: 135572
to prevent recursive compilation problems. This fixes a failure of CodeGen/decl.c
on x86-32 targets that don't fill in the coerce-to type.
llvm-svn: 135256
types. For example, we used to lower:
struct bar { int a; };
struct foo {
  void (*FP)(struct bar);
} G;
to:
%struct.foo = type { {}* }
since the function pointer would cause recursive translation of bar and
we didn't know if that would get us into trouble. We are now smart enough
to know that it is fine, so we get this type instead:
%struct.foo = type { void (i32)* }
Codegen still needs to be prepared for uncooperative types at any place,
which is why I let the maximally uncooperative code sit around for a while to
help shake out the bugs.
llvm-svn: 135244
it is a predicate, not an action. Change the return type to be a bool,
not the incomplete member. Enhance it to detect the recursive compilation
case, allowing us to compile Eli's testcase on llvmdev:
struct T {
  struct T (*p)(void);
} t;
into:
%struct.T = type { {}* }
@t = common global %struct.T zeroinitializer, align 8
llvm-svn: 134853
Language-design credit goes to a lot of people, but I particularly want
to single out Blaine Garst and Patrick Beard for their contributions.
Compiler implementation credit goes to Argyrios, Doug, Fariborz, and myself,
in no particular order.
llvm-svn: 133103
This patch tries relatively hard to avoid creating an extra copy if it can be avoided (see test3 in the included testcase), but it is not possible to avoid in some cases (like test2 in the included testcase).
rdar://9483886
llvm-svn: 132957
Type::isUnsignedIntegerOrEnumerationType(), which are like
Type::isSignedIntegerType() and Type::isUnsignedIntegerType() but also
consider the underlying type of a C++0x scoped enumeration type.
Audited all callers to the existing functions, switching those that
need to also handle scoped enumeration types (e.g., those that deal
with constant values) over to the new functions. Fixes PR9923 /
<rdar://problem/9447851>.
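A hedged C++0x illustration of the type the new predicates account for (hypothetical name): a scoped enumeration with an unsigned underlying type, which Type::isUnsignedIntegerType() does not report as unsigned but isUnsignedIntegerOrEnumerationType() does, by looking through to the underlying type.
enum class Mask : unsigned int {   // scoped enumeration, unsigned underlying type
  None = 0,
  All  = 0xFFFFFFFFu
};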
llvm-svn: 131735
AAPCS+VFP), similar to fastcall / stdcall / whatevercall seen on x86.
In particular, all library functions should always be AAPCS regardless of floating point ABI used.
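A hedged sketch of the distinction described, spelled with the GNU-style pcs attribute when targeting ARM (the function names are hypothetical): a library function stays on the base AAPCS convention even in a translation unit otherwise built for AAPCS-VFP.
extern double lib_sin(double x) __attribute__((pcs("aapcs")));
double use(double x) {
  return lib_sin(x);   // call emitted with the plain AAPCS convention
}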
llvm-svn: 129534
add support for the OpenCL __private, __local, __constant and
__global address spaces, as well as the __read_only, __read_write and
__write_only image access specifiers. Patch originally by ARM;
language-specific address space support by myself.
llvm-svn: 127915
Change the interface to expose the new information and deal with the enormous fallout.
Introduce the new ExceptionSpecificationType value EST_DynamicNone to more easily deal with empty throw specifications.
Update the tests for noexcept and fix the various bugs uncovered, such as lack of tentative parsing support.
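A brief C++ illustration of the two forms involved (hypothetical function names): an empty dynamic exception specification, now represented as EST_DynamicNone, and the C++0x noexcept specifier the updated tests cover.
void f_dynamic_none() throw();   // empty throw spec: EST_DynamicNone
void f_noexcept() noexcept;      // noexcept specification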
llvm-svn: 127537
simplify the logic of initializing function parameters so that we don't need
both a variable declaration and a type in FunctionArgList. This also means
that we need to propagate the CGFunctionInfo down in a lot of places rather
than recalculating it from the FAL. There's more we can do to eliminate
redundancy here, and I've left FIXMEs behind to do it.
llvm-svn: 127314
The prototype for objc_msgSend() is technically variadic -
`id objc_msgSend(id, SEL, ...)`.
But all method calls should use a prototype that matches the method,
not the prototype for objc_msgSend() itself.
// rdar://9048030
llvm-svn: 126754
update callers as best I can.
- This is a work in progress, our alignment handling is very horrible / sketchy -- I am just aiming for monotonic improvement.
- Serious review appreciated.
llvm-svn: 111707
The X86-64 ABI code didn't handle the case when a struct
would get classified and turn up as "NoClass INTEGER" for
example. This is perfectly possible when the first slot
is all padding (e.g. due to empty base classes). In this
situation, the first 8-byte doesn't take a register at all,
only the second 8-byte does.
This is fixed by enhancing the x86-64 ABI code to allow
and handle this case, reverting the broken fix for PR5831,
and enhancing the target-independent code to be able to
handle an argument value in registers being accessed at an
offset from the memory value.
This is the last x86-64 calling convention related miscompile
that I'm aware of.
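A hedged C++ sketch of the shape described, using an empty-class member array rather than an empty base (hypothetical names): the first eightbyte of S holds nothing but padding, so it should classify as NoClass, while the second eightbyte holding 'value' classifies as INTEGER and is the only one that takes a register.
struct Empty {};
struct S {
  Empty pad[8];   // bytes 0-7: empty subobjects only, i.e. padding
  long  value;    // bytes 8-15: the only real data
};
long get_value(S s) { return s.value; }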
llvm-svn: 109848
have a "coerce to" type which often matches the default lowering of Clang
type to LLVM IR type, but the coerce case can be handled by making them
not be the same.
This simplifies things and fixes issues where X86-64 abi lowering would
return coerce after making preferred types exactly match up. This caused
us to compile:
typedef float v4f32 __attribute__((__vector_size__(16)));
v4f32 foo(v4f32 X) {
  return X + X;
}
into this code at -O0:
define <4 x float> @foo(<4 x float> %X.coerce) nounwind {
entry:
%retval = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%coerce = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X.coerce, <4 x float>* %coerce
%X = load <4 x float>* %coerce ; <<4 x float>> [#uses=1]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
store <4 x float> %add, <4 x float>* %retval
%0 = load <4 x float>* %retval ; <<4 x float>> [#uses=1]
ret <4 x float> %0
}
Now we get:
define <4 x float> @foo(<4 x float> %X) nounwind {
entry:
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
ret <4 x float> %add
}
This implements rdar://8248065
llvm-svn: 109733
them as such. Type::is(Signed|Unsigned|)IntegerType() now return false
for vector types, and new functions
has(Signed|Unsigned|)IntegerRepresentation() cover integer types and
vector-of-integer types. This fixes a bunch of latent bugs.
Patch from Anton Yartsev!
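A hedged illustration of the kind of type affected (hypothetical names): a GCC-style vector of ints is no longer an "integer type" for Type::isIntegerType() and friends, but it does have an integer representation, which the new has(Signed|Unsigned|)IntegerRepresentation() predicates report.
typedef int v4si __attribute__((vector_size(16)));   // vector of four 32-bit ints
v4si twice(v4si v) { return v + v; }                  // element-wise integer arithmetic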
llvm-svn: 109229
whether to use objc_msgSend_fpret; the choice is target dependent, not Obj-C ABI
dependent.
- <rdar://problem/8139758> arm objc _objc_msgSend_fpret bug
llvm-svn: 108379
self-host. Hopefully these results hold up on different platforms.
I tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
Reimplement how clang generates IR for exceptions. Instead of creating new
invoke destinations which sequentially chain to the previous destination,
push a more semantic representation of *why* we need the cleanup/catch/filter
behavior, then collect that information into a single landing pad upon request.
Also reorganizes how normal cleanups (i.e. cleanups triggered by non-exceptional
control flow) are generated, since it's actually fairly closely tied in with
the former. Remove the need to track which cleanup scope a block is associated
with.
Document a lot of previously poorly-understood (by me, at least) behavior.
The new framework implements the Horrible Hack (tm), which requires every
landing pad to have a catch-all so that inlining will work. Clang no longer
requires the Horrible Hack just to make exceptions flow correctly within
a function, however. The HH is an unfortunate requirement of LLVM's EH IR.
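A small hedged C++ example of the situation the new scheme handles (hypothetical names): the exceptional edge out of mayThrow() needs both a cleanup (running ~Guard) and a catch handler; the framework records those reasons and then materializes a single landing pad that performs the cleanup before dispatching to the handler.
struct Guard { ~Guard() {} };          // cleanup on both normal and EH paths
void mayThrow() { throw 42; }
int run() {
  try {
    Guard guard;
    mayThrow();                        // invoke; its landing pad runs ~Guard,
    return 0;                          // then dispatches to the catch clause
  } catch (...) {
    return 1;
  }
}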
llvm-svn: 107631
alloca for an argument. Make sure the argument gets the proper
decl alignment, which may be different than the type alignment.
This fixes PR7567
llvm-svn: 107627
store make sure to move the debug metadata from the store (which is the actual
'return' statement location) to the return instruction (which otherwise would
have the function end location as its debug info).
- Tested by gdb test suite.
llvm-svn: 107322
r107173, "fix PR7519: after thrashing around and remembering how all this stuff"
r107216, "fix PR7523, which was caused by the ABI code calling ConvertType instead"
This includes a fix to make ConvertTypeForMem handle the "recursive" case, and call
it as such when lowering function types which have an indirect result.
llvm-svn: 107310
works, the fix is quite simple: just make sure to call ConvertTypeRecursive
when the function type being lowered is in the midst of ConvertType.
llvm-svn: 107173
It is somewhat annoying to do this at this level, but it avoids
having ABIInfo depend on CodeGenTypes for a hint.
Nothing is using this yet, so no functionality change.
llvm-svn: 107111
have CGF create and make accessible standard int32, int64 and
intptr types. This fixes a ton of 80 column violations
introduced by LLVMContextification and cleans up stuff a lot.
llvm-svn: 106977
load/store nonsense in the epilog. For example, for:
int foo(int X) {
  int A[100];
  return A[X];
}
we used to generate:
%arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
%tmp1 = load i32* %arrayidx ; <i32> [#uses=1]
store i32 %tmp1, i32* %retval
%0 = load i32* %retval ; <i32> [#uses=1]
ret i32 %0
}
which codegen'd to this code:
_foo: ## @foo
## BB#0: ## %entry
subq $408, %rsp ## imm = 0x198
movl %edi, 400(%rsp)
movl 400(%rsp), %edi
movslq %edi, %rax
movl (%rsp,%rax,4), %edi
movl %edi, 404(%rsp)
movl 404(%rsp), %eax
addq $408, %rsp ## imm = 0x198
ret
Now we generate:
%arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
%tmp1 = load i32* %arrayidx ; <i32> [#uses=1]
ret i32 %tmp1
}
and:
_foo: ## @foo
## BB#0: ## %entry
subq $408, %rsp ## imm = 0x198
movl %edi, 404(%rsp)
movl 404(%rsp), %edi
movslq %edi, %rax
movl (%rsp,%rax,4), %eax
addq $408, %rsp ## imm = 0x198
ret
This actually does matter, cutting out 2000 lines of IR from CGStmt.ll
for example.
Another interesting effect is that altivec.h functions which are dead
now get dce'd by the inliner. Hence all the changes to
builtins-ppc-altivec.c to ensure the calls aren't dead.
llvm-svn: 106970
isn't possible to compute.
This patch is mostly refactoring; the key change is the addition of the code
starting with the comment, "Check whether the function has a computable LLVM
signature." The solution here is essentially the same as the way the
vtable code handles such functions.
llvm-svn: 105151
This mirrors Dan's patch for llvm-gcc in r97989, and
fixes the miscompilation in PR6525. There is some contention
over whether this is the right thing to do, but it is the
conservative answer and demonstrably fixes a miscompilation.
llvm-svn: 101877
This introduces FunctionType::ExtInfo to hold the calling convention and the
noreturn attribute. The next patch will extend it to include the regparm
attribute and fix the bug.
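A hedged, x86-flavored illustration of the pieces of information described as travelling with the function type: the calling convention and noreturn now, with regparm as the piece the follow-up patch adds. The declaration itself is hypothetical.
__attribute__((noreturn, regparm(2)))
void fatal_error(const char *msg, int code);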
llvm-svn: 99920
a common source of oddities and, in theory, removes some redundant ABI
computations. Also fixes a miscompile I introduced yesterday by refactoring
some code and causing a slightly different code path to be taken that
didn't perform *parameter* type canonicalization, just normal type
canonicalization; this in turn caused a bit of ABI code to misfire because
it was looking for 'double' or 'float' but received 'const float'.
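A hedged sketch of the kind of signature that tripped the refactored path (hypothetical names): the canonical parameter type of 'factor' is plain 'float', but without parameter type canonicalization the ABI code saw 'const float' and no longer matched its check for 'float' or 'double'.
float scale(const float factor, float value) {
  return factor * value;
}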
llvm-svn: 97030
1) emit base destructors as aliases to their unique base class destructors
under some careful conditions. This is enabled for the same targets that can
support complete-to-base aliases, i.e. not darwin.
2) Emit non-variadic complete constructors for classes with no virtual bases
as calls to the base constructor. This is enabled on all targets and in
theory can trigger in situations that the alias optimization can't (mostly
involving virtual bases, mostly not yet supported).
These are bundled together because I didn't think it worthwhile to split them,
not because they really need to be.
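A hedged C++ sketch of where both can kick in (hypothetical class names): Derived has no virtual bases and adds nothing that needs destruction of its own.
struct Base {
  ~Base();        // non-trivial destructor, defined elsewhere
  int payload;
};
struct Derived : Base {
  Derived();      // no virtual bases, no members with destructors of their own
};
// (1) On targets that allow it, Derived's base-object destructor can be
//     emitted as an alias of Base's destructor.
// (2) Derived's complete constructor can be emitted as a plain call to its
//     base-object constructor, since there are no virtual bases to set up.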
llvm-svn: 96842
- This fixes many many more places than the test case, but my feeling is we need to audit alignment systematically so I'm not inclined to try hard to test the individual fixes in this patch. If this bothers you, patches welcome!
PR6240.
llvm-svn: 95648
follows (as conservatively as possible) gcc's current behavior: attributes
written on return types that don't apply there are applied to the function
instead, etc. Only parse CC attributes as type attributes, not as decl attributes;
don't accept noreturn as a decl attribute on ValueDecls, either (it still
needs to apply to other decls, like blocks). Consistently consume CC/noreturn
information throughout codegen; enforce this by removing their default values
in CodeGenTypes::getFunctionInfo().
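A hedged, x86-flavored example of the gcc-compatible behavior described (hypothetical function name): the calling-convention attribute is written where it syntactically belongs to the return type, but since it cannot apply to 'int' it is applied to the function type instead.
int __attribute__((stdcall)) compute(int a, int b);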
llvm-svn: 95436
directly into the sret pointer. This is an optimization in C, but is required
for correctness in C++ for classes with a non-trivial copy constructor.
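A hedged C++ sketch of why this is a correctness requirement rather than just an optimization (hypothetical class): Node's copy constructor maintains an internal self-pointer, so the return value must be constructed directly in the caller's sret slot; building it in a local temporary and copying the bytes afterwards would leave 'self' pointing at the dead temporary.
struct Node {
  Node() : self(this) {}
  Node(const Node &) : self(this) {}   // non-trivial copy constructor
  Node *self;                          // invariant: always points at this object
};
Node makeNode() {
  return Node();   // constructed straight into the sret pointer
}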
llvm-svn: 90526
Handle this by returning the llvm::OpaqueType for those cases, which CodeGenModule::GetOrCreateLLVMFunction knows about and treats as an "incomplete function".
llvm-svn: 89736
Type hierarchy. Demote 'volatile' to extended-qualifier status. Audit our
use of qualifiers and fix a few places that weren't dealing with qualifiers
quite right; many more remain.
llvm-svn: 82705
Several of the existing methods were identical to their respective
specializations, and so have been removed entirely. Several more 'leaf'
optimizations were introduced.
The getAsFoo() methods which imposed extra conditions, like
getAsObjCInterfacePointerType(), have been left in place.
llvm-svn: 82501
Remove ASTContext parameter from DeclContext's methods. This change cascaded down to other Decl's methods and changes to call sites started "escalating".
Timings using pre-tokenized "cocoa.h" showed only a ~1% increase in run time between before and after this commit.
llvm-svn: 74506
The implementations of these methods can use Decl::getASTContext() to get the ASTContext.
This commit touches a lot of files since call sites for these methods are everywhere.
I used pre-tokenized "carbon.h" and "cocoa.h" headers to do some timings, and there was no real time difference between before the commit and after it.
llvm-svn: 74501
function attributes. There are predefined macros that are defined when stack
protectors are used: __SSP__=1 with -fstack-protector and __SSP_ALL__=2 with
-fstack-protector-all.
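A hedged illustration of the predefined macros described; the function is hypothetical.
#if defined(__SSP_ALL__)          // defined to 2 by -fstack-protector-all
const char *ssp_mode() { return "all functions protected"; }
#elif defined(__SSP__)            // defined to 1 by -fstack-protector
const char *ssp_mode() { return "selected functions protected"; }
#else
const char *ssp_mode() { return "stack protector disabled"; }
#endif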
llvm-svn: 74405
when generating a coercion for ABI handling purposes.
- This may only manifest itself when building at -O0, but the practical effect
is that other arguments may get clobbered.
- <rdar://problem/6930451> [irgen] ABI coercion clobbers other arguments
llvm-svn: 72932
thing for non-aggregate types.
- Otherwise we unnecessarily pin values to the stack and currently end up
triggering a backend bug in one case.
- This loose cooperation with LLVM to implement the ABI is pretty ugly.
- <rdar://problem/6918722> [irgen] clang miscompile of many pointer varargs on
x86-64
llvm-svn: 72419
coercion to be specified which truncates padding bits. It would be
nice to still have the assert, but we don't have any API call for the
unpadded size of a type yet.
llvm-svn: 71695
to use a wide enough type. This might be wider than the "single
element"'s type in the presence of padding bit-fields.
- Darwin x86_32 now passes the first 1k ABI tests with bit-field
generation enabled.
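A hedged sketch of the "single element" case with padding bit-fields (hypothetical name): the struct's only named member is a char, but the unnamed bit-field pads it out beyond that member, so the coercion type chosen when passing or returning it must be wider than the element's own type.
struct Padded {
  char value;    // the "single element"
  int  : 24;     // padding bit-field widens the struct beyond 'value'
};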
llvm-svn: 71270
compatible with VC++ and GCC. The codegen/mangling angle hasn't
been fully ironed out yet. Note that we accept int128_t even in
32-bit mode, unlike gcc.
llvm-svn: 70464
- Small structures are returned in a register if:
1. They fit nicely in a register.
2. All fields fit nicely in a register.
(more or less)
- We now pass the first 5000 ABITests if unions are disabled.
- <rdar://problem/6497882> [irgen] x86-32 ABI compatibility with
small structs
llvm-svn: 68197
element structures", which have different ABI rules.
- Current return-arguments-32 status is: 1 out of 1000 failures (-7)
- Also, vectors inside "single element structs" require special
handling.
llvm-svn: 68196