This takes some trickery since CastExpr has subclasses (and indeed,
is abstract).
Also, smoosh the CastKind into the bitfield from Expr.
Drops two words of storage from Expr in the common case of expressions
which don't need inheritance paths. Avoids a separate allocation and
another word of overhead in cases needing inheritance paths. Also has
the advantage of not leaking memory, since destructors for AST nodes are
never run.
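A minimal sketch of the packing idea (member names and bit widths invented here, not clang's actual layout):

  class Expr {
    unsigned TypeDependent : 1;   // existing Expr flag bits...
    unsigned ValueDependent : 1;
    unsigned CastKind : 6;        // ...now share the word with the cast kind,
                                  // which is meaningful only for CastExpr nodes
  };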
llvm-svn: 110507
enclosing normal cleanup, not the top of the EH stack. I'm *really*
surprised this hasn't been causing more problems.
Fixes rdar://problem/8231514.
llvm-svn: 109569
initializer of (). Make sure to use a simple memset() when we can, or
fall back to generating a loop when a simple memset will not
suffice. Fixes <rdar://problem/8212208>, a regression due to my work
in r107857.
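For illustration (not from the original message), the two paths look roughly like this:

  int *p = new int[8]();      // trivial element type: a single memset of zero
  struct S { S() {} int x; };
  S *q = new S[8]();          // memset cannot express this: emit a loop instead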
llvm-svn: 108977
which generates more efficient and more obviously conformant
code. We now test for overflow of the multiply, then force
the result to -1 if so. On X86, this generates nice code
like this:
__Z4testl: ## @_Z4testl
## BB#0: ## %entry
subl $12, %esp
movl $4, %eax
mull 16(%esp)
testl %edx, %edx
movl $-1, %ecx
cmovel %eax, %ecx
movl %ecx, (%esp)
call __Znam
addl $12, %esp
ret
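In source form, the emitted check is equivalent to this sketch (written with today's __builtin_mul_overflow for exposition; the actual codegen emits the multiply and overflow test directly):

  #include <cstddef>
  #include <new>

  void *test(unsigned long n) {
    std::size_t bytes;
    if (__builtin_mul_overflow(n, sizeof(int), &bytes))
      bytes = (std::size_t)-1;     // the cmovel of -1 in the assembly above
    return operator new[](bytes);  // the call to __Znam
  }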
llvm-svn: 108927
causing clang to compile this code into something that correctly throws a
length error, fixing a potential integer overflow security attack (here,
(1L << 62) * sizeof(int) wraps around to zero, so the unchecked code would
allocate far too little memory):
void *test(long N) {
  return new int[N];
}
int main() {
  test(1L << 62);
}
We do this even when exceptions are disabled, because it is better for the
code to abort than for the attack to succeed.
This is heavily based on a patch that Fariborz wrote.
llvm-svn: 108915
mostly in avoiding unnecessary work at compile time but also in producing more
sensible block orderings.
Move the destructor cleanups for local variables over to use lazy cleanups.
Eventually all cleanups will do this; for now we have some awkward code
duplication.
Tell IR generation just to never produce landing pads in -fno-exceptions.
This is a much more comprehensive solution to a problem which previously was
half-solved by checks in most cleanup-generation spots.
llvm-svn: 108270
emit metadata associating allocas and global values with a Decl*. This feature
is controlled by an option that (intentionally) cannot be enabled on the command
line.
To use this feature, simply set
CodeGenOptions.EmitDeclMetadata = true;
and then interpret the completely underspecified metadata. :)
llvm-svn: 107739
self-host. Hopefully these results hold up on different platforms.
I tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
Reimplement how clang generates IR for exceptions. Instead of creating new
invoke destinations which sequentially chain to the previous destination,
push a more semantic representation of *why* we need the cleanup/catch/filter
behavior, then collect that information into a single landing pad upon request.
Also reorganizes how normal cleanups (i.e. cleanups triggered by non-exceptional
control flow) are generated, since it's actually fairly closely tied in with
the former. Remove the need to track which cleanup scope a block is associated
with.
Document a lot of previously poorly-understood (by me, at least) behavior.
The new framework implements the Horrible Hack (tm), which requires every
landing pad to have a catch-all so that inlining will work. Clang no longer
requires the Horrible Hack just to make exceptions flow correctly within
a function, however. The HH is an unfortunate requirement of LLVM's EH IR.
llvm-svn: 107631
alloca for an argument. Make sure the argument gets the proper
decl alignment, which may be different than the type alignment.
This fixes PR7567
llvm-svn: 107627
have CGF create and make accessible standard int32, int64, and
intptr types. This fixes a ton of 80 column violations
introduced by LLVMContextification and cleans up stuff a lot.
llvm-svn: 106977
load/store nonsense in the epilog. For example, for:
int foo(int X) {
  int A[100];
  return A[X];
}
we used to generate:
%arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
%tmp1 = load i32* %arrayidx ; <i32> [#uses=1]
store i32 %tmp1, i32* %retval
%0 = load i32* %retval ; <i32> [#uses=1]
ret i32 %0
}
which codegen'd to this code:
_foo: ## @foo
## BB#0: ## %entry
subq $408, %rsp ## imm = 0x198
movl %edi, 400(%rsp)
movl 400(%rsp), %edi
movslq %edi, %rax
movl (%rsp,%rax,4), %edi
movl %edi, 404(%rsp)
movl 404(%rsp), %eax
addq $408, %rsp ## imm = 0x198
ret
Now we generate:
%arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
%tmp1 = load i32* %arrayidx ; <i32> [#uses=1]
ret i32 %tmp1
}
and:
_foo: ## @foo
## BB#0: ## %entry
subq $408, %rsp ## imm = 0x198
movl %edi, 404(%rsp)
movl 404(%rsp), %edi
movslq %edi, %rax
movl (%rsp,%rax,4), %eax
addq $408, %rsp ## imm = 0x198
ret
This actually does matter, cutting out 2000 lines of IR from CGStmt.ll
for example.
Another interesting effect is that altivec.h functions which are dead
now get dce'd by the inliner. Hence all the changes to
builtins-ppc-altivec.c to ensure the calls aren't dead.
llvm-svn: 106970
'self' variable arising from uses of the 'super' keyword. Also reorganize
some code so that BlockInfo (now CGBlockInfo) can be opaque outside of
CGBlocks.cpp.
Fixes rdar://problem/8010633.
llvm-svn: 104312
__cxa_guard_abort along the exceptional edge into (in effect) a nested
"try" that rethrows after aborting. Fixes PR7144 and the remaining
Boost.ProgramOptions failures, along with the regressions that r103880
caused.
The crucial difference between this and r103880 is that we now follow
LLVM's little dance with the llvm.eh.exception and llvm.eh.selector
calls, then use _Unwind_Resume_or_Rethrow to rethrow.
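For reference, the scenario being fixed is a throwing initializer for a guarded local static, e.g.:

  struct Widget { Widget(); };  // constructor may throw

  Widget &instance() {
    static Widget w;  // if Widget::Widget() throws, the exceptional edge must
                      // call __cxa_guard_abort() before rethrowing, so a later
                      // call can retry the initialization
    return w;
  }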
llvm-svn: 103892
__cxa_guard_abort along the exceptional edge into (in effect) a nested
"try" that rethrows after aborting. Fixes PR7144 and the remaining
Boost.ProgramOptions failures.
llvm-svn: 103880
implicitly-generated copy constructor. Previously, Sema would perform
some checking and instantiation to determine which copy constructors,
etc., would be called, then CodeGen would attempt to figure out which
copy constructor to call... but would get it wrong, or poke at an
uninstantiated default argument, or fail in other ways.
The new scheme is similar to what we now do for the implicit
copy-assignment operator, where Sema performs all of the semantic
analysis and builds specific ASTs that look similar to the ASTs we'd
get from explicitly writing the copy constructor, so that CodeGen need
only do a direct translation.
However, it's not quite that simple because one cannot explicitly write
elementwise copy-construction of an array. So, I've extended
CXXBaseOrMemberInitializer to contain a list of indexing variables
used to copy-construct the elements. For example, if we have:
struct A { A(const A&); };
struct B {
  A array[2][3];
};
then we generate an implicit copy constructor for B that looks
something like this:
B::B(const B &other) : array[i0][i1](other.array[i0][i1]) { }
CodeGen will loop over the invented variables i0 and i1 to visit all
elements in the array, so that each element in the destination array
will be copy-constructed from the corresponding element in the source
array. Of course, if we're dealing with arrays of scalars or class
types with trivial copy constructors, we just generate a
memcpy rather than a loop.
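Concretely, for the example above, the emitted constructor behaves like this sketch (placement new is used here only as source-level notation for copy-constructing an element):

  #include <new>

  B::B(const B &other) {
    for (unsigned i0 = 0; i0 != 2; ++i0)
      for (unsigned i1 = 0; i1 != 3; ++i1)
        new (&array[i0][i1]) A(other.array[i0][i1]);
  }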
Fixes PR6928, PR5989, and PR6887. Boost.Regex now compiles and passes
all of its regression tests.
Conspicuously missing from this patch is handling for the exceptional
case, where we need to destruct those objects that we have
constructed. I'll address that case separately.
llvm-svn: 103079
assignment operators.
Previously, Sema provided type-checking and template instantiation for
copy assignment operators, then CodeGen would synthesize the actual
body of the copy-assignment operator. Unfortunately, the two were not in sync,
and CodeGen might pick a copy-assignment operator that is different
from what Sema chose, leading to strange failures, e.g., link-time
failures when CodeGen called a copy-assignment operator that was not
instantiated, run-time failures when copy-assignment operators were
overloaded for const/non-const references and the wrong one was
picked, and run-time failures when by-value copy-assignment operators
did not have their arguments properly copy-initialized.
This implementation synthesizes the implicitly-defined copy assignment
operator bodies in Sema, so that the resulting ASTs encode exactly
what CodeGen needs to do; there is no longer any special code in
CodeGen to synthesize copy-assignment operators. The synthesis of the
body is relatively simple, and we generate one of four different
kinds of copy statements for each base or member:
- For a class subobject, call the appropriate copy-assignment
operator, after overload resolution has determined what that is.
- For an array of scalar types or an array of class types that have
trivial copy assignment operators, construct a call to
__builtin_memcpy.
- For an array of class types with non-trivial copy assignment
operators, synthesize a (possibly nested!) for loop whose inner
statement calls the copy-assignment operator (see the sketch below).
- For a scalar type, use built-in assignment.
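A sketch of the kind of body Sema now synthesizes, for a hypothetical class exercising both the memcpy and loop cases:

  struct Inner { Inner &operator=(const Inner&); };
  struct Outer {
    int scalars[4];
    Inner inners[2];
    // the implicit Outer &operator=(const Outer &other) behaves as if written:
    //   __builtin_memcpy(scalars, other.scalars, sizeof(scalars));
    //   for (unsigned i0 = 0; i0 != 2; ++i0)
    //     inners[i0] = other.inners[i0];
    //   return *this;
  };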
This patch fixes at least a few test cases in Boost.Spirit that were
failing because CodeGen picked the wrong copy-assignment operator
(leading to link-time failures), and I suspect a number of undiagnosed
problems will also go away with this change.
Some of the diagnostics we had previously have gotten worse with this
change, since we're going through generic code for our
type-checking. I will improve this in a subsequent patch.
llvm-svn: 102853
in a throw expression. Use EmitAnyExprToMem to emit the throw expression,
which magically elides the final copy-constructor call (which raises a new
strict-compliance bug, but baby steps). Give __cxa_throw a destructor pointer
if the exception type has a non-trivial destructor.
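For reference, the Itanium ABI entry point involved; the third parameter is the destructor pointer mentioned above (null when the exception type is trivially destructible):

  #include <typeinfo>

  extern "C" void __cxa_throw(void *thrown_exception,
                              std::type_info *tinfo,
                              void (*dest)(void *));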
llvm-svn: 102039
of the block descriptor field. This field is the ObjC style @encode
signature of the implementation function, and was to this point
conditionally provided in the block literal data structure. That
provisional support is removed.
Additionally, eliminate unused enumerations for the block literal flags field.
The first shipping ABI unconditionally set (1<<29) but this bit is unused
by the runtime, so the second ABI will unconditionally have (1<<30) set so
that the runtime can in fact distinguish whether the additional data is
present or not.
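A sketch of the flag layout just described (constant names invented here):

  enum BlockLiteralFlags {
    BLOCK_ABI1_ALWAYS_SET = (1 << 29),  // set unconditionally by the first ABI,
                                        // but never consulted by the runtime
    BLOCK_HAS_EXTRA_DATA  = (1 << 30),  // set unconditionally by the second ABI so
                                        // the runtime can detect the signature field
  };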
llvm-svn: 96989
1) emit base destructors as aliases to their unique base class destructors
under some careful conditions. This is enabled for the same targets that can
support complete-to-base aliases, i.e. not darwin.
2) Emit non-variadic complete constructors for classes with no virtual bases
as calls to the base constructor. This is enabled on all targets and in
theory can trigger in situations that the alias optimization can't (mostly
involving virtual bases, mostly not yet supported).
These are bundled together because I didn't think it worthwhile to split them,
not because they really need to be.
llvm-svn: 96842
Fix some bugs with function-try-blocks and simplify normal try-block
code generation.
This implementation excludes a deleting destructor's call to
operator delete() from the function-try-block, which I believe
is correct but which I can't find straightforward support for at
a moment's glance.
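For reference, the construct in question on a destructor:

  struct T {
    ~T() try {
      // destructor body
    } catch (...) {
      // handler; the deleting destructor's call to operator delete() is
      // emitted outside this try, per the behavior described above
    }
  };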
llvm-svn: 96670
repeatedly reloading from an alloca. We still need to create the alloca
for debug info purposes (although we currently create it in all cases
because of some abstraction boundaries that are hard to break down).
llvm-svn: 96403
the offset to the virtual bases statically instead of relying on the virtual
base offsets in the object's vtable(s). This is both more efficient and
sound against the destructor's manipulation of the vtables.
Also extract a few helper routines.
Oh and we seem to pass all tests with an optimized clang now.
llvm-svn: 96327
- This fixes many, many more places than the test case, but my feeling is we need to audit alignment systematically, so I'm not inclined to try hard to test the individual fixes in this patch. If this bothers you, patches welcome!
PR6240.
llvm-svn: 95648
"ASTContext::getTypeSize() / 8". Replace [u]int64_t variables with CharUnits
ones as appropriate.
Also rename RawType, fromRaw(), and getRaw() in CharUnits to QuantityType,
fromQuantity(), and getQuantity() for clarity.
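The renamed interface in use, as a quick sketch:

  CharUnits Align = CharUnits::fromQuantity(8);       // build from a raw byte count
  CharUnits::QuantityType Raw = Align.getQuantity();  // extract the raw byte count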
llvm-svn: 93153
the constructor. This doesn't handle cases requiring the VTT at the moment,
and generates unnecessary stores, but I think it's essentially correct.
llvm-svn: 91731
This implements a new flag -fcatch-undefined-behavior. The flag turns
on additional runtime checks for:
  T a[I];
a[i] aborts when i < 0 or i >= I.
Future stuff includes shifts by >= bitwidth amounts.
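For illustration, the kind of access the new checks catch:

  int a[10];
  int get(int i) { return a[i]; }  // compiled with -fcatch-undefined-behavior,
                                   // get(-1) or get(10) aborts at runtime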
llvm-svn: 91198
directly into the sret pointer. This is an optimization in C, but is required
for correctness in C++ for classes with a non-trivial copy constructor.
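A sketch of the C++ case that requires this (names invented):

  struct Big { int data[16]; Big(); Big(const Big&); };
  Big make();
  Big b = make();  // make()'s result must be constructed directly in b's
                   // storage; copying out of a temporary afterwards would
                   // invoke the (observable) copy constructor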
llvm-svn: 90526
Highlights include:
Add a helper to generate __cxa_free_exception and _ZSt9terminatev.
Add a region to handle EH object deallocation for ctor failures for throw.
Add a terminate handler for __cxa_end_catch.
A framework for adding cleanup actions for the exceptional edges only.
llvm-svn: 90305
All statements that involve conditions can now hold on to a separate
condition declaration (a VarDecl), and will use a DeclRefExpr
referring to that VarDecl for the condition expression. ForStmts now
have such a VarDecl (I'd missed those in previous commits).
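For reference, a condition declaration in a for statement:

  void walk(int limit) {
    for (int i = 0; bool more = i < limit; ++i) {
      (void)more;  // "more" is the condition's VarDecl; the condition itself
                   // is now a DeclRefExpr referring to it
    }
  }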
Also, since this change reworks the Action interface for
if/while/switch/for, use FullExprArg for the full expressions in those
statements, to ensure that we're emitting the cleanups for temporaries
created within those conditions.
Note that we are (still) not generating the right cleanups for
condition variables in for statements. That will be a follow-on
commit.
llvm-svn: 89817
cleanups for while loops:
1) Make sure that we destroy the condition variable of a while statement each time through the loop for, e.g.,
while (shared_ptr<WorkInt> p = getWorkItem()) {
  // ...
}
2) Make sure that we always enter a new cleanup scope for the body of the while loop, even when there is no compound statement, e.g.,
while (blah)
  RAIIObject raii(blah+1);
llvm-svn: 89800
- Outside the "if", to ensure that we destroy the condition variable
at the end of the "if" statement rather than at the end of the
block containing the "if" statement.
- Inside the "then" and "else" branches, so that we emit then- or
else-local cleanups at the end of the corresponding block when the
block is not a compound statement.
To make adding these new cleanup scopes easier (and since
switch/do/while will all need the same treatment), added the
CleanupScope RAII object to introduce a new cleanup scope and make
sure it gets cleaned up.
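A minimal sketch of the intended usage pattern (exact constructor signature assumed):

  {
    CleanupScope ThenScope(*this);  // enter a new cleanup scope
    EmitStmt(S.getThen());          // then-local cleanups register here
  }                                 // ~CleanupScope() pops the scope and emits them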
llvm-svn: 89773
using the new LLVM support for this. This is temporarily hiding
behind horrible and ugly #ifdefs until the time when the optimizer
is stable (hopefully a week or so). Until then, let's make it "opt in" :)
llvm-svn: 85446
qualified reference to a declaration that is not a non-static data
member or non-static member function, e.g.,
namespace N { int i; }
int j = N::i;
Instead, extend DeclRefExpr to optionally store the qualifier. Most
clients won't see or care about the difference (since
QualifiedDeclRefExpr inherited from DeclRefExpr). However, this reduces the
number of top-level expression types that clients need to cope with,
brings the implementation of DeclRefExpr into line with MemberExpr,
and simplifies and unifies our handling of declaration references.
Extended DeclRefExpr to (optionally) store explicitly-specified
template arguments. This occurs when naming a declaration via a
template-id (which will be stored in a TemplateIdRefExpr) that,
following template argument deduction and (possibly) overload
resolution, is replaced with a DeclRefExpr that refers to a template
specialization but maintains the template arguments as written.
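For example:

  template<typename T> T pick(T, T);
  int (*fp)(int, int) = &pick<int>;  // after deduction, a DeclRefExpr to the
                                     // specialization pick<int> that still
                                     // records the written arguments <int>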
llvm-svn: 84962
1. CGF now has fewer bytes of state (one pointer instead of a vector).
2. The generated code is deterministic, instead of getting labels in
'map order' based on pointer addresses.
3. Clang now emits one 'indirect goto switch' for each function, instead
of one for each indirect goto. This fixes an M*N = N^2 IR size issue
when there are lots of address-taken labels and lots of indirect gotos.
4. This also makes the default case do something useful, reducing the
size of the jump table needed (by one).
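For reference, the construct involved (a GNU extension):

  int dispatch(int i) {
    static void *targets[] = { &&one, &&two };  // address-taken labels
    goto *targets[i];  // an indirect goto; all such gotos in a function
                       // now share a single dispatch switch
  one:
    return 1;
  two:
    return 2;
  }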
llvm-svn: 83952
Type hierarchy. Demote 'volatile' to extended-qualifier status. Audit our
use of qualifiers and fix a few places that weren't dealing with qualifiers
quite right; many more remain.
llvm-svn: 82705
consistent model for handling size expressions for VLAs.
The model is essentially as follows: VLA types own their associated
expression. In some cases, we need to create multiple VLA types to
represent a given VLA (for canonical types, or qualifiers on array types,
or type merging). If we need to create multiple types based off of
the same VLA declaration, we use the new refcounting functionality so they can
all own the expression. The VLASizeMap in CodeGenFunction then uses the size
expression to identify the group of VLA types based off of the same original
declaration.
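An illustrative case (not from the original message; VLAs in C++ are a GNU extension that clang supports):

  void f(int n) {
    typedef int INT;
    INT a[n];  // the sugared type INT[n] and its canonical type int[n] are
               // distinct VLA type nodes, and both must own (and now
               // refcount) the same size expression "n"
  }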
I'm not particularly attached to the VLA types owning the expression,
but we're stuck with it at least until someone comes up with a way
to walk the VLA expressions for a declaration.
I did the parallel fix in ASTContext for DependentSizedArrayType, but I
haven't really looked closely at it, so there might still be issues
there.
I'll clean up the code duplication in ASTContext in a followup commit.
llvm-svn: 79071