self-host. Hopefully these results hold up on different platforms.
I tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
Reimplement how clang generates IR for exceptions. Instead of creating new
invoke destinations which sequentially chain to the previous destination,
push a more semantic representation of *why* we need the cleanup/catch/filter
behavior, then collect that information into a single landing pad upon request.
Also reorganize how normal cleanups (i.e. cleanups triggered by non-exceptional
control flow) are generated, since that is fairly closely tied in with
the former. Remove the need to track which cleanup scope a block is associated
with.
Document a lot of previously poorly-understood (by me, at least) behavior.
The new framework implements the Horrible Hack (tm), which requires every
landing pad to have a catch-all so that inlining will work. Clang no longer
requires the Horrible Hack just to make exceptions flow correctly within
a function, however. The HH is an unfortunate requirement of LLVM's EH IR.
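As a source-level illustration (a sketch, not taken from the commit itself): in the
function below, one cleanup must run on both the normal early-return path and the
exceptional path out of a call, so IRGen records why the cleanup is needed and only
materializes a landing pad when an invoke actually requires one. The helper name is
made up.

#include <mutex>
#include <stdexcept>

// Hypothetical helper, for illustration only.
void may_throw(bool fail) {
  if (fail)
    throw std::runtime_error("boom");
}

void locked_work(std::mutex &m, bool skip, bool fail) {
  std::lock_guard<std::mutex> guard(m);  // cleanup: ~lock_guard must run
  if (skip)
    return;          // normal (non-exceptional) exit reaches the cleanup
  may_throw(fail);   // an exceptional exit must reach the same cleanup
}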
llvm-svn: 107631
have CGF create and make accessible standard int32, int64, and
intptr types. This fixes a ton of 80-column violations
introduced by LLVMContextification and cleans things up considerably.
llvm-svn: 106977
This works around a crash where malloc reused the memory of an erased BB for a
new BB, leaving old cleanup information pointing at the new block.
llvm-svn: 104472
return statements. We perform NRVO only when all of the return
statements in the function return the same variable. Fixes some link
failures in Boost.Interprocess (which relies on NRVO), and
probably improves performance for some C++ applications.
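To illustrate the rule (a sketch, not taken from the commit or its tests): NRVO is
performed for the first function below, where every return names the same local, but
not for the second, where different locals are returned.

#include <string>

// All returns name 'result', so it can be constructed directly in the
// caller's return slot.
std::string make_greeting(bool formal) {
  std::string result;
  if (formal)
    result = "Good day";
  else
    result = "Hi";
  return result;
}

// Different returns name different locals, so this NRVO check rejects it.
std::string pick(bool flag) {
  std::string a = "a";
  std::string b = "b";
  if (flag)
    return a;
  return b;
}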
llvm-svn: 103867
input and output types when the smaller value isn't mentioned in the
asm string. Extend this support from integers to also allow
fp values to be mismatched (if not mentioned in the asm string).
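A hedged, x86-only sketch of the situation (the function and asm below are
illustrative, not the commit's test case): the tied operands have different
floating-point widths, but neither operand appears in the asm template text, so no
size suffix can conflict and the mismatch is tolerated.

// x86 only; illustrative. 'frndint' operates implicitly on st(0), so the
// template never names %0, and the long double/double tie is accepted.
double round_nearby(double x) {
  long double result;
  __asm__ __volatile__("frndint" : "=t"(result) : "0"(x));
  return static_cast<double>(result);
}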
llvm-svn: 102188
(if there's a current block). The chief advantage of doing this is that it
lets us pick blocks (e.g. EH blocks) to push to the end of the function so
that fallthrough happens consistently --- i.e. it gives us the flexibility
of ordering blocks as we please without having to change the order in which
we generate code. There are standard (?) optimization passes which can do some
of that for us, but better to generate reasonable code to begin with.
llvm-svn: 101997
have the code generator slap srcloc metadata on inline asm nodes.
This allows us to diagnose invalid inline asms with such nice
diagnostics as:
<inline asm>:1:2: error: unrecognized instruction
abc incl %eax
^
asm.c:2:12: note: generated from here
__asm__ ("abc incl %0" : "+r" (X));
^
2 diagnostics generated.
llvm-svn: 100608
EmitReferenceBindingToExpr() rather than assuming we have an
lvalue. This is just the lowest hanging fruit for PR6024, which still
requires a bit of work.
llvm-svn: 99447
need to deal with aggregates specially; this is consistent with the rest of IRgen.
Also, simplify EmitParmDecl and don't worry about using Decl::getNameAsString.
llvm-svn: 95393
All statements that involve conditions can now hold on to a separate
condition declaration (a VarDecl), and will use a DeclRefExpr
referring to that VarDecl for the condition expression. ForStmts now
have such a VarDecl (I'd missed those in previous commits).
Also, since this change reworks the Action interface for
if/while/switch/for, use FullExprArg for the full expressions in those
statements, to ensure that we're emitting the cleanups for any
temporaries in those conditions at the right point.
Note that we are (still) not generating the right cleanups for
condition variables in for statements. That will be a follow-on
commit.
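For reference, this is the shape of for statement the new VarDecl covers (an
illustrative sketch with made-up names): the condition variable is re-evaluated on
each iteration and, per the note above, its cleanup handling is still being finished.

#include <memory>

struct Work {
  void run() {}
};

// Hypothetical producer; returns null to end the loop.
std::shared_ptr<Work> next_item() { return nullptr; }

void drain() {
  // The for-condition declares a VarDecl ('p'); a DeclRefExpr to it is
  // the condition that gets tested each time around the loop.
  for (; std::shared_ptr<Work> p = next_item(); )
    p->run();
}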
llvm-svn: 89817
cleanups for while loops:
1) Make sure that we destroy the condition variable of a while statement each time through the loop, e.g., for
while (shared_ptr<WorkInt> p = getWorkItem()) {
  // ...
}
2) Make sure that we always enter a new cleanup scope for the body of the while loop, even when there is no compound statement, e.g.,
while (blah)
  RAIIObject raii(blah+1);
llvm-svn: 89800
- Outside the "if", to ensure that we destroy the condition variable
at the end of the "if" statement rather than at the end of the
block containing the "if" statement.
- Inside the "then" and "else" branches, so that we emit then- or
else-local cleanups at the end of the corresponding block when the
block is not a compound statement.
To make adding these new cleanup scopes easier (and since
switch/do/while will all need the same treatment), added the
CleanupScope RAII object to introduce a new cleanup scope and make
sure it gets cleaned up.
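Putting the two scopes together in a sketch (echoing the RAIIObject/getWorkItem
names used in the examples above; this code is not from the commit):

#include <memory>

struct RAIIObject {
  explicit RAIIObject(int) {}
  ~RAIIObject() {}
};

std::shared_ptr<int> getWorkItem() { return std::make_shared<int>(1); }

void f(int blah) {
  if (std::shared_ptr<int> p = getWorkItem())   // outer scope owns 'p'
    RAIIObject raii(blah + 1);  // then-local cleanup, no compound statement
  else
    RAIIObject raii(blah - 1);  // else-local cleanup
  // 'p' itself is destroyed here, at the end of the "if" statement.
}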
llvm-svn: 89773
rather than burying it in a CXXConditionDeclExpr (that occasionally
hides behind implicit conversions). Similar changes for
switch, while, and do-while will follow, then the removal of
CXXConditionDeclExpr. This commit is the canary.
llvm-svn: 89717
using the new LLVM support for this. This is temporarily hidden
behind horrible and ugly #ifdefs until the optimizer
is stable (hopefully a week or so). Until then, let's make it "opt in" :)
llvm-svn: 85446
1. CGF now has fewer bytes of state (one pointer instead of a vector).
2. The generated code is deterministic, instead of getting labels in
'map order' based on pointer addresses.
3. Clang now emits one 'indirect goto switch' for each function, instead
of one for each indirect goto. This fixes an M*N = N^2 IR size issue
when there are lots of address-taken labels and lots of indirect gotos.
4. This also makes the default case do something useful, reducing the
size of the jump table needed (by one).
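For a concrete picture (an illustrative sketch using the GNU address-of-label
extension; not from the commit): the function below has two address-taken labels and
two indirect gotos, which now share one per-function dispatch switch instead of
getting one each.

int dispatch(int n) {
  // Two address-taken labels...
  void *targets[] = { &&add_two, &&add_one };
  int acc = 0;
  goto *targets[n & 1];   // ...and two indirect gotos sharing one switch.
add_two:
  acc += 2;
  goto *targets[1];       // always jumps to add_one
add_one:
  acc += 1;
  return acc;
}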
llvm-svn: 83952