in the scope checker. With that done, turn an indirect goto into a
protected scope into a hard error; otherwise IR generation has to start
worrying about declarations not dominating their scopes, as exemplified
in PR8473.
If this really affects anyone, I can probably adjust this to only hard-error
on possible indirect gotos into VLA scopes rather than arbitrary scopes.
But we'll see how people cope with the aggressive change on the marginal
feature.
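For illustration, here is a sketch (hypothetical code, not from the change itself) of the kind of thing that now draws a hard error:
void f(int n) {
  void *target;
  {
    int vla[n];
  in_scope:                    /* address-taken label inside the VLA's protected scope */
    vla[0] = 0;
    target = &&in_scope;
  }
  goto *target;                /* possible indirect goto into that scope: hard error */
}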
llvm-svn: 117539
slot. The easiest way to do that was to bundle up the information
we care about for aggregate slots into a new structure which demands
that its creators at least consider the question.
I could probably be convinced that the ObjC 'needs GC' bit should
be rolled into this structure.
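For what it's worth, a minimal sketch of the kind of structure meant here, with made-up names rather than the actual interface:
struct AggregateSlot {
  void *Address;               // stand-in for the IR-level destination address
  bool IsVolatile;             // must stores into the slot be volatile?
  bool DestructedExternally;   // will someone else run the destructor?
  // The ObjC 'needs GC' bit could plausibly be folded in here too.

  static AggregateSlot forAddr(void *Addr, bool Volatile, bool ExternalDtor) {
    return AggregateSlot{Addr, Volatile, ExternalDtor};
  }
};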
Implement generalized copy elision. The main obstacle here is that
IR-generation must be much more careful about making sure that exactly
llvm-svn: 113962
self-host. Hopefully these results hold up on different platforms.
I tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
Reimplement how clang generates IR for exceptions. Instead of creating new
invoke destinations which sequentially chain to the previous destination,
push a more semantic representation of *why* we need the cleanup/catch/filter
behavior, then collect that information into a single landing pad upon request.
Also reorganizes how normal cleanups (i.e. cleanups triggered by non-exceptional
control flow) are generated, since it's actually fairly closely tied in with
the former. Remove the need to track which cleanup scope a block is associated
with.
Document a lot of previously poorly-understood (by me, at least) behavior.
The new framework implements the Horrible Hack (tm), which requires every
landing pad to have a catch-all so that inlining will work. Clang no longer
requires the Horrible Hack just to make exceptions flow correctly within
a function, however. The HH is an unfortunate requirement of LLVM's EH IR.
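As a hypothetical illustration of why cleanups have to serve both normal and exceptional control flow:
struct Guard { ~Guard(); };
void work();                   // may throw

void f(bool bail) {
  Guard g;
  if (bail) return;            // normal exit: ~Guard() runs on this path
  work();                      // if work() throws, ~Guard() runs via the landing pad
}                              // it also runs on ordinary fallthrough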
llvm-svn: 107631
have CGF create and make accessible standard int32, int64 and

intptr types. This fixes a ton of 80 column violations
introduced by LLVMContextification and cleans up stuff a lot.
llvm-svn: 106977
This works around a crash where malloc reused the memory of an erased BB for a
new BB, leaving old cleanup information pointing at the new block.
llvm-svn: 104472
return statements. We perform NRVO only when all of the return
statements in the function return the same variable. Fixes some link
failures in Boost.Interprocess (which is relying on NRVO), and
probably improves performance for some C++ applications.
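For example (an illustrative sketch, not code from this change), NRVO fires for the first function below but not the second:
#include <string>

std::string makeName() {
  std::string result = "x";
  result += "y";
  return result;               // every return names 'result': NRVO applies
}

std::string pick(bool a) {
  std::string x = "x", y = "y";
  if (a) return x;             // different variables on different paths:
  return y;                    // no NRVO under this rule
}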
llvm-svn: 103867
input and output types when the smaller value isn't mentioned in the
asm string. Extend this support from integers to also allow
fp values to be mismatched (if not mentioned in the asm string).
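A representative example (hypothetical, and the constraints shown are x86-specific) of what is now accepted:
void f(void) {
  unsigned long long out;
  unsigned int in = 0;
  __asm__ ("" : "=r" (out) : "0" (in));    /* 64-bit output tied to a 32-bit input */

  double dout;
  float fin = 0.0f;
  __asm__ ("" : "=x" (dout) : "0" (fin));  /* fp sizes may now mismatch as well */
}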
llvm-svn: 102188
(if there's a current block). The chief advantage of doing this is that it
lets us pick blocks (e.g. EH blocks) to push to the end of the function so
that fallthrough happens consistently --- i.e. it gives us the flexibility
of ordering blocks as we please without having to change the order in which
we generate code. There are standard (?) optimization passes which can do some
of that for us, but better to generate reasonable code to begin with.
llvm-svn: 101997
have the code generator slap srcloc metadata on inline asm nodes.
This allows us to diagnose invalid inline asms with such nice
diagnostics as:
<inline asm>:1:2: error: unrecognized instruction
abc incl %eax
^
asm.c:2:12: note: generated from here
__asm__ ("abc incl %0" : "+r" (X));
^
2 diagnostics generated.
llvm-svn: 100608
EmitReferenceBindingToExpr() rather than assuming we have an
lvalue. This is just the lowest hanging fruit for PR6024, which still
requires a bit of work.
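For instance (illustrative only), the call below binds the reference parameter to something that is not an lvalue, so it has to go through EmitReferenceBindingToExpr():
void take(const int &x);

void caller() {
  // the argument is not an lvalue; a temporary is materialized and the
  // reference is bound to it
  take(1 + 2);
}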
llvm-svn: 99447
need to deal with aggregates specially; this is consistent with the rest of IRgen.
Also, simplify EmitParmDecl and don't worry about using Decl::getNameAsString.
llvm-svn: 95393
All statements that involve conditions can now hold on to a separate
condition declaration (a VarDecl), and will use a DeclRefExpr
referring to that VarDecl for the condition expression. ForStmts now
have such a VarDecl (I'd missed those in previous commits).
Also, since this change reworks the Action interface for
if/while/switch/for, use FullExprArg for the full expressions in those
statements, to ensure that we're emitting the cleanups for any temporaries
created within those full expressions.
Note that we are (still) not generating the right cleanups for
condition variables in for statements. That will be a follow-on
commit.
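A hypothetical sketch of a for statement with such a condition variable:
#include <memory>
std::shared_ptr<int> fetch(int i);
void use(int);

void drain() {
  // 'item' is the condition variable; its destructor needs to run at the
  // end of every iteration.
  for (int i = 0; std::shared_ptr<int> item = fetch(i); ++i)
    use(*item);
}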
llvm-svn: 89817
cleanups for while loops:
1) Make sure that we destroy the condition variable of a while statement each time through the loop, e.g.,
while (shared_ptr<WorkInt> p = getWorkItem()) {
  // ...
}
2) Make sure that we always enter a new cleanup scope for the body of the while loop, even when there is no compound expression, e.g.,
while (blah)
  RAIIObject raii(blah+1);
llvm-svn: 89800
- Outside the "if", to ensure that we destroy the condition variable
at the end of the "if" statement rather than at the end of the
block containing the "if" statement.
- Inside the "then" and "else" branches, so that we emit then- or
else-local cleanups at the end of the corresponding block when the
block is not a compound statement.
To make adding these new cleanup scopes easier (and since
switch/do/while will all need the same treatment), I added the
CleanupScope RAII object to introduce a new cleanup scope and make
sure it gets cleaned up.
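A sketch of both placements (hypothetical types and names):
if (std::shared_ptr<Work> p = getWork())
  RAIIObject raii(*p);         // then-local cleanup: destroyed at the end of the
                               // non-compound "then" branch
else
  handleEmpty();
// ...while 'p' itself is destroyed here, after the entire "if" statement.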
llvm-svn: 89773
rather than burying it in a CXXConditionDeclExpr (that occasionally
hides behind implicit conversions). Similar changes for
switch, while, and do-while will follow, then the removal of
CXXConditionDeclExpr. This commit is the canary.
llvm-svn: 89717
using the new LLVM support for this. This is temporarily hiding
behind horrible and ugly #ifdefs until the time when the optimizer
is stable (hopefully a week or so). Until then, let's make it "opt in" :)
llvm-svn: 85446
1. CGF now has fewer bytes of state (one pointer instead of a vector).
2. The generated code is deterministic, instead of getting labels in
'map order' based on pointer addresses.
3. Clang now emits one 'indirect goto switch' for each function, instead
of one for each indirect goto. This fixes an M*N = N^2 IR size issue
when there are lots of address-taken labels and lots of indirect gotos.
4. This also makes the default case do something useful, reducing the
size of the jump table needed (by one).
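For illustration (hypothetical example), the classic computed-goto pattern that benefits:
void step(int op, int *acc) {
  static void *targets[] = { &&add, &&sub };
  goto *targets[op & 1];       /* every indirect goto in the function now funnels
                                  through one shared switch over label indices */
add:
  ++*acc;
  return;
sub:
  --*acc;
  return;
}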
llvm-svn: 83952
expressions.
- This generally catches the important case of noreturn functions.
- With the last two changes, we are down to 152 unreachable blocks emitted on
403.gcc, vs the 1805 we started with.
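For example (illustrative sketch) of the noreturn case mentioned above:
extern void fatal(const char *msg) __attribute__((noreturn));

int f(void) {
  fatal("boom");
  return 0;                    /* unreachable after the noreturn call; no block
                                  needs to be emitted for it */
}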
llvm-svn: 76364
- Emit variable declarations as "simple"; we want to avoid forcing the creation
of a dummy basic block, but still need to make the variable available for
later use (see the sketch after this list).
- With that, we can now skip IRgen for other unreachable statements (which
don't define a label).
- Anders, I added two fixmes on calls to EmitVLASize, can you check them?
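A sketch of the situation in the first bullet (hypothetical example):
void g(void) {
  return;
  int x = 0;                   /* unreachable, but emitted "simply" so that... */
  x = x + 1;                   /* ...this equally unreachable use still has
                                  storage to refer to */
}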
llvm-svn: 76361
function calls. For a program like this:
#include <stdio.h>
static __inline__ __attribute__((always_inline))
int bar(int x) { return 4; }
int main() {
  int X = bar(4);
  printf("%d\n", X);
}
clang was not outputting any debug info for the body of main(). This is
because the backend is getting confused by the region_start/end that clang
is emitting for block scopes. For now, just disable these (matching llvm-gcc);
this stuff is in the process of being reworked anyway.
llvm-svn: 70889
- <rdar://problem/6732143> Crash when generating @synchronize for
zero-cost exception
- Thanks to Anders for helping track down the problem.
llvm-svn: 68186
really horrible extensions that are disabled by default but that can
be accepted by -fheinous-gnu-extensions (but which always emit a
warning when enabled).
As our first instance of this, implement PR3788/PR3794, which allows
non-lvalues in inline asms in contexts where lvalues are required. bleh.
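A representative sketch (hypothetical, modeled on the kind of code this targets) of what now gets through, with a warning, under -fheinous-gnu-extensions:
void f(void) {
  int x = 0;
  /* the cast makes the output operand a non-lvalue; rejected by default */
  __asm__ ("" : "=r" ((unsigned int)x));
}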
llvm-svn: 66910
done in sema, and is reflected by the existing PR3258. In the meantime,
fix PR3682 by disabling a bogus assertion (which doesn't account for +
operands).
llvm-svn: 66533
asm. This allows us to properly handle the case when an optimizer duplicates
the asm, such as here:
void bar() {
  int i;
  for (i = 0; i < 3; ++i)
    asm("foo %=" : : "r"(0));
}
we now produce:
_bar:
xorl %eax, %eax
## InlineAsm Start
foo 0
## InlineAsm End
## InlineAsm Start
foo 1
## InlineAsm End
## InlineAsm Start
foo 2
## InlineAsm End
ret
instead of:
_bar:
xorl %eax, %eax
## InlineAsm Start
foo 1
## InlineAsm End
## InlineAsm Start
foo 1
## InlineAsm End
## InlineAsm Start
foo 1
## InlineAsm End
ret
This also fixes a fixme by eliminating a static.
llvm-svn: 66528
it in the stack trace, giving us stuff like:
Stack dump:
0. Program arguments: clang t.c -emit-llvm
1. <eof> parser at end of file
2. t.c:1:5: LLVM IR generation of declaration 'a'
3. t.c:1:9: LLVM IR generation of compound statement ('{}')
4. t.c:2:3: LLVM IR generation of compound statement ('{}')
Abort
llvm-svn: 66154
If people could beat on it and let me know if there are any new
semantics required by newer language standards or DRs or any little
details I goofed on, I'd be happy to fix any issues found.
llvm-svn: 64079