Since the continuation block of the if statement was emitted within the
condition scope, this had the undesirable effect of creating a line table
entry at the end of the then or else statement, a line that may never have
been executed.
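For illustration only, a tiny hypothetical example of the symptom (the exact
attribution depends on the debugger):
void pick(bool c, int &x) {
  if (c)
    x = 1;
  else
    x = 2;  // old behaviour: a line entry emitted at the end of this branch
            // could make a breakpoint here appear to hit even when c was true
}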
PR19864 / rdar://problem/17052973
llvm-svn: 209764
It also adds a simple initial version of codegen for '#pragma omp simd' (it will change in the future to support all the clauses).
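For reference, a minimal example of the construct being lowered (hypothetical
code, not taken from the patch's tests):
void saxpy(int n, float a, float *x, float *y) {
  #pragma omp simd
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}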
Differential revision: http://reviews.llvm.org/D3644
llvm-svn: 209411
condition to a constant and emit only the relevant statement. In that
case, we were previously creating the epilog jump destination, a cleanup
scope, and emitting any condition variable into it. Instead, we can emit
the condition variable (if we have one) into the cleanup scope used for
the entire folded case sequence. We avoid creating a jump dest, a basic
block, and an extra cleanup scope. Win!
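For illustration, the folded case with a condition variable (hypothetical
code, just to show the shape of the input being discussed):
void use(int);
void fold_me() {
  switch (int x = 1) {    // the condition folds to a constant
  case 1: use(x); break;  // only this statement needs to be emitted
  default: use(-x); break;
  }
}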
llvm-svn: 207888
CapturedStmt was being ignored by instrumentation based profiling, and
its counters attributed to the containing function. Instead, we need
to treat this as a top level entity, like we do with blocks.
llvm-svn: 206231
are not associated with any source lines.
Previously, if the Location of a Decl was empty, EmitFunctionStart would
just keep using CurLoc, which would sometimes be correct (e.g., thunks)
but in other cases would just point to a hilariously random location.
This patch fixes this by completely eliminating all uses of CurLoc from
EmitFunctionStart and instead having clients explicitly pass in a
SourceLocation for the function header and the function body.
rdar://problem/14985269
llvm-svn: 205999
This adds Clang support for the ARM64 backend. There are definitely
still some rough edges, so please bring up any issues you see with
this patch.
As with the LLVM commit though, we think it'll be more useful for
merging with AArch64 from within the tree.
llvm-svn: 205100
r203364: what was use_iterator is now user_iterator, and there is
a use_iterator for directly iterating over the uses.
This also switches to use the range-based APIs where appropriate.
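A minimal C++ sketch of the renamed iteration APIs as I understand them after
r203364 (V is just a hypothetical llvm::Value*):
#include "llvm/IR/Value.h"

void walk_value(llvm::Value *V) {
  for (llvm::User *U : V->users())  // user_iterator: the Users of V
    (void)U;
  for (llvm::Use &U : V->uses())    // use_iterator: the Use edges themselves
    (void)U;
}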
llvm-svn: 203365
Previously, we made one traversal of the AST prior to codegen to assign
counters to the ASTs and then propagated the count values during codegen. This
patch now adds a separate AST traversal prior to codegen for the
-fprofile-instr-use option to propagate the count values. The counts are then
saved in a map from which they can be retrieved during codegen.
This new approach has several advantages:
1. It gets rid of a lot of extra PGO-related code that had previously been
added to codegen.
2. It fixes a serious bug. My original implementation (which was mailed to the
list but never committed) used 3 counters for every loop. Justin improved it to
move 2 of those counters into the less-frequently executed breaks and continues,
but that turned out to produce wrong count values in some cases. The solution
requires visiting a loop body before the condition so that the count for the
condition properly includes the break and continue counts. Changing codegen to
visit a loop body first would be a fairly invasive change, but with a separate
AST traversal, it is easy to control the order of traversal. I've added a
testcase (provided by Justin) to make sure this works correctly.
3. It reduces the instrumentation overhead, cutting the number of counters for
a loop from 3 to 1. We no longer need dedicated counters for breaks and
continues, since we can just use the propagated count values when visiting
breaks and continues (sketched below).
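As a rough sketch with hypothetical names (E = count entering the loop, B =
the single body counter, K = the propagated break count), meant only to show
why the break count must be known before the condition count can be computed:
bool cond(); bool p(); bool q(); void work();   // hypothetical helpers

void loop() {
  while (cond()) {      // condition count = E + (B - K): every body execution
                        // that does not break comes back and re-tests it
    if (p()) continue;  // a continue re-tests the condition, so it stays in B - K
    if (q()) break;     // a break leaves without re-testing, hence the - K
    work();
  }
}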
To make this work, I needed to make a change to the way we count case
statements, going back to my original approach of not including the fall-through
in the counter values. This was necessary because there isn't always an AST node
that can be used to record the fall-through count. Now case statements are
handled the same as default statements, with the fall-through paths branching
over the counter increments. While I was at it, I also went back to using this
approach for do-loops -- omitting the fall-through count into the loop body
simplifies some of the calculations and makes them behave the same as other
loops. Whenever we start using this instrumentation for coverage, we'll need
to add the fall-through counts into the counter values.
llvm-svn: 201528
This fixes PR15768, where the sret parameter and the 'this' parameter
are in the wrong order.
Instance methods compiled by MSVC never return records in registers,
they always return indirectly through an sret pointer. That sret
pointer always comes after the 'this' parameter, for both __cdecl and
__thiscall methods.
Unfortunately, the same is true for other calling conventions, so we'll
have to change the overall approach here relatively soon.
Reviewers: rsmith
Differential Revision: http://llvm-reviews.chandlerc.com/D2664
llvm-svn: 200587
Due to statement expressions, which are supported as a GCC extension, it is
possible to put 'break' or 'continue' into a loop/switch statement but outside
its body, for example:
for ( ; ({ if (first) { first = 0; continue; } 0; }); )
This code is rejected by GCC if compiled in C mode but is accepted in C++
mode. GCC bug 44715 tracks this discrepancy. Clang used code generation that
differed from GCC in both modes: only statements in the third expression of
'for' behaved as if they were inside the loop body.
This change brings code generation closer to GCC by treating a 'break' or
'continue' statement in the condition or increment expression of a loop as if
it were inside the loop body. It also adds an error for the cases where
'break'/'continue' appears outside a loop due to this syntax. Where code
generation differs from GCC, a warning is issued.
Differential Revision: http://llvm-reviews.chandlerc.com/D2518
llvm-svn: 199897
I misunderstood the discussion on this. The complexity here is
justified by the malloc overhead it saves.
This reverts commit r199302.
llvm-svn: 199700
Way back in r129652 we tried to avoid emitting an empty block at -O0
for switch cases that did nothing but break. This led to a poor
debugging experience as reported in PR9796, so we disabled the
optimization for -O0 but left it in for higher optimization levels in
r154420.
Since the whole point of this was to improve -O0, it's silly to keep
the complexity at all.
llvm-svn: 199302
adjustFallThroughCount isn't a good name, and the documentation was
even worse. This commit attempts to clarify what it's for and when to
use it.
llvm-svn: 199139
There are a number of places where we do PGO.setCurrentRegionCount(0)
directly after an unconditional branch. Give this operation a name so
that it's clearer why we're doing this.
llvm-svn: 199138
This call looks like it was an artifact of an earlier change, and
doesn't actually make sense. We begin a new region immediately anyway,
so it was mostly harmless.
llvm-svn: 199137
C and C++ don't emit an extra lexical scope for the compound statement
that is the body of an Objective-C method.
rdar://problem/15010825
llvm-svn: 198699
encodes the canonical rules for LLVM's style. I noticed this had drifted
quite a bit when cleaning up LLVM, so wanted to clean up Clang as well.
llvm-svn: 198686
Not long ago I made the CodeGen of for loops simplify the condition at
-O0 in the same way we do for if and conditionals. Unfortunately this
ties how loops and simple conditions work together too tightly, which
makes features such as instrumentation based PGO awkward.
Ultimately, we should find a more general way to simplify the logic in
a given condition, but for now we'll just avoid using EmitBranchOnBool
for loops, like we already do for while and do loops.
llvm-svn: 195438
A while ago EmitForStmt was changed to explicitly evaluate the
condition expression and create a branch instead of using
EmitBranchOnBool, so that the condition expression could be used for
some cleanup logic. The cleanup stuff has since been reorganized, and
this is no longer necessary.
In EmitCXXForRange, the evaluated condition was never used for
anything else. The logic was presumably modeled on EmitForStmt.
llvm-svn: 193994
An initialization somehow found its way in between a comment and the
block of code the comment is about. Moving the initialization makes
this less confusing.
llvm-svn: 193993
CodeGenFunction is run on only one function - a new object is made for
each new function. I would add an assertion/flag to this effect, but
there's an exception: ObjC properties involve emitting helper functions
that are all emitted by the same CodeGenFunction object, so such a check
is not possible/correct.
llvm-svn: 189277
between a block assignment and the entry of the block function. In reality
this wouldn't work anyway because blocks are predominantly created
on-the-fly inside of an ObjC method invocation.
The proper fix for the ambiguity is to use -gcolumn-info to differentiate
the breakpoints.
This is expected to break some block-related darwin-gdb tests.
rdar://problem/14039866
llvm-svn: 184157
Two variables with the same name declared in two if conditions in the same
scope are no longer coalesced into one.
rdar://problem/14024005
llvm-svn: 183597
X86's 'y' inline assembly constraint represents an MMX register; this change
prevents Clang from hitting an assertion when it is passed an incompatible
type.
llvm-svn: 183467
EmitCapturedStmt creates a captured struct containing all of the captured
variables, and then emits a call to the outlined function. This is similar in
principle to EmitBlockLiteral.
GenerateCapturedFunction actually produces the outlined function. It is based
on GenerateBlockFunction, but is much simpler. The function type is determined
by the parameters that are in the CapturedDecl.
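Roughly the shape of the transformation, written out in source form (all names
here are hypothetical; the real record and outlined function are
compiler-generated):
struct capture_record { int *x; double *y; };     // one field per captured variable

static void outlined_body(capture_record *ctx) {  // what GenerateCapturedFunction produces
  *ctx->x += static_cast<int>(*ctx->y);           // body rewritten to go through ctx
}

void emit_region(int x, double y) {               // what EmitCapturedStmt does at the use site:
  capture_record rec = { &x, &y };                //   build the capture struct
  outlined_body(&rec);                            //   call the outlined function
}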
Some changes have been added to this patch that were reviewed as part of the
serialization patch and moving the parameters to the captured decl.
Differential Revision: http://llvm-reviews.chandlerc.com/D640
llvm-svn: 181536
Un-break the gdb buildbot.
- Use the debug location of the return expression for the cleanup code
if the return expression is trivially evaluatable, regardless of the
number of stop points in the function.
- Ensure that any EH code in the cleanup still gets the line number of
the closing } of the lexical scope.
- Added a testcase with EH in the cleanup.
rdar://problem/13442648
llvm-svn: 181056
- Use the debug location of the return expression for the cleanup code
if the return expression is trivially evaluatable, regardless of the
number of stop points in the function.
- Ensure that any EH code in the cleanup still gets the line number of
the closing } of the lexical scope.
- Added a testcase with EH in the cleanup.
rdar://problem/13442648
llvm-svn: 180982
the actual parser and support arbitrary id-expressions.
We're actually basically set up to do arbitrary expressions here
if we wanted to.
Assembly operands permit things like A::x to be written regardless
of language mode, which forces us to embellish the evaluation
context logic somewhat. The logic here under template instantiation
is incorrect; we need to preserve the fact that an expression was
unevaluated. Of course, template instantiation in general is fishy
here because we have no way of delaying semantic analysis in the
MC parser. It's all just fishy.
I've also fixed the serialization of MS asm statements.
This commit depends on an LLVM commit.
llvm-svn: 180976
If there is cleanup code, the cleanup code gets the debug location of
the closing '}'. The subsequent ret IR-instruction does not get a
debug location. The return _expression_ will get the debug location
of the return statement.
If the function contains only a single, simple return statement,
the cleanup code may become the first breakpoint in the function.
In this case we set the debug location for the cleanup code
to the location of the return statement.
rdar://problem/13442648
llvm-svn: 180932
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.
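A sketch of the resulting classification (fragment only; CGF stands for a
CodeGenFunction, E for an Expr*, and the TEK_* spellings are an assumption on
my part):
switch (CGF.getEvaluationKind(E->getType())) {
case TEK_Scalar:     // integers, pointers, floats, ...
  break;
case TEK_Complex:    // _Complex types
  break;
case TEK_Aggregate:  // structs, arrays, and other memory-only values
  break;
}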
Also, normalize the API for loading and storing complexes.
I'm working on a larger patch and wanted to pull these
changes out, but it would have been annoying to detangle
them from each other.
llvm-svn: 176656
One of the gotchas (see changes to CodeGenFunction) was due to the fix in
r139416 (for PR10829). This only worked previously because the top level
lexical block would set the location to the end of the function, the debug
location would be updated (as per r139416), the location would be set to
the end of the function again (but that would no-op, since it was the same
as the previous location), then the return instruction would be emitted using
the debug location.
Once the top level lexical block was no longer emitted, the end-of-function
location change was causing the debug loc to be updated, regressing that bug.
llvm-svn: 173593
uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.
llvm-svn: 169237
Convert the uses of the Attributes class over to the new format. The
Attributes::get method call now takes an LLVM context so that the attributes
object can be uniquified and stored.
llvm-svn: 165918
into the enclosing scope; this is a more accurate model but is
(I believe) unnecessary in my test case due to other flaws.
However, one of those flaws is now intentional: blocks which
appear in return statements can be trivially observed to not
extend in lifetime past the return, and so we can allow a jump
past them. Do the necessary magic in IR-generation to make
this work.
llvm-svn: 164589
AsmStmts. This function is only used by GCCAsmStmts, however. Constraints need
to be properly computed before MSAsmStmts can use EmitAsmStmt. No functional
change intended.
llvm-svn: 162776
CodeGen option to a LangOpt option. In turn, hoist the guard into the parser
so that we avoid the new (and fairly unstable) Sema/AST/CodeGen logic. This
should restore the behavior of clang to that prior to r158325.
<rdar://problem/12163681>
llvm-svn: 162602
error was asserting on anything that included Windows.h. MS-style inline asm is
still dropped, but at least now we're not completely silent about it.
llvm-svn: 158833
attached. Since we do not support any attributes which appertain to a statement
(yet), testing of this is necessarily quite minimal.
Patch by Alexander Kornienko!
llvm-svn: 154723
and emit a relatively empty block for a plain break statement. This
enables us to track where we went through a switch.
PR9796 & rdar://11215207
llvm-svn: 154420
The purpose of this refactoring is to hide operand roles from the SwitchInst user (programmer). If you want to work with operands directly, you probably need lower-level methods than SwitchInst's (TerminatorInst, or maybe User). After this patch we can reorganize SwitchInst operands and successors as we want.
What was done:
1. Changed the semantics of the index inside the getCaseValue method:
getCaseValue(0) means "get the first case", not the condition. Use getCondition() if you want to resolve the condition. I propose not mixing SwitchInst case indexing with low-level indexing (TerminatorInst successor indexing, User operand indexing), since it may be dangerous.
2. For the same reason, findCaseValue(ConstantInt*) returns the actual case index: 0 means the first case, not the default. If there is no case with the given value, ErrorIndex is returned.
3. Added the getCaseSuccessor method. I propose avoiding TerminatorInst::getSuccessor when resolving a case's successor BB. Use getCaseSuccessor instead, since the internal SwitchInst organization of operands/successors is hidden and may change at any moment.
4. Added resolveSuccessorIndex and resolveCaseIndex. The main purpose of these methods is to show how case successors are really mapped onto TerminatorInst.
4.1 "resolveSuccessorIndex" is for when you need to drop down from SwitchInst to TerminatorInst. It returns the TerminatorInst successor index for a given case successor.
4.2 "resolveCaseIndex" converts a low-level successor index to the case index that corresponds to the given successor.
Note: There are also related compatibility fix patches for dragonegg, klee, llvm-gcc-4.0, llvm-gcc-4.2, safecode, and clang.
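A short usage sketch of the new accessors (method names from the list above;
SI is a SwitchInst* and CI a ConstantInt*, both placeholders, and the exact
signatures are an assumption):
unsigned Idx = SI->findCaseValue(CI);                // 0 is the first case; ErrorIndex if CI is not a case value
llvm::ConstantInt *Val  = SI->getCaseValue(Idx);     // never the condition; use getCondition() for that
llvm::BasicBlock  *Dest = SI->getCaseSuccessor(Idx); // hides the operand/successor layout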
llvm-svn: 149482
statements. As noted in the documentation for the AST node, the
semantics of __if_exists/__if_not_exists are somewhat different from
the way Visual C++ implements them, because our parsed-template
representation can't accommodate VC++ semantics without serious
contortions. Hopefully this implementation is "good enough".
llvm-svn: 142901
Start handling debug line and scope information better:
Migrate most of the location setting within the larger API in CGDebugInfo and
update a lot of callers.
Remove the existing file/scope change machinery in UpdateLineDirectiveRegion
and replace it with DILexicalBlockFile usage.
Finishes off the rest of rdar://10246360
after fixing a few bugs that were exposed in gdb testsuite testing.
llvm-svn: 141893
Migrate most of the location setting within the larger API in CGDebugInfo and
update a lot of callers.
Remove the existing file/scope change machinery in UpdateLineDirectiveRegion
and replace it with DILexicalBlockFile usage.
Finishes off the rest of rdar://10246360
llvm-svn: 141732
- Remodel Expr::EvaluateAsInt to behave like the other EvaluateAs* functions,
and add Expr::EvaluateKnownConstInt to capture the current fold-or-assert
behaviour (both are sketched below).
- Factor out evaluation of bitfield bit widths.
- Fix a few places which would evaluate an expression twice: once to determine
whether it is a constant expression, then again to get the value.
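The two entry points, sketched with assumed signatures (E is an Expr* and Ctx
an ASTContext, both placeholders):
llvm::APSInt Result;
if (E->EvaluateAsInt(Result, Ctx)) {
  // E folded to an integer constant; use Result.
} else {
  // Not an integer constant expression.
}
llvm::APSInt Known = E->EvaluateKnownConstInt(Ctx);  // folds, or asserts if it cannot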
llvm-svn: 141561
emit call results into potentially aliased slots. This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into a l-value. It also brings us into compliance with
the x86-64 ABI.
llvm-svn: 138599
hierarchy of delegation and that EH selector values have the same
meaning everywhere in the function instead of being meaningful only
in the context of a specific selector (a good thing, too, or inlining
wouldn't work).
This removes the need for routing edges through EH cleanups,
since a cleanup simply always branches to its enclosing scope.
llvm-svn: 137293
Language-design credit goes to a lot of people, but I particularly want
to single out Blaine Garst and Patrick Beard for their contributions.
Compiler implementation credit goes to Argyrios, Doug, Fariborz, and myself,
in no particular order.
llvm-svn: 133103
Emit debug info only if there is an insertion point. The debug info should not force an insertion point. Codegen may later on decide to not emit code for some reason, see extensive comment in CodeGenFunction::EmitStmt(), and debug info should not get in the way.
llvm-svn: 132610
it down. We were effectively compiling the testcase into:
void test14(int x) {
  switch (x) {
  case 11: break;
  case 42: test14(97); // fallthrough
  default: test14(42); break;
  }
}
which is not the same thing at all. This fixes a miscompilation of
MallocBench/gs seen on the clang-x86_64-linux-fnt buildbot.
llvm-svn: 129679
live case of a switch statement when switching on a constant. This is terribly
limited, but enough to handle the trivial example included. Before we would
emit:
define void @test1(i32 %i) nounwind {
entry:
  %i.addr = alloca i32, align 4
  store i32 %i, i32* %i.addr, align 4
  switch i32 1, label %sw.epilog [
    i32 1, label %sw.bb
  ]
sw.bb:                                    ; preds = %entry
  %tmp = load i32* %i.addr, align 4
  %inc = add nsw i32 %tmp, 1
  store i32 %inc, i32* %i.addr, align 4
  br label %sw.epilog
sw.epilog:                                ; preds = %sw.bb, %entry
  switch i32 0, label %sw.epilog3 [
    i32 1, label %sw.bb1
  ]
sw.bb1:                                   ; preds = %sw.epilog
  %tmp2 = load i32* %i.addr, align 4
  %add = add nsw i32 %tmp2, 2
  store i32 %add, i32* %i.addr, align 4
  br label %sw.epilog3
sw.epilog3:                               ; preds = %sw.bb1, %sw.epilog
  ret void
}
now we emit:
define void @test1(i32 %i) nounwind {
entry:
  %i.addr = alloca i32, align 4
  store i32 %i, i32* %i.addr, align 4
  %tmp = load i32* %i.addr, align 4
  %inc = add nsw i32 %tmp, 1
  store i32 %inc, i32* %i.addr, align 4
  ret void
}
This improves -O0 compile time (less IR to generate and shove through the code
generator), and the clever Linux kernel people found a way to fail to build if
we don't do this optimization. This step isn't enough to handle the kernel case
though.
llvm-svn: 126597
LabelDecl and LabelStmt. There is a 1-1 correspondence between the
two, but this simplifies a bunch of code by itself. This is because
labels are the only place where we previously had references to random
other statements, causing grief for AST serialization and other stuff.
This does cause one regression (attr(unused) doesn't silence unused
label warnings) which I'll address next.
This does fix some minor bugs:
1. "The only valid attribute " diagnostic was capitalized.
2. Various diagnostics printed as ''labelname'' instead of 'labelname'
3. This reduces duplication of label checking between functions and blocks.
Review appreciated, particularly for the cindex and template bits.
llvm-svn: 125733
there were only three virtual methods of any significance.
The primary way to grab child iterators now is with
Stmt::child_range children();
Stmt::const_child_range children() const;
where a child_range is just a std::pair of iterators suitable for
being llvm::tie'd to some locals. I've left the old child_begin()
and child_end() accessors in place, but it's probably a substantial
penalty to grab the iterators individually now, since the
switch-based dispatch is kind of inherently slower than vtable
dispatch. Grabbing them together is probably a slight win over the
status quo, although of course we could've achieved that with vtables, too.
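Typical use, per the description above (S is a Stmt* assumed in scope;
'visit' is a hypothetical callback):
Stmt::child_iterator I, E;
llvm::tie(I, E) = S->children();   // child_range is just a std::pair of iterators
for (; I != E; ++I)
  if (Stmt *Child = *I)            // children may be null
    visit(Child);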
I also reclassified SwitchCase (correctly) as an abstract Stmt
class, which (as the first such class that wasn't an Expr subclass)
required some fiddling in a few places.
There are somewhat gross metaprogramming hooks in place to ensure
that new statements/expressions continue to implement
getSourceRange() and children(). I had to work around a recent clang
bug; dgregor actually fixed it already, but I didn't want to
introduce a self-hosting dependency on ToT.
llvm-svn: 125183
delete the block we began emitting into if it had no predecessors. We never
want to do this, because there are several valid cases during statement
emission where an existing block has no known predecessors but will acquire
some later. The case in my test case doesn't inherently fall into this
category, because we could safely emit the case-range code before the statement
body, but there are examples with labels that can't be fallen into
that would also demonstrate this bug.
rdar://problem/8837067
llvm-svn: 123303
in asm statements:
register int foo asm("rdi");
asm("..." : ... "r" (foo) ...
We also only accept these variables if the constraint in the asm statement is "r".
This fixes most of PR3933.
llvm-svn: 122643
Fix a bug in the emission of complex compound assignment l-values.
Introduce a method to emit an expression whose value isn't relevant.
Make that method evaluate its operand as an l-value if it is one.
Fixes our volatile compliance in C++.
llvm-svn: 120931
of all the lines of the inline asm. With the refactoring and enhancement
of the backend, we can now report errors on the correct source line when
an asm contains multiple lines of text. For something like this:
void foo() {
  asm("push %rax\n"
      ".code32\n");
}
we used to get this (note that line 4 in t.c isn't helpful):
t.c:4:7: error: warning: ignoring directive for now
asm("push %rax\n"
^
<inline asm>:2:1: note: instantiated into assembly here
.code32
^
now we get:
t.c:5:8: error: warning: ignoring directive for now
".code32\n"
^
<inline asm>:2:1: note: instantiated into assembly here
.code32
^
Note that we're pointing to line 5 properly now. This implements
rdar://7839391 - inline asm errors should point to the right line in the asm
and makes the error message in PR8595 much less confusing.
llvm-svn: 119489
in asm's. PR 8501, 8602988.
I don't like including Type.h where it is; the idea was
to get references to X86_MMXTy out of the common code.
Maybe there's a better way?
llvm-svn: 117736
in the scope checker. With that done, make an indirect goto into a
protected scope a hard error; otherwise IR generation has to start
worrying about declarations not dominating their scopes, as exemplified
in PR8473.
If this really affects anyone, I can probably adjust this to only hard-error
on possible indirect gotos into VLA scopes rather than arbitrary scopes.
But we'll see how people cope with the aggressive change on the marginal
feature.
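The sort of code that now gets the hard error (illustrative only; the
protected scope here is a variable with a non-trivial destructor, and VLA
scopes behave the same way):
struct Guard { Guard(); ~Guard(); };

void f() {
  void *target = &&inside;
  goto *target;   // hard error now: this indirect goto might jump into a protected scope
  {
    Guard g;      // 'g' would never be constructed on that path
  inside:
    ;
  }
}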
llvm-svn: 117539
slot. The easiest way to do that was to bundle up the information
we care about for aggregate slots into a new structure which demands
that its creators at least consider the question.
I could probably be convinced that the ObjC 'needs GC' bit should
be rolled into this structure.
Implement generalized copy elision. The main obstacle here is that
IR-generation must be much more careful about making sure that exactly
llvm-svn: 113962
self-host. Hopefully these results hold up on different platforms.
I tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
Reimplement how clang generates IR for exceptions. Instead of creating new
invoke destinations which sequentially chain to the previous destination,
push a more semantic representation of *why* we need the cleanup/catch/filter
behavior, then collect that information into a single landing pad upon request.
Also reorganizes how normal cleanups (i.e. cleanups triggered by non-exceptional
control flow) are generated, since it's actually fairly closely tied in with
the former. Remove the need to track which cleanup scope a block is associated
with.
Document a lot of previously poorly-understood (by me, at least) behavior.
The new framework implements the Horrible Hack (tm), which requires every
landing pad to have a catch-all so that inlining will work. Clang no longer
requires the Horrible Hack just to make exceptions flow correctly within
a function, however. The HH is an unfortunate requirement of LLVM's EH IR.
llvm-svn: 107631
have CGF create and make accessible standard int32, int64 and
intptr types. This fixes a ton of 80 column violations
introduced by LLVMContextification and cleans up stuff a lot.
llvm-svn: 106977
This works around a crash where malloc reused the memory of an erased BB for a
new BB leaving old cleanup information pointing at the new block.
llvm-svn: 104472
return statements. We perform NRVO only when all of the return
statements in the function return the same variable. Fixes some link
failures in Boost.Interprocess (which is relying on NRVO), and
probably improves performance for some C++ applications.
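An example of the pattern that qualifies under this rule as described (every
return names the same variable; hypothetical code):
#include <string>

bool flip();

std::string make() {
  std::string s;
  if (flip()) {
    s += "early";
    return s;   // the same variable is returned on every path,
  }             // so 's' can be built directly in the return slot
  s += "late";
  return s;
}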
llvm-svn: 103867
input and output types when the smaller value isn't mentioned in the
asm string. Extend this support from integers to also allow
fp values to be mismatched (if not mentioned in the asm string).
llvm-svn: 102188
(if there's a current block). The chief advantage of doing this is that it
lets us pick blocks (e.g. EH blocks) to push to the end of the function so
that fallthrough happens consistently --- i.e. it gives us the flexibility
of ordering blocks as we please without having to change the order in which
we generate code. There are standard (?) optimization passes which can do some
of that for us, but better to generate reasonable code to begin with.
llvm-svn: 101997
have the code generator slap srcloc metadata on inline asm nodes.
This allows us to diagnose invalid inline asms with such nice
diagnostics as:
<inline asm>:1:2: error: unrecognized instruction
abc incl %eax
^
asm.c:2:12: note: generated from here
__asm__ ("abc incl %0" : "+r" (X));
^
2 diagnostics generated.
llvm-svn: 100608
EmitReferenceBindingToExpr() rather than assuming we have an
lvalue. This is just the lowest hanging fruit for PR6024, which still
requires a bit of work.
llvm-svn: 99447
need to deal with aggregates specially; this is consistent with the rest of IRgen.
Also, simplify EmitParmDecl and don't worry about using Decl::getNameAsString.
llvm-svn: 95393
All statements that involve conditions can now hold on to a separate
condition declaration (a VarDecl), and will use a DeclRefExpr
referring to that VarDecl for the condition expression. ForStmts now
have such a VarDecl (I'd missed those in previous commits).
Also, since this change reworks the Action interface for
if/while/switch/for, use FullExprArg for the full expressions in those
expressions, to ensure that we're emitting
Note that we are (still) not generating the right cleanups for
condition variables in for statements. That will be a follow-on
commit.
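A small example of a condition variable in a for statement, the case this adds
a VarDecl for (names are hypothetical):
bool step(unsigned i);

void walk() {
  for (unsigned i = 0; bool more = step(i); ++i)
    (void)more;   // 'more' is declared by the condition and visible in the body
}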
llvm-svn: 89817
cleanups for while loops:
1) Make sure that we destroy the condition variable of a while statement each time through the loop, for example:
while (shared_ptr<WorkInt> p = getWorkItem()) {
// ...
}
2) Make sure that we always enter a new cleanup scope for the body of the while loop, even when the body is not a compound statement, e.g.,
while (blah)
RAIIObject raii(blah+1);
llvm-svn: 89800
- Outside the "if", to ensure that we destroy the condition variable
at the end of the "if" statement rather than at the end of the
block containing the "if" statement.
- Inside the "then" and "else" branches, so that we emit then- or
else-local cleanups at the end of the corresponding block when the
block is not a compound statement.
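For illustration, where the two kinds of cleanups now run (hypothetical RAII
type; this is only a sketch of the behaviour described above):
struct RAIIObject { RAIIObject(int); ~RAIIObject(); };

void f(bool blah, int x) {
  if (bool b = blah)
    RAIIObject raii(x);   // then-local: destroyed at the end of this branch
  // 'b' itself goes out of scope here, at the end of the whole if statement
}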
To make adding these new cleanup scopes easier (and since
switch/do/while will all need the same treatment), added the
CleanupScope RAII object to introduce a new cleanup scope and make
sure it gets cleaned up.
llvm-svn: 89773
rather than burying it in a CXXConditionDeclExpr (that occasionally
hides behind implicit conversions). Similar changes for
switch, while, and do-while will follow, then the removal of
CXXConditionDeclExpr. This commit is the canary.
llvm-svn: 89717
using the new LLVM support for this. This is temporarily hiding
behind horrible and ugly #ifdefs until the time when the optimizer
is stable (hopefully a week or so). Until then, let's make it "opt in" :)
llvm-svn: 85446
1. CGF now has fewer bytes of state (one pointer instead of a vector).
2. The generated code is deterministic, instead of getting labels in
'map order' based on pointer addresses.
3. Clang now emits one 'indirect goto switch' for each function, instead
of one for each indirect goto. This fixes an M*N = N^2 IR size issue
when there are lots of address-taken labels and lots of indirect gotos.
4. This also makes the default case do something useful, reducing the
size of the jump table needed (by one).
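The construct involved, for reference (GNU labels-as-values; illustrative
code only):
void run(int pick) {
  void *targets[] = { &&first, &&second };   // address-taken labels
  goto *targets[pick & 1];                   // indirect goto; now lowered through one per-function switch
first:
  return;
second:
  return;
}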
llvm-svn: 83952
expressions.
- This generally catches the important case of noreturn functions.
- With the last two changes, we are down to 152 unreachable blocks emitted on
403.gcc, vs the 1805 we started with.
llvm-svn: 76364
- Emit variable declarations as "simple"; we want to avoid forcing the creation
of a dummy basic block, but still need to make the variable available for
later use.
- With that, we can now skip IRgen for other unreachable statements (which
don't define a label).
- Anders, I added two fixmes on calls to EmitVLASize, can you check them?
llvm-svn: 76361
function calls. For a program like this:
#include <stdio.h>
static __inline__ __attribute__((always_inline))
int bar(int x) { return 4; }
int main() {
  int X = bar(4);
  printf("%d\n", X);
}
clang was not outputting any debug info for the body of main(). This is
because the backend is getting confused by the region_start/end that clang
is emitting for block scopes. For now, just disable these (matching llvm-gcc);
this stuff is being reworked anyway.
llvm-svn: 70889