live case of a switch statement when switching on a constant. This is terribly
limited, but it is enough to handle the trivial example included. Before, we
would emit:
define void @test1(i32 %i) nounwind {
entry:
  %i.addr = alloca i32, align 4
  store i32 %i, i32* %i.addr, align 4
  switch i32 1, label %sw.epilog [
    i32 1, label %sw.bb
  ]

sw.bb:                                      ; preds = %entry
  %tmp = load i32* %i.addr, align 4
  %inc = add nsw i32 %tmp, 1
  store i32 %inc, i32* %i.addr, align 4
  br label %sw.epilog

sw.epilog:                                  ; preds = %sw.bb, %entry
  switch i32 0, label %sw.epilog3 [
    i32 1, label %sw.bb1
  ]

sw.bb1:                                     ; preds = %sw.epilog
  %tmp2 = load i32* %i.addr, align 4
  %add = add nsw i32 %tmp2, 2
  store i32 %add, i32* %i.addr, align 4
  br label %sw.epilog3

sw.epilog3:                                 ; preds = %sw.bb1, %sw.epilog
  ret void
}
Now we emit:
define void @test1(i32 %i) nounwind {
entry:
  %i.addr = alloca i32, align 4
  store i32 %i, i32* %i.addr, align 4
  %tmp = load i32* %i.addr, align 4
  %inc = add nsw i32 %tmp, 1
  store i32 %inc, i32* %i.addr, align 4
  ret void
}
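For reference, C input along these lines produces IR like the above (a
reconstruction; the actual checked-in test isn't reproduced here):

void test1(int i) {
  switch (1) {   // constant condition: only case 1 is live
  case 1:
    ++i;
    break;
  }
  switch (0) {   // constant condition: no case matches, the body is dead
  case 1:
    i += 2;
    break;
  }
}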
This improves -O0 compile time (less IR to generate and shove through the code
generator), and the clever Linux kernel people found a way to fail to build if
we don't do this optimization. This step isn't enough to handle the kernel
case, though.
llvm-svn: 126597
LabelDecl and LabelStmt. There is a 1-1 correspondence between the
two, but this simplifies a bunch of code by itself. This is because
labels are the only place where we previously had references to random
other statements, causing grief for AST serialization and other stuff.
This does cause one regression (attr(unused) doesn't silence unused
label warnings) which I'll address next.
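For orientation, in a function like this (an illustrative sketch), the goto
and the label definition both refer to the one LabelDecl, while the LabelStmt
is the definition point in the statement tree:

void f(void) {
  goto done;   // this reference resolves to the label's LabelDecl
done:          // the LabelStmt marks where the label is defined
  return;
}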
This does fix some minor bugs:
1. "The only valid attribute " diagnostic was capitalized.
2. Various diagnostics printed as ''labelname'' instead of 'labelname'
3. This reduces duplication of label checking between functions and blocks.
Review appreciated, particularly for the cindex and template bits.
llvm-svn: 125733
there were only three virtual methods of any significance.
The primary way to grab child iterators now is with
  Stmt::child_range children();
  Stmt::const_child_range children() const;
where a child_range is just a std::pair of iterators suitable for
being llvm::tie'd to some locals. I've left the old child_begin()
and child_end() accessors in place, but it's probably a substantial
penalty to grab the iterators individually now, since the
switch-based dispatch is kind of inherently slower than vtable
dispatch. Grabbing them together is probably a slight win over the
status quo, although of course we could've achieved that with vtables, too.
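A sketch of the intended usage pattern (assuming the Clang headers and the
llvm::tie helper of this era):

#include "clang/AST/Stmt.h"
#include "llvm/ADT/STLExtras.h"

static void visitChildren(clang::Stmt *S) {
  clang::Stmt::child_iterator I, E;
  llvm::tie(I, E) = S->children();   // grab both iterators in one call
  for (; I != E; ++I)
    if (clang::Stmt *Child = *I)     // child slots may be null
      visitChildren(Child);
}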
I also reclassified SwitchCase (correctly) as an abstract Stmt
class, which (as the first such class that wasn't an Expr subclass)
required some fiddling in a few places.
There are somewhat gross metaprogramming hooks in place to ensure
that new statements/expressions continue to implement
getSourceRange() and children(). I had to work around a recent clang
bug; dgregor actually fixed it already, but I didn't want to
introduce a selfhosting dependency on ToT.
llvm-svn: 125183
delete the block we began emitting into if it had no predecessors. We never
want to do this, because there are several valid cases during statement
emission where an existing block has no known predecessors but will acquire
some later. The case in my test case doesn't inherently fall into this
category, because we could safely emit the case-range code before the statement
body, but there are examples with labels that can't be fallen into
that would also demonstrate this bug.
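For reference, the case-range construct at issue is the GNU extension,
roughly like this (an illustrative reduction, not the actual test case):

void f(int x) {
  switch (x) {
  case 1 ... 5:   // GNU case range; the range comparison is emitted as
    break;        // separate code feeding the statement body's block
  }
}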
rdar://problem/8837067
llvm-svn: 123303
in asm statements:
  register int foo asm("rdi");
  asm("..." : ... "r" (foo) ...
We also only accept these variables if the constraint in the asm statement is "r".
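A compilable sketch of the whole pattern (x86-64 assumed; the names are
illustrative):

void bar(void) {
  register int foo asm("rdi");   // variable pinned to a specific register
  foo = 42;
  // Only the "r" constraint is accepted for such variables, per the above.
  asm volatile("# %0 is already in %%rdi" : : "r"(foo));
}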
This fixes most of PR3933.
llvm-svn: 122643
Fix a bug in the emission of complex compound assignment l-values.
Introduce a method to emit an expression whose value isn't relevant.
Make that method evaluate its operand as an l-value if it is one.
Fixes our volatile compliance in C++.
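An illustrative reduction (hypothetical, not the committed test): in C++ an
assignment yields an l-value, and evaluating the discarded result as an
l-value avoids a spurious volatile load:

void f(volatile int &v) {
  // The statement's value is unused, so 'v' must not be read back
  // after the store.
  v = 5;
}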
llvm-svn: 120931
of all the lines of the inline asm. With the refactoring and enhancement
of the backend, we can now reports errors on the correct source line when
an asm contains multiple lines of text. For something like this:
void foo() {
  asm("push %rax\n"
      ".code32\n");
}
we used to get this (note that line 4 in t.c isn't helpful):
t.c:4:7: error: warning: ignoring directive for now
  asm("push %rax\n"
      ^
<inline asm>:2:1: note: instantiated into assembly here
.code32
^
now we get:
t.c:5:8: error: warning: ignoring directive for now
      ".code32\n"
       ^
<inline asm>:2:1: note: instantiated into assembly here
.code32
^
Note that we're pointing to line 5 properly now. This implements
rdar://7839391 - inline asm errors should point to the right line in the asm
and makes the error message in PR8595 much less confusing.
llvm-svn: 119489
in asm's. PR 8501, 8602988.
I don't like including Type.h where it is; the idea was
to get references to X86_MMXTy out of the common code.
Maybe there's a better way?
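For context, the kind of source construct involved looks roughly like this
(an illustrative sketch, not the PR's test case; "y" is the MMX register
constraint):

typedef int v2si __attribute__((vector_size(8)));

void f(v2si x) {
  // Passing an 8-byte vector through the MMX register class is what
  // exercises the X86_MMXTy handling.
  asm volatile("# %0 is in an MMX register" : : "y"(x));
}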
llvm-svn: 117736
in the scope checker. With that done, turn an indirect goto into a
protected scope into a hard error; otherwise IR generation has to start
worrying about declarations not dominating their scopes, as exemplified
in PR8473.
If this really affects anyone, I can probably adjust this to only hard-error
on possible indirect gotos into VLA scopes rather than arbitrary scopes.
But we'll see how people cope with the aggressive change on the marginal
feature.
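As an illustration (a hypothetical reduction in the spirit of PR8473),
something like this now draws a hard error instead of producing IR where the
VLA's declaration doesn't dominate its uses:

void f(int n) {
  void *p = &&inside;   // address of a label inside a VLA's scope
  goto *p;              // indirect goto into a protected scope: hard error
  {
    int vla[n];
  inside:
    vla[0] = 0;
  }
}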
llvm-svn: 117539
slot. The easiest way to do that was to bundle up the information
we care about for aggregate slots into a new structure which demands
that its creators at least consider the question.
I could probably be convinced that the ObjC 'needs GC' bit should
be rolled into this structure.
Implement generalized copy elision. The main obstacle here is that
IR-generation must be much more careful about making sure that exactly
one copy is made.
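In source terms, the elision in question has this general shape
(illustrative only):

struct Big { int data[64]; };
Big make();

void use() {
  // With the elision, make()'s result is constructed directly in b's
  // slot instead of in a temporary that is then copied.
  Big b = make();
}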
llvm-svn: 113962
self-host. Hopefully these results hold up on different platforms.
I tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
Reimplement how clang generates IR for exceptions. Instead of creating new
invoke destinations which sequentially chain to the previous destination,
push a more semantic representation of *why* we need the cleanup/catch/filter
behavior, then collect that information into a single landing pad upon request.
Also reorganizes how normal cleanups (i.e. cleanups triggered by non-exceptional
control flow) are generated, since it's actually fairly closely tied in with
the former. Remove the need to track which cleanup scope a block is associated
with.
Document a lot of previously poorly-understood (by me, at least) behavior.
The new framework implements the Horrible Hack (tm), which requires every
landing pad to have a catch-all so that inlining will work. Clang no longer
requires the Horrible Hack just to make exceptions flow correctly within
a function, however. The HH is an unfortunate requirement of LLVM's EH IR.
llvm-svn: 107631
have CGF create and make accessible standard int32, int64 and
intptr types. This fixes a ton of 80 column violations
introduced by LLVMContextification and cleans up stuff a lot.
llvm-svn: 106977
This works around a crash where malloc reused the memory of an erased BB for a
new BB, leaving old cleanup information pointing at the new block.
llvm-svn: 104472
return statements. We perform NRVO only when all of the return
statements in the function return the same variable. Fixes some link
failures in Boost.Interprocess (which is relying on NRVO), and
probably improves performance for some C++ applications.
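A compilable illustration of the condition (hypothetical; not the
Boost.Interprocess code):

struct Heavy { int data[128]; };

Heavy f(bool flip) {
  Heavy h;          // eligible for NRVO: every return below names 'h'
  h.data[0] = flip ? 1 : 2;
  if (flip)
    return h;
  return h;         // same variable in all return statements
}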
llvm-svn: 103867
input and output types when the smaller value isn't mentioned in the
asm string. Extend this support from integers to also allow
fp values to be mismatched (if not mentioned in the asm string).
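A sketch of the now-accepted pattern (illustrative; "x" is the SSE register
constraint):

double widen(float f) {
  double d;
  // The float input is tied to the double output via "0", and neither
  // operand is mentioned in the asm text, so the size mismatch is allowed.
  asm("" : "=x"(d) : "0"(f));
  return d;
}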
llvm-svn: 102188
(if there's a current block). The chief advantage of doing this is that it
lets us pick blocks (e.g. EH blocks) to push to the end of the function so
that fallthrough happens consistently --- i.e. it gives us the flexibility
of ordering blocks as we please without having to change the order in which
we generate code. There are standard (?) optimization passes which can do some
of that for us, but better to generate reasonable code to begin with.
llvm-svn: 101997
have the code generator slap srcloc metadata on inline asm nodes.
This allows us to diagnose invalid inline asms with such nice
diagnostics as:
<inline asm>:1:2: error: unrecognized instruction
 abc incl %eax
 ^
asm.c:2:12: note: generated from here
  __asm__ ("abc incl %0" : "+r" (X));
           ^
2 diagnostics generated.
llvm-svn: 100608