calculating it recursively.
boost::assign::tuple_list_of uses the trick of chaining call operator expressions in order to declare a "list of tuples", e.g.:
std::vector<tuple> v = boost::assign::tuple_list_of(1, "foo")(2, "bar")(3, "qqq");
Because CXXOperatorCallExpr computed its source range recursively, a large
number of chained call operator expressions caused significant slowdowns and
could even overflow the stack.
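A minimal sketch of the pattern (with a hypothetical Chain type, not the boost
implementation): each call to operator() wraps the previous CXXOperatorCallExpr,
so N chained calls produce an AST nested N levels deep, and computing the
outermost source range recursively had to walk every level.

  struct Chain {
    Chain &operator()(int, const char *) { return *this; }
  };

  void build() {
    Chain list;
    // Three chained calls -> three nested CXXOperatorCallExprs; boost-style
    // code can chain hundreds of them.
    list(1, "foo")(2, "bar")(3, "qqq");
  }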
rdar://11350116
llvm-svn: 155848
This was exposed by SingleSource/UnitTests/Vector/constpool.c.
The computed size of a basic block isn't always a multiple of its known
alignment, and that can introduce extra alignment padding after the
block.
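A worked sketch of the arithmetic (the alignTo helper here is illustrative, not
the code touched by this commit): padding appears whenever a block's computed
size ends short of the next alignment boundary.

  #include <cstdint>

  // Round Offset up to the next multiple of Align (a power of two).
  uint64_t alignTo(uint64_t Offset, uint64_t Align) {
    return (Offset + Align - 1) & ~(Align - 1);
  }

  // A block whose contents end at offset 10, followed by data that needs
  // 8-byte alignment, is padded out to offset 16: alignTo(10, 8) == 16,
  // i.e. 6 bytes of padding after the block.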
<rdar://problem/11347135>
llvm-svn: 155845
Thanks to "Gabor Greif" <ggreif@gmail.com> for reporting this problem.
The configure flag should be --with-default-sysroot as documented, and
not --with-sysroot. The reason we don't want to define --with-sysroot
is that GCC has a configure flag by that name and it has a different
semantics.
llvm-svn: 155844
Apparently we weren't checking default arguments when they were instantiated.
This adds the check, fixes the lack of instantiation caching (which seems like
it was mostly implemented but just missed the last step), and avoids
implementing non-dependent default args (for non-dependent parameter types) as
uninstantiated default arguments (so that we don't warn once for every
instantiation when it's not instantiation dependent).
Reviewed by Richard Smith.
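An illustrative sketch (hypothetical code, not a test from this commit) of the
case the last point addresses: a default argument whose parameter type does not
depend on the template parameter is not instantiation dependent, so it should
be diagnosed once rather than once per instantiation.

  template <typename T>
  struct Widget {
    // 'int' is non-dependent, so a diagnostic in this default argument is
    // not instantiation dependent.
    void resize(int n = 1 << 40) { (void)n; }  // shift count exceeds int's width
  };

  void use() {
    Widget<int>().resize();
    Widget<float>().resize();  // should not repeat the warning per instantiation
  }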
llvm-svn: 155838
On x86-32, structure return via sret lets the callee pop the hidden
pointer argument off the stack, which the caller then re-pushes.
However if the calling convention is fastcc, then a register is used
instead, and the caller should not adjust the stack. This is
implemented with a check of IsTailCallConvention in
X86TargetLowering::LowerCall, and is now also checked properly in
X86FastISel::DoSelectCall.
(this time, actually commit what was reviewed!)
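For reference, a hedged C++-level sketch of the kind of call this affects (the
convention handling itself lives in the backend, not in source code):

  // A struct too large for registers is returned through a hidden sret
  // pointer on x86-32.
  struct Big { int a, b, c, d; };

  Big makeBig() { Big B = {1, 2, 3, 4}; return B; }

  int caller() {
    // Under the default convention the callee pops the hidden pointer and
    // the caller re-pushes it; under fastcc the pointer is passed in a
    // register, so the caller must not adjust the stack afterwards.
    return makeBig().a;
  }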
llvm-svn: 155825
ARM BUILD_VECTORs created after type legalization cannot use i8 or i16
operands, since those types are not legal. Instead use i32 operands, which
will be implicitly truncated by the BUILD_VECTOR to match the element type.
llvm-svn: 155824
relocations are resolved. It's much more reasonable to make this decision when
relocations are being added - we have all the information at that point.
Also a bit of renaming and extra comments to clarify extensions.
llvm-svn: 155819
Allow the "SplitCriticalEdge" function to split an edge leading to a landing pad.
If a pass is *sure* that it knows what it's doing, it may specify that the
landing pad can have its critical edge split. The loop unswitch pass is one of
these passes. It splits the critical edges of all edges going from within a
loop to a landing pad outside the loop. Doing so retains important loop
analysis information, such as the loop-simplify form.
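An illustrative source-level shape (hypothetical example, not from the commit)
that produces such edges: calls inside a loop that may throw create unwind
edges from inside the loop to a landing pad outside it.

  void body(int i) { if (i == 3) throw i; }  // may throw

  void run(int n) {
    try {
      for (int i = 0; i < n; ++i)
        body(i);  // the call's unwind edge leaves the loop...
    } catch (...) {
      // ...and targets a landing pad that is not within the loop.
    }
  }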
llvm-svn: 155817
- Add comments
- Change field names to be more reasonable
- Fix indentation and naming to conform to coding conventions
- Remove unnecessary includes, or replace them with forward declarations
llvm-svn: 155815
filter_decl_iterator had a weird mismatch where both op* and op-> returned T*,
making it difficult to generalize this filtering behavior into a reusable
library of any kind.
This change errs on the side of value, making op-> return T* and op* return
T&.
(reviewed by Richard Smith)
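A hedged sketch of the convention being adopted (not the actual
filter_decl_iterator definition):

  template <typename T>
  struct ValueIterator {
    T *Ptr;
    T &operator*() const { return *Ptr; }   // value-like: dereference yields T&
    T *operator->() const { return Ptr; }   // member access still goes through T*
  };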
llvm-svn: 155808
g++ 4.7, which reuses stack space allocated for temporaries. CFGElement::getAs
returns a suitably-cast version of 'this'. Patch by Markus Trippelsdorf!
No test: this code has the same observable behavior as the old code when built
with most compilers, and the tests were already failing when built with a
compiler for which this produced a broken binary.
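An illustrative sketch of the hazard (hypothetical types, not the CFG classes
themselves): keeping the pointer obtained from a temporary past the end of the
full-expression leaves it dangling, and a compiler that reuses the temporary's
stack slot, such as g++ 4.7, will clobber it.

  struct Element {
    int Kind;
    const Element *getAs() const { return this; }  // returns a cast of 'this'
  };

  Element makeElement() { Element E = {0}; return E; }

  void use() {
    // makeElement() is a temporary; E points into it and dangles as soon as
    // the full-expression ends. A compiler that reuses the temporary's stack
    // slot will clobber what E points at.
    const Element *E = makeElement().getAs();
    (void)E;  // dereferencing E here would be undefined behavior
  }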
llvm-svn: 155803
victim. Don't crash if we have a delay-parsed exception specification for a
class member that is invalid in a way which precludes building a FunctionDecl.
llvm-svn: 155788
The builtin previously had a fixed i32 signature:
i32 __builtin_annotation(i32, string);
Applying it to an i64 (e.g., long long) generates the following IR:
trunc i64 {{.*}} to i32
call i32 @llvm.annotation.i32
zext i32 {{.*}} to i64
The redundant truncation and extension make the result difficult to use.
This patch makes __builtin_annotation() generic.
type __builtin_annotation(type, string);
For the i64 example, it simplifies the generated IR to:
call i64 @llvm.annotation.i64
Patch by Xi Wang!
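An illustrative usage sketch (not part of the patch): with the generic form, a
64-bit value no longer round-trips through i32.

  long long tag(long long x) {
    // At run time this returns x unchanged; the front end now emits a single
    // call to llvm.annotation.i64 instead of a trunc/annotation/zext sequence.
    return __builtin_annotation(x, "my.annotation");
  }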
llvm-svn: 155764