Create a ConstantAggregateZero up front if we see that it is viable.
This saves us from having to manually push_back each and every
initializer and then loop back over them to determine whether they are
'null'.
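A minimal sketch of the pattern (hypothetical helper, not the actual clang code):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/DerivedTypes.h"

  // Hedged sketch: if we already know every element is a zero initializer,
  // emit a ConstantAggregateZero directly instead of building the element
  // list and re-scanning it for nulls.
  static llvm::Constant *buildArrayInit(llvm::ArrayType *ATy,
                                        llvm::ArrayRef<llvm::Constant *> Elts,
                                        bool AllElementsAreNull) {
    if (AllElementsAreNull)
      return llvm::ConstantAggregateZero::get(ATy);
    return llvm::ConstantArray::get(ATy, Elts);
  }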
llvm-svn: 224908
Fixes this snippet from SLi's afl fuzzer output:
class {
i (x = <, enum
This parsed i as a function, x as a parameter, and the stuff after < as a
template list. This then called TryConsumeDeclarationSpecifier() which
called TryAnnotateCXXScopeToken() without checking the preconditions of
this function. Check them before calling, like all other callers of
TryAnnotateCXXScopeToken() do.
A more readable reproducer that causes the same crash is
class {
void i(int x = MyTemplateClass<int, union int>::foo());
};
The reduced version used an eof token as the surprising token, but kw_int works
just as well to reproduce the crash and is easier to insert into a test file.
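Roughly, the guard looks like this (a sketch in Parser context, not the exact
in-tree diff):

  // Only try to annotate a C++ scope when the current token can actually
  // begin one, mirroring the assertion inside TryAnnotateCXXScopeToken().
  if (getLangOpts().CPlusPlus &&
      (Tok.is(tok::identifier) || Tok.is(tok::coloncolon) ||
       Tok.is(tok::kw_decltype) || Tok.is(tok::annot_template_id)))
    TryAnnotateCXXScopeToken();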
llvm-svn: 224906
isDeclarationSpecifier performs error recovery, which jostles the token
stream. Specifically, TryAnnotateTypeOrScopeToken will end up consuming
a typename token, which will confuse the attribute parsing machinery as
we no longer have something identifier-like.
llvm-svn: 224903
In the else case, ResultReg was not checked for validity.
To my surprise, this case was not hit in any of the
existing test cases. This commit includes a new test case
that tests this path.
Also drop the `target triple` declaration from the
original test as suggested by H.J. Lu, because
apparently with it the test won't be run on Linux.
llvm-svn: 224901
If the control flow is modelling an if-statement where the only instruction in
the 'then' basic block (excluding the terminator) is a call to cttz/ctlz,
CodeGenPrepare can try to speculate the cttz/ctlz call and simplify the control
flow graph.
Example:
\code
entry:
%cmp = icmp eq i64 %val, 0
br i1 %cmp, label %end.bb, label %then.bb
then.bb:
%c = tail call i64 @llvm.cttz.i64(i64 %val, i1 true)
br label %end.bb
end.bb:
%cond = phi i64 [ %c, %then.bb ], [ 64, %entry]
\endcode
In this example, basic block %then.bb is taken if value %val is not zero.
Also, the phi node in %end.bb would propagate the size in bits of %val (64)
only if %val is equal to zero.
With this patch, CodeGenPrepare will try to hoist the call to cttz from %then.bb
into basic block %entry only if cttz is cheap to speculate for the target.
Added two new hooks in TargetLowering.h to let targets customize the behavior
(i.e. decide whether it is cheap or not to speculate calls to cttz/ctlz). The
two new methods are 'isCheapToSpeculateCtlz' and 'isCheapToSpeculateCttz'.
By default, both methods return 'false'.
On X86, method 'isCheapToSpeculateCtlz' returns true only if the target has
LZCNT. Method 'isCheapToSpeculateCttz' returns true only if the target has BMI.
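For illustration, the overrides look roughly like this (close to, but not
necessarily identical to, the in-tree X86 code):

  bool X86TargetLowering::isCheapToSpeculateCttz() const {
    // Without BMI there is no TZCNT; a speculated cttz would need a BSF
    // plus a zero check, so speculation is not worthwhile.
    return Subtarget->hasBMI();
  }

  bool X86TargetLowering::isCheapToSpeculateCtlz() const {
    // LZCNT is well-defined for a zero input, so speculation is cheap.
    return Subtarget->hasLZCNT();
  }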
Differential Revision: http://reviews.llvm.org/D6728
llvm-svn: 224899
We expected the type of a TagDecl to be a TagType, not an
InjectedClassNameType. Introduced a helper method, Type::getAsTagDecl,
to abstract away the difference; redefine Type::getAsCXXRecordDecl
in terms of it.
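A hedged usage sketch (hypothetical helper) of how the two relate:

  #include "clang/AST/Decl.h"
  #include "clang/AST/DeclCXX.h"
  #include "clang/AST/Type.h"
  #include "llvm/Support/Casting.h"

  // getAsTagDecl() hands back the TagDecl behind either a plain TagType or
  // an InjectedClassNameType; getAsCXXRecordDecl() then only needs to check
  // the dynamic type of that decl.
  static const clang::CXXRecordDecl *asRecord(clang::QualType QT) {
    if (const clang::TagDecl *TD = QT->getAsTagDecl())
      return llvm::dyn_cast<clang::CXXRecordDecl>(TD);
    return nullptr;
  }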
llvm-svn: 224898
Masked vector intrinsics are a part of common LLVM IR, but they are only
truly supported on AVX2 and AVX-512 targets. I added code that translates
the masked intrinsics for all other targets: each masked vector intrinsic
is converted to a chain of scalar operations inside conditional basic
blocks.
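A scalar-level illustration of the semantics the new lowering implements for
a masked load (plain C++ loop, not the actual IR expansion):

  // Each lane is loaded only when its mask bit is set; otherwise the
  // pass-through value is kept. In the generated IR each lane's load sits
  // in its own conditional basic block.
  void maskedLoadScalarized(const int *Ptr, const bool *Mask,
                            const int *PassThru, int *Result,
                            unsigned Lanes) {
    for (unsigned i = 0; i != Lanes; ++i)
      Result[i] = Mask[i] ? Ptr[i] : PassThru[i];
  }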
http://reviews.llvm.org/D6436
llvm-svn: 224897
We'd let annotation tokens from '#pragma pack' and the like get inside a
function-like macro. This would lead to terror and mayhem; stop the
madness early.
This fixes PR22037.
llvm-svn: 224896
I'd be interested in whether folks agree with the paragraph on Parse not
knowing much about the AST. I think this used to be true after rjmccall removed
the Action interface in r112244 and I believe it's still true, but I'm not sure.
(For example, ParseOpenMP.cpp does include AST/StmtOpenMP.h. Other than that,
Parse not using AST nodes much seems to be still true, though.)
llvm-svn: 224894
This fixes PR21587: what r221933 fixed for regular programs is now also
fixed for decls coming from PCH files.
Use another bit from the count/bits uint16_t for storing the "more than one
decl" bit. This reduces the number of bits for the count from 14 to 13.
The selector with the most overloads in Cocoa.h has ~55 overloads, so 13 bits
should still be plenty. Since this changes the meaning of a serialized bit
pattern, also increase clang::serialization::VERSION_MAJOR.
Storing the "more than one decl" state of only the first overload isn't quite
correct, but Sema::AreMultipleMethodsInGlobalPool() currently only looks at
the state of the first overload so it's good enough for now.
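A hedged sketch of the resulting 16-bit layout (field names are illustrative,
not the actual serialization code):

  #include <cstdint>

  struct SerializedMethodPoolBits {
    uint16_t Bits : 2;                // pre-existing flag bits
    uint16_t HasMoreThanOneDecl : 1;  // newly carved out of the count
    uint16_t NumDecls : 13;           // down from 14 bits before this change
  };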
llvm-svn: 224892
Determining the address of a TLS variable results in a function call in
certain TLS models. This means that a simple ICmpInst might actually
result in invalidating the CTR register.
In such cases, do not attempt to rely on the CTR register for loop
optimization purposes.
This fixes PR22034.
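For example (hypothetical code), a loop like the following can hide a call:
under e.g. the general-dynamic TLS model, materializing the address of the
thread-local variable lowers to a call (__tls_get_addr on PPC64 ELF), which
clobbers CTR:

  thread_local int ThreadLimit;  // address may require a runtime call

  int countBelowLimit(const int *Vals, int N) {
    int Count = 0;
    for (int i = 0; i < N; ++i)
      if (Vals[i] < ThreadLimit)  // seemingly simple compare, but the TLS
        ++Count;                  // access may involve __tls_get_addr
    return Count;
  }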
Differential Revision: http://reviews.llvm.org/D6786
llvm-svn: 224890
Summary:
Consider the following IR:
%3 = load i8* undef
%4 = trunc i8 %3 to i1
%5 = call %jl_value_t.0* @foo(..., i1 %4, ...)
ret %jl_value_t.0* %5
Bools (that are the result of direct truncs) are lowered as whatever
the argument to the trunc was, plus an "and 1", causing the part of the
MBB responsible for this argument to look something like this:
%vreg8<def,tied1> = AND8ri %vreg7<kill,tied0>, 1, %EFLAGS<imp-def>; GR8:%vreg8,%vreg7
Later, when the load is lowered, it will insert
%vreg15<def> = MOV8rm %vreg14, 1, %noreg, 0, %noreg; mem:LD1[undef] GR8:%vreg15 GR64:%vreg14
but remember to (at the end of isel) replace vreg7 by vreg15. Now for
the bug. In fast isel lowering, we mistakenly mark vreg8 as the result
of the load instead of the trunc. This adds a fixup to have
vreg8 replaced by whatever the result of the load is as well, so
we end up with
%vreg15<def,tied1> = AND8ri %vreg15<kill,tied0>, 1, %EFLAGS<imp-def>; GR8:%vreg15
which is an SSA violation and causes problems later down the road.
This fixes PR21557.
Test Plan: The test case from PR21557 is added to the test suite.
Reviewers: ributzka
Reviewed By: ributzka
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6245
llvm-svn: 224884
This still lowers to the same intrinsics as before.
This is preparation for bounds checking the immediate on the AVX version of
the builtin so we don't pass illegal immediates into the backend. Since SSE
uses a smaller immediate, it's not possible to bounds check when using a
shared builtin. Rather than creating a clang-specific builtin for the
different immediate, I decided (after consulting with Chandler) that it was
better to match gcc.
llvm-svn: 224879
Remove ObjCMethodList::Count; instead, store a "has more than one decl" bit in
the low bit of the ObjCMethodDecl pointer, using a PointerIntPair.
Most of this patch is replacing ".Method" with ".getMethod()".
No intended behavior change.
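A hedged sketch of the new storage scheme (member names are illustrative):

  #include "clang/AST/DeclObjC.h"
  #include "llvm/ADT/PointerIntPair.h"

  // The ObjCMethodDecl pointer and the "has more than one decl" flag share
  // a single pointer-sized field; the old Count member is gone.
  struct MethodListEntrySketch {
    llvm::PointerIntPair<clang::ObjCMethodDecl *, 1, bool> MethodAndFlag;

    clang::ObjCMethodDecl *getMethod() const { return MethodAndFlag.getPointer(); }
    bool hasMoreThanOneDecl() const { return MethodAndFlag.getInt(); }
  };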
llvm-svn: 224876