A second instance of attributed types escaped the previous change, identified
thanks to Richard Smith! When deducing the void case, we would also assume that
the type would not be attributed. Furthermore, properly handle multiple
attributes being applied to a single TypeLoc.
Properly handle this case and future-proof a bit by further ignoring
parentheses. The test cases do use the additional parentheses to ensure that this
case remains properly handled.
Addresses post-commit review comments from Richard Smith to SVN r219851.
llvm-svn: 219974
Specifically, avoid typo-correcting the variable name into a type before
typo-correcting the actual type name in the declaration. Doing so
results in a very unpleasant cascade of errors, with the typo correction
of the actual type name being buried in the middle.
llvm-svn: 219732
and !=) to support mixed complex and real operand types.
This requires removing an assert from SemaChecking, and adding support
both to the constant evaluator and the code generator to synthesize the
imaginary part when needed. This seemed somewhat cleaner than having
just the comparison operators force real-to-complex conversions.
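For illustration, a minimal sketch of the kind of mixed-operand comparison this
enables (a hypothetical example, not taken from the added tests):
int same(double _Complex z, double r) {
  // The real operand's imaginary part is synthesized as zero, so this is
  // equivalent to (__real__ z == r) && (__imag__ z == 0.0) rather than
  // forcing a full real-to-complex conversion.
  return z == r;
}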
I've added test cases for these operations. I'm really terrified that
there were *no* tests in-tree which exercised this.
This turned up when trying to build R after my change to the complex
type lowering.
llvm-svn: 219570
operators where one type is a C complex type, and to emit both the
efficient and correct implementation for complex arithmetic according to
C11 Annex G using this extra information.
For both multiply and divide the old code was writing a long-hand
reduced version of the math without any of the special handling of inf
and NaN recommended by the standard here. Instead of putting more
complexity here, this change does what GCC does which is to emit
a libcall for the fully general case.
However, the old code also failed to do the proper minimization of the
set of operations when there was a mixed complex and real operation. In
those cases, C provides a spec for much more minimal operations that are
valid. Clang now emits the exact suggested operations. This change isn't
*just* about performance though, without minimizing these operations, we
again lose the correct handling of infinities and NaNs. It is critical
that this happen in the frontend based on asymmetric type operands to
complex math operations.
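As a rough sketch of what that minimization looks like at the source level (an
assumed C-level analogy with a hypothetical function name, not the actual
generated code), multiplying a complex value by a real scalar needs only two
real multiplies and never introduces 0.0 * inf terms:
double _Complex scale(double _Complex z, double c) {
  double _Complex r = z;
  __real__ r = __real__ z * c;  // only two real multiplies in total,
  __imag__ r = __imag__ z * c;  // so infinities and NaNs propagate correctly
  return r;
}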
The performance implications of this change aren't trivial either. I've
run a set of benchmarks in Eigen, an open source mathematics library
that makes heavy use of complex. While a few have slowed down due to the
libcall being introduced, most sped up and some by a huge amount: up to
100% and 140%.
In order to make all of this work, also match the algorithm in the
constant evaluator to the one in the runtime library. Previously it was
a broken port of the simplifications from C's Annex G to the long-hand
formulation of the algorithm.
Splitting this patch up is very hard because none of this works without
the AST change to preserve non-complex operands. Sorry for the enormous
change.
Follow-up changes will include support for sinking the libcalls onto
cold paths in common cases and fastmath improvements to allow more
aggressive backend folding.
Differential Revision: http://reviews.llvm.org/D5698
llvm-svn: 219557
Summary: This fixes PR21235.
Test Plan: Includes an automated test.
Reviewers: hansw
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D5718
llvm-svn: 219551
Assertion failed: "Computed __func__ length differs from type!"
Reworked the PredefinedExpr representation to carry an internal StringLiteral field for the function declaration.
Differential Revision: http://reviews.llvm.org/D5365
llvm-svn: 219393
Windows TLS relies on indexing through a tls_index in order to get at
the DLL's thread local variables. However, this index is not exported
along with the variable: it is assumed that all accesses to thread local
variables are inside the same module which created the variable in the
first place.
While there are several implementation techniques we could adopt to fix
this (notably, the Itanium ABI gets this for free), it is not worth the
heroics.
Instead, let's just ban this combination. We could revisit this in the
future if we need to.
This fixes PR21111.
llvm-svn: 219049
Richard noted in the review of r217349 that extra handling of
__builtin_assume_aligned inside of the expression evaluator was needed. He was
right, and this should address the concerns raised, namely:
1. The offset argument to __builtin_assume_aligned can have side effects, and
we need to make sure that all arguments are properly evaluated.
2. If the alignment assumption does not hold, that introduces undefined
behavior, and undefined behavior cannot appear inside a constexpr.
and hopefully the diagnostics produced are detailed enough to explain what is
going on.
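A hedged illustration of concern (1), using hypothetical names:
void g() {
  int i = 0;
  char buf[64];
  // The offset argument has a side effect; it must still be fully evaluated
  // even though the builtin only "assumes" alignment.
  void *p = __builtin_assume_aligned(buf, 16, i++);
  (void)p;
  // For concern (2): if constant evaluation can prove the assumption is
  // violated, that is undefined behavior, so the expression is rejected in
  // constexpr contexts.
}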
llvm-svn: 218992
This CL has caused bootstrap failures on Linux and OSX buildbots running with -Werror.
Example report from http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux/builds/13183/steps/bootstrap%20clang/logs/stdio:
================================================================
[ 91%] Building CXX object tools/clang/tools/diagtool/CMakeFiles/diagtool.dir/ShowEnabledWarnings.cpp.o
In file included from /home/dtoolsbot/build/sanitizer-x86_64-linux/build/llvm/lib/Target/R600/AMDGPUISelDAGToDAG.cpp:20:
In file included from /home/dtoolsbot/build/sanitizer-x86_64-linux/build/llvm/lib/Target/R600/SIISelLowering.h:19:
/home/dtoolsbot/build/sanitizer-x86_64-linux/build/llvm/lib/Target/R600/SIInstrInfo.h:71:8: error: 'getLdStBaseRegImmOfs' overrides a member function but is not marked 'override' [-Werror,-Winconsistent-missing-override]
bool getLdStBaseRegImmOfs(MachineInstr *LdSt,
^
/home/dtoolsbot/build/sanitizer-x86_64-linux/build/llvm/include/llvm/Target/TargetInstrInfo.h:815:16: note: overridden virtual function is here
virtual bool getLdStBaseRegImmOfs(MachineInstr *LdSt,
^
================================================================
llvm-svn: 218969
for an overriding method if the class has at least one
'override' specified on one of its methods.
Reviewed by Doug Gregor. rdar://18295240
(I have already checked in fixes adding the missing 'override' to the
affected llvm files, and Bob Wilson has fixed the TableGen-generated FastISel
code, so no warnings are expected from a build of llvm after this patch.
I have already verified this.)
llvm-svn: 218925
This adds support for the align_value attribute. This attribute is supported by
Intel's compiler (versions 14.0+), and several of my HPC users have requested
support in Clang. It specifies an alignment assumption on the values to which a
pointer points, and is used by numerical libraries to encourage efficient
generation of vector code.
Of course, we already have an aligned attribute that can specify enhanced
alignment for a type, so why is this additional attribute important? The
problem is that if you want to specify that an input array of T is, say,
64-byte aligned, you could try this:
typedef double aligned_double __attribute__((aligned(64)));
void foo(aligned_double *P) {
  double x = P[0]; // This is fine.
  double y = P[1]; // What alignment did those doubles have again?
}
The access here to P[1] causes problems. P was specified as a pointer to type
aligned_double, and any object of type aligned_double must be 64-byte aligned.
But if P[0] is 64-byte aligned, then P[1] cannot be, and this access causes
undefined behavior. Getting round this problem requires a lot of awkward
casting and hand-unrolling of loops, all of which is bad.
With the align_value attribute, we can accomplish what we'd like in a
well-defined way:
typedef double *aligned_double_ptr __attribute__((align_value(64)));
void foo(aligned_double_ptr P) {
  double x = P[0]; // This is fine.
  double y = P[1]; // This is fine too.
}
This attribute does not create a new type (and so is not part of the type
system), and so will only "propagate" through templates, auto, etc. by
optimizer deduction after inlining. This seems consistent with Intel's
implementation (thanks to Alexey for confirming the various Intel-compiler
behaviors).
As a final note, I would have chosen to call this aligned_value, not
align_value, for better naming consistency with the aligned attribute, but I
think it would be more useful to users to adopt Intel's name.
llvm-svn: 218910
Clang warns (treated as error by default, but still ignored in system headers)
when passing non-POD arguments to variadic functions, and generates a trap
instruction to crash the program if that code is ever run.
Unfortunately, MSVC happily generates code for such calls without a warning,
and there is code in system headers that uses it.
This makes Clang not insert the trap instruction when in -fms-compatibility
mode, while still generating the warning/error message.
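A small hypothetical example of the pattern in question:
struct NonPOD { NonPOD(); ~NonPOD(); int v; };
void log_it(const char *fmt, ...);
void f(NonPOD np) {
  // Clang still warns (error by default) about the non-POD argument, but in
  // -fms-compatibility mode it no longer emits a trap for this call.
  log_it("%d", np);
}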
Differential Revision: http://reviews.llvm.org/D5492
llvm-svn: 218640
In addition to __builtin_assume_aligned, GCC also supports an assume_aligned
attribute which specifies the alignment (and optional offset) of a function's
return value. Here we implement support for the assume_aligned attribute by making
use of the @llvm.assume intrinsic.
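A minimal usage sketch (hypothetical function names):
// The return value is promised to be 32-byte aligned; Clang turns this into
// an @llvm.assume-based alignment assumption at call sites.
void *my_alloc(unsigned n) __attribute__((assume_aligned(32)));
// The optional second argument specifies an offset from that alignment.
void *my_alloc_off(unsigned n) __attribute__((assume_aligned(32, 8)));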
llvm-svn: 218500
we were failing to find that bit-field when performing integer promotions. This
brings us closer to following the standard, and closer to GCC.
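As a generic illustration of the promotion rule involved (not the specific
case fixed here):
struct S { unsigned int b : 7; };
int g(S s) {
  // The bit-field is narrower than int, so under the standard's integer
  // promotion rules it promotes to int before the addition.
  return s.b + 1;
}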
In C, this change is technically a regression: we get bit-field promotions
completely wrong in C, promoting cases that are categorically not bit-field
designators. This change makes us do so slightly more consistently, though.
llvm-svn: 218428
A record which contains a flexible array member is itself a flexible
array member. A struct which contains such a record should also
consider itself to be a flexible array member.
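An illustrative nesting (hypothetical types, relying on the compiler extension
that permits a record with a flexible array member to appear as a field):
struct Inner {
  int count;
  char data[];      // flexible array member
};
// Because Inner ends in a flexible array member, Outer effectively does too,
// so it should consider itself a flexible array member as well.
struct Outer {
  int tag;
  struct Inner in;
};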
llvm-svn: 218378
lists. Since the fields are initialized one at a time, using a field with a
lower index to initialize a higher-indexed field should not be warned on.
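A small hypothetical case of what should not warn:
struct S { int a; int b; };
// The fields are initialized in order, so s.a is already initialized by the
// time s.b's initializer reads it; -Wuninitialized should stay quiet here.
S s = { 1, s.a + 1 };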
llvm-svn: 218339
that function, and apart from being slow, this is unnecessary: ADL can trigger
instantiations that are not permitted here. The standard isn't *completely*
clear here, but this seems like the intent, and in any case this approach is
permitted by [temp.inst]p7.
llvm-svn: 218330
warns when a guarded variable is passed by reference as a function argument.
This is released as a separate warning flag, because it could potentially
break existing code that uses thread safety analysis.
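A hedged sketch of the new diagnostic using the thread safety annotations
(hypothetical names):
class __attribute__((lockable)) Mutex {
public:
  void Lock() __attribute__((exclusive_lock_function()));
  void Unlock() __attribute__((unlock_function()));
};

Mutex mu;
int data __attribute__((guarded_by(mu)));

void takes_ref(int &x);

void f() {
  takes_ref(data);  // new warning: guarded variable passed by reference
                    // without holding mu
}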
llvm-svn: 218087
The reasoning is that this construct is accepted by all compilers and valid in
C++11, so it doesn't seem like a useful warning to have enabled by default.
Building with -pedantic, -Wbind-to-temporary-copy, or -Wc++98-compat still
shows the warning.
The motivation is that I built re2, and this was the only warning that was
emitted during the build. Both changing re2 to fix the warning and detecting
clang and suppressing the warning in re2's build seem inferior to just giving
the compiler a good default for this warning.
Also move the cxx98compat version of this warning to CXX98CompatPedantic, and
update tests accordingly.
llvm-svn: 218008
We would end up marking the vtable of the derived class as used for no
reason. Because the call itself is qualified, it is never virtual, and
the vtable of the derived class isn't helpful. We would end up rejecting
code that MSVC accepts for no benefit.
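Illustrative case (hypothetical classes):
struct Base { virtual void f(); };
struct Derived : Base { void f() override; };
void g(Derived *d) {
  // The qualified call is never virtual, so Derived's vtable should not be
  // marked used (and its emission should not be required) because of it.
  d->Base::f();
}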
See http://crbug.com/413478
llvm-svn: 217910
constexpr function. Part of this fix is a tentative fix for an as-yet-unfiled
core issue (we're missing a prohibition against reading mutable members from
unions via a trivial constructor/assignment, since that doesn't perform an
lvalue-to-rvalue conversion on the members).
llvm-svn: 217852
In Parser::ParseCXXClassMemberDeclaration(), it was possible to change
isAccessDecl = NextToken().is(tok::kw_operator);
to
isAccessDecl = false;
and no tests would fail. Now there's coverage for this.
llvm-svn: 217519
This makes use of the recently-added @llvm.assume intrinsic to implement a
__builtin_assume(bool) intrinsic (to provide additional information to the
optimizer). This hooks up __assume in MS-compatibility mode to mirror
__builtin_assume (the semantics have been intentionally kept compatible), and
implements GCC's __builtin_assume_aligned as assume((p - o) & mask == 0). LLVM
now contains special logic to deal with assumptions of this form.
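A minimal usage sketch (illustrative values):
int use(int *p, int n) {
  // A hint to the optimizer; generates no code of its own.
  __builtin_assume(n > 0);
  // Under -fms-compatibility, __assume(n > 0) maps to the same behavior.
  int *q = (int *)__builtin_assume_aligned(p, 64);  // q assumed 64-byte aligned
  return q[0] + n;
}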
llvm-svn: 217349
The warning warns on TypedefNameDecls -- typedefs and C++11 using aliases --
that are !isReferenced(). Since the isReferenced() bit on TypedefNameDecls
wasn't used for anything before this warning it wasn't always set correctly,
so this patch also adds a few missing MarkAnyDeclReferenced() calls in
various places for TypedefNameDecls.
This is made a bit complicated due to local typedefs possibly being used only
after their local scope has closed. Consider:
template <class T>
void template_fun(T t) {
  typename T::Foo s3foo; // YYY
  (void)s3foo;
}
void template_fun_user() {
  struct Local {
    typedef int Foo; // XXX
  } p;
  template_fun(p);
}
Here the typedef in XXX is only used at end-of-translation unit, when YYY in
template_fun() gets instantiated. To handle this, typedefs that are unused when
their scope exits are added to a set of potentially unused typedefs, and that
set gets checked at end-of-TU. Typedefs that are still unused at that point then
get warned on. There's also serialization code for this set, so that the
warning works with precompiled headers and modules. For modules, the warning
is emitted when the module is built, for precompiled headers each time the
header gets used.
Finally, consider a function using C++14 auto return types to return a local
type defined in a header:
auto f() {
  struct S { typedef int a; };
  return S();
}
Here, the typedef escapes its local scope and could be used by only some
translation units including the header. To not warn on this, add a
RecursiveASTVisitor that marks all decls on local types returned from auto
functions as referenced. (Except if it's a function with internal linkage, or
the decls are private and the local type has no friends -- in these cases, it
_is_ safe to warn.)
Several of the included testcases (most of the interesting ones) were provided
by Richard Smith.
(gcc's spelling -Wunused-local-typedefs is supported as an alias for this
warning.)
llvm-svn: 217298
"protected scope" is very unhelpful here and actively confuses users. Instead,
simply state the nature of the problem in the diagnostic: we cannot jump from
here to there. The notes explain nicely why not.
llvm-svn: 217293
Originally, self reference checking made a double pass over some expressions
to handle reference type checking. Now, allow HandleValue to also check
reference types, and fall back to Visit for unhandled expressions.
llvm-svn: 217203
before retrying the initialization to produce diagnostics. Otherwise, we may
fail to produce any diagnostics, and silently produce invalid AST in a -Asserts
build. Also add a note to this codepath to make it more clear why we were
trying to create a temporary.
llvm-svn: 217197
Scoped lockable objects (mutex guards) are implemented as if the guard were
itself a lock that is acquired upon construction and released upon
destruction. As it of course needs to be used to actually lock down
something else (a mutex), it keeps track of this knowledge through its
underlying mutex field in its FactEntry.
The problem with this approach is that this only allows us to lock down
a single mutex, so extend the code to use a vector of underlying
mutexes. This, however, makes the code a bit more complex than
necessary, so subclass FactEntry into LockableFactEntry and
ScopedLockableFactEntry and move all the logic that differs between
regular locks and scoped lockables into member functions.
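A hedged sketch of the kind of scoped lockable this lets the analysis model
(hypothetical guard class):
class __attribute__((lockable)) Mutex {
public:
  void Lock() __attribute__((exclusive_lock_function()));
  void Unlock() __attribute__((unlock_function()));
};

// A guard that acquires two mutexes at once; its ScopedLockableFactEntry now
// records both underlying mutexes instead of just one.
class __attribute__((scoped_lockable)) DualGuard {
public:
  DualGuard(Mutex &a, Mutex &b)
      __attribute__((exclusive_lock_function(a, b)));
  ~DualGuard() __attribute__((unlock_function()));
};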
llvm-svn: 217016