Assertion failed: "Computed __func__ length differs from type!"
Reworked the PredefinedExpr representation to carry an internal StringLiteral field for the function declaration.
Differential Revision: http://reviews.llvm.org/D5365
llvm-svn: 219393
Summary:
Previously CodeGen assumed that static locals were emitted before they
could be accessed, which is true for automatic storage duration locals.
However, it is possible to have CodeGen emit a nested function that uses
a static local before emitting the function that defines the static
local, breaking that assumption.
Fix it by creating the static local upon access and ensuring that the
deferred function body gets emitted. We may not be able to emit the
initializer properly from outside the function body, so don't try.
Fixes PR18020. See also previous attempts to fix static locals in
PR6769 and PR7101.
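For reference, a hypothetical reproducer sketch (all names invented) of the pattern described above: the lambda's call operator is a deferred function that uses the static local and can be emitted before the enclosing function itself.

  inline auto make_counter() {              // C++14 return type deduction
    static int counter = 0;                 // static local
    return [] { return ++counter; };        // nested function using the static local
  }
  int main() {
    auto next = make_counter();
    return next();                          // forces emission of the lambda body
  }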
Reviewers: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4787
llvm-svn: 219265
Summary:
This adds support for the C++11 feature, thread_local global variables.
The ABI Clang implements is an improvement of the MSVC ABI. Sadly,
further improvements could be made but not without sacrificing ABI
compatibility.
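As a quick, hedged illustration of the source-level feature (names are illustrative only), a thread_local variable with a dynamic initializer and a destructor is what goes through the machinery described below:

  #include <cstdio>
  struct Logger {
    Logger()  { std::puts("thread_local init"); }    // runs once per thread
    ~Logger() { std::puts("thread_local destroy"); } // destructor registered via __tlregdtor
  };
  thread_local Logger tls_logger;                    // non-weak thread_local global
  int main() { (void)&tls_logger; }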
The feature is implemented as follows:
- All thread_local initialization routines are pointed to from the
.CRT$XDU section.
- All non-weak thread_local variables have their initialization routines
called from a single function instead of getting their own .CRT$XDU
section entry. This is done to open up optimization opportunities to
the compiler.
- All weak thread_local variables have their own .CRT$XDU section entry.
This entry is in a COMDAT with the global variable it is initializing;
this ensures that we will initialize the global exactly once.
- Destructors are registered in the initialization function using
__tlregdtor.
Differential Revision: http://reviews.llvm.org/D5597
llvm-svn: 219074
Summary:
This patch implements a new UBSan check, which verifies
that function arguments declared to be nonnull with __attribute__((nonnull))
are actually nonnull at runtime.
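A hedged illustration of what the check catches (the exact -fsanitize= spelling for enabling it is not given here):

  __attribute__((nonnull(1)))
  int deref(int *p) { return *p; }   // parameter 1 is declared nonnull
  int main() {
    int *p = nullptr;
    return deref(p);                 // null argument: flagged at runtime by the new check
  }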
To implement this check, we pass FunctionDecl to CodeGenFunction::EmitCallArgs
(where applicable), and if the function declaration has a nonnull attribute specified
for a certain formal parameter, we compare the corresponding RValue to null as
soon as it's calculated.
Test Plan: regression test suite
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits, rnk
Differential Revision: http://reviews.llvm.org/D5082
llvm-svn: 217389
This makes use of the recently-added @llvm.assume intrinsic to implement a
__builtin_assume(bool) intrinsic (to provide additional information to the
optimizer). This hooks up __assume in MS-compatibility mode to mirror
__builtin_assume (the semantics have been intentionally kept compatible), and
implements GCC's __builtin_assume_aligned as assume(((p - o) & mask) == 0). LLVM
now contains special logic to deal with assumptions of this form.
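A brief usage sketch of the builtins involved (a hedged example, not taken from the patch's tests):

  int sum_first(int *p, int n) {
    __builtin_assume(n > 0);                                      // extra information for the optimizer
    int *q = static_cast<int *>(__builtin_assume_aligned(p, 64)); // p assumed 64-byte aligned
    // In MS-compatibility mode, __assume(n > 0) maps to the same mechanism.
    return q[0] + n;
  }
  int main() {
    alignas(64) int data[16] = {7};
    return sum_first(data, 16);
  }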
llvm-svn: 217349
Summary:
This is a first small step towards passing generic "Expr" instead of
ArgBeg/ArgEnd pair into EmitCallArgs() family of methods. Having "Expr" will
allow us to get the corresponding FunctionDecl and its ParmVarDecls,
thus allowing us to alter CodeGen depending on the function/parameter
attributes.
No functionality change.
Test Plan: regression test suite
Reviewers: rnk
Reviewed By: rnk
Subscribers: aemerson, cfe-commits
Differential Revision: http://reviews.llvm.org/D4915
llvm-svn: 216214
It is responsible for generating metadata consumed by sanitizer instrumentation
passes in the backend. Move several methods from CodeGenModule to SanitizerMetadata.
For now the class is stateless, but soon it won't be the case.
Instead of creating globals providing source-level information to ASan, we will create
metadata nodes/strings which will be turned into actual global variables in the
backend (if needed).
No functionality change.
llvm-svn: 214564
This broke the following gdb tests:
gdb.base__annota1.exp
gdb.base__consecutive.exp
gdb.python__py-symtab.exp
gdb.reverse__consecutive-precsave.exp
gdb.reverse__consecutive-reverse.exp
I will look into this.
This reverts commit 214162.
llvm-svn: 214163
This allows us to give more precise diagnostics.
Diego kindly tested the impact on debug info size: "The increase on average
debug sizes is 0.1%. The total file size increase is ~0%."
llvm-svn: 214162
Otherwise -fsanitize=vptr causes the program to crash when it downcasts
a null pointer.
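A hedged example of the affected pattern: the downcast of a null pointer below is valid C++ and must not make the instrumented program crash.

  struct Base { virtual ~Base() {} };
  struct Derived : Base { int extra = 0; };
  Derived *downcast(Base *b) { return static_cast<Derived *>(b); }
  int main() { return downcast(nullptr) != nullptr; }  // 0, with or without -fsanitize=vptr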
Reviewed in http://reviews.llvm.org/D4412.
Patch by Byoungyoung Lee!
llvm-svn: 213393
Summary:
This change adds descriptions of the globals created by UBSan
instrumentation (UBSan handlers, type descriptors, filenames) to the
llvm.asan.globals metadata, effectively "blacklisting" them. This can
dramatically decrease the size of the data section in binaries built with
UBSan+ASan, as UBSan tends to create a lot of handlers, and ASan
instrumentation increases each global's size to at least 64 bytes.
Test Plan: clang regression test suite
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits, byoungyoung, kcc
Differential Revision: http://reviews.llvm.org/D4575
llvm-svn: 213392
This is used to mark the instructions emitted by Clang to implement
a variety of UBSan checks. Generally, we don't want to instrument these
instructions with other sanitizers (like ASan).
Reviewed in http://reviews.llvm.org/D4544
llvm-svn: 213291
Teach UBSan vptr checker to ignore technically invalid down-casts on
blacklisted types.
Based on http://reviews.llvm.org/D4407 by Byoungyoung Lee!
llvm-svn: 212770
Now CodeGenFunction is responsible for looking at sanitizer blacklist
(in CodeGenFunction::StartFunction) and turning off instrumentation,
if necessary.
No functionality change.
llvm-svn: 212501
Summary:
This change generalizes the code used to create global LLVM
variables referencing predefined strings (e.g. __FUNCTION__): now it
just calls GetAddrOfConstantStringFromLiteral method. As a result,
global variables for these predefined strings may get mangled names
and linkonce_odr linkage. Fix the test accordingly.
Test Plan: clang regression tests
Reviewers: majnemer
Reviewed By: majnemer
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4023
llvm-svn: 210284
This patch adds support for pointer types in global named registers variables.
It'll be lowered as a pair of read/write_register and inttoptr/ptrtoint calls.
Also adds some early checks on types in SemaDecl to avoid the assert.
Tests changed accordingly. (PR19837)
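A hedged sketch of the construct (the register name is target-specific; "sp" is an assumption suitable for ARM/AArch64):

  register char *stack_top asm("sp");               // global named register of pointer type
  char *current_stack_top() { return stack_top; }   // lowers to read_register + inttoptr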
llvm-svn: 210274
That small change, although it looked harmless, caused the LValue on the
PHI node to be emitted without the proper cast. Reverting it fixes PR19841.
llvm-svn: 209663
Enables the emission of MS-compatible RTTI data structures for use with
typeid, dynamic_cast and exceptions. Does not implement dynamic_cast
or exceptions. As an artifact, typeid works in some cases, but proper
support and testing will come in a subsequent patch.
majnemer has fuzzed the results. Test cases included.
Differential Revision: http://reviews.llvm.org/D3833
llvm-svn: 209523
This patch implements global named registers in Clang, lowering to the just
created intrinsics in LLVM (@llvm.read/write_register). A new type of LValue
had to be created (Register), which just adds support to carry the metadata
node containing the name of the register. Two new methods to emit loads and
stores interoperate with another to emit the named metadata node.
No guarantees are being made and only non-allocatable global variable named
registers are being supported. Local named register support is unchanged.
llvm-svn: 209149
Also tidy up, simplify, and extend the test coverage to demonstrate the
limitations. This test should now fail if the bugs are fixed (&
hopefully whoever ends up in this situation sees the FIXMEs and realizes
that the test needs to be updated to positively test their change that
has fixed some or all of these issues).
I do wonder whether I could demonstrate breakage without a macro here,
but any way I slice it I can't think of a way to get two calls to the
same function on the same line/column in non-macro C++ - implicit
conversions happen at the same location as an explicit function, but
you'd never get an implicit conversion on the result of an explicit call
to the same implicit conversion operator (since the value is already
converted to the desired result)...
llvm-svn: 208468
It is very similar to GCC's __PRETTY_FUNCTION__, except it prints the
calling convention.
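A hedged illustration (the identifier is assumed to be __FUNCSIG__ based on the description above; requires MS extensions, e.g. clang-cl or -fms-extensions):

  #include <cstdio>
  void __cdecl f() {
    std::puts(__FUNCSIG__);          // e.g. "void __cdecl f(void)" -- includes the calling convention
    std::puts(__PRETTY_FUNCTION__);  // e.g. "void f()"
  }
  int main() { f(); }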
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D3311
llvm-svn: 205780
The MS ABI requires that we determine the vbptr offset if we have a
virtual inheritance model. Instead, raise an error pointing to the
diagnostic when this happens.
This fixes PR18583.
Differential Revision: http://llvm-reviews.chandlerc.com/D2842
llvm-svn: 201824
Previously, we made one traversal of the AST prior to codegen to assign
counters to the ASTs and then propagated the count values during codegen. This
patch now adds a separate AST traversal prior to codegen for the
-fprofile-instr-use option to propagate the count values. The counts are then
saved in a map from which they can be retrieved during codegen.
This new approach has several advantages:
1. It gets rid of a lot of extra PGO-related code that had previously been
added to codegen.
2. It fixes a serious bug. My original implementation (which was mailed to the
list but never committed) used 3 counters for every loop. Justin improved it to
move 2 of those counters into the less-frequently executed breaks and continues,
but that turned out to produce wrong count values in some cases. The solution
requires visiting a loop body before the condition so that the count for the
condition properly includes the break and continue counts. Changing codegen to
visit a loop body first would be a fairly invasive change, but with a separate
AST traversal, it is easy to control the order of traversal. I've added a
testcase (provided by Justin) to make sure this works correctly.
3. It reduces the instrumentation overhead, cutting the number of counters for
a loop from 3 to 1. We no longer need dedicated counters for breaks and
continues, since we can just use the propagated count values when visiting
breaks and continues.
To make this work, I needed to make a change to the way we count case
statements, going back to my original approach of not including the fall-through
in the counter values. This was necessary because there isn't always an AST node
that can be used to record the fall-through count. Now case statements are
handled the same as default statements, with the fall-through paths branching
over the counter increments. While I was at it, I also went back to using this
approach for do-loops -- omitting the fall-through count into the loop body
simplifies some of the calculations and makes them behave the same as other
loops. Whenever we start using this instrumentation for coverage, we'll need
to add the fall-through counts into the counter values.
llvm-svn: 201528
A return type is the declared or deduced part of the function type specified in
the declaration.
A result type is the (potentially adjusted) type of the value of an expression
that calls the function.
Rule of thumb:
* Declarations have return types and parameters.
* Expressions have result types and arguments.
llvm-svn: 200082
adjustFallThroughCount isn't a good name, and the documentation was
even worse. This commit attempts to clarify what it's for and when to
use it.
llvm-svn: 199139
With the introduction of explicit address space casts into LLVM, there's
a need to provide a new cast kind the front-end can create for C/OpenCL/CUDA
and code to produce address space casts from those kinds when appropriate.
Patch by Michele Scandale!
llvm-svn: 197036
In OpenCL, a vector of 3 elements acts like a vector of four elements.
So for a vector of size 3, the '.hi' and '.odd' accessors would access
the elements {2, 3} and {1, 3}, respectively.
However, in EmitStoreThroughExtVectorComponentLValue we are still operating on
a vector of size 3, so we should only access {2} and {1}. We do this by checking
the last element to be accessed, and ignore it if it is out-of-bounds.
EmitLoadOfExtVectorElementLValue doesn't have a similar problem, because it does
a direct shufflevector with undef, so an out-of-bounds access just gives an undef
value.
Patch by Anastasia Stulova!
llvm-svn: 195367
Summary:
Similar to __FUNCTION__, MSVC exposes the mangled name of the enclosing
function via __FUNCDNAME__. This implementation is very naive and
unoptimized; it is expected that __FUNCDNAME__ would be used rarely in
practice.
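A short hedged example; the exact decoration is ABI-dependent and requires MS extensions:

  #include <cstdio>
  void f() { std::puts(__FUNCDNAME__); }  // e.g. "?f@@YAXXZ" under the MS ABI
  int main() { f(); }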
Reviewers: rnk, rsmith, thakis
CC: cfe-commits, silvas
Differential Revision: http://llvm-reviews.chandlerc.com/D2109
llvm-svn: 194181
deallocation function (and the corresponding unsized deallocation function has
been declared), emit a weak discardable definition of the function that
forwards to the corresponding unsized deallocation.
This allows a C++ standard library implementation to provide both a sized and
an unsized deallocation function, where the unsized one does not just call the
sized one, for instance by putting both in the same object file within an
archive.
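Conceptually (a hedged, hand-written analogy, not the code the compiler emits verbatim), the forwarding definition behaves like this weak sized operator delete:

  #include <cstddef>
  void operator delete(void *p, std::size_t) noexcept {
    ::operator delete(p);   // forward to the corresponding unsized deallocation
  }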
llvm-svn: 194055
check using the ubsan runtime) and -fsanitize=local-bounds (for the middle-end
check which inserts traps).
Remove -fsanitize=local-bounds from -fsanitize=undefined. It does not produce
useful diagnostics and has false positives (PR17635), and is not a good
compromise position between UBSan's checks and ASan's checks.
Map -fbounds-checking to -fsanitize=local-bounds to restore Clang's historical
behavior for that flag.
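A hedged example of what the frontend (ubsan-runtime) bounds check diagnoses at run time:

  int element(int i) {
    int a[4] = {0, 1, 2, 3};
    return a[i];             // i == 4 is out of bounds and gets flagged
  }
  int main() { return element(4); }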
llvm-svn: 193205
This uses function prefix data to store function type information at the
function pointer.
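A hedged illustration of the class of bug this targets: an indirect call through a function pointer whose type does not match the callee's actual type.

  void takes_int(int) {}
  int main() {
    using WrongFn = void (*)(long, long);
    WrongFn bad = reinterpret_cast<WrongFn>(&takes_int);  // mismatch vs. the recorded type
    bad(1L, 2L);   // undefined behavior; flagged at run time when the check is enabled
    return 0;
  }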
Differential Revision: http://llvm-reviews.chandlerc.com/D1338
llvm-svn: 193058
An updated version of r191586 with bug fix.
Struct-path aware TBAA generates tags to specify the access path,
while scalar TBAA only generates tags for scalar types.
We should not generate a TBAA tag with null being the first field. When
a TBAA type node is null, the tag should be null too. Make sure we
don't decorate an instruction with a null TBAA tag.
Added a testing case for the bug reported by Richard with -relaxed-aliasing
and -fsanitize=thread.
llvm-svn: 192145
The code in CGExpr was added back in 2012 (r165536) but not exercised in tests
until recently.
Detected on the MemorySanitizer bootstrap bot.
llvm-svn: 190521
This reverts commit r189320.
Alexey Samsonov and Dmitry Vyukov presented some arguments for keeping
these around - though it still seems like those tasks could be solved by
a tool just using the symbol table. In a very small number of cases,
thunks may be inlined & debug info might be able to save profilers &
similar tools from misclassifying those cases as part of the caller.
The extra changes here plumb through the VarDecl for various cases to
CodeGenFunction - this provides better fidelity through a few APIs but
generally just causes CGF::StartFunction to fall back to using the
name of the IR function as the name in the debug info.
The changes to debug-info-global-ctor-dtor.cpp seem like goodness. The
two names that go missing (in favor of only emitting those names as
linkage names) are names that can be demangled - emitting them only as
the linkage name should encourage tools to do just that.
Again, thanks to Dinesh Dwivedi for investigation/work on this issue.
llvm-svn: 189421
- __func__ or __FUNCTION__ returns the captured statement's parent
function name, not the compiler-generated one.
Differential Revision: http://llvm-reviews.chandlerc.com/D1491
Reviewed by bkramer
llvm-svn: 189219
1. We now print the return type of lambdas and return type deduced functions
as "auto". Trailing return types with decltype print the underlying type.
2. Use the lambda or block scope for the PredefinedExpr type instead of the
parent function. This fixes PR16946, a strange mismatch between the type of
the expression and the actual result (see the sketch after this list).
3. Verify the type in CodeGen.
4. The type for blocks is still wrong. They are numbered and the name is not
known until CodeGen.
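A hedged sketch of items 1 and 2 (the exact string is implementation-defined):

  #include <cstdio>
  int main() {
    auto l = [] { std::puts(__PRETTY_FUNCTION__); return 0; };  // describes the lambda's own
    return l();                                                 // operator(), printed with "auto"
  }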
llvm-svn: 188900
Summary:
We would crash in CodeGen::CodeGenModule::EmitUuidofInitializer
because our attempt to enter CodeGen::CodeGenModule::EmitConstantValue
will be foiled: the type of the constant value is incomplete.
Instead, create an unnamed type with the proper layout on all platforms.
Punt the problem of wrongly defined struct _GUID types to the user.
(Catching this ourselves is impossible because the TU may never get to see the
type, and thus we can't verify that it is suitable.)
This fixes PR16856.
Reviewers: rsmith, rnk, thakis
Reviewed By: rnk
CC: cfe-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1375
llvm-svn: 188481
Summary:
UBSan was checking for alignment of the derived class on the pointer to
the base class, before converting. With some class hierarchies, this could
generate false positives.
Added test-case.
llvm-svn: 187948
Restore it after each argument is emitted. This fixes the scope info for
inlined subroutines inside of function argument expressions. (E.g.,
anything STL).
rdar://problem/12592135
llvm-svn: 187240
This is the same way GenericSelectionExpr works, and it's generally a
more consistent approach.
A large part of this patch is devoted to caching the value of the condition
of a ChooseExpr; it's needed to avoid threading an ASTContext into
IgnoreParens().
Fixes <rdar://problem/14438917>.
llvm-svn: 186738
Introduce CXXStdInitializerListExpr node, representing the implicit
construction of a std::initializer_list<T> object from its underlying array.
The AST representation of such an expression goes from an InitListExpr with a
flag set, to a CXXStdInitializerListExpr containing a MaterializeTemporaryExpr
containing an InitListExpr (possibly wrapped in a CXXBindTemporaryExpr).
This more detailed representation has several advantages, the most important of
which is that the new MaterializeTemporaryExpr allows us to directly model
lifetime extension of the underlying temporary array. Using that, this patch
*drastically* simplifies the IR generation of this construct, provides IR
generation support for nested global initializer_list objects, fixes several
bugs where the destructors for the underlying array would accidentally not get
invoked, and provides constant expression evaluation support for
std::initializer_list objects.
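A hedged example of the affected constructs, including the nested global case mentioned above:

  #include <initializer_list>
  std::initializer_list<int> flat = {1, 2, 3};
  std::initializer_list<std::initializer_list<int>> nested = {{1, 2}, {3, 4}};
  int main() { return *flat.begin() + static_cast<int>(nested.begin()->size()); }  // 1 + 2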
llvm-svn: 183872
were lacking ExprWithCleanups nodes in some cases where the new approach to
lifetime extension needed them).
Original commit message:
Rework IR emission for lifetime-extended temporaries. Instead of trying to walk
into the expression and dig out a single lifetime-extended entity and manually
pull its cleanup outside the expression, instead keep a list of the cleanups
which we'll need to emit when we get to the end of the full-expression. Also
emit those cleanups early, as EH-only cleanups, to cover the case that the
full-expression does not terminate normally. This allows IR generation to
properly model temporary lifetime when multiple temporaries are extended by the
same declaration.
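A hedged example of multiple temporaries extended by the same declaration:

  struct Refs { const int &a; const int &b; };
  int main() {
    Refs r = { 1 + 1, 2 + 2 };   // both int temporaries live as long as 'r'
    return r.a + r.b;            // 6
  }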
We have a pre-existing bug where an exception thrown from a temporary's
destructor does not clean up lifetime-extended temporaries created in the same
expression and extended to automatic storage duration; that is not fixed by
this patch.
llvm-svn: 183859
into the expression and dig out a single lifetime-extended entity and manually
pull its cleanup outside the expression, instead keep a list of the cleanups
which we'll need to emit when we get to the end of the full-expression. Also
emit those cleanups early, as EH-only cleanups, to cover the case that the
full-expression does not terminate normally. This allows IR generation to
properly model temporary lifetime when multiple temporaries are extended by the
same declaration.
We have a pre-existing bug where an exception thrown from a temporary's
destructor does not clean up lifetime-extended temporaries created in the same
expression and extended to automatic storage duration; that is not fixed by
this patch.
llvm-svn: 183721
EmitCapturedStmt creates a captured struct containing all of the captured
variables, and then emits a call to the outlined function. This is similar in
principle to EmitBlockLiteral.
GenerateCapturedFunction actually produces the outlined function. It is based
on GenerateBlockFunction, but is much simpler. The function type is determined
by the parameters that are in the CapturedDecl.
Some changes have been added to this patch that were reviewed as part of the
serialization patch and moving the parameters to the captured decl.
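A hedged, hand-written analogy (not the generated IR) for the outlining described above:

  struct Captures { int *data; int n; };           // "captured struct" of the used variables
  static void outlined_body(Captures *c) {         // analogous to GenerateCapturedFunction's output
    for (int i = 0; i < c->n; ++i) c->data[i] = i;
  }
  void fill(int *data, int n) {
    Captures c = {data, n};
    outlined_body(&c);                              // EmitCapturedStmt emits a call like this
  }
  int main() {
    int buf[4];
    fill(buf, 4);
    return buf[3];
  }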
Differential Revision: http://llvm-reviews.chandlerc.com/D640
llvm-svn: 181536
a lambda.
Bug #1 is that CGF's CurFuncDecl was "stuck" at lambda invocation
functions. Fix that by generally improving getNonClosureContext
to look through lambdas and captured statements but only report
code contexts, which is generally what's wanted. Audit uses of
CurFuncDecl and getNonClosureAncestor for correctness.
Bug #2 is that lambdas weren't specially mapping 'self' when inside
an ObjC method. Fix that by removing the requirement for that
and using the normal EmitDeclRefLValue path in LoadObjCSelf.
rdar://13800041
llvm-svn: 181000
Add a CXXDefaultInitExpr, analogous to CXXDefaultArgExpr, and use it both in
CXXCtorInitializers and in InitListExprs to represent a default initializer.
There's an additional complication here: because the default initializer can
refer to the initialized object via its 'this' pointer, we need to make sure
that 'this' points to the right thing within the evaluation.
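A hedged example of a default member initializer that refers to the object being initialized through 'this':

  struct S {
    int a = 1;
    int b = a + 1;     // default initializer uses this->a
    S() {}             // both default initializers apply here
  };
  int main() { return S().b; }   // 2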
llvm-svn: 179958
non-constant constructors or non-trivial destructors. Plus bugfixes for
thread_local references bound to temporaries (the temporaries themselves are
lifetime-extended to become thread_local), and the corresponding case for
std::initializer_list.
llvm-svn: 179496
For struct-path aware TBAA, we used to use scalar type node as the scalar tag,
which has an incompatible format with the struct path tag. We now use the same
format: base type, access type and offset.
We also uniformize the scalar type node and the struct type node: name, a list
of pairs (offset + pointer to MDNode). For scalar type, we have a single pair.
These are to make implementation of aliasing rules easier.
llvm-svn: 179335
For this source:
const int &ref = someStruct.bitfield;
We used to generate this AST:
DeclStmt [...]
`-VarDecl [...] ref 'const int &'
  `-MaterializeTemporaryExpr [...] 'const int' lvalue
    `-ImplicitCastExpr [...] 'const int' lvalue <NoOp>
      `-MemberExpr [...] 'int' lvalue bitfield .bitfield [...]
        `-DeclRefExpr [...] 'struct X' lvalue ParmVar [...] 'someStruct' 'struct X'
Notice the lvalue inside the MaterializeTemporaryExpr, which is very
confusing (and caused an assertion to fire in the analyzer - PR15694).
We now generate this:
DeclStmt [...]
`-VarDecl [...] ref 'const int &'
  `-MaterializeTemporaryExpr [...] 'const int' lvalue
    `-ImplicitCastExpr [...] 'int' <LValueToRValue>
      `-MemberExpr [...] 'int' lvalue bitfield .bitfield [...]
        `-DeclRefExpr [...] 'struct X' lvalue ParmVar [...] 'someStruct' 'struct X'
Which makes a lot more sense. This allows us to remove code in both
CodeGen and AST that hacked around this special case.
The commit also makes Clang accept this (legal) C++11 code:
int &&ref = std::move(someStruct).bitfield
PR15694 / <rdar://problem/13600396>
llvm-svn: 179250
Added TBAABaseType and TBAAOffset in LValue. These two fields are initialized to
the actual type and 0, and are updated in EmitLValueForField.
Path-aware TBAA tags are enabled for EmitLoadOfScalar and EmitStoreOfScalar.
Added command line option -struct-path-tbaa.
llvm-svn: 178797
the balance between expected behavior and compatibility with the gdb
testsuite.
(GDB gets confused if we break an expression into multiple debug
stmts so we enable this behavior only for inlined functions. For the
full experience people can still use -gcolumn-info.)
llvm-svn: 177164