The current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such a function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such an undefined reference.
With this patch, we emit a pair of
1. An internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.alwaysinline is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
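As an illustration, consider a sketch like the following (function names are hypothetical; the comments restate what the scheme above emits, not verified IR):

inline __attribute__((always_inline)) int sq(int x) { return x * x; }

int (*fp)(int) = sq;              // address taken: references the stub sq(),
                                  // never the internal sq.alwaysinline
int use(int v) { return sq(v); }  // direct call: lowered to a call to the
                                  // internal sq.alwaysinline definition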
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such function to produce an undefined symbol reference.
This patch is based on ideas by Chandler Carruth and Richard Smith.
llvm-svn: 247494
The current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such a function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such an undefined reference.
With this patch, we emit a pair of
1. An internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.alwaysinline is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such function to produce an undefined symbol reference.
This patch is based on ideas by Chandler Carruth and Richard Smith.
llvm-svn: 247465
It seems that there is a small bug: we can't generate assume loads
when some virtual functions have internal visibility.
This reverts commit 982bb7d966947812d216489b3c519c9825cacbf2.
llvm-svn: 247332
This flag causes the compiler to emit bit set entries for functions as well
as runtime bitset checks at indirect call sites. Depends on the new function
bitset mechanism.
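For illustration, this is the kind of call site that receives a check (a hypothetical sketch, not a test from the patch):

void dispatch(void (*fp)(void)) {
  // Under this flag, a bitset check is emitted before the indirect call:
  // if fp does not point to a function recorded in the function bitsets,
  // the check fails at runtime.
  fp();
}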
Differential Revision: http://reviews.llvm.org/D11857
llvm-svn: 247238
Generate a call to assume(icmp %vtable, %global_vtable) after each
constructor call, for devirtualization purposes.
For more info go to:
http://lists.llvm.org/pipermail/cfe-dev/2015-July/044227.html
Edit:
Fixed version because of PR24479.
This patch was previously reverted because of a ScalarEvolution bug (D12719).
Merged after John McCall's big patch (which added Address).
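For illustration, a minimal shape where the emitted assume enables devirtualization (conceptual comments, not verified IR):

struct A {
  virtual void foo();
};

void test() {
  A a;      // after this constructor call, conceptually:
            //   llvm.assume(icmp eq %vtable-of-a, @vtable.A)
  a.foo();  // ...which lets the optimizer devirtualize this call
}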
http://reviews.llvm.org/D11859
llvm-svn: 247199
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
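A minimal sketch of the shape of the new type (not the actual clang class):

#include "clang/AST/CharUnits.h"
#include "llvm/IR/Value.h"
#include <cassert>

class Address {
  llvm::Value *Pointer;
  clang::CharUnits Alignment;
public:
  Address(llvm::Value *P, clang::CharUnits A) : Pointer(P), Alignment(A) {
    assert(!A.isZero() && "alignment must be non-zero");
  }
  llvm::Value *getPointer() const { return Pointer; }
  clang::CharUnits getAlignment() const { return Alignment; }
};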
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
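A worked illustration of that failure mode, assuming the usual lowest-set-bit arithmetic behind alignmentAtOffset (a sketch, not clang's actual code):

#include <cassert>
#include <cstdint>

// Alignment known at (base + offset): the largest power of two dividing both.
static uint64_t alignAtOffset(uint64_t Align, uint64_t Offset) {
  uint64_t V = Align | Offset;
  return V & (~V + 1);  // lowest set bit
}

int main() {
  assert(alignAtOffset(8, 4) == 4);  // field at offset 4 of an 8-aligned object
  assert(alignAtOffset(8, 0) == 8);  // offset 0 keeps the object's alignment
  // The recurring bug: a bogus zero alignment degenerates to the largest
  // power-of-two factor of the offset alone.
  assert(alignAtOffset(0, 12) == 4);
}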
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
Summary:
Dtor sanitization is handled amidst the other dtor cleanups,
between cleaning bases and fields. The sanitizer call is pushed onto
the stack of cleanup operations.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D12022
Refactoring dtor sanitizing emission order.
- Support multiple inheritance by poisoning after
member destructors are invoked, and before base
class destructors are invoked.
- Poison for virtual destructor and virtual bases.
- Repress dtor aliasing when sanitizing in dtor.
- CFE test for dtor aliasing, and repression of aliasing in dtor
code generation.
- Poison members on a field-by-field basis, with collective poisoning
of trivial members when possible.
- Check msan flags and existence of fields, before dtor sanitizing,
and when determining if aliasing is allowed.
- Testing sanitizing bit fields.
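For illustration, the shape of a sanitized destructor under this scheme (the callback shown is MSan's public __sanitizer_dtor_callback; sizes and placement follow the description above, not verified output):

struct Base { ~Base(); };
struct S : Base {
  int a, b;  // trivial members, poisoned collectively
  ~S() {
    // 1. the user-written dtor body runs
    // 2. member destructors run (none are non-trivial here)
    // 3. the compiler inserts poisoning, conceptually:
    //      __sanitizer_dtor_callback(&a, sizeof(a) + sizeof(b));
    // 4. only then does Base::~Base() run
  }
};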
llvm-svn: 246815
There was a linker problem, and it turns out that it is not always safe
to refer to a vtable. If the vtable is used, then we can refer to it
without any problem, but because we don't know whether it will be used or
not, we can only check whether the vtable is external or whether it is safe
to emit it speculatively (when the class doesn't have any inline virtual
functions). It should be fixed in the future.
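For illustration, the stated rule in code form (illustrative class names):

struct SafeToSpeculate {
  virtual void f();    // no inline virtual functions: it is safe to emit
};                     // the vtable speculatively

struct NotSafe {
  virtual void f() {}  // inline virtual function: referring to its vtable
};                     // speculatively may leave an unresolved reference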
http://reviews.llvm.org/D12385
llvm-svn: 246214
Summary: Poisoning is applied only to class members, and happens before the destructors for base classes are invoked.
Implement poisoning of only class members in dtor, as opposed to also
poisoning fields inherited from base classes. Members are poisoned
only once, by the last dtor for a class. Skip poisoning if class has
no fields.
Verify emitted code for derived class with virtual destructor sanitizes
its members only once.
Removed patch file containing extraneous changes.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D11951
Simplified test cases for use-after-dtor
Summary: Simplified test cases to focus on one feature at a time.
Tests updated to align with new emission order for sanitizing
callback.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D12003
llvm-svn: 244933
Verify emitted code for derived class with virtual destructor sanitizes its members only once.
Changed emission order for dtor callback, so only the last dtor for a class emits the sanitizing callback, while ensuring that class members are poisoned before base class destructors are invoked.
Skip poisoning of members, if class has no fields.
Removed patch file containing extraneous changes.
Summary: Poisoning is applied only to class members, and happens before the destructors for base classes are invoked.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D11951
llvm-svn: 244819
Summary: In addition to checking compiler flags, the front-end also examines the attributes of the destructor definition to ensure that the SanitizeMemory attribute is attached.
Reviewers: eugenis, kcc
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11727
refactored test into a new file, revised how the function attribute is examined
modified the test to examine the default dtor with and without the attribute
removed the attribute check
llvm-svn: 243912
- Make it a proper random access iterator with a little help from iterator_adaptor_base
- Clean up users of magic dereferencing. The iterator should behave like an Expr **.
- Make it an implementation detail of Stmt. This allows inlining of the assertions.
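A sketch of the pattern, using llvm::iterator_adaptor_base from llvm/ADT/iterator.h (simplified; the real iterator lives inside Stmt):

#include "llvm/ADT/iterator.h"
#include <cstddef>
#include <iterator>

namespace clang {
class Stmt;
class Expr;
}

// Children are stored as Stmt**, but clients should see an iterator that
// behaves like an Expr**; the adaptor base supplies the random access
// iterator boilerplate from the wrapped Stmt** alone.
class ExprIterator
    : public llvm::iterator_adaptor_base<
          ExprIterator, clang::Stmt **, std::random_access_iterator_tag,
          clang::Expr *, std::ptrdiff_t, clang::Expr *, clang::Expr *> {
public:
  explicit ExprIterator(clang::Stmt **I) : iterator_adaptor_base(I) {}
  clang::Expr *operator*() const {
    return reinterpret_cast<clang::Expr *>(*this->I);
  }
};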
llvm-svn: 242608
We now use the sanitizer special case list to decide which types to blacklist.
We also support a special blacklist entry for types with a uuid attribute,
which are generally COM types whose virtual tables are defined externally.
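For illustration, the shape of a type such an entry is meant to match (an illustrative declaration; see the sanitizer special case list documentation for the exact entry syntax):

struct __declspec(uuid("00000000-0000-0000-C000-000000000046"))
    IUnknownLike {
  virtual void Method() = 0;  // the vtable typically lives in an external
};                            // DLL, so CFI must not require a local bitset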
Differential Revision: http://reviews.llvm.org/D11096
llvm-svn: 242286
The fix is to remove the duplicate copy-initialization of the only memcpy-able struct member and to correct the address of aggregately initialized members in destructors' calls during stack unwinding (in order to obtain the address of a struct member by using GEP instead of 'bitcast').
Differential Revision: http://reviews.llvm.org/D10990
llvm-svn: 242127
Under the -fsanitize-memory-use-after-dtor flag (disabled by default), insert
an MSan runtime library call at the end of every destructor.
Patch by Naomi Musgrave.
llvm-svn: 242097
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:
struct XBitfield {
  unsigned b1 : 10;
  unsigned b2 : 12;
  unsigned b3 : 10;
};
struct YBitfield {
  char x;
  struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;

unsigned test7() {
  // CHECK: @test7
  // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
  return gbitfield.y.b2;
}
The "align 4" is actually wrong. Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.
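The layout claim can be checked directly (a self-contained restatement of the test above):

#include <cstddef>

struct XBitfield2 { unsigned b1 : 10, b2 : 12, b3 : 10; };
struct YBitfield2 { char x; XBitfield2 y; } __attribute__((packed));

// The bitfield storage sits at byte offset 1, and the whole object is only
// 1-aligned, so an "align 4" access to y cannot be justified.
static_assert(offsetof(YBitfield2, y) == 1, "packed layout");
static_assert(alignof(YBitfield2) == 1, "packed struct is byte-aligned");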
This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp. Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.
Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).
In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage. This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.
Differential Revision: http://reviews.llvm.org/D11034
llvm-svn: 241916
We were previously creating bit set entries at virtual table offset
sizeof(void*) unconditionally under the Microsoft C++ ABI. This is incorrect
if RTTI data is disabled; in that case the "address point" is at offset
0. This change modifies bit set emission to take into account whether RTTI
data is being emitted.
Also make a start on a blacklisting scheme for records.
Differential Revision: http://reviews.llvm.org/D11048
llvm-svn: 241845
The fix is to emit a cleanup for arrays of memcpy-able objects in a struct if an exception is thrown later during copy-construction.
Differential Revision: http://reviews.llvm.org/D10989
llvm-svn: 241670
Skip calls to HasTrivialDestructorBody() in the case where the
destructor is never invoked. Alternatively, Richard proposed changing
Sema to declare a trivial destructor for anonymous union members, which
seems too wasteful.
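For illustration, the shape of code that exercises this path (a reduced sketch, not the actual PR testcase):

struct S {
  union {   // anonymous union member: Sema declares no destructor for it,
    int i;  // so CodeGen must not ask whether its dtor body is trivial
  };        // when that destructor is never invoked
  ~S() {}
};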
Differential Revision: http://reviews.llvm.org/D10508
llvm-svn: 240742
Member pointers in the MS ABI are made complicated due to the following:
- Virtual methods in the most derived class (MDC) might live in a
vftable in a virtual base.
- There are four different representations of member pointer: single
inheritance, multiple inheritance, virtual inheritance and the "most
general" representation.
- Bases might have a *more* general representation than the classes
derived from them, a most surprising result.
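For illustration, class shapes that pin down each representation (illustrative names; roughly, each representation adds one more adjustment field):

struct Single { void f(); };
struct A { void g(); };
struct B { void h(); };
struct Multi : A, B {};
struct Virt : virtual A { void v(); };
struct Fwd;  // incomplete class

void (Single::*p1)() = &Single::f;  // single-inheritance representation
void (Multi::*p2)() = &Multi::g;    // multiple inheritance: adds a this-delta
void (Virt::*p3)() = &Virt::v;      // virtual inheritance: adds a vbtable index
void (Fwd::*p4)() = nullptr;        // incomplete: the most general form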
We believed that we could treat all member pointers as if they were a
degenerate case of the multiple inheritance model. This fell apart once
we realized that implementing standard member pointers using this ABI
requires referencing members with a non-zero vbindex.
On a bright note, all but the virtual inheritance model operate rather
similarly. The virtual inheritance member pointer representation
awkwardly requires a virtual base adjustment in order to refer to
entities in the MDC.
However, the first virtual base might be quite far from the start of the
virtual base. This means that we must add a negative non-virtual
displacement.
However, things get even more complicated. The most general
representation interprets vbindex zero differently from the virtual
inheritance model: it doesn't reference the vbtable at all.
It turns out that this complexity can increase for quite some time:
consider a derived to base conversion from the most general model to the
multiple inheritance model...
To manage this complexity we introduce a concept of "normalized" member
pointer which allows us to treat all three models as the most general
model. Then we try to figure out how to map this generalized member
pointer onto the destination member pointer model. I've done my best to
furnish the code with comments explaining why each adjustment is
performed.
This fixes PR23878.
llvm-svn: 240384
The patch is generated using this command:
$ tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
work/llvm/tools/clang
To reduce churn, not touching namespaces spanning less than 10 lines.
llvm-svn: 240270
Testcase provided in the PR by Christian Shelton and
reduced by David Majnemer.
PR: 23584
Differential Revision: http://reviews.llvm.org/D10508
Reviewed by: rnk
llvm-svn: 240242
This causes programs compiled with this flag to print a diagnostic when
a control flow integrity check fails instead of aborting. Diagnostics are
printed using UBSan's runtime library.
The main motivation of this feature over -fsanitize=vptr is fidelity with
the -fsanitize=cfi implementation: the diagnostics are printed under exactly
the same conditions as those which would cause -fsanitize=cfi to abort the
program. This means that the same restrictions apply regarding compiling
all translation units with -fsanitize=cfi, cross-DSO virtual calls are
forbidden, etc.
Differential Revision: http://reviews.llvm.org/D10268
llvm-svn: 240109
-fprofile-instr-generate does not emit counter increment intrinsics
for Dtor_Deleting and Dtor_Complete destructors with assigned
counters. This causes unnecessary [-Wprofile-instr-out-of-date]
warnings during profile-use runs even if the source has never been
modified since profile collection.
Patch by Betul Buyukkurt. Thanks!
llvm-svn: 237804
The RegionCounter type does a lot of legwork, but most of it is only
meaningful within the implementation of CodeGenPGO. The uses elsewhere
in CodeGen generally just want to increment or read counters, so do
that directly.
llvm-svn: 235664
This uses the same class metadata currently used for virtual call and
cast checks.
The new flag is -fsanitize=cfi-nvcall. For consistency, the -fsanitize=cfi-vptr
flag has been renamed -fsanitize=cfi-vcall.
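For illustration, the kind of call the new check guards (an illustrative shape):

struct A {
  void method();  // non-virtual member function
};

void call(A *a) {
  a->method();  // cfi-nvcall checks that *a really is an A (or an object
}               // of a class derived from A) before making the call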
Differential Revision: http://reviews.llvm.org/D8756
llvm-svn: 233874
This scheme checks that pointer and lvalue casts are made to an object of
the correct dynamic type; that is, the dynamic type of the object must be
a derived class of the pointee type of the cast. The checks are currently
only introduced where the class being cast to is a polymorphic class.
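For illustration, a cast this scheme instruments (an illustrative shape):

struct Base { virtual ~Base(); };
struct Derived : Base { int x; };

void f(Base *b) {
  // Checked: the dynamic type of *b must be Derived or a class derived
  // from it; Derived is polymorphic, so the cast site is instrumented.
  Derived *d = static_cast<Derived *>(b);
  d->x = 1;
}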
Differential Revision: http://reviews.llvm.org/D8312
llvm-svn: 232241