This patch introduces the -fwhole-program-vtables flag, which enables the
whole-program vtable optimization feature (D16795) in Clang.
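A minimal hedged sketch of what the optimization can do (example constructed
for illustration, not taken from the patch): under an LTO build with
-fwhole-program-vtables, the compiler may prove that B is the only class
deriving from A anywhere in the program and devirtualize the call in use().
  struct A { virtual int f(); };
  struct B final : A { int f() override { return 1; } };
  int use(A *a) { return a->f(); }  // devirtualization candidate under LTO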
Differential Revision: http://reviews.llvm.org/D16821
llvm-svn: 261767
Avoid crashing when printing diagnostics for vtable-related CFI
errors. In diagnostic mode, the frontend does an additional check of
the vtable pointer against the set of all known vtable addresses and
lets the runtime handler know if it is safe to inspect the vtable.
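A rough sketch of the extra check in pseudo-C++ (the helper names and the
handler shape below are assumptions for illustration, not the real ABI):
  extern "C" bool __cfi_type_test(void *VTable);         // normal CFI bitset test
  extern "C" bool __cfi_all_vtables_test(void *VTable);  // known-vtable test
  extern "C" void __cfi_diag_handler(void *VTable, bool SafeToInspect);
  inline void check_vcall_diag(void *VTable) {
    if (!__cfi_type_test(VTable))
      // Tell the handler whether VTable really points at a known vtable, so
      // it only inspects the vtable when doing so cannot crash.
      __cfi_diag_handler(VTable, __cfi_all_vtables_test(VTable));
  }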
http://reviews.llvm.org/D16823
llvm-svn: 259716
* Runtime diagnostic data for cfi-icall changed to match the rest of
cfi checks
* Layout of all CFI diagnostic data changed to put Kind at the
beginning. There is no ABI stability promise yet.
* Call cfi_slowpath_diag instead of cfi_slowpath when needed.
* Emit __cfi_check_fail function, which dispatches a CFI check
failure according to the trap/recover settings of the current module
(see the sketch below).
* A tiny driver change to match the way the new handlers are done in
compiler-rt.
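A rough sketch of what the emitted __cfi_check_fail does (handler name and
signatures are approximations, not the exact ABI):
  extern "C" void __ubsan_handle_cfi_check_fail(void *Data, void *Value,
                                                void *ValidVtable);
  extern "C" void __cfi_check_fail(void *Data, void *Value) {
    // A null Data pointer means the failing check was compiled in trap-only
    // mode, so just trap.
    if (!Data)
      __builtin_trap();
    // Otherwise dispatch according to the check kind stored in Data and the
    // module's trap/recover settings for that kind.
    __ubsan_handle_cfi_check_fail(Data, Value, /*ValidVtable=*/nullptr);
  }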
llvm-svn: 258745
This is part of a new statistics gathering feature for the sanitizers.
See clang/docs/SanitizerStats.rst for further info and docs.
Differential Revision: http://reviews.llvm.org/D16175
llvm-svn: 257971
Clang-side cross-DSO CFI.
* Adds a command line flag -f[no-]sanitize-cfi-cross-dso.
* Links a runtime library when enabled.
* Emits __cfi_slowpath calls if the bitset test fails.
* Emits extra hash-based bitsets for external CFI checks.
* Sets a module flag to enable __cfi_check generation during LTO.
This mode does not yet support diagnostics.
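A hedged sketch of the check sequence (the __cfi_slowpath signature is
approximate and the pseudo-IR in the comments is illustrative only):
  extern "C" void __cfi_slowpath(unsigned long long CallSiteTypeId, void *Ptr);
  struct A { virtual void f(); };
  void call(A *a) {
    // Emitted (roughly) before the virtual call:
    //   if (!<bitset test of a's vptr against A's bitset>)
    //     __cfi_slowpath(<hash of A's type id>, <a's vptr>);  // cross-DSO path
    a->f();
  }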
llvm-svn: 255694
Ensure that the vptr store in the most-derived constructor is not behind
an invariant group barrier. Previously, the base-most vptr store would
be the one behind no barrier, and that could result in the creator of
the object thinking it had the base-most vtable.
This bug caused Clang to call pure virtual functions when a virtual
function was called from a constructor body.
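An illustrative reproduction of the failure mode, constructed from the
description above (not taken from the original report):
  struct Base {
    virtual void helper() = 0;
    void init() { helper(); }
  };
  struct Derived : Base {
    void helper() override {}
    // After Derived's vptr store, init() must dispatch to Derived::helper().
    // If that store is hidden behind an invariant.group barrier while the
    // Base vptr store is not, the optimizer may devirtualize the call to the
    // pure virtual Base::helper() and the program aborts at runtime.
    Derived() { init(); }
  };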
http://reviews.llvm.org/D13373
llvm-svn: 249197
Poison vtable ptr in dtor.
Summary:
After destruction, invocation of virtual functions is prevented
by poisoning the vtable pointer.
Reviewers: eugenis, kcc
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D12712
Fixed testing callback emission order to account for vptr.
Poison the vtable in either the complete or the base dtor, depending on
whether virtual bases exist. If virtual bases exist, poison in the
complete dtor. Otherwise, poison in the base dtor.
Remove commented-out block.
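A minimal hedged example of what the poisoned vptr catches:
  struct S {
    virtual void f();
    virtual ~S();
  };
  void broken(S *s) {
    s->~S();  // no virtual bases, so the complete dtor poisons the vptr
    s->f();   // virtual call after destruction: MSan reports the read of the
              // poisoned vptr instead of dispatching through stale data
  }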
llvm-svn: 247762
Current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such a function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such an undefined reference.
With this patch, we emit a pair of
1. internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.alwaysinline is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such a function to produce an undefined symbol reference.
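A hedged illustration of the failure mode described above:
  inline __attribute__((always_inline)) int f() { return 0; }
  // If the call to f() below ends up in dead code, the SCC inliner may never
  // process it; the available_externally alwaysinline body is then dropped
  // and the leftover call becomes an undefined reference to f at link time.
  static int g(int cond) {
    if (cond)
      return f();
    return 1;
  }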
This patch is based on ideas by Chandler Carruth and Richard Smith.
llvm-svn: 247494
It seems that there is a small bug, and we can't generate assume loads
when some virtual functions have internal visibility.
This reverts commit 982bb7d966947812d216489b3c519c9825cacbf2.
llvm-svn: 247332
This flag causes the compiler to emit bit set entries for functions as well
as runtime bitset checks at indirect call sites. Depends on the new function
bitset mechanism.
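A hedged illustration (this is presumably the cfi-icall check referenced
earlier in this log): with the flag enabled, an indirect call is preceded by
a bitset membership test on the call target's type.
  typedef int (*Handler)(int);
  int dispatch(Handler h, int x) {
    // Emitted (roughly): if (!<bitset test of h against "int (int)">) trap;
    return h(x);
  }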
Differential Revision: http://reviews.llvm.org/D11857
llvm-svn: 247238
Generate a call to assume(icmp %vtable, %global_vtable) after the
constructor call, for devirtualization purposes.
For more info go to:
http://lists.llvm.org/pipermail/cfe-dev/2015-July/044227.html
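A hedged sketch of the emitted pattern (the IR in the comments is
illustrative, not verbatim output):
  struct A {
    A();
    virtual void foo();
  };
  void test() {
    A a;      // after the constructor call, CodeGen emits roughly:
              //   %vtable = load (vptr of a)
              //   %cmp = icmp eq %vtable, <A's vtable global>
              //   call void @llvm.assume(i1 %cmp)
    a.foo();  // the assume lets later passes devirtualize this call
  }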
Edit:
Fixed version because of PR24479.
Relanded after this patch got reverted because of a ScalarEvolution bug (D12719).
Merged after John McCall's big patch that added Address.
http://reviews.llvm.org/D11859
llvm-svn: 247199
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
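An illustrative approximation of the new type (member and accessor names
here are assumptions, and stand-ins replace the LLVM/Clang types; this is
not the exact declaration):
  #include <cassert>
  struct Value;                // stand-in for llvm::Value
  using CharUnits = unsigned;  // stand-in for clang::CharUnits
  class Address {
    Value *Pointer;
    CharUnits Alignment;       // required to be non-zero
  public:
    Address(Value *P, CharUnits A) : Pointer(P), Alignment(A) {
      assert(A != 0 && "Address must not have zero alignment");
    }
    Value *getPointer() const { return Pointer; }
    CharUnits getAlignment() const { return Alignment; }
  };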
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
Summary:
Dtor sanitization is handled amidst other dtor cleanups, between
cleaning bases and fields. The sanitizer call is pushed onto the
stack of cleanup operations.
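A hedged sketch of the resulting emission order (the callback name follows
the MSan runtime; the numbered steps are a paraphrase, not actual code):
  struct Base { int b; ~Base(); };
  struct Derived : Base {
    int d;
    // ~Derived() is emitted roughly as:
    //   1. run the user-written destructor body
    //   2. destroy Derived's members
    //   3. __sanitizer_dtor_callback(&d, sizeof d)  // poison Derived's fields
    //   4. run ~Base()
    ~Derived();
  };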
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D12022
Refactoring dtor sanitizing emission order.
- Support multiple inheritance by poisoning after
member destructors are invoked, and before base
class destructors are invoked.
- Poison for virtual destructor and virtual bases.
- Repress dtor aliasing when sanitizing in dtor.
- CFE test for dtor aliasing, and repression of aliasing in dtor
code generation.
- Poison members on a field-by-field basis, with collective poisoning
of trivial members when possible.
- Check msan flags and existence of fields, before dtor sanitizing,
and when determining if aliasing is allowed.
- Test sanitizing of bit fields.
llvm-svn: 246815
There was a linker problem, and it turns out that it is not always safe
to refer to a vtable. If the vtable is used, then we can refer to it
without any problem, but because we don't know whether it will be used or
not, we can only check if the vtable is external or if it is safe to emit
it speculatively (when the class doesn't have any inline virtual functions).
It should be fixed in the future.
http://reviews.llvm.org/D12385
llvm-svn: 246214
Summary: Poisoning is applied only to class members, and before dtors for the base classes are invoked.
Implement poisoning of only class members in dtor, as opposed to also
poisoning fields inherited from base classes. Members are poisoned
only once, by the last dtor for a class. Skip poisoning if class has
no fields.
Verify that the emitted code for a derived class with a virtual destructor
sanitizes its members only once.
Removed patch file containing extraneous changes.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D11951
Simplified test cases for use-after-dtor
Summary: Simplified test cases to focus on one feature at a time.
Tests updated to align with new emission order for sanitizing
callback.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D12003
llvm-svn: 244933
Verify that the emitted code for a derived class with a virtual destructor sanitizes its members only once.
Changed the emission order for the dtor callback, so only the last dtor for a class emits the sanitizing callback, while ensuring that class members are poisoned before base class destructors are invoked.
Skip poisoning of members if the class has no fields.
Removed patch file containing extraneous changes.
Summary: Poisoning is applied only to class members, and before dtors for the base classes are invoked.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D11951
llvm-svn: 244819
Summary: In addition to checking compiler flags, the front-end also examines the attributes of the destructor definition to ensure that the SanitizeMemory attribute is attached.
Reviewers: eugenis, kcc
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11727
Refactored the test into a new file and revised how the function attribute is examined.
Modified the test to examine the default dtor with and without the attribute.
Removed the attribute check.
llvm-svn: 243912
- Make it a proper random access iterator with a little help from iterator_adaptor_base
- Clean up users of magic dereferencing. The iterator should behave like an Expr **.
- Make it an implementation detail of Stmt. This allows inlining of the assertions.
llvm-svn: 242608
We now use the sanitizer special case list to decide which types to blacklist.
We also support a special blacklist entry for types with a uuid attribute,
which are generally COM types whose virtual tables are defined externally.
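An illustrative example of the kind of type the uuid entry targets (the GUID
is made up and the declaration assumes -fms-extensions): a COM-style
interface whose vtable is defined in another module, where local vcall
checks would produce false positives.
  struct __declspec(uuid("01234567-89ab-cdef-0123-456789abcdef")) IFoo {
    virtual void Method() = 0;
  };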
Differential Revision: http://reviews.llvm.org/D11096
llvm-svn: 242286
The fix is to remove duplicate copy-initialization of the only memcpy-able struct member and to correct the address of aggregately initialized members in destructors' calls during stack unwinding (obtaining the address of a struct member by using a GEP instead of a 'bitcast').
Differential Revision: http://reviews.llvm.org/D10990
llvm-svn: 242127
Under -fsanitize-memory-use-after-dtor (disabled by default), insert
an MSan runtime library call at the end of every destructor.
Patch by Naomi Musgrave.
llvm-svn: 242097
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:
struct XBitfield {
  unsigned b1 : 10;
  unsigned b2 : 12;
  unsigned b3 : 10;
};
struct YBitfield {
  char x;
  struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;
unsigned test7() {
  // CHECK: @test7
  // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
  return gbitfield.y.b2;
}
The "align 4" is actually wrong. Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.
This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp. Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.
Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).
In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage. This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.
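A hedged, abbreviated sketch of the change (only the member discussed above
is shown, with a stand-in for the real CharUnits type):
  using CharUnits = long long;   // stand-in for clang::CharUnits
  struct CGBitFieldInfo {
    // Removed: CharUnits StorageAlignment;  // alignment of the storage unit
    CharUnits StorageOffset;  // offset of the storage unit from the start of
                              // the surrounding struct; combined with the
                              // best-known alignment of the access lvalue to
                              // choose the alignment for the load/store
  };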
Differential Revision: http://reviews.llvm.org/D11034
llvm-svn: 241916