variables.
Added emission of the offloading data sections for variables within
declare target regions and fixed emission of variables marked as
declare target outside of declare target regions.
llvm-svn: 328888
If the link clause is used on a declare target directive, the object
should be linked when a target or target data directive is encountered,
not during code generation. This patch adds support for this clause.
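A hypothetical example of the kind of code this affects (names are
illustrative, not taken from the patch):
extern int big_table[1024];
#pragma omp declare target link(big_table)

void compute() {
#pragma omp target map(to: big_table[0:1024])
  {
    // With the link clause, the device copy of big_table is set up here,
    // when the target directive is encountered, not at codegen time.
    int x = big_table[0];
    (void)x;
  }
}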
llvm-svn: 328544
source expressions when iterating over a PseudoObjectExpr's semantic
subexpression list.
Previously the loop in emitPseudoObjectExpr would emit the IR for each
OpaqueValueExpr that was in a PseudoObjectExpr's semantic-form
expression list and use the result when the OpaqueValueExpr later
appeared in other expressions. This caused an assertion failure when
AggExprEmitter tried to copy the result of an OpaqueValueExpr and the
copied type didn't have trivial copy/move constructors or assignment
operators.
This patch adds a flag, IsUnique, to OpaqueValueExpr which indicates that it
is a unique reference to its source expression (it is not used in multiple
places). The loop in emitPseudoObjectExpr ignores OpaqueValueExprs that
are unique, and CodeGen visitors simply traverse the source expressions
of such OpaqueValueExprs.
rdar://problem/34363596
Differential Revision: https://reviews.llvm.org/D39562
llvm-svn: 327939
Summary:
This patch enables debugging of C99 VLA types by generating more precise
LLVM Debug metadata, using the extended DISubrange 'count' field that
takes a DIVariable.
This should implement:
Bug 30553: Debug info generated for arrays is not what GDB expects (not as good as GCC's)
https://bugs.llvm.org/show_bug.cgi?id=30553
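For illustration, the kind of C99 VLA this improves (names are arbitrary):
void f(int n) {
  int vla[n];   // the array bound is the runtime value of 'n'
  vla[0] = 0;   // debug info can now describe that bound via a DIVariable
}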
Reviewers: echristo, aprantl, dexonsmith, clayborg, pcc, kristof.beyls, dblaikie
Reviewed By: aprantl
Subscribers: jholewinski, schweitz, davide, fhahn, JDevlieghere, cfe-commits
Differential Revision: https://reviews.llvm.org/D41698
llvm-svn: 323952
As discussed in the mail thread <https://groups.google.com/a/isocpp.org/forum/
#!topic/std-discussion/T64_dW3WKUk> "Calling noexcept function through non-
noexcept pointer is undefined behavior?", such a call should not be UB.
However, Clang currently warns about it.
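For example (hypothetical code), the following should execute without a
-fsanitize=function diagnostic:
void callee() noexcept {}

void caller() {
  void (*fp)() = callee;  // noexcept function held in a non-noexcept pointer
  fp();                   // calling through it is not UB and should not warn
}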
This change removes exception specifications from the function types recorded
for -fsanitize=function, both in the functions themselves and at the call sites.
That means that calling a non-noexcept function through a noexcept pointer will
also not be flagged as UB. In the review of this change, that was deemed
acceptable, at least for now. (See the "TODO" in compiler-rt
test/ubsan/TestCases/TypeCheck/Function/function.cpp.)
To remove exception specifications from types, the existing internal
ASTContext::getFunctionTypeWithExceptionSpec was made public, and some places
otherwise unrelated to this change have been adapted to call it, too.
This is the cfe part of a patch covering both cfe and compiler-rt.
Differential Revision: https://reviews.llvm.org/D40720
llvm-svn: 321859
only.
Added support for the -fopenmp-simd option, which allows compilation of
simd-based constructs without emission of OpenMP runtime calls.
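For example (hypothetical file, compiled with 'clang -fopenmp-simd -O2 -c
loop.c'), the simd directive is honored but no libomp calls are emitted:
void scale(float *a, float s, int n) {
#pragma omp simd
  for (int i = 0; i < n; ++i)
    a[i] *= s;
}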
llvm-svn: 321560
...when such an operation is done on an object during con-/destruction.
This is the cfe part of a patch covering both cfe and compiler-rt.
Differential Revision: https://reviews.llvm.org/D40295
llvm-svn: 321519
The new format requires specifying both the type of the access
and its size. This patch fixes setting access sizes for TBAA tags
that denote accesses to structure members. This fix affects all
future TBAA metadata tests for the new format, so I guess we
don't need any special tests for this fix.
Differential Revision: https://reviews.llvm.org/D41452
llvm-svn: 321250
Diagnose 'unreachable' UB when a noreturn function returns.
1. Insert a check at the end of functions marked noreturn.
2. A decl may be marked noreturn in the caller TU, but not marked in
the TU where it's defined. To diagnose this scenario, strip away the
noreturn attribute on the callee and insert a check after calls to it.
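A hypothetical illustration of case 2 (the attribute is only visible in the
calling TU):
// Calling TU:
[[noreturn]] void fatal(const char *msg);

int divide(int a, int b) {
  if (b == 0)
    fatal("division by zero");  // a check is inserted after this call
  return a / b;
}

// Defining TU: not marked noreturn and may actually return.
void fatal(const char *msg) { /* just logs and falls off the end */ }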
Testing: check-clang, check-ubsan, check-ubsan-minimal, D40700
rdar://33660464
Differential Revision: https://reviews.llvm.org/D40698
llvm-svn: 321231
At least <http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-android/
builds/6013/steps/annotate/logs/stdio> complains about
__ubsan::__ubsan_handle_function_type_mismatch_abort (compiler-rt
lib/ubsan/ubsan_handlers.cc) returning now despite being declared 'noreturn', so
it looks like a different approach is needed for the function_type_mismatch check
to be called also in cases that may ultimately succeed.
llvm-svn: 320982
As discussed in the mail thread <https://groups.google.com/a/isocpp.org/forum/
#!topic/std-discussion/T64_dW3WKUk> "Calling noexcept function through non-
noexcept pointer is undefined behavior?", such a call should not be UB.
However, Clang currently warns about it.
There is no cheap check whether two function type_infos only differ in noexcept,
so pass those two type_infos as additional data to the function_type_mismatch
handler (with the optimization of passing a null "static callee type" info when
that is already noexcept, so the additional check can be avoided anyway). For
the Itanium ABI (which appears to be the only one that happens to be used on
platforms that support -fsanitize=function, and which appears to only record
noexcept information for pointer-to-function type_infos, not for function
type_infos themselves), we then need to check the mangled names for occurrence
of "Do" representing "noexcept".
This is the cfe part of a patch covering both cfe and compiler-rt.
Differential Revision: https://reviews.llvm.org/D40720
llvm-svn: 320978
Most of the -Wsign-compare warnings are due to the fact that
enums are signed by default in the MS ABI, while the
tautological comparison warnings trigger on x86 builds where
sizeof(size_t) is 4 bytes, so N > numeric_limits<unsigned>::max()
is always false.
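Illustrative examples of the two warning classes (not the exact LLVM code):
#include <cstddef>
#include <limits>

// MS ABI: enums default to a signed underlying type, so comparing one
// against an unsigned value triggers -Wsign-compare.
enum Kind { A, B, C };
bool bigger(unsigned N, Kind K) { return N > K; }

// 32-bit x86: size_t is 4 bytes wide, so this comparison is always false
// and triggers the tautological-comparison warning.
bool huge(std::size_t N) { return N > std::numeric_limits<unsigned>::max(); }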
Differential Revision: https://reviews.llvm.org/D41256
llvm-svn: 320750
This is a follow-up to r320128. Eli pointed out that there is some gray
area in the language standard about whether the constant size is exact,
or a lower bound.
https://reviews.llvm.org/D40940
llvm-svn: 320185
The basic idea behind this patch is that since in strict aliasing
mode all accesses to union members require their outermost
enclosing union objects to be specified explicitly, then for a
pair of accesses to union members of the form
p->a.b.c...
q->x.y.z...
it is known they can only alias if both p and q point to the same
union type and offset ranges of members a.b.c... and x.y.z...
overlap. Note that the actual types of the members do not matter.
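Illustrative example (not from the patch):
union U {
  struct { int i; } a;
  struct { float f; } x;
};

int f(union U *p, union U *q) {
  p->a.i = 1;      // both accesses name their enclosing union explicitly,
  q->x.f = 2.0f;   // so they may alias whenever the offset ranges of the
  return p->a.i;   // accessed members overlap, whatever the member types are
}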
Specifically, in this patch we do the following:
* Make unions to be valid TBAA base access types. This enables
generation of TBAA type descriptors for unions.
* Encode union types as structures with a single member of a
special "union member" type. Currently we do not encode
information about sizes of types, but conceptually such union
members are considered to be of the size of the whole union.
* Encode accesses to direct and indirect union members, including
member arrays, as accesses to these special members. All
accesses to members of a union thus get the same offset, which
is the offset of the union they are part of. This means the
existing LLVM TBAA machinery is able to handle such accesses
with no changes.
While this is already an improvement compared to the current
situation, that is, representing all union accesses as may-alias
ones, there are further changes planned to complete the support
for unions. One of them is storing information about access sizes
so we can distinguish accesses to non-overlapping union members,
including accesses to different elements of member arrays.
Another change is encoding type sizes in order to make it
possible to compute offsets within constant-indexed array
elements. These enhancements will be addressed with separate
patches.
Differential Revision: https://reviews.llvm.org/D39455
llvm-svn: 319413
Summary:
This change allows generalizing pointers in type signatures used for
cfi-icall by enabling the -fsanitize-cfi-icall-generalize-pointers flag.
This works by 1) emitting an additional generalized type signature
metadata node for functions and 2) llvm.type.test()ing for the
generalized type for translation units with the flag specified.
This flag is incompatible with -fsanitize-cfi-cross-dso because it would
require emitting twice as many type hashes which would increase artifact
size.
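A hypothetical usage sketch (the LTO/visibility flags are the usual CFI
prerequisites, not something added by this patch):
// clang++ -flto -fvisibility=hidden -fsanitize=cfi-icall \
//         -fsanitize-cfi-icall-generalize-pointers call.cpp
void take_int(int *);
void take_char(char *);

// With generalization, both functions share the generalized signature
// void(void*), so the llvm.type.test at this indirect call accepts either.
void dispatch(void (*fn)(int *), int *arg) { fn(arg); }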
Reviewers: pcc, eugenis
Reviewed By: pcc
Subscribers: kcc
Differential Revision: https://reviews.llvm.org/D39358
llvm-svn: 317044
This patch fixes various places in clang to propagate may-alias
TBAA access descriptors during construction of lvalues, thus
eliminating the need for the LValueBaseInfo::MayAlias flag.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D39008
llvm-svn: 316988
For a non-zero alloca address space, an alloca is usually cast to the default
address space immediately.
For a non-VLA, the alloca is inserted at AllocaInsertPt, therefore the address
space cast should also be inserted at AllocaInsertPt. However, for a VLA, the
alloca is inserted at the current insertion point of the IRBuilder, therefore
the address space cast should also be inserted at the current insertion point
of the IRBuilder.
Currently clang always inserts the address space cast at AllocaInsertPt, which
causes invalid IR.
This patch fixes that.
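Illustrative only: for a target with a non-zero alloca address space (e.g.
amdgcn), the dynamic alloca for the VLA below is emitted at the current
insertion point, so its cast to the generic address space has to be emitted
there as well rather than at the function-entry AllocaInsertPt.
void use(int *p);

void f(int n) {
  if (n > 0) {
    int buf[n];   // VLA: alloca emitted at the current insertion point
    use(buf);
  }
}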
Differential Revision: https://reviews.llvm.org/D39374
llvm-svn: 316909
The builder saves/restores the insertion point when emitting the address space
cast for an alloca, but does not save/restore the debug location, which causes
a verifier failure for certain call instructions.
This patch fixes that.
Differential Revision: https://reviews.llvm.org/D39069
llvm-svn: 316484
The main change is that now we generate TBAA info before
constructing the resulting lvalue instead of constructing lvalue
with some default TBAA info and fixing it as necessary
afterwards. We also keep the TBAA info close to lvalue base info,
which is supposed to simplify their future merging.
This patch should not bring in any functional changes.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38947
llvm-svn: 315989
This patch addresses the rest of the cases where we pass lvalue
base info, but do not provide corresponding TBAA info.
This patch should not bring in any functional changes.
This is part of D38126 reworked to be a separate patch to make
reviewing easier.
Differential Revision: https://reviews.llvm.org/D38945
llvm-svn: 315986
This patch enables explicit generation of TBAA information in all
cases where LValue base info is propagated or constructed in
non-trivial ways. Eventually, we will consider each of these
cases to make sure the TBAA information is correct and not too
conservative. For now, we just fall back to generating TBAA info
from the access type.
This patch should not bring in any functional changes.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38733
llvm-svn: 315575
Besides obvious code simplification, avoiding explicit creation
of LValueBaseInfo objects makes it easier to make TBAA
information part of such objects.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38695
llvm-svn: 315289
In C++11, references to global variables are considered constant
expressions and such variables are not captured in the outlined
regions. This patch allows capturing of such variables in OpenMP regions.
llvm-svn: 315074
This patch is an attempt to clarify and simplify generation and
propagation of TBAA information. The idea is to pack all values
that describe a memory access, namely, base type, access type and
offset, into a single structure. This is supposed to make further
changes, such as adding support for unions and array members,
easier to prepare and review.
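A simplified sketch of the shape of such a structure (illustration only; the
actual Clang class carries more than this):
#include <cstdint>

namespace llvm { class MDNode; }

struct TBAAAccessInfo {
  llvm::MDNode *BaseType = nullptr;   // type of the enclosing base object
  llvm::MDNode *AccessType = nullptr; // type of the accessed member
  uint64_t Offset = 0;                // offset of the access within BaseType
};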
DecorateInstructionWithTBAA() is no longer responsible for
converting types to tags. These implicit conversions not only
complicate reading the code, but also suggest assigning scalar
access tags while we generally prefer full-size struct-path tags.
TBAAPathTag is replaced with TBAAAccessInfo; the latter is now
the type of the keys of the cache map that translates access
descriptors to metadata nodes.
Fixed a bug with writing to a wrong map in
getTBAABaseTypeMetadata() (former getTBAAStructTypeInfo()).
We now check for valid base access types every time we
dereference a field. The original code only checked the top-level
base type. See isValidBaseType() / isTBAAPathStruct() calls.
Some entities have been renamed to be more accurate and less
confusing/misleading in the presence of path-aware TBAA information.
We no longer look up the same cache entry twice in
getAccessTagInfo().
Refined relevant comments and descriptions.
Differential Revision: https://reviews.llvm.org/D37826
llvm-svn: 315048
This patch makes it possible to produce access tags in a uniform
manner regardless of whether the resulting tag will be a scalar or a
struct-path one. getAccessTagInfo() now takes care of the actual
translation of access descriptors to tags and can handle all
kinds of accesses. Facilities that are specific to scalar accesses
are eliminated.
Some more details:
* DecorateInstructionWithTBAA() is not responsible for conversion
of types to access tags anymore. Instead, it takes an access
descriptor (TBAAAccessInfo) and generates the corresponding access
tag from it.
* getTBAAInfoForVTablePtr() reworked to
getTBAAVTablePtrAccessInfo() that now returns the
virtual-pointer access descriptor and not the virtual-pointer
type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
descriptor for may-alias accesses.
* getTBAAStructTagInfo() renamed to getTBAAAccessTagInfo() as now
it is the only way to generate an access tag from a given access
descriptor. It is capable of producing both scalar and
struct-path tags, depending on options and availability of the
base access type. getTBAAScalarTagInfo() and its cache
ScalarTagMetadataCache are eliminated.
* Now that we do not need to care about whether the resulting
access tag should be a scalar or struct-path one,
getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs an access
descriptor from a given QualType access type.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38503
llvm-svn: 314979
This patch makes it possible to produce access tags in a uniform
manner regardless of whether the resulting tag will be a scalar or a
struct-path one. getAccessTagInfo() now takes care of the actual
translation of access descriptors to tags and can handle all
kinds of accesses. Facilities that are specific to scalar accesses
are eliminated.
Some more details:
* DecorateInstructionWithTBAA() is not responsible for conversion
of types to access tags anymore. Instead, it takes an access
descriptor (TBAAAccessInfo) and generates the corresponding access
tag from it.
* getTBAAInfoForVTablePtr() reworked to
getTBAAVTablePtrAccessInfo() that now returns the
virtual-pointer access descriptor and not the virtual-pointer
type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
descriptor for may-alias accesses.
* getTBAAStructTagInfo() renamed to getTBAAAccessTagInfo() as now
it is the only way to generate an access tag from a given access
descriptor. It is capable of producing both scalar and
struct-path tags, depending on options and availability of the
base access type. getTBAAScalarTagInfo() and its cache
ScalarTagMetadataCache are eliminated.
* Now that we do not need to care about whether the resulting
access tag should be a scalar or struct-path one,
getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs an access
descriptor from a given QualType access type.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38503
llvm-svn: 314977
With this patch we implement a concept of TBAA access descriptors
that are capable of representing both scalar and struct-path
accesses in a generic way.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38456
llvm-svn: 314780
Don't emit alignment checks which the IR constant folder throws away.
I've tested this out on X86FastISel.cpp. While this doesn't decrease
end-to-end compile-time significantly, it results in 122 fewer type
checks (1% reduction) overall, without adding any real complexity.
Differential Revision: https://reviews.llvm.org/D37544
llvm-svn: 314752
This patch fixes misleading names of entities related to getting,
setting and generating TBAA access type descriptors.
This is effectively an attempt to provide a review for D37826 by
breaking it into smaller pieces.
Differential Revision: https://reviews.llvm.org/D38404
llvm-svn: 314657
Added missing addrspacecast case in alignment computation
logic of pointer type emission in IR generation.
Differential Revision: https://reviews.llvm.org/D37804
llvm-svn: 314304
Summary:
This is the follow-up patch to D37924.
This change refactors clang to use the newly added section headers
in SpecialCaseList to specify which sanitizers blacklist entries
should apply to, like so:
[cfi-vcall]
fun:*bad_vcall*
[cfi-derived-cast|cfi-unrelated-cast]
fun:*bad_cast*
The SanitizerSpecialCaseList class has been added to allow querying by
SanitizerMask, and SanitizerBlacklist and its downstream users have been
updated to provide that information. Old blacklists not using sections
will continue to function identically since the blacklist entries will
be placed into a '[*]' section by default matching against all
sanitizers.
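A hypothetical invocation using such a sectioned blacklist (the blacklist
flag itself predates this change):
clang++ -flto -fvisibility=hidden -fsanitize=cfi \
        -fsanitize-blacklist=blacklist.txt foo.cpp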
Reviewers: pcc, kcc, eugenis, vsk
Reviewed By: eugenis
Subscribers: dberris, cfe-commits, mgorny
Differential Revision: https://reviews.llvm.org/D37925
llvm-svn: 314171
This change will make it possible to use -fsanitize=function on Darwin and
possibly on other platforms. It fixes an issue with the way RTTI is stored into
function prologue data.
On Darwin, addresses stored in prologue data can't require run-time fixups and
must be PC-relative. Run-time fixups are undesirable because they necessitate
writable text segments, which can lead to security issues. And absolute
addresses are undesirable because they break PIE mode.
The fix is to create a private global which points to the RTTI, and then to
encode a PC-relative reference to the global into prologue data.
Differential Revision: https://reviews.llvm.org/D37597
llvm-svn: 313096
Because it is common to treat vector types as an array of their elements, or
even some other type that's not the element type, and thus index into them, we
can't use struct-path TBAA for these accesses. Even though we already treat all
vector types as equivalent to 'char', we were using field-offset information
for them with TBAA, and this renders undefined the intra-value indexing we
intend to allow. Note that, although 'char' is universally aliasing, with path
TBAA, we can still differentiate between accesses to s.a and s.b in
struct { char a, b; } s;. We can't use this capability as-is for vector types.
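Illustrative example of the intra-value indexing that must remain
well-defined:
typedef int v4si __attribute__((vector_size(16)));

int first_lane(v4si *p) {
  return ((int *)p)[0];   // treating the vector as an array of its elements
}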
Fixes PR33967.
llvm-svn: 312447
Summary:
An implementation of the ubsan runtime library suitable for use in production.
Minimal attack surface.
* No stack traces.
* Definitely no C++ demangling.
* No UBSAN_OPTIONS=log_file=/path (very suid-unfriendly). And no UBSAN_OPTIONS in general.
* As simple as possible.
Minimal CPU and RAM overhead.
* Source locations unnecessary in the presence of (split) debug info.
* Values and types (as in A+B overflows T) can be reconstructed from register/stack dumps, once you know what type of error you are looking at.
* The above two items save 3% binary size.
When UBSan is used with -ftrap-function=abort, sometimes it is hard to reason about failures. This library replaces abort with a slightly more informative message without much extra overhead. Since the ubsan interface is not stable, this code must reside in compiler-rt.
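A hypothetical way to opt in from the driver (assuming the matching clang
flag that accompanied this runtime):
clang -fsanitize=signed-integer-overflow,null -fsanitize-minimal-runtime foo.c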
Reviewers: pcc, kcc
Subscribers: srhines, mgorny, aprantl, krytarowski, llvm-commits
Differential Revision: https://reviews.llvm.org/D36810
llvm-svn: 312029
expressions
C++ allows us to reference static variables through member expressions. Prior to
this commit, non-integer static variables that were referenced using a member
expression were always emitted using lvalue loads. The old behaviour introduced
an inconsistency between regular uses of static variables and member expression
uses. For example, the following program compiled and linked successfully:
struct Foo {
constexpr static const char *name = "foo";
};
int main() {
return Foo::name[0] == 'f';
}
but this program failed to link because "Foo::name" wasn't found:
struct Foo {
constexpr static const char *name = "foo";
};
int main() {
Foo f;
return f.name[0] == 'f';
}
This commit ensures that constant static variables referenced through member
expressions are emitted in the same way as ordinary static variable references.
rdar://33942261
Differential Revision: https://reviews.llvm.org/D36876
llvm-svn: 311772
of class fails to map class static variable.
If a global variable is captured and has several redeclarations, this
may sometimes lead to a compiler crash. This patch fixes the issue by
working only with canonical declarations.
llvm-svn: 311479
the interface.
The ultimate goal here is to make it easier to do some more interesting
things in constant emission, like emit constant initializers that have
ignorable side-effects, or doing the majority of an initialization
in-place and then patching up the last few things with calls. But for
now this is mostly just a refactoring.
llvm-svn: 310964
OpenCL 2.0 atomic builtin functions have a scope argument which is ideally
represented as a synchronization scope argument in LLVM atomic instructions.
Clang supports translating Clang atomic builtin functions to LLVM atomic
instructions. However, it currently does not support the synchronization scope
of LLVM atomic instructions. Without this, users have to use LLVM assembly
code to implement OpenCL atomic builtin functions.
This patch adds OpenCL 2.0 atomic builtin functions as Clang builtin
functions, which support generating LLVM atomic instructions with a
synchronization scope operand.
Currently, only a constant memory scope argument is supported. Support for a
non-constant memory scope argument will be added later.
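An OpenCL C illustration of such a builtin with a constant memory scope
argument (compiled with -cl-std=CL2.0; names are arbitrary):
kernel void increment(global atomic_int *counter) {
  atomic_fetch_add_explicit(counter, 1,
                            memory_order_relaxed,
                            memory_scope_device);
}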
Differential Revision: https://reviews.llvm.org/D28691
llvm-svn: 310082
In r309007, I made -fsanitize=null a hard prerequisite for -fsanitize=vptr. I
did not see the need for the two checks to have separate null checking logic
for the same pointer. I expected the two checks to either always be enabled
together, or to be mutually compatible.
In the mailing list discussion re: r309007 it became clear that that isn't the
case. If a codebase is -fsanitize=vptr clean but not -fsanitize=null clean,
it's useful to have -fsanitize=vptr emit its own null check. That's what this
patch does: with it, -fsanitize=vptr can be used without -fsanitize=null.
Differential Revision: https://reviews.llvm.org/D36112
llvm-svn: 309846
The instrumentation generated by -fsanitize=vptr does not null check a
user pointer before loading from it. This causes crashes in the face of
UB member calls (this=nullptr), i.e., it's causing user programs to crash
only after UBSan is turned on.
The fix is to make run-time null checking a prerequisite for enabling
-fsanitize=vptr, and to then teach UBSan to reuse these run-time null
checks to make -fsanitize=vptr safe.
Testing: check-clang, check-ubsan, a stage2 ubsan-enabled build
Differential Revision: https://reviews.llvm.org/D35735
https://bugs.llvm.org/show_bug.cgi?id=33881
llvm-svn: 309007
The uses of an alloca may be in blocks other than the block containing the alloca.
Therefore, if the alloca address space is non-zero and it needs to be cast to the default
address space, the cast needs to be inserted in the same BB as the alloca instead of
at the current builder insertion point, since the current insertion point may be in a different BB.
Differential Revision: https://reviews.llvm.org/D35438
llvm-svn: 308313
The pointer overflow check gives false negatives when dealing with
expressions in which an unsigned value is subtracted from a pointer.
This is summarized in PR33430 [1]: ubsan permits the result of the
subtraction to be greater than "p", but it should not.
To fix the issue, we should track whether or not the pointer expression
is a subtraction. If it is, and the indices are unsigned, we know to
expect "p - <unsigned> <= p".
I've tested this by running check-{llvm,clang} with a stage2
ubsan-enabled build. I've also added some tests to compiler-rt, which
are in D34122.
[1] https://bugs.llvm.org/show_bug.cgi?id=33430
Differential Revision: https://reviews.llvm.org/D34121
llvm-svn: 307955
Certain targets (e.g. amdgcn) require global variables to stay in the global or constant address
space. In C or C++ global variables are emitted in the default (generic) address space.
This patch introduces virtual functions TargetCodeGenInfo::getGlobalVarAddressSpace
and TargetInfo::getConstantAddressSpace to handle this in a general way.
It only affects IR generated for amdgcn target.
Differential Revision: https://reviews.llvm.org/D33842
llvm-svn: 307470
In C++ all variables are in the default address space. A previous change cast
automatic variables to the default address space. However, that is not
sufficient since all temporary variables need to be cast to the default
address space as well.
This patch casts all temporary variables to the default address space except
those used for passing indirect arguments, since those are only used for loads and stores.
This patch only affects targets having a non-zero alloca address space.
Differential Revision: https://reviews.llvm.org/D33706
llvm-svn: 305711
Skip checks for null dereference, alignment violation, object size
violation, and dynamic type violation if the pointer points to volatile
data.
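Illustrative example: because the pointee is volatile, ubsan no longer
instruments this load with null, alignment, object-size or dynamic-type
checks.
int read_reg(volatile int *p) {
  return *p;   // access to volatile data: checks are skipped
}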
Differential Revision: https://reviews.llvm.org/D34262
llvm-svn: 305546
Summary:
The title says it all.
Reviewers: GorNishanov, rsmith
Reviewed By: GorNishanov
Subscribers: rjmccall, cfe-commits
Differential Revision: https://reviews.llvm.org/D34194
llvm-svn: 305496
Adding an unsigned offset to a base pointer has undefined behavior if
the result of the expression would precede the base. An example from
@regehr:
int foo(char *p, unsigned offset) {
return p + offset >= p; // This may be optimized to '1'.
}
foo(p, -1); // UB.
This patch extends the pointer overflow check in ubsan to detect invalid
unsigned pointer index expressions. It changes the instrumentation to
only permit non-negative offsets in pointer index expressions when all
of the GEP indices are unsigned.
Testing: check-llvm, check-clang run on a stage2, ubsan-instrumented
build.
Differential Revision: https://reviews.llvm.org/D33910
llvm-svn: 305216
Summary:
If the first parameter of the function is an ImplicitParamDecl, codegen
automatically marks it as an implicit argument carrying the `this` or `self`
pointer. Added an internal kind to ImplicitParamDecl to separate
'this', 'self', 'vtt' and other implicit parameters from other kinds of
parameters.
Reviewers: rjmccall, aaron.ballman
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D33735
llvm-svn: 305075
Check pointer arithmetic for overflow.
For some more background on this check, see:
https://wdtz.org/catching-pointer-overflow-bugs.html
https://reviews.llvm.org/D20322
Patch by Will Dietz and John Regehr!
This version of the patch is different from the original in a few ways:
- It introduces the EmitCheckedInBoundsGEP utility which inserts
checks when the pointer overflow check is enabled.
- It does some constant-folding to reduce instrumentation overhead.
- It does not check some GEPs in CGExprCXX. I'm not sure that
inserting checks here, or in CGClass, would catch many bugs.
Possible future directions for this check:
- Introduce CGF.EmitCheckedStructGEP, to detect overflows when
accessing structures.
Testing: Apart from the added lit test, I ran check-llvm and check-clang
with a stage2, ubsan-instrumented clang. Will and John have also done
extensive testing on numerous open source projects.
Differential Revision: https://reviews.llvm.org/D33305
llvm-svn: 304459
Summary:
We need to emit a barrier if the union field
is a CXXRecordDecl because it might have vptrs. The test code
was wrongly devirtualized. It also proves that having different
groups for different dynamic types is not sufficient.
Reviewers: rjmccall, rsmith, mehdi_amini
Subscribers: amharc, cfe-commits
Differential Revision: https://reviews.llvm.org/D31830
llvm-svn: 304448
The functions creating LValues propagated information about alignment
source. Extend the propagated data to also include information about
possible unrestricted aliasing. A new class LValueBaseInfo will
contain both AlignmentSource and MayAlias info.
This patch should not introduce any functional changes.
Differential Revision: https://reviews.llvm.org/D33284
llvm-svn: 303358
Use variadic templates instead of relying on <cstdarg> + sentinel.
This enforces better type checking and makes code more readable.
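A generic illustration of the pattern (not the actual API that was changed):
// Before: arguments pass through the ellipsis unchecked and the call has to
// be terminated with a sentinel such as a null pointer.
void addAllVarArg(const char *First, ...);

// After: every argument keeps its static type and no sentinel is needed.
void addOne(const char *Name);

template <typename... Ts>
void addAll(const Ts &...Names) {
  int Unused[] = {0, (addOne(Names), 0)...};  // calls addOne per argument
  (void)Unused;
}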
Differential revision: https://reviews.llvm.org/D32550
llvm-svn: 302572
Fix the nullability-assign check so that it can handle assignments into
C++ structs. Previously, such assignments were not instrumented.
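An illustrative case now handled (hypothetical types; the annotation is
Clang's _Nonnull qualifier):
struct Wrapper {
  int *_Nonnull Ptr;
};

void reset(Wrapper &W, int *MaybeNull) {
  W.Ptr = MaybeNull;  // -fsanitize=nullability now checks this assignment
}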
Testing: check-clang, check-ubsan, enabling the existing test in ObjC++
mode, and building some Apple frameworks with -fsanitize=nullability.
llvm-svn: 301482
It's possible to determine the alignment of an alloca at compile-time.
Use this information to skip emitting some runtime alignment checks.
Testing: check-clang, check-ubsan.
This significantly reduces the amount of alignment checks we emit when
compiling X86ISelLowering.cpp. Here are the numbers from patched/unpatched
clangs based on r301361.
------------------------------------------
| Setup | # of alignment checks |
------------------------------------------
| unpatched, -O0 | 47195 |
| patched, -O0 | 30876 | (-34.6%)
------------------------------------------
llvm-svn: 301377
The IR builder can constant-fold null checks if the pointer operand
points to a constant. If the "is-non-null" check is folded away to
"true", don't emit the null check + branch.
Testing: check-clang, check-ubsan.
This slightly reduces the amount of null checks we emit when compiling
X86ISelLowering.cpp. Here are the numbers from patched/unpatched clangs
based on r300371.
-------------------------------------
| Setup | # of null checks |
-------------------------------------
| unpatched, -O0 | 25251 |
| patched, -O0 | 23925 | (-5.3%)
-------------------------------------
llvm-svn: 300509
Pointers to the start of an alloca are non-null, so we don't need to
emit runtime null checks for them.
Testing: check-clang, check-ubsan.
This significantly reduces the amount of null checks we emit when
compiling X86ISelLowering.cpp. Here are the numbers from patched /
unpatched clangs based on r300371.
-------------------------------------
| Setup | # of null checks |
-------------------------------------
| unpatched, -O0 | 45439 |
| patched, -O0 | 25251 | (-44.4%)
-------------------------------------
llvm-svn: 300508
If a pointer is 1-byte aligned, there's no use in checking its
alignment. Somewhat surprisingly, ubsan can spend a significant amount
of time doing just that!
This loosely depends on D30283.
Testing: check-clang, check-ubsan, and a stage2 ubsan build.
Differential Revision: https://reviews.llvm.org/D30285
llvm-svn: 300371
This patch teaches ubsan to insert an alignment check for the 'this'
pointer at the start of each method/lambda. This allows clang to emit
significantly fewer alignment checks overall, because if 'this' is
aligned, so are its fields.
This is essentially the same thing r295515 does, but for the alignment
check instead of the null check. One difference is that we keep the
alignment checks on member expressions where the base is a DeclRefExpr.
There's an opportunity to diagnose unaligned accesses in this situation
(as pointed out by Eli, see PR32630).
Testing: check-clang, check-ubsan, and a stage2 ubsan build.
Along with the patch from D30285, this roughly halves the amount of
alignment checks we emit when compiling X86FastISel.cpp. Here are the
numbers from patched/unpatched clangs based on r298160.
------------------------------------------
| Setup | # of alignment checks |
------------------------------------------
| unpatched, -O0 | 24326 |
| patched, -O0 | 12717 | (-47.7%)
------------------------------------------
Differential Revision: https://reviews.llvm.org/D30283
llvm-svn: 300370
Previously, __cfi_check was created in the LTO optimization pipeline, which
means LLD has no way of knowing about the existence of this symbol
without rescanning the LTO output object. As a result, LLD fails to
export __cfi_check, even when given --export-dynamic-symbol flag.
llvm-svn: 299806
It's possible to load out-of-range values from bitfields backed by a
boolean or an enum. Check for UB loads from bitfields.
This is the motivating example:
struct S {
BOOL b : 1; // Signed ObjC BOOL.
};
S s;
s.b = 1; // This is actually stored as -1.
if (s.b == 1) // Evaluates to false, -1 != 1.
...
Changes since the original commit:
- Single-bit bools are a special case (see CGF::EmitFromMemory), and we
can't avoid dealing with them when loading from a bitfield. Don't try to
insert a check in this case.
Differential Revision: https://reviews.llvm.org/D30423
llvm-svn: 297389
It's possible to load out-of-range values from bitfields backed by a
boolean or an enum. Check for UB loads from bitfields.
This is the motivating example:
struct S {
BOOL b : 1; // Signed ObjC BOOL.
};
S s;
s.b = 1; // This is actually stored as -1.
if (s.b == 1) // Evaluates to false, -1 != 1.
...
Differential Revision: https://reviews.llvm.org/D30423
llvm-svn: 297298
Summary:
Because of the existence of branches out of GNU statement expressions, it
is possible that emitting cleanups for a full expression may cause the
new insertion point to not be dominated by the result of the inner
expression. Consider this example:
struct Foo { Foo(); ~Foo(); int x; };
int g(Foo, int);
int f(bool cond) {
int n = g(Foo(), ({ if (cond) return 0; 42; }));
return n;
}
Before this change, the result of the call to 'g' did not dominate its use
in the store to 'n'. The early return exit from the statement expression
branches to a shared cleanup block, which ends in a switch between the
fallthrough destination (the assignment to 'n') or the function exit
block.
This change solves the problem by spilling and reloading expression
evaluation results when any of the active cleanups have branches.
I audited the other call sites of enterFullExpression, and they don't
appear to keep any Values live across the site of the cleanup, except in
ARC code. I wasn't able to create a test case for ARC that exhibits this
problem, though.
Reviewers: rjmccall, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D30590
llvm-svn: 297084
This patch teaches ubsan to insert exactly one null check for the 'this'
pointer per method/lambda.
Previously, given a load of a member variable from an instance method
('this->x'), ubsan would insert a null check for 'this', and another
null check for '&this->x', before allowing the load to occur.
Similarly, given a call to a method from another method bound to the
same instance ('this->foo()'), ubsan would emit a redundant null check for
'this'. There is also a redundant null check in the case where the
object pointer is a reference ('Ref.foo()').
This patch teaches ubsan to remove the redundant null checks identified
above.
Testing: check-clang, check-ubsan, and a stage2 ubsan build.
I also compiled X86FastISel.cpp with -fsanitize=null using
patched/unpatched clangs based on r293572. Here are the number of null
checks emitted:
-------------------------------------
| Setup | # of null checks |
-------------------------------------
| unpatched, -O0 | 21767 |
| patched, -O0 | 10758 |
-------------------------------------
Changes since the initial commit:
- Don't introduce any unintentional object-size or alignment checks.
- Don't rely on IRGen of C labels in the test.
Differential Revision: https://reviews.llvm.org/D29530
llvm-svn: 295515
CodeGenFunction::EmitTypeCheck accepts a bool flag which controls
whether or not null checks are emitted. Make this a bit more flexible by
changing the bool to a SanitizerSet.
Needed for an upcoming change which deals with a scenario in which we
only want to emit null checks.
llvm-svn: 295514
This reverts commit r295401. It breaks the ubsan self-host. It inserts
object size checks once per C++ method which fire when the structure is
empty.
llvm-svn: 295494
This patch teaches ubsan to insert exactly one null check for the 'this'
pointer per method/lambda.
Previously, given a load of a member variable from an instance method
('this->x'), ubsan would insert a null check for 'this', and another
null check for '&this->x', before allowing the load to occur.
Similarly, given a call to a method from another method bound to the
same instance ('this->foo()'), ubsan would emit a redundant null check for
'this'. There is also a redundant null check in the case where the
object pointer is a reference ('Ref.foo()').
This patch teaches ubsan to remove the redundant null checks identified
above.
Testing: check-clang and check-ubsan. I also compiled X86FastISel.cpp
with -fsanitize=null using patched/unpatched clangs based on r293572.
Here are the number of null checks emitted:
-------------------------------------
| Setup | # of null checks |
-------------------------------------
| unpatched, -O0 | 21767 |
| patched, -O0 | 10758 |
-------------------------------------
Changes since the initial commit: don't rely on IRGen of C labels in the
test.
Differential Revision: https://reviews.llvm.org/D29530
llvm-svn: 295401