Summary:
This patch implements a new UBSan check, which verifies
that function arguments declared to be nonnull with __attribute__((nonnull))
are actually nonnull at runtime.
To implement this check, we pass the FunctionDecl to CodeGenFunction::EmitCallArgs
(where applicable), and if the function declaration has a nonnull attribute specified
for a certain formal parameter, we compare the corresponding RValue to null as
soon as it is calculated.
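For illustration only (this example is not from the patch or its tests), a call like the following is what the new check diagnoses at runtime when the corresponding sanitizer is enabled:

// Passing a null pointer for a parameter declared nonnull is undefined
// behavior; the new check reports it at runtime.
__attribute__((nonnull(1))) int string_length(const char *s) {
  int n = 0;
  while (s[n])
    ++n;
  return n;
}

int main() {
  const char *p = nullptr;
  return string_length(p); // null argument for a nonnull parameter: diagnosed
}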
Test Plan: regression test suite
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits, rnk
Differential Revision: http://reviews.llvm.org/D5082
llvm-svn: 217389
Because we may change the name of a FileEntry inside getFile, the name
returned by FileEntry::getName() could be destroyed. This was causing a
use-after-free when searching the HeaderFileInfo on-disk hashtable for a
module or pch.
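A minimal sketch of the hazard (simplified stand-in types, not the real FileManager code): holding the result of getName() across a call that may replace the entry's name leaves the caller pointing at freed storage.

#include <string>

// Sketch only: the entry owns its name, and renaming it frees the old buffer.
struct Entry {
  std::string Name;
  const char *getName() const { return Name.c_str(); }
};

void lookupAgain(Entry &E) {
  E.Name = "remapped/path.h"; // may reallocate, invalidating earlier getName() results
}

const char *buggy(Entry &E) {
  const char *N = E.getName(); // pointer into E.Name's current buffer
  lookupAgain(E);              // name replaced; N now dangles
  return N;                    // use-after-free when the caller reads it
}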
llvm-svn: 217385
There were code paths that were duplicated for constructors and destructors just
because we have both CXXCtorType and CXXDtorType.
This patch introduces a unified enum and reduces code duplication a bit.
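A sketch of the idea (the names StructorKind and mangleSuffix below are hypothetical, not necessarily those introduced by the patch): a single "structor" kind lets helpers take one parameter instead of being duplicated for CXXCtorType and CXXDtorType.

// Hypothetical unified enumeration; the patch's actual names may differ.
enum class StructorKind {
  CompleteCtor, // corresponds to Ctor_Complete
  BaseCtor,     // corresponds to Ctor_Base
  CompleteDtor, // corresponds to Dtor_Complete
  BaseDtor,     // corresponds to Dtor_Base
  DeletingDtor  // corresponds to Dtor_Deleting
};

// One helper instead of a ctor overload and a dtor overload.
const char *mangleSuffix(StructorKind K) {
  switch (K) {
  case StructorKind::CompleteCtor: return "C1";
  case StructorKind::BaseCtor:     return "C2";
  case StructorKind::CompleteDtor: return "D1";
  case StructorKind::BaseDtor:     return "D2";
  case StructorKind::DeletingDtor: return "D0";
  }
  return nullptr; // unreachable with a covered switch
}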
llvm-svn: 217383
Assert in scheduler from an inserted copy_to_regclass from
a constant.
This only seems to break sometimes when a constant initializer
address is forced into VGPRs in a non-entry block. No test
since the only case I've managed to hit only happens with a future
patch, and that case will also not be a problem once scalar instructions
are used in non-entry blocks.
llvm-svn: 217380
This cleans up a couple of -Wcovered-switch-default warnings from the build by
removing the default case from a couple of switches that are fully covered.
This is generally better, as it will help identify when a new item is added to
the enumeration but the use sites are not updated.
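A minimal illustration of the pattern (not code from this change): with every enumerator handled, a default case only suppresses the compiler's unhandled-enumerator diagnostic.

enum class Color { Red, Green, Blue };

const char *name(Color C) {
  switch (C) {
  case Color::Red:   return "red";
  case Color::Green: return "green";
  case Color::Blue:  return "blue";
  // No default: if a new enumerator is added, the compiler's switch coverage
  // warning points directly at this switch.
  }
  return "unknown"; // unreachable; keeps -Wreturn-type quiet
}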
llvm-svn: 217376
This adds a definition for the TypeValidatorImpl_CXX destructor. Because the
destructor is the first virtual method and is declared out-of-line, it also serves as
the key function. Since no definition was present, no virtual table for
TypeValidatorImpl_CXX was emitted, which results in link failures due to
references to undefined symbols.
Also add a definition for a TypeValidatorImpl constructor which was declared
out-of-line and referenced in a constructor for TypeValidatorImpl_CXX.
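For illustration (simplified names, not the actual LLDB classes): defining the first out-of-line virtual member function is what triggers vtable emission under the Itanium ABI's key-function model.

// widget.h -- Widget stands in for a class like TypeValidatorImpl_CXX.
struct Widget {
  Widget();
  virtual ~Widget(); // first virtual method, declared out-of-line: the key function
  virtual void validate();
};

// widget.cpp -- without these definitions, no translation unit emits Widget's
// vtable, and every use of the class fails to link with
// "undefined reference to `vtable for Widget'".
Widget::Widget() = default;
Widget::~Widget() = default;
void Widget::validate() {}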
llvm-svn: 217375
Ever wanted to fix all the header guards in clang? Now it's easy.
Make sure clang-tidy is in $PATH and a compilation database is available.
$ ./run-clang-tidy.py -checks=-*,llvm-header-guard -fix
... get coffee (or more CPU cores) ...
$ svn diff
Some may argue that this is just a glorified xargs -P, but it does a bit more ;)
Differential Revision: http://reviews.llvm.org/D5188
llvm-svn: 217368
Export a strong defined symbol if it coalesces away a weak symbol
defined in a shared library.
Previously, LLD did not export a strong defined symbol if it coalesced away a
weak symbol defined in a shared library. This bug affects all ELF
architectures and leads to a segfault:
% cat foo.c
extern int __attribute__((weak)) flag;
int foo() { return flag; }
% cat main.c
int flag = 1;
int foo();
int main() { return foo() == 1 ? 0 : -1; }
% clang -c -fPIC foo.c main.c
% lld -flavor gnu -target x86_64 -shared -o libfoo.so ... foo.o
% lld -flavor gnu -target x86_64 -o a.out ... main.o libfoo.so
% ./a.out
Segmentation fault
The problem is caused by the fact that we lose all information about
coalesced symbols after the `Resolver::resolve()` method is finished.
The patch solves the problem by overriding the
`LinkingContext::notifySymbolTableCoalesce()` method and saving names
of coalesced symbols. Later in the `buildDynamicSymbolTable()` routine
we use this information to export these symbols.
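A rough sketch of the approach (the class, method signature, and container below are illustrative stand-ins, not lld's actual interfaces): record the names of symbols coalesced away during resolution, then consult that set when deciding what to export into the dynamic symbol table.

#include <set>
#include <string>

// Sketch only: the real code overrides LinkingContext::notifySymbolTableCoalesce().
class ELFLinkingContextSketch {
  std::set<std::string> CoalescedSymbolNames;

public:
  // Called by the resolver when a strong definition coalesces away another
  // symbol (e.g. a weak definition coming from a shared library).
  void notifySymbolTableCoalesce(const std::string &replacedName) {
    CoalescedSymbolNames.insert(replacedName);
  }

  // Later, while building the dynamic symbol table, export a defined symbol
  // either if it would normally be exported or if it coalesced away a
  // shared-library symbol of the same name.
  bool shouldExport(const std::string &name, bool normallyExported) const {
    return normallyExported || CoalescedSymbolNames.count(name) != 0;
  }
};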
llvm-svn: 217363
Linking Release+Asserts executable lldb-gdbserver (without symbols)
liblldb.so: undefined reference to `lldb_private::MemoryHistoryASan::Initialize()'
liblldb.so: undefined reference to `lldb_private::MemoryHistoryASan::Terminate()'
liblldb.so: undefined reference to `vtable for lldb_private::TypeValidatorImpl_CXX'
liblldb.so: undefined reference to `lldb_private::TypeValidatorImpl::TypeValidatorImpl(lldb_private::TypeValidatorImpl::Flags const&)'
liblldb.so was underlinked against lldbPluginMemoryHistoryASan.a when building
with the make-based build system (as opposed to CMake).
llvm-svn: 217360
When a file is not found, produce a proper error message. The previous
diagnostic reported a file format error, which made me wonder for a while why
the format was wrong when, in fact, the file simply was not found.
llvm-svn: 217359
By default, the linker does not create a separate segment to hold read-only data.
This option overrides that behavior by creating a separate read-only segment
for read-only data.
llvm-svn: 217358
We would previously simply assume that the write would always succeed. However,
write(2) may return -1 on error, and it may also perform a partial write (in
which case the returned number of bytes is less than the requested count).
Explicitly check whether an error condition is encountered; this was previously
not caught because we default-initialized success to true. Add an assertion that
we always perform a complete write (a retry loop could be added to ensure
that we eventually write everything).
This was caught by GCC's signed comparison warning and manual inspection.
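As a sketch of the full-write loop the message alludes to (generic POSIX code, not the project's wrapper):

#include <cerrno>
#include <cstddef>
#include <unistd.h>

// Write the whole buffer, retrying on short writes and on EINTR.
// Returns true on success, false if write(2) reports an error.
static bool writeAll(int fd, const char *buf, size_t len) {
  while (len > 0) {
    ssize_t n = ::write(fd, buf, len);
    if (n < 0) {
      if (errno == EINTR)
        continue;   // interrupted before any data was written; retry
      return false; // genuine error from write(2)
    }
    buf += n;       // short write: advance past the bytes already written
    len -= static_cast<size_t>(n);
  }
  return true;
}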
llvm-svn: 217355
Temporarily comment out the test for really-large powers of two. This seems to
be host-sensitive for some reason... trying to fix the clang-i386-freebsd
builder.
llvm-svn: 217351
This makes use of the recently-added @llvm.assume intrinsic to implement a
__builtin_assume(bool) intrinsic (to provide additional information to the
optimizer). This hooks up __assume in MS-compatibility mode to mirror
__builtin_assume (the semantics have been intentionally kept compatible), and
implements GCC's __builtin_assume_aligned as assume(((p - o) & mask) == 0). LLVM
now contains special logic to deal with assumptions of this form.
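Typical uses of these builtins look like this (illustrative only, not taken from the patch's tests):

#include <cstddef>

int sum(const int *p, std::size_t n) {
  // Lowered to @llvm.assume: the optimizer may rely on n being nonzero.
  __builtin_assume(n != 0);

  // GCC-compatible: the result may be treated as pointing to 32-byte-aligned data.
  const int *q = static_cast<const int *>(__builtin_assume_aligned(p, 32));

  int s = 0;
  for (std::size_t i = 0; i < n; ++i)
    s += q[i];
  return s;
}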
llvm-svn: 217349
This adds a basic (but important) use of @llvm.assume calls in ScalarEvolution.
When SE is attempting to validate a condition guarding a loop (such as whether
or not the loop count can be zero), this check should also include dominating
assumptions.
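For example (an illustrative source-level sketch, not the commit's test): an assumption that dominates the loop can settle the guard question for ScalarEvolution.

void scale(float *a, int n) {
  // The assume dominates the loop, so when ScalarEvolution validates the
  // guarding condition (can the trip count be zero?), it may now use this fact.
  __builtin_assume(n > 0);
  for (int i = 0; i < n; ++i)
    a[i] *= 2.0f;
}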
llvm-svn: 217348
InstCombine just got a bit smarter about checking known bits of returned
values, and because this test runs the optimizer, it requires an update. We
should really rewrite this test to directly check the IR output from CodeGen.
llvm-svn: 217347
From a combination of @llvm.assume calls (and perhaps through other means, such
as range metadata), it is possible that all bits of a return value might be
known. Previously, InstCombine did not check for this (which is understandable,
given that one would expect constant propagation to handle such cases), but this
meant that we'd miss simple cases where assumptions are involved.
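A contrived source-level illustration (the commit's actual test is in IR): the assumption pins down every bit of the value being returned, so it can be replaced with a constant.

int pinned(int x) {
  // All bits of x are known here, so the returned value is simply 42.
  __builtin_assume(x == 42);
  return x;
}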
llvm-svn: 217346
This change teaches LazyValueInfo to use the @llvm.assume intrinsic. Like with
the known-bits change (r217342), this requires feeding a "context" instruction
pointer through many functions. Aside from a little refactoring to reuse the
logic that turns predicates into constant ranges in LVI, the only new code is
that which can 'merge' the range from an assumption into that otherwise
computed. There is also a small addition to JumpThreading so that it can have
LVI use assumptions in the same block as the comparison feeding a conditional
branch.
With this patch, we can now simplify this as expected:
void bar();

int foo(int a) {
  __builtin_assume(a > 5);
  if (a > 3) {
    bar();
    return 1;
  }
  return 0;
}
llvm-svn: 217345
This adds a ScalarEvolution-powered transformation that updates load, store and
memory intrinsic pointer alignments based on invariant(((a + q) & b) == 0)
expressions. Many of the simple cases we can get with ValueTracking, but we
still need something like this for the more complicated cases (such as those
with an offset) that require some algebra. Note that the optional third argument
of gcc's __builtin_assume_aligned provides exactly this kind of 'misalignment'
offset, for which this kind of logic is necessary.
The primary motivation is to fixup alignments for vector loads/stores after
vectorization (and unrolling). This pass is added to the optimization pipeline
just after the SLP vectorizer runs (which, admittedly, does not preserve SE,
although I imagine it could). Regardless, I actually don't think that the
preservation matters too much in this case: SE computes lazily, and this pass
won't issue any SE queries unless there are any assume intrinsics, so there
should be no real additional cost in the common case (SLP does preserve DT and
LoopInfo).
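For instance (an illustrative sketch rather than one of the pass's regression tests), the three-argument form of __builtin_assume_aligned expresses exactly such a misalignment offset:

#include <cstddef>

void copy(float *dst, const float *src, std::size_t n) {
  // src is asserted to be 4 bytes past a 32-byte boundary, i.e.
  // ((src - 4) & 31) == 0; turning that into useful alignment facts for the
  // accesses below is the kind of algebra this pass performs.
  const float *s =
      static_cast<const float *>(__builtin_assume_aligned(src, 32, 4));
  for (std::size_t i = 0; i < n; ++i)
    dst[i] = s[i];
}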
llvm-svn: 217344
This builds on r217342, which added the infrastructure to compute known bits
using assumptions (@llvm.assume calls). That original commit added only a few
patterns (to catch common cases related to determining pointer alignment); this
change adds several other patterns for simple cases.
r217342 handled assume(v & b = a): for those bits in the mask b that are known
to be one, we can propagate known bits from a to v. It also had a known-bits
transfer for assume(a = b). This patch adds:
assume(~(v & b) = a) : For those bits in the mask b that are known to be one,
                       we can propagate inverted known bits from a to v.
assume(v | b = a)    : For those bits in b that are known to be zero, we can
                       propagate known bits from a to v.
assume(~(v | b) = a) : For those bits in b that are known to be zero, we can
                       propagate inverted known bits from a to v.
assume(v ^ b = a)    : For those bits in b that are known to be zero, we can
                       propagate known bits from a to v. For those bits in b
                       that are known to be one, we can propagate inverted
                       known bits from a to v.
assume(~(v ^ b) = a) : For those bits in b that are known to be zero, we can
                       propagate inverted known bits from a to v. For those
                       bits in b that are known to be one, we can propagate
                       known bits from a to v.
assume(v << c = a)   : For those bits in a that are known, we can propagate them
                       to known bits in v shifted to the right by c.
assume(~(v << c) = a): For those bits in a that are known, we can propagate them
                       inverted to known bits in v shifted to the right by c.
assume(v >> c = a)   : For those bits in a that are known, we can propagate them
                       to known bits in v shifted to the left by c.
assume(~(v >> c) = a): For those bits in a that are known, we can propagate them
                       inverted to known bits in v shifted to the left by c.
assume(v >=_s c) where c is non-negative: The sign bit of v is zero.
assume(v >_s c) where c is at least -1:   The sign bit of v is zero.
assume(v <=_s c) where c is negative:     The sign bit of v is one.
assume(v <_s c) where c is non-positive:  The sign bit of v is one.
assume(v <=_u c): Transfer the known high zero bits.
assume(v <_u c) : Transfer the known high zero bits (if c is known to be a
                  power of 2, transfer one more).
A small addition to InstCombine was necessary for some of the test cases. The
problem is that when InstCombine was simplifying and, or, etc., it would fail to
check the 'do I know all of the bits' condition before checking less specific
conditions, and so would not fully constant-fold the result. I'm not sure how to
trigger this aside from using assumptions, so I've just included the change
here.
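As a concrete, source-level illustration of one of the new patterns (a hypothetical example, not one of the commit's IR tests): an unsigned-compare assumption transfers high zero bits.

unsigned high_byte(unsigned v) {
  // assume(v <_u c): v is known to be below 256, so bits 8 and above are zero.
  __builtin_assume(v < 256u);
  return v >> 8; // with the high bits known zero, this folds to 0
}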
llvm-svn: 217343
This change, which allows @llvm.assume to be used from within computeKnownBits
(and other associated functions in ValueTracking), adds some (optional)
parameters to computeKnownBits and friends. These functions now (optionally)
take a "context" instruction pointer, an AssumptionTracker pointer, and also a
DomTree pointer, and most of the changes are just to pass this new information
when it is easily available from InstSimplify, InstCombine, etc.
As explained below, the significant conceptual change is that known properties
of a value might depend on the control-flow location of the use (we care
whether the @llvm.assume dominates the use, because assumptions have
control-flow dependencies). This means that, when we ask whether bits are known
in a value, we might get different answers for different uses.
The significant changes are all in ValueTracking. Two main changes: First, as
with the rest of the code, new parameters need to be passed around. To make
this easier, I grouped them into a structure, and I made internal static
versions of the relevant functions that take this structure as a parameter. The
new code does what you might expect: it looks for @llvm.assume calls that make
use of the value we're trying to learn something about (often indirectly),
attempts to pattern match that expression, and uses the result if successful.
By making use of the AssumptionTracker, the process of finding @llvm.assume
calls is not expensive.
Part of the structure being passed around inside ValueTracking is a set of
already-considered @llvm.assume calls. This prevents a query using, for
example, assume(a == b), from recursing on itself. The context and DT parameters
are used to find applicable assumptions. An assumption needs to dominate the
context instruction, or come after it deterministically. In the latter case we
only handle the specific situation where both the assumption and the context
instruction are in the same block, and we need to exclude assumptions from
being used to simplify their own ephemeral values (those which contribute only
to the assumption), because otherwise the assumption would prove its feeding
comparison trivial and would be removed.
This commit adds the plumbing and the logic for a simple masked-bit propagation
(just enough to write a regression test). Future commits add more patterns
(and, correspondingly, more regression tests).
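To make the context sensitivity concrete (a source-level sketch, not the commit's IR regression test): the same question about bit 3 of x gets different answers at different uses, because only one of those uses is dominated by the assume.

unsigned bit3(unsigned x, bool flag) {
  if (flag) {
    __builtin_assume((x & 8u) == 8u);
    return x & 8u; // this use is dominated by the assume: folds to 8
  }
  return x & 8u;   // this use is not dominated: nothing is known about bit 3
}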
llvm-svn: 217342