In short, CVE-2016-2143 will crash the machine if a process uses both >4TB
virtual addresses and fork(). ASan, TSan, and MSan will, by necessity, map
a sizable chunk of virtual address space, which is much larger than 4TB.
Even worse, sanitizers will always use fork() for llvm-symbolizer when a bug
is detected. Disable all three by aborting on process initialization if
the running kernel version is not known to contain a fix.
Unfortunately, there's no reliable way to detect the fix without crashing
the kernel. So, we rely on whitelisting - I've included a list of upstream
kernel versions that will work. In case someone uses a distribution kernel
or applied the fix themselves, an override switch is also included.
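As a rough illustration of the whitelisting approach (the version threshold and
function name below are placeholders for this sketch, not the actual list
shipped in the patch):

  #include <sys/utsname.h>
  #include <cstdio>
  #include <cstdlib>

  // Hypothetical sketch: parse the running kernel release and refuse to start
  // if it is older than a placeholder version assumed to contain the fix.
  static bool KernelLooksFixed() {
    struct utsname buf;
    if (uname(&buf) != 0)
      return false;
    int major = 0, minor = 0;
    if (sscanf(buf.release, "%d.%d", &major, &minor) != 2)
      return false;
    // "4.5" is a placeholder threshold, not the real upstream whitelist.
    return major > 4 || (major == 4 && minor >= 5);
  }

  int main() {
    if (!KernelLooksFixed()) {
      fprintf(stderr, "kernel may be vulnerable to CVE-2016-2143; aborting\n");
      abort();
    }
    return 0;
  }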
Differential Revision: http://reviews.llvm.org/D19576
llvm-svn: 267747
The current interface assumes that Go calls ProcWire/ProcUnwire
to establish the association between a thread and a proc.
With the wisdom of hindsight, this interface does not work
very well. I had to sprinkle the Go scheduler with wire/unwire
calls, and any mistake leads to hard-to-debug crashes.
This is not something one wants to maintain.
Fortunately, there is a simpler solution: we can ask the Go
runtime which Processor is current, a question that is very
easy to answer on the Go side.
Switch to such an interface.
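A minimal sketch of the idea (the names below are illustrative, not the real
TSan/Go interface): rather than maintaining wire/unwire bookkeeping, the
runtime asks the Go side for the current Processor whenever it needs one.

  // Stand-in for the Processor and for the callback the Go runtime would provide.
  struct Processor { int id; };

  extern "C" Processor *example_go_current_proc() {
    static thread_local Processor proc{0};
    return &proc;  // the real answer would come from the goroutine's current P
  }

  // Runtime side: no ProcWire/ProcUnwire calls, just query on demand.
  static Processor *cur_proc() { return example_go_current_proc(); }

  int main() { return cur_proc()->id; }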
llvm-svn: 267703
tsan_debugging.cc: In function ‘void* __tsan_get_current_report()’:
tsan_debugging.cc:61:18: warning: cast from type ‘const __tsan::ReportDesc*’
to type ‘void*’ casts away qualifiers [-Wcast-qual]
return (void *)rep;
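The lines above quote the warning being silenced. One way to fix it (a sketch,
not necessarily the exact change) is to make the removal of constness explicit
with const_cast instead of a C-style cast:

  // Simplified stand-in for the debugging API: the report is stored as a
  // const pointer internally but exposed as an opaque void*.
  struct ReportDescExample {};
  static const ReportDescExample *current_report = nullptr;

  void *example_get_current_report() {
    return const_cast<ReportDescExample *>(current_report);
  }

  int main() { return example_get_current_report() != nullptr; }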
llvm-svn: 267679
This is a reincarnation of http://reviews.llvm.org/D17648 with the bug fix pointed out by Adhemerval (zatrazz).
Currently ThreadState holds both logical state (required for the race-detection algorithm, user-visible)
and physical state (various caches, most notably the malloc cache). Move the physical state into a new
Processor entity. Besides being the right thing from an abstraction point of view, this solves several
problems:
1. Cache everything on the P level in Go. Currently we cache on a mix of goroutine and OS thread levels.
This unnecessarily increases memory consumption.
2. Properly handle free operations in Go. Frees are issued by the GC, which has no goroutine context.
As a result we could not do anything more than just clear the shadow; for example, we leaked
sync objects and heap block descriptors.
3. This will allow us to get rid of libc malloc in Go (we now have a Processor context for the internal allocator cache).
This in turn will allow us to get rid of the dependency on libc entirely.
4. Potentially we can make the Processor per-CPU in C++ mode instead of per-thread, which will
reduce resource consumption.
The distinction between Thread and Processor is currently used only by Go; C++ creates a Processor per OS thread,
which is equivalent to the current scheme.
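A simplified sketch of the split (illustrative types only, not the actual TSan
definitions):

  // Physical state: caches owned by whichever thread/goroutine currently runs
  // on this Processor; it can be handed around without touching logical state.
  struct ProcessorExample {
    int malloc_cache_placeholder;
    int sync_obj_cache_placeholder;
  };

  // Logical state: what the race-detection algorithm and the user see.
  struct ThreadStateExample {
    int tid;
    ProcessorExample *proc;  // attached while the thread executes
  };

  int main() {
    ProcessorExample p{};
    ThreadStateExample thr{0, &p};  // C++ mode: one Processor per OS thread
    return thr.proc == &p ? 0 : 1;
  }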
llvm-svn: 267678
The field "pid" in ReportThread is used to store the OS-provided thread ID (pthread_self or gettid). The name "pid" suggests it's a process ID, which it isn't. Let's rename it.
Differential Revision: http://reviews.llvm.org/D19365
llvm-svn: 266994
The real problem is that sanitizer_print_stack_trace obtains the current PC
and expects it to still appear in the stack trace collected by the subsequent
function calls. We don't prevent tail calls in sanitizer runtimes, so this
assumption does not necessarily hold.
We add the "always inline" attribute to PrintCurrentStackSlow to address this
issue. That solution is not fully reliable, but unfortunately we don't see
any simple, reliable alternative.
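A sketch of the workaround (the function name follows the message above; the
body and the PC capture are stand-ins for the real unwinder plumbing):

  #include <stdint.h>

  // Forcing inlining into the interface function keeps the captured PC's frame
  // in the unwound call chain, so a tail call cannot elide it.
  __attribute__((always_inline)) static inline void
  PrintCurrentStackSlowExample(uintptr_t pc) {
    // ...unwind the stack and print it; `pc` is expected to appear in it...
    (void)pc;
  }

  extern "C" void example_print_stack_trace() {
    uintptr_t pc = (uintptr_t)__builtin_return_address(0);  // stand-in for the current PC
    PrintCurrentStackSlowExample(pc);
  }

  int main() {
    example_print_stack_trace();
    return 0;
  }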
Reviewers: samsonov, hfinkel, kbarton, tjablin, dvyukov, kcc
http://reviews.llvm.org/D19148
Thanks to Hal, dvyukov, and kcc for the invaluable discussion; I have even
borrowed part of dvyukov's summary as my commit message!
llvm-svn: 266869
In short, CVE-2016-2143 will crash the machine if a process uses both >4TB
virtual addresses and fork(). ASan, TSan, and MSan will, by necessity, map
a sizable chunk of virtual address space, which is much larger than 4TB.
Even worse, sanitizers will always use fork() for llvm-symbolizer when a bug
is detected. Disable all three by aborting on process initialization if
the running kernel version is not known to contain a fix.
Unfortunately, there's no reliable way to detect the fix without crashing
the kernel. So, we rely on whitelisting - I've included a list of upstream
kernel versions that will work. In case someone uses a distribution kernel
or applied the fix themselves, an override switch is also included.
Differential Revision: http://reviews.llvm.org/D18915
llvm-svn: 266297
The custom zone implementation for OS X must not return 0 (even for 0-sized allocations). Returning 0 indicates that the pointer doesn't belong to the zone. This can break existing applications. The underlying allocator allocates 1 byte for 0-sized allocations anyway, so returning 1 in this case is okay.
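A sketch of the rule (hypothetical signature; the real code lives in the
sanitizer malloc-zone glue):

  #include <cstddef>

  // The zone's size callback must never report 0 for a pointer it owns,
  // because callers treat 0 as "this pointer does not belong to the zone".
  size_t example_zone_size(const void *ptr, size_t real_allocation_size) {
    if (ptr == nullptr)
      return 0;                        // genuinely not ours
    if (real_allocation_size == 0)
      return 1;                        // 0-sized allocations occupy 1 byte anyway
    return real_allocation_size;
  }

  int main() {
    int dummy = 0;
    return example_zone_size(&dummy, 0) == 1 ? 0 : 1;
  }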
Differential Revision: http://reviews.llvm.org/D19100
llvm-svn: 266283
We need to handle the case when the handler is NULL in dispatch_source_set_cancel_handler and similar interceptors.
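The shape of the fix, sketched with illustrative names (the real interceptor
deals with GCD blocks rather than plain function pointers):

  typedef void (*handler_fn)(void *context);

  static void example_set_cancel_handler(void *source, handler_fn handler,
                                          void *context) {
    if (handler == nullptr) {
      // Forward a NULL handler unchanged; there is nothing to wrap or
      // synchronize on, and wrapping it would crash when invoked.
      (void)source;
      return;
    }
    // Otherwise wrap `handler` so that its submission release-synchronizes
    // with its execution, then forward the wrapper.
    handler(context);  // stand-in for forwarding the wrapped handler
  }

  int main() {
    example_set_cancel_handler(nullptr, nullptr, nullptr);  // must not crash
    return 0;
  }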
Differential Revision: http://reviews.llvm.org/D18968
llvm-svn: 266080
OS X provides atomic functions in libkern/OSAtomic.h. These provide atomicity guarantees, and they have alternatives with barrier semantics. This patch adds proper TSan support for the functions from libkern/OSAtomic.h.
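As a rough analogy in standard C++ (not the interceptor code itself): the plain
OSAtomic calls behave like weakly ordered read-modify-write operations, while
the *Barrier variants add full ordering, and the interceptors model both so
TSan sees the synchronization.

  #include <atomic>
  #include <cstdint>

  // OSAtomicAdd32-style semantics: atomic, but no ordering guarantees.
  int32_t example_add32(int32_t amount, std::atomic<int32_t> *value) {
    return value->fetch_add(amount, std::memory_order_relaxed) + amount;
  }

  // OSAtomicAdd32Barrier-style semantics: atomic plus a full barrier.
  int32_t example_add32_barrier(int32_t amount, std::atomic<int32_t> *value) {
    return value->fetch_add(amount, std::memory_order_seq_cst) + amount;
  }

  int main() {
    std::atomic<int32_t> v{0};
    return example_add32(1, &v) + example_add32_barrier(1, &v) == 3 ? 0 : 1;
  }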
Differential Revision: http://reviews.llvm.org/D18500
llvm-svn: 265665
To avoid using the public header (tsan_interface_atomic.h), which has different data types, let's add all the __tsan_atomic* functions to tsan_interface.h.
Differential Revision: http://reviews.llvm.org/D18543
llvm-svn: 265663
Adding an interceptor with two more release+acquire pairs to avoid false positives with dispatch_apply.
Differential Revision: http://reviews.llvm.org/D18722
llvm-svn: 265662
XPC APIs have async callbacks, and we need some more happens-before edges to avoid false positives. This patch adds them, plus a test case (sorry for the long boilerplate code, but XPC just needs all that).
Differential Revision: http://reviews.llvm.org/D18493
llvm-svn: 265661
GCD has APIs for event sources, we need some more release-acquire pairs to avoid false positives in TSan.
Differential Revision: http://reviews.llvm.org/D18515
llvm-svn: 265660
In the interceptor for dispatch_sync, we're currently missing synchronization between the callback and the code *after* the call to dispatch_sync. This patch fixes this by adding an extra release+acquire pair to dispatch_sync() and similar APIs. Added a testcase.
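An illustrative shape of the change (Release/Acquire are stubs standing in for
TSan's internal calls; the second pair is the one this patch adds):

  static void Release(void *addr) { (void)addr; }  // stub for TSan's release
  static void Acquire(void *addr) { (void)addr; }  // stub for TSan's acquire

  // Queue side: the wrapper submitted instead of the user's block.
  static void wrapped_block(void (*user_block)(void *), void *ctx) {
    Acquire(ctx);        // pairs with the caller's Release below (existing)
    user_block(ctx);
    Release(ctx);        // pairs with the caller's Acquire after the call (new)
  }

  // Caller side.
  static void example_dispatch_sync(void (*user_block)(void *), void *ctx) {
    Release(ctx);                      // existing submission edge
    wrapped_block(user_block, ctx);    // stands in for REAL(dispatch_sync)
    Acquire(ctx);                      // new: code after dispatch_sync is ordered
  }

  int main() {
    int data = 0;
    example_dispatch_sync([](void *p) { *(int *)p = 42; }, &data);
    return data == 42 ? 0 : 1;
  }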
Differential Revision: http://reviews.llvm.org/D18502
llvm-svn: 265659
Summary:
After patch https://lkml.org/lkml/2015/12/21/340 was introduced into the
Linux kernel, the random gap between the stack and the heap increased
from 128M to 36G on 39-bit aarch64, and it is almost impossible
to cover such a big range. So we need to disable the randomized virtual
address space on aarch64 Linux.
Reviewers: llvm-commits, zatrazz, dvyukov, rengolin
Subscribers: aemerson, rengolin, tberghammer, danalbert, srhines
Differential Revision: http://reviews.llvm.org/D18526
llvm-svn: 265366
We reset thr->ignore_reads_and_writes but forgot to do
thr->fast_state.ClearIgnoreBit(). So ignores were not effectively
reset, and fast_state.ignore_bit was corrupted if the signal handler
itself used ignores.
Properly reset/restore fast_state.ignore_bit around signal handlers.
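A sketch of the reset/restore (field and method names follow the message
above; the surrounding types are simplified stand-ins):

  struct FastStateExample {
    bool ignore_bit = false;
    void SetIgnoreBit() { ignore_bit = true; }
    void ClearIgnoreBit() { ignore_bit = false; }
  };

  struct ThreadStateExample {
    int ignore_reads_and_writes = 0;
    FastStateExample fast_state;
  };

  static void call_signal_handler(ThreadStateExample *thr, void (*handler)(int),
                                  int sig) {
    int saved = thr->ignore_reads_and_writes;
    thr->ignore_reads_and_writes = 0;
    thr->fast_state.ClearIgnoreBit();    // the previously forgotten part
    handler(sig);
    thr->ignore_reads_and_writes = saved;
    if (saved)
      thr->fast_state.SetIgnoreBit();    // restore even if the handler used ignores
  }

  int main() {
    ThreadStateExample thr;
    call_signal_handler(&thr, [](int) {}, 0);
    return thr.fast_state.ignore_bit ? 1 : 0;
  }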
llvm-svn: 265288
This patch fixes the custom ThreadState destruction on OS X to avoid crashing when dispatch_main calls pthread_exit which quits the main thread.
Differential Revision: http://reviews.llvm.org/D18496
llvm-svn: 264627
Summary:
Currently, sanitizer_common_interceptors.inc has an implicit, undocumented
assumption that the sanitizer including it has previously declared
interceptors for memset and memmove. Since the memset, memmove, and memcpy
routines require interception by many sanitizers, we add them to the
set of common interceptions, both to address the undocumented assumption
and to speed future tool development. They are intercepted under a new
flag intercept_intrin.
The tsan interceptors are removed in favor of the new common versions. The
asan and msan interceptors for these are more complex (they incur extra
interception steps and their function bodies are exposed to the compiler)
so they opt out of the common versions and keep their own.
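A simplified sketch of what a common version looks like (the flag name follows
the message; the check and REAL() plumbing are stand-ins, not the actual
sanitizer_common_interceptors.inc code):

  #include <cstddef>
  #include <cstring>

  static bool flag_intercept_intrin = true;        // stands in for the runtime flag
  static void check_range(void *p, size_t n) {     // tool-specific memory check
    (void)p; (void)n;
  }

  extern "C" void *example_memset(void *dst, int v, size_t n) {
    if (flag_intercept_intrin)
      check_range(dst, n);      // e.g. report races or poisoned bytes
    return memset(dst, v, n);   // stands in for REAL(memset)
  }

  int main() {
    char buf[8];
    example_memset(buf, 0, sizeof(buf));
    return buf[0];
  }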
Reviewers: vitalybuka
Subscribers: zhaoqin, llvm-commits, kcc
Differential Revision: http://reviews.llvm.org/D18465
llvm-svn: 264451
On OS X, fork() under TSan asserts (in debug builds only) because REAL(fork) calls some intercepted functions, which check that no internal locks are held via CheckNoLocks(). But the wrapper of fork intentionally holds some locks. This patch fixes that by using ScopedIgnoreInterceptors during the call to REAL(fork). After that, all the fork-based tests seem to pass on OS X, so let's just remove all the UNSUPPORTED: darwin annotations we have.
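An illustrative shape of the fix (simplified; not the real interceptor):

  #include <unistd.h>

  struct ScopedIgnoreInterceptorsExample {
    // The real class bumps thr->ignore_interceptors so nested intercepted
    // calls pass straight through and skip CheckNoLocks().
    ScopedIgnoreInterceptorsExample() {}
    ~ScopedIgnoreInterceptorsExample() {}
  };

  extern "C" pid_t example_fork() {
    // ...take the internal locks that must be held across fork()...
    pid_t pid;
    {
      ScopedIgnoreInterceptorsExample ignore;
      pid = fork();              // stands in for REAL(fork)
    }
    // ...release the locks in both parent and child...
    return pid;
  }

  int main() {
    pid_t pid = example_fork();
    return pid < 0 ? 1 : 0;      // both parent and child exit cleanly
  }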
Differential Revision: http://reviews.llvm.org/D18409
llvm-svn: 264261
This reverts commits r264068 and r264079. They were breaking the build,
weren't reverted in time, and did not exhibit the behaviour the reviewers
expected. There is more to discuss than just a test fix.
llvm-svn: 264150
Summary:
After patch https://lkml.org/lkml/2015/12/21/340 was introduced into the
Linux kernel, the random gap between the stack and the heap increased
from 128M to 36G on 39-bit aarch64, and it is almost impossible
to cover such a big range. So I think we need to disable the randomized
virtual address space on aarch64 Linux.
Reviewers: kcc, llvm-commits, eugenis, zatrazz, dvyukov, rengolin
Subscribers: rengolin, aemerson, tberghammer, danalbert, srhines, enh
Differential Revision: http://reviews.llvm.org/D18003
llvm-svn: 264068
Adds strchr, strchrnul, and strrchr to the common interceptors, under a new
common flag intercept_strchr.
Removes the now-duplicate strchr interceptor from asan and all 3
interceptors from tsan. Previously, asan did not intercept strchrnul, but
does now; previously, msan did not intercept strchr, strchrnul, or strrchr,
but does now.
http://reviews.llvm.org/D18329
Patch by Derek Bruening!
llvm-svn: 263992
`__tsan_get_report_thread` and others can crash if a stack trace is missing; let's add the missing checks.
Differential Revision: http://reviews.llvm.org/D18306
llvm-svn: 263939
Summary:
Introducing InitializeCommonFlags across all sanitizers to simplify
common flags management.
Setting coverage=1 when html_cov_report is requested.
Differential Revision: http://reviews.llvm.org/D18273
llvm-svn: 263820
On OS X, we have pthread_cond_timedwait_relative_np. TSan needs to intercept this API to avoid false positives when using condition variables.
Differential Revision: http://reviews.llvm.org/D18184
llvm-svn: 263782
On OS X 10.11+, we have "automatic interceptors", so we don't need to use DYLD_INSERT_LIBRARIES when launching instrumented programs. However, non-instrumented programs that load TSan late (e.g. via dlopen) are currently broken, as TSan will still try to initialize, but the program will crash/hang at random places (because the interceptors don't work). This patch adds an explicit check that interceptors are working, and if not, it aborts and prints out an error message suggesting to explicitly use DYLD_INSERT_LIBRARIES.
TSan unit tests run with a statically linked runtime, where interceptors don't work. To avoid aborting the process in this case, the patch replaces `DisableReexec()` with a weak `ReexecDisabled()` function which is defined to return true in unit tests.
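A sketch of the idea (illustrative only; the names and mechanism are
assumptions, not the actual check): call one function whose interceptor sets a
flag, and bail out with a helpful message if the flag stays unset.

  #include <cstdio>
  #include <cstdlib>

  static bool interceptor_ran = false;

  // In the real runtime this would be an actual interceptor (e.g. of a libc
  // call); here it is a stub that just records that it executed.
  static void intercepted_probe() { interceptor_ran = true; }

  // Weak in the real runtime; unit tests override it to return true.
  static bool reexec_disabled_example() { return false; }

  static void verify_interceptors_working_example() {
    if (reexec_disabled_example())
      return;                      // statically linked unit tests opt out
    intercepted_probe();           // stands in for calling the probed libc function
    if (!interceptor_ran) {
      fprintf(stderr, "interceptors are not working; "
                      "re-run with DYLD_INSERT_LIBRARIES pointing at the TSan dylib\n");
      abort();
    }
  }

  int main() { verify_interceptors_working_example(); return 0; }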
Differential Revision: http://reviews.llvm.org/D18212
llvm-svn: 263695
This patch adds a new TSan report type, ReportTypeMutexInvalidAccess, which is triggered when pthread_mutex_lock or pthread_mutex_unlock returns EINVAL (this means the mutex is invalid, uninitialized or already destroyed).
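A sketch of where the new report type is triggered (the reporting call is a
stub; the real interceptor also records the acquisition for race analysis):

  #include <cerrno>
  #include <pthread.h>

  static void report_mutex_invalid_access(pthread_mutex_t *m) { (void)m; }

  extern "C" int example_pthread_mutex_lock(pthread_mutex_t *m) {
    int res = pthread_mutex_lock(m);   // stands in for REAL(pthread_mutex_lock)
    if (res == EINVAL)
      report_mutex_invalid_access(m);  // invalid, uninitialized, or destroyed mutex
    return res;
  }

  int main() {
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    return example_pthread_mutex_lock(&m);
  }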
Differential Revision: http://reviews.llvm.org/D18132
llvm-svn: 263641
Summary:
Adds strlen to the common interceptors, under a new common flag
intercept_strlen. This provides better sharing of interception code among
sanitizers and cleans up the inconsistent type declarations of the
previously duplicated interceptors.
Removes the now-duplicate strlen interceptor from asan, msan, and tsan.
The entry check semantics are normalized now for msan and asan, whose
private strlen interceptors contained multiple layers of checks that
included impossible-to-reach code. The new semantics are identical to the
old: bypass interception if in the middle of init or if both on Mac and not
initialized; else, call the init routine and proceed.
Patch by Derek Bruening!
Reviewers: samsonov, vitalybuka
Subscribers: llvm-commits, kcc, zhaoqin
Differential Revision: http://reviews.llvm.org/D18020
llvm-svn: 263177
Currently, TSan only reports everything in a formatted textual form. The idea behind this patch is to provide a consistent API that can be used to query information contained in a TSan-produced report. User can use these APIs either in a debugger (via a script or directly), or they can use it directly from the process (e.g. in the __tsan_on_report callback). ASan already has a similar API, see http://reviews.llvm.org/D4466.
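A hedged usage sketch: overriding the __tsan_on_report callback to get at the
report object from inside the process. The prototype below is my reading of
tsan_interface.h; verify against the header, which also declares the
__tsan_get_report_* query functions used to pull out threads, memory accesses,
and stacks.

  #include <cstdio>

  // Called by TSan when a report is produced; the opaque pointer can be passed
  // to the __tsan_get_report_* accessors declared in tsan_interface.h.
  extern "C" void __tsan_on_report(void *report) {
    fprintf(stderr, "TSan produced a report: %p\n", report);
  }

  int main() { return 0; }  // under TSan, the callback fires when a race is found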
Differential Revision: http://reviews.llvm.org/D16191
llvm-svn: 263126
Summary:
__BIG_ENDIAN__ and __LITTLE_ENDIAN__ are not supported by gcc, so code such
as ubsan's Value::getFloatValue will silently fall through to the
little-endian branch, breaking the display of float values by ubsan.
Use __BYTE_ORDER__ == __ORDER_BIG/LITTLE_ENDIAN__ as the condition
instead, which is supported by both clang and gcc.
Noticed while porting ubsan to s390x.
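The replacement condition, for reference (this form is understood by both
clang and gcc; the macro name is illustrative):

  #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
  #  define EXAMPLE_BIG_ENDIAN 1
  #else
  #  define EXAMPLE_BIG_ENDIAN 0
  #endif

  int main() { return EXAMPLE_BIG_ENDIAN; }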
Patch by Marcin Kościelnicki!
Differential Revision: http://reviews.llvm.org/D17660
llvm-svn: 263077
Currently ThreadState holds both logical state (required for the race-detection algorithm, user-visible)
and physical state (various caches, most notably the malloc cache). Move the physical state into a new
Processor entity. Besides being the right thing from an abstraction point of view, this solves several
problems:
1. Cache everything on the P level in Go. Currently we cache on a mix of goroutine and OS thread levels.
This unnecessarily increases memory consumption.
2. Properly handle free operations in Go. Frees are issued by the GC, which has no goroutine context.
As a result we could not do anything more than just clear the shadow; for example, we leaked
sync objects and heap block descriptors.
3. This will allow us to get rid of libc malloc in Go (we now have a Processor context for the internal allocator cache).
This in turn will allow us to get rid of the dependency on libc entirely.
4. Potentially we can make the Processor per-CPU in C++ mode instead of per-thread, which will
reduce resource consumption.
The distinction between Thread and Processor is currently used only by Go; C++ creates a Processor per OS thread,
which is equivalent to the current scheme.
llvm-svn: 262037
This patch moves recv and recvfrom interceptors from MSan and TSan to
sanitizer_common to enable them in ASan.
Differential Revision: http://reviews.llvm.org/D17479
llvm-svn: 261841
The first issue is that we longjmp out of the ScopedInterceptor scope
when called from an ignored lib. This leaves thr->in_ignored_lib set,
which in turn disables handling of sigaction, which in turn
corrupts tsan state, since signals are delivered asynchronously.
Another issue is that we can ignore synchronization in a signal
handler if the signal is delivered into an IgnoreSync region; this
could lead to false positives in signal handlers. Since signals are
generally asynchronous, signal handlers should disregard
memory access/synchronization/interceptor ignores.
llvm-svn: 261658