This is a follow-up to r282152.
More extensive testing on real apps revealed a subtle bug in that revision.
The revision made shadow mapping non-linear even within a single
user region. But there is lots of code in the runtime that processes
memory ranges and assumes the mapping is linear. For example,
region memory access handling simply increments the shadow address
to advance to the next shadow cell group. Similarly, DontNeedShadowFor,
the Java memory mover, the search for heap memory block headers, etc.
make the same assumption.
To trigger the bug, a user range would need to cross the 0x008000000000 boundary.
This was observed for a module data section.
Make shadow mapping linear within a single user range again.
Add a startup CHECK for linearity.
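A minimal sketch of what such a startup linearity CHECK could look like (the constants and the simplified mapping below are illustrative stand-ins, not the exact tsan code):

  #include <cassert>
  #include <cstdint>

  using uptr = uint64_t;
  constexpr uptr kShadowCell = 8;   // app bytes per shadow cell group
  constexpr uptr kShadowRatio = 4;  // shadow bytes per app byte

  // Simplified linear mapping, only to make the sketch self-contained.
  uptr MemToShadow(uptr app) {
    return (app & ~(kShadowCell - 1)) * kShadowRatio;
  }

  // For a kShadowCell-aligned user range it is enough to compare the
  // endpoints: if the mapping is linear on [beg, end), the shadow
  // distance must equal the app distance scaled by kShadowRatio.
  void CheckShadowLinearity(uptr beg, uptr end) {
    assert(MemToShadow(end) - MemToShadow(beg) == (end - beg) * kShadowRatio);
  }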
llvm-svn: 282405
Don't xor the user address with kAppMemXor in the meta mapping.
The only purpose of kAppMemXor is to raise the shadow for ~0 user addresses,
so that they don't map to ~0 (which would cause an overlap between
user memory and shadow).
For the meta mapping we explicitly add the kMetaShadowBeg offset,
so we don't need to additionally raise the meta shadow.
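For illustration, a sketch of the resulting meta mapping shape (the constants are placeholders; the real values live in tsan_platform.h):

  #include <cstdint>
  using uptr = uint64_t;

  constexpr uptr kMetaShadowBeg  = 0x300000000000ull;  // illustrative
  constexpr uptr kMetaShadowCell = 8;  // app bytes per meta cell
  constexpr uptr kMetaShadowSize = 4;  // meta bytes per cell

  // The explicit kMetaShadowBeg offset already moves the meta shadow
  // away from user memory, so no additional xor is needed.
  uptr MemToMeta(uptr app) {
    return kMetaShadowBeg +
           (app & ~(kMetaShadowCell - 1)) / kMetaShadowCell * kMetaShadowSize;
  }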
llvm-svn: 282403
In ShadowToMem we potentially call MemToShadow for incorrect addresses,
so DCHECK(IsAppMem(p)) can fire in debug mode.
Fix this by swapping the range and MemToShadow checks.
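A sketch of the reordering (the types and the stub mapping are illustrative):

  #include <cstdint>
  using uptr = uint64_t;

  uptr MemToShadow(uptr p) { return p * 4; }  // stub; DCHECKs IsAppMem(p) in debug

  // Old order: call MemToShadow(p) first, range-check second; if p is not
  // app memory, the DCHECK inside MemToShadow fires. New order: range-check
  // p first, so MemToShadow is only ever called on valid app addresses.
  bool MapsToShadow(uptr p, uptr s, uptr app_beg, uptr app_end) {
    if (p < app_beg || p >= app_end)
      return false;               // range check first
    return MemToShadow(p) == s;   // now safe to call
  }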
llvm-svn: 282157
4.1+ Linux kernels map PIE binaries at 0x55:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1fd836dcf00d2028c700c7e44d2c23404062c90
Currently tsan does not support app memory at 0x55 (https://github.com/google/sanitizers/issues/503).
Older kernels also map PIE binaries at 0x55 when ASLR is disabled (most notably under gdb).
This change extends the tsan mapping for linux/x86_64 to cover the 0x554-0x568 app range, which fixes both 4.1+ kernels and gdb.
This required slightly shrinking the low and high app ranges and moving the heap. The mapping becomes even more non-linear, since now we xor lower bits: even a continuous app range now maps to split, intermixed shadow ranges. This breaks ShadowToMemImpl, which assumes a linear mapping at least within a continuous app range (though it turned out to be already broken at least on arm64/42-bit VMA, as uncovered by r281970). So also change ShadowToMemImpl to a hopefully more robust implementation that does not assume a linear mapping.
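A sketch of the verify-by-round-trip idea behind the new ShadowToMemImpl (the constants and the stub forward mapping are illustrative):

  #include <cstdint>
  using uptr = uint64_t;

  constexpr uptr kShadowCnt = 4;                 // shadow cells per app cell
  constexpr uptr kAppMemXor = 0x008000000000ull; // illustrative constant

  uptr MemToShadow(uptr p) { return (p ^ kAppMemXor) * kShadowCnt; }  // stub

  // Instead of assuming the forward mapping is linear, invert the
  // arithmetic to get a candidate app address, then verify it by running
  // the forward mapping again and checking the round trip.
  uptr ShadowToMemSketch(uptr s, uptr app_beg, uptr app_end) {
    uptr p = (s / kShadowCnt) ^ kAppMemXor;  // candidate inversion
    if (p >= app_beg && p < app_end && MemToShadow(p) == s)
      return p;                              // verified round trip
    return 0;                                // not a shadow of this range
  }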
llvm-svn: 282152
The definitions in sanitizer_common may conflict with definitions from system headers because:
(1) The runtime includes the system headers after the project headers (as per LLVM coding guidelines).
(2) lib/sanitizer_common/sanitizer_internal_defs.h pollutes the namespace of everything defined after it (which is most of the sanitizer .h and .cc files and the included system headers) with: using namespace __sanitizer; // NOLINT
This patch solves the problem by introducing the using-directive only within the sanitizer namespaces, as proposed by Dmitry.
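The resulting pattern, roughly (the surrounding code is illustrative):

  namespace __sanitizer {}  // common runtime definitions live here

  namespace __tsan {
  using namespace __sanitizer;  // NOLINT -- now scoped to __tsan only
  // ... runtime code can use __sanitizer types unqualified, but system
  // headers included at file scope no longer see the using-directive ...
  }  // namespace __tsan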
Differential Revision: https://reviews.llvm.org/D21947
llvm-svn: 281657
This patch adds a wrapper for call_once, which uses an already-compiled helper, __call_once, whose atomic release is invisible to TSan. To avoid false positives, the interceptor performs an explicit atomic release in the callback wrapper.
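A sketch of the wrapper idea (the names here are placeholders, not the actual interceptor symbols):

  // __call_once's own release store lives in pre-compiled, uninstrumented
  // libc++, so the wrapper performs an extra release that TSan can observe,
  // pairing with the acquire done when the fast path sees the flag set.
  struct OnceCtx {
    void (*user_fn)(void *);
    void *arg;
    void *flag;  // address of the once flag, used as the sync object
  };

  static void Release(void *) { /* stand-in for TSan's Release() */ }

  void CallOnceCallbackWrapper(void *ctx_) {
    OnceCtx *ctx = static_cast<OnceCtx *>(ctx_);
    ctx->user_fn(ctx->arg);  // run the real initialization callback
    Release(ctx->flag);      // explicit release visible to TSan
  }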
Differential Revision: https://reviews.llvm.org/D24188
llvm-svn: 280920
This patch builds on LLVM r279776.
In this patch I've done some cleanup, abstracted three common steps that runtime components perform in their CMakeLists files, and added a fourth.
The three steps I abstract are:
(1) Add a top-level target (e.g. asan, msan, ...)
(2) Set the target properties for sorting files in IDE generators
(3) Make the compiler-rt target depend on the top-level target
The new step is to check if a command named "runtime_register_component" is defined, and to call it with the component name.
The runtime_register_component command is defined in llvm/runtimes/CMakeLists.txt, and presently just adds the component to a list of sub-components, which later gets used to generate target mappings.
With this patch a new workflow for runtimes builds is supported. When building runtimes from the LLVM runtimes directory, the workflow is:
> cmake [...]
> ninja runtimes-configure
> ninja asan
The "runtimes-configure" target builds all the dependencies for configuring the runtimes projects, and runs CMake on the runtimes projects. Running the runtimes CMake generates a list of targets to bind into the top-level CMake so subsequent build invocations will have access to some of Compiler-RT's targets through the top-level build.
Note: This patch does exclude some top-level targets for compiler-rt libraries, because they either don't install files (sanitizer_common) or don't have a corresponding `check` target (stats).
llvm-svn: 279863
When compiler-rt's CMake is not directly invoked, it will currently not call
project(), and thus ASM will not be enabled.
In that case, we also don't need to put the .S files through the C compiler.
Differential Revision: https://reviews.llvm.org/D23656
llvm-svn: 279215
Current AArch64 {sig}{set,long}jmp interposing requires accessing the glibc
private __pointer_chk_guard to get the process xor mask used to demangle the
internal {sig}jmp_buf function pointers.
This causes some packaging issues, as described in gcc PR#71042 [1], and it
is not good practice to rely on a private glibc symbol (since it is not
meant to be a stable ABI).
This patch fixes it by changing how libtsan obtains the guard value:
at initialization, a dedicated routine issues a setjmp call and derives
the random guard value from the mangled function pointer and its original
value.
Checked on aarch64 39-bit VMA.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71042
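The underlying xor scheme, sketched (illustrative code, not the actual libtsan routine):

  #include <cstdint>

  // glibc stores pointers in jmp_buf as mangled = original ^ guard.
  // If one (original, mangled) pair is known -- e.g. from a controlled
  // setjmp call at initialization -- the guard can be derived and then
  // used to demangle any other saved pointer.
  static uint64_t guard;  // recovered process xor mask

  void DeriveGuard(uint64_t original, uint64_t mangled) {
    guard = original ^ mangled;
  }

  uint64_t Demangle(uint64_t mangled) { return mangled ^ guard; }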
llvm-svn: 278292
The system implementation of OSAtomicTestAndClear returns the original bit, but the TSan interceptor has a bug that makes it always return zero. This patch fixes the bug and adds a test.
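A sketch of the fixed behavior (illustrative, not the actual interceptor; OSAtomic addresses bits most-significant-first within each byte):

  #include <cstdint>

  bool TestAndClearSketch(uint32_t n, volatile void *addr) {
    volatile uint8_t *byte = (volatile uint8_t *)addr + (n >> 3);
    uint8_t mask = uint8_t(0x80) >> (n & 7);  // MSB-first bit ordering
    uint8_t old = __atomic_fetch_and(byte, uint8_t(~mask), __ATOMIC_ACQ_REL);
    return (old & mask) != 0;  // return the ORIGINAL bit, not always zero
  }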
Differential Revision: https://reviews.llvm.org/D23061
llvm-svn: 277461
On Darwin, there are some apps that rely on realloc(nullptr, 0) returning a valid pointer. TSan currently returns nullptr in this case; let's fix that to avoid breaking binary compatibility.
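The shape of the fix, sketched against plain realloc (the actual change lives inside TSan's allocator):

  #include <cstdlib>

  // realloc(nullptr, 0) must return a valid, freeable pointer on Darwin.
  void *compat_realloc(void *p, size_t size) {
    if (p == nullptr && size == 0)
      size = 1;  // allocate a minimal block instead of returning nullptr
    return realloc(p, size);
  }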
Differential Revision: https://reviews.llvm.org/D22800
llvm-svn: 277458
This patch adds 48-bit VMA support for tsan on aarch64. As with the
current aarch64 mappings, the 48-bit VMA also supports PIE executables.
This limits the mapping mechanism because the PIE address bits
(usually 0aaaaXXXXXXXX) make it harder to create a mask/xor value
that covers all memory regions. I think it is possible to create a
larger application VMA range by either dropping PIE support or tuning
the current range.
It also slightly changes the way addresses are packed in the SyncVar
structure: previously it assumed x86_64 as the maximum VMA range. Since
the ID is 14 bits wide, shifting by 48 bits should be fine.
Tested on x86_64, ppc64le and aarch64 (39 and 48 bits VMA).
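Roughly the packing described above (field widths follow the commit text; the exact SyncVar layout differs):

  #include <cstdint>

  // A 48-bit address and a 14-bit ID share one 64-bit word, so a wider
  // VMA than x86_64's 47 bits still fits.
  struct PackedSyncId {
    uint64_t bits;
    void Set(uint64_t addr, uint64_t id) {
      bits = (addr & ((1ull << 48) - 1)) | (id << 48);
    }
    uint64_t Addr() const { return bits & ((1ull << 48) - 1); }
    uint64_t Id() const { return bits >> 48; }
  };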
llvm-svn: 277137
When we delay signals, we can end up delivering them while the signal
is blocked. This can be surprising to the program.
Intercept signal blocking functions merely to process
pending signals. As a result, at worst we will delay
a signal until return from the signal blocking function.
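A sketch of the interception idea (ProcessPendingSignals stands in for tsan's delayed-signal machinery):

  #include <signal.h>

  static void ProcessPendingSignals() {
    // ... deliver any signals tsan delayed while inside the runtime ...
  }

  // Flush delayed signals before the mask changes, so a delayed signal
  // is never delivered after the program has blocked it.
  int WrappedSigprocmask(int how, const sigset_t *set, sigset_t *old) {
    ProcessPendingSignals();
    return sigprocmask(how, set, old);
  }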
llvm-svn: 276876
Summary:
This patch is a refactoring of the way cmake 'targets' are grouped.
It won't affect non-UI cmake-generators.
Clang/LLVM are using a structured way to group targets, which eases
navigation through the Visual Studio UI. The Compiler-RT projects
currently differ from the way Clang/LLVM group targets.
This patch doesn't contain behavior changes.
Reviewers: kubabrecka, rnk
Subscribers: wang0109, llvm-commits, kubabrecka, chrisha
Differential Revision: http://reviews.llvm.org/D21952
llvm-svn: 275111
This patch adds interceptors for dispatch_io_*, dispatch_read and dispatch_write functions. This avoids false positives when using GCD IO. Adding several test cases.
Differential Revision: http://reviews.llvm.org/D21889
llvm-svn: 275071
This patch adds synchronization between the creation of the GCD data object and destructor’s execution. It’s far from perfect, because ideally we’d want to synchronize the destruction of the last reference (via dispatch_release) and the destructor’s execution, but intercepting objc_release is problematic.
Differential Revision: http://reviews.llvm.org/D21990
llvm-svn: 274749
We already have interceptors for dispatch_source API (e.g. dispatch_source_set_event_handler), but they currently only handle submission synchronization. We also need to synchronize based on the target queue (serial, concurrent), in other words, we need to use dispatch_callback_wrap. This patch implements that.
Differential Revision: http://reviews.llvm.org/D21999
llvm-svn: 274619
In the patch that introduced support for GCD barrier blocks, I removed releasing a group when leaving it (in dispatch_group_leave). However, that release is necessary to synchronize leaving a group with the notification callback (dispatch_group_notify). This patch adds it back, simplifies dispatch_group_notify_f and adds a test case.
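The happens-before edge being restored, sketched (Acquire/Release are stand-ins for TSan's internal annotations, not the real interceptors):

  static void Release(void *) { /* stand-in for TSan's Release() */ }
  static void Acquire(void *) { /* stand-in for TSan's Acquire() */ }

  void OnGroupLeave(void *group) {
    Release(group);  // publish writes made before leaving the group
    // ... then call the real dispatch_group_leave(group) ...
  }

  void OnGroupNotifyCallback(void *group, void (*user_block)()) {
    Acquire(group);  // pairs with the Release in every leave()
    user_block();    // the user's notification code runs synchronized
  }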
Differential Revision: http://reviews.llvm.org/D21927
llvm-svn: 274549
Because we use SCOPED_TSAN_INTERCEPTOR in the dispatch_once interceptor, the original dispatch_once can also sometimes be called (when ignores are enabled or when thr->is_inited is false). However, the original dispatch_once function doesn’t expect to find “2” in the storage and will spin forever (we use “2” to indicate that the initialization is already done, so no waiting is necessary). This patch makes sure we never call the original dispatch_once.
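The interceptor's state machine, roughly (simplified; value 2 is the "done" sentinel described above, and the in-progress value 1 is an assumption of this sketch):

  #include <atomic>

  void DispatchOnceSketch(std::atomic<long> *token, void (*block)()) {
    long expected = 0;
    if (token->compare_exchange_strong(expected, 1,
                                       std::memory_order_acquire)) {
      block();                                     // first caller runs it
      token->store(2, std::memory_order_release);  // 2 == "done"
    } else {
      while (token->load(std::memory_order_acquire) != 2) {
        // Spin until done. Never fall through to the real dispatch_once:
        // it doesn't understand the value 2 and would spin forever.
      }
    }
  }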
Differential Revision: http://reviews.llvm.org/D21976
llvm-svn: 274548
The dispatch_group_async interceptor actually extends the lifetime of the executed block. This means the destructor of the block (and captured variables) is called *after* dispatch_group_leave, which changes the semantics of dispatch_group_async. This patch fixes that.
Differential Revision: http://reviews.llvm.org/D21816
llvm-svn: 274117
Adding support for GCD barrier blocks in concurrent queues. This uses two sync objects in the same way as read-write locks do. This also simplifies the use of dispatch groups (the notifications act as barrier blocks).
Differential Revision: http://reviews.llvm.org/D21604
llvm-svn: 273893
The non-barrier versions of OSAtomic* functions are semantically mo_relaxed, but the two variants (e.g. OSAtomicAdd32 and OSAtomicAdd32Barrier) are actually aliases of each other, and we cannot have different interceptors for them, because they're actually the same function. Thus, we have to stay conservative and treat the non-barrier versions as mo_acq_rel.
Differential Revision: http://reviews.llvm.org/D21733
llvm-svn: 273890
There is a "well-known" TSan false positive when using C++ weak_ptr/shared_ptr and code in destructors, e.g. described at <https://llvm.org/bugs/show_bug.cgi?id=22324>. The "standard" solution is to build and use a TSan-instrumented version of libcxx, which is not trivial for end-users. This patch tries a different approach (on OS X): It adds an interceptor for the specific function in libc++.dylib, which implements the atomic operation that needs to be visible to TSan.
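Why that one operation matters, sketched with a simplified control block (not libc++'s actual code):

  #include <atomic>

  struct ControlBlock {
    std::atomic<long> refs{1};
    void (*destroy)(ControlBlock *) = nullptr;  // set by the owner

    // If TSan cannot see this acq_rel decrement (it lives in the
    // pre-compiled libc++.dylib), the last owner's destructor appears
    // to race with other owners' preceding writes.
    void Release() {
      if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
        destroy(this);
    }
  };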
Differential Revision: http://reviews.llvm.org/D21609
llvm-svn: 273806
This patch replaces all uses of __libc_malloc and friends with the internal allocator.
It seems that the only reason why we have calls to __libc_malloc in the first place was the lack of the internal allocator at the time. Using the internal allocator will also make sure that the system allocator is never used (this is the same behavior as ASan), and we don’t have to worry about working with unknown pointers coming from the system allocator.
Differential Revision: http://reviews.llvm.org/D21025
llvm-svn: 271916
This is a very simple optimization that gets about a 10% speedup for certain programs. We’re currently storing a pointer to the main thread’s ThreadState, but we can store the state directly in a static variable, which avoids the acquire load.
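The gist of the change, sketched (names are illustrative):

  struct ThreadState { /* ... per-thread tsan state ... */ };

  // Before: the main thread's state was reached through a pointer that
  // had to be loaded (with acquire semantics) on every access.
  // After: the state lives directly in a static variable, so its address
  // is a link-time constant and no load is needed.
  static ThreadState main_thread_state;

  ThreadState *MainThreadState() { return &main_thread_state; }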
Differential Revision: http://reviews.llvm.org/D20910
llvm-svn: 271906
We're not building the Go runtime with -mmacosx-version-min, which means it'll have a minimum deployment target set to the system you're building on. Let's make the code compile (and link) with -mmacosx-version-min=10.7.
Differential Revision: http://reviews.llvm.org/D20670
llvm-svn: 271833
The new annotation was added a while ago, but was not actually used.
Use the annotation to detect linker-initialized mutexes instead
of the broken IsGlobalVar, which has both false positives and false
negatives. Remove the IsGlobalVar mess.
llvm-svn: 271663
Currently the added test produces false race reports with glibc 2.19,
because DTLS memory is reused by pthread under the hood.
Use the DTLS machinery to intercept new DTLS ranges.
__tls_get_addr has been known to cause issues for tsan in the past,
so the interceptor is written more carefully.
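The interception shape, sketched (the helpers are placeholders for the DTLS machinery in sanitizer_common, not its real API):

  static bool IsNewDtlsRange(void *) { return false; }   // placeholder
  static void ResetRangeShadow(void *) {}                // placeholder
  static void *RealTlsGetAddr(void *arg) { return arg; } // placeholder

  // After the real __tls_get_addr returns, detect a DTLS block seen for
  // the first time and clear its shadow, so stale synchronization from a
  // previous thread's reused TLS memory cannot produce false races.
  void *TlsGetAddrWrapper(void *arg) {
    void *res = RealTlsGetAddr(arg);
    if (IsNewDtlsRange(res))
      ResetRangeShadow(res);
    return res;
  }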
Reviewed in http://reviews.llvm.org/D20927
llvm-svn: 271568
Summary:
As suggested by kcc@ in http://reviews.llvm.org/D20084#441418, move the CheckFailed and Die functions, and their associated callback functionality, into their own separate file.
I extended the build rules to include a new rule that does not include those termination functions, so that another project can define its own.
The tests check-{a,t,m,ub,l,e,df}san are all passing.
Reviewers: llvm-commits, kcc
Subscribers: kubabrecka
Differential Revision: http://reviews.llvm.org/D20742
llvm-svn: 271055