We need libfuzzer libraries on Arm32 so that we can fuzz
Arm32 binaries on Linux (Chrome OS). Android already
allows Arm32 for libfuzzer.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D112091
pthread_join needs to map the pthread_t of the joined thread to our Tid.
Currently we do this with a linear search over all threads, so joining
all threads is quadratic overall, and it becomes much worse with the new
tsan runtime, which remembers all threads that ever existed.
To resolve this, add a hash map of live threads only (those still
associated with a pthread_t) and use it for the mapping.
With the new tsan runtime some programs spent a third of their time in
this mapping. After this change the mapping disappears from profiles.
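For illustration, a minimal sketch of the idea with hypothetical names
(the actual map lives in the tsan runtime; pthread_t hashes as an
integer on Linux, other platforms may need a custom hash):

  #include <pthread.h>
  #include <mutex>
  #include <unordered_map>

  using Tid = int;  // stand-in for tsan's internal thread id

  static std::mutex live_mtx;
  static std::unordered_map<pthread_t, Tid> live_threads;  // live only

  void OnThreadCreate(pthread_t th, Tid tid) {
    std::lock_guard<std::mutex> l(live_mtx);
    live_threads[th] = tid;
  }

  Tid OnPthreadJoin(pthread_t th) {
    std::lock_guard<std::mutex> l(live_mtx);
    auto it = live_threads.find(th);
    Tid tid = (it == live_threads.end()) ? -1 : it->second;
    // The thread is no longer associated with a pthread_t, so the map
    // stays small no matter how many threads ever existed.
    live_threads.erase(th);
    return tid;
  }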
Depends on D113996.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D113997
The __flags variable needs to be of type 'long' in order to get
sign-extended properly.
internal_clone() uses an svc (Supervisor Call) directly (as opposed to
internal_syscall), and therefore needs to take care to set errno and return
-1 as needed.
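For context, a sketch of the convention such a raw svc must follow
(illustrative helper, not the actual internal_clone code): on Linux the
kernel returns a negative errno in the result register, and userspace
must translate that into errno plus a -1 return.

  #include <errno.h>

  // Hypothetical helper: convert a raw kernel return value into the
  // userspace convention (errno set, -1 returned) that callers expect.
  static long SyscallResultToErrno(long kernel_ret) {
    if ((unsigned long)kernel_ret >= (unsigned long)-4095L) {
      errno = -kernel_ret;  // e.g. -EINVAL becomes errno = EINVAL
      return -1;
    }
    return kernel_ret;  // success path: value passed through unchanged
  }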
Review: Ulrich Weigand
asan does not use user_id for anything,
so don't pass it to ThreadCreate.
Passing a random uninitialized field of AsanThread
as user_id does not make much sense anyway.
Depends on D113921.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113922
memprof does not use user_id for anything,
so don't pass it to ThreadCreate.
Passing a random field of MemprofThread as user_id
does not make much sense anyway.
Depends on D113920.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113921
They don't seem to do anything useful in lsan.
They are needed only if a tool needs to execute
some custom logic during detach/join, or if it uses
thread registry quarantine. Lsan does none of this.
And if a tool cares then it would also need to intercept
pthread_tryjoin_np and pthread_timedjoin_np, otherwise
it will mess up thread states.
Fwiw, asan does not intercept these functions either.
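To illustrate the consistency argument with a hedged sketch (names
hypothetical, not lsan code): a tool that does bookkeeping on join must
funnel every join variant through that bookkeeping, or its thread
states go stale.

  #define _GNU_SOURCE  // for pthread_tryjoin_np
  #include <pthread.h>

  // Hypothetical tool bookkeeping: every way a thread can be reaped
  // must end up here.
  static void OnThreadReaped(pthread_t th) { (void)th; }

  static int WrappedTryJoin(pthread_t th, void **ret) {
    int res = pthread_tryjoin_np(th, ret);
    if (res == 0)
      OnThreadReaped(th);  // reaped via tryjoin, not pthread_join
    return res;
  }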
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113920
Since libclang_rt.profile is added later on the command line, a
definition of __llvm_profile_raw_version is not included if one is
already provided by an earlier object, e.g. by a shared dependency.
This creates an extra dependency edge: if libA.so depends on libB.so
and both are coverage-instrumented, libA.so uses libB.so's definition of
__llvm_profile_raw_version. This leads to a runtime link failure if the
libB.so available at runtime does not provide this symbol (but provides
the other dependent symbols). Such a scenario can occur in Android's
mainline modules.
E.g.:
  ld -o libB.so libclang_rt.profile-x86_64.a
  ld -o libA.so -l B libclang_rt.profile-x86_64.a
libB.so has a global definition of __llvm_profile_raw_version. libA.so
uses libB.so's definition of __llvm_profile_raw_version. At runtime,
libB.so may not be coverage-instrumented (i.e. it may not export
__llvm_profile_raw_version), so runtime linking of libA.so will fail.
Marking this symbol as hidden forces each binary to use the definition
of __llvm_profile_raw_version from libclang_rt.profile. The visibility
is unchanged on Apple platforms, where its presence is checked by the
TAPI tool.
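A hedged sketch of what hidden visibility buys here (illustrative
declaration; the real definition lives in the profile runtime): a
hidden symbol is neither exported from a DSO nor resolved to a copy in
a dependency such as libB.so, so every linked binary binds to its own
definition from libclang_rt.profile.

  #include <stdint.h>

  // Illustrative only; the real definition and its value live in
  // compiler-rt's profile runtime.
  __attribute__((visibility("hidden")))
  extern uint64_t __llvm_profile_raw_version;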
Reviewed By: MaskRay, phosek, davidxl
Differential Revision: https://reviews.llvm.org/D111759
The new test started failing on bots with:
  CHECK failed: tsan_rtl.cpp:327 "((addr + size)) <= ((TraceMemEnd()))"
  (0xf06200e03010, 0xf06200000000) (tid=4073872)
https://lab.llvm.org/buildbot#builders/179/builds/1761
This is a latent bug in the aarch64 virtual address space layout:
there is not enough address space to fit traces for all threads.
But since the trace space is going away with the new tsan runtime
(D112603), disable the test.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113990
Use of gethostent provokes caching of some resources inside libc.
They are freed in __libc_thread_freeres very late in the thread's
lifetime, after our ThreadFinish. __libc_thread_freeres calls free,
which previously crashed in the malloc hooks.
Fix it by setting ignore_interceptors for finished threads,
which in turn prevents malloc hooks.
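A minimal sketch of the fix's shape, with hypothetical types (the real
flag is tsan's per-thread ignore_interceptors):

  #include <cstdio>

  struct ThreadState { bool ignore_interceptors = false; };

  void ThreadFinish(ThreadState *thr) {
    // The thread can still call free() later from
    // __libc_thread_freeres; from now on its interceptors (and thus
    // the malloc hooks) are no-ops.
    thr->ignore_interceptors = true;
  }

  void FreeHook(ThreadState *thr, void *ptr) {
    if (thr->ignore_interceptors)
      return;  // finished thread: skip the hooks that used to crash
    std::printf("user free hook: %p\n", ptr);
  }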
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113989
Precisely specifying the unused parts of the bitfield is critical for
performance. If we don't specify them, the compiler will generate code
to load the old value and shuffle it to extract the unused bits and
apply them to the new value. If we specify the unused part and store 0
in there, all that unnecessary code goes away (the store of the 0
constant is combined with the other constant parts).
I don't see a good way to ensure that the fields cover all 64 bits of
the u64. So at least introduce named kUnusedBits constants and check
that the bits sum up to 64.
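A sketch of the pattern (field names and widths are made up, not the
actual tsan event layout): name the unused bits, store an explicit 0
into them, and statically check that everything sums to 64.

  #include <cstdint>

  constexpr int kTidBits = 13;
  constexpr int kAddrBits = 44;
  constexpr int kTypeBits = 3;
  constexpr int kUnusedBits = 64 - kTidBits - kAddrBits - kTypeBits;
  static_assert(kTidBits + kAddrBits + kTypeBits + kUnusedBits == 64,
                "fields must cover all 64 bits");

  struct Event {
    uint64_t tid : kTidBits;
    uint64_t addr : kAddrBits;
    uint64_t type : kTypeBits;
    uint64_t unused : kUnusedBits;
  };

  Event MakeEvent(uint64_t tid, uint64_t addr, uint64_t type) {
    Event ev;
    ev.tid = tid;
    ev.addr = addr;
    ev.type = type;
    ev.unused = 0;  // explicit 0 store: no read-modify-write of old bits
    return ev;
  }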
Depends on D113978.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113979
In the old runtime we used a different number of trace parts for C++
and Go to reduce trace memory consumption for Go.
But now it's easier and better to use the smaller parts, because we
already use the minimal possible number of parts for C++ (3).
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D113978
man pthread_equal:
  The pthread_equal() function is necessary because thread IDs
  should be considered opaque: there is no portable way for
  applications to directly compare two pthread_t values.
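For example, the portable way to test whether a pthread_t refers to the
calling thread:

  #include <pthread.h>

  // pthread_t is opaque: compare with pthread_equal, never with ==.
  bool IsCallingThread(pthread_t th) {
    return pthread_equal(th, pthread_self()) != 0;
  }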
Depends on D113916.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113919
pthread_setname_np does a linear search over all thread descriptors
to map the pthread_t to the thread descriptor. This has O(N^2)
complexity overall and becomes much worse in the new tsan runtime,
which keeps all threads that ever existed in the thread registry.
Replace the linear search with direct access when pthread_setname_np
is called for the current thread (a very common case).
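A hedged sketch of the fast path (the descriptor type and storage are
hypothetical stand-ins for what the registry keeps):

  #define _GNU_SOURCE  // for pthread_setname_np
  #include <pthread.h>
  #include <cstring>

  // Stand-in for the per-thread descriptor in the thread registry.
  struct ThreadDesc { char name[16]; };
  static thread_local ThreadDesc cur_thread_desc;

  int SetThreadName(pthread_t th, const char *name) {
    if (pthread_equal(th, pthread_self())) {
      // Fast path: the calling thread's descriptor is directly at
      // hand, no linear walk over the thread registry is needed.
      std::strncpy(cur_thread_desc.name, name,
                   sizeof(cur_thread_desc.name) - 1);
      cur_thread_desc.name[sizeof(cur_thread_desc.name) - 1] = '\0';
    }
    // else: fall back to the slow pthread_t -> descriptor lookup.
    return pthread_setname_np(th, name);
  }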
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113916
We used to mmap the C++ shadow stack as part of the trace region
before ed7f3f5bc9 ("tsan: move shadow stack into ThreadState"),
which moved the shadow stack into TLS. This started causing
timeouts and OOMs on some of our internal tests that repeatedly
create and destroy thousands of threads.
Allocate the C++ shadow stack with mmap and small pages again.
This prevents the observed timeouts and OOMs.
But now we need to be more careful with interceptors that
run after thread finalization, because FuncEntry/Exit and
TraceAddEvent all need the shadow stack.
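A minimal sketch of such an allocation (the size and flags are
illustrative; the actual sizing lives in the tsan runtime):

  #include <sys/mman.h>
  #include <cstddef>

  // Hypothetical size for illustration only.
  constexpr size_t kShadowStackSize = 128 << 10;

  void *MapShadowStack() {
    // Anonymous mapping with default (small) pages, populated lazily
    // on first touch, so thousands of short-lived threads don't each
    // pin the full allocation in physical memory.
    void *p = mmap(nullptr, kShadowStackSize, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? nullptr : p;
  }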
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113786
Similar to how the other Swift sections are registered by the ORC
runtime's MachO platform, add the __swift5_types section, which contains
type metadata. Add a simple test that demonstrates that the Swift
runtime recognizes the registered types.
rdar://85358530
Differential Revision: https://reviews.llvm.org/D113811
Since glibc 2.34, dlsym does:
  1. malloc 1
  2. malloc 2
  3. free pointer from malloc 1
  4. free pointer from malloc 2
This sequence was not handled by the trivial dlsym hack.
This fixes https://bugs.llvm.org/show_bug.cgi?id=52278
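A hedged sketch of an allocator shape that tolerates this (names
hypothetical; the actual fix is in the sanitizer interceptor code):
allocations made during dlsym come from a static pool, and frees of
pool pointers are accepted in any order and simply ignored.

  #include <cstddef>

  static char dlsym_pool[1024];
  static size_t dlsym_pool_used;

  static void *DlsymAlloc(size_t size) {
    size = (size + 15) & ~(size_t)15;  // keep returned pointers aligned
    if (dlsym_pool_used + size > sizeof(dlsym_pool))
      return nullptr;  // pool exhausted; real code would bail out loudly
    void *p = dlsym_pool + dlsym_pool_used;
    dlsym_pool_used += size;
    return p;
  }

  // Returns true if the pointer belongs to the pool; such frees are
  // accepted in any order and reclaim nothing. Other pointers must be
  // forwarded to the real free.
  static bool DlsymPointerFree(void *p) {
    char *c = static_cast<char *>(p);
    return c >= dlsym_pool && c < dlsym_pool + sizeof(dlsym_pool);
  }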
Reviewed By: eugenis, morehouse
Differential Revision: https://reviews.llvm.org/D112588