Summary:
Whenever possible (Linux + glibc 2.16+), detect the dynamic loader module by
its base address, not by matching the module name. The current name-matching
approach fails on some configurations.
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D33859
llvm-svn: 304633
There can be a situation where the vptr is not initialized
by the constructor of the object and holds junk data, which should
be properly checked, because the C++ standard says:
"if default constructor is not specified 16 (7.3) no initialization is performed."
Patch by Denis Khalikov!
Differential Revision: https://reviews.llvm.org/D33712
llvm-svn: 304437
This patch addresses https://github.com/google/sanitizers/issues/774. When we
fork a multi-threaded process, it's possible to deadlock if some thread acquired
the StackDepot or an allocator internal lock just before the fork. In this case
the lock will never be released in the child process, causing a deadlock on the
next memory alloc/dealloc routine. While calling alloc/dealloc routines after a
multi-threaded fork is not allowed, most modern allocators (glibc, tcmalloc,
jemalloc) are actually fork-safe. Let's do the same for the sanitizers, except
TSan, which has complex locking rules.
Differential Revision: https://reviews.llvm.org/D33325
llvm-svn: 304285
Summary:
D33521 addressed a memory ordering issue in BlockingMutex, which seems
to be the cause of flakiness in a few ASan tests on PowerPC.
Reviewers: eugenis
Subscribers: kubamracek, nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D33611
llvm-svn: 304045
The test was meant for Darwin anyway, so I'm not even sure it's supposed
to run on Linux. If it was, then we need time to investigate, but since
the test is new, there's no point in reverting the whole patch because
of it.
llvm-svn: 304010
Summary:
Currently we are not enforcing the success of `pthread_once` and
`pthread_setspecific`. Errors could lead to harder-to-debug issues later in
the thread's life. This adds checks for a 0 return value for both.
If `pthread_setspecific` fails in the teardown path, we opt for an immediate
teardown as opposed to a fatal failure.
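As a sketch of the kind of check being added (CHECK_EQ here is a stand-in for
the sanitizer's checking macro; the key and function names are illustrative):
```
#include <pthread.h>
#include <cstdio>
#include <cstdlib>

// Abort loudly on a non-zero return value instead of failing mysteriously
// later in the thread's life.
#define CHECK_EQ(a, b)                                      \
  do {                                                      \
    if ((a) != (b)) {                                       \
      fprintf(stderr, "check failed: %s\n", #a " == " #b);  \
      abort();                                              \
    }                                                       \
  } while (0)

static pthread_key_t g_tsd_key;
static pthread_once_t g_once = PTHREAD_ONCE_INIT;

static void CreateKey() {
  CHECK_EQ(pthread_key_create(&g_tsd_key, nullptr), 0);
}

void SetThreadData(void *data) {
  CHECK_EQ(pthread_once(&g_once, CreateKey), 0);
  CHECK_EQ(pthread_setspecific(g_tsd_key, data), 0);
}
```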
Reviewers: alekseyshl, kcc
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33555
llvm-svn: 303998
Summary:
D33521 addressed a memory ordering issue in BlockingMutex, which seems
to be the cause of flakiness in a few ASan tests on PowerPC.
Reviewers: eugenis
Subscribers: kubamracek, nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D33569
llvm-svn: 303995
Summary:
allow_user_segv_handler had a confusing name and did not allow controlling the
behavior for different signals separately.
Reviewers: eugenis, alekseyshl, kcc
Subscribers: llvm-commits, dberris, kubamracek
Differential Revision: https://reviews.llvm.org/D33371
llvm-svn: 303941
Summary:
This is required for any users who call exit() after creating
thread-specific data, as tls destructors are only called when
pthread_exit() or pthread_cancel() are used. This should also
match tls behavior on linux.
Getting the base address of the tls section is straightforward,
as it's stored as a section offset in %gs. The size is a bit trickier
to work out, as there doesn't appear to be any official documentation
or source code referring to it. The size used in this patch was determined
by taking the difference between the base address and the address of the
subsequent memory region returned by vm_region_recurse_64, which was
1024 * sizeof(uptr) on all threads except the main thread, where it was
larger. Since the section must be the same size on all of the threads,
1024 * sizeof(uptr) seemed to be a reasonable size to use, barring
a more programmatic way to get the size.
1024 seems like a reasonable number, given that PTHREAD_KEYS_MAX
is 512 on darwin, so pthread keys will fit inside the region while
leaving space for other tls data. A larger size would overflow the
memory region returned by vm_region_recurse_64, and a smaller size
wouldn't leave room for all the pthread keys. In addition, the
stress test added here passes, which means that we are scanning at
least the full set of possible pthread keys, and probably
the full tls section.
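For illustration, reading the TLS base out of the segment register can look
roughly like this; this is a sketch of the approach described above, not
necessarily the exact code that landed:
```
typedef unsigned long uptr;

// On Darwin x86, the per-thread TLS area is reachable through %gs; the size is
// the empirically determined value discussed above.
static uptr TlsBaseAddr() {
  uptr segbase = 0;
#if defined(__x86_64__)
  asm("movq %%gs:0, %0" : "=r"(segbase));
#elif defined(__i386__)
  asm("movl %%gs:0, %0" : "=r"(segbase));
#endif
  return segbase;
}

static void GetTls(uptr *addr, uptr *size) {
  *addr = TlsBaseAddr();
  *size = 1024 * sizeof(uptr);  // empirical per-thread TLS section size
}
```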
Reviewers: alekseyshl, kubamracek
Subscribers: krytarowski, llvm-commits
Differential Revision: https://reviews.llvm.org/D33215
llvm-svn: 303887
The existing implementation ran CHECKs to assert that the thread state
was stored inside the tls. However, the mac implementation of tsan doesn't
store the thread state in tls, so these checks fail once darwin tls support
is added to the sanitizers. Only run these checks on platforms where
the thread state is expected to be contained in the tls.
llvm-svn: 303886
Summary:
Apparently Windows's `UnmapOrDie` doesn't support partial unmapping, which
makes the new region allocation technique incompatible with Windows.
Reviewers: alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33554
llvm-svn: 303883
Summary:
Currently, AllocateRegion has a tendency to fragment memory: it allocates
`2*kRegionSize`, and if the memory is aligned, will unmap `kRegionSize` bytes,
thus creating a hole, which can't itself be reused for another region. This
is exacerbated by the fact that if 2 regions get allocated one after another
without any `mmap` in between, the second will be aligned due to mappings
generally being contiguous.
An idea, suggested by @alekseyshl, to prevent such a behavior is to have a
stash of regions: if the `2*kRegionSize` allocation is properly aligned, split
it in two, and stash the second part to be returned next time a region is
requested.
At this point, I thought about a couple of ways to implement this:
- either an `IntrusiveList` of region candidates, storing `next` at the
beginning of the region;
- or a small array of region candidates existing in the Primary.
While the second option is more constrained in terms of size, it offers several
advantages:
- security-wise, with the list a pointer in a region candidate could be
overflowed into and abused when popping an element;
- we do not dirty the first page of the region by storing something in it;
- unless several threads request regions simultaneously from different size
classes, the stash rarely goes above 1 entry.
I am not certain about the Windows impact of this change, as `sanitizer_win.cc`
has its own version of MmapAlignedOrDie; maybe someone could chime in on this.
MmapAlignedOrDie is effectively unused after this change and could be removed
at a later point. I didn't notice any sizeable performance gain, even though we
are saving a few `mmap`/`munmap` syscalls.
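A rough sketch of the stash idea, with assumed names and a fixed-size array
guarded by a mutex (not the exact shape of the final code):
```
#include <sys/mman.h>
#include <cstdint>
#include <mutex>

static const uintptr_t kRegionSize = 1 << 20;  // placeholder value
static const unsigned kMaxStashedRegions = 4;  // small array in the Primary
static std::mutex RegionsStashMutex;
static unsigned NumberOfStashedRegions = 0;
static void *RegionsStash[kMaxStashedRegions];

static void *AllocateRegion() {
  {
    std::lock_guard<std::mutex> Lock(RegionsStashMutex);
    if (NumberOfStashedRegions > 0)
      return RegionsStash[--NumberOfStashedRegions];  // reuse a stashed half
  }
  void *Map = mmap(nullptr, 2 * kRegionSize, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (Map == MAP_FAILED) return nullptr;
  char *Base = static_cast<char *>(Map);
  if ((reinterpret_cast<uintptr_t>(Base) & (kRegionSize - 1)) == 0) {
    // Already aligned: stash the second half instead of punching a hole.
    std::lock_guard<std::mutex> Lock(RegionsStashMutex);
    if (NumberOfStashedRegions < kMaxStashedRegions) {
      RegionsStash[NumberOfStashedRegions++] = Base + kRegionSize;
      return Base;
    }
    munmap(Base + kRegionSize, kRegionSize);  // stash full, fall back
    return Base;
  }
  // Not aligned: keep an aligned kRegionSize chunk and trim both ends.
  char *Aligned = reinterpret_cast<char *>(
      (reinterpret_cast<uintptr_t>(Base) + kRegionSize - 1) & ~(kRegionSize - 1));
  munmap(Base, Aligned - Base);
  munmap(Aligned + kRegionSize,
         (Base + 2 * kRegionSize) - (Aligned + kRegionSize));
  return Aligned;
}
```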
Reviewers: alekseyshl, kcc, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33454
llvm-svn: 303879
Summary:
Dmitry, seeking your expertise. I believe the proper way to implement
Lock/Unlock here would be to use acquire/release semantics. Am I missing
something?
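For reference, the acquire/release pairing being discussed looks roughly like
this (using std::atomic here rather than the sanitizer's own atomic wrappers):
```
#include <atomic>

// Sketch of a spin mutex: Lock() acquires so the critical section can't be
// reordered before it; Unlock() releases so everything written inside the
// critical section is visible to the next locker.
class SpinMutex {
  std::atomic<int> state_{0};

 public:
  void Lock() {
    while (state_.exchange(1, std::memory_order_acquire) != 0) {
      // spin (a real implementation would yield or park the thread)
    }
  }
  void Unlock() { state_.store(0, std::memory_order_release); }
};
```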
Reviewers: dvyukov
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33521
llvm-svn: 303869
Summary:
On FreeBSD we needed to add generic implementations of `__bswapdi2` and
`__bswapsi2`, since gcc 6.x for MIPS emits calls to these. See:
https://reviews.freebsd.org/D10838 and https://reviews.freebsd.org/rS318601
The actual mips code generated for these generic C versions is pretty
OK, as can be seen in the (FreeBSD) review.
I checked over gcc sources, and it seems that it can emit these calls on
more architectures, so maybe it's best to simply always add them to the
compiler-rt builtins library.
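The generic C versions are tiny; essentially something like this (shown as a
sketch, the builtins library's real files differ in typedefs and ABI macros):
```
#include <stdint.h>

// Byte-swap helpers that gcc may emit as libcalls on some targets.
uint32_t __bswapsi2(uint32_t u) {
  return ((u & 0xff000000u) >> 24) |
         ((u & 0x00ff0000u) >> 8)  |
         ((u & 0x0000ff00u) << 8)  |
         ((u & 0x000000ffu) << 24);
}

uint64_t __bswapdi2(uint64_t u) {
  return ((uint64_t)__bswapsi2((uint32_t)(u >> 32))) |
         ((uint64_t)__bswapsi2((uint32_t)u) << 32);
}
```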
Reviewers: howard.hinnant, compnerd, petarj, emaste
Reviewed By: compnerd, emaste
Subscribers: mgorny, llvm-commits, arichardson
Differential Revision: https://reviews.llvm.org/D33516
llvm-svn: 303866
This test case occasionally fails when run on big-endian powerpc64:
asan/TestCases/Posix/halt_on_error-torture.cc
The failure causes false problem reports to be sent to developers whose
code had nothing to do with the failures. Reactivate it when the real
problem is fixed.
This could also be related to the same problems as with the tests
ThreadedOneSizeMallocStressTest, ThreadedMallocStressTest, ManyThreadsTest,
and several others that do not run reliably on powerpc.
llvm-svn: 303864
Summary:
This flag is not covered by tests on Windows, and it looks like it's implemented
incorrectly. Switching its default breaks some tests.
Taking into account that the related handle_segv flag is not supported on
Windows, it's safer to remove it until we commit to supporting it.
Reviewers: eugenis, zturner, rnk
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D33471
llvm-svn: 303728
Summary: We are going to make it tri-state and remove allow_user_segv_handler.
Reviewers: eugenis, alekseys, kcc
Subscribers: kubamracek, dberris, llvm-commits
Differential Revision: https://reviews.llvm.org/D33159
llvm-svn: 303464
Summary:
The LINKEDIT section is very large and is read-only. Scanning this
section caused LSan on darwin to be very slow. When only writable sections
are scanned for global pointers, performance improved by a factor of about 25x.
Reviewers: alekseyshl, kubamracek
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33322
llvm-svn: 303422
Summary:
After discussing the current defaults with a couple of parties, the consensus
is that they are too high. 1 MB of quarantine has about a 4 MB impact on PSS, so
memory usage goes up quickly.
This is obviously configurable, but the default value should be more
"approachable", so both the global size and the thread local size are 1/4 of
what they used to be.
Reviewers: alekseyshl, kcc
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33321
llvm-svn: 303380
The sanitizer library unit tests for libc can get a different definition
of 'struct stat' from the one the sanitizer library is built with on certain
targets.
For MIPS, the size member of 'struct stat' comes after a macro-guarded
explicit padding member.
This patch resolves any possible inconsistency by adding _FILE_OFFSET_BITS=64
and _LARGEFILE_SOURCE to the build flags for the unit tests, under the same
conditions as for the sanitizer library.
This resolves a recurring build failure on the MIPS buildbots due to
'struct stat' definition differences.
Reviewers: slthakur
Differential Revision: https://reviews.llvm.org/D33131
llvm-svn: 303350
It's used in asan_test.cc also on Windows, and my build was failing
with:
C:/src/llvm/projects/compiler-rt/lib/asan/tests/asan_test.cc:549:28: error: unknown type name 'jmp_buf'
NOINLINE void LongJmpFunc1(jmp_buf buf) {
^
C:/src/llvm/projects/compiler-rt/lib/asan/tests/asan_test.cc:569:10: error: unknown type name 'jmp_buf'
static jmp_buf buf;
^
I couldn't find what changed to make this not work anymore, but this should fix
it.
llvm-svn: 303273
Summary:
This is required for any users who call exit() after creating
thread-specific data, as tls destructors are only called when
pthread_exit() or pthread_cancel() are used. This should also
match tls behavior on linux.
Getting the base address of the tls section is straightforward,
as it's stored as a section offset in %gs. The size is a bit trickier
to work out, as there doesn't appear to be any official documentation
or source code referring to it. The size used in this patch was determined
by taking the difference between the base address and the address of the
subsequent memory region returned by vm_region_recurse_64, which was
1024 * sizeof(uptr) on all threads except the main thread, where it was
larger. Since the section must be the same size on all of the threads,
1024 * sizeof(uptr) seemed to be a reasonable size to use, barring
a more programmatic way to get the size.
1024 seems like a reasonable number, given that PTHREAD_KEYS_MAX
is 512 on darwin, so pthread keys will fit inside the region while
leaving space for other tls data. A larger size would overflow the
memory region returned by vm_region_recurse_64, and a smaller size
wouldn't leave room for all the pthread keys. In addition, the
stress test added here passes, which means that we are scanning at
least the full set of possible pthread keys, and probably
the full tls section.
Reviewers: alekseyshl, kubamracek
Subscribers: krytarowski, llvm-commits
Differential Revision: https://reviews.llvm.org/D33215
llvm-svn: 303262
This inclusion is needed to fix the ARM build. The int_lib.h include is
slightly ugly, but allows us to use the `AEABI_RTABI` macro to decorate
the calling convention of the functions.
llvm-svn: 303190
These actually may change calling conventions. We cannot simply provide
function aliases as the aliased function may have a different calling
convention. Provide a forwarding function instead to permit the
compiler to synthesize the calling convention adjustment thunk.
Remove the `ARM_EABI_FNALIAS` macro as that is not safe to use.
Resolves PR33030!
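As a sketch of the difference on an ARM target (the AEABI_RTABI definition
below mirrors what int_lib.h uses, to the best of my knowledge): a plain alias
would re-export the original symbol unchanged, while a forwarding function lets
the compiler synthesize the AAPCS adjustment.
```
// Forward __aeabi_dadd to __adddf3 through a real function so the compiler can
// emit any calling-convention adjustment thunk that is needed.
#define AEABI_RTABI __attribute__((__pcs__("aapcs")))

extern double __adddf3(double a, double b);  // existing builtin

AEABI_RTABI double __aeabi_dadd(double a, double b) {
  return __adddf3(a, b);
}
```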
llvm-svn: 303188
Summary: Use __linux__ to check for Linux and bring back the check for __GNU__.
Reviewers: echristo, krytarowski, compnerd, rengolin
Reviewed By: krytarowski
Subscribers: phosek, llvm-commits, srhines
Differential Revision: https://reviews.llvm.org/D33219
llvm-svn: 303131
Some build targets (e.g. i686) have aliased names (e.g. i386). We would
get multiple definitions previously and have the linker arbitrarily
select a definition on those aliased targets. Make this more
deterministic by checking those aliases.
llvm-svn: 303103
Add a lit substitution (I chose %gmlt) so that only stack trace tests
get debug info.
We need a lit substitution so that this expands to -gline-tables-only
-gcodeview on Windows. I think in the future we should reconsider the
need for -gcodeview from the GCC driver, but for now, this is necessary.
llvm-svn: 303083
Summary:
With rL279771, SizeClassAllocator64 was changed to accept a single template
parameter instead of 5, for the following reasons: "First, this will make the mangled
names shorter. Second, this will make adding more parameters simpler". This
patch mirrors that work for SizeClassAllocator32.
This is in preparation for introducing the randomization of chunks in the
32-bit SizeClassAllocator in a later patch.
Reviewers: kcc, alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33141
llvm-svn: 303071
This fixes tests that use debug info to check ubsan stack traces. One
was XFAILd on Windows and the other was actively failing for weeks.
llvm-svn: 302924
These tests don't fail consistently in all cases, but they
fail most of the time on the buildbots. Mark as UNSUPPORTED for now to
avoid buildbots failing due to XPASS.
llvm-svn: 302920
Our theory is that reserving large amounts of shadow memory isn't
reliable on Win7 and earlier NT kernels. This affects the
clang-x64-ninja-win7 buildbot, which uses Windows 7.
llvm-svn: 302917
Summary:
Sanitizer procmaps uses dyld APIs to iterate over the list of images
in the process. This is much more performant than manually recursing
over all of the memory regions in the process; however, dyld does
not report itself in the list of images. In order to prevent reporting
leaks from dyld globals and to symbolize dyld functions in stack traces,
this patch special-cases dyld and ensures that it is added to the
list of modules.
This is accomplished by recursing through the memory map of the process
until a dyld Mach header is found. While this recursion is expensive,
it is run before the full set of images has been loaded in the process,
so only a few calls are required. The result is cached so that it never
needs to be searched for when the full process memory map exists, as this
would be incredibly slow, on the order of minutes for leak sanitizer with
only 25 or so libraries loaded.
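The "is this dyld?" check can be as simple as looking for a Mach-O header whose
filetype marks the dynamic link editor; a sketch of that test (the real code
additionally caches the discovered address, as described above):
```
#include <mach-o/loader.h>

// A candidate address is treated as dyld if it starts with a Mach-O header of
// filetype MH_DYLINKER.
static bool LooksLikeDyld(const void *candidate) {
  const struct mach_header *hdr =
      static_cast<const struct mach_header *>(candidate);
  if (hdr->magic != MH_MAGIC && hdr->magic != MH_MAGIC_64)
    return false;
  return hdr->filetype == MH_DYLINKER;
}
```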
Reviewers: alekseyshl, kubamracek
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32968
llvm-svn: 302899
thread_get_register_pointer_values handles the redzone computation
automatically, but is marked as an unavailable API function. This
patch replicates its logic accounting for the stack redzone on
x86_64.
Should fix flakiness in the use_stack_threaded test for lsan on darwin.
llvm-svn: 302898
This is a follow-up to r302787, which broke MemorySanitizer.ICmpRelational.
MSan is now reporting a false positive on the following test case:
TestForNotPoisoned((poisoned(-1, 0x80000000U) >= poisoned(-1, 0U))),
which is sort of anticipated, because we're approximating the comparison
with an OR of the arguments' shadow values.
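A simplified model of that approximation (not MSan's exact propagation rules):
if any operand bit is poisoned, the comparison result is treated as poisoned,
even when the initialized bits alone already determine the outcome.
```
#include <cstdint>

// Shadow bits set == poisoned bits; the OR makes the result poisoned whenever
// either operand has any poisoned bit.
static bool IcmpResultIsPoisoned(uint32_t shadow_a, uint32_t shadow_b) {
  return (shadow_a | shadow_b) != 0;
}
```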
llvm-svn: 302887
We only have an implementation for x86_64 that works for the
patching/unpatching and runtime support (trampolines).
Follow-up to D30630.
llvm-svn: 302873
Summary:
This change implements support for the custom event logging sleds and
intrinsics at runtime. For now it only supports handling the sleds in
x86_64, with the implementations for other architectures stubbed out to
do nothing.
NOTE: Work in progress, uploaded for exposition/exploration purposes.
Depends on D27503, D30018, and D33032.
Reviewers: echristo, javed.absar, timshen
Subscribers: mehdi_amini, nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D30630
llvm-svn: 302857
Summary:
The reasoning behind this change is twofold:
- the current combined allocator (sanitizer_allocator_combined.h) implements
features that are not relevant for Scudo, making some code redundant, and
some restrictions not pertinent (alignments for example). This forced us to
do some weird things between the frontend and our secondary to make things
work;
- we have enough information to be able to know if a chunk will be serviced by
the Primary or Secondary, allowing us to avoid extraneous calls to functions
such as `PointerIsMine` or `CanAllocate`.
As a result, the new scudo-specific combined allocator is very straightforward,
and allows us to remove some now unnecessary code both in the frontend and the
secondary. Unused functions have been left in as unimplemented for now.
It turns out to also be a sizeable performance gain (3% faster in some Android
memory_replay benchmarks; measurements on other platforms are still in progress).
Reviewers: alekseyshl, kcc, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33007
llvm-svn: 302830
This breaks several tests because we don't always have
access to the __cxa_guard functions.
This reverts commit 45eb470c3e9e8f6993a204e247c33d4092237efe.
llvm-svn: 302693
Summary:
This bug is caused by the incorrect handling of return-value registers.
According to OpenPOWER 64-Bit ELF V2 ABI 2.2.5, up to 2 general-purpose
registers and up to 8 floating-point or vector registers may be used for
return values.
Reviewers: dberris, echristo
Subscribers: nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D33027
llvm-svn: 302691
Summary:
The test fails on PPC, because the address of a function may vary
depending on whether the "taker" shares the same ToC (roughly, in the
same "module") as the function.
Therefore the addresses of the functions taken in func-id-utils.cc may be
different from the addresses taken in the XRay runtime.
Change the test to be permissive on address comparison.
Reviewers: dberris, echristo
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33026
llvm-svn: 302686
Disable building enable_execute_stack.c for targets that do not have
support for mprotect().
Differential Revision: https://reviews.llvm.org/D33018
llvm-svn: 302680
Summary:
Sanitizer procmaps uses dyld APIs to iterate over the list of images
in the process. This is much more performant than manually recursing
over all of the memory regions in the process; however, dyld does
not report itself in the list of images. In order to prevent reporting
leaks from dyld globals and to symbolize dyld functions in stack traces,
this patch special-cases dyld and ensures that it is added to the
list of modules.
This is accomplished by recursing through the memory map of the process
until a dyld Mach header is found. While this recursion is expensive,
it is run before the full set of images has been loaded in the process,
so only a few calls are required. The result is cached so that it never
needs to be searched for when the full process memory map exists, as this
would be incredibly slow, on the order of minutes for leak sanitizer with
only 25 or so libraries loaded.
Reviewers: alekseyshl, kubamracek
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32968
llvm-svn: 302673
Some configurations (for instance, default Docker Ubuntu images) use
an empty, and thus invalid, default /etc/fstab configuration file. This makes
any call to getmntent return NULL and leads to failures in
Msan-aarch64{-with-call}-Test/MemorySanitizer.getmntent{_r}.
This patch fixes it by creating a temporary file with some valid
entries (although not valid for the system) to use along with
setmntent/getmntent.
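A sketch of that test-side workaround (path and entries are illustrative):
```
#include <mntent.h>
#include <stdio.h>

int main() {
  const char *path = "/tmp/msan_fstab_test";  // illustrative temporary path
  FILE *out = fopen(path, "w");
  if (!out) return 1;
  fputs("/dev/root / ext4 rw 0 1\n", out);
  fputs("proc /proc proc defaults 0 0\n", out);
  fclose(out);

  // Iterate the synthetic file instead of relying on the host's /etc/fstab.
  FILE *fstab = setmntent(path, "r");
  if (!fstab) return 1;
  for (struct mntent *me = getmntent(fstab); me; me = getmntent(fstab))
    printf("%s on %s (%s)\n", me->mnt_fsname, me->mnt_dir, me->mnt_type);
  endmntent(fstab);
  return 0;
}
```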
llvm-svn: 302639
By default glibc writes its diagnostics directly to the tty, so the `2>&1 |`
redirection in the test doesn't catch the *** stack smashing detected ***
message, which in turn breaks the printing of lit's progress bar. By defining
the LIBC_FATAL_STDERR_ environment variable we force glibc to direct
diagnostic messages to stderr.
Differential Revision: https://reviews.llvm.org/D32599
llvm-svn: 302628
This commit made ubsan use the fast unwinder. On SystemZ this requires
test cases to be compiled with -mbackchain. That was already done for
asan, but not ubsan. Add the flag for ubsan as well.
llvm-svn: 302562
Summary:
This change optimizes several aspects of the checksum used for chunk headers.
First, there is no point in checking the weak symbol `computeHardwareCRC32`
every time; it will either be there or not when we start, so check it once
during initialization and set the checksum type accordingly.
Then, the loading of `HashAlgorithm` for the SSE versions (and the ARM
equivalent) was not optimized out, even though it is unnecessary. So I
reshuffled that part of the code,
which duplicates a tiny bit of code, but ends up in a much cleaner assembly
(and faster as we avoid an extraneous load and some calls).
The following code is the checksum at the end of `scudoMalloc` for x86_64 with
full SSE 4.2, before:
```
mov rax, 0FFFFFFFFFFFFFFh
shl r10, 38h
mov edi, dword ptr cs:_ZN7__scudoL6CookieE ; __scudo::Cookie
and r14, rax
lea rsi, [r13-10h]
movzx eax, cs:_ZN7__scudoL13HashAlgorithmE ; __scudo::HashAlgorithm
or r14, r10
mov rbx, r14
xor bx, bx
call _ZN7__scudo20computeHardwareCRC32Ejm ; __scudo::computeHardwareCRC32(uint,ulong)
mov rsi, rbx
mov edi, eax
call _ZN7__scudo20computeHardwareCRC32Ejm ; __scudo::computeHardwareCRC32(uint,ulong)
mov r14w, ax
mov rax, r13
mov [r13-10h], r14
```
After:
```
mov rax, cs:_ZN7__scudoL6CookieE ; __scudo::Cookie
lea rcx, [rbx-10h]
mov rdx, 0FFFFFFFFFFFFFFh
and r14, rdx
shl r9, 38h
or r14, r9
crc32 eax, rcx
mov rdx, r14
xor dx, dx
mov eax, eax
crc32 eax, rdx
mov r14w, ax
mov rax, rbx
mov [rbx-10h], r14
```
Reviewers: dvyukov, alekseyshl, kcc
Reviewed By: alekseyshl
Subscribers: aemerson, rengolin, llvm-commits
Differential Revision: https://reviews.llvm.org/D32971
llvm-svn: 302538
Summary: This should significantly improve darwin lsan performance in cases where root regions are not used.
Reviewers: alekseyshl, kubamracek
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32966
llvm-svn: 302530
Summary:
This change adds Android support to the allocator (but doesn't yet enable it in
the cmake config), and should be the last fragment of the rewritten change
D31947.
Android has tighter memory constraints than other platforms, so the idea of a
unique context per thread would not have worked. The alternative chosen is to
allocate a set of contexts based on the number of cores on the machine, and
share those contexts among the threads. Contexts can be dynamically reassigned
to threads to reduce contention, based on a scheme suggested by @dvyukov in
the initial review.
Additionally, given that Android doesn't support ELF TLS (only emutls for now),
we use the TSan TLS slot to make things faster: Scudo is mutually exclusive
with other sanitizers so this shouldn't cause any problem.
An additional change made here is replacing `thread_local` by `THREADLOCAL`
and using the initial-exec thread model in the non-Android version to prevent
extraneous weak definition and checks on the relevant variables.
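A very rough sketch of the shared-context idea follows; names and the
sizing/migration policy are illustrative, and it uses plain `thread_local` for
brevity where the actual change avoids ELF TLS on Android:
```
#include <pthread.h>
#include <unistd.h>
#include <vector>

struct AllocatorContext {
  pthread_mutex_t Mutex = PTHREAD_MUTEX_INITIALIZER;
  // per-context caches, stats, ...
};

static std::vector<AllocatorContext> Contexts;
static thread_local unsigned CurrentSlot = 0;

// One context per core, shared between threads.
void InitContexts() {
  long Cores = sysconf(_SC_NPROCESSORS_ONLN);
  Contexts.resize(Cores > 0 ? static_cast<size_t>(Cores) : 1);
}

// A thread that finds its current context contended migrates to another slot.
AllocatorContext *GetAndLockContext() {
  if (pthread_mutex_trylock(&Contexts[CurrentSlot].Mutex) == 0)
    return &Contexts[CurrentSlot];
  CurrentSlot = (CurrentSlot + 1) % Contexts.size();
  pthread_mutex_lock(&Contexts[CurrentSlot].Mutex);
  return &Contexts[CurrentSlot];
}
```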
Reviewers: kcc, dvyukov, alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D32649
llvm-svn: 302300
Follow-up on D32846 to simplify testing: don't rely on FileCheck to
test boundary conditions; instead, do all the testing in code.
llvm-svn: 302212
Summary:
This change allows us to provide users and implementers of XRay handlers
a means of converting XRay function id's to addresses. This, in
combination with the facilities provided in D32695, allows users to find
out:
- How many function id's there are defined in the current binary.
- Get the address of the function associated with this function id.
- Patch only specific functions according to their requirements.
While we don't directly provide symbolization support in XRay, having
the function's address lets users determine this information easily
either during runtime, or offline with tools like 'addr2line'.
Reviewers: dblaikie, echristo, pelikan
Subscribers: kpw, llvm-commits
Differential Revision: https://reviews.llvm.org/D32846
llvm-svn: 302210
Summary:
glibc on Linux calls __longjmp_chk instead of longjmp (or _longjmp) when
_FORTIFY_SOURCE is defined. Ensure that an ASAN-instrumented program
intercepts this function when a system library calls it; otherwise the
stack might remain poisoned and result in CHECK failures and false
positives.
Fixes https://github.com/google/sanitizers/issues/721
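A simplified model of what the interceptor does, sketched with dlsym instead of
the sanitizer's interception machinery (__asan_handle_no_return is ASan's
exported hook for longjmp-like control transfers):
```
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <dlfcn.h>

extern "C" void __asan_handle_no_return();

// Tell ASan that the stack frames being jumped over are going away, then
// forward to the real glibc __longjmp_chk.
extern "C" void __longjmp_chk(void *env, int val) {
  using longjmp_chk_fn = void (*)(void *, int);
  static longjmp_chk_fn real =
      reinterpret_cast<longjmp_chk_fn>(dlsym(RTLD_NEXT, "__longjmp_chk"));
  __asan_handle_no_return();  // unpoisons the stack that the jump skips over
  real(env, val);
}
```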
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D32408
llvm-svn: 302152
Match the builtins that GCC provides for IEEE754 quad precision
on MIPS64. Also, enable building them with clang as PR20098 is resolved.
Disable tests for the xf and xc modes, as MIPS doesn't support those modes in
hardware or software.
Reviewers: slthakur
Differential Revision: https://reviews.llvm.org/D32794
llvm-svn: 302147
Summary:
This change allows us to patch/unpatch specific functions using the
function ID. This is useful in cases where implementations might want to
do coverage-style, or more fine-grained control of which functions to
patch or un-patch at runtime.
Depends on D32693.
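A hedged usage sketch; the per-function entry points named here follow
xray_interface.h at this revision as I recall it:
```
#include <cstdint>
#include "xray/xray_interface.h"

void ProfileJustThisFunction(int32_t FuncId) {
  __xray_patch_function(FuncId);    // rewrite the sleds for this function only
  // ... run the workload of interest ...
  __xray_unpatch_function(FuncId);  // restore the no-op sleds
}
```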
Reviewers: dblaikie, echristo, kpw
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32695
llvm-svn: 302112
This patch allows the Swift compiler to emit calls to `__tsan_external_write` before starting any modifying access, which will cause TSan to detect races on arrays, dictionaries and other classes defined in non-instrumented modules. Races on collections from the Swift standard library and on user-defined structs are a frequent cause of subtle bugs, and it's important that TSan detects those on top of the existing LLVM IR instrumentation, which already detects races in direct memory accesses.
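For context, the calls being emitted look roughly like this; the declarations
below mirror TSan's external API as I understand it, so treat the exact
signatures as an assumption:
```
extern "C" void *__tsan_external_register_tag(const char *object_type);
extern "C" void __tsan_external_write(void *addr, void *caller_pc, void *tag);

// A non-instrumented collection implementation registers a tag once and
// reports a logical write before every mutating operation, letting TSan pair
// it with concurrent accesses from other threads.
void CollectionWillMutate(void *storage) {
  static void *Tag = __tsan_external_register_tag("Swift collection");
  __tsan_external_write(storage, __builtin_return_address(0), Tag);
}
```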
Differential Revision: https://reviews.llvm.org/D31630
llvm-svn: 302050
This patch marks a few ASan tests as unsupported on iOS. These are mostly tests that use files or paths that are invalid/inaccessible on iOS or the simulator. We currently don't have a good way of propagating/copying secondary files that individual tests need. The same problem exists on Android, so I'm just marking the tests as UNSUPPORTED now.
Differential Revision: https://reviews.llvm.org/D32632
llvm-svn: 301966
The fast reset for large memory regions only fails to work on Windows,
so enable it for Go on Linux, Darwin and FreeBSD.
See https://github.com/golang/go/issues/20139
for background and motivation.
Based on idea by Josh Bleecher Snyder.
llvm-svn: 301927
Summary:
TSan's Android `__get_tls()` and `TLS_SLOT_TSAN` can be used by other sanitizers as well (see D32649); this change moves them to sanitizer_common.
I picked sanitizer_linux.h as their new home.
In the process, add the 32-bit versions for ARM, i386 & MIPS.
Can the address of `__get_tls()[TLS_SLOT_TSAN]` change in between the calls?
I am not sure if there is a need to repeat the construct as opposed to using a variable. So I left things as they were.
Testing on my side was restricted to a successful cross-compilation.
Reviewers: dvyukov, kubamracek
Reviewed By: dvyukov
Subscribers: aemerson, rengolin, srhines, dberris, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D32705
llvm-svn: 301926
This makes it possible to get stacktrace info when print_stacktrace=1 on
Darwin (where the slow unwinder is not currently supported [1]). This
should not regress any other platforms.
[1] The thread about r300295 has a relatively recent discussion about
this. We should be able to enable the existing slow unwind functionality
for Darwin, but this needs more testing.
Differential Revision: https://reviews.llvm.org/D32517
llvm-svn: 301839