In both CMake and Makefiles, we are inconsistent about the use of libstdc++ vs. libc++, SDKs and minimum deployment targets for OS X. Let's fix the detection of SDKs, and let's explicitly set that we link against libc++ and that -mmacosx-version-min is 10.7.
llvm-svn: 227509
asan_symbolize.py isn't needed on Windows, but it's nice if asan has a unified
UI on all platforms. So rather than have asan_symbolize.py die on startup due to
it importing modules that don't exist on Windows, let it just echo the input.
llvm-svn: 227326
Make sure "void *ctx" doesn't point to an object which already went out
of scope. This might also fix -Wuninitialized warnings GCC 4.7 produces
while building ASan runtime.
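A minimal standalone sketch of the bug pattern being guarded against (the
names below are hypothetical, not the actual interceptor macros): ctx is set
to the address of a block-scoped object and then read after that object's
lifetime has ended.

  struct Ctx { const char *interceptor_name; };

  static const char *use_ctx(void *ctx) {
    return static_cast<Ctx *>(ctx)->interceptor_name;
  }

  static void buggy() {
    void *ctx;
    {
      Ctx c = {"strlen"};
      ctx = &c;            // c's lifetime ends at the closing brace...
    }
    use_ctx(ctx);          // ...so this reads through a dangling pointer.
  }

  static void fixed() {
    Ctx c = {"strlen"};    // keep c alive for as long as ctx is used
    void *ctx = &c;
    use_ctx(ctx);
  }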
llvm-svn: 227258
If a memory access is unaligned, emit __tsan_unaligned_read/write
callbacks instead of __tsan_read/write.
This required changing the semantics of __tsan_unaligned_read/write so that
they no longer perform the user memory access themselves.
But since they were unused (other than through __sanitizer_unaligned_load/store),
this is fine.
Fixes long-standing issue 17:
https://code.google.com/p/thread-sanitizer/issues/detail?id=17
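A hedged sketch of what instrumented code conceptually emits after this
change. The callback declarations below are paraphrased from memory (the
authoritative prototypes live in tsan_interface.h), and the snippet is only
meaningful when built with -fsanitize=thread.

  extern "C" void __tsan_read8(void *addr);
  extern "C" void __tsan_unaligned_read8(const void *addr);

  // For a 64-bit load at an address the compiler cannot prove to be 8-byte
  // aligned, the instrumentation now reports it via the unaligned callback.
  unsigned long long load_misaligned(const char *base) {
    const void *p = base + 1;            // deliberately misaligned
    __tsan_unaligned_read8(p);           // was: __tsan_read8((void *)p)
    unsigned long long v;
    __builtin_memcpy(&v, p, sizeof v);   // the actual user memory access
    return v;
  }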
llvm-svn: 227230
These functions are already present in the cc_kext for arm32 and for x86 and
x86_64. It was an oversight that they were not included for arm64.
Based on a patch by Lawrence D'Anna. Thanks!
llvm-svn: 227206
A flexible way of describing MSan memory layout details on various
platforms. No significant functional changes, but the memory layout
description that you get at verbosity=1 looks slightly different.
This change includes stronger sanity checks than before.
The goal of this change is to allow more than 2 application memory
ranges for https://code.google.com/p/memory-sanitizer/issues/detail?id=76.
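A hedged sketch of the shape this description takes (the real tables are
per-platform and live in the MSan headers; the ranges and names below are
invented for illustration): each platform lists its address space as ordered
ranges tagged app, shadow, origin, or invalid, which the runtime can both
print at verbosity=1 and sanity-check at startup.

  #include <cstdint>

  struct MappingDesc {
    uintptr_t start, end;
    enum Type { INVALID, APP, SHADOW, ORIGIN } type;
    const char *name;
  };

  // Illustrative layout with more than one application range.
  const MappingDesc kExampleLayout[] = {
      {0x000000000000ULL, 0x010000000000ULL, MappingDesc::APP,     "app-1"},
      {0x010000000000ULL, 0x100000000000ULL, MappingDesc::INVALID, "invalid"},
      {0x100000000000ULL, 0x110000000000ULL, MappingDesc::SHADOW,  "shadow"},
      {0x110000000000ULL, 0x120000000000ULL, MappingDesc::ORIGIN,  "origin"},
      {0x120000000000ULL, 0x800000000000ULL, MappingDesc::APP,     "app-2"},
  };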
llvm-svn: 227192
Modifying Darwin/interception-in-shared-lib-test.cc and suppressions-library.cc
to use rpath instead of linking against the full path to the temporary file.
NFC.
llvm-svn: 227161
The idea is to ensure that the ASan runtime gets initialized early (i.e.
before other initializers/constructors) even when DYLD_INSERT_LIBRARIES
is not used. In that case, the interceptors are not installed (on OS X,
DYLD_INSERT_LIBRARIES is required for interceptors to work), and therefore
ASan currently gets initialized quite late -- from the main executable's
module initializer. The following issues are a consequence of this:
https://code.google.com/p/address-sanitizer/issues/detail?id=363
https://code.google.com/p/address-sanitizer/issues/detail?id=357
Both of them are fixed with this patch.
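A minimal sketch of the idea, not necessarily the exact mechanism the patch
uses: a constructor placed inside the ASan dylib itself is run by dyld when
the library is loaded, i.e. before the main executable's module initializers,
so initialization no longer has to wait for them.

  extern "C" void __asan_init();

  // Runs at dylib load time; calling __asan_init() again later is a no-op.
  __attribute__((constructor))
  static void InitializeAsanEarly() {
    __asan_init();
  }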
Reviewed at http://reviews.llvm.org/D7117
llvm-svn: 226929
The interceptor of ioctl is using a non-standard prototype:
INTERCEPTOR(int, ioctl, int d, unsigned request, void *arg)
At least on OS X, the request argument should be unsigned long and not
just unsigned, and instead of a fixed last argument (arg) the function
should accept a variable number of arguments, so the prototype should be:
int ioctl(int fildes, unsigned long request, ...);
We can still keep using `unsigned` internally to save space, because we
know that all possible values of `request` will fit into it.
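A hedged, standalone sketch of the corrected shape (the real code uses the
INTERCEPTOR macro and REAL(ioctl); my_ioctl/real_ioctl below are stand-ins):
take the request as unsigned long, fetch the optional argument through
varargs, and keep only a narrowed copy of the request for internal tables.

  #include <cstdarg>

  extern "C" int real_ioctl(int fildes, unsigned long request, void *arg);

  extern "C" int my_ioctl(int fildes, unsigned long request, ...) {
    // The optional third argument, when present, is fetched as a single
    // void*, which is how the sanitizer bookkeeping treats it.
    va_list ap;
    va_start(ap, request);
    void *arg = va_arg(ap, void *);
    va_end(ap);

    // All known request values fit in 32 bits, so the internal tables can
    // still store the request as a plain unsigned to save space.
    unsigned narrow_request = static_cast<unsigned>(request);
    (void)narrow_request;

    return real_ioctl(fildes, request, arg);
  }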
Reviewed at http://reviews.llvm.org/D7038
llvm-svn: 226926
This patch is a proposed solution for https://code.google.com/p/address-sanitizer/issues/detail?id=375:
When the stacktraces are captured and printed by ASan itself, they are fine,
but when the program has already printed the report (or is just printing it),
capturing a stacktrace via other means is broken. "Other means" include OS X
CrashReporter, debuggers, or calling backtrace() within the program. For
example, calling backtrace() from a sanitizer_set_death_callback function
prints a very truncated stacktrace.
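A hedged sketch of the "other means" scenario described above, assuming the
__sanitizer_set_death_callback entry point mentioned in the text (the older,
ASan-specific spelling is __asan_set_death_callback): install a death
callback and call backtrace() from it; before this fix the trace printed
from the callback came out badly truncated. Build with -fsanitize=address.

  #include <execinfo.h>
  #include <sanitizer/common_interface_defs.h>

  static void DeathCallback() {
    void *frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, 2 /* stderr */);
  }

  int main() {
    __sanitizer_set_death_callback(DeathCallback);
    char *p = new char[16];
    delete[] p;
    return p[0];  // use-after-free: ASan prints a report, then the callback runs.
  }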
Reviewed at http://reviews.llvm.org/D7103
llvm-svn: 226878
By attaching an extra integer tag to heap origins, we are able
to distinguish between uninits
- created by heap allocation,
- created by heap deallocation (i.e. use-after-free),
- created by __msan_allocated_memory call,
- etc.
See https://code.google.com/p/memory-sanitizer/issues/detail?id=35.
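A purely illustrative sketch (names and layout are hypothetical, not the
runtime's actual encoding) of the kind of information now kept per heap
origin: a stack id plus a small tag saying which event produced the
uninitialized memory.

  #include <cstdint>

  enum HeapOriginTag : uint32_t {
    kTagHeapAlloc = 1,        // malloc/new left the memory uninitialized
    kTagHeapFree,             // free/delete poisoned it (use-after-free)
    kTagMsanAllocatedMemory,  // explicit __msan_allocated_memory() call
  };

  struct HeapOrigin {
    uint32_t stack_id;        // depot id of the allocation/deallocation stack
    HeapOriginTag tag;        // which event above made the memory "uninit"
  };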
llvm-svn: 226821
Fixes 2 issues in origins arising from realloc() calls:
* In the in-place grow case, the origin for the new memory is not set at all.
* In the copy-realloc case, __msan_memcpy is used, which unwinds the stack
from inside the MSan runtime. This does not generally work (as we may be
built w/o frame pointers), and produces a "bad" stack trace anyway, with
several uninteresting (internal) frames on top.
This change also makes realloc() honor "zeroise" and "poison_in_malloc" flags.
See https://code.google.com/p/memory-sanitizer/issues/detail?id=73.
llvm-svn: 226674
Even sleep(1) led to episodic flakes on some machines.
Instead, use a barrier that is invisible to tsan to enforce the required
execution order.
This makes the tests deterministic and faster.
llvm-svn: 226659
Previously we always stored 4 bytes of origin at the destination address
even for 8-byte (and longer) stores.
This should fix rare cases of missing or incorrect origin stacks in MSan reports.
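A hedged sketch of the idea (the helper below is hypothetical; the real
runtime derives the origin address from its memory-to-origin mapping):
origins have 4-byte granularity, so a wider store must replicate its origin
id into every 4-byte origin cell it covers, not just the first one.

  #include <cstdint>

  static void StoreOriginForWrite(uint32_t *origin_cells, unsigned store_size,
                                  uint32_t origin_id) {
    // Previously only origin_cells[0] was written, even for store_size == 8.
    for (unsigned off = 0; off < store_size; off += 4)
      origin_cells[off / 4] = origin_id;
  }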
llvm-svn: 226658
The MemoryAccess function consumes ~4K of stack in debug mode,
in significant part due to the unrolled loop.
And gtest gives only 4K of stack to death test
threads, which causes stack overflows in debug mode.
llvm-svn: 226644
The aarch64-linux kernel has a configurable 39, 42 or 47 bit virtual address
space. Most distros AFAIK use 42-bit VA right now, but there are 39-bit VA
users too. The ppc64 handling can be used for this just fine and supports
all 3 sizes.
There are other issues, like allocator32 not really being able to support
the larger address spaces, and the hardcoded 39-bit address space size in
other macros.
Patch by Jakub Jelinek.
llvm-svn: 226639
glibc recently changed ABI on aarch64-linux:
https://sourceware.org/git/?p=glibc.git;a=commit;h=5c40c3bab2fddaca8cfe12d75944d1fef8adf1a4
Instead of having "unsigned short mode; unsigned short __pad1;", it now has
an "unsigned int mode;" field in the ipc_perm structure.
This patch allows building against the recent glibc and disables the
ipc_perm.mode verification for older versions of glibc.
I think it shouldn't be a big deal even for older glibcs; I couldn't find
any place that would actually care about the exact mode field, rather than
the whole structure, apart from the CHECK_SIZE_AND_OFFSET macro.
Patch by Jakub Jelinek
llvm-svn: 226637
Use synci implementation of clear_cache for short address ranges.
For long address ranges, make a kernel call.
Differential Revision: http://reviews.llvm.org/D6661
llvm-svn: 226567
TSAN_SHADOW_COUNT is defined to 4 in all environments.
Other values of TSAN_SHADOW_COUNT were never tested and
were broken by recent changes to shadow mapping.
Remove it, as there is no reason to fix or maintain it.
llvm-svn: 226466
InternalAlloc is quite complex and its behavior may depend on the values of
flags. As such, it should not be used while parsing flags.
Sadly, LowLevelAlloc does not support deallocation of memory.
llvm-svn: 226453
Setting the maximum read size in FlagHandlerInclude to 2^15 might be a good
default, but causes the read to fail on systems with a page size larger than
that (ReadFileToBuffer(...) will fail if the maximum allowed size is less than
the value returned by GetPageSizeCached()). For example, on my PPC64/Linux
system, GetPageSizeCached() returns 2^16. In case the page size is larger, use
that instead.
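A hedged sketch of the fix; GetPageSizeCached() is the existing
sanitizer_common helper (declared here only so the snippet stands alone),
and kMaxIncludeSize mirrors the 2^15 default.

  typedef unsigned long uptr;
  uptr GetPageSizeCached();                // provided by sanitizer_common

  static const uptr kMaxIncludeSize = 1 << 15;

  static uptr IncludeReadLimit() {
    // ReadFileToBuffer() refuses a limit below the page size, so take the
    // larger of the two (2^16 pages exist, e.g. on some PPC64/Linux systems).
    uptr page = GetPageSizeCached();
    return kMaxIncludeSize > page ? kMaxIncludeSize : page;
  }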
llvm-svn: 226368
Debugging a missing profile is a bit painful right now. We can make
people's lives a bit easier by adding a knob to enable printing a
helpful error message for such failures.
llvm-svn: 226312
This test casts 0x4 to a function pointer and calls it. Unfortunately, the
faulting address may not exactly be 0x4 on PPC64 ELFv1 systems. The LLVM PPC
backend used to always generate the loads "in order", so we'd fault at 0x4
anyway. However, an upcoming change will loosen that ordering, and we'll pick a
different order on some targets. As a result, as explained in the comment, we
need to allow for certain nearby addresses as well.
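A hedged sketch of the pattern the test exercises and why the fault address
varies: on PPC64 ELFv1 a function pointer designates a function descriptor
(entry point, TOC, environment), so the indirect call first loads from the
descriptor, and whichever word the backend happens to load first determines
the address reported in the SEGV.

  typedef void (*fn_t)(void);

  int main() {
    fn_t f = (fn_t)0x4;   // bogus function pointer / descriptor address
    f();                  // SEGV: faults at 0x4 or a nearby descriptor word
    return 0;
  }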
llvm-svn: 226202
The new parser is a lot stricter about syntax, reports unrecognized
flags, and will make it easier to implement some of the planned features.
llvm-svn: 226169