This patch makes ASan for aarch64 use the same shadow offset for all
currently supported VMAs (39 and 42 bits). The shadow offset is the
same as the 39-bit one (1ULL << 36).
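A minimal sketch of the unified translation; the scale of 3 is the
standard ASan mapping, and the constant name here is illustrative
(compiler-rt's identifier is kAArch64_ShadowOffset64):

  #include <cstdint>
  #include <cstdio>

  // Assumed unified offset; matches the 39-bit VMA value named above.
  constexpr uint64_t kAArch64ShadowOffset = 1ULL << 36;

  // Standard ASan translation with shadow scale 3.
  constexpr uint64_t MemToShadow(uint64_t mem) {
    return (mem >> 3) + kAArch64ShadowOffset;
  }

  int main() {
    // Shadow addresses stay inside the VMA for both 39- and 42-bit tops.
    std::printf("%llx\n", (unsigned long long)MemToShadow((1ULL << 39) - 8));
    std::printf("%llx\n", (unsigned long long)MemToShadow((1ULL << 42) - 8));
  }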
llvm-svn: 252497
This patch adds support for asan on aarch64-linux with a 42-bit VMA
(the current default config for 64K-pagesize kernels). The support is
enabled by defining SANITIZER_AARCH64_VMA to 42 at build time for both
clang/llvm and compiler-rt. The default VMA is 39 bits.
For the 42-bit VMA, aarch64 uses SANITIZER_CAN_USE_ALLOCATOR64.
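A hedged sketch of the build-time switch described here; only the two
macro names come from this commit message, the surrounding code is
illustrative:

  // Select allocator support from the build-time VMA macro.
  #ifndef SANITIZER_AARCH64_VMA
  # define SANITIZER_AARCH64_VMA 39  // default VMA
  #endif

  #if SANITIZER_AARCH64_VMA == 42
  # define SANITIZER_CAN_USE_ALLOCATOR64 1  // 64-bit allocator fits a 42-bit VMA
  #else
  # define SANITIZER_CAN_USE_ALLOCATOR64 0  // fall back to the 32-bit allocator
  #endif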
llvm-svn: 245596
The number of inaccessible pages at the beginning of the address
space can differ between processes on the same machine. Try different
values at runtime to protect as much memory as possible.
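A sketch of that probing idea, assuming a hypothetical TryMap helper
built on plain mmap (not the exact compiler-rt code):

  #include <sys/mman.h>
  #include <cstddef>
  #include <cstdint>

  // Probe whether the kernel lets us reserve memory at `addr`; without
  // MAP_FIXED, disallowed hints are relocated, so compare the result.
  static bool TryMap(uintptr_t addr, size_t size) {
    void *p = mmap(reinterpret_cast<void *>(addr), size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) return false;
    bool ok = reinterpret_cast<uintptr_t>(p) == addr;
    munmap(p, size);
    return ok;
  }

  // Try a few candidate bases and return the lowest one this process
  // can actually map (candidate values are illustrative).
  static uintptr_t LowestMappableAddress() {
    const size_t kPage = 4096;
    for (uintptr_t base : {kPage, 16 * kPage, 256 * kPage, 4096 * kPage})
      if (TryMap(base, kPage)) return base;
    return 0;  // nothing worked; protect no extra pages
  }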
llvm-svn: 244364
This patch enables asan for aarch64/linux. It marks the port as
'unstable-release', since some tests are failing due either to the
kernel missing support for non-executable pages in mmap or to
environment instability (infinite loops on Juno reference boards).
It sets the decorate_proc_maps test to require stable-release, since
the test expects the shadow memory not to be executable, and support
for this on aarch64 was only recently added to Linux
(da141706aea52c1a9 - 4.0).
It also XFAILs the static_tls test, because on aarch64 the linker may
omit the __tls_get_addr call as a TLS optimization.
llvm-svn: 244054
ASan shadow on Android starts at address 0 for both historic and
performance reasons. This is possible because the platform mandates
-pie, which makes lower memory region always available.
This is not such a good idea on 64-bit platforms because of MAP_32BIT
incompatibility.
This patch changes the Android/AArch64 mapping to be the same as that
of Linux/AArch64.
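For contrast, a sketch of the two translations in play; the
1ULL << 36 offset matches the AArch64 value used elsewhere in these
logs, and the function names are illustrative:

  #include <cstdint>

  // Old Android/AArch64 scheme: zero-based shadow, viable only while
  // the low address range is guaranteed free (the platform mandates -pie).
  constexpr uint64_t MemToShadowZeroBased(uint64_t mem) { return mem >> 3; }

  // Linux/AArch64 scheme the commit switches Android to: a nonzero
  // offset keeps the shadow away from the low memory that
  // MAP_32BIT-style users need.
  constexpr uint64_t MemToShadowOffsetBased(uint64_t mem) {
    return (mem >> 3) + (1ULL << 36);
  }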
llvm-svn: 243548
Summary:
This is one of many changes needed to get compiler-rt building on iOS.
This change ifdefs out headers and functionality that aren't available
on iOS, and adds support for iOS and the iOS simulator to ASan.
Note: this change does not enable building for iOS, as there are more changes to come.
Reviewers: glider, kubabrecka, bogner, samsonov
Reviewed By: samsonov
Subscribers: samsonov, zaks.anna, llvm-commits
Differential Revision: http://reviews.llvm.org/D10515
llvm-svn: 240469
for both PPC64 big- and little-endian modes, which also eliminates the
need for the BIG_ENDIAN/LITTLE_ENDIAN #ifdeffery.
By trial and error, it also looks like the kPPC64_ShadowOffset64 value
of (1ULL << 41) is valid for both BE and LE, so that #if/#elif/#endif
block has also been simplified.
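A sketch of the simplified result; the constant name comes from this
commit message, and the translation is the standard scale-3 mapping:

  #include <cstdint>

  // Single value for both endiannesses, replacing the old
  // #if BIG_ENDIAN / #elif LITTLE_ENDIAN block mentioned above.
  static const uint64_t kPPC64_ShadowOffset64 = 1ULL << 41;

  static inline uint64_t MemToShadow(uint64_t mem) {
    return (mem >> 3) + kPPC64_ShadowOffset64;
  }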
Differential Revision: http://reviews.llvm.org/D6044
llvm-svn: 221457
Whitespace update for lint check by myself (Will). Otherwise code and comments by Peter Bergner, as previously seen on llvm-commits.
The following patch gets ASAN somewhat working on powerpc64le-linux.
It currently assumes the LE kernel uses 46-bit addressing, which is
true, but it doesn't handle the BE case, where the kernel may use 44
or 46 bits. That can be fixed in a follow-on patch.
There are some test-suite failures even with this patch that I haven't
had time to solve yet, but this is better than the current state.
The limited debugging of those failures suggests that the address map
for 46-bit addressing has changed, so we'll need to adjust the shadow
memory location slightly. Again, that can be fixed in a follow-on
patch.
llvm-svn: 219827
by their authors.
This may break builds where others added code relying on these patches,
but please *do not* revert this commit. Instead, we will prepare patches
which fix the failures.
Reverts the following commits:
r168306: "[asan] support x32 mode in the fast stack unwinder. Patch by H.J. Lu"
r168356: "[asan] more support for powerpc, patch by Peter Bergner"
r196489: "[sanitizer] fix the ppc32 build (patch by Jakub Jelinek)"
llvm-svn: 196802
When prelink is installed on the system, prelinked libraries map
between 0x003000000000 and 0x004000000000, thus occupying the shadow
gap, so we need to split the address space even further, like this:
|| [0x10007fff8000, 0x7fffffffffff] || HighMem ||
|| [0x02008fff7000, 0x10007fff7fff] || HighShadow ||
|| [0x004000000000, 0x02008fff6fff] || ShadowGap3 ||
|| [0x003000000000, 0x003fffffffff] || MidMem ||
|| [0x00087fff8000, 0x002fffffffff] || ShadowGap2 ||
|| [0x00067fff8000, 0x00087fff7fff] || MidShadow ||
|| [0x00008fff7000, 0x00067fff7fff] || ShadowGap ||
|| [0x00007fff8000, 0x00008fff6fff] || LowShadow ||
|| [0x000000000000, 0x00007fff7fff] || LowMem ||
Do it only if necessary.
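A hedged sketch of what the split layout means for the translation
(constants are copied from the table above; the names are illustrative):

  #include <cstdint>

  constexpr uint64_t kLowMemEnd    = 0x00007fff7fffULL;
  constexpr uint64_t kMidMemBeg    = 0x003000000000ULL;
  constexpr uint64_t kMidMemEnd    = 0x003fffffffffULL;
  constexpr uint64_t kHighMemBeg   = 0x10007fff8000ULL;
  constexpr uint64_t kShadowOffset = 0x00007fff8000ULL;

  // The usual scale-3 translation still covers all three application
  // ranges; the split only changes which regions are reserved as
  // shadow and which are protected as gaps.
  constexpr uint64_t MemToShadow(uint64_t mem) {
    return (mem >> 3) + kShadowOffset;
  }

  constexpr bool AddrIsInMem(uint64_t a) {
    return a <= kLowMemEnd || (a >= kMidMemBeg && a <= kMidMemEnd) ||
           a >= kHighMemBeg;
  }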
Also added a bit of profiling code to make sure that the
mapping code is efficient.
Added a lit test to simulate prelink-ed libraries.
Unfortunately, this test does not work with the binutils-gold linker:
if gold is the default linker, the test silently passes.
Also replaced
__has_feature(address_sanitizer)
with
__has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
in two places.
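For reference, a sketch of the combined guard; the __has_feature
fallback definition is a common idiom for compilers such as GCC that
lack it (illustrative, not the exact patched code):

  #ifndef __has_feature
  # define __has_feature(x) 0
  #endif

  #if __has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
  // ASan-aware code path
  #endif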
Patch partially by Jakub Jelinek.
llvm-svn: 175263
In the current implementation AsanThread::GetFrameNameByAddr scans the
stack for a magic guard value to locate base address of the stack
frame. This is not reliable, especially on ARM, where the code that
stores this magic value has to construct it in a register from two
small intermediates; this register can then end up stored in a random
stack location in the prologue of another function.
With this change, GetFrameNameByAddr scans the shadow memory for the
signature of a left stack redzone instead. It is now possible to
remove the magic from the instrumentation pass for additional
performance gain. We keep it there for now just to make sure the new
algorithm does not fail in some corner case.
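A hedged sketch of that shadow scan, assuming the standard encoding
where kAsanStackLeftRedzoneMagic (0xf1) marks the shadow of a frame's
left redzone; the offset, scale, and function name are illustrative:

  #include <cstdint>

  constexpr uint64_t kShadowScale  = 3;
  constexpr uint64_t kShadowOffset = 0x7fff8000ULL;  // illustrative
  constexpr uint8_t  kStackLeftRedzoneMagic = 0xf1;

  // Scan shadow bytes downward from `addr` to the left-redzone marker;
  // the application address just past it approximates the frame base.
  uintptr_t FindFrameBase(uintptr_t addr, uintptr_t stack_bottom) {
    uint8_t *sh =
        reinterpret_cast<uint8_t *>((addr >> kShadowScale) + kShadowOffset);
    uint8_t *bot = reinterpret_cast<uint8_t *>(
        (stack_bottom >> kShadowScale) + kShadowOffset);
    while (sh > bot && *sh != kStackLeftRedzoneMagic) --sh;
    if (*sh != kStackLeftRedzoneMagic) return 0;  // signature not found
    // Translate the shadow position back to an application address.
    return (reinterpret_cast<uintptr_t>(sh) + 1 - kShadowOffset)
           << kShadowScale;
  }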
llvm-svn: 156710