for both PPC64 Big and Little endian modes, so also eliminates the need for
the BIG_ENDIAN/LITTLE_ENDIAN #ifdeffery.
By trial and error, it also looks like the kPPC64_ShadowOffset64 value of
(1ULL << 41) is valid for both BE and LE, so that #if/#elif/#endif block
has also been simplified.
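A minimal sketch of what the simplified definition looks like, assuming the
constant lives in asan_mapping.h as in compiler-rt and using its unsigned
64-bit typedef:

  // Single, endian-independent definition; previously this was wrapped in a
  // BIG_ENDIAN/LITTLE_ENDIAN #if/#elif/#endif block with per-byte-order values.
  typedef unsigned long long u64;  // compiler-rt's unsigned 64-bit typedef
  static const u64 kPPC64_ShadowOffset64 = 1ULL << 41;  // 0x20000000000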
Differential Revision: http://reviews.llvm.org/D6044
llvm-svn: 221457
Whitespace update for the lint check by myself (Will). Otherwise, code and comments are by Peter Bergner, as previously seen on llvm-commits.
The following patch gets ASAN somewhat working on powerpc64le-linux.
It currently assumes the LE kernel uses 46-bit addressing, which is
true, but it doesn't solve the case for BE, where it may be 44 or
46 bits. That can be fixed with a follow-on patch.
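As a back-of-the-envelope check (illustration only, not part of the patch),
the usual ASan mapping Shadow = (Addr >> 3) + Offset keeps the shadow inside
a 46-bit address space when the offset is the (1ULL << 41) value mentioned in
the commit above; a small standalone sketch:

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint64_t kVmaBits = 46;                   // assumed LE kernel VMA size
    const uint64_t kHighMemEnd = (1ULL << kVmaBits) - 1;
    const uint64_t kShadowScale = 3;                // 8 app bytes -> 1 shadow byte
    const uint64_t kShadowOffset = 1ULL << 41;      // PPC64 offset from the commit above
    const uint64_t highest_shadow = (kHighMemEnd >> kShadowScale) + kShadowOffset;
    assert(highest_shadow <= kHighMemEnd);          // shadow still addressable in 46 bits
    return 0;
  }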
There are some test suite failures even with this patch that I haven't had
time to solve yet, but this is better than the state it is in now.
The limited debugging of those failures seems to show that the
address map for 46-bit addressing has changed, so we'll need to
modify the shadow memory location slightly. Again, that can be fixed
with a follow-on patch.
llvm-svn: 219827
by their authors.
This may break builds where others added code relying on these patches,
but please *do not* revert this commit. Instead, we will prepare patches
which fix the failures.
Reverts the following commits:
r168306: "[asan] support x32 mode in the fast stack unwinder. Patch by H.J. Lu"
r168356: "[asan] more support for powerpc, patch by Peter Bergner"
r196489: "[sanitizer] fix the ppc32 build (patch by Jakub Jelinek)"
llvm-svn: 196802
When prelink is installed on the system, prelinked
libraries map between 0x003000000000 and 0x004000000000, thus occupying the shadow Gap,
so we need to split the address space even further, like this:
|| [0x10007fff8000, 0x7fffffffffff] || HighMem ||
|| [0x02008fff7000, 0x10007fff7fff] || HighShadow ||
|| [0x004000000000, 0x02008fff6fff] || ShadowGap3 ||
|| [0x003000000000, 0x003fffffffff] || MidMem ||
|| [0x00087fff8000, 0x002fffffffff] || ShadowGap2 ||
|| [0x00067fff8000, 0x00087fff7fff] || MidShadow ||
|| [0x00008fff7000, 0x00067fff7fff] || ShadowGap ||
|| [0x00007fff8000, 0x00008fff6fff] || LowShadow ||
|| [0x000000000000, 0x00007fff7fff] || LowMem ||
Do this extra splitting only if necessary.
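For reference, the shadow ranges in this table follow from the standard
x86_64/Linux ASan mapping Shadow = (Mem >> 3) + 0x7fff8000; a small
standalone sketch (illustration only, not part of the patch) that reproduces
the MidShadow bounds from the MidMem range above:

  #include <cassert>
  #include <cstdint>

  // Standard x86_64/Linux ASan mapping: shadow scale 3, offset 0x7fff8000.
  static uint64_t MemToShadow(uint64_t addr) {
    return (addr >> 3) + 0x7fff8000ULL;
  }

  int main() {
    // MidMem from the table: [0x003000000000, 0x003fffffffff]
    assert(MemToShadow(0x003000000000ULL) == 0x00067fff8000ULL);  // MidShadow begin
    assert(MemToShadow(0x003fffffffffULL) == 0x00087fff7fffULL);  // MidShadow end
    return 0;
  }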
Also added a bit of profiling code to make sure that the
mapping code is efficient.
Added a lit test to simulate prelink-ed libraries.
Unfortunately, this test does not work with the binutils-gold linker.
If gold is the default linker, the test silently passes.
Also replaced
__has_feature(address_sanitizer)
with
__has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
in two places.
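The combined check looks roughly like this (__has_feature is a Clang
extension, while GCC predefines __SANITIZE_ADDRESS__ under
-fsanitize=address), so compilers without __has_feature need a fallback:

  // Fallback so the check also preprocesses under compilers without __has_feature.
  #ifndef __has_feature
  # define __has_feature(x) 0
  #endif

  #if __has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
  // ASan-specific code path
  #endif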
Patch partially by Jakub Jelinek.
llvm-svn: 175263
In the current implementation, AsanThread::GetFrameNameByAddr scans the
stack for a magic guard value to locate the base address of the stack
frame. This is not reliable, especially on ARM, where the code that
stores this magic value has to construct it in a register from two
small immediates; this register can then end up stored in a random
stack location in the prologue of another function.
With this change, GetFrameNameByAddr scans the shadow memory for the
signature of a left stack redzone instead. It is now possible to
remove the magic from the instrumentation pass for additional
performance gain. We keep it there for now just to make sure the new
algorithm does not fail in some corner case.
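A minimal sketch of the new scanning idea (illustration only, not the actual
AsanThread::GetFrameNameByAddr code), assuming the usual
Shadow = (Addr >> 3) + Offset mapping and compiler-rt's conventional 0xf1
marker for the left stack redzone:

  #include <cstdint>

  // Assumed parameters: x86_64/Linux shadow offset and the conventional
  // kAsanStackLeftRedzoneMagic value (0xf1) used by the instrumentation.
  static const uint64_t kShadowScale = 3;
  static const uint64_t kShadowOffset = 0x7fff8000ULL;
  static const uint8_t  kAsanStackLeftRedzoneMagic = 0xf1;

  static uint64_t MemToShadow(uint64_t addr) {
    return (addr >> kShadowScale) + kShadowOffset;
  }

  // Walk the shadow downwards from `addr` until the left-redzone marker is
  // found; the application address it describes is the frame's base address.
  // `stack_bottom` bounds the search so we never leave the thread's stack.
  static uint64_t FindFrameBase(uint64_t addr, uint64_t stack_bottom) {
    uint64_t shadow = MemToShadow(addr);
    const uint64_t shadow_bottom = MemToShadow(stack_bottom);
    while (shadow >= shadow_bottom &&
           *reinterpret_cast<const uint8_t *>(shadow) != kAsanStackLeftRedzoneMagic)
      --shadow;
    if (shadow < shadow_bottom)
      return 0;  // no instrumented frame found below `addr`
    // Invert the mapping to recover the application address of the redzone.
    return (shadow - kShadowOffset) << kShadowScale;
  }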
llvm-svn: 156710