This moves logic for setting kAliasRegionStart into hwasan_allocator.cpp
so other platforms that do not support aliasing mode will not need to define
kAliasRegionStart.
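A minimal sketch of the intent (the guard macro is an assumption):

```cpp
// hwasan_allocator.cpp (sketch): the alias-region base lives with the
// allocator, so ports without aliasing mode never define or reference it.
typedef unsigned long uptr;  // stand-in for sanitizer_common's uptr

#if defined(HWASAN_ALIASING_MODE)  // assumed guard macro for aliasing-mode builds
static uptr kAliasRegionStart;     // initialized during allocator setup
#endif
```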
Differential Revision: https://reviews.llvm.org/D105725
Based on comments in D91466, we can make the current implementation
Linux-specific. The Fuchsia implementation will just be a call to
`__sanitizer_fill_shadow`.
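For illustration, the Fuchsia side could then reduce to something like this (the wrapper name is hypothetical; the `__sanitizer_fill_shadow` signature is the one in Fuchsia's `<zircon/sanitizer.h>`):

```cpp
#include <stddef.h>
#include <stdint.h>
#include <zircon/sanitizer.h>

// Hypothetical Fuchsia-side helper: ask the system to fill the shadow for
// [ptr, ptr + size) with `tag` instead of doing our own mapping work.
static void FuchsiaTagShadow(uintptr_t ptr, size_t size, uint8_t tag) {
  // The last argument tunes when page-level tricks replace a plain memset;
  // 0 here is only a placeholder.
  __sanitizer_fill_shadow(ptr, size, tag, /*threshold=*/0);
}
```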
Differential Revision: https://reviews.llvm.org/D105663
The new name is something less linux-y and more platform-generic. Once we
finalize the tagged pointer ABI in Zircon, we will do the appropriate check
for Fuchsia to see that TBI is enabled.
Differential Revision: https://reviews.llvm.org/D105667
Similar to InitOptions in asan, we can use this optional struct for
initializing some members of thread objects before they are created. On
Linux, this is unused and can remain undefined. On Fuchsia, it will just be
the stack bounds.
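A rough sketch of the shape this takes (the `InitState` spelling and members are illustrative, not the final interface):

```cpp
typedef unsigned long uptr;  // stand-in for sanitizer_common's uptr

// Sketch: a platform-specific, optionally-defined state struct handed to
// thread initialization. Linux never defines or passes it.
class Thread {
 public:
  struct InitState;  // only a Fuchsia translation unit needs to define this
  void Init(uptr stack_buffer_start, uptr stack_buffer_size,
            const InitState *state = nullptr);
};

// Fuchsia-only definition (illustrative): just the stack bounds.
struct Thread::InitState {
  uptr stack_bottom, stack_top;
};
```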
Differential Revision: https://reviews.llvm.org/D104553
Once D104553 lands, CreateCurrentThread will be able to accept optional
parameters for initializing the hwasan thread object. On fuchsia, we can get
stack info in the platform-specific InitThreads and pass it through
CreateCurrentThread. On linux, this is a no-op.
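Schematically (names illustrative):

```cpp
class Thread;
struct InitState;  // platform-defined; on Fuchsia it would carry the stack bounds

// Sketch: the optional argument defaults to null, so Linux call sites stay
// unchanged while Fuchsia forwards the state collected in InitThreads().
Thread *CreateCurrentThread(const InitState *state = nullptr);
```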
Differential Revision: https://reviews.llvm.org/D104561
This allows other implementations to define their own version of `Thread::Init`.
This will be the case for Fuchsia, where much of the thread initialization can be
broken up between different thread hooks (`__sanitizer_before_thread_create_hook`,
`__sanitizer_thread_create_hook`, `__sanitizer_thread_start_hook`). Namely, the
heap ring buffer and stack info can be set up before thread creation.
The stack ring buffer can also be set up before thread creation, but storing it into
`__hwasan_tls` can only be done in the thread start hook, since only then can we
correctly access `__hwasan_tls` for that thread.
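Schematically, the split across Fuchsia's hooks looks something like this (hook signatures follow Fuchsia's `<zircon/sanitizer.h>`; the hwasan helpers named here are placeholders, not the actual implementation):

```cpp
#include <stddef.h>
#include <threads.h>
#include <zircon/sanitizer.h>

// Placeholders for the hwasan-side work (illustrative names only).
void *PrepareHwasanThread(void *stack_base, size_t stack_size);
void BindHwasanThreadToTls(void *hwasan_thread);

// The heap ring buffer, stack ring buffer and stack bounds can all be
// prepared here, before the new thread exists.
extern "C" void *__sanitizer_before_thread_create_hook(
    thrd_t thread, bool detached, const char *name, void *stack_base,
    size_t stack_size) {
  return PrepareHwasanThread(stack_base, stack_size);
}

// Only on the new thread itself can __hwasan_tls be pointed at the prepared
// state, so that part happens in the start hook.
extern "C" void __sanitizer_thread_start_hook(void *hook, thrd_t self) {
  BindHwasanThreadToTls(hook);
}
```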
Differential Revision: https://reviews.llvm.org/D104248
This moves the implementations for HandleTagMismatch, __hwasan_tag_mismatch4,
and HwasanAtExit from hwasan_linux.cpp to hwasan.cpp and declares them in hwasan.h.
This way, calls to those functions can be shared with the fuchsia implementation
without duplicating code.
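The shared declarations end up looking roughly like this (a sketch; parameter lists are approximate):

```cpp
#include <stddef.h>

typedef unsigned long uptr;  // stand-in for sanitizer_common's uptr
struct AccessInfo;           // describes the faulting access (addr, size, is_store, ...)

// hwasan.h (sketch): one shared set of declarations, defined once in
// hwasan.cpp and callable from hwasan_linux.cpp or a Fuchsia equivalent.
void HandleTagMismatch(AccessInfo ai, uptr pc, uptr frame, void *uc,
                       uptr *registers_frame = nullptr);
void HwasanAtExit();

extern "C" void __hwasan_tag_mismatch4(uptr addr, uptr access_info,
                                       uptr *registers_frame, size_t outsize);
```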
Differential Revision: https://reviews.llvm.org/D103562
Since we have both aliasing mode and Intel LAM on x86_64, we need to
choose the mode at either run time or compile time. This patch
implements the plumbing to build both and choose between them at
compile time.
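Conceptually, the compile-time switch is (a sketch; the macro name and helper are assumptions):

```cpp
// Sketch of the compile-time switch. Both schemes are compiled for x86_64 and
// one is selected when compiler-rt is configured.
#if defined(HWASAN_ALIASING_MODE)
static void InitTaggingMode() {
  // Page-aliasing mode: set up the aliased mappings for the primary heap.
}
#else
static void InitTaggingMode() {
  // Intel LAM mode: rely on the hardware ignoring the tag bits in the pointer.
}
#endif
```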
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D102286
Userspace page aliasing allows us to use middle pointer bits for tags
without untagging them before syscalls or accesses. This should enable
easier experimentation with HWASan on x86_64 platforms.
Currently stack, global, and secondary heap tagging are unsupported.
Only primary heap allocations get tagged.
Note that aliasing mode will not work properly in the presence of
fork(), since heap memory will be shared between the parent and child
processes. This mode is non-ideal; we expect Intel LAM to enable full
HWASan support on x86_64 in the future.
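As a worked example of the idea, with made-up constants rather than the real layout:

```cpp
#include <stdint.h>

// Illustrative arithmetic only; the real constants and layout differ. If the
// primary heap region (1 << kRegionBits bytes) is mapped again at every
// multiple of its size, pointers that differ only in bits above kRegionBits
// all reach the same memory, so those bits can carry a tag and the pointer
// can still be passed to syscalls or dereferenced without untagging.
constexpr unsigned kRegionBits = 38;  // hypothetical region size (256 GiB)
constexpr uintptr_t kRegionMask = (uintptr_t(1) << kRegionBits) - 1;

inline uintptr_t TagAliasedPointer(uintptr_t p, uint8_t tag) {
  return (p & kRegionMask) | (uintptr_t(tag) << kRegionBits);
}
```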
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98875
When adding this function in https://reviews.llvm.org/D68794 I did not
notice that internal_prctl has the API of the syscall to prctl rather
than the API of the glibc (posix) wrapper.
This means that the error return value is not necessarily -1 and that
errno is not set by the call.
For InitPrctl this means that the checks do not catch running on a
kernel *without* the required ABI (this went unnoticed because I had only
tested that the function correctly enables the ABI when it exists).
This commit updates the two calls which check for an error condition to
use internal_iserror. That function sets a provided integer to an
equivalent errno value and returns a boolean indicating whether the call failed.
Tested by running on a kernel that has this ABI and on one that does
not. Verified that running on the kernel without this ABI the current
code prints the provided error message and does not attempt to run the
program. Verified that running on the kernel with this ABI the current
code does not print an error message and turns on the ABI.
This was done on an x86 kernel (where the ABI does not exist), an AArch64
kernel without this ABI, and an AArch64 kernel with this ABI.
In order to keep running the testsuite on kernels that do not provide
this new ABI, we add another option to the HWASAN_OPTIONS environment
variable; this option determines whether the library kills the process
if it fails to enable the relaxed syscall ABI.
This new flag is `fail_without_syscall_abi`.
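Schematically, it is wired up like any other hwasan flag (a sketch; the default value and wording shown are assumptions):

```cpp
// hwasan_flags.inc (sketch):
HWASAN_FLAG(bool, fail_without_syscall_abi, true,
            "If true, kill the process when the relaxed tagged-pointer "
            "syscall ABI cannot be enabled; if false, continue without it.")
```

At init time the library would then consult `flags()->fail_without_syscall_abi` before deciding whether to print the error and die.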
The check-hwasan testsuite results do not change with this patch on x86,
on AArch64 without a kernel supporting this ABI, or on AArch64 with a
kernel supporting this ABI.
Differential Revision: https://reviews.llvm.org/D96964
When adding this function in https://reviews.llvm.org/D68794 I did not
notice that internal_prctl has the API of the syscall to prctl rather
than the API of the glibc (posix) wrapper.
This means that the error return value is not necessarily -1 and that
errno is not set by the call.
For InitPrctl this means that the checks do not catch running on a
kernel *without* the required ABI (this went unnoticed because I had only
tested that the function correctly enables the ABI when it exists).
This commit updates the two calls which check for an error condition to
use `internal_iserror`. That function sets a provided integer to an
equivalent errno value and returns a boolean indicating whether the call failed.
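The corrected check pattern, roughly (a sketch; `uptr`, `internal_prctl`, and `internal_iserror` are the sanitizer-internal helpers assumed to be in scope):

```cpp
#include <linux/prctl.h>

#ifndef PR_GET_TAGGED_ADDR_CTRL
#define PR_GET_TAGGED_ADDR_CTRL 56
#endif

// internal_prctl returns the raw syscall result, so errors must be detected
// with internal_iserror rather than by comparing against -1 or reading errno.
static bool KernelSupportsTaggedAddrABI() {
  uptr res = internal_prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0);
  int prctl_errno = 0;
  if (internal_iserror(res, &prctl_errno)) {
    // prctl_errno now holds the errno-equivalent value, e.g. EINVAL on a
    // kernel without the ABI.
    return false;
  }
  return true;
}
```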
Tested by running on a kernel that has this ABI and on one that does
not. Verified that running on the kernel without this ABI the current
code prints the provided error message and does not attempt to run the
program. Verified that running on the kernel with this ABI the current
code does not print an error message and turns on the ABI.
All tests done on an AArch64 Linux machine.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D94425
Summary:
This refactors some common support related to shadow memory setup from
asan and hwasan into sanitizer_common. This should not only reduce code
duplication but also make these facilities available for new compiler-rt
uses (e.g. heap profiling).
In most cases the separate copies of the code were either identical, or
at least functionally identical. A few notes:
In ProtectGap, the asan version checked the address against an upper
bound (kZeroBaseMaxShadowStart, which is 2^18). I have created a copy
of kZeroBaseMaxShadowStart in hwasan_mapping.h, with the same value, as
it isn't clear why that code should not do the same check. If it
shouldn't, I can remove this and guard this check so that it only
happens for asan.
In asan's InitializeShadowMemory, in the dynamic shadow case it was
setting __asan_shadow_memory_dynamic_address to 0 (which then sets both
the SHADOW_OFFSET and kLowShadowBeg macros to 0) before calling
FindDynamicShadowStart(). AFAICT this is only needed because
FindDynamicShadowStart utilizes kHighShadowEnd to
get the shadow size, and kHighShadowEnd is a macro invoking
MEM_TO_SHADOW(kHighMemEnd) which in turn invokes:
(((kHighMemEnd) >> SHADOW_SCALE) + (SHADOW_OFFSET))
I.e. it computes the shadow space needed by kHighMemEnd (the shift), and
adds the offset. Since we only want the shadow space here, the earlier
setting of SHADOW_OFFSET to 0 via __asan_shadow_memory_dynamic_address
accomplishes this. In the hwasan version, it simply gets the shadow
space via "MemToShadowSize(kHighMemEnd)", where MemToShadowSize just
does the shift. I've simplified the asan handling to do the same
thing, and therefore was able to remove the setting of the SHADOW_OFFSET
via __asan_shadow_memory_dynamic_address to 0.
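In code form, the distinction the last paragraph describes (simplified constants):

```cpp
typedef unsigned long uptr;         // stand-in for sanitizer_common's uptr
static const uptr kShadowScale = 4; // hwasan: 16 bytes of app memory per shadow byte
static uptr shadow_offset = 0;      // later set from the dynamic shadow address

// MEM_TO_SHADOW analogue: scales *and* adds the offset, so it only yields a
// pure size while the offset is still 0.
static uptr MemToShadow(uptr addr) { return (addr >> kShadowScale) + shadow_offset; }

// MemToShadowSize analogue: just the scale, which is all the dynamic-shadow
// search needs.
static uptr MemToShadowSize(uptr size) { return size >> kShadowScale; }
```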
Reviewers: vitalybuka, kcc, eugenis
Subscribers: dberris, #sanitizers, llvm-commits, davidxl
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D83247
Summary:
This is necessary to handle calls to free() after __hwasan_thread_exit,
which can happen with glibc.
Also, add a null check to GetCurrentThread; otherwise the logic in
GetThreadByBufferAddress turns it into a non-null value, which means that
all of the existing GetCurrentThread() != nullptr checks currently have no
effect at all!
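The guard is essentially the following (a sketch; helper names approximate):

```cpp
typedef unsigned long uptr;              // stand-in for sanitizer_common's uptr
class Thread;
uptr *GetCurrentThreadLongPtr();         // the per-thread TLS slot
Thread *GetThreadByBufferAddress(uptr);  // ring-buffer-address -> Thread lookup

// If __hwasan_thread_exit already cleared the slot, report "no current
// thread" instead of letting GetThreadByBufferAddress manufacture a non-null
// Thread* out of a zeroed slot.
Thread *GetCurrentThread() {
  uptr *slot = GetCurrentThreadLongPtr();
  if (*slot == 0)
    return nullptr;
  return GetThreadByBufferAddress(*slot);  // simplified lookup
}
```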
Reviewers: pcc, hctim
Subscribers: #sanitizers, llvm-commits
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D79608
This was an experiment made possible by a non-standard feature of the
Android dynamic loader.
It required introducing a flag to tell the compiler which ABI was being
targeted.
This flag is no longer needed, since the generated code now works for
both ABIs.
We leave that flag untouched for backwards compatibility. This also
means that if we need to distinguish between targeted ABIs again,
we can do that without disturbing any existing workflows.
We leave a comment in the source code and a mention in the help text to
explain this for anyone reading the code in the future.
Patch by Matthew Malcomson
Differential Revision: https://reviews.llvm.org/D69574
Summary:
GCC would like to emit a function call to report a tag mismatch
rather than hard-code the `brk` instruction directly.
__hwasan_tag_mismatch_stub contains most of the functionality to do
this already, but requires exposure in the dynamic library.
This patch moves __hwasan_tag_mismatch_stub outside of the anonymous
namespace that it was defined in and declares it in
hwasan_interface_internal.h.
We also add the ability to pass sizes larger than 16 bytes to this
reporting function by providing a fourth parameter that is only looked
at when the size provided is not in the original accepted range.
This does not change the behaviour where it is already being called,
since the previous definition only accepted sizes up to 16 bytes and
hence the change in behaviour is not seen by existing users.
The change in declaration does not matter, since the only existing use
is in the __hwasan_tag_mismatch function written in assembly.
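The exported entry point then looks roughly like this (a sketch; parameter names are illustrative):

```cpp
#include <stddef.h>
typedef unsigned long uptr;  // stand-in for sanitizer_common's uptr

// hwasan_interface_internal.h (sketch): a visible C-linkage entry point that
// GCC can call instead of emitting `brk` directly. When access_info encodes a
// size outside the original 1..16-byte range, the fourth argument carries the
// real access size; otherwise it is ignored.
extern "C" void __hwasan_tag_mismatch_stub(uptr addr, uptr access_info,
                                           uptr *registers_frame,
                                           size_t outsize);
```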
Reviewers: eugenis, kcc, pcc, #sanitizers
Reviewed By: eugenis, #sanitizers
Subscribers: kristof.beyls, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D69113
Patch by Matthew Malcomson <matthew.malcomson@arm.com>
Summary:
GCC would like to emit a function call to report a tag mismatch
rather than hard-code the `brk` instruction directly.
__hwasan_tag_mismatch_stub contains most of the functionality to do
this already, but requires exposure in the dynamic library.
This patch moves __hwasan_tag_mismatch_stub outside of the anonymous
namespace that it was defined in and declares it in
hwasan_interface_internal.h.
We also add the ability to pass sizes larger than 16 bytes to this
reporting function by providing a fourth parameter that is only looked
at when the size provided is not in the original accepted range.
This does not change the behaviour where it is already being called,
since the previous definition only accepted sizes up to 16 bytes and
hence the change in behaviour is not seen by existing users.
The change in declaration does not matter, since the only existing use
is in the __hwasan_tag_mismatch function written in assembly.
Tested with gcc and clang on an AArch64 vm.
Reviewers: eugenis, kcc, pcc, #sanitizers
Reviewed By: eugenis, #sanitizers
Subscribers: kristof.beyls, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D69113
Summary:
Until now, AArch64 development has been on patched kernels that have an
always-on relaxed syscall ABI where tagged pointers are accepted.
The patches that have gone into the mainline kernel rely on each process opting
in to this relaxed ABI.
This commit adds code to __hwasan_init to opt in to that ABI.
The idea has already been agreed with one of the hwasan developers
(http://lists.llvm.org/pipermail/llvm-dev/2019-September/135328.html).
The patch ignores `EINVAL` failures on Android, since there are older versions of the Android kernel that don't require this `prctl` or even have the relevant values; tolerating EINVAL lets the library run on them.
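The opt-in itself is the standard Linux interface; a sketch using the libc `prctl` wrapper for brevity (the runtime goes through its internal syscall helpers instead):

```cpp
#include <errno.h>
#include <sys/prctl.h>

#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL 55
#define PR_TAGGED_ADDR_ENABLE (1UL << 0)
#endif

// Ask the kernel to accept tagged pointers at the syscall boundary. On
// Android, EINVAL is tolerated because older kernels predate this prctl.
static bool EnableRelaxedTaggedABI(bool tolerate_einval) {
  if (prctl(PR_SET_TAGGED_ADDR_CTRL, PR_TAGGED_ADDR_ENABLE, 0, 0, 0) == 0)
    return true;
  return tolerate_einval && errno == EINVAL;
}
```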
I've tested this on an AArch64 VM running a kernel that requires this
prctl, having compiled both with clang and gcc.
Patch by Matthew Malcomson.
Reviewers: eugenis, kcc, pcc
Reviewed By: eugenis
Subscribers: srhines, kristof.beyls, #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D68794
llvm-svn: 375166
Summary:
I'm not aware of any platforms where this will work, but the code should at least compile.
HWASAN_WITH_INTERCEPTORS=OFF means there is magic in libc that would call __hwasan_thread_enter /
__hwasan_thread_exit as appropriate.
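Conceptually, a cooperating libc brackets each thread like this (the trampoline is hypothetical; only the two `__hwasan_*` entry points are real interfaces):

```cpp
// Sketch of what a cooperating libc does per thread when the runtime is built
// with HWASAN_WITH_INTERCEPTORS=OFF.
extern "C" void __hwasan_thread_enter();
extern "C" void __hwasan_thread_exit();

static void *LibcThreadTrampoline(void *(*fn)(void *), void *arg) {
  __hwasan_thread_enter();   // set up this thread's hwasan state
  void *result = fn(arg);
  __hwasan_thread_exit();    // tear it down before the thread finishes
  return result;
}
```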
Reviewers: pcc, winksaville
Subscribers: srhines, kubamracek, #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D61337
llvm-svn: 359914
Summary:
This change modifies the instrumentation to allow users to view the registers at the point at which the tag mismatch occurred. Most of the heavy lifting is done in the runtime library, where we save the registers to the stack and emit unwind information. This allows us to reduce the overhead, as very little additional work needs to be done in each __hwasan_check instance.
In this implementation, the fast path of __hwasan_check is unmodified. There are an additional 4 instructions (16B) emitted in the slow path in every __hwasan_check instance. This may increase binary size somewhat, but as most of the work is done in the runtime library, it's manageable.
The failure trace now contains a list of registers at the point at which the failure occurred, in a format similar to that of Android's tombstones. It currently has the following format:
Registers where the failure occurred (pc 0x0055555561b4):
x0 0000000000000014 x1 0000007ffffff6c0 x2 1100007ffffff6d0 x3 12000056ffffe025
x4 0000007fff800000 x5 0000000000000014 x6 0000007fff800000 x7 0000000000000001
x8 12000056ffffe020 x9 0200007700000000 x10 0200007700000000 x11 0000000000000000
x12 0000007fffffdde0 x13 0000000000000000 x14 02b65b01f7a97490 x15 0000000000000000
x16 0000007fb77376b8 x17 0000000000000012 x18 0000007fb7ed6000 x19 0000005555556078
x20 0000007ffffff768 x21 0000007ffffff778 x22 0000000000000001 x23 0000000000000000
x24 0000000000000000 x25 0000000000000000 x26 0000000000000000 x27 0000000000000000
x28 0000000000000000 x29 0000007ffffff6f0 x30 00000055555561b4
... and is printed after the dump of memory tags around the buggy address.
Every register is saved exactly as it was at the point where the tag mismatch occurs, with the exception of x16/x17. These registers are used in the tag mismatch calculation as scratch registers during __hwasan_check, and cannot be saved without affecting the fast path. As these registers are designated as scratch registers for linking, there should be no important information in them that could aid in debugging.
Reviewers: pcc, eugenis
Reviewed By: pcc, eugenis
Subscribers: srhines, kubamracek, mgorny, javed.absar, krytarowski, kristof.beyls, hiraditya, jdoerfert, llvm-commits, #sanitizers
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D58857
llvm-svn: 355738
Retrying without replacing call sites in sanitizer_common (which might
not have a symbol definition).
Add new Unwind API. This is the final envisioned API with the correct
abstraction level. It hides the slow/fast unwinder selection from the caller
and doesn't take any arguments that would leak that abstraction (i.e.,
arguments like stack_top/stack_bottom).
GetStackTrace will become an implementation detail (private method) of
the BufferedStackTrace class.
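The intended shape of the API, roughly (a sketch; the signature is approximate):

```cpp
typedef unsigned long uptr;
typedef unsigned int u32;

// Callers pass only what they naturally have; stack bounds and the fast/slow
// unwinder choice stay inside the class, and GetStackTrace becomes a private
// implementation detail.
struct BufferedStackTrace {
  void Unwind(uptr pc, uptr bp, void *context, bool request_fast,
              u32 max_depth = 256);
};
```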
Reviewers: vitalybuka
Differential Revision: https://reviews.llvm.org/D58741
> llvm-svn: 355168
llvm-svn: 355172
Add new Unwind API. This is the final envisioned API with the correct
abstraction level. It hides the slow/fast unwinder selection from the caller
and doesn't take any arguments that would leak that abstraction (i.e.,
arguments like stack_top/stack_bottom).
GetStackTrace will become an implementation detail (private method) of
the BufferedStackTrace class.
Reviewers: vitalybuka
Differential Revision: https://reviews.llvm.org/D58741
llvm-svn: 355168
As discussed elsewhere: LLVM uses cpp as its C++ source extension; the
sanitizers should too. This updates files in hwasan.
Patch generated by
for f in lib/hwasan/*.cc ; do svn mv $f ${f%.cc}.cpp; done
followed by
for f in lib/hwasan/*.cpp ; do sed -i '' -e '1s/\.cc -/.cpp /' $f; done
CMakeLists.txt updated manually.
Differential Revision: https://reviews.llvm.org/D58620
llvm-svn: 354989