A heap or global buffer that is far away from the faulting address is
unlikely to be the cause, especially if there is a potential
use-after-free as well, so we want to show it after the other
causes.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D104781
Similar to InitOptions in asan, we can use this optional struct for
initializing some members of thread objects before they are created. On
Linux, this is unused and can remain undefined. On Fuchsia, this will
just be the stack bounds.
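A rough sketch of what such an optional init struct could look like; the type, field, and function names below are hypothetical, not the actual hwasan declarations:
```
#include <cstdint>

// Hypothetical shape of the optional initialization data: Linux callers pass
// nothing, Fuchsia callers fill in the stack bounds they already know.
struct ThreadInitOptions {
  uintptr_t stack_bottom = 0;
  uintptr_t stack_top = 0;
};

// Hypothetical signature: a null pointer means "no platform-provided state".
void CreateCurrentThread(const ThreadInitOptions *options = nullptr);
```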
Differential Revision: https://reviews.llvm.org/D104553
Once D104553 lands, CreateCurrentThread will be able to accept optional
parameters for initializing the hwasan thread object. On Fuchsia, we can get
stack info in the platform-specific InitThreads and pass it through
CreateCurrentThread. On Linux, this is a no-op.
Differential Revision: https://reviews.llvm.org/D104561
Explain what the given stack trace means before showing it, rather than
only in the paragraph at the end.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D104523
This allows other implementations to define their own version of `Thread::Init`.
This will be the case for Fuchsia, where much of the thread initialization can be
broken up between different thread hooks (`__sanitizer_before_thread_create_hook`,
`__sanitizer_thread_create_hook`, `__sanitizer_thread_start_hook`). Namely, the
heap ring buffer and stack info can be set up before thread creation.
The stack ring buffer can also be set up before thread creation, but storing it into
`__hwasan_tls` can only be done in the thread start hook, since only then can we
access `__hwasan_tls` for that thread correctly.
Differential Revision: https://reviews.llvm.org/D104248
The default callback instrumentation in x86 LAM mode uses ASLR bits
to randomly choose a tag, and thus has a 1/64 chance of choosing a
stack tag of 0, causing stack tests to fail intermittently. By using
__hwasan_generate_tag to pick tags, we guarantee non-zero tags and
eliminate the test flakiness.
aarch64 doesn't seem to have this problem using thread-local addresses
to pick tags, so perhaps we can remove this workaround once we implement
a similar mechanism for LAM.
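A hedged sketch of the test-side idea: ask the runtime for a tag instead of relying on the randomly chosen stack tag, since `__hwasan_generate_tag` never returns zero. The `extern "C"` declarations below mirror the hwasan interface from memory and should be treated as assumptions.
```
#include <cstdio>

// Assumed prototypes (normally provided by the hwasan interface header).
extern "C" unsigned char __hwasan_generate_tag();
extern "C" void __hwasan_tag_memory(const volatile void *p, unsigned char tag,
                                    unsigned long size);
extern "C" void *__hwasan_tag_pointer(const volatile void *p,
                                      unsigned char tag);

int main() {
  alignas(16) static char buf[16];
  unsigned char tag = __hwasan_generate_tag();  // guaranteed non-zero
  __hwasan_tag_memory(buf, tag, sizeof(buf));   // tag the granule in shadow
  char *p = static_cast<char *>(__hwasan_tag_pointer(buf, tag));
  p[0] = 1;                                     // tags match: no report
  printf("tag=%u ptr=%p\n", tag, static_cast<void *>(p));
}
```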
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D104470
This is to fix build on Android. And we don't want to intercept more new/delete operators on Android.
Differential Revision: https://reviews.llvm.org/D104313
Before: ADDR is located -320 bytes to the right of 1072-byte region
After: ADDR is located 752 bytes inside 1072-byte region
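For illustration, the new wording is the same distance expressed relative to the region start rather than its end; a minimal sketch of the arithmetic (addresses are made up):
```
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
  uintptr_t region_beg = 0x1000, region_size = 1072;
  // "-320 bytes to the right" meant 320 bytes before the end of the region.
  uintptr_t addr = region_beg + region_size - 320;
  printf("%zu bytes inside %zu-byte region\n",
         static_cast<size_t>(addr - region_beg),
         static_cast<size_t>(region_size));  // prints: 752 bytes inside 1072-byte region
}
```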
Reviewed By: eugenis, walli99
Differential Revision: https://reviews.llvm.org/D104412
Short granule tags as poison cause a UaF check to read the referenced
memory to retrieve the tag, and mean we do not detect the UaF
if the last granule's tag is still around.
This only increases the chance of not catching a UaF from
0.39 % (1 / 256) to 0.42 % (1 / (256 - 17)).
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D104304
Similar to SHADOW_OFFSET in asan, we can use this for hwasan so that platforms that
use a constant value for the start of shadow memory can just use the constant
rather than access a global.
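A minimal sketch of the pattern; the macro name, fallback global, and types here are assumptions for illustration, not the exact hwasan definitions:
```
#if defined(HWASAN_SHADOW_OFFSET)
// Platforms with a fixed shadow base bake it in as a compile-time constant.
#  define SHADOW_OFFSET ((unsigned long)HWASAN_SHADOW_OFFSET)
#else
// Otherwise fall back to the global that is set up at runtime.
extern "C" unsigned long __hwasan_shadow_memory_dynamic_address;
#  define SHADOW_OFFSET (__hwasan_shadow_memory_dynamic_address)
#endif
```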
Differential Revision: https://reviews.llvm.org/D104275
This removes the `__sanitizer_*` allocation function definitions from
`hwasan_interceptors.cpp` and moves them into their own file. This way
implementations that do not use interceptors at all can just ignore
(almost) everything in `hwasan_interceptors.cpp`.
Also remove some unused headers in `hwasan_interceptors.cpp` after the move.
Differential Revision: https://reviews.llvm.org/D103564
This moves the implementations for HandleTagMismatch, __hwasan_tag_mismatch4,
and HwasanAtExit from hwasan_linux.cpp to hwasan.cpp and declares them in hwasan.h.
This way, calls to those functions can be shared with the Fuchsia implementation
without duplicating code.
Differential Revision: https://reviews.llvm.org/D103562
Since we have both aliasing mode and Intel LAM on x86_64, we need to
choose the mode at either run time or compile time. This patch
implements the plumbing to build both and choose between them at
compile time.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D102286
We have a significant amount of duplication around
the CheckFailed functionality. Each sanitizer copy-pasted
a chunk of code, and some got random improvements, like
dealing with recursive failures better. These improvements
could benefit all sanitizers, but they don't.
Deduplicate CheckFailed logic across sanitizers and let each
sanitizer only print the current stack trace.
I've tried to dedup stack printing as well,
but this got me into cmake hell. So let's keep this part
duplicated in each sanitizer for now.
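A standalone toy model (not the actual sanitizer_common code) of the resulting split: one shared CheckFailed implementation with recursion handling, plus a per-sanitizer callback that only prints the current stack trace.
```
#include <cstdio>
#include <cstdlib>

static void (*check_unwind_callback)() = nullptr;

// Each sanitizer registers a callback that knows how to print its stack trace.
void SetCheckUnwindCallback(void (*cb)()) { check_unwind_callback = cb; }

// Shared CheckFailed logic: report once, guard against recursive failures,
// then let the registered sanitizer callback print the stack.
void CheckFailed(const char *file, int line, const char *cond) {
  static int recursion_depth = 0;
  if (++recursion_depth > 2)
    abort();  // a CHECK failed while reporting a CHECK failure: give up
  fprintf(stderr, "%s:%d: CHECK failed: %s\n", file, line, cond);
  if (check_unwind_callback)
    check_unwind_callback();
  abort();
}
```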
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D102221
The problem was introduced in D100348.
It's really hard to trigger the bug in a stress test - the race is just too
narrow - but the new checks in Thread::Init should at least provide a usable
diagnostic if the problem ever returns.
Differential Revision: https://reviews.llvm.org/D101881
Code patterns like this are common: `#` at the beginning of the line
(https://google.github.io/styleguide/cppguide.html#Preprocessor_Directives),
with one-space indentation for nested if/elif/else directives.
```
#if SANITIZER_LINUX
# if defined(__aarch64__)
# endif
#endif
```
However, currently clang-format wants to reformat the code to
```
#if SANITIZER_LINUX
#if defined(__aarch64__)
#endif
#endif
```
This significantly harms readability in reviews. Use `IndentPPDirectives:
AfterHash` to get closer to the desired style. clang-format will now suggest:
```
#if SANITIZER_LINUX
#  if defined(__aarch64__)
#  endif
#endif
```
Unfortunately there is no clang-format option to use an indent of 1 for
just preprocessor directives. However, this is still one step forward
from the current behavior.
Reviewed By: #sanitizers, vitalybuka
Differential Revision: https://reviews.llvm.org/D100238
Do not hold the free/live thread list lock longer than necessary.
This change speeds up the following benchmark 10x.
```
#include <thread>
#include <vector>

constexpr int kTopThreads = 50;
constexpr int kChildThreads = 20;
constexpr int kChildIterations = 8;

void Thread() {
  for (int i = 0; i < kChildIterations; ++i) {
    std::vector<std::thread> threads;
    for (int i = 0; i < kChildThreads; ++i)
      threads.emplace_back([](){});
    for (auto& t : threads)
      t.join();
  }
}

int main() {
  std::vector<std::thread> threads;
  for (int i = 0; i < kTopThreads; ++i)
    threads.emplace_back(Thread);
  for (auto& t : threads)
    t.join();
}
```
Differential Revision: https://reviews.llvm.org/D100348
This was reverted by f176803ef1 due to
Ubuntu 16.04 x86-64 glibc 2.23 problems.
This commit additionally calls `__tls_get_addr({modid,0})` to work around the
dlpi_tls_data==NULL issues for glibc<2.25
(https://sourceware.org/bugzilla/show_bug.cgi?id=19826)
GetTls is the range of
* thread control block and optional TLS_PRE_TCB_SIZE
* static TLS blocks plus static TLS surplus
On glibc, lsan requires the range to include
`pthread::{specific_1stblock,specific}` so that allocations only referenced by
`pthread_setspecific` can be scanned.
This patch uses `dl_iterate_phdr` to collect TLS blocks. Find the one
with `dlpi_tls_modid==1`, which is one of the initially loaded modules, then find
consecutive ranges. The boundaries give us the addr and size.
This allows us to drop the glibc internal `_dl_get_tls_static_info` and
`InitTlsSize` entirely. Use the simplified method with non-Android Linux for
now, but in theory this can be used with *BSD and potentially other ELF OSes.
This simplification enables D99566 for TLS Variant I architectures.
See https://reviews.llvm.org/D93972#2480556 for analysis on GetTls usage
across various sanitizers.
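A minimal standalone sketch of the `dl_iterate_phdr` lookup described above (just the module-finding part, not lsan's range math):
```
#include <link.h>
#include <cstddef>
#include <cstdio>

static int FindMainExecutableTls(struct dl_phdr_info *info, size_t, void *) {
  if (info->dlpi_tls_modid == 1) {
    // The main executable. dlpi_tls_data can still be NULL on glibc < 2.25
    // until the block is materialized, which is what the __tls_get_addr
    // workaround in this commit addresses.
    printf("static TLS block of the main executable: %p\n", info->dlpi_tls_data);
    return 1;  // stop iterating
  }
  return 0;    // keep looking
}

int main() { dl_iterate_phdr(FindMainExecutableTls, nullptr); }
```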
Differential Revision: https://reviews.llvm.org/D98926
GetTls is the range of
* thread control block and optional TLS_PRE_TCB_SIZE
* static TLS blocks plus static TLS surplus
On glibc, lsan requires the range to include
`pthread::{specific_1stblock,specific}` so that allocations only referenced by
`pthread_setspecific` can be scanned.
This patch uses `dl_iterate_phdr` to collect TLS ranges. Find the one
with `dlpi_tls_modid==1`, which is one of the initially loaded modules, then find
consecutive ranges. The boundaries give us the addr and size.
This allows us to drop the glibc internal `_dl_get_tls_static_info` and
`InitTlsSize` entirely. Use the simplified method with non-Android Linux for
now, but in theory this can be used with *BSD and potentially other ELF OSes.
In the future, we can move `ThreadDescriptorSize` code to lsan (and consider
intercepting `pthread_setspecific`) to avoid hacks in generic code.
See https://reviews.llvm.org/D93972#2480556 for analysis on GetTls usage
across various sanitizers.
Differential Revision: https://reviews.llvm.org/D98926
Userspace page aliasing allows us to use middle pointer bits for tags
without untagging them before syscalls or accesses. This should enable
easier experimentation with HWASan on x86_64 platforms.
Currently stack, global, and secondary heap tagging are unsupported.
Only primary heap allocations get tagged.
Note that aliasing mode will not work properly in the presence of
fork(), since heap memory will be shared between the parent and child
processes. This mode is non-ideal; we expect Intel LAM to enable full
HWASan support on x86_64 in the future.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98875
x86_64 aliasing mode will use fewer than 8 bits for tags, so refactor
existing code to remove hard-coded 0xff and 8 values.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98072
InternalScopedString uses InternalMmapVector internally
so it can be resized dynamically as needed.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D98751
When adding this function in https://reviews.llvm.org/D68794 I did not
notice that internal_prctl has the API of the prctl syscall rather
than the API of the glibc (POSIX) wrapper.
This means that the error return value is not necessarily -1 and that
errno is not set by the call.
For InitPrctl this means that the checks do not catch running on a
kernel *without* the required ABI (this was not caught earlier because I only
tested that this function correctly enables the ABI when it exists).
This commit updates the two calls which check for an error condition to
use internal_iserror. That function sets a provided integer to an
equivalent errno value and returns a boolean to indicate success or not.
Tested by running on a kernel that has this ABI and on one that does
not. Verified that running on the kernel without this ABI the current
code prints the provided error message and does not attempt to run the
program. Verified that running on the kernel with this ABI the current
code does not print an error message and turns on the ABI.
This was done on an x86 kernel (where the ABI does not exist), an AArch64
kernel without this ABI, and an AArch64 kernel with this ABI.
In order to keep running the testsuite on kernels that do not provide
this new ABI we add another option to the HWASAN_OPTIONS environment
variable, this option determines whether the library kills the process
if it fails to enable the relaxed syscall ABI or not.
This new flag is `fail_without_syscall_abi`.
The check-hwasan testsuite results do not change with this patch on
x86, on AArch64 without a kernel supporting this ABI, or on AArch64
with a kernel supporting this ABI.
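A standalone sketch of the error-checking pattern: `internal_prctl`/`internal_iserror` are sanitizer-internal, so the toy helpers below only model their behaviour (a raw syscall-style return that encodes -errno instead of setting errno).
```
#include <cstdio>

// Models internal_iserror: raw syscall returns encode errors as -errno.
static bool IsRawSyscallError(long retval, int *rverrno) {
  if (retval < 0 && retval >= -4095) {  // Linux convention for raw returns
    if (rverrno) *rverrno = static_cast<int>(-retval);
    return true;
  }
  return false;
}

// Stands in for internal_prctl(PR_SET_TAGGED_ADDR_CTRL, ...).
static long RawPrctlStub(bool kernel_has_abi) {
  return kernel_has_abi ? 0 : -22;  // -EINVAL when the ABI is missing
}

int main() {
  int err = 0;
  long res = RawPrctlStub(/*kernel_has_abi=*/false);
  if (IsRawSyscallError(res, &err))
    fprintf(stderr, "relaxed syscall ABI unavailable (errno=%d)\n", err);
  else
    puts("tagged address ABI enabled");
}
```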
Differential Revision: https://reviews.llvm.org/D96964
When adding this function in https://reviews.llvm.org/D68794 I did not
notice that internal_prctl has the API of the prctl syscall rather
than the API of the glibc (POSIX) wrapper.
This means that the error return value is not necessarily -1 and that
errno is not set by the call.
For InitPrctl this means that the checks do not catch running on a
kernel *without* the required ABI (this was not caught earlier because I only
tested that this function correctly enables the ABI when it exists).
This commit updates the two calls which check for an error condition to
use `internal_iserror`. That function sets a provided integer to an
equivalent errno value and returns a boolean to indicate success or not.
Tested by running on a kernel that has this ABI and on one that does
not. Verified that running on the kernel without this ABI the current
code prints the provided error message and does not attempt to run the
program. Verified that running on the kernel with this ABI the current
code does not print an error message and turns on the ABI.
All tests were done on an AArch64 Linux machine.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D94425
Similar to __asan_set_error_report_callback, pass the entire report to a
user provided callback function.
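A hedged usage sketch; the prototype below mirrors the asan counterpart and is an assumption about the exact hwasan declaration.
```
#include <cstdio>

// Assumed prototype, analogous to __asan_set_error_report_callback.
extern "C" void __hwasan_set_error_report_callback(void (*callback)(const char *));

static void SaveReport(const char *report) {
  // Receives the entire hwasan report text; e.g. stash it for a crash uploader.
  fputs(report, stderr);
}

int main() {
  __hwasan_set_error_report_callback(SaveReport);
  // ... if hwasan reports an error later, SaveReport sees the full report ...
}
```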
Differential Revision: https://reviews.llvm.org/D91825
HwasanThreadList::DontNeedThread clobbers Thread::next_,
breaking the freelist. As a result, only the top of the freelist ever
gets reused, and the rest of it is lost.
Since the Thread object with its associated ring buffer is only 8Kb, this is
typically only noticeable in long running processes, such as fuzzers.
Fix the problem by switching from an intrusive linked list to a vector.
Differential Revision: https://reviews.llvm.org/D91392
In `GetGlobalSizeFromDescriptor` we use `dladdr` to get info on the
current address. `dladdr` returns 0 if it fails.
During testing on Linux this returned 0 to indicate failure and
populated the `info` structure with a NULL pointer, which was
dereferenced later.
This patch checks for `dladdr` returning 0, and in that case returns 0
from `GetGlobalSizeFromDescriptor` to indicate failure of identifying
the address.
This occurs when `GetModuleNameAndOffsetForPC` succeeds for some address
not in a dynamically loaded library. One example is when the found
"module" is '[stack]' having come from parsing /proc/self/maps.
Differential Revision: https://reviews.llvm.org/D91344
HwasanThreadList::DontNeedThread clobbers Thread::next_, breaking the
freelist. As a result, only the top of the freelist ever gets reused,
and the rest of it is lost.
Since the Thread object with its associated ring buffer is only 8Kb, this is
typically only noticeable in long running processes, such as fuzzers.
Fix the problem by switching from an intrusive linked list to a vector.
Differential Revision: https://reviews.llvm.org/D91208
It turns out that we can't remove the operator new and delete
interceptors on Android without breaking ABI, so bring them back
as forwards to the malloc and free functions.
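A hedged sketch of the shape of such forwards; the real interceptors go through the hwasan allocator entry points rather than libc directly:
```
#include <cstdlib>
#include <new>

// Forward the replaceable global operators to malloc/free so the ABI symbols
// exist, without pulling in the full interceptor machinery.
void *operator new(std::size_t size) {
  if (void *p = malloc(size))
    return p;
  throw std::bad_alloc();
}
void operator delete(void *ptr) noexcept { free(ptr); }
```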
Differential Revision: https://reviews.llvm.org/D91219
While sanitizers don't use the C++ standard library, we could still end
up accidentally including or linking it just by virtue of using
the C++ compiler. Pass -nostdinc++ and -nostdlib++ to avoid these
accidental dependencies.
Differential Revision: https://reviews.llvm.org/D88922
Adds a check to avoid symbolization when printing stack traces if the
stack_trace_format flag does not need it. While there is a symbolize
flag that can be turned off to skip some of the symbolization,
SymbolizePC() still unconditionally looks up the module name and offset.
Avoid invoking SymbolizePC() at all if not needed.
This is an efficiency improvement when dumping all stack traces as part
of the memory profiler in D87120, for large stripped apps where we want
to symbolize as a post pass.
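A standalone sketch of the idea (the real sanitizer_common helper has its own name and exact set of conversions; the ones below are assumptions): scan the format string and only symbolize when a conversion actually needs symbol information.
```
#include <cstring>

bool FormatNeedsSymbolization(const char *fmt) {
  // Assume %f (function), %s (source file), %l (line) and %c (column) need
  // symbolization, while frame number, PC and module+offset do not.
  for (const char *p = std::strchr(fmt, '%'); p; p = std::strchr(p + 1, '%')) {
    char c = p[1];
    if (c == 'f' || c == 's' || c == 'l' || c == 'c')
      return true;
  }
  return false;
}
```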
Differential Revision: https://reviews.llvm.org/D88361
D28596 added SANITIZER_INTERFACE_WEAK_DEF which can guarantee `*_default_options` are always defined.
The weak attributes on the `__{asan,lsan,msan,ubsan}_default_options` declarations can thus be removed.
`MaybeCall*DefaultOptions` no longer need nullptr checks, so their call sites can just be replaced by `__*_default_options`.
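For reference, a standalone sketch of the weak-default pattern; the runtime uses the SANITIZER_INTERFACE_WEAK_DEF macro, and plain compiler attributes stand in for it here:
```
// Always-present weak default: callers can use __hwasan_default_options()
// directly, with no nullptr check.
extern "C" __attribute__((weak)) const char *__hwasan_default_options() {
  return "";
}

// A user overrides it by providing a strong definition in their own code:
//   extern "C" const char *__hwasan_default_options() { return "verbosity=1"; }
```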
Reviewed By: #sanitizers, vitalybuka
Differential Revision: https://reviews.llvm.org/D87175
Summary:
This refactors some common support related to shadow memory setup from
asan and hwasan into sanitizer_common. This should not only reduce code
duplication but also make these facilities available for new compiler-rt
uses (e.g. heap profiling).
In most cases the separate copies of the code were either identical, or
at least functionally identical. A few notes:
In ProtectGap, the asan version checked the address against an upper
bound (kZeroBaseMaxShadowStart, which is 2^18). I have created a copy
of kZeroBaseMaxShadowStart in hwasan_mapping.h with the same value, as
it isn't clear why that code should not do the same check. If it
shouldn't, I can remove this and guard the check so that it only
happens for asan.
In asan's InitializeShadowMemory, in the dynamic shadow case it was
setting __asan_shadow_memory_dynamic_address to 0 (which then sets both
macro SHADOW_OFFSET as well as macro kLowShadowBeg to 0) before calling
FindDynamicShadowStart(). AFAICT this is only needed because
FindDynamicShadowStart utilizes kHighShadowEnd to
get the shadow size, and kHighShadowEnd is a macro invoking
MEM_TO_SHADOW(kHighMemEnd) which in turn invokes:
(((kHighMemEnd) >> SHADOW_SCALE) + (SHADOW_OFFSET))
I.e. it computes the shadow space needed by kHighMemEnd (the shift), and
adds the offset. Since we only want the shadow space here, the earlier
setting of SHADOW_OFFSET to 0 via __asan_shadow_memory_dynamic_address
accomplishes this. In the hwasan version, it simply gets the shadow
space via "MemToShadowSize(kHighMemEnd)", where MemToShadowSize just
does the shift. I've simplified the asan handling to do the same
thing, and therefore was able to remove the setting of the SHADOW_OFFSET
via __asan_shadow_memory_dynamic_address to 0.
Reviewers: vitalybuka, kcc, eugenis
Subscribers: dberris, #sanitizers, llvm-commits, davidxl
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D83247
Summary: Refactor the current global header iteration to be callback-based, and add a feature that reports the size of the global variable during reporting. This allows binaries without symbols to still report the size of the global variable, which is always available in the HWASan globals PT_NOTE metadata.
Reviewers: eugenis, pcc
Reviewed By: pcc
Subscribers: mgorny, llvm-commits, #sanitizers
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D80599
Summary: Non-zero malloc fill is causing way too many hard to debug issues.
Reviewers: kcc, pcc, hctim
Subscribers: #sanitizers, llvm-commits
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D81284
Summary:
This is necessary to handle calls to free() after __hwasan_thread_exit,
which is possible in glibc.
Also, add a null check to GetCurrentThread, otherwise the logic in
GetThreadByBufferAddress turns it into a non-null value. This means that
all of the checks for GetCurrentThread() != nullptr do not have any
effect at all right now!
Reviewers: pcc, hctim
Subscribers: #sanitizers, llvm-commits
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D79608
Summary:
Fix parsing of mangled stack trace lines where the address has been
replaced with "0x", literally.
Reviewers: vitalybuka
Subscribers: #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D73823
Summary:
A script to symbolize hwasan reports after the fact using unstripped
binaries. Supports stack-based reports. Requires llvm-symbolizer
(addr2line is not an option).
Reviewers: pcc, hctim
Subscribers: mgorny, #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D71148
Shadow memory (and short granules) are no longer prepended with the memory
address, and the arrow at the end of the line is removed.
Differential Revision: https://reviews.llvm.org/D70707
This was an experiment made possible by a non-standard feature of the
Android dynamic loader.
It required introducing a flag to tell the compiler which ABI was being
targeted.
This flag is no longer needed, since the generated code now works for
both ABIs.
We leave the flag untouched for backwards compatibility. This also
means that if we need to distinguish between targeted ABIs again,
we can do so without disturbing any existing workflows.
We leave a comment in the source code and a mention in the help text to
explain this for anyone reading the code in the future.
Patch by Matthew Malcomson
Differential Revision: https://reviews.llvm.org/D69574
Summary:
The hwasan interceptor ABI doesn't have interceptors for longjmp and setjmp.
This patch introduces them.
We require the size of the jmp_buf on the platform to be at least as large as
the jmp_buf in our implementation. To enforce this we compile
hwasan_type_test.cpp, which ensures a compile-time failure if this is not true.
Tested on both GCC and clang using an AArch64 virtual machine.
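A hedged sketch of the kind of compile-time check described; the internal buffer type below is hypothetical, not hwasan's actual layout:
```
#include <setjmp.h>

// Hypothetical stand-in for the runtime's internal jmp_buf layout.
struct HwJmpBufSketch {
  unsigned long regs[16];
};

static_assert(sizeof(jmp_buf) >= sizeof(HwJmpBufSketch),
              "platform jmp_buf must be at least as large as the hwasan one");
```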
Reviewers: eugenis, kcc, pcc, Sanitizers
Reviewed By: eugenis, Sanitizers
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D69045
Patch By: Matthew Malcomson <matthew.malcomson@arm.com>
Summary:
GCC would like to emit a function call to report a tag mismatch
rather than hard-code the `brk` instruction directly.
__hwasan_tag_mismatch_stub contains most of the functionality to do
this already, but requires exposure in the dynamic library.
This patch moves __hwasan_tag_mismatch_stub outside of the anonymous
namespace that it was defined in and declares it in
hwasan_interface_internal.h.
We also add the ability to pass sizes larger than 16 bytes to this
reporting function by providing a fourth parameter that is only looked
at when the size provided is not in the original accepted range.
This does not change the behaviour where it is already being called,
since the previous definition only accepted sizes up to 16 bytes and
hence the change in behaviour is not seen by existing users.
The change in declaration does not matter, since the only existing use
is in the __hwasan_tag_mismatch function written in assembly.
Reviewers: eugenis, kcc, pcc, #sanitizers
Reviewed By: eugenis, #sanitizers
Subscribers: kristof.beyls, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D69113
Patch by Matthew Malcomson <matthew.malcomson@arm.com>
Summary:
GCC would like to emit a function call to report a tag mismatch
rather than hard-code the `brk` instruction directly.
__hwasan_tag_mismatch_stub contains most of the functionality to do
this already, but requires exposure in the dynamic library.
This patch moves __hwasan_tag_mismatch_stub outside of the anonymous
namespace that it was defined in and declares it in
hwasan_interface_internal.h.
We also add the ability to pass sizes larger than 16 bytes to this
reporting function by providing a fourth parameter that is only looked
at when the size provided is not in the original accepted range.
This does not change the behaviour where it is already being called,
since the previous definition only accepted sizes up to 16 bytes and
hence the change in behaviour is not seen by existing users.
The change in declaration does not matter, since the only existing use
is in the __hwasan_tag_mismatch function written in assembly.
Tested with gcc and clang on an AArch64 vm.
Reviewers: eugenis, kcc, pcc, #sanitizers
Reviewed By: eugenis, #sanitizers
Subscribers: kristof.beyls, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D69113
Summary:
This has been an experiment with late malloc interposition, made
possible by a non-standard feature of the Android dynamic loader.
Reviewers: pcc, mmalcomson
Subscribers: srhines, #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D69199
llvm-svn: 375296
Summary:
Until now, AArch64 development has been on patched kernels that have an
always-on relaxed syscall ABI where tagged pointers are accepted.
The patches that have gone into the mainline kernel rely on each process opting
in to this relaxed ABI.
This commit adds code to __hwasan_init that opts in to that ABI.
The idea has already been agreed with one of the hwasan developers
(http://lists.llvm.org/pipermail/llvm-dev/2019-September/135328.html).
The patch ignores `EINVAL` failures on Android, since there are older versions of the Android kernel that don't require this `prctl` or even have the relevant values; ignoring EINVAL lets the library run on them.
I've tested this on an AArch64 VM running a kernel that requires this
prctl, having compiled both with clang and gcc.
Patch by Matthew Malcomson.
Reviewers: eugenis, kcc, pcc
Reviewed By: eugenis
Subscribers: srhines, kristof.beyls, #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D68794
llvm-svn: 375166
We can't use short granules with stack instrumentation when targeting older
API levels because the rest of the system won't understand the short granule
tags stored in shadow memory.
Moreover, we need to be able to let old binaries (which won't understand
short granule tags) run on a new system that supports short granule
tags. Such binaries will call the __hwasan_tag_mismatch function when their
outlined checks fail. We can compensate for the binary's lack of support
for short granules by implementing the short granule part of the check in
the __hwasan_tag_mismatch function. Unfortunately we can't do anything about
inline checks, but I don't believe that we can generate these by default on
aarch64, nor did we do so when the ABI was fixed.
A new function, __hwasan_tag_mismatch_v2, is introduced that lets code
targeting the new runtime avoid redoing the short granule check. Because tag
mismatches are rare this isn't important from a performance perspective; the
main benefit is that it introduces a symbol dependency that prevents binaries
targeting the new runtime from running on older (i.e. incompatible) runtimes.
Differential Revision: https://reviews.llvm.org/D68059
llvm-svn: 373035
There is no requirement for the producer of a note to include the note
alignment in these fields. As a result we can end up missing the HWASAN note
if one of the other notes in the binary has the alignment missing.
Differential Revision: https://reviews.llvm.org/D66692
llvm-svn: 369826
One problem with untagging memory in landing pads is that it only works
correctly if the function that catches the exception is instrumented.
If the function is uninstrumented, we have no opportunity to untag the
memory.
To address this, replace landing pad instrumentation with personality function
wrapping. Each function with an instrumented stack has its personality function
replaced with a wrapper provided by the runtime. Functions that did not have
a personality function to begin with also get wrappers if they may be unwound
past. As the unwinder calls personality functions during stack unwinding,
the original personality function is called and the function's stack frame is
untagged by the wrapper if the personality function instructs the unwinder
to keep unwinding. If unwinding stops at a landing pad, the function is
still responsible for untagging its stack frame if it resumes unwinding.
The old landing pad mechanism is preserved for compatibility with old runtimes.
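A standalone toy model (not the real runtime wrapper, whose signature has extra parameters) of the wrapping scheme: call the original personality routine, and if it tells the unwinder to continue past this frame, untag the frame's stack first.
```
#include <unwind.h>
#include <cstdint>

typedef _Unwind_Reason_Code (*PersonalityFn)(int, _Unwind_Action, uint64_t,
                                             _Unwind_Exception *,
                                             _Unwind_Context *);

static void UntagFrameStack(_Unwind_Context *context) {
  // Would reset the shadow tags for this frame's stack range.
  (void)context;
}

extern "C" _Unwind_Reason_Code PersonalityWrapperSketch(
    int version, _Unwind_Action actions, uint64_t exception_class,
    _Unwind_Exception *exception_object, _Unwind_Context *context,
    PersonalityFn real_personality) {
  _Unwind_Reason_Code rc =
      real_personality ? real_personality(version, actions, exception_class,
                                          exception_object, context)
                       : _URC_CONTINUE_UNWIND;
  if (rc == _URC_CONTINUE_UNWIND)
    UntagFrameStack(context);  // this frame is being unwound past: untag it
  return rc;
}
```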
Differential Revision: https://reviews.llvm.org/D66377
llvm-svn: 369721
See D65364 for the code model requirements for tagged globals. Because
of the relocations used these requirements cannot be checked at link
time so they must be checked at runtime.
Differential Revision: https://reviews.llvm.org/D65968
llvm-svn: 368351
Globals are instrumented by adding a pointer tag to their symbol values
and emitting metadata into a special section that allows the runtime to tag
their memory when the library is loaded.
Due to order of initialization issues explained in more detail in the comments,
shadow initialization cannot happen during regular global initialization.
Instead, the location of the global section is marked using an ELF note,
and we require libc support for calling a function provided by the HWASAN
runtime when libraries are loaded and unloaded.
Based on ideas discussed with @evgeny777 in D56672.
Differential Revision: https://reviews.llvm.org/D65770
llvm-svn: 368102
A short granule is a granule of size between 1 and `TG-1` bytes. The size
of a short granule is stored at the location in shadow memory where the
granule's tag is normally stored, while the granule's actual tag is stored
in the last byte of the granule. This means that in order to verify that a
pointer tag matches a memory tag, HWASAN must check for two possibilities:
* the pointer tag is equal to the memory tag in shadow memory, or
* the shadow memory tag is actually a short granule size, the value being loaded
is in bounds of the granule and the pointer tag is equal to the last byte of
the granule.
Pointer tags between 1 and `TG-1` are possible and are as likely as any other
tag. This means that these tags in memory have two interpretations: the full
tag interpretation (where the pointer tag is between 1 and `TG-1` and the
last byte of the granule is ordinary data) and the short tag interpretation
(where the pointer tag is stored in the granule).
When HWASAN detects an error near a memory tag between 1 and `TG-1`, it
will show both the memory tag and the last byte of the granule. Currently,
it is up to the user to disambiguate the two possibilities.
Because this functionality obsoletes the right aligned heap feature of
the HWASAN memory allocator (and because we can no longer easily test
it), the feature is removed.
Also update the documentation to cover both short granule tags and
outlined checks.
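A standalone sketch (not the runtime's actual check) of the two possibilities described above, assuming 16-byte granules:
```
#include <cstddef>
#include <cstdint>

constexpr uintptr_t kGranuleSize = 16;  // TG in the description above

bool TagsMatch(uintptr_t addr, size_t access_size, uint8_t ptr_tag,
               uint8_t shadow_value) {
  if (ptr_tag == shadow_value)
    return true;                        // ordinary whole-granule tag match
  if (shadow_value == 0 || shadow_value >= kGranuleSize)
    return false;                       // not a short granule size: mismatch
  // Short granule: shadow_value is the number of valid bytes in the granule,
  // and the granule's real tag lives in its last byte.
  uintptr_t granule_beg = addr & ~(kGranuleSize - 1);
  if (addr - granule_beg + access_size > shadow_value)
    return false;                       // access goes past the valid bytes
  uint8_t stored_tag =
      *reinterpret_cast<const uint8_t *>(granule_beg + kGranuleSize - 1);
  return ptr_tag == stored_tag;
}
```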
Differential Revision: https://reviews.llvm.org/D63908
llvm-svn: 365551
Each function's PC is recorded in the ring buffer. From there we can access
the function's local variables and reconstruct the tag of each one with the
help of the information printed by llvm-symbolizer's new FRAME command. We
can then find the variable that was likely being accessed by matching the
pointer's tag against the reconstructed tag.
Differential Revision: https://reviews.llvm.org/D63469
llvm-svn: 364607
This saves roughly 32 bytes of instructions per function with stack objects
and causes us to preserve enough information that we can recover the original
tags of all stack variables.
Now that stack tags are deterministic, we no longer need to pass
-hwasan-generate-tags-with-calls during check-hwasan. This also means that
the new stack tag generation mechanism is exercised by check-hwasan.
Differential Revision: https://reviews.llvm.org/D63360
llvm-svn: 363636
This allows instrumenting programs which have their own
versions of new and delete operators.
Differential revision: https://reviews.llvm.org/D62794
llvm-svn: 362478
- Several "warning: extra ';' [-Wpedantic]"
- One "C++ style comments are not allowed in ISO C90 [enabled by default]"
in a file that uses C style comments everywhere but in one place
llvm-svn: 360430
Summary:
I'm not aware of any platforms where this will work, but the code should at least compile.
HWASAN_WITH_INTERCEPTORS=OFF means there is magic in libc that would call __hwasan_thread_enter /
__hwasan_thread_exit as appropriate.
Reviewers: pcc, winksaville
Subscribers: srhines, kubamracek, #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D61337
llvm-svn: 359914