Summary:
This patch aims at condensing the hardware CRC32 feature detection and making
it slightly more effective on Android.
The following changes are included:
- remove the `CPUFeature` enum, and get rid of one level of nesting of
functions: we only used CRC32, so we just implement and use
`hasHardwareCRC32`;
- allow for a weak `getauxval`: the Android toolchain is compiled at API level
14 for Android ARM, meaning no `getauxval` at compile time, yet we will run
on API level 27+ devices. The `/proc/self/auxv` fallback can work but is
worthless for a process like `init` where the proc filesystem doesn't exist
yet. If the weak `getauxval` cannot be resolved at runtime, then fall back to
`/proc/self/auxv` (a sketch of the approach follows this list);
- a couple of extra corrections.
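For illustration, a minimal sketch of the weak-`getauxval` approach with the
`/proc/self/auxv` fallback; `ReadAuxValFromProcfs` and the overall structure
are simplified assumptions, not the actual Scudo code:
```
#include <fcntl.h>
#include <unistd.h>

#ifndef AT_HWCAP
#define AT_HWCAP 16  // standard ELF auxiliary vector tag for hardware capabilities
#endif

// Declared weak: the binary links even when built against API level 14 headers,
// and the symbol resolves at runtime on API level 18+ devices.
extern "C" __attribute__((weak)) unsigned long getauxval(unsigned long Type);

// Simplified /proc/self/auxv reader used as the fallback path (useless for a
// process like init, where procfs isn't mounted yet).
static unsigned long ReadAuxValFromProcfs(unsigned long Type) {
  unsigned long Entry[2];
  unsigned long Value = 0;
  const int Fd = open("/proc/self/auxv", O_RDONLY);
  if (Fd < 0)
    return 0;
  while (read(Fd, Entry, sizeof(Entry)) == sizeof(Entry)) {
    if (Entry[0] == Type) {
      Value = Entry[1];
      break;
    }
  }
  close(Fd);
  return Value;
}

unsigned long GetHardwareCapabilities() {
  if (&getauxval)                       // weak symbol resolved at runtime?
    return getauxval(AT_HWCAP);
  return ReadAuxValFromProcfs(AT_HWCAP);
}
```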
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: kubamracek, aemerson, srhines, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D40322
llvm-svn: 318859
Summary:
This change allows Fuchsia to boot properly using the Scudo allocator.
A first version of this commit was reverted by rL317834 because it broke Android
builds for toolchains generated with older NDKs. This commit introduces a
fallback to solve that issue.
Reviewers: cryptoad, krytarowski, rnk, alekseyshl
Reviewed By: cryptoad, krytarowski, alekseyshl
Subscribers: llvm-commits, srhines, kubamracek, krytarowski
Differential Revision: https://reviews.llvm.org/D40121
llvm-svn: 318802
Summary:
This implements an opportunistic check for the RSS limit.
For ASan, this was implemented thanks to a background thread checking the
current RSS vs the set limit every 100ms. This was deemed problematic for Scudo
due to potential Android concerns (Zygote as pointed out by Aleksey) as well as
the general inconvenience of having a permanent background thread.
If a limit (soft or hard) is specified, we will attempt to update the RSS limit
status (exceeded or not) every 100ms. This is done in an opportunistic way: if
we can update it, we do it; if not, we return the current status, mostly because
we don't need it to be fully consistent (it's done every 100ms anyway). If the
limit is exceeded `allocate` will act as if OOM for a soft limit, or just die
for a hard limit.
We use the `common_flags()`'s `hard_rss_limit_mb` & `soft_rss_limit_mb` for
configuration of the limits.
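A minimal sketch of that opportunistic refresh, using `std::atomic` and
hypothetical helpers (`getMonotonicTimeNs`, `getCurrentRssBytes`) in place of
the sanitizer primitives:
```
#include <atomic>
#include <cstdint>

uint64_t getMonotonicTimeNs();   // hypothetical: current monotonic time
uint64_t getCurrentRssBytes();   // hypothetical: current RSS of the process

std::atomic<uint64_t> RssNextCheckNs{0};
std::atomic<bool> RssLimitExceeded{false};

// Called from the allocation path: at most one caller every 100ms refreshes
// the cached status; everyone else just reads whatever is cached.
bool isRssLimitExceeded(uint64_t LimitBytes) {
  constexpr uint64_t CheckIntervalNs = 100ull * 1000 * 1000;
  const uint64_t Now = getMonotonicTimeNs();
  uint64_t Next = RssNextCheckNs.load(std::memory_order_relaxed);
  if (Now >= Next &&
      RssNextCheckNs.compare_exchange_strong(Next, Now + CheckIntervalNs,
                                             std::memory_order_relaxed))
    RssLimitExceeded.store(getCurrentRssBytes() > LimitBytes,
                           std::memory_order_relaxed);
  return RssLimitExceeded.load(std::memory_order_relaxed);
}
```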
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D40038
llvm-svn: 318301
Summary:
This is mostly some cleanup and shouldn't affect functionality.
Reviewing some code for a future addition, I realized that the complexity of
the initialization path was unnecessary, and so was maintaining a structure
for the allocator options throughout the initialization.
So we get rid of that structure, of an extraneous level of nesting for the
`init` function, and correct a couple of related code inaccuracies in the
flags cpp.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D39974
llvm-svn: 318157
Summary:
The ScudoAllocator uses a SecondaryHeader to keep track of the size and base address of each mmap'd chunk.
This aligns well with what the ReservedAddressRange is trying to do. This changeset converts the scudo allocator from using the MmapNoAccess/MmapFixed APIs to the ReservedAddressRange::Init and ::Map APIs. In doing so, it replaces the SecondaryHeader struct with the ReservedAddressRange object.
This is part 3 of a 4 part changeset; part 1 https://reviews.llvm.org/D39072 and part 2 https://reviews.llvm.org/D38592
Reviewers: alekseyshl, mcgrathr, cryptoad, phosek
Reviewed By: cryptoad
Subscribers: llvm-commits, cryptoad, kubamracek
Differential Revision: https://reviews.llvm.org/D38593
llvm-svn: 318080
Summary:
`getauxval` was introduced at API levels 18 & 21 depending on the architecture. Bump the
requirement to 21.
It also turns out that the NDK is finicky: NDK r13b doesn't include sys/auxv.h
when creating a standalone toolchain at API level 19 for ARM. So 18 didn't work
well with older NDKs.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: aemerson, srhines, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D39905
llvm-svn: 317907
Summary:
This reverts D39490.
For toolchains generated with older NDKs (<=r13b as far as we tested),
`cpu_set_t` doesn't exist in `sched.h`.
We have to figure out another way to get the number of CPUs without this.
Reviewers: rnk
Reviewed By: rnk
Subscribers: kubamracek, llvm-commits, krytarowski
Differential Revision: https://reviews.llvm.org/D39867
llvm-svn: 317834
Summary:
Scudo abides by the coding style enforced by the sanitizer_common
linter, but as of right now, it's not linter-enforced.
Add Scudo to the list of directories checked by check_lint.sh.
Also fix some linter errors found after getting this running.
Reviewers: cryptoad
Reviewed By: cryptoad
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D39757
llvm-svn: 317699
Summary:
Initially, Scudo had a monolithic design where both C and C++ functions were
living in the same library. This was not necessarily ideal, and with the work
on -fsanitize=scudo, it became more apparent that this needed to change.
We are splitting the new/delete interceptors into their own C++ library. This
allows more flexibility, notably with regard to std::bad_alloc when the work is
done. This also allows us to not link new & delete when using pure C.
Additionally, we add the UBSan runtimes with Scudo, in order to be able to have
a -fsanitize=scudo,undefined in Clang (see work in D39334).
The changes in this patch:
- split the cxx specific code in the scudo cmake file into a new library;
(and remove the spurious foreach loop, which was not necessary);
- add the UBSan runtimes (both C and C++);
- change the test cmake file to allow for specific C & C++ tests;
- make C tests pure C, rename their extension accordingly.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D39461
llvm-svn: 317097
Summary:
This introduces `SCUDO_MAX_CACHES`, which allows defining an upper bound on the
number of `ScudoTSD` created in the Shared TSD model (32U by default).
This name felt clearer than `SCUDO_MAX_TSDS`, which is technically what it really
is. I am open to suggestions if that doesn't feel right.
Additionally change `getNumberOfCPUs` to return a `u32` to be more consistent.
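A sketch of what the compile-time override might look like; placing the guard
in `scudo_platform.h` is an assumption for illustration:
```
// Upper bound on the number of ScudoTSDs (caches) used by the Shared TSD
// model, overridable at build time, with the 32U default mentioned above.
#ifndef SCUDO_MAX_CACHES
# define SCUDO_MAX_CACHES 32U
#endif
```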
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D39338
llvm-svn: 316788
Summary:
The 32-bit allocator is now on par with the 64-bit one in terms of security
(chunk randomization is done, batch separation is done).
Unless there is an objection, the comment can go away.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D39303
llvm-svn: 316620
Summary:
Up to now, the Scudo cmake target only provided a static library that had to be
linked to an executable to benefit from the hardened allocator.
This introduces a shared library as well, that can be LD_PRELOAD'ed.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D38980
llvm-svn: 316342
Summary:
Move the `sanitizer_posix.h` include within the `SANITIZER_ANDROID` `#if`,
otherwise this errors when built on non-Posix platforms (eg: Fuchsia).
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38956
llvm-svn: 315917
Summary:
Follow up to D38826.
We introduce `pthread_{get,set}specific` versions of `{get,set}CurrentTSD` to
allow non-Android platforms to use the Shared TSD model.
We now allow `SCUDO_TSD_EXCLUSIVE` to be defined at compile time.
A couple of things:
- I know that `#if SANITIZER_ANDROID` is not ideal within a function, but in
the end I feel it looks more compact and clean than going the .inc route; I
am open to an alternative if anyone has one;
- `SCUDO_TSD_EXCLUSIVE=1` requires ELF TLS support (and not emutls as this uses
malloc). I haven't found anything to enforce that, so it's currently not
checked.
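A minimal sketch of the `pthread_{get,set}specific` based accessors; the key
creation, destructor wiring, and error checking are omitted, and `ScudoTSD`
stands in for the real per-thread structure:
```
#include <pthread.h>

struct ScudoTSD;              // per-thread data: cache, quarantine cache, PRNG

static pthread_key_t TSDKey;  // created once with pthread_key_create

static ScudoTSD *getCurrentTSD() {
  return reinterpret_cast<ScudoTSD *>(pthread_getspecific(TSDKey));
}

static void setCurrentTSD(ScudoTSD *TSD) {
  // On Android the TSan TLS slot is used instead; this is the generic path.
  pthread_setspecific(TSDKey, TSD);
}
```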
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, llvm-commits
Differential Revision: https://reviews.llvm.org/D38854
llvm-svn: 315751
Summary:
This first part just prepares the grounds for part 2 and doesn't add any new
functionality. It mostly consists of small refactors:
- move the `pthread.h` include higher as it will be used in the headers;
- use `errno.h` in `scudo_allocator.cpp` instead of the sanitizer one, update
the `errno` assignments accordingly (otherwise it creates conflicts on some
platforms due to `pthread.h` including `errno.h`);
- introduce and use `getCurrentTSD` and `setCurrentTSD` for the shared TSD
model code;
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits, srhines
Differential Revision: https://reviews.llvm.org/D38826
llvm-svn: 315583
Summary:
Previous parts: D38139, D38183.
In this part of the refactor, we abstract the Linux vs Android TSD dissociation
in favor of an Exclusive vs Shared one, allowing for easier platform introduction
and configuration.
Most of this change consists of shuffling the files around to reflect the new
organization.
We introduce `scudo_platform.h`, where platform-specific definitions live. This
involves the TSD model and the platform specific allocator parameters. In an
upcoming CL, those will be configurable via defines, but we currently stick
with conservative defaults.
Reviewers: alekseyshl, dvyukov
Reviewed By: alekseyshl, dvyukov
Subscribers: srhines, llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D38244
llvm-svn: 314224
Summary:
Following D38139, we now consolidate the TSD definition, merging the shared
TSD definition with the exclusive TSD definition. We introduce a boolean, set
at initialization, denoting whether the TSD needs to be unlocked or not. This
adds some unused members to the exclusive TSD, but increases consistency and
reduces definition fragmentation.
We remove the fallback mechanism from `scudo_allocator.cpp` and add a fallback
TSD in the non-shared version. Since the shared version doesn't require one,
this makes overall more sense.
There are a couple of additional cosmetic changes: removing the header guards
from the remaining `.inc` files, and adding an error string to a `CHECK`.
Question to reviewers: I thought about friending `getTSDAndLock` in `ScudoTSD`
so that the `FallbackTSD` could `Mutex.Lock()` directly instead of `lock()`
which involved zeroing out the `Precedence`, which is unused otherwise. Is it
worth doing?
Reviewers: alekseyshl, dvyukov, kcc
Reviewed By: dvyukov
Subscribers: srhines, llvm-commits
Differential Revision: https://reviews.llvm.org/D38183
llvm-svn: 314110
Summary:
We are going through an overhaul of Scudo's TSD, to allow for new platforms
to be integrated more easily, and make the code more sound.
This first part is mostly renaming, preferring some shorter names, correcting
some comments. I removed `getPrng` and `getAllocatorCache` to directly access
the members; there was not really any benefit to them (and it was suggested by
Dmitry in D37590).
The only functional change is in `scudo_tls_android.cpp`: we enforce bounds on
`NumberOfTSDs`, and most of the logic in `getTSDAndLockSlow` is skipped if we
only have 1 TSD.
Reviewers: alekseyshl, dvyukov, kcc
Reviewed By: dvyukov
Subscribers: llvm-commits, srhines
Differential Revision: https://reviews.llvm.org/D38139
llvm-svn: 313987
Summary:
In a few functions (`scudoMemalign` and the like), we would call
`ScudoAllocator::FailureHandler::OnBadRequest` if the parameters didn't check
out. The issue is that if the allocator had not been initialized (eg: if this
is the first heap related function called), we would use variables like
`allocator_may_return_null` and `exitcode` that still had their default value
(as opposed to the one set by the user or the initialization path).
To solve this, we introduce `handleBadRequest` that will call `initThreadMaybe`,
allowing the options to be correctly initialized.
Unfortunately, the tests were passing because `exitcode` was still 0, so the
results looked like success. Change those tests to do what they were supposed
to.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D37853
llvm-svn: 313294
Summary:
Some of glibc's own thread local data is destroyed after a user's thread local
destructors are called, via __libc_thread_freeres. This might involve calling
free, as is the case for strerror_thread_freeres.
If there is no prior heap operation in the thread, this free would end up
initializing some thread specific data that would never be destroyed properly
(as user's pthread destructors have already been called), while still being
deallocated when the TLS goes away. As a result, a program could SEGV, usually
in __sanitizer::AllocatorGlobalStats::Unregister, where one of the doubly linked
list links would refer to a now unmapped memory area.
To prevent this from happening, we will not do a full initialization from the
deallocation path. This means that the fallback cache & quarantine will be used
if no other heap operation has been called, and we effectively prevent the TSD
from being initialized and never destroyed. The TSD will be fully initialized for all
other paths.
In the event of a thread doing only frees and nothing else, a TSD would never
be initialized for that thread, but this situation is unlikely and we can live
with that.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D37697
llvm-svn: 312939
Summary:
`getauxval` was introduced with API level 18. In order to get things to work
at lower API levels (for the toolchain itself which is built at 14 for 32-bit),
we introduce an alternative implementation reading directly from
`/proc/self/auxv`.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, llvm-commits
Differential Revision: https://reviews.llvm.org/D37488
llvm-svn: 312653
Summary:
Currently `TransferBatch` are located within the same memory regions as
"regular" chunks. This is not ideal for security: they make for an interesting
target to overwrite, and are not protected by the frontend (namely, Scudo).
To solve this, we re-introduce `kUseSeparateSizeClassForBatch` for the 32-bit
Primary allowing for `TransferBatch` to end up in their own memory region.
Currently only Scudo would use this new feature; the default behavior remains
unchanged. The separate `kBatchClassID` was used for a brief period of time
previously but removed when the 64-bit ended up using the "free array".
Reviewers: alekseyshl, kcc, eugenis
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D37082
llvm-svn: 311891
Summary:
This patch changes a few (small) things around for compatibility purposes for
the current Android & Fuchsia work:
- `realloc`'ing some memory that was not allocated with `malloc`, `calloc` or
`realloc`, while UB according to http://pubs.opengroup.org/onlinepubs/009695399/functions/realloc.html
is more common than one would think. We now only check this if
`DeallocationTypeMismatch` is set; change the "mismatch" error
messages to be more homogeneous;
- some sketchily written but widely used libraries expect a call to `realloc`
to copy the usable size of the old chunk to the new one instead of the
requested size. We have to begrudgingly abide by this de facto standard.
This doesn't seem to impact security either way, unless someone comes up with
something we didn't think about;
- the CRC32 intrinsics for 64-bit take a 64-bit first argument. This is
misleading as the upper 32 bits end up being ignored. This was also raising
`-Wconversion` errors. Change things to take a `u32` as first argument.
This also means we were (and are) only using 32 bits of the Cookie - not a
big thing, but worth mentioning.
- Includes-wise: prefer `stddef.h` to `cstddef`, move `scudo_flags.h` where it
is actually needed.
- Add tests for the memalign-realloc case, and the realloc-usable-size one.
(Edited typos)
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D36754
llvm-svn: 311018
Summary:
On platforms with `getrandom`, the system call defaults to blocking. This
becomes an issue in the very early stage of the boot for Scudo, when the RNG
source is not set-up yet: the syscall will block and we'll stall.
Introduce a parameter to specify that the function should not block, defaulting
to blocking as the underlying syscall does.
Update Scudo to use the non-blocking version.
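A hedged sketch of the non-blocking flavor using the raw `getrandom` syscall
(assuming a Linux target where `SYS_getrandom` is defined); the actual
sanitizer implementation differs in its fallbacks and error handling:
```
#include <sys/syscall.h>
#include <unistd.h>
#include <cstddef>

#ifndef GRND_NONBLOCK
#define GRND_NONBLOCK 0x0001
#endif

// Returns true if Length bytes of randomness were obtained. When Blocking is
// false, the syscall fails with EAGAIN instead of stalling if the entropy
// pool isn't initialized yet (the early boot situation described above).
bool GetRandom(void *Buffer, size_t Length, bool Blocking) {
  const long N = syscall(SYS_getrandom, Buffer, Length,
                         Blocking ? 0 : GRND_NONBLOCK);
  return N == static_cast<long>(Length);
}
```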
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D36399
llvm-svn: 310839
Summary:
Previously we were rounding up the size passed to `pvalloc` to the next
multiple of the page size no matter what. There is an overflow possibility that
wasn't accounted for. So now, return null in the event of an overflow. The man
page doesn't seem to indicate the errno to set in this particular situation,
but the glibc unit tests go for ENOMEM (https://code.woboq.org/userspace/glibc/malloc/tst-pvalloc.c.html#54)
so we'll do the same.
Update the aligned allocation functions tests to check for properly aligned
returned pointers, and the `pvalloc` corner cases.
@alekseyshl: do you want me to do the same in the other Sanitizers?
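A minimal sketch of the overflow check described in this change, with a
hypothetical `allocatePages` backend call standing in for the real allocation
path:
```
#include <cerrno>
#include <cstddef>
#include <cstdint>

void *allocatePages(size_t Size, size_t PageSize);  // hypothetical backend

void *scudoPvalloc(size_t Size, size_t PageSize) {
  // Rounding Size up to a multiple of PageSize would overflow: return null
  // and set ENOMEM, as the glibc test referenced above expects.
  if (Size > SIZE_MAX - PageSize + 1) {
    errno = ENOMEM;
    return nullptr;
  }
  const size_t RoundedSize = (Size + PageSize - 1) & ~(PageSize - 1);
  return allocatePages(RoundedSize, PageSize);
}
```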
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: kubamracek, alekseyshl, llvm-commits
Differential Revision: https://reviews.llvm.org/D35818
llvm-svn: 309033
Summary:
First, some context.
The main feedback we get about the quarantine is that it's too memory hungry.
A single MB of quarantine will have an impact of 3 to 4MB of PSS/RSS, and
things quickly get out of hand in terms of memory usage, and the quarantine
ends up disabled.
The main objective of the quarantine is to protect from use-after-free
exploitation by making it harder for an attacker to reallocate a controlled
chunk in place of the targeted freed chunk. This is achieved by not making it
available to the backend right away for reuse, but holding it a little while.
Historically, what has usually been the target of such attacks was objects,
where vtable pointers or other function pointers could constitute a valuable
target to replace. Those are usually on the smaller side. There is barely any
advantage in putting several megabytes of RGB data or the like in the quarantine.
Now for the patch.
This patch introduces a new way the Quarantine behaves in Scudo. First of all,
the size of the Quarantine will be defined in KB instead of MB, then we
introduce a new option: the size up to which (lower than or equal to) a chunk
will be quarantined. This way, we only quarantine smaller chunks, and the size
of the quarantine remains manageable. It also prevents someone from triggering
a recycle by allocating something huge. We default to 512 bytes on 32-bit and
2048 bytes on 64-bit platforms.
In detail, the patch includes the following:
- introduce `QuarantineSizeKb`, but honor `QuarantineSizeMb` if set to fall
back to the old behavior (meaning no threshold in that case);
`QuarantineSizeMb` is described as deprecated in the options descriptions;
documentation update will follow;
- introduce `QuarantineChunksUpToSize`, the new threshold value;
- update the `quarantine.cpp` test, and other tests using `QuarantineSizeMb`;
- remove `AllocatorOptions::copyTo`, it wasn't used;
- slightly change the logic around `quarantineOrDeallocateChunk` to accommodate
for the new logic; rename a couple of variables there as well;
Rewriting the tests, I found a somewhat annoying bug where non-default aligned
chunks would account for more than needed when placed in the quarantine due to
`<< MinAlignment` instead of `<< MinAlignmentLog`. This is fixed and tested for
now.
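A sketch of the threshold logic with the flag names introduced above and
hypothetical deallocation helpers; the deprecated `QuarantineSizeMb`
compatibility path is left out:
```
#include <cstddef>

void deallocateToBackend(void *Ptr);            // hypothetical direct path
void putInQuarantine(void *Ptr, size_t Size);   // hypothetical quarantine path

void quarantineOrDeallocateChunk(void *Ptr, size_t Size,
                                 size_t QuarantineSizeKb,
                                 size_t QuarantineChunksUpToSize) {
  // Quarantine disabled, or chunk above the threshold: free right away.
  if (QuarantineSizeKb == 0 || Size > QuarantineChunksUpToSize)
    deallocateToBackend(Ptr);
  else
    putInQuarantine(Ptr, Size);
}
```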
Reviewers: alekseyshl, kcc
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D35694
llvm-svn: 308884
Summary:
ASan/MSan/LSan allocators set errno on allocation failures according to
malloc/calloc/etc. expected behavior.
MSan allocator was refactored a bit to make its structure more similar
with other allocators.
Also switch Scudo allocator to the internal errno definitions.
TSan allocator changes will follow.
Reviewers: eugenis
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D35275
llvm-svn: 308344
Summary:
Set proper errno code on allocation failure and change pvalloc and
posix_memalign implementation to satisfy their man-specified
requirements.
Reviewers: cryptoad
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D35429
llvm-svn: 308053
Summary:
Secondary backed allocations do not require a cache. While it's not necessarily
an issue when each thread has its cache, it becomes one with a shared pool of
caches (Android), as a Secondary backed allocation or deallocation holds a
cache that could be useful to another thread doing a Primary backed allocation.
We introduce an additional PRNG and its mutex (to avoid contention with the
Fallback one for Primary allocations) that will provide the `Salt` needed for
Secondary backed allocations.
I changed some of the code in a way that feels more readable to me (eg: using
some values directly rather than going through ternary assigned variables,
using `true`/`false` directly rather than `FromPrimary`). I will let reviewers
decide if it actually is.
An additional change is to mark `CheckForCallocOverflow` as `UNLIKELY`.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D35358
llvm-svn: 307958
Summary:
This follows the addition of `GetRandom` with D34412. We remove our
`/dev/urandom` code and use the new function. Additionally, change the PRNG to
a slightly faster version. One of the issues with the old code is that we have
64 full bits of randomness per "next", using only 8 of those for the Salt and
discarding the rest. So we add a cached u64 in the PRNG that can serve up to
8 u8 before having to call the "next" function again.
During some integration work, I also realized that some very early processes
(like `init`) do not benefit from `/dev/urandom` yet. So if there is no
`getrandom` syscall as well, we have to fall back to some sort of initialization
of the PRNG.
Now a few words on why XoRoShiRo and not something else. I have played a while
with various PRNGs on 32 & 64 bit platforms. Some results are below. LCG 32 & 64
are usually faster but produce respectively 15 & 31 bits of entropy, meaning
that to get a full 64-bit, you would need to call them several times. The simple
XorShift is fast, produces 32 bits but is mediocre with regard to PRNG test
suites, PCG is slower overall, and XoRoShiRo is faster than XorShift128+ and
produces full 64 bits.
%%%
root@tulip-chiphd:/data # ./randtest.arm
[+] starting xs32...
[?] xs32 duration: 22431833053ns
[+] starting lcg32...
[?] lcg32 duration: 14941402090ns
[+] starting pcg32...
[?] pcg32 duration: 44941973771ns
[+] starting xs128p...
[?] xs128p duration: 48889786981ns
[+] starting lcg64...
[?] lcg64 duration: 33831042391ns
[+] starting xos128p...
[?] xos128p duration: 44850878605ns
root@tulip-chiphd:/data # ./randtest.aarch64
[+] starting xs32...
[?] xs32 duration: 22425151678ns
[+] starting lcg32...
[?] lcg32 duration: 14954255257ns
[+] starting pcg32...
[?] pcg32 duration: 37346265726ns
[+] starting xs128p...
[?] xs128p duration: 22523807219ns
[+] starting lcg64...
[?] lcg64 duration: 26141304679ns
[+] starting xos128p...
[?] xos128p duration: 14937033215ns
%%%
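For reference, a sketch of the cached-u64 idea on top of a standard
xoroshiro128+; this is not the exact Scudo implementation, and the state must
be seeded to a non-zero value before use:
```
#include <cstdint>

struct Xoroshiro128Plus {
  uint64_t State[2];            // seed with non-zero values before use
  uint64_t Cached = 0;
  unsigned CachedBytesLeft = 0;

  static uint64_t rotl(uint64_t X, int K) { return (X << K) | (X >> (64 - K)); }

  uint64_t next() {
    const uint64_t S0 = State[0];
    uint64_t S1 = State[1];
    const uint64_t Result = S0 + S1;
    S1 ^= S0;
    State[0] = rotl(S0, 55) ^ S1 ^ (S1 << 14);
    State[1] = rotl(S1, 36);
    return Result;
  }

  // One call to next() serves up to 8 bytes of Salt before a refill is needed.
  uint8_t getU8() {
    if (CachedBytesLeft == 0) {
      Cached = next();
      CachedBytesLeft = sizeof(Cached);
    }
    const uint8_t Salt = static_cast<uint8_t>(Cached);
    Cached >>= 8;
    CachedBytesLeft--;
    return Salt;
  }
};
```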
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: aemerson, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D35221
llvm-svn: 307798
Summary:
We were not following the `man` documented behaviors for invalid arguments to
`memalign` and associated functions. Using `CHECK` for those was a bit extreme,
so we relax the behavior to return null pointers as expected when this happens.
Adapt the associated test.
I am also using this change to make a few more minor performance improvements:
- mark as `UNLIKELY` a bunch of unlikely conditions;
- the current `CHECK` in `__sanitizer::RoundUpTo` is redundant for us in *all*
calls. So I am introducing our own version without said `CHECK`.
- change our combined allocator's `GetActuallyAllocatedSize`. We already know if
the pointer is from the Primary or Secondary, so the `PointerIsMine` check is
redundant as well, and costly for the 32-bit Primary. So we get the size by
directly using the available Primary functions.
Finally, change a `int` to `uptr` to avoid a warning/error when compiling on
Android.
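A sketch of the relaxed argument validation, with a hypothetical
`allocateAligned` backend call:
```
#include <cstddef>

void *allocateAligned(size_t Size, size_t Alignment);  // hypothetical backend

void *scudoMemalign(size_t Alignment, size_t Size) {
  // The alignment must be a power of two; instead of a hard CHECK, return
  // null for invalid arguments, per the man-documented behavior.
  if (Alignment == 0 || (Alignment & (Alignment - 1)) != 0)
    return nullptr;
  return allocateAligned(Size, Alignment);
}
```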
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D34782
llvm-svn: 306698
Summary:
Operator new interceptors behavior is now controlled by their nothrow
property as well as by allocator_may_return_null flag value:
- allocator_may_return_null=* + new() - die on allocation error
- allocator_may_return_null=0 + new(nothrow) - die on allocation error
- allocator_may_return_null=1 + new(nothrow) - return null
Ideally new() should throw std::bad_alloc exception, but that is not
trivial to achieve, hence TODO.
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34731
llvm-svn: 306604
Summary:
Move cached allocator_may_return_null flag to sanitizer_allocator.cc and
provide an API to consolidate and unify the behavior of all specific allocators.
Make all sanitizers using CombinedAllocator follow
AllocatorReturnNullOrDieOnOOM() rules to behave the same way when OOM
happens.
When OOM happens, turn allocator_out_of_memory flag on regardless of
allocator_may_return_null flag value (it used to not be set when
allocator_may_return_null == true).
release_to_os_interval_ms and rss_limit_exceeded will likely be moved to
sanitizer_allocator.cc too (later).
Reviewers: eugenis
Subscribers: srhines, kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34310
llvm-svn: 305858
Summary:
Currently we are not enforcing the success of `pthread_once`, and
`pthread_setspecific`. Errors could lead to harder to debug issues later in
the thread's life. This adds checks for a 0 return value for both.
If `pthread_setspecific` fails in the teardown path, opt for an immediate
teardown as opposed to a fatal failure.
Reviewers: alekseyshl, kcc
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33555
llvm-svn: 303998
Summary:
After discussing the current defaults with a couple of parties, the consensus
is that they are too high. 1Mb of quarantine has about a 4Mb impact on PSS, so
memory usage goes up quickly.
This is obviously configurable, but the default value should be more
"approachable", so both the global size and the thread local size are 1/4 of
what they used to be.
Reviewers: alekseyshl, kcc
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33321
llvm-svn: 303380
Summary:
With rL279771, SizeClassAllocator64 was changed to accept only one template
instead of 5, for the following reasons: "First, this will make the mangled
names shorter. Second, this will make adding more parameters simpler". This
patch mirrors that work for SizeClassAllocator32.
This is in preparation for introducing the randomization of chunks in the
32-bit SizeClassAllocator in a later patch.
Reviewers: kcc, alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33141
llvm-svn: 303071
Summary:
The reasoning behind this change is twofold:
- the current combined allocator (sanitizer_allocator_combined.h) implements
features that are not relevant for Scudo, making some code redundant, and
some restrictions not pertinent (alignments for example). This forced us to
do some weird things between the frontend and our secondary to make things
work;
- we have enough information to be able to know if a chunk will be serviced by
the Primary or Secondary, allowing us to avoid extraneous calls to functions
such as `PointerIsMine` or `CanAllocate`.
As a result, the new scudo-specific combined allocator is very straightforward,
and allows us to remove some now unnecessary code both in the frontend and the
secondary. Unused functions have been left in as unimplemented for now.
It turns out to also be a sizeable performance gain (3% faster in some Android
memory_replay benchmarks, with some additional gains on other platforms).
Reviewers: alekseyshl, kcc, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33007
llvm-svn: 302830
Summary:
This change optimizes several aspects of the checksum used for chunk headers.
First, there is no point in checking the weak symbol `computeHardwareCRC32`
every time; it will either be there or not when we start, so check it once
during initialization and set the checksum type accordingly.
Then, the loading of `HashAlgorithm` for SSE versions (and the ARM equivalent)
was not optimized out, even though it was unnecessary. So I reshuffled that
part of the code,
which duplicates a tiny bit of code, but ends up in a much cleaner assembly
(and faster as we avoid an extraneous load and some calls).
The following code is the checksum at the end of `scudoMalloc` for x86_64 with
full SSE 4.2, before:
```
mov rax, 0FFFFFFFFFFFFFFh
shl r10, 38h
mov edi, dword ptr cs:_ZN7__scudoL6CookieE ; __scudo::Cookie
and r14, rax
lea rsi, [r13-10h]
movzx eax, cs:_ZN7__scudoL13HashAlgorithmE ; __scudo::HashAlgorithm
or r14, r10
mov rbx, r14
xor bx, bx
call _ZN7__scudo20computeHardwareCRC32Ejm ; __scudo::computeHardwareCRC32(uint,ulong)
mov rsi, rbx
mov edi, eax
call _ZN7__scudo20computeHardwareCRC32Ejm ; __scudo::computeHardwareCRC32(uint,ulong)
mov r14w, ax
mov rax, r13
mov [r13-10h], r14
```
After:
```
mov rax, cs:_ZN7__scudoL6CookieE ; __scudo::Cookie
lea rcx, [rbx-10h]
mov rdx, 0FFFFFFFFFFFFFFh
and r14, rdx
shl r9, 38h
or r14, r9
crc32 eax, rcx
mov rdx, r14
xor dx, dx
mov eax, eax
crc32 eax, rdx
mov r14w, ax
mov rax, rbx
mov [rbx-10h], r14
```
Reviewers: dvyukov, alekseyshl, kcc
Reviewed By: alekseyshl
Subscribers: aemerson, rengolin, llvm-commits
Differential Revision: https://reviews.llvm.org/D32971
llvm-svn: 302538
Summary:
This change adds Android support to the allocator (but doesn't yet enable it in
the cmake config), and should be the last fragment of the rewritten change
D31947.
Android has more memory constraints than other platforms, so the idea of a
unique context per thread would not have worked. The alternative chosen is to
allocate a set of contexts based on the number of cores on the machine, and
share those contexts within the threads. Contexts can be dynamically reassigned
to threads to prevent contention, based on a scheme suggested by @dvyukov in
the initial review.
Additionally, given that Android doesn't support ELF TLS (only emutls for now),
we use the TSan TLS slot to make things faster: Scudo is mutually exclusive
with other sanitizers so this shouldn't cause any problem.
An additional change made here, is replacing `thread_local` by `THREADLOCAL`
and using the initial-exec thread model in the non-Android version to prevent
extraneous weak definition and checks on the relevant variables.
Reviewers: kcc, dvyukov, alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D32649
llvm-svn: 302300
Summary:
This change introduces scudo_tls.h & scudo_tls_linux.cpp, where we move the
thread local variables used by the allocator, namely the cache, quarantine
cache & prng. `ScudoThreadContext` will hold those. This patch doesn't
introduce any new platform support yet, this will be the object of a later
patch. This also changes the PRNG so that the structure can be POD.
Reviewers: kcc, dvyukov, alekseyshl
Reviewed By: dvyukov, alekseyshl
Subscribers: llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D32440
llvm-svn: 301584
Summary:
In the current state of things, the deallocation path puts a chunk in the
Quarantine whether it's enabled or not (size of 0). When the Quarantine is
disabled, this results in the header being loaded (and checked) twice, and
stored (and checksummed) once, in `deallocate` and `Recycle`.
This change introduces a `quarantineOrDeallocateChunk` function that has a
fast path to deallocation if the Quarantine is disabled. Even though this is
not the preferred configuration security-wise, this change saves a sizeable
amount of processing for that particular situation (which could be adopted by
low memory devices). Additionally this simplifies a bit `deallocate` and
`reallocate`.
Reviewers: dvyukov, kcc, alekseyshl
Reviewed By: dvyukov
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32310
llvm-svn: 301015
Summary:
GetActuallyAllocatedSize is actually expensive. In order to avoid calling this
function in the malloc/free fast path, we change the Scudo chunk header to
store the size of the chunk, if from the Primary, or the amount of unused
bytes if from the Secondary. This way, we only have to call the culprit
function for Secondary backed allocations (and still in realloc).
The performance gain on a single-threaded pure malloc/free benchmark exercising
the Primary allocator is above 5%.
Reviewers: alekseyshl, kcc, dvyukov
Reviewed By: dvyukov
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32299
llvm-svn: 300861
Summary:
This is part of D31947 that is being split into several smaller changes.
This one deals with all the minor changes, more specifically:
- Rename some variables and functions to make their purpose clearer;
- Reorder some code;
- Mark the hot termination-incurring checks as `UNLIKELY`; if they happen, the
program will die anyway;
- Add a `getScudoChunk` method;
- Add an `eraseHeader` method to ScudoChunk that will clear a header with 0s;
- Add a parameter to `allocate` to know if the allocated chunk should be filled
with zeros. This allows `calloc` to not have to call
`GetActuallyAllocatedSize`; more changes to get rid of this function on the
hot paths will follow;
- reallocate was missing a check to verify that the pointer is properly
aligned on `MinAlignment`;
- The `Stats` in the secondary have to be protected by a mutex as the `Add`
and `Sub` methods are actually not atomic;
- The software CRC32 function was moved to the header to allow for inlining.
Reviewers: dvyukov, alekseyshl, kcc
Reviewed By: dvyukov
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32242
llvm-svn: 300846
Summary:
The local and global quarantine sizes were not offering a distinction for
32-bit and 64-bit platforms. This is addressed with lower values for 32-bit.
When writing additional tests for the quarantine, it was discovered that when
calling some of the allocator interface functions prior to any allocation
operation having occurred, the test would crash due to the allocator not being
initialized. This was addressed by making sure the allocator is initialized
for those scenarios.
Relevant tests were added in interface.cpp and quarantine.cpp.
Last change being the removal of the extraneous link dependencies for the
tests thanks to rL293220, and the addition of the gc-sections linker flag.
Reviewers: kcc, alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29341
llvm-svn: 294037
Summary:
In an effort to get rid of dependencies on external libraries, we are
replacing the atomic PackedHeader's use of std::atomic with Sanitizer's
atomic_uint64_t, which allows us to avoid -latomic.
Reviewers: kcc, phosek, alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28864
llvm-svn: 292630
Summary:
ARM & AArch64 runtime detection for hardware support of CRC32 has been added
via a check of the AT_HWCAP auxiliary vector.
Following Michal's suggestions in D28417, the CRC32 code has been further
changed and looks better now. When compiled with full relro (which is strongly
suggested to benefit from additional hardening), the weak symbol for
computeHardwareCRC32 is read-only and the assembly generated is fairly clean
and straightforward. As suggested, an additional optimization is to skip
the runtime check if SSE 4.2 has been enabled globally, as opposed to only
for scudo_crc32.cpp.
scudo_crc32.h has no purpose anymore and was removed.
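A hedged sketch of the runtime check (AArch64 shown; the HWCAP bit value and
header availability are assumptions, and the 32-bit ARM bit differs):
```
#include <sys/auxv.h>

#ifndef HWCAP_CRC32
#define HWCAP_CRC32 (1 << 7)  // AArch64 HWCAP bit for the CRC32 instructions
#endif

// Runtime detection of hardware CRC32 support via the hardware capabilities
// auxiliary vector, as described above.
bool hasHardwareCRC32() {
  return (getauxval(AT_HWCAP) & HWCAP_CRC32) != 0;
}
```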
Reviewers: alekseyshl, kcc, rengolin, mgorny, phosek
Reviewed By: rengolin, mgorny
Subscribers: aemerson, rengolin, llvm-commits
Differential Revision: https://reviews.llvm.org/D28574
llvm-svn: 292409
Making this variable non-static avoids the need for locking to ensure
that the initialization is thread-safe, which in turn eliminates the
runtime dependency on the libc++abi library (for __cxa_guard_acquire and
__cxa_guard_release) which makes it possible to link scudo against
pure C programs.
Differential Revision: https://reviews.llvm.org/D28757
llvm-svn: 292253
Summary:
As raised in D28304, enabling SSE 4.2 for the whole Scudo tree leads to the
emission of SSE 4.2 instructions everywhere, while the runtime checks only
applied to the CRC32 computing function.
This patch separates the CRC32 function taking advantage of the hardware into
its own file, and only enables -msse4.2 for that file, if detected to be
supported by the compiler.
Another consequence of removing SSE4.2 globally is realizing that memcpy calls
were not being optimized, which turned out to be due to the -fno-builtin in
SANITIZER_COMMON_CFLAGS. So we now explicitly enable builtins for Scudo.
The resulting assembly looks good, with some CALLs introduced instead of
the CRC32 code being inlined.
Reviewers: kcc, mgorny, alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28417
llvm-svn: 291570
Disable the code appending -msse4.2 flag implicitly when the compiler
supports it. Compiler support for this flag does not indicate that
the underlying CPU will support SSE4.2, and passing it may result in
SSE4.2 code being emitted *implicitly*.
If the target platform supports SSE4.2 appropriately, the relevant bits
should be already enabled via -march= or equivalent. In this case
passing -msse4.2 is redundant.
If runtime detection is desired (which seems to be the case with SCUDO),
then (as the gcc manpage points out) the SSE4.2-specific code needs to be
isolated into a separate file, the -msse4.2 flag can be forced only
for that file and the function defined in that file can only be called
when the CPU is determined to support SSE4.2.
This fixes SIGILL on SCUDO when it is compiled using gcc-5.4.
Differential Revision: https://reviews.llvm.org/D28304
llvm-svn: 291217
Summary:
With the recent changes to the Secondary, we use less bits for UnusedBytes,
which allows us in return to increase the bits used for Offset. That means
that we can use a Primary SizeClassMap allowing for a larger maximum size.
Reviewers: kcc, alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D27816
llvm-svn: 289838
Summary:
I actually had an integer overflow on 32-bit with D27428 that didn't reproduce
locally, as the test servers would manage to allocate addresses in the 0xffffxxxx
range, which led to some issues when rounding addresses.
At this point, I feel that Scudo could benefit from having its own combined
allocator, as we don't get any benefit from the current one, but have to work
around some hurdles (alignment checks, rounding up that is no longer needed,
extraneous code).
Reviewers: kcc, alekseyshl
Subscribers: llvm-commits, kubabrecka
Differential Revision: https://reviews.llvm.org/D27681
llvm-svn: 289572
Summary:
The combined allocator rounds up the requested size with regard to the
alignment, which makes sense when being serviced by the primary as it comes
with alignment guarantees, but not with the secondary. For the rare case of
large alignments, it wastes memory, and forces unnecessarily large fields for
the Scudo header. With this patch, we pass the non-alignment-rounded-up size
to the secondary, and adapt the Scudo code for this change.
Reviewers: alekseyshl, kcc
Subscribers: llvm-commits, kubabrecka
Differential Revision: https://reviews.llvm.org/D27428
llvm-svn: 289088
Summary:
This update introduces i386 support for the Scudo Hardened Allocator, and
offers software alternatives for functions that used to require hardware
specific instruction sets. This should make porting to new architectures
easier.
Among the changes:
- The chunk header has been changed to accommodate the size limitations
encountered on 32-bit architectures. We now fit everything in 64 bits (a
sketch follows this list). This
was achieved by storing the amount of unused bytes in an allocation rather
than the size itself, as one can be deduced from the other with the help
of the GetActuallyAllocatedSize function. As it turns out, this header can
be used for both 64 and 32 bit, and as such we dropped the requirement for
the 128-bit compare and exchange instruction support (cmpxchg16b).
- Add 32-bit support for the checksum and the PRNG functions: if the SSE 4.2
instruction set is supported, use the 32-bit CRC32 instruction, and in the
XorShift128, use a 32-bit based state instead of 64-bit.
- Add software support for CRC32: if SSE 4.2 is not supported, fallback on a
software implementation.
- Modify tests that were not 32-bit compliant, and expand them to cover more
allocation and alignment sizes. The random shuffle test has been deactivated
for linux-i386 & linux-i686 as the 32-bit sanitizer allocator doesn't
currently randomize chunks.
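A hypothetical sketch of the packed 64-bit header mentioned in the first
bullet; the field names and bit widths here are illustrative, not the actual
layout:
```
#include <cstdint>

// Everything fits in 64 bits by storing the unused bytes of an allocation
// instead of its size; the size is recovered via GetActuallyAllocatedSize().
struct UnpackedHeader {
  uint64_t Checksum    : 16;
  uint64_t UnusedBytes : 20;
  uint64_t State       : 2;
  uint64_t AllocType   : 2;
  uint64_t Offset      : 16;
  uint64_t Salt        : 8;
};
static_assert(sizeof(UnpackedHeader) == sizeof(uint64_t),
              "the header must fit in 64 bits");
```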
Reviewers: alekseyshl, kcc
Subscribers: filcab, llvm-commits, tberghammer, danalbert, srhines, mgorny, modocache
Differential Revision: https://reviews.llvm.org/D26358
llvm-svn: 288255
Summary:
In order to avoid starting a separate thread to return unused memory to
the system (the thread interferes with process startup on Android,
Zygote waits for all threads to exit before fork, but this thread never
exits), try to return it right after free.
Reviewers: eugenis
Subscribers: cryptoad, filcab, danalbert, kubabrecka, llvm-commits
Patch by Aleksey Shlyapnikov.
Differential Revision: https://reviews.llvm.org/D27003
llvm-svn: 288091
Summary:
In order to support 32-bit platforms, we have to make some adjustments in
multiple locations, one of them being the Scudo chunk header. For it to fit on
64 bits (as a reminder, on x64 it's 128 bits), I had to crunch the space taken
by some of the fields. In order to keep the offset field small, the secondary
allocator was changed to accommodate aligned allocations for larger alignments,
hence making the offset constant for chunks serviced by it.
The resulting header candidate has been added, and further modifications to
allow 32-bit support will follow.
Another notable change is the addition of MaybeStartBackgroudThread() to allow
release of the memory to the OS.
Reviewers: kcc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25688
llvm-svn: 285209
Summary:
s/CHECK_LT/CHECK_LE/ in the secondary allocator, as under certain circumstances
Ptr + Size can be equal to MapEnd. This edge case was not found by the current
tests, so those were extended to be able to catch that.
Reviewers: kcc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25101
llvm-svn: 282913
Summary:
GetActuallyAllocatedSize() was not accounting for the last page of the mapping
being a guard page, and was returning the wrong number of actually allocated
bytes, which in turn would mess up the realloc logic. Current tests didn't
find this as the size exercised was only serviced by the Primary.
Correct the issue by subtracting PageSize, and update the realloc test to
exercise paths in both the Primary and the Secondary.
Reviewers: kcc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24787
llvm-svn: 282030
Summary:
The Sanitizer Secondary Allocator was not entirely ideal for Scudo for several
reasons: a decent amount of unneeded code, redundant checks already performed by
the front end, unneeded data structures, and difficulty properly protecting the
secondary chunks' headers.
Given that the Secondary allocator is pretty straightforward, Scudo will use its
own, trimming all the unneeded code off of the Sanitizer one. A significant
difference in terms of security is that now each secondary chunk is preceded
and followed by a guard page, thus mitigating overflows into and from the
chunk.
A test was added as well to illustrate the overflow & underflow situations
into the guard pages.
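A hedged, simplified sketch of the guard-page layout described above; the real
secondary also records the mapping in a header and handles errors differently:
```
#include <sys/mman.h>
#include <cstddef>

// Reserve [guard page | usable region | guard page] and only make the middle
// accessible, so linear overflows and underflows hit PROT_NONE pages.
void *secondaryAllocate(size_t Size, size_t PageSize) {
  const size_t RoundedSize = (Size + PageSize - 1) & ~(PageSize - 1);
  const size_t MapSize = RoundedSize + 2 * PageSize;
  void *MapBase = mmap(nullptr, MapSize, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (MapBase == MAP_FAILED)
    return nullptr;
  char *UserBase = static_cast<char *>(MapBase) + PageSize;
  if (mprotect(UserBase, RoundedSize, PROT_READ | PROT_WRITE) != 0)
    return nullptr;
  return UserBase;
}
```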
Reviewers: kcc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24737
llvm-svn: 281938
This patch builds on LLVM r279776.
In this patch I've done some cleanup and abstracted three common steps runtime components have in their CMakeLists files, and added a fourth.
The three steps I abstract are:
(1) Add a top-level target (i.e. asan, msan, ...)
(2) Set the target properties for sorting files in IDE generators
(3) Make the compiler-rt target depend on the top-level target
The new step is to check if a command named "runtime_register_component" is defined, and to call it with the component name.
The runtime_register_component command is defined in llvm/runtimes/CMakeLists.txt, and presently just adds the component to a list of sub-components, which later gets used to generate target mappings.
With this patch a new workflow for runtimes builds is supported. The new workflow when building runtimes from the LLVM runtimes directory is:
> cmake [...]
> ninja runtimes-configure
> ninja asan
The "runtimes-configure" target builds all the dependencies for configuring the runtimes projects, and runs CMake on the runtimes projects. Running the runtimes CMake generates a list of targets to bind into the top-level CMake so subsequent build invocations will have access to some of Compiler-RT's targets through the top-level build.
Note: This patch does exclude some top-level targets from compiler-rt libraries because they either don't install files (sanitizer_common), or don't have a corresponding `check` target (stats).
llvm-svn: 279863
Summary:
Currently, the Scudo Hardened Allocator only gets its flags via the SCUDO_OPTIONS environment variable.
With this patch, we offer the opportunity for programs to define their own options via __scudo_default_options() which behaves like __asan_default_options() (weak symbol).
A relevant test has been added as well, and the documentation updated accordingly.
I also used this patch as an opportunity to rename a few variables to comply with the LLVM naming scheme, and replaced a use of Report with dieWithMessage for consistency (and to avoid a callback).
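For example, a program could provide its own defaults like this; the flag
names come from elsewhere in this log and the exact string is illustrative:
```
// Picked up by the runtime as a weak symbol, like __asan_default_options().
extern "C" const char *__scudo_default_options() {
  return "QuarantineSizeMb=1:DeallocationTypeMismatch=0";
}
```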
Reviewers: llvm-commits, kcc
Differential Revision: https://reviews.llvm.org/D23018
llvm-svn: 277536
Summary:
This patch is a refactoring of the way cmake 'targets' are grouped.
It won't affect non-UI cmake-generators.
Clang/LLVM are using a structured way to group targets, which eases
navigation through the Visual Studio UI. The Compiler-RT projects
differ from the way Clang/LLVM are grouping targets.
This patch doesn't contain behavior changes.
Reviewers: kubabrecka, rnk
Subscribers: wang0109, llvm-commits, kubabrecka, chrisha
Differential Revision: http://reviews.llvm.org/D21952
llvm-svn: 275111
Summary:
This is an initial implementation of a Hardened Allocator based on Sanitizer Common's CombinedAllocator.
It aims at mitigating heap based vulnerabilities by adding several features to the base allocator, while staying relatively fast.
The following were implemented:
- additional consistency checks on the allocation function parameters and on the heap chunks;
- use of a checksum-protected chunk header, to detect corruption;
- randomness to the allocator base;
- delayed freelist (quarantine), to mitigate use after free and overall determinism.
Additional mitigations are in the works.
Reviewers: eugenis, aizatsky, pcc, krasin, vitalybuka, glider, dvyukov, kcc
Subscribers: kubabrecka, filcab, llvm-commits
Differential Revision: http://reviews.llvm.org/D20084
llvm-svn: 271968