to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
Summary:
This is a follow-up patch to r349138.
This patch makes `AddressSpaceView` a type declaration in the
allocator parameters used by `SizeClassAllocator64`. For ASan, LSan, and
the unit tests, the `AP64` declarations have been made templated so that
`AddressSpaceView` can be changed at compile time. For the other
sanitizers we just hard-code `LocalAddressSpaceView` because we have no
plans to use these allocators in an out-of-process manner.
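For illustration, a minimal sketch of what such a templated allocator-parameter
struct could look like; the names `AP64` and `LocalAddressSpaceView` come from
the patch, everything else (field names, values) is assumed:

  // Sketch only: an allocator-parameter struct templated on the address
  // space view, so the view can be swapped at compile time.
  template <typename AddressSpaceViewTy>
  struct AP64 {
    static const unsigned long kSpaceBeg = 0;          // hypothetical values
    static const unsigned long kSpaceSize = 1UL << 40;
    using AddressSpaceView = AddressSpaceViewTy;        // the new type declaration
  };

  // In-process sanitizers simply hard-code the local view:
  struct LocalAddressSpaceView {};
  using DefaultAP64 = AP64<LocalAddressSpaceView>;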
rdar://problem/45284065
Reviewers: kcc, dvyukov, vitalybuka, cryptoad, eugenis, kubamracek, george.karpenkov
Subscribers: #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D55764
llvm-svn: 349954
Summary:
This is a follow-up patch to r346956 for the `SizeClassAllocator32`
allocator.
This patch makes `AddressSpaceView` a template parameter of both the
`ByteMap` implementations (with `LocalAddressSpaceView` as the default)
and some `AP32` implementations, and uses it in `SizeClassAllocator32`.
The actual changes to `ByteMap` implementations and
`SizeClassAllocator32` are very simple. However, the patch is large
because it requires changing all the `AP32` definitions and users of
those definitions.
For ASan and LSan we make `AP32` and `ByteMap` templated types that take
a single `AddressSpaceView` argument. This has been done because we will
instantiate the allocator with a type that isn't `LocalAddressSpaceView`
in future patches. For the allocators used in the other sanitizers
(i.e. HWASan, MSan, Scudo, and TSan), use of `LocalAddressSpaceView` is
hard-coded because we do not intend to instantiate the allocators with
any other type.
In the cases where untemplated types have become templated on a single
`AddressSpaceView` parameter (e.g. `PrimaryAllocator`), their names have
been changed to have an `ASVT` suffix (Address Space View Type) to
indicate they are templated. The only exceptions to this are the `AP32`
types, due to the desire to keep the type names as short as possible.
In order to check that the template is instantiated in the correct way, a
`static_assert(...)` has been added that checks that
`Params::ByteMap::AddressSpaceView` matches `Params::AddressSpaceView`.
This uses the new `sanitizer_type_traits.h` header.
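A hedged sketch of that kind of consistency check; `is_same` stands in for what
a minimal `sanitizer_type_traits.h` could provide, and the surrounding class
name is illustrative:

  // Minimal is_same, standing in for the type-traits helper.
  template <typename T, typename U> struct is_same       { static const bool value = false; };
  template <typename T>             struct is_same<T, T> { static const bool value = true;  };

  template <typename Params>
  class SizeClassAllocator32Sketch {
    // Reject inconsistent instantiations at compile time: the allocator's
    // view must match the view baked into its ByteMap.
    static_assert(is_same<typename Params::AddressSpaceView,
                          typename Params::ByteMap::AddressSpaceView>::value,
                  "AddressSpaceView type mismatch");
  };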
rdar://problem/45284065
Reviewers: kcc, dvyukov, vitalybuka, cryptoad, eugenis, kubamracek, george.karpenkov
Subscribers: mgorny, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D54904
llvm-svn: 349138
Summary:
There is currently too much redundancy in the class/variable/* names in Scudo:
- we are in the namespace `__scudo`, so there is no point in having something
named `ScudoX` to end up with a final name of `__scudo::ScudoX`;
- there are a lot of types/* that have `Allocator` in the name; given that
Scudo is an allocator, I figure this doubles up as well.
So change a bunch of the Scudo names to make them shorter, less redundant, and
overall simpler. They should still be pretty self-explanatory (or at least they
look so to me).
The TSD part will be done in another CL (e.g. `__scudo::ScudoTSD`).
Reviewers: alekseyshl, eugenis
Reviewed By: alekseyshl
Subscribers: delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D49505
llvm-svn: 337557
Summary:
This CL adds support for aligned new/delete operators (C++17). Currently we
do not support alignment inconsistency detection on deallocation, as this
requires a header change, but the APIs are introduced and are functional.
Add a smoke test for the aligned version of the operators.
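For reference, these are the standard C++17 aligned-allocation operator
signatures the CL adds support for (standard declarations, not Scudo's actual
definitions):

  #include <cstddef>
  #include <new>

  // C++17 aligned new/delete: an allocator interposing on new/delete has to
  // provide these in addition to the unaligned forms.
  void *operator new(std::size_t size, std::align_val_t alignment);
  void *operator new[](std::size_t size, std::align_val_t alignment);
  void operator delete(void *ptr, std::align_val_t alignment) noexcept;
  void operator delete[](void *ptr, std::align_val_t alignment) noexcept;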
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D48031
llvm-svn: 334505
Summary:
Instead of using `AlignedChunkHeaderSize`, introduce a `constexpr` function
`getHeaderSize` in the `Chunk` namespace. Switch `RoundUpTo` to a `constexpr`
as well (so we can use it in `constexpr` declarations). Mark a few variables
in the areas touched as `const`.
Overall this has no functional change, and is mostly to make things a bit more
consistent.
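A minimal sketch of such a `constexpr` pair, assuming a power-of-two boundary;
the constants and the `uptr` stand-in are placeholders, not the actual Scudo
values:

  typedef unsigned long uptr;  // stand-in for the sanitizer uptr type

  // constexpr so it can be used in other constexpr declarations.
  constexpr uptr RoundUpTo(uptr Size, uptr Boundary) {
    return (Size + Boundary - 1) & ~(Boundary - 1);
  }

  namespace Chunk {
  // Replaces a hypothetical AlignedChunkHeaderSize constant.
  constexpr uptr getHeaderSize() {
    return RoundUpTo(16 /* sizeof(PackedHeader), assumed */, 16 /* MinAlignment, assumed */);
  }
  }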
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D43772
llvm-svn: 326206
Summary:
It was deemed that the salt in the chunk header didn't improve security
significantly (and could actually decrease it). The initial idea was that the
same chunk would have different headers on different allocations, allowing for
less predictability. The issue is that gathering the same chunk header with
different salts can give information about the other "secrets" (cookie,
pointer), and that if an attacker leaks a header, they can reuse it for that
same chunk anyway since we don't enforce the salt value.
So we get rid of the salt in the header. This means we also get rid of the
thread-local Prng, and that we don't need a global Prng anymore either. This
makes everything faster.
We reuse those 8 bits to store the `ClassId` of a chunk now (0 for a
secondary-based allocation). This way, we get some additional speed gains:
- `ClassId` is computed outside of the locked block;
- `getActuallyAllocatedSize` doesn't need the `GetSizeClass` call;
- same for `deallocatePrimary`.
We add a sanity check at init for this new field (all sanity checks are moved
into their own function, as `init` was getting crowded).
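An illustrative layout only (the field names follow the description above, but
the bit widths are assumptions, not the actual Scudo header):

  #include <cstdint>

  // The 8 bits previously used for the salt now hold the size-class id,
  // with 0 meaning the chunk came from the secondary allocator.
  struct UnpackedHeaderSketch {
    std::uint64_t ClassId           : 8;   // was: Salt
    std::uint64_t AllocType         : 2;
    std::uint64_t State             : 2;
    std::uint64_t SizeOrUnusedBytes : 20;
    std::uint64_t Offset            : 16;
    std::uint64_t Checksum          : 16;
  };
  static_assert(sizeof(UnpackedHeaderSketch) == 8, "the header must stay 64-bit");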
Reviewers: alekseyshl, flowerhack
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D40796
llvm-svn: 319791
Summary:
The 32-bit allocator is now on par with the 64-bit one in terms of security
(chunk randomization is done, batch separation is done).
Unless there is an objection, the comment can go away.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D39303
llvm-svn: 316620
Summary:
Previous parts: D38139, D38183.
In this part of the refactor, we abstract the Linux vs Android TSD dissociation
in favor of an Exclusive vs Shared one, allowing for easier platform
introduction and configuration.
Most of this change consists of shuffling the files around to reflect the new
organization.
We introduce `scudo_platform.h`, where platform-specific definitions lie. This
involves the TSD model and the platform-specific allocator parameters. In an
upcoming CL, those will be configurable via defines, but we currently stick
with conservative defaults.
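A hypothetical illustration of how such a platform header could select the TSD
model; the macro name and the per-platform defaults here are assumptions, not
quoted from the patch:

  // In the spirit of scudo_platform.h: a define selects the TSD model,
  // with platforms overriding conservative defaults.
  #ifndef SCUDO_TSD_EXCLUSIVE
  # if defined(__ANDROID__)
  #  define SCUDO_TSD_EXCLUSIVE 0   // shared pool of TSDs
  # else
  #  define SCUDO_TSD_EXCLUSIVE 1   // one TSD per thread
  # endif
  #endif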
Reviewers: alekseyshl, dvyukov
Reviewed By: alekseyshl, dvyukov
Subscribers: srhines, llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D38244
llvm-svn: 314224
Summary:
Currently `TransferBatch`es are located within the same memory regions as
"regular" chunks. This is not ideal for security: they make for an interesting
target to overwrite, and are not protected by the frontend (namely, Scudo).
To solve this, we re-introduce `kUseSeparateSizeClassForBatch` for the 32-bit
Primary, allowing `TransferBatch`es to end up in their own memory region.
Currently only Scudo would use this new feature; the default behavior remains
unchanged. The separate `kBatchClassID` was used for a brief period of time
previously, but removed when the 64-bit Primary ended up using the "free array".
Reviewers: alekseyshl, kcc, eugenis
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D37082
llvm-svn: 311891
Summary:
This patch changes a few (small) things around for compatibility purposes for
the current Android & Fuchsia work:
- `realloc`'ing some memory that was not allocated with `malloc`, `calloc` or
`realloc`, while UB according to http://pubs.opengroup.org/onlinepubs/009695399/functions/realloc.html,
is more common than one would think. We now only check this if
`DeallocationTypeMismatch` is set; change the "mismatch" error
messages to be more homogeneous;
- some sketchily written but widely used libraries expect a call to `realloc`
to copy the usable size of the old chunk to the new one instead of the
requested size. We have to begrudgingly abide by this de-facto standard.
This doesn't seem to impact security either way, unless someone comes up with
something we didn't think about;
- the CRC32 intrinsics for 64-bit take a 64-bit first argument. This is
misleading as the upper 32 bits end up being ignored. This was also raising
`-Wconversion` errors. Change things to take a `u32` as the first argument
(see the sketch after this list).
This also means we were (and are) only using 32 bits of the Cookie - not a
big thing, but worth mentioning.
- Includes-wise: prefer `stddef.h` to `cstddef`, move `scudo_flags.h` where it
is actually needed.
- Add tests for the memalign-realloc case, and the realloc-usable-size one.
(Edited typos)
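A hedged sketch of feeding the hardware CRC32 a 32-bit seed so no upper bits
are silently dropped; the function name and the fallback are illustrative, and
the intrinsic used is the standard SSE 4.2 `_mm_crc32_u32`:

  #include <stdint.h>
  #if defined(__SSE4_2__)
  # include <nmmintrin.h>
  #endif

  // Combine a 32-bit cookie with a 64-bit value, 32 bits at a time.
  uint32_t computeChecksumSketch(uint32_t Seed, uint64_t Value) {
  #if defined(__SSE4_2__)
    uint32_t Crc = _mm_crc32_u32(Seed, static_cast<uint32_t>(Value));
    return _mm_crc32_u32(Crc, static_cast<uint32_t>(Value >> 32));
  #else
    // Placeholder fallback, not a real checksum.
    return Seed ^ static_cast<uint32_t>(Value) ^ static_cast<uint32_t>(Value >> 32);
  #endif
  }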
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D36754
llvm-svn: 311018
Summary:
We were not following the behaviors documented in the `man` pages for invalid
arguments to `memalign` and associated functions. Using `CHECK` for those was
a bit extreme, so we relax the behavior to return null pointers as expected
when this happens. Adapt the associated test.
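A minimal sketch of the relaxed behavior (the function name is hypothetical;
only the "return null instead of CHECK-failing" shape is taken from the
description above):

  #include <stddef.h>

  static bool isPowerOfTwo(size_t X) { return X != 0 && (X & (X - 1)) == 0; }

  void *scudoMemalignSketch(size_t Alignment, size_t Size) {
    if (!isPowerOfTwo(Alignment))
      return nullptr;  // previously a fatal CHECK failure
    // ... forward to the real aligned allocation path ...
    (void)Size;
    return nullptr;    // placeholder
  }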
I am also using this change to make a few more minor performance improvements:
- mark as `UNLIKELY` a bunch of unlikely conditions;
- the current `CHECK` in `__sanitizer::RoundUpTo` is redundant for us in *all*
calls. So I am introducing our own version without said `CHECK`.
- change our combined allocator `GetActuallyAllocatedSize`. We already know if
the pointer is from the Primary or Secondary, so the `PointerIsMine` check is
redundant as well, and costly for the 32-bit Primary. So we get the size by
directly using the available Primary functions.
Finally, change an `int` to `uptr` to avoid a warning/error when compiling on
Android.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D34782
llvm-svn: 306698
Summary:
With rL279771, SizeClassAllocator64 was changed to accept only one template
parameter instead of 5, for the following reasons: "First, this will make the mangled
names shorter. Second, this will make adding more parameters simpler". This
patch mirrors that work for SizeClassAllocator32.
This is in preparation for introducing the randomization of chunks in the
32-bit SizeClassAllocator in a later patch.
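An illustrative shape of that single-parameter style (the field and type names
here are stand-ins, not the actual `AP32` definition):

  // Stand-in types so the sketch is self-contained.
  struct DefaultSizeClassMap {};
  struct NoOpMapUnmapCallback {};

  // Folding what used to be several separate template arguments into one
  // parameter struct, mirroring what rL279771 did for the 64-bit allocator.
  struct AP32Sketch {
    static const unsigned long kSpaceBeg = 0;
    static const unsigned long kMetadataSize = 0;
    static const unsigned long kRegionSizeLog = 20;
    typedef DefaultSizeClassMap SizeClassMap;
    typedef NoOpMapUnmapCallback MapUnmapCallback;
  };

  template <class Params>
  class SizeClassAllocator32Sketch {
    typedef typename Params::SizeClassMap SizeClassMap;  // single template argument
  };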
Reviewers: kcc, alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33141
llvm-svn: 303071
Summary:
The reasoning behind this change is twofold:
- the current combined allocator (sanitizer_allocator_combined.h) implements
features that are not relevant for Scudo, making some code redundant, and
some restrictions not pertinent (alignments for example). This forced us to
do some weird things between the frontend and our secondary to make things
work;
- we have enough information to be able to know if a chunk will be serviced by
the Primary or Secondary, allowing us to avoid extraneous calls to functions
such as `PointerIsMine` or `CanAllocate`.
As a result, the new scudo-specific combined allocator is very straightforward,
and allows us to remove some now unnecessary code both in the frontend and the
secondary. Unused functions have been left in as unimplemented for now.
It turns out to also be a sizeable performance gain (3% faster in some Android
memory_replay benchmarks, even more on other platforms).
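A hedged sketch of the dispatch idea described above (names and signatures are
illustrative): because the frontend already knows whether a request is serviced
by the Primary or the Secondary, the combined allocator can branch once instead
of calling `PointerIsMine` or `CanAllocate`.

  template <class Primary, class Secondary>
  struct CombinedAllocatorSketch {
    Primary P;
    Secondary S;
    void *allocate(unsigned long Size, unsigned long Alignment, bool FromPrimary) {
      if (FromPrimary)
        return P.allocate(Size, Alignment);
      return S.allocate(Size, Alignment);
    }
    void deallocate(void *Ptr, bool FromPrimary) {
      if (FromPrimary)
        P.deallocate(Ptr);
      else
        S.deallocate(Ptr);
    }
  };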
Reviewers: alekseyshl, kcc, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33007
llvm-svn: 302830
Summary:
This change adds Android support to the allocator (but doesn't yet enable it in
the cmake config), and should be the last fragment of the rewritten change
D31947.
Android has more memory constraints than other platforms, so the idea of a
unique context per thread would not have worked. The alternative chosen is to
allocate a set of contexts based on the number of cores on the machine, and
share those contexts within the threads. Contexts can be dynamically reassigned
to threads to prevent contention, based on a scheme suggested by @dvyukov in
the initial review.
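A hedged sketch of that reassignment scheme, using plain pthread primitives for
illustration (the struct and function names are assumptions, not the patch's
actual code):

  #include <pthread.h>

  struct ScudoContextSketch {
    pthread_mutex_t Mutex;  // initialized with pthread_mutex_init elsewhere
    // the per-context allocator cache & quarantine cache would live here
  };

  ScudoContextSketch *getContextAndLock(ScudoContextSketch *Contexts,
                                        unsigned NumContexts,
                                        unsigned &AssignedIndex) {
    // Fast path: the context currently assigned to this thread is free.
    if (pthread_mutex_trylock(&Contexts[AssignedIndex].Mutex) == 0)
      return &Contexts[AssignedIndex];
    // Contention: look for any free context and reassign this thread to it.
    for (unsigned I = 0; I < NumContexts; I++) {
      if (pthread_mutex_trylock(&Contexts[I].Mutex) == 0) {
        AssignedIndex = I;
        return &Contexts[I];
      }
    }
    // Everything is busy: block on the assigned context.
    pthread_mutex_lock(&Contexts[AssignedIndex].Mutex);
    return &Contexts[AssignedIndex];
  }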
Additionally, given that Android doesn't support ELF TLS (only emutls for now),
we use the TSan TLS slot to make things faster: Scudo is mutually exclusive
with other sanitizers so this shouldn't cause any problem.
An additional change made here is replacing `thread_local` with `THREADLOCAL`
and using the initial-exec thread model in the non-Android version to prevent
extraneous weak definitions and checks on the relevant variables.
Reviewers: kcc, dvyukov, alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D32649
llvm-svn: 302300
Summary:
This change introduces scudo_tls.h & scudo_tls_linux.cpp, where we move the
thread-local variables used by the allocator, namely the cache, quarantine
cache & prng. `ScudoThreadContext` will hold those. This patch doesn't
introduce any new platform support yet; this will be the object of a later
patch. This also changes the PRNG so that the structure can be POD.
Reviewers: kcc, dvyukov, alekseyshl
Reviewed By: dvyukov, alekseyshl
Subscribers: llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D32440
llvm-svn: 301584
Summary:
GetActuallyAllocatedSize is actually expensive. In order to avoid calling this
function in the malloc/free fast path, we change the Scudo chunk header to
store the size of the chunk, if from the Primary, or the amount of unused
bytes if from the Secondary. This way, we only have to call the culprit
function for Secondary backed allocations (and still in realloc).
The performance gain on a single-threaded pure malloc/free benchmark exercising
the Primary allocator is above 5%.
Reviewers: alekseyshl, kcc, dvyukov
Reviewed By: dvyukov
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32299
llvm-svn: 300861
Summary:
This is part of D31947 that is being split into several smaller changes.
This one deals with all the minor changes, more specifically:
- Rename some variables and functions to make their purpose clearer;
- Reorder some code;
- Mark the hot, termination-incurring checks as `UNLIKELY`; if they happen, the
program will die anyway (see the sketch after this list);
- Add a `getScudoChunk` method;
- Add an `eraseHeader` method to ScudoChunk that will clear a header with 0s;
- Add a parameter to `allocate` to know if the allocated chunk should be filled
with zeros. This allows `calloc` to not have to call
`GetActuallyAllocatedSize`; more changes to get rid of this function on the
hot paths will follow;
- `reallocate` was missing a check to verify that the pointer is properly
aligned on `MinAlignment`;
- The `Stats` in the secondary have to be protected by a mutex as the `Add`
and `Sub` methods are actually not atomic;
- The software CRC32 function was moved to the header to allow for inlining.
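`UNLIKELY` in sanitizer code is typically a `__builtin_expect` wrapper; a small
sketch (not the exact Scudo macro or check) of why the error paths are marked:

  // Tell the compiler the condition is almost never true, keeping the
  // happy path hot; if it does fire, the program aborts anyway.
  #define UNLIKELY(x) __builtin_expect(!!(x), 0)

  void checkAlignmentSketch(unsigned long Alignment) {
    if (UNLIKELY(Alignment & (Alignment - 1))) {
      // report the error and die; no need to optimize this path
    }
  }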
Reviewers: dvyukov, alekseyshl, kcc
Reviewed By: dvyukov
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32242
llvm-svn: 300846
Summary:
In an effort to get rid of dependencies on external libraries, we are
replacing the atomic PackedHeader's use of std::atomic with Sanitizer's
atomic_uint64_t, which allows us to avoid -latomic.
Reviewers: kcc, phosek, alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28864
llvm-svn: 292630
Summary:
ARM & AArch64 runtime detection for hardware support of CRC32 has been added
via a check of the AT_HWCAP auxiliary vector.
Following Michal's suggestions in D28417, the CRC32 code has been further
changed and looks better now. When compiled with full RELRO (which is strongly
suggested to benefit from additional hardening), the weak symbol for
computeHardwareCRC32 is read-only and the assembly generated is fairly clean
and straightforward. As suggested, an additional optimization is to skip
the runtime check if SSE 4.2 has been enabled globally, as opposed to only
for scudo_crc32.cpp.
scudo_crc32.h has no purpose anymore and was removed.
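For illustration, a minimal AArch64 Linux version of such a runtime check; the
function name is hypothetical, and the HWCAP bit value is hard-coded (it is the
kernel-defined CRC32 bit, 1 << 7) to keep the sketch self-contained instead of
pulling in <asm/hwcap.h>:

  #include <sys/auxv.h>   // getauxval, AT_HWCAP

  bool hasHardwareCRC32() {
  #if defined(__aarch64__)
    const unsigned long HWCAP_CRC32_Bit = 1UL << 7;
    return (getauxval(AT_HWCAP) & HWCAP_CRC32_Bit) != 0;
  #else
    return false;
  #endif
  }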
Reviewers: alekseyshl, kcc, rengolin, mgorny, phosek
Reviewed By: rengolin, mgorny
Subscribers: aemerson, rengolin, llvm-commits
Differential Revision: https://reviews.llvm.org/D28574
llvm-svn: 292409
Summary:
With the recent changes to the Secondary, we use fewer bits for UnusedBytes,
which allows us in return to increase the bits used for Offset. That means
that we can use a Primary SizeClassMap allowing for a larger maximum size.
Reviewers: kcc, alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D27816
llvm-svn: 289838
Summary:
This update introduces i386 support for the Scudo Hardened Allocator, and
offers software alternatives for functions that used to require hardware
specific instruction sets. This should make porting to new architectures
easier.
Among the changes:
- The chunk header has been changed to accommodate the size limitations
encountered on 32-bit architectures. We now fit everything in 64 bits. This
was achieved by storing the amount of unused bytes in an allocation rather
than the size itself, as one can be deduced from the other with the help
of the GetActuallyAllocatedSize function. As it turns out, this header can
be used for both 64- and 32-bit, and as such we dropped the requirement for
the 128-bit compare and exchange instruction support (cmpxchg16b).
- Add 32-bit support for the checksum and the PRNG functions: if the SSE 4.2
instruction set is supported, use the 32-bit CRC32 instruction, and in the
XorShift128, use a 32-bit state instead of 64-bit (see the sketch after this list).
- Add software support for CRC32: if SSE 4.2 is not supported, fall back on a
software implementation.
- Modify tests that were not 32-bit compliant, and expand them to cover more
allocation and alignment sizes. The random shuffle test has been deactivated
for linux-i386 & linux-i686 as the 32-bit sanitizer allocator doesn't
currently randomize chunks.
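For reference, Marsaglia's xorshift128 with a 32-bit state, of the kind the
32-bit PRNG path describes; the seed values are illustrative, and a real
allocator would seed from an entropy source:

  #include <stdint.h>

  struct XorShift128Sketch {
    uint32_t X = 123456789, Y = 362436069, Z = 521288629, W = 88675123;
    uint32_t next() {
      uint32_t T = X ^ (X << 11);
      X = Y; Y = Z; Z = W;
      W = W ^ (W >> 19) ^ (T ^ (T >> 8));
      return W;
    }
  };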
Reviewers: alekseyshl, kcc
Subscribers: filcab, llvm-commits, tberghammer, danalbert, srhines, mgorny, modocache
Differential Revision: https://reviews.llvm.org/D26358
llvm-svn: 288255
Summary:
In order to avoid starting a separate thread to return unused memory to
the system (the thread interferes with process startup on Android: Zygote
waits for all threads to exit before forking, but this thread never exits),
try to return it right after free.
Reviewers: eugenis
Subscribers: cryptoad, filcab, danalbert, kubabrecka, llvm-commits
Patch by Aleksey Shlyapnikov.
Differential Revision: https://reviews.llvm.org/D27003
llvm-svn: 288091
Summary:
In order to support 32-bit platforms, we have to make some adjustments in
multiple locations, one of them being the Scudo chunk header. For it to fit on
64 bits (as a reminder, on x64 it's 128 bits), I had to crunch the space taken
by some of the fields. In order to keep the offset field small, the secondary
allocator was changed to accommodate aligned allocations for larger alignments,
hence making the offset constant for chunks serviced by it.
The resulting header candidate has been added, and further modifications to
allow 32-bit support will follow.
Another notable change is the addition of MaybeStartBackgroudThread() to allow
release of the memory to the OS.
Reviewers: kcc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25688
llvm-svn: 285209
Summary:
This is an initial implementation of a Hardened Allocator based on Sanitizer Common's CombinedAllocator.
It aims at mitigating heap-based vulnerabilities by adding several features to the base allocator, while staying relatively fast.
The following were implemented:
- additional consistency checks on the allocation function parameters and on the heap chunks;
- use of checksum protected chunk header, to detect corruption;
- randomness to the allocator base;
- delayed freelist (quarantine), to mitigate use-after-free and overall determinism.
Additional mitigations are in the works.
Reviewers: eugenis, aizatsky, pcc, krasin, vitalybuka, glider, dvyukov, kcc
Subscribers: kubabrecka, filcab, llvm-commits
Differential Revision: http://reviews.llvm.org/D20084
llvm-svn: 271968