Currently we use very common names for macros like ACQUIRE/RELEASE,
which cause conflicts with system headers.
Prefix all macros with SANITIZER_ to avoid conflicts.
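For illustration, a simplified sketch of the renaming (the attribute spelling
below assumes clang's thread-safety annotations; it is not the exact patch):

  // Before: generic names that can collide with system headers.
  //   #define ACQUIRE(...) __attribute__((acquire_capability(__VA_ARGS__)))
  //   #define RELEASE(...) __attribute__((release_capability(__VA_ARGS__)))
  // After: prefixed names, safe to use anywhere in sanitizer code.
  #define SANITIZER_ACQUIRE(...) __attribute__((acquire_capability(__VA_ARGS__)))
  #define SANITIZER_RELEASE(...) __attribute__((release_capability(__VA_ARGS__)))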
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D116652
The main use case for this change is HWASan aliasing mode, which premaps
the alias space adjacent to the dynamic shadow. With this change, the
primary allocator can allocate from the alias space instead of a
separate region.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98293
Asan does not use metadata with primary allocators.
It should match AP64::kMetadataSize, which is 0.
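A minimal sketch of what that means for the 32-bit params (field names other
than kMetadataSize are illustrative, not the exact ASan AP32 definition):

  struct ExampleAP32 {
    static const unsigned long long kSpaceBeg = 0;
    static const unsigned long long kSpaceSize = 1ULL << 32;
    // No per-chunk metadata for the primary, matching AP64::kMetadataSize.
    static const unsigned long long kMetadataSize = 0;
  };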
Depends on D86917.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D86919
This patch fixes https://github.com/google/sanitizers/issues/703
On a Graviton A1 aarch64 machine with a 48-bit VMA, the time spent in LSan
and ASan dropped from 2.5s to 0.01s when running:
clang -fsanitize=leak compiler-rt/test/lsan/TestCases/sanity_check_pure_c.c && time ./a.out
clang -fsanitize=address compiler-rt/test/lsan/TestCases/sanity_check_pure_c.c && time ./a.out
With this patch, LSan and ASan create both the 32-bit and 64-bit allocators and
select between them at run time via a global variable, initialized during
allocator init, that records whether the 64-bit allocator can be used in the
available virtual address space.
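A simplified sketch of the dispatch (names such as DetectVmaSize, the wrapper
structs, and the VMA threshold are hypothetical, not the actual compiler-rt
code):

  #include <cstddef>

  struct Allocator32 { void *Allocate(std::size_t size); };  // 32-bit primary wrapper
  struct Allocator64 { void *Allocate(std::size_t size); };  // 64-bit primary wrapper

  unsigned DetectVmaSize();        // assumed helper probing the usable VA bits
  static bool use_allocator64;     // decided once, at allocator init time
  static Allocator32 a32;
  static Allocator64 a64;

  void InitAllocator() {
    // Use the 64-bit primary only if its region layout fits the address space
    // (threshold illustrative).
    use_allocator64 = DetectVmaSize() >= 47;
  }

  void *Allocate(std::size_t size) {
    return use_allocator64 ? a64.Allocate(size) : a32.Allocate(size);
  }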
Differential Revision: https://reviews.llvm.org/D60243
llvm-svn: 369441
Summary: If bots work, we can replace #ifs with template specialization on TwoLevelByteMapSize1.
There are known users of TwoLevelByteMap with TwoLevelByteMapSize1 equal to 8,
and users of FlatByteMap with TwoLevelByteMapSize1 equal to 2.
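A sketch of the idea, assuming the choice hinges on the computed
TwoLevelByteMapSize1 (the threshold and the stand-in map classes are
illustrative, not the real ones):

  #include <cstddef>

  template <std::size_t kSize> struct FlatByteMap { /* ... */ };
  template <std::size_t kSize1, std::size_t kSize2> struct TwoLevelByteMap { /* ... */ };

  // Primary template: large maps go two-level.
  template <std::size_t kSize1, bool kSmall = (kSize1 <= 128)>  // threshold illustrative
  struct ByteMapSelector {
    typedef TwoLevelByteMap<kSize1, 4096> type;
  };

  // Specialization for small maps: stay flat, no #if on the platform needed.
  template <std::size_t kSize1>
  struct ByteMapSelector<kSize1, true> {
    typedef FlatByteMap<kSize1> type;
  };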
Reviewers: eugenis
Subscribers: kubamracek, #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D61200
llvm-svn: 359364
Fails on bots with:
/Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-expensive/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_allocator_primary32.h:69:3: error: static_assert failed due to requirement 'TwoLevelByteMapSize1 > 128' "TwoLevelByteMap should be used"
static_assert(TwoLevelByteMapSize1 > 128, "TwoLevelByteMap should be used");
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-expensive/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_allocator_combined.h:29:34: note: in instantiation of template class '__sanitizer::SizeClassAllocator32<__sanitizer::AP32>' requested here
typename PrimaryAllocator::AddressSpaceView>::value,
^
http://green.lab.llvm.org/green/job/clang-stage1-cmake-RA-expensive/13960/console
llvm-svn: 359352
Summary: If bots work, we can replace #ifs with template specialization on TwoLevelByteMapSize1.
Reviewers: eugenis
Subscribers: kubamracek, #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D61200
llvm-svn: 359333
Originally this code was added for 64-bit platforms and was never changed.
Add static_assert to make sure that we have correct map on all platforms.
llvm-svn: 359269
Summary:
This patch contains the bits required to make the common 32-bit allocator work on SPARC64/Linux.
Patch by Eric Botcazou.
Reviewers: #sanitizers, vitalybuka
Reviewed By: #sanitizers, vitalybuka
Subscribers: krytarowski, vitalybuka, ro, jyknight, kubamracek, fedor.sergeev, jdoerfert, llvm-commits, #sanitizers
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D58432
llvm-svn: 355978
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
It should be at the class scope and not inside the `Init(...)` function
because we want to error out as soon as the wrong type is constructed.
At function scope, the `static_assert` is only checked if the function is
instantiated, i.e. if it might be called.
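A minimal illustration of the difference (hypothetical types, not the r349138
code):

  #include <type_traits>

  template <class Params>
  class AllocatorSketch {
    // Class scope: fires as soon as AllocatorSketch<BadParams> is instantiated.
    static_assert(std::is_trivial<Params>::value, "Params must be trivial");

   public:
    void Init() {
      // Function scope: only fires if Init() itself gets instantiated,
      // i.e. only if something might actually call it.
      static_assert(std::is_trivial<Params>::value, "Params must be trivial");
    }
  };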
This is a follow up to r349138.
rdar://problem/45284065
llvm-svn: 349959
Summary:
This is a follow up patch to r346956 for the `SizeClassAllocator32`
allocator.
This patch makes `AddressSpaceView` a template parameter both to the
`ByteMap` implementations (but makes `LocalAddressSpaceView` the
default), some `AP32` implementations and is used in `SizeClassAllocator32`.
The actual changes to `ByteMap` implementations and
`SizeClassAllocator32` are very simple. However the patch is large
because it requires changing all the `AP32` definitions, and users of
those definitions.
For ASan and LSan we make `AP32` and `ByteMap` templated types that take
a single `AddressSpaceView` argument. This has been done because we will
instantiate the allocator with a type that isn't `LocalAddressSpaceView`
in future patches. For the allocators used in the other sanitizers
(i.e. HWASan, MSan, Scudo, and TSan), use of `LocalAddressSpaceView` is
hard-coded because we do not intend to instantiate those allocators with
any other type.
In the cases where untemplated types have become templated on a single
`AddressSpaceView` parameter (e.g. `PrimaryAllocator`), their names have
been changed to have an `ASVT` suffix (Address Space View Type) to
indicate they are templated. The only exceptions to this are the `AP32`
types, to keep those type names as short as possible.
In order to check that the template is instantiated correctly, a
`static_assert(...)` has been added that checks that the
`AddressSpaceView` type used by `Params::ByteMap::AddressSpaceView` matches
`Params::AddressSpaceView`. This uses the new `sanitizer_type_traits.h`
header.
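The check is essentially of this shape (simplified; the real code uses
__sanitizer::is_same from the new sanitizer_type_traits.h rather than
<type_traits>):

  #include <type_traits>

  template <class Params>
  class SizeClassAllocator32Sketch {
    static_assert(
        std::is_same<typename Params::AddressSpaceView,
                     typename Params::ByteMap::AddressSpaceView>::value,
        "AddressSpaceView type mismatch between Params and its ByteMap");
  };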
rdar://problem/45284065
Reviewers: kcc, dvyukov, vitalybuka, cryptoad, eugenis, kubamracek, george.karpenkov
Subscribers: mgorny, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D54904
llvm-svn: 349138
Summary:
For the 32-bit TransferBatch:
- `SetFromArray` callers have a bounded `count`, so relax the `CHECK` to `DCHECK`;
- same for `Add`;
- mark `CopyToArray` as `const`;
For the 32-bit Primary:
- `{Dea,A}llocateBatch` are only called from places that check `class_id`, so
relax the `CHECK` to `DCHECK`;
- same for `AllocateRegion`;
- remove `GetRegionBeginBySizeClass` that is not used;
- use a local variable for the random shuffle state, so that the compiler can
use a register instead of reading and writing to the `SizeClassInfo` at every
iteration;
For the 32-bit local cache:
- pass the count to drain instead of doing a `Min` every time, which is at times
superfluous.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D46657
llvm-svn: 332478
Summary:
The `TestOnlyInit` function of `{Flat,TwoLevel}ByteMap` seems to be a misnomer
since the function is used outside of tests as well, namely in
`SizeClassAllocator32::Init`. Rename it to `Init` and update the callers.
Reviewers: alekseyshl, vitalybuka
Reviewed By: vitalybuka
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D46408
llvm-svn: 331662
Summary:
In the same spirit of SanitizerToolName, allow the Primary & Secondary
allocators to have names that can be set by the tools via PrimaryAllocatorName
and SecondaryAllocatorName.
Additionally, set a non-default name for Scudo.
Reviewers: alekseyshl, vitalybuka
Reviewed By: alekseyshl, vitalybuka
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D45600
llvm-svn: 330055
Summary:
This is a new version of D44261, which broke some builds with older gcc, as
they can't align on a constexpr, but rather require an integer (see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56859) among others.
We introduce `SANITIZER_CACHE_LINE_SIZE` in `sanitizer_platform.h` to be
used in `ALIGNED` attributes instead of using `kCacheLineSize` directly.
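The gist of the workaround, as a sketch (EXAMPLE_CACHE_LINE_SIZE stands in for
SANITIZER_CACHE_LINE_SIZE, and the struct is purely illustrative):

  // Older gcc rejects __attribute__((aligned(x))) when x is a constexpr
  // variable such as kCacheLineSize, but accepts an integer literal coming
  // from a preprocessor macro.
  #define EXAMPLE_CACHE_LINE_SIZE 64

  struct __attribute__((aligned(EXAMPLE_CACHE_LINE_SIZE))) PaddedFlag {
    bool value;
    char padding[EXAMPLE_CACHE_LINE_SIZE - sizeof(bool)];
  };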
Reviewers: alekseyshl, thakis
Reviewed By: alekseyshl
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D44326
llvm-svn: 327297
Summary:
Both `SizeClassInfo` structures for the 32-bit primary & `RegionInfo`
structures for the 64-bit primary can be used by different threads, and as such
they should be aligned & padded to the cacheline size to avoid false sharing.
The former was padded but the array was not aligned; the latter was not padded,
but we lucked out, as the size of the structure was 192 bytes and it ended up
aligned thanks to the properties of `mmap`.
I plan on adding a couple of fields to the `RegionInfo`, and some highly
threaded tests showed that without proper padding & alignment, performance
took a hit - a hit that goes away with proper padding.
This patch makes sure that we are properly padded & aligned for both. I used
a template to avoid padding if the size is already a multiple of the cacheline
size. There might be a better way to do this, I am open to suggestions.
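A sketch of the template trick mentioned above (not the exact compiler-rt
helper; the cache line size and names are illustrative): padding is only added
when the wrapped type does not already fill whole cache lines.

  #include <cstddef>

  constexpr std::size_t kCacheLine = 64;  // illustrative cache line size

  template <class T,
            std::size_t kPad = (kCacheLine - sizeof(T) % kCacheLine) % kCacheLine>
  struct alignas(kCacheLine) CacheLinePadded : T {
    char padding[kPad];  // only present when sizeof(T) % kCacheLine != 0
  };

  // No padding member when T is already a multiple of the cache line size.
  template <class T>
  struct alignas(kCacheLine) CacheLinePadded<T, 0> : T {};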
Reviewers: alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D44261
llvm-svn: 327145
Summary:
- Reland rL324263, this time allowing for a compile-time decision as to whether
or not use the 32-bit division. A single test is using a class map covering
a maximum size greater than 4GB, this can be checked via the template
parameters, and allows SizeClassAllocator64PopulateFreeListOOM to pass;
- `MaxCachedHint` is always called on a class id for which we have already
computed the size, but we still recompute `Size(class_id)`. Change the
prototype of the function to work on sizes instead of class ids. This also
allows us to get rid of the `kBatchClassID` special case. Update the callers
accordingly;
- `InitCache` and `Drain` will start iterating at index 1: index 0's contents are
unused and can safely be left at 0. Plus we do not pay the cost of going
through an `UNLIKELY` in `MaxCachedHint`, and touching memory that is
otherwise not used;
- `const` some variables in the areas modified;
- Remove a spurious extra line at the end of a file.
Reviewers: alekseyshl, tl0gic, dberris
Reviewed By: alekseyshl, dberris
Subscribers: dberris, kubamracek, delcypher, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D43088
llvm-svn: 324906
Summary:
The 64-bit primary has had random shuffling of chunks for a while, this
implements it for the 32-bit primary. Scudo is currently the only user of
`kRandomShuffleChunks`.
This change consists of a few modifications:
- move the random shuffling functions out of the 64-bit primary to
`sanitizer_common.h`. Alternatively I could move them to
`sanitizer_allocator.h` as they are only used in the allocator, I don't feel
strongly either way;
- small change in the 64-bit primary to make the `rand_state` initialization
`UNLIKELY`;
- addition of a `rand_state` in the 32-bit primary's `SizeClassInfo` and
shuffling of chunks when populating the free list.
- enabling the `random_shuffle.cpp` test on platforms using the 32-bit primary
for Scudo.
Some comments on why the shuffling is done that way. Initially I just
implemented a `Shuffle` function in the `TransferBatch` which was simpler but I
came to realize this wasn't good enough: for chunks of 10000 bytes for example,
with a `CompactSizeClassMap`, a batch holds only 1 chunk, meaning shuffling the
batch has no effect, while a region is usually 1MB, eg: 104 chunks of that size.
So I decided to "stage" the newly gathered chunks in a temporary array that
would be shuffled prior to placing the chunks in batches.
The result is looping twice through n_chunks even if shuffling is not enabled,
but I didn't notice any significant performance impact.
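A sketch of the staging approach (RandomShuffle below is a self-contained
Fisher-Yates stand-in for the helper moved into sanitizer_common.h; names are
illustrative):

  #include <cstddef>
  #include <cstdint>

  // rand_state should be seeded to a nonzero value.
  static void RandomShuffle(std::uintptr_t *a, std::size_t n,
                            std::uint32_t *rand_state) {
    for (std::size_t i = n; i > 1; i--) {
      // xorshift32 step for the next pseudo-random value.
      std::uint32_t x = *rand_state;
      x ^= x << 13; x ^= x >> 17; x ^= x << 5;
      *rand_state = x;
      std::size_t j = x % i;
      std::uintptr_t tmp = a[i - 1];
      a[i - 1] = a[j];
      a[j] = tmp;
    }
  }

  void PopulateShuffled(std::uintptr_t *chunks, std::size_t n_chunks,
                        std::uint32_t *rand_state, bool shuffle) {
    // Stage all freshly carved chunks in one array, shuffle once across the
    // whole region, then split them into TransferBatches as before.
    if (shuffle)
      RandomShuffle(chunks, n_chunks, rand_state);
    // ... place chunks[0..n_chunks) into batches ...
  }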
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D39244
llvm-svn: 316596
Summary:
Purging allocator quarantine and returning memory to OS might be desired
between fuzzer iterations since, most likely, the quarantine is not
going to catch bugs in the code under fuzz, but reducing RSS might
significantly prolong the fuzzing session.
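Typical use between fuzzer iterations, assuming the
__sanitizer_purge_allocator() entry point from
<sanitizer/allocator_interface.h> (in practice one would likely rate-limit the
call rather than purge on every input):

  #include <sanitizer/allocator_interface.h>
  #include <stddef.h>
  #include <stdint.h>

  extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    // ... feed data/size to the code under fuzz ...
    (void)data;
    (void)size;
    // Drain the quarantine and return unused pages to the OS to keep RSS low.
    __sanitizer_purge_allocator();
    return 0;
  }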
Reviewers: cryptoad
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D39153
llvm-svn: 316347
Summary:
Currently `TransferBatch` are located within the same memory regions as
"regular" chunks. This is not ideal for security: they make for an interesting
target to overwrite, and are not protected by the frontend (namely, Scudo).
To solve this, we re-introduce `kUseSeparateSizeClassForBatch` for the 32-bit
Primary allowing for `TransferBatch` to end up in their own memory region.
Currently only Scudo would use this new feature, the default behavior remains
unchanged. The separate `kBatchClassID` was used for a brief period of time
previously but removed when the 64-bit ended up using the "free array".
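A simplified sketch of the class-id selection (the real logic also handles
batches that fit inside the chunk itself; names other than kBatchClassID are
illustrative):

  #include <cstdint>

  template <class SizeClassMap, bool kUseSeparateSizeClassForBatch>
  std::uintptr_t ClassIdForBatch(std::uintptr_t chunk_class_id) {
    if (kUseSeparateSizeClassForBatch)
      return SizeClassMap::kBatchClassID;  // batches get their own region (Scudo)
    return chunk_class_id;                 // default: batch shares the chunk's class
  }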
Reviewers: alekseyshl, kcc, eugenis
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D37082
llvm-svn: 311891
Summary:
In `sanitizer_allocator_primary32.h`:
- rounding up in `MapWithCallback` is not needed, as `MmapOrDie` does it. Note
that the 64-bit counterpart doesn't round up either, which keeps the behavior
consistent;
- since `IsAligned` exists, use it in `AllocateRegion`;
- in `PopulateFreeList`:
- checking `b->Count()` to be greater than 0 when `b->Count() == max_count` is
redundant when done more than once. Just check once, outside the loop, that
`max_count` is greater than 0; the compiler (at least on ARM) didn't optimize it;
- mark the batch creation failure as `UNLIKELY`;
In `sanitizer_allocator_primary64.h`:
- in `MapWithCallback`, mark the failure condition as `UNLIKELY`;
In `sanitizer_posix.h`:
- mark a bunch of Mmap related failure conditions as `UNLIKELY`;
- in `MmapAlignedOrDieOnFatalError`, we have `IsAligned`, so use it; rearrange
the conditions, as one test was redundant;
- in `MmapFixedImpl`, 30 chars was not large enough to hold the message and a
full 64-bit address (or at least a 48-bit usermode address), increase to 40.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: aemerson, kubamracek, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D34840
llvm-svn: 306834
Summary:
Make SizeClassAllocator32 return nullptr when it encounters OOM, which
allows the entire sanitizer's allocator to follow allocator_may_return_null=1
policy, even for small allocations (LargeMmapAllocator is already fixed
by D34243).
Will add a test for OOM in primary allocator later, when
SizeClassAllocator64 can gracefully handle OOM too.
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34433
llvm-svn: 305972
Summary:
This broke thread_local_quarantine_pthread_join.cc on some architectures, due
to the overhead of the stashed regions. Reverting while figuring out the best
way to deal with it.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D34213
llvm-svn: 305404
Summary:
The reasoning behind this change is explained in D33454, which unfortunately
broke the Windows version (due to the platform not supporting partial unmapping
of a memory region).
This new approach changes `MmapAlignedOrDie` to allow for the specification of
a `padding_chunk`. If non-null, and the initial allocation is aligned, this
padding chunk will hold the address of the extra memory (of `alignment` bytes).
This allows `AllocateRegion` to get 2 regions if the memory is aligned
properly, and thus help reduce fragmentation (and saves on unmapping
operations). As with the initial D33454, we use a stash in the 32-bit Primary
to hold those extra regions and return them on the fast-path.
The Windows version of `MmapAlignedOrDie` will always return a 0
`padding_chunk` if one was requested.
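A caller-side sketch under the assumed shape of the new interface (StashRegion
and the exact MmapAlignedOrDie signature below are illustrative, not the real
declarations):

  #include <cstdint>

  using uptr = std::uintptr_t;

  // Assumed shape: if padding_chunk is non-null and the initial mapping is
  // already aligned, the extra `alignment` bytes are reported back here.
  uptr MmapAlignedOrDie(uptr size, uptr alignment, const char *mem_type,
                        uptr *padding_chunk);
  void StashRegion(uptr region);  // assumed helper in the 32-bit primary

  uptr AllocateRegionSketch(uptr region_size) {
    uptr padding_chunk = 0;
    uptr region = MmapAlignedOrDie(region_size, region_size,
                                   "SizeClassAllocator32", &padding_chunk);
    if (padding_chunk)
      StashRegion(padding_chunk);  // a second region for free, no extra mmap
    return region;
  }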
Reviewers: alekseyshl, dvyukov, kcc
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D34152
llvm-svn: 305391
Summary:
Apparently Windows's `UnmapOrDie` doesn't support partial unmapping, which
makes the new region allocation technique unusable on Windows.
Reviewers: alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33554
llvm-svn: 303883
Summary:
Currently, AllocateRegion has a tendency to fragment memory: it allocates
`2*kRegionSize`, and if the memory is aligned, will unmap `kRegionSize` bytes,
thus creating a hole, which can't itself be reused for another region. This
is exacerbated by the fact that if 2 regions get allocated one after another
without any `mmap` in between, the second will be aligned due to mappings
generally being contiguous.
An idea, suggested by @alekseyshl, to prevent such a behavior is to have a
stash of regions: if the `2*kRegionSize` allocation is properly aligned, split
it in two, and stash the second part to be returned next time a region is
requested.
At this point, I thought about a couple of ways to implement this:
- either an `IntrusiveList` of region candidates, storing `next` at the
beginning of the region;
- a small array of region candidates living in the Primary.
While the second option is more constrained in terms of size, it offers several
advantages:
- security-wise, with the intrusive list, a pointer in a region candidate could
be overflowed into and abused when popping an element; the array avoids this;
- we do not dirty the first page of the region by storing something in it;
- unless several threads request regions simultaneously from different size
classes, the stash rarely goes above 1 entry.
I am not certain about the Windows impact of this change, as `sanitizer_win.cc`
has its own version of MmapAlignedOrDie; maybe someone could chime in on this.
MmapAlignedOrDie is effectively unused after this change and could be removed
at a later point. I didn't notice any sizeable performance gain, even though we
are saving a few `mmap`/`munmap` syscalls.
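A sketch of the stash from the second option (sizes, names, and the mutex type
are illustrative; the real code lives inside the 32-bit primary and uses the
sanitizer's own locking primitives):

  #include <cstdint>
  #include <mutex>

  using uptr = std::uintptr_t;

  class RegionStash {
   public:
    // Returns a previously stashed half-region, if any.
    bool Pop(uptr *region) {
      std::lock_guard<std::mutex> l(mu_);
      if (size_ == 0) return false;
      *region = stash_[--size_];
      return true;
    }
    // Stores the aligned second half of a 2*kRegionSize allocation.
    bool Push(uptr region) {
      std::lock_guard<std::mutex> l(mu_);
      if (size_ == kMaxStashedRegions) return false;
      stash_[size_++] = region;
      return true;
    }
   private:
    static const uptr kMaxStashedRegions = 8;  // illustrative capacity
    std::mutex mu_;
    uptr stash_[kMaxStashedRegions];
    uptr size_ = 0;
  };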
Reviewers: alekseyshl, kcc, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33454
llvm-svn: 303879
Summary:
With rL279771, SizeClassAllocator64 was changed to accept only one template
parameter instead of 5, for the following reasons: "First, this will make the mangled
names shorter. Second, this will make adding more parameters simpler". This
patch mirrors that work for SizeClassAllocator32.
This is in preparation for introducing the randomization of chunks in the
32-bit SizeClassAllocator in a later patch.
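The shape of the change, sketched (the field set is illustrative, not any
tool's actual AP32):

  // Before (roughly): SizeClassAllocator32<kSpaceBeg, kSpaceSize, kMetadataSize,
  //                                        SizeClassMap, kRegionSizeLog, ByteMap, ...>
  // After: a single params struct carries everything, so adding a parameter
  // no longer changes every instantiation site.
  struct AP32Example {
    static const unsigned long long kSpaceBeg = 0;
    static const unsigned long long kSpaceSize = 1ULL << 32;
    static const unsigned long long kMetadataSize = 16;
    static const unsigned long long kRegionSizeLog = 20;
    // typedef ... SizeClassMap;  // plus byte map, callbacks, flags
    static const unsigned long long kFlags = 0;
  };
  // template <class Params> class SizeClassAllocator32 { /* ... */ };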
Reviewers: kcc, alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33141
llvm-svn: 303071
Summary:
In order to avoid starting a separate thread to return unused memory to
the system (the thread interferes with process startup on Android,
Zygote waits for all threads to exit before fork, but this thread never
exits), try to return it right after free.
Reviewers: eugenis
Subscribers: cryptoad, filcab, danalbert, kubabrecka, llvm-commits
Patch by Aleksey Shlyapnikov.
Differential Revision: https://reviews.llvm.org/D27003
llvm-svn: 288091