Summary:
Purging the allocator quarantine and returning memory to the OS may be
desirable between fuzzer iterations: the quarantine is unlikely to catch
bugs in the code being fuzzed, while reducing RSS may significantly
extend the fuzzing session.
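A minimal sketch of how a libFuzzer target might do this between iterations,
using the `__sanitizer_purge_allocator` entry point this change introduces
(declared in `<sanitizer/allocator_interface.h>`):

```
#include <sanitizer/allocator_interface.h>
#include <stddef.h>
#include <stdint.h>

// Fuzz target that drains the quarantine and returns unused memory to
// the OS after every iteration, trading some speed for a lower RSS.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  // ... exercise the code under test with (data, size) ...
  __sanitizer_purge_allocator();
  return 0;
}
```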
Reviewers: cryptoad
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D39153
llvm-svn: 316347
Summary:
Currently, `TransferBatch`es are located within the same memory regions as
"regular" chunks. This is not ideal for security: they make for an
interesting target to overwrite, and they are not protected by the frontend
(namely, Scudo).
To solve this, we re-introduce `kUseSeparateSizeClassForBatch` for the
32-bit Primary, allowing `TransferBatch`es to end up in their own memory
region. Currently only Scudo uses this new feature; the default behavior
remains unchanged. The separate `kBatchClassID` was used briefly in the
past, but was removed when the 64-bit Primary switched to the "free array".
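A hypothetical sketch of the dispatch this enables (identifiers are
simplified; the real logic lives in `sanitizer_allocator_primary32.h`):

```
typedef unsigned long uptr;  // sanitizer-style integer type

static const bool kUseSeparateSizeClassForBatch = true;  // Scudo-like config
static const uptr kBatchClassID = 64;  // illustrative dedicated class id

// When the flag is set, TransferBatch objects are allocated from their own
// size class (and thus their own region), away from user-reachable chunks,
// so an overflow in a user region cannot corrupt a batch's pointers.
uptr ClassIdForBatches(uptr user_class_id) {
  return kUseSeparateSizeClassForBatch ? kBatchClassID : user_class_id;
}
```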
Reviewers: alekseyshl, kcc, eugenis
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D37082
llvm-svn: 311891
Summary:
In `sanitizer_allocator_primary32.h`:
- rounding up in `MapWithCallback` is not needed, as `MmapOrDie` already does
it. Note that the 64-bit counterpart doesn't round up either, so this keeps
the behavior consistent;
- since `IsAligned` exists, use it in `AllocateRegion`;
- in `PopulateFreeList`:
- checking that `b->Count()` is greater than 0 when `b->Count() == max_count`
is redundant when done more than once. Just check once, outside the loop,
that `max_count` is greater than 0; the compiler (at least on ARM) didn't
optimize the repeated check (see the sketch after this list);
- mark the batch creation failure as `UNLIKELY`;
In `sanitizer_allocator_primary64.h`:
- in `MapWithCallback`, mark the failure condition as `UNLIKELY`;
In `sanitizer_posix.h`:
- mark a bunch of Mmap related failure conditions as `UNLIKELY`;
- in `MmapAlignedOrDieOnFatalError`, we have `IsAligned`, so use it;
rearrange the conditions, as one test was redundant;
- in `MmapFixedImpl`, 30 characters were not enough to hold the message and
a full 64-bit address (or at least a 48-bit user-mode address); increase the
buffer to 40.
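An illustrative sketch of the `PopulateFreeList` pattern above, with the
`max_count` check hoisted out of the loop and the rare batch-creation
failure annotated (types and helper names are simplified stand-ins):

```
#include <stddef.h>

#define UNLIKELY(x) __builtin_expect(!!(x), 0)

struct Batch {
  static const size_t kMaxCount = 64;  // max_count must not exceed this
  size_t count = 0;
  void *chunks[kMaxCount];
};

Batch *AllocateBatch();          // hypothetical helper; may return nullptr
void FlushToFreeList(Batch *b);  // hypothetical helper

// max_count is checked once, before the loop, instead of re-checking
// b->Count() > 0 on every iteration; batch creation failure is UNLIKELY.
bool PopulateFreeList(void **chunks, size_t n, size_t max_count) {
  if (max_count == 0) return false;
  Batch *b = nullptr;
  for (size_t i = 0; i < n; i++) {
    if (!b) {
      b = AllocateBatch();
      if (UNLIKELY(!b)) return false;
    }
    b->chunks[b->count++] = chunks[i];
    if (b->count == max_count) {  // full: hand the batch to the free list
      FlushToFreeList(b);
      b = nullptr;
    }
  }
  if (b) FlushToFreeList(b);  // flush the last, possibly partial, batch
  return true;
}
```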
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: aemerson, kubamracek, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D34840
llvm-svn: 306834
Summary:
Make SizeClassAllocator32 return nullptr when it encounters OOM, which
allows the entire sanitizer allocator to follow the
`allocator_may_return_null=1` policy, even for small allocations
(LargeMmapAllocator is already fixed by D34243).
A test for OOM in the primary allocator will be added later, when
SizeClassAllocator64 can gracefully handle OOM too.
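A simplified sketch of the intended behavior (both helpers are hypothetical
stand-ins):

```
#include <stddef.h>

void *TryAllocateFromPrimary(size_t size);    // hypothetical: nullptr on OOM
[[noreturn]] void ReportOutOfMemoryAndDie();  // hypothetical fatal path

// The primary now reports OOM with nullptr instead of dying, so the
// combined allocator can honor the runtime flag.
void *AllocateOrNull(size_t size, bool may_return_null) {
  void *p = TryAllocateFromPrimary(size);
  if (p) return p;
  if (may_return_null) return nullptr;  // allocator_may_return_null=1
  ReportOutOfMemoryAndDie();            // allocator_may_return_null=0
}
```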
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34433
llvm-svn: 305972
Summary:
This broke thread_local_quarantine_pthread_join.cc on some architectures,
due to the overhead of the stashed regions. Reverting while we figure out
the best way to deal with it.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D34213
llvm-svn: 305404
Summary:
The reasoning behind this change is explained in D33454, which unfortunately
broke the Windows version (the platform doesn't support partial unmapping of
a memory region).
This new approach changes `MmapAlignedOrDie` to allow the specification of a
`padding_chunk`. If non-null, and if the initial allocation happens to be
aligned, this padding chunk will hold the address of the extra memory (of
`alignment` bytes).
This allows `AllocateRegion` to get 2 regions when the memory is properly
aligned, which helps reduce fragmentation (and saves on unmapping
operations). As in the initial D33454, we use a stash in the 32-bit Primary
to hold those extra regions and return them on the fast path.
The Windows version of `MmapAlignedOrDie` will always set `padding_chunk` to
0 if one was requested.
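A hedged sketch of the contract (types simplified; `StashRegion` is a
hypothetical stand-in for the Primary's stash):

```
typedef unsigned long uptr;  // sanitizer-style integer type

// If padding_chunk is non-null and the doubled allocation happens to start
// aligned, the trailing `alignment` bytes are reported via *padding_chunk
// instead of being unmapped, so the caller can reuse them.
uptr MmapAlignedOrDie(uptr size, uptr alignment, const char *mem_type,
                      uptr *padding_chunk);

void StashRegion(uptr region);  // hypothetical: push onto the Primary's stash

uptr AllocateRegion(uptr region_size) {
  uptr padding = 0;
  uptr region = MmapAlignedOrDie(region_size, region_size,
                                 "SizeClassAllocator", &padding);
  if (padding)  // a second region came for free: stash it for next time
    StashRegion(padding);
  return region;
}
```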
Reviewers: alekseyshl, dvyukov, kcc
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D34152
llvm-svn: 305391
Summary:
Apparently, Windows's `UnmapOrDie` doesn't support partial unmapping, which
makes the new region allocation technique non-viable on Windows.
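An illustrative repro of the underlying Win32 constraint (not the sanitizer
code itself): `VirtualFree` with `MEM_RELEASE` only accepts the base address
of a whole `VirtualAlloc` allocation, so a sub-range can't be released the
way `munmap` releases part of an `mmap`ed range.

```
#include <windows.h>

void PartialUnmapFails(SIZE_T size) {
  char *p = (char *)VirtualAlloc(nullptr, 2 * size,
                                 MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
  // Releasing the whole allocation at its base address succeeds:
  //   VirtualFree(p, 0, MEM_RELEASE);
  // Releasing only the second half does not; p + size is not a base
  // address, so this returns FALSE with ERROR_INVALID_ADDRESS:
  BOOL ok = VirtualFree(p + size, 0, MEM_RELEASE);
  (void)ok;
}
```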
Reviewers: alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33554
llvm-svn: 303883
Summary:
Currently, AllocateRegion has a tendency to fragment memory: it allocates
`2*kRegionSize` and, if the memory is aligned, unmaps `kRegionSize` bytes,
creating a hole that can't itself be reused for another region. This
is exacerbated by the fact that if 2 regions get allocated one after another
without any `mmap` in between, the second will be aligned due to mappings
generally being contiguous.
An idea, suggested by @alekseyshl, to prevent this behavior is to keep a
stash of regions: if the `2*kRegionSize` allocation is properly aligned,
split it in two, and stash the second part to be returned the next time a
region is requested.
At this point, I thought about a couple of ways to implement this:
- either an `IntrusiveList` of region candidates, storing `next` at the
beginning of each region;
- or a small array of region candidates living in the Primary (sketched
below).
While the second option is more constrained in terms of size, it offers
several advantages:
- security-wise, with the intrusive list, the `next` pointer stored in a
region candidate could be overflowed into and abused when popping an
element; the array avoids this;
- we do not dirty the first page of the region by storing something in it;
- unless several threads request regions simultaneously from different size
classes, the stash rarely goes above 1 entry.
I am not certain about the Windows impact of this change, as
`sanitizer_win.cc` has its own version of `MmapAlignedOrDie`; maybe someone
could chime in on this. `MmapAlignedOrDie` is effectively unused after this
change and could be removed at a later point. I didn't notice any sizeable
performance gain, even though we are saving a few `mmap`/`munmap` syscalls.
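A simplified sketch of the array option (capacity and locking are
illustrative; the real stash lives in the 32-bit Primary):

```
typedef unsigned long uptr;

// Fixed-size stash of region candidates kept inside the Primary. A real
// implementation would guard Push/Pop with the allocator's spin mutex.
struct RegionStash {
  static const uptr kMaxStashedRegions = 8;  // illustrative capacity
  uptr regions[kMaxStashedRegions];
  uptr num_stashed = 0;

  bool Push(uptr region) {
    if (num_stashed == kMaxStashedRegions) return false;  // stash is full
    regions[num_stashed++] = region;
    return true;
  }
  uptr Pop() {  // 0 means empty: the caller mmaps a new region instead
    return num_stashed ? regions[--num_stashed] : 0;
  }
};
```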
Reviewers: alekseyshl, kcc, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33454
llvm-svn: 303879
Summary:
With rL279771, SizeClassAllocator64 was changed to accept a single template
parameter instead of 5, for the following reasons: "First, this will make
the mangled names shorter. Second, this will make adding more parameters
simpler". This patch mirrors that work for SizeClassAllocator32.
This is in preparation for introducing the randomization of chunks in the
32-bit SizeClassAllocator in a later patch.
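A hedged sketch of the resulting single-parameter style, assuming
compiler-rt's `sanitizer_common` headers; the exact field set and values are
illustrative and vary per platform:

```
#include "sanitizer_common/sanitizer_allocator.h"

using namespace __sanitizer;

// The former template parameters are bundled into one params struct.
struct AP32 {
  static const uptr kSpaceBeg = 0;
  static const u64 kSpaceSize = SANITIZER_MMAP_RANGE_SIZE;
  static const uptr kMetadataSize = 16;   // illustrative
  typedef DefaultSizeClassMap SizeClassMap;
  static const uptr kRegionSizeLog = 20;  // illustrative
  typedef FlatByteMap<(SANITIZER_MMAP_RANGE_SIZE >> kRegionSizeLog)> ByteMap;
  typedef NoOpMapUnmapCallback MapUnmapCallback;
  static const uptr kFlags = 0;
};
typedef SizeClassAllocator32<AP32> PrimaryAllocator;
```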
Reviewers: kcc, alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33141
llvm-svn: 303071
Summary:
In order to avoid starting a separate thread to return unused memory to the
system (the thread interferes with process startup on Android: Zygote waits
for all threads to exit before forking, but this thread never exits), try to
return it right after free.
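A minimal sketch of the underlying mechanism, assuming the usual
`madvise`-based release path on Linux:

```
#include <sys/mman.h>

// Advise the kernel that these freed, page-aligned pages can be reclaimed
// now, rather than from a background thread; the mapping itself stays
// valid, the pages simply stop being resident.
void ReleaseMemoryToOS(void *page_aligned_beg, size_t size) {
  madvise(page_aligned_beg, size, MADV_DONTNEED);
}
```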
Reviewers: eugenis
Subscribers: cryptoad, filcab, danalbert, kubabrecka, llvm-commits
Patch by Aleksey Shlyapnikov.
Differential Revision: https://reviews.llvm.org/D27003
llvm-svn: 288091