to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
Summary:
This is a follow up patch to r349138.
This patch makes `AddressSpaceView` a type declaration in the
allocator parameters used by `SizeClassAllocator64`. For ASan, LSan, and
the unit tests, the AP64 declarations have been made templated so that
`AddressSpaceView` can be changed at compile time. For the other
sanitizers we just hard-code `LocalAddressSpaceView` because we have no
plans to use these allocators in an out-of-process manner.
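For illustration, a minimal sketch of what a templated parameter struct looks like, assuming the sanitizer allocator headers and illustrative values for the other members (not the exact declarations from the patch):
```
template <typename AddressSpaceViewTy>
struct AP64 {  // Allocator parameters for SizeClassAllocator64.
  static const uptr kSpaceBeg = ~(uptr)0;           // dynamic base
  static const uptr kSpaceSize = 0x40000000000ULL;  // 4T
  static const uptr kMetadataSize = 0;
  typedef DefaultSizeClassMap SizeClassMap;
  typedef NoOpMapUnmapCallback MapUnmapCallback;
  static const uptr kFlags = 0;
  // The only real change: the view becomes a compile-time parameter.
  typedef AddressSpaceViewTy AddressSpaceView;
};

// Tools that only ever run in-process simply instantiate the local view:
typedef SizeClassAllocator64<AP64<LocalAddressSpaceView>> PrimaryAllocator64;
```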
rdar://problem/45284065
Reviewers: kcc, dvyukov, vitalybuka, cryptoad, eugenis, kubamracek, george.karpenkov
Subscribers: #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D55764
llvm-svn: 349954
Summary:
The previous version of the patch left some code unable to distinguish
a mapping at address 0 from an error.
Revert to turn the bots back to green while figuring out a new approach.
Reviewers: eugenis
Reviewed By: eugenis
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D51451
llvm-svn: 340957
Summary:
`MmapNoAccess` & `MmapFixedNoAccess` return the result of `internal_mmap`
directly, as opposed to the other Mmap functions, which return nullptr on
failure. This inconsistency leads to some confusion for the callers, as some
check for `~(uptr)0` (`MAP_FAILED`) to detect failure (while the call can
fail with `-ENOMEM`, for example).
Two potential solutions: change the callers, or make the functions return
`nullptr` on failure to follow the precedent set by the other functions.
The second option looked more appropriate to me.
Correct the callers that were wrongly checking for `~(uptr)0` or
`MAP_FAILED`.
TODO for follow up CLs:
- There are a couple of `internal_mmap` calls in XRay that also check for
`MAP_FAILED` as a result (cc: @dberris); they should use
`internal_iserror` (see the sketch below);
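For reference, a hedged sketch of the intended pattern (it assumes the sanitizer_common internals `internal_mmap`, `internal_iserror`, `Report` and `UNLIKELY`; it is not the exact code from the patch):
```
static void *MapAnonymousOrNull(uptr size) {
  uptr res = internal_mmap(nullptr, size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  int mmap_errno = 0;
  if (UNLIKELY(internal_iserror(res, &mmap_errno))) {
    // The raw result encodes the errno (e.g. ENOMEM); don't compare against
    // MAP_FAILED or ~(uptr)0.
    Report("internal_mmap failed (errno: %d)\n", mmap_errno);
    return nullptr;
  }
  return reinterpret_cast<void *>(res);
}
```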
Reviewers: eugenis, alekseyshl, dberris, kubamracek
Reviewed By: alekseyshl
Subscribers: kristina, kubamracek, delcypher, #sanitizers, dberris, llvm-commits
Differential Revision: https://reviews.llvm.org/D50940
llvm-svn: 340576
Summary:
Remove the generic error handling policies and handle each allocator error
explicitly. Although more verbose, this allows for more comprehensive,
precise and actionable allocator-related failure reports.
This finishes up the series of changes to the individual sanitizer
allocators, improves the internal allocator error reporting and removes
the now unused policies.
Reviewers: vitalybuka, cryptoad
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D48328
llvm-svn: 335147
Summary:
In the same spirit as SanitizerToolName, allow the Primary & Secondary
allocators to have names that can be set by the tools via PrimaryAllocatorName
and SecondaryAllocatorName.
Additionally, set a non-default name for Scudo.
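A minimal sketch of the mechanism, assuming declarations along these lines in sanitizer_common (the default and Scudo values shown are illustrative, not necessarily the ones from the patch):
```
// Declared next to SanitizerToolName and consumed by allocator failure reports.
extern const char *PrimaryAllocatorName;
extern const char *SecondaryAllocatorName;

// A tool can override them early during init, e.g. (hypothetical value):
//   PrimaryAllocatorName = "Scudo Primary";
```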
Reviewers: alekseyshl, vitalybuka
Reviewed By: alekseyshl, vitalybuka
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D45600
llvm-svn: 330055
Summary:
This is a new version of D44261, which broke some builds with older gcc, as
they can't align on a constexpr but rather require an integer (see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56859, among others).
We introduce `SANITIZER_CACHE_LINE_SIZE` in `sanitizer_platform.h` to be
used in `ALIGNED` attributes instead of using `kCacheLineSize` directly.
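A hedged sketch of the shape of the change (the actual value lives in `sanitizer_platform.h` and may differ per architecture; `ALIGNED` is redefined here only to keep the sketch self-contained):
```
#ifndef SANITIZER_CACHE_LINE_SIZE
# define SANITIZER_CACHE_LINE_SIZE 64
#endif

// Plain integer macro: acceptable to older gcc inside an aligned attribute,
// where a constexpr variable is not.
#define ALIGNED(x) __attribute__((aligned(x)))

struct ALIGNED(SANITIZER_CACHE_LINE_SIZE) SizeClassInfo {
  // per-size-class bookkeeping ...
};
```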
Reviewers: alekseyshl, thakis
Reviewed By: alekseyshl
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D44326
llvm-svn: 327297
Summary:
Both the `SizeClassInfo` structures for the 32-bit primary & the `RegionInfo`
structures for the 64-bit primary can be used by different threads, and as such
they should be aligned & padded to the cacheline size to avoid false sharing.
The former was padded but the array was not aligned; the latter was not padded,
but we lucked out, as the size of the structure was 192 bytes and it ended up
aligned thanks to the properties of `mmap`.
I plan on adding a couple of fields to `RegionInfo`, and some highly
threaded tests pointed out that without proper padding & alignment, performance
was taking a hit, and that the hit goes away with proper padding.
This patch makes sure that we are properly padded & aligned for both. I used
a template to avoid padding if the size is already a multiple of the cacheline
size. There might be a better way to do this; I am open to suggestions.
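A hedged sketch of the "pad only when needed" idea; the names and the exact mechanism are illustrative, not the ones from the patch:
```
#include <cstddef>

constexpr size_t kCacheLineSize = 64;

// Primary template: append padding bytes up to the next cache line multiple.
template <class T, size_t kRemainder = sizeof(T) % kCacheLineSize>
struct CacheLinePadded {
  alignas(kCacheLineSize) T data;
  char padding[kCacheLineSize - kRemainder];
};

// Specialization: the size is already a multiple, so no padding member.
template <class T>
struct CacheLinePadded<T, 0> {
  alignas(kCacheLineSize) T data;
};

// An array of CacheLinePadded<RegionInfo> then keeps each entry on its own
// cache line(s), avoiding false sharing between threads.
```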
Reviewers: alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D44261
llvm-svn: 327145
Summary:
See D40657 & D40679 for previous versions of this patch & description.
A couple of things were fixed here so that it no longer breaks some bots.
Weak symbols can't be used with `SANITIZER_GO`, so the previous version was
breaking TsanGo. I set up some additional local tests and those pass now.
I changed the workaround for the glibc vDSO issue: `__progname` is initialized
after the vDSO and is actually public and of known type, unlike
`__vdso_clock_gettime`. This works better, and with all compilers.
The rest is the same.
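A hedged sketch of the resulting idea (close to, but not necessarily identical to, the code that landed; `SANITIZER_WEAK_ATTRIBUTE`, `u64` and `NanoTime` are the sanitizer_common primitives):
```
#include <time.h>

// __progname is a public glibc symbol of known type, initialized after the
// vDSO function pointers, so its presence signals that clock_gettime is safe.
extern "C" SANITIZER_WEAK_ATTRIBUTE char *__progname;

static bool CanUseVDSO() {
  return &__progname && __progname && *__progname;
}

u64 MonotonicNanoTime() {
  if (CanUseVDSO()) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);  // fast path through the vDSO
    return (u64)ts.tv_sec * (1000ULL * 1000 * 1000) + ts.tv_nsec;
  }
  return NanoTime();  // slow but always-safe syscall-based fallback
}
```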
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, kubamracek, krytarowski, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D41121
llvm-svn: 320594
Summary:
Redo of D40657, which had the initial discussion. The initial code had to move
into a libcdep file, and things had to be shuffled accordingly.
`NanoTime` is a time sink when checking whether or not to release memory to
the OS. While reducing the number of calls to said function is in the works,
another solution that was found to be beneficial was to use a timing function
that can leverage the vDSO.
We hit a couple of snags along the way, like the fact that glibc crashes
when clock_gettime is called from a preinit_array, or the fact that
`__vdso_clock_gettime` is mangled (for security purposes) and can't be used
directly, and also that clock_gettime can be intercepted.
The proposed solution takes care of all this as far as I can tell, and
significantly improves performance in some Scudo load tests with memory
reclaiming enabled.
@mcgrathr: please feel free to follow up on
https://reviews.llvm.org/D40657#940857 here. I posted a reply at
https://reviews.llvm.org/D40657#940974.
Reviewers: alekseyshl, krytarowski, flowerhack, mcgrathr, kubamracek
Reviewed By: alekseyshl, krytarowski
Subscribers: #sanitizers, mcgrathr, srhines, llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D40679
llvm-svn: 320409
Summary:
This is an attempt at making `PopulateFreeArray` less obscure, more consistent,
and a tiny bit faster in some circumstances:
- use more consistent variable names that work for both the user & the metadata
portions of the code; the purpose of the code is mostly the same for both
regions, so it makes sense that the code should be mostly similar as well;
- replace the summing while loops with a single `RoundUpTo`;
- mask most of the metadata computations behind kMetadataSize, allowing some
blocks to be completely optimized out when metadata is not used (see the
sketch after this list);
- `const` the constant variables;
- add a `LIKELY`, as the branch it applies to will almost always be taken.
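As referenced above, a hedged illustration of the second and third points (`kMapGranularity` and `kMetadataSize` stand in for the allocator's constants and are assumed to be in scope):
```
// One RoundUpTo instead of a summing while-loop; the granularity is a power
// of two, as RoundUpTo requires.
static uptr UserMapSize(uptr needed_bytes) {
  return RoundUpTo(needed_bytes, kMapGranularity);
}

// Guarded by kMetadataSize: when a tool configures 0 metadata bytes, this is
// a compile-time constant and the whole block can be optimized out.
static uptr MetadataMapSize(uptr n_chunks) {
  if (!kMetadataSize) return 0;
  return RoundUpTo(n_chunks * kMetadataSize, kMapGranularity);
}
```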
Reviewers: alekseyshl, flowerhack
Reviewed By: alekseyshl
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D40754
llvm-svn: 319673
Summary:
Update sanitizer_allocator to use the new API.
Second patch in a series; the first patch is https://reviews.llvm.org/D39072.
Updates the MmapNoAccess / MmapFixed call sites in sanitizer_allocator
to use the new Init/Map APIs instead.
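A hedged sketch of the migration at a typical call site (the `ReservedAddressRange` class comes from the first patch in the series; treat the exact signatures and the free parameters here as approximate):
```
void InitPrimarySpace(ReservedAddressRange *address_range, uptr space_beg,
                      uptr space_size, uptr region_beg, uptr map_size) {
  // Reserve the whole space as inaccessible up front (was MmapNoAccess)...
  uptr base = address_range->Init(space_size, "SizeClassAllocator", space_beg);
  (void)base;  // check for failure here as appropriate
  // ...then commit pages on demand where MmapFixed used to be called directly.
  uptr mapped = address_range->Map(region_beg, map_size);
  (void)mapped;
}
```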
Reviewers: alekseyshl, cryptoad, phosek, mcgrathr, dvyukov
Reviewed By: alekseyshl, cryptoad
Subscribers: dvyukov, mcgrathr, kubamracek
Differential Revision: https://reviews.llvm.org/D38592
llvm-svn: 317586
Summary:
Call NanoTime() in the primary 64-bit allocator only when necessary,
otherwise the unwarranted syscall causes problems in sandboxed environments.
The conditional on ReleaseToOSIntervalMs() allows such environments to turn
the feature off with the allocator_release_to_os_interval_ms=-1 flag.
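A hedged sketch of the guard, simplified from the release path (names follow the 64-bit primary; the body elides the actual release logic):
```
void MaybeReleaseToOS(uptr class_id) {
  // Bail out before touching the clock when the feature is disabled
  // (allocator_release_to_os_interval_ms=-1).
  const s32 interval_ms = ReleaseToOSIntervalMs();
  if (interval_ms < 0) return;
  // Only now is the (potentially sandbox-unfriendly) time syscall issued.
  const u64 now_ns = NanoTime();
  // ... compare against the last release timestamp for class_id and release
  // the region's free pages if due ...
}
```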
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D39624
llvm-svn: 317386
Summary:
With the new release-to-OS approach (see D38245), it's reasonable to enable
it by default. Setting allocator_release_to_os_interval_ms to 5000 seems
to be a reasonable default (it might be tuned later, based on feedback).
Also, delay the first release to OS in each bucket until at least
allocator_release_to_os_interval_ms after the first allocation, to
prevent just-allocated memory from being madvised back to the OS and to let
short-lived processes avoid the release-to-OS overhead altogether.
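A hedged sketch of the delayed first release (per bucket; field names are illustrative): the first check only arms the timer.
```
bool ReleaseIsDue(u64 *last_release_check_ns, s32 interval_ms) {
  const u64 now_ns = NanoTime();
  if (*last_release_check_ns == 0) {
    // First check for this bucket: start the clock instead of madvising
    // memory that was, most likely, only just allocated.
    *last_release_check_ns = now_ns;
    return false;
  }
  return now_ns >= *last_release_check_ns + (u64)interval_ms * 1000 * 1000;
}
```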
Reviewers: cryptoad
Subscribers: kubamracek, llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D39318
llvm-svn: 316683
Summary:
The 64-bit primary has had random shuffling of chunks for a while; this
implements it for the 32-bit primary. Scudo is currently the only user of
`kRandomShuffleChunks`.
This change consists of a few modifications:
- move the random shuffling functions out of the 64-bit primary to
`sanitizer_common.h`. Alternatively, I could move them to
`sanitizer_allocator.h`, as they are only used in the allocator; I don't feel
strongly either way;
- small change in the 64-bit primary to make the `rand_state` initialization
`UNLIKELY`;
- addition of a `rand_state` in the 32-bit primary's `SizeClassInfo` and
shuffling of chunks when populating the free list.
- enabling the `random_shuffle.cpp` test on platforms using the 32-bit primary
for Scudo.
Some comments on why the shuffling is done that way. Initially I just
implemented a `Shuffle` function in the `TransferBatch`, which was simpler, but I
came to realize this wasn't good enough: for chunks of 10000 bytes, for example,
with a `CompactSizeClassMap`, a batch holds only 1 chunk, meaning shuffling the
batch has no effect, while a region is usually 1MB, i.e. 104 chunks of that size.
So I decided to "stage" the newly gathered chunks in a temporary array that
would be shuffled prior to placing the chunks in batches.
The result is looping twice through n_chunks even if shuffling is not enabled,
but I didn't notice any significant performance impact.
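A hedged, simplified sketch of that staging (the constants, the carving loop and `kRandomShuffleChunks` being in scope are assumptions; `RandomShuffle` is the helper moved to `sanitizer_common.h` above):
```
static const uptr kMaxStagedChunks = 1024;

void StageAndShuffle(uptr region_beg, uptr region_end, uptr size,
                     u32 *rand_state) {
  uptr chunks[kMaxStagedChunks];
  uptr n_chunks = 0;
  for (uptr addr = region_beg;
       addr + size <= region_end && n_chunks < kMaxStagedChunks; addr += size)
    chunks[n_chunks++] = addr;  // carve the newly mapped memory into chunks
  if (kRandomShuffleChunks)
    RandomShuffle(chunks, n_chunks, rand_state);
  // ... place chunks[0 .. n_chunks) into TransferBatches as before ...
}
```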
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: srhines, llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D39244
llvm-svn: 316596
Summary:
Purging the allocator quarantine and returning memory to the OS might be
desired between fuzzer iterations since, most likely, the quarantine is not
going to catch bugs in the code under fuzz, while reducing RSS might
significantly prolong the fuzzing session.
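For illustration, a hedged usage sketch from the fuzzing side (the entry point follows the public allocator interface header; `DoOneIteration` is a hypothetical stand-in for the code under fuzz):
```
#include <sanitizer/allocator_interface.h>
#include <stddef.h>
#include <stdint.h>

extern void DoOneIteration(const uint8_t *data, size_t size);  // hypothetical

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  DoOneIteration(data, size);
  // Drain the quarantine and return the freed pages to the OS between runs.
  __sanitizer_purge_allocator();
  return 0;
}
```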
Reviewers: cryptoad
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D39153
llvm-svn: 316347
Summary:
The current implementation of the allocator returning freed memory
back to the OS (controlled by the allocator_release_to_os_interval_ms flag)
requires sorting the free chunks list, which has two major issues: first,
when the free list grows to millions of chunks, sorting, even the fastest,
is just too slow; second, sorting chunks in place is unacceptable for the
Scudo allocator, as it makes allocations more predictable and less secure.
The proposed approach is linear in complexity (although it requires quite
a bit more temporary memory). The idea is to count the number of free
chunks on each memory page and release the pages that contain only free
chunks. It requires one iteration over the free list of chunks and one
iteration over the array of page counters. The obvious disadvantage
is the allocation of the array of counters, but even in the worst
case we support (4T allocator space, 64 buckets, 16-byte bucket size,
full free list, which leads to 2 bytes per page counter and ~17M page
counters), it requires only about 34MB for the intermediate buffer (compared
to ~64GB of actually allocated chunks); usually it stays under 100KB, and it
is released after each use. Releasing memory back to the OS is expected to
be a relatively rare event, so keeping the buffer between those runs and the
added bookkeeping complexity seem unnecessary here (it can
always be improved later, though, never say never).
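A hedged, heavily simplified sketch of that counting pass (illustrative names; `FullPageChunkCount` and `ReleasePageToOS` are hypothetical helpers, and the real code packs the counters and handles the chunks-per-page cases described below):
```
void ReleaseFullyFreePages(const uptr *free_chunks, uptr n_free,
                           uptr chunk_size, uptr region_beg, uptr page_size,
                           uptr n_pages, u8 *counters /* zero-initialized */) {
  // Pass 1: one iteration over the free list, counting free chunks per page.
  for (uptr i = 0; i < n_free; i++) {
    const uptr off = free_chunks[i] - region_beg;
    const uptr first = off / page_size;
    const uptr last = (off + chunk_size - 1) / page_size;
    for (uptr p = first; p <= last && p < n_pages; p++)
      counters[p]++;
  }
  // Pass 2: one iteration over the counters, releasing pages that are
  // entirely covered by free chunks.
  for (uptr p = 0; p < n_pages; p++)
    if (counters[p] == FullPageChunkCount(p, page_size, chunk_size))
      ReleasePageToOS(region_beg + p * page_size, page_size);
}
```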
The most interesting problem here is how to calculate the number of chunks
falling into each memory page in the bucket. Skipping all the details,
there are three cases when the number of chunks per page is constant:
1) P >= C, P % C == 0 --> N = P / C
2) C > P , C % P == 0 --> N = 1
3) C <= P, P % C != 0 && C % (P % C) == 0 --> N = P / C + 1
where P is the page size, C is the chunk size, and N is the number of chunks
per page. In the remaining cases, the number of chunks per page is calculated
on the go, during the iteration over the page counter array.
Among the rest, there are still cases where N can be deduced from the
page index, but they require not much less calculation per page
than the current "brute force" way, and 2/3 of the buckets fall into
the first three categories anyway, so, for the sake of simplicity,
it was decided to stick to those two variations. It can always be
refined and improved later, should we find that the brute force way slows
us down unacceptably.
Reviewers: eugenis, cryptoad, dvyukov
Subscribers: kubamracek, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D38245
llvm-svn: 314311
Summary:
In `sanitizer_allocator_primary32.h`:
- rounding up in `MapWithCallback` is not needed, as `MmapOrDie` does it. Note
that the 64-bit counterpart doesn't round up; this keeps the behavior
consistent;
- since `IsAligned` exists, use it in `AllocateRegion`;
- in `PopulateFreeList`:
- checking that `b->Count()` is greater than 0 when `b->Count() == max_count` is
redundant when done more than once. Just check that `max_count` is greater
than 0 outside of the loop; the compiler (at least on ARM) didn't optimize it;
- mark the batch creation failure as `UNLIKELY`;
In `sanitizer_allocator_primary64.h`:
- in `MapWithCallback`, mark the failure condition as `UNLIKELY`;
In `sanitizer_posix.h`:
- mark a bunch of Mmap related failure conditions as `UNLIKELY`;
- in `MmapAlignedOrDieOnFatalError`, we have `IsAligned`, so use it; rearrange
the conditions, as one test was redundant;
- in `MmapFixedImpl`, 30 chars was not large enough to hold the message and a
full 64-bit address (or at least a 48-bit usermode address); increase it to 40.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: aemerson, kubamracek, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D34840
llvm-svn: 306834
Summary:
The current code was sometimes attempting to release huge chunks of
memory due to undesired RoundUp/RoundDown interaction when the requested
range is fully contained within one memory page.
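A hedged sketch of the failure mode and the guard (`ReleaseMemoryToOS` stands in for the madvise-based primitive; exact names may differ):
```
void MaybeReleaseRange(uptr beg, uptr end, uptr page_size) {
  const uptr page_beg = RoundUpTo(beg, page_size);
  const uptr page_end = RoundDownTo(end, page_size);
  // If [beg, end) sits inside a single page, the rounded bounds cross and an
  // unsigned subtraction would yield a huge length; release nothing instead.
  if (page_end <= page_beg) return;
  ReleaseMemoryToOS(page_beg, page_end - page_beg);
}
```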
Reviewers: eugenis
Subscribers: kubabrecka, llvm-commits
Patch by Aleksey Shlyapnikov.
Differential Revision: https://reviews.llvm.org/D27228
llvm-svn: 288271
Summary:
In order to avoid starting a separate thread to return unused memory to
the system (the thread interferes with process startup on Android: Zygote
waits for all threads to exit before forking, but this thread never
exits), try to return it right after free.
Reviewers: eugenis
Subscribers: cryptoad, filcab, danalbert, kubabrecka, llvm-commits
Patch by Aleksey Shlyapnikov.
Differential Revision: https://reviews.llvm.org/D27003
llvm-svn: 288091
Summary:
The sanitizer allocators can work with a dynamic address space
(i.e. specified with ~0ULL).
Unfortunately, the code was broken in GetMetadata and GetChunkIdx.
This patch moves the Win64 memory test to a dynamic
address space. There is an ongoing migration to move everything to a
dynamic address space on Windows.
For better coverage, the unit tests now also exercise the
dynamic address space on other platforms.
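For reference, a hedged sketch of what the dynamic configuration means at the parameter level (values are illustrative):
```
// ~0ULL as the space base asks SizeClassAllocator64 to pick the base address
// with mmap at init time instead of assuming a fixed, compile-time address.
static const uptr kAllocatorSpace = ~(uptr)0;          // dynamic base
static const uptr kAllocatorSize  = 0x40000000000ULL;  // 4T
```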
Reviewers: rnk, kcc
Subscribers: kubabrecka, dberris, llvm-commits, chrisha
Differential Revision: https://reviews.llvm.org/D23170
llvm-svn: 277745