Summary:
Make the common allocator agnostic to failure handling modes and move the
decision up to the particular sanitizer's allocator, where the context
is available (call stack, parameters, return-nullptr/crash mode, etc.).
This simplifies the common allocator and allows the particular sanitizer's
allocator to generate more specific and detailed error reports (which
will be implemented later).
The behavior is largely the same, except in one case: a violation of the
common allocator's "size + alignment" overflow check is now reported
as OOM instead of "bad request". This seems like a worthwhile tradeoff, and
"size + alignment" is huge in this case anyway (so it can be interpreted
as not enough memory to satisfy the request). A Report() statement is also
added there.
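A minimal sketch of the overflow condition described above, under illustrative assumptions (the AllocateAligned helper and the error message are hypothetical, not the actual sanitizer_common code):
```
#include <cstddef>
#include <cstdio>

void *AllocateAligned(size_t size, size_t alignment) {
  // If size + alignment wraps around, no allocation could satisfy the request,
  // so treat it as out-of-memory rather than a "bad request".
  if (size + alignment < size) {
    fprintf(stderr, "ERROR: size 0x%zx with alignment 0x%zx overflows\n",
            size, alignment);
    return nullptr;  // or crash, depending on the configured failure mode
  }
  // ... the usual over-allocate-and-align path would follow here ...
  return nullptr;
}
```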
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D42198
llvm-svn: 322784
Summary:
The current implementation of the allocator returning freed memory
back to the OS (controlled by the allocator_release_to_os_interval_ms flag)
requires sorting the free chunk list, which has two major issues:
first, when the free list grows to millions of chunks, sorting, even the
fastest kind, is just too slow; second, sorting chunks in place
is unacceptable for the Scudo allocator, as it makes allocations more
predictable and less secure.
The proposed approach is linear in complexity (although it requires quite
a bit more temporary memory). The idea is to count the number of free
chunks on each memory page and release only the pages that contain nothing
but free chunks. It requires one iteration over the free list of chunks and
one iteration over the array of page counters. The obvious disadvantage
is the allocation of the counter array, but even in the worst
case we support (4T allocator space, 64 buckets, 16-byte bucket size,
full free list, which leads to 2 bytes per page counter and ~17M page
counters), it requires only about 34Mb for the intermediate buffer (compared
to ~64Gb of actually allocated chunks); usually it stays under 100K
and is released after each use. Releasing memory back to the OS is expected
to be a relatively rare event, so keeping the buffer between those runs
and the added bookkeeping complexity seem unnecessary here (it can
always be improved later, though; never say never).
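A minimal sketch of the two-pass scheme described above, under illustrative assumptions: the free list is a vector of chunk offsets within the region, ChunksPerPage(p) returns how many chunks intersect page p, and ReleasePageToOS releases one page. None of these names are from the actual SizeClassAllocator code.
```
#include <cstdint>
#include <functional>
#include <vector>

void ReleaseFreeMemoryToOSSketch(
    const std::vector<uint64_t> &free_list,  // offsets of free chunks
    uint64_t chunk_size, uint64_t page_size, uint64_t region_size,
    const std::function<uint64_t(uint64_t)> &ChunksPerPage,
    const std::function<void(uint64_t)> &ReleasePageToOS) {
  const uint64_t n_pages = region_size / page_size;
  // Temporary per-page counters, allocated only for the duration of the run;
  // 2 bytes per counter covers the worst case quoted above.
  std::vector<uint16_t> counters(n_pages, 0);

  // Pass 1: one iteration over the free list. Each free chunk increments the
  // counter of every page it touches.
  for (uint64_t offset : free_list) {
    uint64_t first = offset / page_size;
    uint64_t last = (offset + chunk_size - 1) / page_size;
    for (uint64_t p = first; p <= last && p < n_pages; p++) counters[p]++;
  }

  // Pass 2: one iteration over the counters. A page is released only when
  // every chunk intersecting it is on the free list.
  for (uint64_t p = 0; p < n_pages; p++)
    if (counters[p] != 0 && counters[p] == ChunksPerPage(p)) ReleasePageToOS(p);
}
```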
The most interesting problem here is how to calculate the number of chunks
falling into each memory page in the bucket. Skipping all the details,
there are three cases when the number of chunks per page is constant:
1) P >= C, P % C == 0 --> N = P / C
2) C > P , C % P == 0 --> N = 1
3) C <= P, P % C != 0 && C % (P % C) == 0 --> N = P / C + 1
where P is the page size, C is the chunk size and N is the number of chunks
per page. In the remaining cases, the number of chunks per page is
calculated on the go, during the page counter array iteration.
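An illustrative helper expressing the three constant cases above (P and C are assumed positive; the name and the fallback return value are just for this sketch):
```
#include <cstdint>

// Returns the constant number of chunks per page, or 0 when it is not
// constant and has to be computed per page during the counter iteration.
uint64_t ConstantChunksPerPage(uint64_t P, uint64_t C) {
  if (P >= C && P % C == 0) return P / C;                          // case 1
  if (C > P && C % P == 0) return 1;                               // case 2
  if (C <= P && P % C != 0 && C % (P % C) == 0) return P / C + 1;  // case 3
  return 0;  // non-constant: fall back to the "brute force" per-page path
}
```
For example, with 4096-byte pages, 1024-byte chunks hit case 1 (N = 4), 8192-byte chunks hit case 2 (N = 1), and 3072-byte chunks hit case 3 (N = 4096 / 3072 + 1 = 2).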
Among the rest, there are still cases where N can be deduced from the
page index, but they require not much less calculation per page
than the current "brute force" way, and 2/3 of the buckets fall into
the first three categories anyway, so, for the sake of simplicity,
it was decided to stick to those two variations. It can always be
refined and improved later, should we see that the brute force way slows
us down unacceptably.
Reviewers: eugenis, cryptoad, dvyukov
Subscribers: kubamracek, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D38245
llvm-svn: 314311
Summary:
Currently, `TransferBatch`es are located within the same memory regions as
"regular" chunks. This is not ideal for security: they make for an interesting
target to overwrite, and are not protected by the frontend (namely, Scudo).
To solve this, we re-introduce `kUseSeparateSizeClassForBatch` for the 32-bit
Primary, allowing `TransferBatch`es to end up in their own memory region.
Currently only Scudo would use this new feature; the default behavior remains
unchanged. The separate `kBatchClassID` was used for a brief period of time
previously, but was removed when the 64-bit Primary ended up using the "free
array".
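A minimal sketch of the idea, assuming an illustrative size class map (the flag and class-ID names mirror the ones mentioned above, but the values and the helper are made up for this sketch):
```
#include <cstdint>

struct SizeClassMapSketch {
  static const bool kUseSeparateSizeClassForBatch = true;  // Scudo-style setup
  static const uint32_t kNumClasses = 64;
  static const uint32_t kBatchClassID = kNumClasses;  // dedicated batch-only class

  // Regular setup: batches are allocated from one of the regular classes and
  // therefore share memory regions with user chunks. Separate-batch setup:
  // every batch goes to the dedicated batch class and its own region.
  static uint32_t ClassIdForBatch(uint32_t regular_class_id) {
    return kUseSeparateSizeClassForBatch ? kBatchClassID : regular_class_id;
  }
};
```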
Reviewers: alekseyshl, kcc, eugenis
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D37082
llvm-svn: 311891
Summary:
Move the cached allocator_may_return_null flag to sanitizer_allocator.cc and
provide an API to consolidate and unify the behavior of all specific allocators.
Make all sanitizers that use CombinedAllocator follow the
AllocatorReturnNullOrDieOnOOM() rules so they behave the same way when OOM
happens.
When OOM happens, turn the allocator_out_of_memory flag on regardless of the
allocator_may_return_null flag value (it used not to be set when
allocator_may_return_null == true).
release_to_os_interval_ms and rss_limit_exceeded will likely be moved to
sanitizer_allocator.cc too (later).
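A sketch of the unified OOM behavior described above, with illustrative globals and function name (only AllocatorReturnNullOrDieOnOOM() is named in the change itself): the out-of-memory flag is now raised unconditionally, and allocator_may_return_null alone decides between returning nullptr and dying.
```
#include <cstdio>
#include <cstdlib>

static bool allocator_out_of_memory = false;
static bool allocator_may_return_null = false;

void *ReturnNullOrDieOnOOMSketch() {
  allocator_out_of_memory = true;  // set regardless of allocator_may_return_null
  if (allocator_may_return_null) return nullptr;
  fprintf(stderr, "ERROR: allocator is out of memory\n");
  abort();
}
```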
Reviewers: eugenis
Subscribers: srhines, kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34310
llvm-svn: 305858
Summary:
With rL279771, SizeClassAllocator64 was changed to accept only one template
parameter instead of 5, for the following reasons: "First, this will make the
mangled names shorter. Second, this will make adding more parameters simpler."
This patch mirrors that work for SizeClassAllocator32.
This is in preparation for introducing the randomization of chunks in the
32-bit SizeClassAllocator in a later patch.
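A rough sketch of the pattern (member names here are examples, not the exact SizeClassAllocator32 interface): the allocator takes a single params struct, so adding a knob later means adding a member rather than growing the template parameter list.
```
#include <cstdint>

template <class Params>
class SizeClassAllocator32Sketch {
  static const uint64_t kSpaceBeg = Params::kSpaceBeg;
  static const uint64_t kSpaceSize = Params::kSpaceSize;
  static const uint64_t kMetadataSize = Params::kMetadataSize;
  using SizeClassMap = typename Params::SizeClassMap;
  using MapUnmapCallback = typename Params::MapUnmapCallback;
  // ... allocator implementation ...
};

struct ExampleAllocatorParams {
  static const uint64_t kSpaceBeg = 0;
  static const uint64_t kSpaceSize = 1ULL << 32;
  static const uint64_t kMetadataSize = 16;
  struct SizeClassMap {};       // placeholder types, just for the sketch
  struct MapUnmapCallback {};
};

using ExampleAllocator = SizeClassAllocator32Sketch<ExampleAllocatorParams>;
```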
Reviewers: kcc, alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33141
llvm-svn: 303071
Summary:
In order to avoid starting a separate thread to return unused memory to
the system (the thread interferes with process startup on Android:
Zygote waits for all threads to exit before fork, but this thread never
exits), try to return it right after free.
Reviewers: eugenis
Subscribers: cryptoad, filcab, danalbert, kubabrecka, llvm-commits
Patch by Aleksey Shlyapnikov.
Differential Revision: https://reviews.llvm.org/D27003
llvm-svn: 288091
The definitions in sanitizer_common may conflict with definitions from system headers because:
1) The runtime includes the system headers after the project headers (as per LLVM coding guidelines).
2) lib/sanitizer_common/sanitizer_internal_defs.h pollutes the namespace of everything defined after it (all or most of the sanitizer .h and .cc files, plus the included system headers) with: using namespace __sanitizer; // NOLINT
This patch solves the problem by introducing the namespace only within the sanitizer namespaces as proposed by Dmitry.
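A hedged before/after sketch of the fix (the namespaces are real, but the header contents shown are stand-ins):
```
namespace __sanitizer { typedef unsigned long uptr; }  // stand-in for the real defs

// Before: a header included by almost every sanitizer source did, at global
// scope:
//   using namespace __sanitizer;  // NOLINT
// leaking __sanitizer's names into system headers included later.

// After: the directive lives only inside each tool's own namespace, so global
// scope (and thus the system headers) stays clean.
namespace __asan {
using namespace __sanitizer;  // NOLINT
uptr RoundUpTo(uptr size, uptr boundary) {  // unqualified uptr still works here
  return (size + boundary - 1) & ~(boundary - 1);
}
}  // namespace __asan
```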
Differential Revision: https://reviews.llvm.org/D21947
llvm-svn: 281657
These got out of sync and the tests were failing for me locally. We
assume a 47-bit address space in ASan, so we should do the same in the
tests.
llvm-svn: 281622
Summary:
The sanitizer allocators can work with a dynamic address space
(i.e. one specified with ~0ULL).
Unfortunately, the code was broken in GetMetadata and GetChunkIdx.
The current patch moves the Win64 memory test to a dynamic
address space. There is an ongoing migration to move everything to a
dynamic address space on Windows.
For better coverage, the unit tests now exercise the
dynamic address space on other platforms too.
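An illustrative sketch of the setting involved (parameter names are examples, not the exact allocator interface): passing ~0ULL as the space beginning asks the allocator to pick its base address at runtime, which means helpers like GetMetadata and GetChunkIdx must work off the runtime base rather than a compile-time constant.
```
#include <cstdint>

struct DynamicSpaceParamsSketch {
  static const uint64_t kSpaceBeg = ~0ULL;        // dynamic: base chosen at init
  static const uint64_t kSpaceSize = 1ULL << 40;  // size is still fixed
};

// With a dynamic base, index computations have to use the value captured at
// initialization time instead of the compile-time constant.
uint64_t GetChunkIdxSketch(uint64_t runtime_space_beg, uint64_t chunk,
                           uint64_t chunk_size) {
  return (chunk - runtime_space_beg) / chunk_size;
}
```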
Reviewers: rnk, kcc
Subscribers: kubabrecka, dberris, llvm-commits, chrisha
Differential Revision: https://reviews.llvm.org/D23170
llvm-svn: 277745
This test attempts to allocate 100 512MB-aligned pages of memory. This
is implemented in the usual way, by allocating size + alignment bytes and
aligning the result, so this test allocates 51.2GB of memory.
Windows allocates swap for all memory allocated, and our bots do not
have this much swap available.
Avoid the failure by using a more reasonable alignment, like 16MB, as we
do on 32-bit.
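A short sketch of the over-allocate-and-align scheme mentioned above (illustrative code, not the test itself; a real allocator would also remember the original pointer so it can be freed):
```
#include <cstdint>
#include <cstdlib>

void *AllocateAlignedSketch(size_t size, size_t alignment) {
  // Allocate size + alignment bytes, then round the start up to the boundary;
  // with 512MB alignment each allocation reserves at least an extra 512MB.
  char *raw = static_cast<char *>(malloc(size + alignment));
  if (!raw) return nullptr;
  uintptr_t aligned =
      (reinterpret_cast<uintptr_t>(raw) + alignment - 1) & ~(alignment - 1);
  return reinterpret_cast<void *>(aligned);
}
```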
llvm-svn: 276779
Add a %stdcxx11 lit substitution for -std=c++11. Windows defaults to
-std=c++14 when VS 2015 is used because the STL requires it. Hardcoding
-std=c++11 in the ASan tests actually downgrades the C++ standard level,
leading to test failures.
Relax a FileCheck pattern in use-after-scope-types.cc.
Disable the sanitizer_common OOM tests. They fail on bots with low swap,
and cause other concurrently running tests to OOM.
llvm-svn: 276454
Summary:
This patch fixes the unit tests for the sanitizer memory allocator.
There were two issues:
1) VirtualAlloc can't reserve the same memory range twice.
The memory space used by the SizeClass allocator is reserved
with NoAccess and pages are committed on demand (using MmapFixedOrDie).
2) The address space is allocated using two VirtualAlloc calls: the first one
for the memory space, the second one for the AdditionnalSpace (after it).
On Windows, they need to be freed separately.
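A Windows-only sketch of the reserve/commit split behind issue 1 and the separate release behind issue 2 (wrapper names are illustrative; only MmapFixedOrDie is named in the change itself):
```
#include <windows.h>

// Reserve the whole space up front with no access; no pages are committed yet,
// and a second reservation of the same range would fail (issue 1).
void *ReserveSpaceNoAccess(void *fixed_addr, SIZE_T size) {
  return VirtualAlloc(fixed_addr, size, MEM_RESERVE, PAGE_NOACCESS);
}

// Commit pages on demand inside the reserved range (what MmapFixedOrDie does).
void *CommitFixedOrDie(void *fixed_addr, SIZE_T size) {
  void *p = VirtualAlloc(fixed_addr, size, MEM_COMMIT, PAGE_READWRITE);
  if (!p) ExitProcess(1);
  return p;
}

// Issue 2: ranges reserved by separate VirtualAlloc calls must be released by
// separate VirtualFree calls on their original base addresses.
void FreeTwoReservations(void *space_beg, void *additional_beg) {
  VirtualFree(space_beg, 0, MEM_RELEASE);
  VirtualFree(additional_beg, 0, MEM_RELEASE);
}
```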
Reviewers: rnk
Subscribers: llvm-commits, wang0109, kubabrecka, chrisha
Differential Revision: http://reviews.llvm.org/D21900
llvm-svn: 274772
Summary:
The unit test was not freeing the mapped memory.
```
Repeating all tests (iteration 1) . . .
Note: Google Test filter = Allocator.AllocatorCacheDeallocNewThread
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from Allocator
[ RUN ] Allocator.AllocatorCacheDeallocNewThread
[ OK ] Allocator.AllocatorCacheDeallocNewThread (3 ms)
[----------] 1 test from Allocator (4 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (5 ms total)
[ PASSED ] 1 test.
Repeating all tests (iteration 2) . . .
Note: Google Test filter = Allocator.AllocatorCacheDeallocNewThread
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from Allocator
[ RUN ] Allocator.AllocatorCacheDeallocNewThread
==4504==WARNING: SanitizerTool failed to mprotect 0x010000003000 (1099511640064) bytes at 0x010000000000 (error code: 487)
==4504==Sanitizer CHECK failed: D:/src/llvm/llvm/projects/compiler-rt/lib\sanitizer_common/sanitizer_allocator.h:329 ((kSpaceBeg)) == ((reinterpret_cast<uptr>( MmapFixedNoAccess(kSpaceBeg, TotalSpaceSize)))) (1099511627776, 0)
```
Reviewers: rnk
Subscribers: llvm-commits, kubabrecka, chrisha
Differential Revision: http://reviews.llvm.org/D22094
llvm-svn: 274764
This teaches sanitizer_common about s390 and s390x virtual space size.
s390 is unusual in that it has 31-bit virtual space.
Differential Revision: http://reviews.llvm.org/D18896
llvm-svn: 266296
Summary:
Turn "allocator_may_return_null" common flag into an
Allocator::may_return_null bool flag. We want to make sure
that common flags are immutable after initialization. There
are cases when we want to change this flag in the allocator
at runtime: e.g. in unit tests and during ASan activation
on Android.
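A minimal sketch of the shape of the change, with illustrative names: the value is cached inside the allocator rather than read from the (now immutable) common flags, and a setter covers the runtime cases above.
```
class CombinedAllocatorSketch {
 public:
  // Called from unit tests or during ASan activation on Android.
  void SetMayReturnNull(bool value) { may_return_null_ = value; }
  bool MayReturnNull() const { return may_return_null_; }

 private:
  bool may_return_null_ = false;  // cached copy of allocator_may_return_null
};
```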
Test Plan: regression test suite, real-life applications
Reviewers: kcc, eugenis
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6623
llvm-svn: 224148
Enable COMPILER_RT_INCLUDE_TESTS and update tests/sanitizer_allocator_test.cc to remove Allocator64-related tests for MIPS.
Reviewed By: samsonov
llvm-svn: 224101
I don't remember the crash on mmap in the internal allocator
ever yielding anything useful, only crashes in rare, weird, untested situations.
One of the reasons for the crash was to catch tsan starting to allocate
clocks using mmap; tsan does not allocate clocks using internal_alloc anymore.
Solve it once and for all by allowing mmaps.
llvm-svn: 217929
Summary:
Implement TwoLevelByteMap and use it for the internal allocator on 64-bit.
This reduces bss on 64-bit by ~8Mb because we don't use FlatByteMap on 64-bit anymore.
Dmitry, please check my understanding of atomics.
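A hedged sketch of a two-level byte map (not the actual TwoLevelByteMap, which uses mmap and atomics): the first level is a small table of pointers and second-level arrays are allocated lazily on first write, so an unused map costs only the first level rather than one flat byte per entry.
```
#include <cstdint>
#include <cstdlib>

template <uint64_t kSize1, uint64_t kSize2>  // kSize1 * kSize2 total entries
class TwoLevelByteMapSketch {
 public:
  void set(uint64_t idx, uint8_t val) {
    uint8_t *level2 = map1_[idx / kSize2];
    if (!level2) {
      // The real code maps this with mmap and installs it with an atomic
      // compare-exchange to stay thread-safe; calloc keeps the sketch short.
      level2 = static_cast<uint8_t *>(calloc(kSize2, 1));
      map1_[idx / kSize2] = level2;
    }
    level2[idx % kSize2] = val;
  }

  uint8_t operator[](uint64_t idx) const {
    const uint8_t *level2 = map1_[idx / kSize2];
    return level2 ? level2[idx % kSize2] : 0;
  }

 private:
  uint8_t *map1_[kSize1] = {};  // only this small first-level array lives in bss
};
```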
Reviewers: dvyukov
Reviewed By: dvyukov
CC: samsonov, llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2259
llvm-svn: 195637
Summary:
This fixes a deadlock that happens in lsan
when a large memalign-allocated chunk resides in lsan's root set.
Reviewers: samsonov, earthdok
Reviewed By: earthdok
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1957
llvm-svn: 192885