There is no centralized store of information related to secondary
allocations. Moreover, the allocations themselves become inaccessible
when freed in order to implement UAF detection, so we cannot store
information in them for use in the UAF case anyway.
Therefore our storage location for tracking stack traces of secondary
allocations is a ring buffer. The ring buffer is copied to the process
creating the crash dump when a fault occurs.
The ring buffer is also used to store stack traces for primary
deallocations. Stack traces for primary allocations continue to be
stored inline.
In order to support the scenario where an access to the ring buffer
is interrupted by a concurrently occurring crash, the ring buffer is
accessed in a lock-free manner.
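A minimal sketch of the lock-free pattern, with illustrative types and
names rather than the actual Scudo code:
```cpp
#include <atomic>
#include <cstdint>

// Hypothetical entry holding handles to the recorded stack traces.
struct Entry {
  std::atomic<uint64_t> AllocationTrace;
  std::atomic<uint64_t> DeallocationTrace;
};

template <size_t Size> struct RingBuffer {
  std::atomic<uint64_t> Pos{0};
  Entry Entries[Size];

  // Claim a slot with a single atomic increment; each field is then
  // written atomically, so a crash handler reading concurrently sees
  // either the old or the new value, never a torn one.
  void store(uint64_t AllocTrace, uint64_t DeallocTrace) {
    uint64_t Slot = Pos.fetch_add(1, std::memory_order_relaxed) % Size;
    Entries[Slot].AllocationTrace.store(AllocTrace,
                                        std::memory_order_relaxed);
    Entries[Slot].DeallocationTrace.store(DeallocTrace,
                                          std::memory_order_relaxed);
  }
};
```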
Differential Revision: https://reviews.llvm.org/D94212
This patch enhances the secondary allocator to be able to detect buffer
overflow, and (on hardware supporting memory tagging) use-after-free
and buffer underflow.
Use-after-free detection is implemented by setting memory page
protection to PROT_NONE on free. Because this must be done immediately
rather than after the memory has been quarantined, we no longer use the
combined allocator quarantine for secondary allocations. Instead, a
quarantine has been added to the secondary allocator cache.
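A minimal sketch of the idea (not the actual Scudo code):
```cpp
#include <cstddef>
#include <sys/mman.h>

// Make the pages of a freed secondary allocation inaccessible so that
// any later access faults immediately instead of silently reading or
// writing freed memory.
void protectOnFree(void *MapBase, size_t MapSize) {
  mprotect(MapBase, MapSize, PROT_NONE);
}
```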
Buffer overflow detection is implemented by aligning the allocation
to the right of the writable pages, so that any overflows will
spill into the guard page to the right of the allocation, which
will have PROT_NONE page protection. Because this would require the
secondary allocator to produce a header at the correct position,
the responsibility for ensuring chunk alignment has been moved to
the secondary allocator.
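A hedged sketch of the right-alignment math; the helper and its layout
assumptions are illustrative, not Scudo's exact internals:
```cpp
#include <cstddef>
#include <cstdint>

// Round X down to a multiple of Align (a power of two).
inline uintptr_t roundDownTo(uintptr_t X, uintptr_t Align) {
  return X & ~(Align - 1);
}

// Place the user block as far right as possible within the committed
// pages, so that an overflow of even one byte lands in the PROT_NONE
// guard page that follows the mapping.
uintptr_t rightAlignedBlock(uintptr_t CommitEnd, size_t Size,
                            size_t Alignment) {
  return roundDownTo(CommitEnd - Size, Alignment);
}
```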
Buffer underflow detection has been implemented on hardware supporting
memory tagging by tagging the memory region between the start of the
mapping and the start of the allocation with a non-zero tag. Due to
the cost of pre-tagging secondary allocations and the memory bandwidth
cost of tagged accesses, the allocation itself uses a tag of 0 and
only the first four pages have memory tagging enabled.
This is a reland of commit 7a0da88943 which was reverted in commit
9678b07e42. This reland includes the following changes:
- Fix the calculation of BlockSize which led to incorrect statistics
returned by mallinfo().
- Add -Wno-pedantic to silence a GCC warning.
- Optionally add some slack at the end of secondary allocations to help
work around buggy applications that read off the end of their
allocation.
Differential Revision: https://reviews.llvm.org/D93731
As of 4f395db86b, which contains updates to -Wfree-nonheap-object, a
line in this test triggers the warning. That particular line is OK,
though, since it is meant to test a free on a bad pointer.
Differential Revision: https://reviews.llvm.org/D97516
This CL introduces configuration options to allow pointers to be
compacted in the thread-specific caches and transfer batches. This
offers the possibility of having them use 32 bits of space instead of
64 bits for the 64-bit Primary, thus cutting the size of the caches
and batches by nearly half (and as such the memory used in size
class 0). The cost is an additional read from the region information
in the fast path.
This is not a new idea, as it's being used in the sanitizer_common
64-bit primary. The difference here is that it is configurable via
the allocator config, with the possibility of not compacting at all.
This CL enables compacting pointers in the Android and Fuchsia default
configurations.
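The compaction scheme looks roughly like this (a sketch; the scale
shift and names are assumptions, not the exact Scudo config):
```cpp
#include <cstdint>

// Blocks are assumed to be at least 16-byte aligned, so the low bits
// of the offset are always zero and can be shifted away.
constexpr unsigned CompactPtrScale = 4;

using CompactPtrT = uint32_t;

// Store blocks as a scaled offset from the region base: 32 bits then
// cover 2^32 * 16 = 64 GiB of region space.
CompactPtrT compactPtr(uintptr_t RegionBase, uintptr_t Ptr) {
  return static_cast<CompactPtrT>((Ptr - RegionBase) >> CompactPtrScale);
}

uintptr_t decompactPtr(uintptr_t RegionBase, CompactPtrT CPtr) {
  return RegionBase + (static_cast<uintptr_t>(CPtr) << CompactPtrScale);
}
```
The extra read in the fast path is the load of `RegionBase` needed to
decompact a cached pointer.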
Differential Revision: https://reviews.llvm.org/D96435
This patch enhances the secondary allocator to be able to detect buffer
overflow, and (on hardware supporting memory tagging) use-after-free
and buffer underflow.
Use-after-free detection is implemented by setting memory page
protection to PROT_NONE on free. Because this must be done immediately
rather than after the memory has been quarantined, we no longer use the
combined allocator quarantine for secondary allocations. Instead, a
quarantine has been added to the secondary allocator cache.
Buffer overflow detection is implemented by aligning the allocation
to the right of the writable pages, so that any overflows will
spill into the guard page to the right of the allocation, which
will have PROT_NONE page protection. Because this would require the
secondary allocator to produce a header at the correct position,
the responsibility for ensuring chunk alignment has been moved to
the secondary allocator.
Buffer underflow detection has been implemented on hardware supporting
memory tagging by tagging the memory region between the start of the
mapping and the start of the allocation with a non-zero tag. Due to
the cost of pre-tagging secondary allocations and the memory bandwidth
cost of tagged accesses, the allocation itself uses a tag of 0 and
only the first four pages have memory tagging enabled.
Differential Revision: https://reviews.llvm.org/D93731
GNU binutils accepts only `.arch_extension memtag` while Clang
accepts either that or `.arch_extension mte` to mean the same thing.
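For instance, in inline assembly (an aarch64-with-MTE illustration,
not the actual patch):
```cpp
// "memtag" is the spelling both Clang and GNU binutils accept;
// "mte" is Clang-only.
void *insertRandomTag(void *Ptr) {
  asm(".arch_extension memtag\n"
      "irg %0, %0\n" // insert a random tag into the pointer
      : "+r"(Ptr));
  return Ptr;
}
```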
Reviewed By: pcc
Differential Revision: https://reviews.llvm.org/D95996
Adds a new allocation API to GWP-ASan that handles size+alignment
restrictions.
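The usual alignment math such an API relies on looks like this (a
sketch; the names are illustrative, not GWP-ASan's actual interface):
```cpp
#include <cstdint>

// Round X up to the next multiple of Alignment (a power of two).
inline uintptr_t alignUp(uintptr_t X, uintptr_t Alignment) {
  return (X + Alignment - 1) & ~(Alignment - 1);
}

// An aligned allocation can then be carved out of a larger slot:
//   uintptr_t UserPtr = alignUp(SlotBase, Alignment);
// with Size bytes of usable space following UserPtr.
```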
Reviewed By: cryptoad, eugenis
Differential Revision: https://reviews.llvm.org/D94830
With D92696, the Scudo Standalone GWP-ASan flag parsing was changed to
the new GWP-ASan optional one. We do not necessarily want this, as this
duplicates flag parsing code in Scudo Standalone when using the
GWP-ASan integration.
This CL reverts the changes within Scudo Standalone, and increases
`MaxFlags` to 20, as an additional option got us to the current max.
Differential Revision: https://reviews.llvm.org/D95542
zxtest doesn't have `EXPECT_DEATH` and the Scudo unit-tests were
defining it as a no-op.
This enables death tests on Fuchsia by using `ASSERT_DEATH` instead.
I used a lambda to wrap the expressions, as `ASSERT_DEATH` does not
appear to work the same way as `EXPECT_DEATH`.
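The pattern looks roughly like this (a sketch of the zxtest-compatible
wrapper; the macro name is illustrative):
```cpp
// zxtest's ASSERT_DEATH takes a callable, so wrap the expression in a
// lambda to keep the gtest-style call sites unchanged.
#define EXPECT_DEATH_LAMBDA(X, Msg) ASSERT_DEATH(([&] { X; }), Msg)

// Usage (illustrative):
//   EXPECT_DEATH_LAMBDA(free(UntaggedPointer), "");
```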
Additionally, a death test using `alarm` was failing with the change,
as `alarm` is currently not implemented in Fuchsia, so move that test
within a `!SCUDO_FUCHSIA` block.
Differential Revision: https://reviews.llvm.org/D94362
In preparation for the inbuilt options parser, this is a minor refactor
of optional components including:
- Putting certain optional elements in the right header files,
according to their function and their dependencies.
- Cleaning up some old and mostly-dead code.
- Moving some functions into anonymous namespaces to prevent symbol
export.
Reviewed By: cryptoad, eugenis
Differential Revision: https://reviews.llvm.org/D94117
The primary and secondary allocators will need to share this bit,
so move the management of the bit to the combined allocator and
make useMemoryTagging() a free function.
Differential Revision: https://reviews.llvm.org/D93730
Kernel support for MTE has been released in Linux 5.10. This means
that it is a stable API and we no longer need to make the support
conditional on a macro. We do need to provide conditional definitions
of the new macros though in order to avoid a dependency on new
kernel headers.
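For example (a sketch; the values are believed to match the Linux 5.10
UAPI headers):
```cpp
#include <sys/prctl.h>

// Fallback definitions so the code still builds against older kernel
// headers that predate the MTE additions.
#ifndef PR_MTE_TCF_SHIFT
#define PR_MTE_TCF_SHIFT 1
#define PR_MTE_TCF_NONE (0UL << PR_MTE_TCF_SHIFT)
#define PR_MTE_TCF_SYNC (1UL << PR_MTE_TCF_SHIFT)
#define PR_MTE_TAG_SHIFT 3
#endif
#ifndef PROT_MTE
#define PROT_MTE 0x20 // aarch64-only mmap/mprotect protection flag
#endif
```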
Differential Revision: https://reviews.llvm.org/D93513
canAllocate() does not take the header size into account, so it does
not return the right answer in borderline cases. There was already
code handling this correctly in isTaggedAllocation(), so split it out
into a separate function and call it from the test.
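A sketch of the shape of the split-out helper (the sizes are
illustrative assumptions, not Scudo's actual constants):
```cpp
#include <cstddef>

constexpr size_t ChunkHeaderSize = 16;   // assumed header size
constexpr size_t MaxPrimarySize = 65536; // assumed largest size class

// The header lives alongside the user data, so it must be counted
// before comparing against the largest primary size class.
bool canBePrimaryAllocation(size_t Size) {
  return Size + ChunkHeaderSize <= MaxPrimarySize;
}
```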
Furthermore the test was incorrect when MTE is enabled because MTE
does not pattern fill primary allocations. Fix it.
Differential Revision: https://reviews.llvm.org/D93437
Initially we were avoiding the release of smaller size classes due to
the fact that it was an expensive operation, particularly on 32-bit
platforms. With a lot of batches, and given that there are a lot of
blocks per page, this was a lengthy operation with little result.
There have been some improvements since then to the 32-bit release,
and we still have some criteria preventing us from wasting time
(e.g., 9x% of the blocks in the size class being free, etc).
Allowing the release of blocks < 128 bytes helps in situations where
a lot of small chunks would not have been reclaimed if not for a
forced reclaiming.
Additionally, change some `CHECK`s to `DCHECK`s and rearrange the
code a bit.
I didn't experience any regressions in my benchmarks.
Differential Revision: https://reviews.llvm.org/D93141
Make these arguments named constants in the Config class instead
of being positional arguments to MapAllocatorCache. This makes the
configuration easier to follow.
Eventually we should follow suit with the other classes but this is
a start.
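The pattern, sketched with illustrative names:
```cpp
#include <cstdint>

// Before: positional template arguments, opaque at the call site, e.g.
//   MapAllocatorCache<32U, 1U << 19, INT32_MIN, INT32_MAX> Cache;
//
// After: named constants grouped in the Config class.
struct ExampleConfig {
  static const uint32_t SecondaryCacheEntriesArraySize = 32U;
  static const uint32_t SecondaryCacheMaxEntrySize = 1U << 19;
  static const int32_t SecondaryCacheMinReleaseToOsIntervalMs = INT32_MIN;
  static const int32_t SecondaryCacheMaxReleaseToOsIntervalMs = INT32_MAX;
};
// MapAllocatorCache<ExampleConfig> Cache; // self-documenting
```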
Differential Revision: https://reviews.llvm.org/D93251
There are a few things that I wanted to reorganize for a while:
- the loop that incrementally goes through classes on failure looked
horrible in assembly, mostly because of `LIKELY`/`UNLIKELY` within
the loop. So remove those; we are already in an unlikely scenario
- hooks are not used by default on Android/Fuchsia/etc so mark the
tests for the existence of the weak functions as unlikely
- mark a couple of conditions as likely/unlikely
- in `reallocate`, the old size was computed again while we already
have it in a variable. So just use the one we have.
- remove the bitwise AND trick and use a logical AND that has one
  less test, by using a purposeful underflow when `Size` is 0; see
  the sketch after this list (I actually looked at the assembly of
  the previous code to steal that trick)
- move the read of the options closer to where they are used, mark them
as `const`
Overall this makes things a tiny bit faster, but mostly cleaner.
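Here is a minimal sketch of the kind of single-comparison check the
purposeful underflow enables (the limit and names are illustrative):
```cpp
#include <cstddef>

// Assumed limit, for illustration only.
constexpr size_t MaxAllowedSize = size_t(1) << 40;

// Tests "Size == 0 || Size > MaxAllowedSize" with a single unsigned
// comparison: when Size is 0, Size - 1 wraps to SIZE_MAX, which is
// >= MaxAllowedSize, so the zero case and the too-large case fail
// the same test.
bool sizeIsInvalid(size_t Size) {
  return Size - 1 >= MaxAllowedSize;
}
```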
Differential Revision: https://reviews.llvm.org/D92689
Normally compilers will allocate space for struct fields even if the
field is an empty struct. Use the [[no_unique_address]] attribute to
suppress that behavior. The attribute was introduced in C++20, but
compilers that do not support [[no_unique_address]] will ignore it,
since it uses C++11 attribute syntax.
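A small self-contained illustration of the effect:
```cpp
#include <cstdio>

struct Empty {}; // e.g. a no-op component disabled by the config

struct WithoutAttr {
  Empty E; // still occupies at least one byte, plus padding
  long Value;
};

struct WithAttr {
  [[no_unique_address]] Empty E; // may share its address with Value
  long Value;
};

int main() {
  // Typically prints 16 then 8 on LP64 targets.
  printf("%zu %zu\n", sizeof(WithoutAttr), sizeof(WithAttr));
}
```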
Differential Revision: https://reviews.llvm.org/D92966
Quarantines have always been broken when MTE is enabled because the
quarantine batch allocator fails to reset tags that may have been
left behind by a user allocation.
This was only noticed when running the Scudo unit tests with Scudo
as the system allocator because quarantines are turned off by
default on Android and the test binary turns them on by defining
__scudo_default_options, which affects the system allocator as well.
Differential Revision: https://reviews.llvm.org/D92881
Separate the IRG part from the STZG part since we will need to use
the latter on its own for some upcoming changes.
Differential Revision: https://reviews.llvm.org/D92880
In `ScopedString::append`, a `va_list` named `ArgsCopy` is created but
never cleaned up with `va_end`, which can lead to undefined behavior
such as stack corruption.
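A minimal sketch of the correct pattern: every `va_copy` needs a
matching `va_end` before the function returns.
```cpp
#include <cstdarg>
#include <cstdio>

void appendV(const char *Format, va_list Args) {
  va_list ArgsCopy;
  va_copy(ArgsCopy, Args);
  // First pass over the copy, e.g. to measure the needed length.
  vsnprintf(nullptr, 0, Format, ArgsCopy);
  va_end(ArgsCopy); // the missing cleanup the fix adds
  // ... second pass over Args to do the actual formatting ...
}
```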
Reviewed By: cryptoad
Differential Revision: https://reviews.llvm.org/D92383
The original code to keep track of the minimum and maximum indices
of allocated 32-bit primary regions was sketchy at best.
`MinRegionIndex` & `MaxRegionIndex` were shared between all size
classes, and could (theoretically) have been updated concurrently. This
didn't materialize anywhere I could see, but it's still not proper.
This changes those min/max indices by making them class specific rather
than global: classes are locked when growing, so there is no
concurrency there. This also allows us to simplify some of the 32-bit
release code, which now doesn't have to go through all the regions to
get the proper min/max. Iterate and unmap will no longer have access to
the global min/max, but they aren't used as much so this is fine.
Differential Revision: https://reviews.llvm.org/D91106
This unit test code was using malloc without a corresponding free.
When the system malloc is not being overridden by the code under
test, it might be an ASan/LSan allocator that notices leaks.
Reviewed By: phosek
Differential Revision: https://reviews.llvm.org/D91472
`populateFreelist` was more complicated than it needed to be. We used
to call `populateBatches`, which would do some internal shuffling and
add pointers one by one to the batches, but ultimately this was not
needed. We can get rid of `populateBatches`, and do processing in
bulk. This doesn't necessarily make things faster as this is not on the
hot path, but it makes the function cleaner.
Additionally, clean up a couple of items, like `UNLIKELY`s and the
setting of `Exhausted` to `false`, which can't happen.
Differential Revision: https://reviews.llvm.org/D90700
There is no need to memset released pages because they are already
zero. On db845c, before:
```
BM_stdlib_malloc_free_default/131072 34562 ns 34547 ns 20258 bytes_per_second=3.53345G/s
```
after:
```
BM_stdlib_malloc_free_default/131072 29618 ns 29589 ns 23485 bytes_per_second=4.12548G/s
```
Differential Revision: https://reviews.llvm.org/D90814
- we have clutter-reducing helpers for relaxed atomics that were barely
  used; use them everywhere we can
- clang-format everything with a recent version
Differential Revision: https://reviews.llvm.org/D90649
Move some of the flags previously in Options, as well as the
UseMemoryTagging flag previously in the primary allocator, into an
atomic variable so that it can be updated while other threads are
running. Relaxed accesses are used because we only have the requirement
that the other threads see the new value eventually.
The code is set up so that the variable is generally loaded once per
allocation function call with the exception of some rarely used code
such as error handlers. The flag bits can generally stay in a register
during the execution of the allocation function which means that they
can be branched on with minimal overhead (e.g. TBZ on aarch64).
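A hedged sketch of the pattern, with illustrative names:
```cpp
#include <atomic>
#include <cstdint>

enum class OptionBit : unsigned { MayReturnNull = 0, ZeroContents = 1 };

struct AtomicOptions {
  std::atomic<uint32_t> Val{0};

  // Relaxed is enough: other threads only need to observe the new
  // value eventually, not in any particular order.
  uint32_t load() const { return Val.load(std::memory_order_relaxed); }
  void set(OptionBit B) {
    Val.fetch_or(1U << static_cast<unsigned>(B),
                 std::memory_order_relaxed);
  }
};

// In the allocation path: one load, then cheap register bit tests.
//   uint32_t Options = Opts.load();
//   if (Options & (1U << static_cast<unsigned>(OptionBit::ZeroContents)))
//     ...
```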
Differential Revision: https://reviews.llvm.org/D88523
Said test was flaking on Fuchsia for non-obvious reasons, and only
for ASan variants (the release was returning 0).
It turned out that the templating was off, `true` being promoted to
a `s32` and used as the minimum interval argument. This meant that in
some circumstances, the normal release would occur, and the forced
release would have nothing to release, hence the 0 bytes released.
The symbols are giving it away (note the 1):
```
scudo::SizeClassAllocator64<scudo::FixedSizeClassMap<scudo::DefaultSizeClassConfig>,24ul,1,2147483647,false>::releaseToOS(void)
```
This also probably means that there was no MTE version of that test!
Differential Revision: https://reviews.llvm.org/D88457
`atomic_compare_exchange_weak` is unused in Scudo, and its associated
test is actually wrong since the weak variant is allowed to fail
spuriously (thanks Roland).
This led to flakes such as:
```
[ RUN ] ScudoAtomicTest.AtomicCompareExchangeTest
../../zircon/third_party/scudo/src/tests/atomic_test.cpp:98: Failure: Expected atomic_compare_exchange_weak(reinterpret_cast<T *>(&V), &OldVal, NewVal, memory_order_relaxed) is true.
Expected: true
Which is: 01
Actual : atomic_compare_exchange_weak(reinterpret_cast<T *>(&V), &OldVal, NewVal, memory_order_relaxed)
Which is: 00
../../zircon/third_party/scudo/src/tests/atomic_test.cpp:100: Failure: Expected atomic_compare_exchange_weak( reinterpret_cast<T *>(&V), &OldVal, NewVal, memory_order_relaxed) is false.
Expected: false
Which is: 00
Actual : atomic_compare_exchange_weak( reinterpret_cast<T *>(&V), &OldVal, NewVal, memory_order_relaxed)
Which is: 01
../../zircon/third_party/scudo/src/tests/atomic_test.cpp:101: Failure: Expected OldVal == NewVal.
Expected: NewVal
Which is: 24
Actual : OldVal
Which is: 42
[ FAILED ] ScudoAtomicTest.AtomicCompareExchangeTest (0 ms)
[----------] 2 tests from ScudoAtomicTest (1 ms total)
```
So I am removing this; if someone ever needs the weak variant, feel
free to add it back with a test that is not as terrible. This test was
initially ported from sanitizer_common, but their weak version calls
the strong version, so it works for them.
Differential Revision: https://reviews.llvm.org/D88443
Move smaller and frequently-accessed fields near the beginning
of the data structure in order to improve locality and reduce
the number of instructions required to form an access to those
fields. With this change I measured a ~5% performance improvement on
BM_malloc_sql_trace_default on aarch64 Android devices (Pixel 4 and
DragonBoard 845c).
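The idea, sketched on an illustrative struct (not the actual Scudo
layout):
```cpp
#include <cstdint>

// Hot, small fields first: they share the first cache line and their
// offsets fit the short immediate forms of load/store instructions.
struct LayoutAfter {
  uint32_t HotCounter; // frequently accessed, offset 0
  uint16_t HotIndex;
  uint8_t Flags;
  // Cold bulk data follows, out of the hot cache line.
  uint64_t RarelyUsedState[16];
};
```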
Differential Revision: https://reviews.llvm.org/D88350
Fix a potential UB in `appendSignedDecimal` (with -INT64_MIN) by making
it a special case.
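One well-defined way to handle it (a sketch of the idea, not the exact
patch): compute the magnitude in unsigned arithmetic, since negating
INT64_MIN in `int64_t` overflows.
```cpp
#include <cstdint>
#include <cstdio>

void printSignedDecimal(int64_t Num) {
  uint64_t UNum = static_cast<uint64_t>(Num);
  if (Num < 0) {
    putchar('-');
    UNum = ~UNum + 1; // unsigned negation: defined for every value
  }
  printf("%llu", static_cast<unsigned long long>(UNum));
}
```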
Fix the terrible test cases for `isOwned`: I was pretty sloppy on those
and used some stack & static variables, but since `isOwned` accesses
memory prior to the pointer to check for the validity of the Scudo
header, it ended up being detected as global and stack buffer
out-of-bounds accesses. So now I am using buffers with enough room so
that the test will not access memory prior to the variables.
With those fixes, the tests pass on the ASan+UBSan Fuchsia build.
Thanks to Roland for pointing those out!
Differential Revision: https://reviews.llvm.org/D88170
https://reviews.llvm.org/D87420 removed the uses of the pthread key,
but the key itself was left in the shared TSD registry. It is created
on registry initialization, and destroyed on registry teardown.
There is really no use for it now, so we can just remove it.
Differential Revision: https://reviews.llvm.org/D88046
1U has type unsigned int, and << of 32 or more is undefined behavior.
Use the proper type in the lhs of the shift.
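For example:
```cpp
#include <cstdint>

// 1U << 32 is UB: the left operand is only 32 bits wide after
// promotion. Widening the left operand first keeps shifts of 32..63
// well defined.
uint64_t bitMask(unsigned Shift) {
  return static_cast<uint64_t>(1) << Shift;
}
```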
Reviewed By: cryptoad
Differential Revision: https://reviews.llvm.org/D87973
Here "memory initialization" refers to zero- or pattern-init on
non-MTE hardware, or (where possible to avoid) memory tagging on MTE
hardware. With shared TSD the per-thread memory initialization state
is stored in bit 0 of the TLS slot, similar to PointerIntPair in LLVM.
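A sketch of the PointerIntPair-style packing (names are illustrative):
```cpp
#include <cstdint>

struct TSD; // opaque thread-specific data

// The TSD pointer is at least 2-byte aligned, so bit 0 is free to
// carry the per-thread memory initialization state.
inline uintptr_t packTlsSlot(TSD *Ptr, bool DisableMemInit) {
  return reinterpret_cast<uintptr_t>(Ptr) |
         static_cast<uintptr_t>(DisableMemInit);
}
inline TSD *tlsSlotPointer(uintptr_t Slot) {
  return reinterpret_cast<TSD *>(Slot & ~uintptr_t(1));
}
inline bool tlsSlotMemInitDisabled(uintptr_t Slot) { return Slot & 1; }
```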
Differential Revision: https://reviews.llvm.org/D87739
An upcoming change to Scudo will change how we use the TLS slot
in tsd_shared.h, which will be a little easier to deal with if
we can remove the code path that calls pthread_getspecific and
pthread_setspecific. The only known user of this code path is Fuchsia.
We can't eliminate this code path by making Fuchsia use ELF TLS
because although Fuchsia supports ELF TLS, it is not supported within
libc itself. To address this, Roland McGrath on the Fuchsia team has
proposed that Scudo will optionally call a platform-provided function
to access a TLS slot reserved for Scudo. Android also has a reserved
TLS slot, but the code that accesses the TLS slot lives in Scudo.
We can eliminate some complexity and duplicated code by having Android
use the same mechanism that was proposed for Fuchsia, which is what
this change does. A separate change to Android implements it.
Differential Revision: https://reviews.llvm.org/D87420
I had left this as a TODO, but it turns out it wasn't complicated.
Specifying `MAP_RESIZABLE` allows us to keep the VMO, which we can
then use for release purposes.
`releasePagesToOS` also had to be called the "proper" way, as Fuchsia
requires the `Offset` field to be correct. This has no impact on
non-Fuchsia platforms.
Differential Revision: https://reviews.llvm.org/D86800
With the 'new' way of releasing on 32-bit, we iterate through all the
regions in between `First` and `Last`, which covers regions that do
not belong to the size class we are working with. This is effectively
wasted cycles.
With this change, we add a `SkipRegion` lambda to `releaseFreeMemoryToOS`
that will allow the release function to know when to skip a region.
For the 64-bit primary, since we are only working with 1 region, we never
skip.
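A sketch of the shape of the change (the real signature in release.h
differs; this is illustrative):
```cpp
#include <cstddef>

template <typename SkipRegionT>
void releaseFreeMemoryToOS(size_t First, size_t Last,
                           SkipRegionT SkipRegion) {
  for (size_t I = First; I <= Last; I++) {
    if (SkipRegion(I)) // e.g. the region belongs to another size class
      continue;
    // ... scan the region's free blocks and release whole pages ...
  }
}

// 64-bit primary: a single region, so never skip:
//   releaseFreeMemoryToOS(0, 0, [](size_t) { return false; });
```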
Reviewed By: hctim
Differential Revision: https://reviews.llvm.org/D86399