Commit Graph

47 Commits

Author SHA1 Message Date
Kostya Kortchinsky 5009be2f09 [scudo] Fix format string specifiers
Enable `-Wformat` again, and fix the offending instances.

Differential Revision: https://reviews.llvm.org/D108168
2021-08-17 08:37:49 -07:00
Kostya Kortchinsky 8b062b6160 [scudo] Ensure proper allocator alignment in TSD test
The `MockAllocator` used in `ScudoTSDTest` wasn't allocated with
proper alignment, which resulted in the `TSDs` of the shared
registry not being aligned either. This led to some failures
like: https://reviews.llvm.org/D103119#2822008

This changes how the `MockAllocator` is allocated, the same way
Vitaly did in the combined tests, properly aligning it, which
results in the `TSDs` being aligned as well.

Add a `DCHECK` in the shared registry to check that it is.
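A minimal sketch of the idea, assuming a C++17 toolchain and a hypothetical helper name (the actual test has its own plumbing): allocate the mock allocator through the aligned `operator new` so that any TSDs embedded in it inherit the required alignment.

```cpp
#include <new>

// Hypothetical helper: allocate an object with its natural alignment so the
// members embedded in it (e.g. the TSDs) are properly aligned as well.
template <class AllocatorT> AllocatorT *createAlignedAllocator() {
  void *Memory = ::operator new(sizeof(AllocatorT),
                                std::align_val_t(alignof(AllocatorT)));
  return new (Memory) AllocatorT();
}
```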

Differential Revision: https://reviews.llvm.org/D104402
2021-06-16 14:21:58 -07:00
Daniel Michael 2551053e8d [scudo] Add Scudo support for Trusty OS
trusty.cpp and trusty.h define Trusty implementations of map and other
platform-specific functions. In addition to adding Trusty configurations
in allocator_config.h and size_class_map.h, MapSizeIncrement and
PrimaryEnableRandomOffset are added as configurable options in
allocator_config.h.
Background on Trusty: https://source.android.com/security/trusty
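A hedged sketch of what such a configuration entry could look like; the option names come from the commit message, but the struct name and the values are illustrative, not the actual Trusty configuration.

```cpp
// Illustrative configuration entry in the style of allocator_config.h.
struct ExampleTrustyConfig {
  static const bool PrimaryEnableRandomOffset = false; // keep mappings compact
  static const unsigned long MapSizeIncrement = 1UL << 12; // grow one page at a time
};
```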

Differential Revision: https://reviews.llvm.org/D103578
2021-06-08 14:02:10 -07:00
Kostya Kortchinsky 868317b3fd [scudo] Rework Vector/String
Some platforms (e.g., Trusty) are extremely memory constrained, which
doesn't necessarily work well with some of Scudo's current assumptions.

`Vector` by default (and as such `String` and `ScopedString`) maps a
page, which is a bit of a waste. This CL changes `Vector` to use a
buffer local to the class first, then potentially map more memory if
needed (`ScopedString`s are currently all stack based, so it would be
stack data). We also want to allow a platform to prevent any dynamic
resizing, so I added a `CanGrow` template parameter that for now is
always `true` but would be set to `false` on Trusty.
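A simplified sketch of the scheme (not the actual `scudo::Vector`): a buffer local to the object backs the container first, and growth is only attempted when the `CanGrow` parameter allows it; scudo would `map()` rather than use `new`/`delete`.

```cpp
#include <cstddef>

// Small-buffer container sketch: LocalData is used until it is full, then a
// larger heap buffer is allocated, but only if CanGrow permits it.
template <typename T, std::size_t LocalCapacity, bool CanGrow>
class SmallVector {
public:
  ~SmallVector() {
    if (Data != LocalData)
      delete[] Data;
  }
  bool push_back(const T &Element) {
    if (Size == Capacity) {
      if (!CanGrow)
        return false; // constrained platforms refuse any dynamic resizing
      grow(Capacity * 2);
    }
    Data[Size++] = Element;
    return true;
  }
  std::size_t size() const { return Size; }

private:
  void grow(std::size_t NewCapacity) {
    T *NewData = new T[NewCapacity];
    for (std::size_t I = 0; I < Size; I++)
      NewData[I] = Data[I];
    if (Data != LocalData)
      delete[] Data;
    Data = NewData;
    Capacity = NewCapacity;
  }
  T LocalData[LocalCapacity];
  T *Data = LocalData;
  std::size_t Size = 0;
  std::size_t Capacity = LocalCapacity;
};
```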

Differential Revision: https://reviews.llvm.org/D103641
2021-06-03 18:12:24 -07:00
Kostya Kortchinsky a45877eea8 [scudo] Get rid of initLinkerInitialized
Now that everything is forcibly linker initialized, it feels like a
good time to get rid of the `init`/`initLinkerInitialized` split.

This allows us to get rid of various `memset` constructs in `init` that
gcc complains about (this fixes an open Fuchsia issue).

I added various `DCHECK`s to ensure that we would get a zero-inited
object when entering `init`, which required ensuring that
`unmapTestOnly` leaves the object in a good state (tests are currently
the only location where an allocator can be "de-initialized").

Running the tests with `--gtest_repeat=` showed no issue.

Differential Revision: https://reviews.llvm.org/D103119
2021-05-26 09:53:40 -07:00
Vitaly Buka d56ef8523c [scudo] Use require_constant_initialization
The attribute guarantees safe static initialization of globals.
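A minimal illustration of the Clang attribute; the variable here is only an example. Compilation fails if the global cannot be constant-initialized, so no runtime static initializer can sneak in.

```cpp
// With the attribute, the compiler rejects this declaration unless the
// initializer is a constant expression.
__attribute__((require_constant_initialization)) static int GlobalCounter = 0;
```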

Reviewed By: hctim

Differential Revision: https://reviews.llvm.org/D101514
2021-05-01 01:46:47 -07:00
Vitaly Buka ea7618684c Revert "[scudo] Use require_constant_initialization"
This reverts commit 7ad4dee3e7.
2021-04-29 09:55:54 -07:00
Vitaly Buka 7ad4dee3e7 [scudo] Use require_constant_initialization
The attribute guarantees safe static initialization of globals.

Differential Revision: https://reviews.llvm.org/D101514
2021-04-29 09:47:59 -07:00
Kostya Kortchinsky 3f97c66b00 [scudo][standalone] Fuchsia related fixes
While attempting to roll the latest Scudo in Fuchsia, some issues
arose. While trying to debug them, it appeared that `DCHECK`s were
also never exercised in Fuchsia. This CL fixes the following
problems:
- the size of a block in the TransferBatch class must be a multiple
  of the compact pointer scale. In some cases, it wasn't true, which
  led to obscure crashes. Now, we round up `sizeof(TransferBatch)`
  (see the sketch after this list). This only materialized in Fuchsia
  due to the specific parameters of the `DefaultConfig`;
- 2 `DCHECK` statements in Fuchsia were incorrect;
- `map()` & co. require a size multiple of a page (as enforced in
  Fuchsia `DCHECK`s), which wasn't the case for `PackedCounters`.
- In the Secondary, a parameter was marked as `UNUSED` while it was
  actually used.
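A hedged sketch of the rounding mentioned in the first bullet; the helper, the scale, and the example `sizeof` value are assumptions, not the actual scudo constants.

```cpp
// Round X up to a power-of-two boundary.
constexpr unsigned long roundUpTo(unsigned long X, unsigned long Boundary) {
  return (X + Boundary - 1) & ~(Boundary - 1); // Boundary must be a power of 2
}
constexpr unsigned long CompactPtrScale = 4; // illustrative scale
constexpr unsigned long ExampleBatchSize =
    roundUpTo(/*sizeof(TransferBatch)=*/56, 1UL << CompactPtrScale);
static_assert(ExampleBatchSize % (1UL << CompactPtrScale) == 0,
              "the batch block size must be a multiple of the scale");
```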

Differential Revision: https://reviews.llvm.org/D100524
2021-04-15 13:26:23 -07:00
Peter Collingbourne 3f71ce8589 scudo: Support memory tagging in the secondary allocator.
This patch enhances the secondary allocator to be able to detect buffer
overflow, and (on hardware supporting memory tagging) use-after-free
and buffer underflow.

Use-after-free detection is implemented by setting memory page
protection to PROT_NONE on free. Because this must be done immediately
rather than after the memory has been quarantined, we no longer use the
combined allocator quarantine for secondary allocations. Instead, a
quarantine has been added to the secondary allocator cache.

Buffer overflow detection is implemented by aligning the allocation
to the right of the writable pages, so that any overflows will
spill into the guard page to the right of the allocation, which
will have PROT_NONE page protection. Because this would require the
secondary allocator to produce a header at the correct position,
the responsibility for ensuring chunk alignment has been moved to
the secondary allocator.
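A hedged sketch of the right-alignment idea, with assumed names: the end of the user block is placed against the end of the committed pages, so a linear overflow runs straight into the `PROT_NONE` guard page that follows.

```cpp
#include <cstdint>

// Compute where the user allocation starts so that it ends exactly at the
// end of the committed (writable) pages; Alignment must be a power of 2.
uintptr_t userStart(uintptr_t CommitBase, uintptr_t CommitSize, uintptr_t Size,
                    uintptr_t Alignment) {
  const uintptr_t End = CommitBase + CommitSize;
  return (End - Size) & ~(Alignment - 1); // round down to the alignment
}
```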

Buffer underflow detection has been implemented on hardware supporting
memory tagging by tagging the memory region between the start of the
mapping and the start of the allocation with a non-zero tag. Due to
the cost of pre-tagging secondary allocations and the memory bandwidth
cost of tagged accesses, the allocation itself uses a tag of 0 and
only the first four pages have memory tagging enabled.

This is a reland of commit 7a0da88943 which was reverted in commit
9678b07e42. This reland includes the following changes:

- Fix the calculation of BlockSize which led to incorrect statistics
  returned by mallinfo().
- Add -Wno-pedantic to silence GCC warning.
- Optionally add some slack at the end of secondary allocations to help
  work around buggy applications that read off the end of their
  allocation.

Differential Revision: https://reviews.llvm.org/D93731
2021-03-08 14:39:33 -08:00
Peter Collingbourne 9678b07e42 Revert 7a0da88943, "scudo: Support memory tagging in the secondary allocator."
We measured a 2.5 second (17.5%) regression in Android boot time
performance with this change.
2021-02-25 16:50:02 -08:00
Kostya Kortchinsky 2c56776a31 [scudo][standalone] Compact pointers for Caches/Batches
This CL introduces configuration options to allow pointers to be
compacted in the thread-specific caches and transfer batches. This
offers the possibility to have them use 32 bits of space instead of
64 for the 64-bit Primary, thus cutting the size of the caches
and batches by nearly half (and as such the memory used in size
class 0). The cost is an additional read from the region information
in the fast path.

This is not a new idea, as it's being used in the sanitizer_common
64-bit primary. The difference here is that it is configurable via
the allocator config, with the possibility of not compacting at all.
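A hedged sketch of the compaction scheme, with assumed names and scale; the extra read mentioned above would be the lookup of the region base.

```cpp
#include <cstdint>

// Store a 32-bit offset from the region base, scaled by the minimum block
// alignment, instead of a full 64-bit pointer.
uint32_t compactPtr(uintptr_t RegionBase, uintptr_t Ptr, unsigned Scale) {
  return static_cast<uint32_t>((Ptr - RegionBase) >> Scale);
}
uintptr_t decompactPtr(uintptr_t RegionBase, uint32_t CompactPtr,
                       unsigned Scale) {
  return RegionBase + (static_cast<uintptr_t>(CompactPtr) << Scale);
}
```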

This CL enables compacting pointers in the Android and Fuchsia default
configurations.

Differential Revision: https://reviews.llvm.org/D96435
2021-02-25 12:14:38 -08:00
Peter Collingbourne 7a0da88943 scudo: Support memory tagging in the secondary allocator.
This patch enhances the secondary allocator to be able to detect buffer
overflow, and (on hardware supporting memory tagging) use-after-free
and buffer underflow.

Use-after-free detection is implemented by setting memory page
protection to PROT_NONE on free. Because this must be done immediately
rather than after the memory has been quarantined, we no longer use the
combined allocator quarantine for secondary allocations. Instead, a
quarantine has been added to the secondary allocator cache.

Buffer overflow detection is implemented by aligning the allocation
to the right of the writable pages, so that any overflows will
spill into the guard page to the right of the allocation, which
will have PROT_NONE page protection. Because this would require the
secondary allocator to produce a header at the correct position,
the responsibility for ensuring chunk alignment has been moved to
the secondary allocator.

Buffer underflow detection has been implemented on hardware supporting
memory tagging by tagging the memory region between the start of the
mapping and the start of the allocation with a non-zero tag. Due to
the cost of pre-tagging secondary allocations and the memory bandwidth
cost of tagged accesses, the allocation itself uses a tag of 0 and
only the first four pages have memory tagging enabled.

Differential Revision: https://reviews.llvm.org/D93731
2021-02-22 14:35:39 -08:00
Peter Collingbourne faac1c02c8 scudo: Move the management of the UseMemoryTagging bit out of the Primary. NFCI.
The primary and secondary allocators will need to share this bit,
so move the management of the bit to the combined allocator and
make useMemoryTagging() a free function.

Differential Revision: https://reviews.llvm.org/D93730
2020-12-22 16:52:54 -08:00
Peter Collingbourne 6dfe5801e0 scudo: Move the configuration for the primary allocator to Config. NFCI.
This will allow the primary and secondary allocators to share
the MaySupportMemoryTagging bool.

Differential Revision: https://reviews.llvm.org/D93728
2020-12-22 14:54:40 -08:00
Kostya Kortchinsky 1dbf2c96bc [scudo][standalone] Allow the release of smaller sizes
Initially we were avoiding the release of smaller size classes due to
the fact that it was an expensive operation, particularly on 32-bit
platforms. With a lot of batches, and given that there are a lot of
blocks per page, this was a lengthy operation with little results.

There have been some improvements to the 32-bit release since then,
and we still have some criteria preventing us from wasting time
(e.g., 9X% free blocks in the size class, etc.).

Allowing the release of blocks < 128 bytes helps in situations where a lot
of small chunks would not have been reclaimed if not for a forced
reclaiming.

Additionally, change some `CHECK`s to `DCHECK`s and rearrange the code
a bit.

I didn't experience any regressions in my benchmarks.

Differential Revision: https://reviews.llvm.org/D93141
2020-12-17 10:01:57 -08:00
Kostya Kortchinsky c955989046 [scudo][standalone] Simplify populateFreelist
`populateFreelist` was more complicated than it needed to be. We used
to call `populateBatches`, which would do some internal shuffling and
add pointers one by one to the batches, but ultimately this was not
needed. We can get rid of `populateBatches`, and do processing in
bulk. This doesn't necessarily make things faster as this is not on the
hot path, but it makes the function cleaner.

Additionally, clean up a couple of items, like `UNLIKELY`s and setting
`Exhausted` to `false` which can't happen.

Differential Revision: https://reviews.llvm.org/D90700
2020-11-06 09:44:36 -08:00
Kostya Kortchinsky b3420adf5a [scudo][standalone] Code tidying (NFC)
- we have clutter-reducing helpers for relaxed atomics that were barely
  used, use them everywhere we can
- clang-format everything with a recent version

Differential Revision: https://reviews.llvm.org/D90649
2020-11-02 16:00:31 -08:00
Peter Collingbourne 719ab7309e scudo: Make it thread-safe to set some runtime configuration flags.
Move some of the flags previously in Options, as well as the
UseMemoryTagging flag previously in the primary allocator, into an
atomic variable so that it can be updated while other threads are
running. Relaxed accesses are used because we only have the requirement
that the other threads see the new value eventually.

The code is set up so that the variable is generally loaded once per
allocation function call with the exception of some rarely used code
such as error handlers. The flag bits can generally stay in a register
during the execution of the allocation function which means that they
can be branched on with minimal overhead (e.g. TBZ on aarch64).
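A hedged sketch, with assumed names, of flag bits packed into a single atomic word accessed with relaxed ordering.

```cpp
#include <atomic>

// Runtime option flags in one atomic word; relaxed accesses are enough since
// other threads only need to observe an updated value eventually.
struct AtomicOptions {
  std::atomic<unsigned> Val{0};
  void set(unsigned Bit) { Val.fetch_or(1U << Bit, std::memory_order_relaxed); }
  void clear(unsigned Bit) {
    Val.fetch_and(~(1U << Bit), std::memory_order_relaxed);
  }
  bool get(unsigned Bit) const {
    return Val.load(std::memory_order_relaxed) & (1U << Bit);
  }
};
```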

Differential Revision: https://reviews.llvm.org/D88523
2020-09-30 09:42:45 -07:00
Kostya Kortchinsky bd5ca4f0ed [scudo][standalone] Skip irrelevant regions during release
With the 'new' way of releasing on 32-bit, we iterate through all the
regions in between `First` and `Last`, which covers regions that do not
belong to the class size we are working with. This is effectively wasted
cycles.

With this change, we add a `SkipRegion` lambda to `releaseFreeMemoryToOS`
that will allow the release function to know when to skip a region.
For the 64-bit primary, since we are only working with 1 region, we never
skip.
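A hedged sketch of such a predicate, with assumed names and data layout.

```cpp
#include <cstdint>
#include <vector>

// The 32-bit release code asks this predicate for every sub-region and skips
// the ones recorded under a different size class instead of scanning them.
bool skipRegion(const std::vector<uint8_t> &RegionClassIds, uintptr_t First,
                uintptr_t RegionIndex, uint8_t ClassId) {
  // RegionClassIds holds ClassId + 1 for mapped regions, 0 for unmapped ones.
  return RegionClassIds[First + RegionIndex] != ClassId + 1U;
}
```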

Reviewed By: hctim

Differential Revision: https://reviews.llvm.org/D86399
2020-08-25 07:41:02 -07:00
Kostya Kortchinsky 6f00f3b56e [scudo][standalone] mallopt runtime configuration options
Summary:
Partners have requested the ability to configure more parts of Scudo
at runtime, notably the Secondary cache options (maximum number of
blocks cached, maximum size) as well as the TSD registry options
(the maximum number of TSDs in use).

This CL adds a few more Scudo specific `mallopt` parameters that are
passed down to the various subcomponents of the Combined allocator.

- `M_CACHE_COUNT_MAX`: sets the maximum number of Secondary cached items
- `M_CACHE_SIZE_MAX`: sets the maximum size of a cacheable item in the Secondary
- `M_TSDS_COUNT_MAX`: sets the maximum number of TSDs that can be used (Shared Registry only)
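A usage sketch, assuming Android's Bionic headers which define these parameters; the values are illustrative (see the list above for what each one controls).

```cpp
#include <malloc.h>

// Tune Scudo at runtime through the Bionic mallopt extensions.
void tuneScudo() {
  mallopt(M_CACHE_COUNT_MAX, 16);     // cache at most 16 Secondary blocks
  mallopt(M_CACHE_SIZE_MAX, 2 << 20); // only cache blocks up to 2 MiB
  mallopt(M_TSDS_COUNT_MAX, 8);       // one-way: the count can only grow
}
```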

Regarding the TSDs maximum count, this is a one-way option, only
allowing the count to be increased.

In order to allow for this, I rearranged the code to add a `setOption`
member function to the relevant classes, using the `scudo::Option` class
enum to determine what is to be set.

This also fixes an issue where a static variable (`Ready`) was used in
templated functions without being set back to `false` every time.

Reviewers: pcc, eugenis, hctim, cferris

Subscribers: jfb, llvm-commits, #sanitizers

Tags: #sanitizers

Differential Revision: https://reviews.llvm.org/D84667
2020-07-28 11:57:54 -07:00
Kostya Kortchinsky 998334da2b [scudo][standalone] Change the release loop for efficiency purposes
Summary:
On 32-b, the release algo loops multiple times over the freelist for a size
class, which led to a decrease in performance when there were a lot of free
blocks.

This changes the release functions to loop only once over the freelist, at the
cost of using a little bit more memory for the release process: instead of
working on one region at a time, we pass the whole memory area covered by all
the regions for a given size class, and work on sub-areas of `RegionSize` in
this large area. For 64-b, we just have 1 sub-area encompassing the whole
region. Of course, not all the sub-areas within that large memory area will
belong to the class id we are working on, but those will just be left untouched
(which will not add to the RSS during the release process).

Reviewers: pcc, cferris, hctim, eugenis

Subscribers: llvm-commits, #sanitizers

Tags: #sanitizers

Differential Revision: https://reviews.llvm.org/D83993
2020-07-24 10:35:49 -07:00
Kostya Kortchinsky 79de8f8441 [scudo][standalone] Release smaller blocks less often
Summary:
Releasing smaller blocks is costly and only yields significant
results when there is a large percentage of free bytes for a given
size class (see numbers below).

This CL introduces a couple of additional checks for sizes lower
than 256. First, we want to make sure that there are enough free bytes
relative to the amount of allocated bytes. We are looking at 8X% to
9X% (smaller blocks require a higher percentage). We also want to make
sure there has been enough activity on the freelist to make it
worth the time, so we now check that the bytes pushed to the freelist
are at least 1/16th of the allocated bytes for those classes.
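A hedged sketch of those checks; the exact thresholds are illustrative, as the commit only states the 8X%-9X% range and the 1/16th activity requirement.

```cpp
#include <cstdint>

// Decide whether a small size class is worth releasing.
bool worthReleasingSmallBlocks(uintptr_t BlockSize, uintptr_t AllocatedBytes,
                               uintptr_t BytesInFreeList,
                               uintptr_t BytesPushedToFreeList) {
  if (BlockSize >= 256)
    return true;
  // Smaller blocks require a higher percentage of free bytes (values made up).
  const uintptr_t MinFreePercent = BlockSize <= 64 ? 96 : 88;
  if (BytesInFreeList * 100 < AllocatedBytes * MinFreePercent)
    return false;
  // Require enough freelist activity to make the release worth the time.
  return BytesPushedToFreeList >= AllocatedBytes / 16;
}
```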

Additionally, we clear batches before destroying them now - this
could have prevented some releases from occurring (class id 0 rarely
releases anyway).

Here are the numbers, for about 1M allocations in multiple threads:

Size: 16
85% freed -> 0% released
86% freed -> 0% released
87% freed -> 0% released
88% freed -> 0% released
89% freed -> 0% released
90% freed -> 0% released
91% freed -> 0% released
92% freed -> 0% released
93% freed -> 0% released
94% freed -> 0% released
95% freed -> 0% released
96% freed -> 0% released
97% freed -> 2% released
98% freed -> 7% released
99% freed -> 27% released
Size: 32
85% freed -> 0% released
86% freed -> 0% released
87% freed -> 0% released
88% freed -> 0% released
89% freed -> 0% released
90% freed -> 0% released
91% freed -> 0% released
92% freed -> 0% released
93% freed -> 0% released
94% freed -> 0% released
95% freed -> 1% released
96% freed -> 3% released
97% freed -> 7% released
98% freed -> 17% released
99% freed -> 41% released
Size: 48
85% freed -> 0% released
86% freed -> 0% released
87% freed -> 0% released
88% freed -> 0% released
89% freed -> 0% released
90% freed -> 0% released
91% freed -> 0% released
92% freed -> 0% released
93% freed -> 0% released
94% freed -> 1% released
95% freed -> 3% released
96% freed -> 7% released
97% freed -> 13% released
98% freed -> 27% released
99% freed -> 52% released
Size: 64
85% freed -> 0% released
86% freed -> 0% released
87% freed -> 0% released
88% freed -> 0% released
89% freed -> 0% released
90% freed -> 0% released
91% freed -> 0% released
92% freed -> 1% released
93% freed -> 2% released
94% freed -> 3% released
95% freed -> 6% released
96% freed -> 11% released
97% freed -> 20% released
98% freed -> 35% released
99% freed -> 59% released
Size: 80
85% freed -> 0% released
86% freed -> 0% released
87% freed -> 0% released
88% freed -> 0% released
89% freed -> 0% released
90% freed -> 1% released
91% freed -> 1% released
92% freed -> 2% released
93% freed -> 4% released
94% freed -> 6% released
95% freed -> 10% released
96% freed -> 17% released
97% freed -> 26% released
98% freed -> 41% released
99% freed -> 64% released
Size: 96
85% freed -> 0% released
86% freed -> 0% released
87% freed -> 0% released
88% freed -> 0% released
89% freed -> 1% released
90% freed -> 1% released
91% freed -> 3% released
92% freed -> 4% released
93% freed -> 6% released
94% freed -> 10% released
95% freed -> 14% released
96% freed -> 21% released
97% freed -> 31% released
98% freed -> 47% released
99% freed -> 68% released
Size: 112
85% freed -> 0% released
86% freed -> 1% released
87% freed -> 1% released
88% freed -> 2% released
89% freed -> 3% released
90% freed -> 4% released
91% freed -> 6% released
92% freed -> 8% released
93% freed -> 11% released
94% freed -> 16% released
95% freed -> 22% released
96% freed -> 30% released
97% freed -> 40% released
98% freed -> 55% released
99% freed -> 74% released
Size: 128
85% freed -> 0% released
86% freed -> 1% released
87% freed -> 1% released
88% freed -> 2% released
89% freed -> 3% released
90% freed -> 4% released
91% freed -> 6% released
92% freed -> 8% released
93% freed -> 11% released
94% freed -> 16% released
95% freed -> 22% released
96% freed -> 30% released
97% freed -> 40% released
98% freed -> 55% released
99% freed -> 74% released
Size: 144
85% freed -> 1% released
86% freed -> 2% released
87% freed -> 3% released
88% freed -> 4% released
89% freed -> 6% released
90% freed -> 7% released
91% freed -> 10% released
92% freed -> 13% released
93% freed -> 17% released
94% freed -> 22% released
95% freed -> 28% released
96% freed -> 37% released
97% freed -> 47% released
98% freed -> 61% released
99% freed -> 78% released
Size: 160
85% freed -> 1% released
86% freed -> 2% released
87% freed -> 3% released
88% freed -> 4% released
89% freed -> 5% released
90% freed -> 7% released
91% freed -> 10% released
92% freed -> 13% released
93% freed -> 17% released
94% freed -> 22% released
95% freed -> 28% released
96% freed -> 37% released
97% freed -> 47% released
98% freed -> 61% released
99% freed -> 78% released
Size: 176
85% freed -> 2% released
86% freed -> 3% released
87% freed -> 4% released
88% freed -> 6% released
89% freed -> 7% released
90% freed -> 9% released
91% freed -> 12% released
92% freed -> 15% released
93% freed -> 20% released
94% freed -> 25% released
95% freed -> 32% released
96% freed -> 40% released
97% freed -> 51% released
98% freed -> 64% released
99% freed -> 80% released
Size: 192
85% freed -> 4% released
86% freed -> 5% released
87% freed -> 6% released
88% freed -> 8% released
89% freed -> 10% released
90% freed -> 13% released
91% freed -> 16% released
92% freed -> 20% released
93% freed -> 24% released
94% freed -> 30% released
95% freed -> 37% released
96% freed -> 45% released
97% freed -> 55% released
98% freed -> 68% released
99% freed -> 82% released
Size: 224
85% freed -> 8% released
86% freed -> 10% released
87% freed -> 12% released
88% freed -> 14% released
89% freed -> 17% released
90% freed -> 20% released
91% freed -> 23% released
92% freed -> 28% released
93% freed -> 33% released
94% freed -> 39% released
95% freed -> 46% released
96% freed -> 53% released
97% freed -> 63% released
98% freed -> 73% released
99% freed -> 85% released
Size: 240
85% freed -> 8% released
86% freed -> 10% released
87% freed -> 12% released
88% freed -> 14% released
89% freed -> 17% released
90% freed -> 20% released
91% freed -> 23% released
92% freed -> 28% released
93% freed -> 33% released
94% freed -> 39% released
95% freed -> 46% released
96% freed -> 54% released
97% freed -> 63% released
98% freed -> 73% released
99% freed -> 85% released

Reviewers: cferris, pcc, hctim, eugenis

Subscribers: #sanitizers, llvm-commits

Tags: #sanitizers

Differential Revision: https://reviews.llvm.org/D82031
2020-07-16 09:44:25 -07:00
Peter Collingbourne 21d50019ca scudo: Add support for diagnosing memory errors when memory tagging is enabled.
Introduce a function __scudo_get_error_info() that may be called to interpret
a crash resulting from a memory error, potentially in another process,
given information extracted from the crashing process. The crash may be
interpreted as a use-after-free, buffer overflow or buffer underflow.

Also introduce a feature to optionally record a stack trace for each
allocation and deallocation. If this feature is enabled, a stack trace for
the allocation and (if applicable) the deallocation will also be available
via __scudo_get_error_info().

Differential Revision: https://reviews.llvm.org/D77283
2020-04-17 17:26:30 -07:00
Peter Collingbourne 9c86b83ffc scudo: Replace ALIGNED macro with standard alignas specifier.
alignas was introduced in C++11 and is already being used throughout LLVM.
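A minimal before/after illustration of the replacement.

```cpp
// Before: struct ALIGNED(32) Entry { char Data[32]; };
struct alignas(32) Entry {
  char Data[32];
};
```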

Differential Revision: https://reviews.llvm.org/D77823
2020-04-09 14:36:03 -07:00
Kostya Kortchinsky a0e86420ae [scudo][standalone] Do not fill 32b regions at once
Summary:
For the 32b primary, whenever we created a region, we would fill it
all at once (e.g., create all the transfer batches for all the blocks
in that region). This wasn't ideal as all the potential blocks in
a newly created region might not be consumed right away, and it was
using extra memory (and release cycles) to keep all those free
blocks.

So now we keep track of the current region for a given class, and
how filled it is, carving out at most `MaxNumBatches` worth of
blocks at a time.

Additionally, lower `MaxNumBatches` on Android from 8 to 4. This
lowers the randomness of blocks, which isn't ideal for security, but
keeps things more clumped up for PSS/RSS accounting purposes.

Subscribers: #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D75551
2020-03-04 14:22:24 -08:00
Kostya Kortchinsky c753a306fd [scudo][standalone] Various improvements wrt RSS
Summary:
This patch includes several changes to reduce the overall footprint
of the allocator:
- for realloc'd chunks: only keep the same chunk when lowering the size
  if the delta is within a page worth of bytes;
- when draining a cache: drain the beginning, not the end; we add pointers
  at the end, so that meant we were draining the most recently added
  pointers;
- change the release code to account for a freed-up last page: when
  scanning the pages, we were looking for pages fully covered by blocks;
  in the event of the last page, if it's only partially covered, we
  wouldn't mark it as releasable - even if what follows the last chunk is
  all 0s. So now mark the rest of the page as releasable, and adapt the
  test;
- add a missing `setReleaseToOsIntervalMs` to the cacheless secondary;
- adjust the Android classes based on more captures thanks to pcc@'s
  tool.

Reviewers: pcc, cferris, hctim, eugenis

Subscribers: #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D75142
2020-02-26 12:25:43 -08:00
Kostya Kortchinsky fc69967a4b [scudo][standalone] Shift some data from dynamic to static
Summary:
Most of our larger data is dynamically allocated (via `map`) but it
became a hindrance with regard to init time, for a cost-to-benefit
ratio that is not great. So change the `TSD`s, `RegionInfo`, `ByteMap`
to be static.

Additionally, for reclaiming, we used to map & unmap a buffer each
time, which is costly. It turns out that we can have a static buffer,
and that there isn't much contention on it.

One of the other things changed here is that we hard-set the number
of TSDs on Android to the maximum number, as there could be a
situation where cores are put to sleep and we could miss some.

Subscribers: mgorny, #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D74696
2020-02-18 09:38:50 -08:00
Christopher Ferris 5f91c7b980 [scudo][standalone] Allow setting release to OS
Summary:
Add a method to set the release to OS value as the system runs,
and allow this to be set differently in the primary and the secondary.
Also, add a default value to use for primary and secondary. This
allows Android to have a default that is different for
primary/secondary.

Update mallopt to support setting the release to OS value.

Reviewers: pcc, cryptoad

Reviewed By: cryptoad

Subscribers: cryptoad, jfb, #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D74448
2020-02-14 12:57:34 -08:00
Kostya Kortchinsky 64cb77b946 [scudo][standalone] Change default Android config
Summary:
This changes a couple of parameters in the default Android config to
address some performance and memory footprint issues (well to be closer
to the default Bionic allocator numbers).

Subscribers: #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D73750
2020-01-31 14:46:35 -08:00
Kostya Kortchinsky 654f5d6845 [scudo][standalone] Release secondary memory on purge
Summary:
The Secondary's cache needs to be released when the Combined's
`releaseToOS` function is called (via `M_PURGE`) for example,
which this CL adds.

Additionally, if doing a forced release, we'll release the
transfer batch class as well since now we can do that.

There are a couple of other housekeeping changes as well:
- read the page size only once in the Secondary Cache `store`
- remove the interval check for `CanRelease`: we are going to
  make that configurable via `mallopt`, so this need not be
  set in stone there.

Reviewers: cferris, hctim, pcc, eugenis

Subscribers: #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D73730
2020-01-30 13:22:45 -08:00
Kostya Kortchinsky 6cb830de6e [scudo][standalone] Revert some perf-degrading changes
Summary:
A couple of seemingly innocuous changes ended up having a large impact
on the 32-bit performance. I still have to make those configurable at
some point, but right now it will have to do.

Subscribers: #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D73658
2020-01-29 14:00:48 -08:00
Kostya Kortchinsky 993e3c9269 [scudo][standalone] Secondary & general other improvements
Summary:
This CL changes multiple things to improve performance (notably on
Android). We introduce a cache class for the Secondary that now takes
care of this mechanism.

The changes:
- change the Secondary "freelist" to an array. By keeping free secondary
  blocks linked together through their headers, we were keeping a page
  per block, which isn't great. Also, we now touch fewer pages when
  walking the new "freelist" (see the sketch after this list);
- fix an issue with the freelist getting full: if the pattern is an ever
  increasing size malloc then free, the freelist would fill up and
  entries would not be used. So now we empty the list if we get too many
  "full" events;
- use the global release to os interval option for the secondary: it
  was too costly to release all the time, particularly for patterns
  such as malloc(X)/free(X)/malloc(X). Now the release will only occur
  after the selected interval, when going through the deallocate path;
- allow release of the `BatchClassId` class: it is releasable, we just
  have to make sure we don't mark the batches containing batch
  pointers as free.
- change the default release interval to 1s for Android to match the
  current Bionic allocator configuration. A patch is coming up to allow
  changing it through `mallopt`.
- lower the smallest class that can be released to `PageSize/64`.
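A hedged sketch, with assumed names and size, of the array-based cache mentioned in the first bullet: freed large blocks are tracked in a small fixed array instead of being chained through their headers, so fewer pages are touched when searching it.

```cpp
#include <cstdint>

// One entry per cached secondary block.
struct CachedBlock {
  uintptr_t Block = 0;    // start of the mapping, 0 means the slot is empty
  uintptr_t BlockEnd = 0; // end of the usable range
  uint64_t Time = 0;      // when it was cached, for interval-based release
};
CachedBlock SecondaryCacheEntries[32]; // size is illustrative
```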

Reviewers: cferris, pcc, eugenis, morehouse, hctim

Subscribers: phosek, #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D73507
2020-01-28 07:28:55 -08:00
Peter Collingbourne c299d1981d scudo: Add initial memory tagging support.
When the hardware and operating system support the ARM Memory Tagging
Extension, tag primary allocation granules with a random tag. The granules
on either side of the allocation are tagged with tag 0, which is normally
excluded from the set of tags that may be selected randomly. Memory is
also retagged with a random tag when it is freed, and we opportunistically
reuse the new tag when the block is reused to reduce overhead. This causes
linear buffer overflows to be caught deterministically and non-linear buffer
overflows and use-after-free to be caught probabilistically.

This feature is currently only enabled for the Android allocator
and depends on an experimental Linux kernel branch available here:
https://github.com/pcc/linux/tree/android-experimental-mte

All code that depends on the kernel branch is hidden behind a macro,
ANDROID_EXPERIMENTAL_MTE. This is the same macro that is used by the Android
platform and may only be defined in non-production configurations. When the
userspace interface is finalized the code will be updated to use the stable
interface and all #ifdef ANDROID_EXPERIMENTAL_MTE will be removed.

Differential Revision: https://reviews.llvm.org/D70762
2020-01-16 13:27:49 -08:00
Kostya Kortchinsky 9ef6faf496 [scudo][standalone] Fork support
Summary:
fork() wasn't well (or at all) supported in Scudo. This materialized
in deadlocks in children.

In order to properly support fork, we will lock the allocator pre-fork
and unlock it post-fork in parent and child. This is done via a
`pthread_atfork` call installing the necessary handlers.

A couple of things suck here: this function allocates - so this has to
be done post-initialization as our init path is not reentrant, and it
doesn't allow for an extra pointer - so we can't pass the allocator we
are currently working with.

In order to work around this, I added a post-init template parameter
that gets executed once the allocator is initialized for the current
thread. Its job for the C wrappers is to install the atfork handlers.
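A hedged sketch of what the post-init callback installs for the C wrappers; `disableAllocator`/`enableAllocator` stand in for the real locking of the allocator's mutexes.

```cpp
#include <pthread.h>

static void disableAllocator() { /* lock all allocator mutexes */ }
static void enableAllocator() { /* unlock all allocator mutexes */ }

static void installAtForkHandlers() {
  // prepare: lock before fork; parent/child: unlock after fork.
  pthread_atfork(disableAllocator, enableAllocator, enableAllocator);
}
```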

I reorganized the impacted area a bit and added some tests, courtesy
of cferris@, that were deadlocking prior to this fix.

Subscribers: jfb, #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D72470
2020-01-14 07:51:48 -08:00
Peter Collingbourne 6fd6cfdf72 scudo: Replace a couple of macros with their expansions.
The macros INLINE and COMPILER_CHECK always expand to the same thing (inline
and static_assert respectively). Both expansions are standards compliant C++
and are used consistently in the rest of LLVM, so let's improve consistency
with the rest of LLVM by replacing them with the expansions.

Differential Revision: https://reviews.llvm.org/D70793
2019-11-27 10:12:27 -08:00
Kostya Kortchinsky 46240c3872 [scudo][standalone] Minor optimization & improvements
Summary:
A few small improvements and optimizations:
- when refilling the free list, push back the last batch and return
  the front one: this allows keeping the allocations towards the front
  of the region;
- instead of using 48 entries in the shuffle array, use a multiple of
  `MaxNumCached`;
- make the maximum number of batches to create on refill a constant;
  ultimately it should be configurable, but that's for later;
- `initCache` doesn't need to zero out the cache, it's already done.
- it turns out that when using `||` or `&&`, the compiler insists
  on adding a short circuit for every part of the expression, which
  ends up making somewhat annoying asm with lots of tests and
  conditional jumps. I am changing that to bitwise `|` or `&` in two
  places so that the generated code looks better (see the sketch after
  this list). Added comments since it might feel weird to people.
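A hedged illustration of the `|`/`&` point from the last bullet; the function and its operands are hypothetical, and the only requirement is that both sides are plain bools without side effects.

```cpp
// Both operands are already-computed bools, so combining them with a bitwise
// operator avoids the extra branch a short-circuiting '&&' would generate.
bool needsSlowPath(bool ZeroContents, bool TypeMismatch) {
  return ZeroContents & TypeMismatch; // intentionally '&', not '&&'
}
```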

This yields some small performance gains overall, nothing drastic
though.

Reviewers: hctim, morehouse, cferris, eugenis

Subscribers: #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D70452
2019-11-21 10:05:39 -08:00
Kostya Kortchinsky 6f2de9cbb3 [scudo][standalone] Consolidate lists
Summary:
This is a clean patch using the last diff of D69265, but using git
instead of svn, since svn went read-only and arc was making my life
harder than it needed to be.

I was going to introduce a couple more lists and realized that our
lists are currently a bit all over the place. While we have a singly
linked list type relatively well defined, we are using doubly linked
lists defined on the fly for the stats and for the secondary blocks.

This CL adds a doubly linked list object, reorganizing the singly list
one to extract as much of the common code as possible. We use this
new type in the stats and the secondary. We also reorganize the list
tests to benefit from this consolidation.

There are a few side effect changes such as using for iterator loops
that are, in my opinion, cleaner in a couple of places.

Reviewers: hctim, morehouse, pcc, cferris

Reviewed By: hctim

Subscribers: jfb, #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D69516
2019-10-28 09:34:36 -07:00
Kostya Kortchinsky f7b1489ffc [scudo][standalone] Get statistics in a char buffer
Summary:
Following up on D68471, this CL introduces some `getStats` APIs to
gather statistics in char buffers (`ScopedString` really) instead of
printing them out right away. Ultimately `printStats` will just
output the buffer, but that allows us to potentially do some work
on the intermediate buffer, and can be used for a `mallocz` type
of functionality. This allows us to pretty much get rid of all the
`Printf` calls around, but I am keeping the function in for
debugging purposes.

This changes the existing tests to use the new APIs when required.

I will add new tests as suggested in D68471 in another CL.

Reviewers: morehouse, hctim, vitalybuka, eugenis, cferris

Reviewed By: morehouse

Subscribers: delcypher, #sanitizers, llvm-commits

Tags: #llvm, #sanitizers

Differential Revision: https://reviews.llvm.org/D68653

llvm-svn: 374173
2019-10-09 15:09:28 +00:00
Kostya Kortchinsky bebdab63e8 [scudo][standalone] Correct releaseToOS behavior
Summary:
There was an issue in `releaseToOSMaybe`: one of the criteria to
decide if we should proceed with the release was wrong. Namely:

```
const uptr N = Sci->Stats.PoppedBlocks - Sci->Stats.PushedBlocks;
if (N * BlockSize < PageSize)
  return; // No chance to release anything.
```

I meant to check if the amount of bytes in the free list was lower
than a page, but this actually checks if the amount of **in use** bytes
was lower than a page.

The correct code is:

```
const uptr BytesInFreeList =
  Region->AllocatedUser -
  (Region->Stats.PoppedBlocks - Region->Stats.PushedBlocks) * BlockSize;
if (BytesInFreeList < PageSize)
  return 0; // No chance to release anything.
```

Consequences of the bug:
- if a class size has less than a page worth of in-use bytes (allocated
  or in a cache), reclaiming would not occur, whatever the amount of
  blocks in the free list; in real world scenarios this is unlikely to
  happen and be impactful;
- if a class size had less than a page worth of free bytes (and enough
  in-use bytes, etc), then reclaiming would be attempted, with likely
  no result. This means the reclaiming was overzealous at times.

I didn't have a good way to test for this, so I changed the prototype
of the function to return the number of bytes released, which allows
getting the information needed. The test added fails with the initial
criteria.

Another issue is that `ReleaseToOsInterval` can actually be 0, meaning
we always try to release (side note: it's terrible for performance),
so change a `> 0` check to `>= 0`.

Additionally, decrease the `CanRelease` threshold to `PageSize / 32`.
I still have to make that configurable but I will do it at another time.

Finally, rename some variables in `printStats`: I feel like "available"
was too ambiguous, so change it to "total".

Reviewers: morehouse, hctim, eugenis, vitalybuka, cferris

Reviewed By: morehouse

Subscribers: delcypher, #sanitizers, llvm-commits

Tags: #llvm, #sanitizers

Differential Revision: https://reviews.llvm.org/D68471

llvm-svn: 373930
2019-10-07 17:37:39 +00:00
Kostya Kortchinsky 3e5360f194 [scudo][standalone] Fix malloc_iterate
Summary:
cferris's Bionic tests found an issue in Scudo's `malloc_iterate`.

We were inclusive of both boundaries, which meant that a `Block`
located on said boundary could possibly be accounted for twice, or
accounted for while iterating over regions that are not ours
(usually the unmapped ones in between Primary regions).

The fix is to exclude the upper boundary in `iterateOverChunks`, and
add a regression test.
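A hedged sketch of the boundary fix, with assumed names.

```cpp
#include <cstdint>

// Exclude the upper boundary so a block sitting exactly on it cannot be
// visited by two adjacent ranges.
void iterateOverBlocks(uintptr_t Base, uintptr_t Size, uintptr_t BlockSize,
                       void (*Callback)(uintptr_t Block, void *Arg),
                       void *Arg) {
  for (uintptr_t Block = Base; Block < Base + Size; Block += BlockSize)
    Callback(Block, Arg); // previously the condition was '<=', hence the bug
}
```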

This additionally corrects a typo in a comment, and changes the 64-bit
Primary iteration function to not assume that `BatchClassId` is 0.

Reviewers: cferris, morehouse, hctim, vitalybuka, eugenis

Reviewed By: hctim

Subscribers: delcypher, #sanitizers, llvm-commits

Tags: #llvm, #sanitizers

Differential Revision: https://reviews.llvm.org/D66231

llvm-svn: 369400
2019-08-20 16:17:08 +00:00
Kostya Kortchinsky 2be59170d4 [scudo][standalone] Add more stats to mallinfo
Summary:
Android requires additional stats in mallinfo. While we can provide
right away the number of bytes mapped (Primary+Secondary), there was
no way to get the number of free bytes (only makes sense for the
Primary since the Secondary unmaps everything on deallocation).

An approximation could be `StatMapped - StatAllocated`, but since we
are mapping in `1<<17` increments for the 64-bit Primary, it's fairly
inaccurate.

So we introduce `StatFree` (note it's `Free`, not `Freed`!), which
keeps track of the amount of Primary blocks currently unallocated.

Reviewers: cferris, eugenis, vitalybuka, hctim, morehouse

Reviewed By: morehouse

Subscribers: delcypher, #sanitizers, llvm-commits

Tags: #llvm, #sanitizers

Differential Revision: https://reviews.llvm.org/D66112

llvm-svn: 368866
2019-08-14 16:04:01 +00:00
Kostya Kortchinsky 419f1a4185 [scudo][standalone] Optimization pass
Summary:
This introduces a bunch of small optimizations with the purpose of
making the fastpath tighter:
- tag more conditions as `LIKELY`/`UNLIKELY`: as a rule of thumb we
  consider that every operation related to the secondary is unlikely
- attempt to reduce the number of potentially extraneous instructions
- reorganize the `Chunk` header to not straddle a word boundary and
  use more appropriate types

Note that some `LIKELY`/`UNLIKELY` impact might be less obvious as
they are in slow paths (for example in `secondary.cc`), but at this
point I am throwing a pretty wide net, and it's consistent and doesn't
hurt.

This was mostly done for the benefit of Android, but other platforms
benefit from it too. An aarch64 Android benchmark gives:
- before:
```
  BM_youtube/min_time:15.000/repeats:4/manual_time_mean              445244 us       659385 us            4
  BM_youtube/min_time:15.000/repeats:4/manual_time_median            445007 us       658970 us            4
  BM_youtube/min_time:15.000/repeats:4/manual_time_stddev               885 us         1332 us            4
```
- after:
```
  BM_youtube/min_time:15.000/repeats:4/manual_time_mean       415697 us       621925 us            4
  BM_youtube/min_time:15.000/repeats:4/manual_time_median     415913 us       622061 us            4
  BM_youtube/min_time:15.000/repeats:4/manual_time_stddev        990 us         1163 us            4
```

Additionally, since `-Werror=conversion` is enabled on some platforms we
are built on, enable it upstream to catch things early: a few sign
conversions had slipped through and needed additional casting.

Reviewers: hctim, morehouse, eugenis, vitalybuka

Reviewed By: vitalybuka

Subscribers: srhines, mgorny, javed.absar, kristof.beyls, delcypher, #sanitizers, llvm-commits

Tags: #llvm, #sanitizers

Differential Revision: https://reviews.llvm.org/D64664

llvm-svn: 366918
2019-07-24 16:36:01 +00:00
Kostya Kortchinsky 8f18a4c980 [scudo][standalone] NFC corrections
Summary:
A few corrections:
- rename `TransferBatch::MaxCached` to `getMaxCached` to conform with
  the style guide;
- move `getBlockBegin` from `Chunk::` to `Allocator::`: I believe it
  was a fallacy to have this be a `Chunk` method, as chunks'
  relationship to backend blocks are up to the frontend allocator. It
  makes more sense now, particularly with regard to the offset. Update
  the associated chunk test as the method isn't available there
  anymore;
- add a forgotten `\n` to a log string;
- for `releaseToOs`, instead of starting at `1`, start at `0` and
  `continue` on `BatchClassId`: in the end it's identical but doesn't
  assume a particular class id for batches;
- change a `CHECK` to a `reportOutOfMemory`: it's a clearer message

Reviewers: hctim, morehouse, eugenis, vitalybuka

Reviewed By: hctim

Subscribers: delcypher, #sanitizers, llvm-commits

Tags: #llvm, #sanitizers

Differential Revision: https://reviews.llvm.org/D64570

llvm-svn: 365816
2019-07-11 19:55:53 +00:00
Kostya Kortchinsky aeb3826228 [scudo][standalone] Merge Spin & Blocking mutex into a Hybrid one
Summary:
We ran into a problem on Fuchsia where yielding threads would never
be deboosted, ultimately resulting in several threads spinning on the
same TSD, and no possibility for another thread to be scheduled,
deadlocking the process.

While this was fixed in Zircon, this led to discussions about whether
spinning without a break condition was a good decision, and we settled
on a new hybrid model that spins for a while, then blocks.
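A hedged sketch of the hybrid model; the spin count is illustrative and the synchronization primitive is a plain C++20 atomic rather than the actual HybridMutex.

```cpp
#include <atomic>
#include <thread>

// Spin a bounded number of times, then fall back to a blocking wait.
// The unlock path would store false and call notify_one().
void hybridLock(std::atomic<bool> &Locked) {
  constexpr int SpinTries = 8;
  for (int I = 0; I < SpinTries; I++) {
    if (!Locked.exchange(true, std::memory_order_acquire))
      return; // acquired while spinning
    std::this_thread::yield();
  }
  while (Locked.exchange(true, std::memory_order_acquire))
    Locked.wait(true); // block instead of spinning forever
}
```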

Currently we are using a number of iterations for spinning that is
mostly arbitrary (based on sanitizer_common values), but this can
be tuned in the future.

Since we are touching `common.h`, we also use this change as a vehicle
for an Android optimization (the page size is fixed in Bionic, so use
a fixed value too).

Reviewers: morehouse, hctim, eugenis, dvyukov, vitalybuka

Reviewed By: hctim

Subscribers: srhines, delcypher, jfb, #sanitizers, llvm-commits

Tags: #llvm, #sanitizers

Differential Revision: https://reviews.llvm.org/D64358

llvm-svn: 365790
2019-07-11 15:32:26 +00:00
Kostya Kortchinsky 624a24e156 [scudo][standalone] Unmap memory in tests
Summary:
The more tests are added, the more we are limited by the size of the
address space on 32-bit. Implement `unmapTestOnly` all around (like it
is in sanitizer_common) to be able to free up some memory.
This is not intended to be a proper "destructor" for an allocator, but
allows us to not fail due to having no memory left.

Reviewers: morehouse, vitalybuka, eugenis, hctim

Reviewed By: morehouse

Subscribers: delcypher, jfb, #sanitizers, llvm-commits

Tags: #llvm, #sanitizers

Differential Revision: https://reviews.llvm.org/D63146

llvm-svn: 363095
2019-06-11 19:50:12 +00:00
Kostya Kortchinsky 52f0130216 [scudo][standalone] Introduce the Primary(s) and LocalCache
Summary:
This CL introduces the 32 & 64-bit primary allocators, and associated
Local Cache. While the general idea is mostly similar to what exists
in sanitizer_common, it departs from the original code somewhat
significantly:
- the 64-bit primary no longer uses a free array at the end of a region
  but uses batches of free blocks in region 0, allowing for a
  convergence with the 32-bit primary behavior;
- as a result, there is only one (templated) local cache type for both
  primary allocators, and memory reclaiming can be implemented similarly
  for the 32-bit & 64-bit platforms;
- 64-bit primary regions are handled a bit differently: we do not
  reserve 4TB of memory that we split, but reserve `NumClasses *
  2^RegionSizeLog`, each region being offset by a random number of
  pages from its computed base (see the sketch after this list). A side
  effect of this is that the 64-bit primary works on 32-bit platforms
  (I don't think we want to encourage it but it's an interesting side
  effect);
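A hedged sketch of that layout, with assumed names and offset range.

```cpp
#include <cstdint>

// One fixed-size region per size class, offset by a random number of pages
// from its computed base.
uintptr_t regionBase(uintptr_t PrimaryBase, uintptr_t ClassId,
                     uintptr_t RegionSizeLog, uintptr_t PageSize,
                     uint32_t Rand) {
  const uintptr_t ComputedBase = PrimaryBase + (ClassId << RegionSizeLog);
  return ComputedBase + (Rand % 16) * PageSize; // random page offset
}
```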

Reviewers: vitalybuka, eugenis, morehouse, hctim

Reviewed By: morehouse

Subscribers: srhines, mgorny, delcypher, jfb, #sanitizers, llvm-commits

Tags: #llvm, #sanitizers

Differential Revision: https://reviews.llvm.org/D61745

llvm-svn: 361159
2019-05-20 14:40:04 +00:00