//===-- scudo_tsd.h ---------------------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
///
/// Scudo thread specific data definition.
/// Implementation will differ based on the thread local storage primitives
/// offered by the underlying platform.
///
//===----------------------------------------------------------------------===//
#ifndef SCUDO_TSD_H_
#define SCUDO_TSD_H_
#include "scudo_allocator.h"
#include "scudo_utils.h"
#include <pthread.h>
namespace __scudo {
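// The TSD is aligned to the cache line size, presumably so that neighboring
// TSDs in the shared model's pool do not false-share.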
struct ALIGNED(SANITIZER_CACHE_LINE_SIZE) ScudoTSD {
  AllocatorCacheT Cache;
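  // Opaque placeholder for the thread's quarantine cache; the allocator is
  // expected to reinterpret this storage in place, keeping the quarantine
  // types out of this header.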
  uptr QuarantineCachePlaceHolder[4];
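  // init() readies the TSD (allocator cache and platform bookkeeping);
  // commitBack() is expected to return the TSD's local resources (cache,
  // quarantine) to the global allocator when the owning thread terminates.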
  void init();
  void commitBack();
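  // Precedence is 0 while the TSD is unlocked or uncontended. On a failed
  // tryLock(), it is set to a truncated MonotonicNanoTime(), ie the time of
  // first contention, which the shared model's slow path uses to pick a TSD
  // to fall back to. It is a uptr rather than a u64 because 64-bit atomics
  // are expensive on 32-bit ARM; we lose some precision and may wrap around,
  // but the benefit is worth it.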
  INLINE bool tryLock() {
    if (Mutex.TryLock()) {
      atomic_store_relaxed(&Precedence, 0);
      return true;
    }
    if (atomic_load_relaxed(&Precedence) == 0)
      atomic_store_relaxed(&Precedence, static_cast<uptr>(
          MonotonicNanoTime() >> FIRST_32_SECOND_64(16, 0)));
    return false;
  }
  INLINE void lock() {
    atomic_store_relaxed(&Precedence, 0);
    Mutex.Lock();
  }
  INLINE void unlock() { Mutex.Unlock(); }
  INLINE uptr getPrecedence() { return atomic_load_relaxed(&Precedence); }
 private:
  StaticSpinMutex Mutex;
  atomic_uintptr_t Precedence;
};
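// A minimal sketch of how a shared-model slow path can use the primitives
// above: attempt a bounded number of TSDs in pseudo-random order (seeded with
// the current TSD's Precedence, which is likely non-zero after a failed
// tryLock()), then block on the original one. Illustration only: the helper
// names (TSDs, NumberOfTSDs, getTSDAndLockSlow) are hypothetical, not part of
// this header.
//
//   ScudoTSD *getTSDAndLockSlow(ScudoTSD *TSD) {
//     u32 R = static_cast<u32>(TSD->getPrecedence());
//     for (u32 I = 0; I < 4; I++) {
//       // Linear congruential step to visit TSDs in a scattered order.
//       R = R * 1664525 + 1013904223;
//       ScudoTSD *Candidate = &TSDs[R % NumberOfTSDs];
//       if (Candidate->tryLock())
//         return Candidate;
//     }
//     TSD->lock();  // All attempts failed: block on the current TSD.
//     return TSD;
//   }
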
// When MinimalInit is true (deallocation path), the TSD is not fully
// initialized: glibc can call free() from __libc_thread_freeres after the
// thread's pthread destructors have already run (eg: via
// strerror_thread_freeres), and a TSD created at that point would never be
// properly destroyed. Such a thread uses the fallback structures instead.
void initThread(bool MinimalInit);
// TSD model specific fastpath function definitions. The model (Exclusive or
// Shared) is selected for each platform in scudo_platform.h.
#include "scudo_tsd_exclusive.inc"
#include "scudo_tsd_shared.inc"
} // namespace __scudo
#endif // SCUDO_TSD_H_