[XRay][profiler] Part 1: XRay Allocator and Array Implementations
Summary:
This change is part of the larger XRay Profiling Mode effort.
Here we implement an arena allocator for the fixed-size buffers used by a
segmented array implementation. This change adds the segmented array
data structure, which relies on the allocator to provide and maintain
the storage for the segmented array.
Key features of the `Allocator` type:
* It uses cache-aligned blocks intended to host the actual data. These
blocks are cache-line-size multiples of contiguous bytes.
* The `Allocator` has a maximum memory budget, set at construction
time. This allows us to cap the amount of data each specific
`Allocator` instance is responsible for.
* Upon destruction, the `Allocator` cleans up the storage it has used,
handing it back to the internal allocator used in sanitizer_common.
Key features of the `Array` type:
* Each segmented array is always backed by an `Allocator`, which is
either user-provided or a global allocator.
* When an `Array` grows, it grows by appending a fixed-size segment. The
size of each segment is computed from the number of elements of type
`T` that fit into cache-line multiples.
* An `Array` does not return memory to the `Allocator`, but it keeps
track of the current number of "live" objects it stores.
* When an `Array` is destroyed, it does not return memory to the
`Allocator`. Users should clean up the `Allocator` independently of
the `Array`.
* The `Array` type keeps a freelist of the chunks it has used before, so
that trimming and growing will re-use previously allocated chunks.
These basic data structures are used by the XRay Profiling Mode
implementation to provide efficient, cache-aware storage for data that
is typically read- and write-heavy while tracking latency information.
We rely on the cache-line characteristics of the architecture to give us
good data isolation and cache friendliness when performing operations
such as searching for elements and updating data hosted in these cache
lines.
Reviewers: echristo, pelikan, kpw
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D45756
llvm-svn: 331141
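
For illustration only (this sketch is not part of the original commit message;
`Record` and the exact `Array` member names are assumptions), a typical call
sequence against these types could look like:

    Allocator<sizeof(Record)> BlockAlloc(1 << 20); // ~1 MiB memory budget
    Array<Record> Records(BlockAlloc);             // segmented array backed by it
    Records.Append(Record{});                      // grows by fixed-size segments
    Records.trim(1);                               // trimmed segments go to the
                                                   // freelist, not back to BlockAlloc
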
//===-- xray_allocator.h ---------------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file is a part of XRay, a dynamic runtime instrumentation system.
//
// Defines the allocator interface for an arena allocator, used primarily for
// the profiling runtime.
//
//===----------------------------------------------------------------------===//

#ifndef XRAY_ALLOCATOR_H
#define XRAY_ALLOCATOR_H

#include "sanitizer_common/sanitizer_common.h"
#include "sanitizer_common/sanitizer_internal_defs.h"
#include "sanitizer_common/sanitizer_mutex.h"
#if SANITIZER_FUCHSIA
#include <zircon/process.h>
#include <zircon/status.h>
#include <zircon/syscalls.h>
#else
#include "sanitizer_common/sanitizer_posix.h"
#endif
#include "xray_defs.h"
#include "xray_utils.h"
#include <cstddef>
#include <cstdint>
#include <sys/mman.h>

namespace __xray {

// We implement our own memory allocation routine which will bypass the
// internal allocator. This allows us to manage the memory directly, using
// mmap'ed memory to back the allocators.
template <class T> T *allocate() XRAY_NEVER_INSTRUMENT {
  uptr RoundedSize = RoundUpTo(sizeof(T), GetPageSizeCached());
#if SANITIZER_FUCHSIA
  zx_handle_t Vmo;
  zx_status_t Status = _zx_vmo_create(RoundedSize, 0, &Vmo);
  if (Status != ZX_OK) {
    if (Verbosity())
      Report("XRay Profiling: Failed to create VMO of size %zu: %s\n",
             sizeof(T), _zx_status_get_string(Status));
    return nullptr;
  }
  uintptr_t B;
  Status =
      _zx_vmar_map(_zx_vmar_root_self(), ZX_VM_PERM_READ | ZX_VM_PERM_WRITE, 0,
                   Vmo, 0, sizeof(T), &B);
  _zx_handle_close(Vmo);
  if (Status != ZX_OK) {
    if (Verbosity())
      Report("XRay Profiling: Failed to map VMAR of size %zu: %s\n", sizeof(T),
             _zx_status_get_string(Status));
    return nullptr;
  }
  return reinterpret_cast<T *>(B);
#else
  uptr B = internal_mmap(NULL, RoundedSize, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  int ErrNo = 0;
  if (UNLIKELY(internal_iserror(B, &ErrNo))) {
    if (Verbosity())
      Report("XRay Profiling: Failed to allocate memory of size %zu; Error = "
             "%zu\n",
             RoundedSize, B);
    return nullptr;
  }
#endif
  return reinterpret_cast<T *>(B);
}
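
// Unmaps memory previously obtained through allocate<T>(), using the same
// page-size rounding to compute the length of the mapping.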
template <class T> void deallocate(T *B) XRAY_NEVER_INSTRUMENT {
  if (B == nullptr)
    return;
  uptr RoundedSize = RoundUpTo(sizeof(T), GetPageSizeCached());
#if SANITIZER_FUCHSIA
  _zx_vmar_unmap(_zx_vmar_root_self(), reinterpret_cast<uintptr_t>(B),
                 RoundedSize);
#else
  internal_munmap(B, RoundedSize);
#endif
}

template <class T = unsigned char>
T *allocateBuffer(size_t S) XRAY_NEVER_INSTRUMENT {
  uptr RoundedSize = RoundUpTo(S * sizeof(T), GetPageSizeCached());
#if SANITIZER_FUCHSIA
  zx_handle_t Vmo;
  zx_status_t Status = _zx_vmo_create(RoundedSize, 0, &Vmo);
  if (Status != ZX_OK) {
    if (Verbosity())
      Report("XRay Profiling: Failed to create VMO of size %zu: %s\n", S,
             _zx_status_get_string(Status));
    return nullptr;
  }
  uintptr_t B;
  Status = _zx_vmar_map(_zx_vmar_root_self(),
                        ZX_VM_PERM_READ | ZX_VM_PERM_WRITE, 0, Vmo, 0, S, &B);
  _zx_handle_close(Vmo);
  if (Status != ZX_OK) {
    if (Verbosity())
      Report("XRay Profiling: Failed to map VMAR of size %zu: %s\n", S,
             _zx_status_get_string(Status));
    return nullptr;
  }
#else
  uptr B = internal_mmap(NULL, RoundedSize, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  int ErrNo = 0;
  if (UNLIKELY(internal_iserror(B, &ErrNo))) {
    if (Verbosity())
      Report("XRay Profiling: Failed to allocate memory of size %zu; Error = "
             "%zu\n",
             RoundedSize, B);
    return nullptr;
  }
#endif
  return reinterpret_cast<T *>(B);
}

template <class T> void deallocateBuffer(T *B, size_t S) XRAY_NEVER_INSTRUMENT {
  if (B == nullptr)
    return;
  uptr RoundedSize = RoundUpTo(S * sizeof(T), GetPageSizeCached());
#if SANITIZER_FUCHSIA
  _zx_vmar_unmap(_zx_vmar_root_self(), reinterpret_cast<uintptr_t>(B),
                 RoundedSize);
#else
  internal_munmap(B, RoundedSize);
#endif
}

template <class T, class... U>
T *initArray(size_t N, U &&... Us) XRAY_NEVER_INSTRUMENT {
  auto A = allocateBuffer<T>(N);
  if (A != nullptr)
    while (N > 0)
      new (A + (--N)) T(std::forward<U>(Us)...);
  return A;
}
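
// Illustrative sketch (added; not part of the original change): a hypothetical
// pairing of the buffer helpers above, where `Node` is a placeholder type.
//
//   auto *Nodes = initArray<Node>(64);  // 64 default-constructed Nodes
//   if (Nodes != nullptr) {
//     // ... use Nodes[0] through Nodes[63] ...
//     deallocateBuffer(Nodes, 64);      // the count must match the allocation
//   }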
/// The Allocator type hands out fixed-sized chunks of memory that are
/// cache-line aligned and sized. This is useful for placement of
/// performance-sensitive data in memory that's frequently accessed. The
/// allocator also self-limits the peak memory usage to a dynamically defined
/// maximum.
///
/// N is the lower-bound size of the block of memory to return from the
/// allocation function. N is used to compute the size of a block, which is
/// cache-line-size multiples worth of memory. We compute the size of a block
/// by determining how many cache lines worth of memory is required to subsume
/// N.
///
/// The Allocator instance will manage its own memory acquired through mmap.
/// This severely constrains the platforms on which this can be used to POSIX
/// systems where mmap semantics are well-defined.
///
/// FIXME: Isolate the lower-level memory management to a different abstraction
/// that can be platform-specific.
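///
/// A minimal usage sketch (illustrative only; `Record` is a placeholder type
/// and not part of this header):
///
///   Allocator<sizeof(Record)> A(1 << 20); // cap this instance at ~1 MiB
///   auto B = A.Allocate();                // one cache-aligned Block
///   if (B.Data != nullptr)
///     new (B.Data) Record();              // placement-new into the block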
template <size_t N> struct Allocator {
  // The Allocator returns memory as Block instances.
  struct Block {
    /// Compute the minimum cache-line size multiple that is >= N.
    static constexpr auto Size = nearest_boundary(N, kCacheLineSize);
    void *Data;
  };

private:
  size_t MaxMemory{0};
  unsigned char *BackingStore = nullptr;
  unsigned char *AlignedNextBlock = nullptr;
  size_t AllocatedBlocks = 0;
  bool Owned;
  SpinMutex Mutex{};

  void *Alloc() XRAY_NEVER_INSTRUMENT {
    SpinMutexLock Lock(&Mutex);
    if (UNLIKELY(BackingStore == nullptr)) {
      BackingStore = allocateBuffer(MaxMemory);
      if (BackingStore == nullptr) {
        if (Verbosity())
          Report("XRay Profiling: Failed to allocate memory for allocator\n");
        return nullptr;
      }

      AlignedNextBlock = BackingStore;

      // Ensure that NextBlock is aligned appropriately.
      auto BackingStoreNum = reinterpret_cast<uintptr_t>(BackingStore);
      auto AlignedNextBlockNum = nearest_boundary(
          reinterpret_cast<uintptr_t>(AlignedNextBlock), kCacheLineSize);
      if (diff(AlignedNextBlockNum, BackingStoreNum) > ptrdiff_t(MaxMemory)) {
        deallocateBuffer(BackingStore, MaxMemory);
        AlignedNextBlock = BackingStore = nullptr;
        if (Verbosity())
          Report("XRay Profiling: Cannot obtain enough memory from "
                 "preallocated region\n");
        return nullptr;
      }

      AlignedNextBlock = reinterpret_cast<unsigned char *>(AlignedNextBlockNum);

      // Assert that AlignedNextBlock is cache-line aligned.
      DCHECK_EQ(reinterpret_cast<uintptr_t>(AlignedNextBlock) % kCacheLineSize,
                0);
    }

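    // Refuse to hand out another block when doing so would exceed the memory
    // budget configured at construction time.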
    if (((AllocatedBlocks + 1) * Block::Size) > MaxMemory)
      return nullptr;

    // Align the pointer we'd like to return to an appropriate alignment, then
    // advance the pointer from where to start allocations.
    void *Result = AlignedNextBlock;
    AlignedNextBlock =
        reinterpret_cast<unsigned char *>(AlignedNextBlock) + Block::Size;
    ++AllocatedBlocks;
    return Result;
  }

public:
  explicit Allocator(size_t M) XRAY_NEVER_INSTRUMENT
      : MaxMemory(RoundUpTo(M, kCacheLineSize)),
        BackingStore(nullptr),
        AlignedNextBlock(nullptr),
        AllocatedBlocks(0),
        Owned(true),
        Mutex() {}

  explicit Allocator(void *P, size_t M) XRAY_NEVER_INSTRUMENT
      : MaxMemory(M),
        BackingStore(reinterpret_cast<unsigned char *>(P)),
        AlignedNextBlock(reinterpret_cast<unsigned char *>(P)),
        AllocatedBlocks(0),
        Owned(false),
        Mutex() {}

  Allocator(const Allocator &) = delete;
  Allocator &operator=(const Allocator &) = delete;

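  // The move operations below lock both this instance's mutex and the
  // source's mutex while ownership of the backing store is transferred.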
  Allocator(Allocator &&O) XRAY_NEVER_INSTRUMENT {
    SpinMutexLock L0(&Mutex);
    SpinMutexLock L1(&O.Mutex);
    MaxMemory = O.MaxMemory;
    O.MaxMemory = 0;
    BackingStore = O.BackingStore;
    O.BackingStore = nullptr;
    AlignedNextBlock = O.AlignedNextBlock;
    O.AlignedNextBlock = nullptr;
    AllocatedBlocks = O.AllocatedBlocks;
    O.AllocatedBlocks = 0;
    Owned = O.Owned;
    O.Owned = false;
  }
|
|
|
|
|
|
|
|
Allocator &operator=(Allocator &&O) XRAY_NEVER_INSTRUMENT {
|
|
|
|
SpinMutexLock L0(&Mutex);
|
|
|
|
SpinMutexLock L1(&O.Mutex);
|
|
|
|
MaxMemory = O.MaxMemory;
|
|
|
|
O.MaxMemory = 0;
|
|
|
|
if (BackingStore != nullptr)
|
|
|
|
deallocateBuffer(BackingStore, MaxMemory);
|
|
|
|
BackingStore = O.BackingStore;
|
|
|
|
O.BackingStore = nullptr;
|
|
|
|
AlignedNextBlock = O.AlignedNextBlock;
|
|
|
|
O.AlignedNextBlock = nullptr;
|
|
|
|
AllocatedBlocks = O.AllocatedBlocks;
|
|
|
|
O.AllocatedBlocks = 0;
|
    Owned = O.Owned;
    O.Owned = false;
    return *this;
  }

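  // Hands out the next fixed-size, cache-aligned block from the arena. Alloc()
  // is expected to signal exhaustion of the memory budget with an empty block.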
  Block Allocate() XRAY_NEVER_INSTRUMENT { return {Alloc()}; }

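  // Only storage the Allocator obtained for itself is returned to
  // sanitizer_common; caller-provided backing stores are left untouched.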
  ~Allocator() NOEXCEPT XRAY_NEVER_INSTRUMENT {
    if (Owned && BackingStore != nullptr) {
      deallocateBuffer(BackingStore, MaxMemory);
    }
  }
};

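// Illustrative usage sketch only (not part of this header's interface). It
// assumes the block-size template parameter and the budget-only constructor
// declared earlier in this file, and that `Block` exposes its storage through
// a `Data` pointer:
//
//   Allocator<256> A(1 << 20);    // 256-byte payloads, 1 MiB budget, owned.
//   auto B = A.Allocate();        // One cache-aligned, fixed-size block.
//   if (B.Data != nullptr) {
//     // Place segment contents in B.Data; exhaustion yields a null block.
//   }
//   // A deallocates its owned backing store when it goes out of scope.
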
} // namespace __xray

#endif // XRAY_ALLOCATOR_H