Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
Summary:
This change addresses an issue that shows up as a race between threads
writing into a buffer and another thread reading the buffer.
In a lot of cases, we cannot guarantee that threads will always see the
signal to finalise their buffers in time, despite the grace periods and
the state machine maintained through atomic variables. This change
addresses that by ensuring that the instance updated by the writing
thread to indicate how much of the buffer is "used" is the very same
instance read by the thread processing the buffer to be written out to
disk or handled through the iterators.
To do this, we ensure that all the "extents" instances live in their
own backing store, in a contiguous page separate from the
buffer-specific backing store. We also take precautions to ensure that
the atomic variables are cache-line-sized, to prevent false sharing
from unnecessarily causing cache contention on unrelated writes/reads.
It's feasible that we may in the future be able to move the storage of
the extents objects into the single backing store, slightly changing the
way to compute the size(s) of the buffers, but in the meantime we'll
settle for the isolation afforded by having a different backing store
for the extents instances.
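For illustration, here is a minimal sketch of that isolation
(hypothetical names, not the actual XRay types): the extents counters
are cache-line-sized and live in their own contiguous allocation,
separate from the buffer data.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Sketch only: each "extents" counter is padded to a full cache line so
// adjacent counters never share one, and all counters live in a single
// allocation that is distinct from the buffer backing store.
struct alignas(64) ExtentsSlot {
  std::atomic<uint64_t> Size{0}; // bytes of the buffer currently "used"
};
static_assert(sizeof(ExtentsSlot) == 64, "one counter per cache line");

// A separate contiguous allocation for all extents (requires C++17 for
// over-aligned operator new).
ExtentsSlot *allocateExtents(std::size_t BufferCount) {
  return new ExtentsSlot[BufferCount];
}
```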
Reviewers: mboerger
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D54684
llvm-svn: 347280
Summary:
In D53560, we assumed a specific layout for memory without using an
explicit structure. This follow-up change uses more portable layout
control by using unions in a struct, and consolidating the memory
management code in the buffer queue.
We also take the opportunity to improve the documentation on the types
and operations, along with simplifying some of the logic in the buffer
queue implementation.
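As a rough sketch of what "portable layout control" means here
(illustrative types, not the actual D53802 definitions), the union
spells out the overlapping views of the storage instead of relying on
hand-computed offsets:

```cpp
#include <cstdint>

// Sketch only: the struct/union describes the memory layout explicitly,
// so the compiler guarantees member placement instead of the code
// assuming byte offsets into an untyped allocation.
struct ControlBlock {
  uint64_t RefCount; // bookkeeping common to all views
  union {
    uint64_t Extents;        // one view of the storage
    unsigned char Raw[4096]; // the raw-bytes view of the same storage
  };
};
static_assert(sizeof(ControlBlock) >= 4096,
              "the union is as large as its largest member");
```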
Reviewers: mboerger, eizan
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D53802
llvm-svn: 345485
Summary:
This change implements the ref-counting for backing stores associated
with generational buffer management. We do this as an implementation
detail of the buffer queue, instead of exposing this to the interface.
This change allows us to keep the buffer queue interface and usage model
the same.
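A minimal sketch of the idea (hypothetical names; the real ref-counting
stays internal to the buffer queue):

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Sketch only: the backing store carries its own count, and the queue
// bumps it per buffer handed out and frees the store when the last
// buffer comes back -- callers never see any of this.
struct BackingStore {
  std::atomic<uint64_t> RefCount{0};
  unsigned char *Data = nullptr;
  std::size_t Size = 0;
};

void queueAcquire(BackingStore &BS) {
  BS.RefCount.fetch_add(1, std::memory_order_relaxed);
}

bool queueRelease(BackingStore &BS) {
  // True when this was the last reference; the queue may now free Data.
  return BS.RefCount.fetch_sub(1, std::memory_order_acq_rel) == 1;
}
```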
Depends on D53551.
Reviewers: mboerger, eizan
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D53560
llvm-svn: 345471
Summary:
This change updates the buffer queue implementation to support using a
generation number to identify the lifetime of buffers. This first part
introduces the notion of the generation number, without changing the way
we handle the buffers yet.
What's missing here is the cleanup of the buffers. Ideally we'll keep
the two most recent generations. We need to ensure that, before we do
any writes to the buffers, we check the generation number(s) first.
Those changes will follow on from this change.
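To illustrate the intended use of the generation number (hypothetical
names; the write-side checks land in the follow-up changes):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch only: a buffer remembers the generation it was handed out
// under, and a writer compares it with the queue's current generation
// before touching the memory.
struct Buffer {
  std::uint64_t Generation = 0; // generation at hand-out time
  void *Data = nullptr;
  std::size_t Size = 0;
};

struct BufferQueue {
  std::uint64_t Generation = 0; // bumped when buffers are re-initialised
};

bool safeToWrite(const BufferQueue &Q, const Buffer &B) {
  // A stale generation means the storage may have been recycled.
  return B.Generation == Q.Generation;
}
```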
Depends on D52588.
Reviewers: mboerger, eizan
Subscribers: llvm-commits, jfb
Differential Revision: https://reviews.llvm.org/D52974
llvm-svn: 344881
Summary:
This change updates the buffer queue implementation to support using a
generation number to identify the lifetime of buffers. This first part
introduces the notion of the generation number, without changing the way
we handle the buffers yet.
What's missing here is the cleanup of the buffers. Ideally we'll keep
the two most recent generations. We need to ensure that, before we do
any writes to the buffers, we check the generation number(s) first.
Those changes will follow on from this change.
Depends on D52588.
Reviewers: mboerger, eizan
Subscribers: llvm-commits, jfb
Differential Revision: https://reviews.llvm.org/D52974
llvm-svn: 344670
Summary:
The implementation of `internal_mmap(...)` deviates from the contract of
`mmap(...)` -- i.e. error returns are actually the equivalent of `errno`
results. We update how XRay uses `internal_mmap(...)` to better handle
these error conditions.
In the process, we change the default pointers we're using from `char*`
to `uint8_t*` to prevent potential usage of the pointers in the string
library functions that expect to operate on `char*`.
We also take the chance to "promote" the sizes of individual
`internal_mmap` requests to at least a page's worth of bytes,
consistent with the expectations of calls to `mmap`.
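A standalone sketch of both behaviours using plain `mmap` (the
`internal_mmap` contract itself differs, returning an `errno`-style
code instead of `MAP_FAILED`, so the error check there inspects the
return value rather than comparing pointers):

```cpp
#include <cstddef>
#include <cstdint>
#include <sys/mman.h>
#include <unistd.h>

// Sketch only: round requests up to page size, and hand back uint8_t*
// rather than char* so the result can't silently flow into string
// library functions.
static uint8_t *allocatePages(std::size_t Size) {
  const std::size_t PageSize =
      static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
  const std::size_t Rounded = (Size + PageSize - 1) & ~(PageSize - 1);
  void *Ptr = mmap(nullptr, Rounded, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  // With internal_mmap(...) the check would instead look for an encoded
  // errno in the returned value (e.g. via internal_iserror).
  if (Ptr == MAP_FAILED)
    return nullptr;
  return static_cast<uint8_t *>(Ptr);
}
```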
Reviewers: cryptoad, mboerger
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D52361
llvm-svn: 342745
Summary:
This change makes XRay FDR mode use a single backing store for the
buffer queue, and have indexes into that backing store instead. We also
remove the reliance on the internal allocator implementation in the FDR
mode logging implementation.
In the process of making this change we found an inconsistency with the
way we're returning buffers to the queue, and how we're setting the
extents. We take the chance to simplify the way we're managing the
extents of each buffer. It turns out we do not need the indirection for
the extents, so we co-host the atomic 64-bit int with the buffer object.
It also seems that we've not been returning the buffers for the thread
running the flush functionality when writing out the files, so we could
run into a situation where data goes missing.
We consolidate all the allocation routines now into xray_allocator.h,
where we used to have routines defined in xray_buffer_queue.cc.
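A rough sketch of the resulting shape (hypothetical names): one backing
store, index-based buffers, and the extents co-hosted with the buffer
object as an atomic 64-bit integer.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Sketch only: all buffer data lives in one allocation; each Buffer is
// just a descriptor into it, with its extents stored inline instead of
// behind a separately allocated pointer.
struct Buffer {
  std::atomic<uint64_t> Extents{0}; // co-hosted with the buffer object
  std::size_t Offset = 0;           // index into the shared backing store
  std::size_t Size = 0;
};

struct BufferQueue {
  unsigned char *BackingStore; // a single allocation for all buffers
  Buffer *Buffers;             // fixed array of descriptors
  std::size_t BufferCount;

  unsigned char *data(const Buffer &B) const {
    return BackingStore + B.Offset;
  }
};
```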
Reviewers: mboerger, eizan
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D52077
llvm-svn: 342356
Summary:
This is part of the work to address http://llvm.org/PR32274.
We remove the calls to array-placement-new and array-delete. This allows
us to rely on the internal memory management provided by
sanitizer_common/sanitizer_internal_allocator.h.
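The shape of the replacement, roughly (a sketch assuming
sanitizer_common's `InternalAlloc`/`InternalFree`; it only builds
inside compiler-rt, and `Buffer` stands in for the real element type):

```cpp
// Before: Buffer *Buffers = new Buffer[N]; ... delete[] Buffers;
// After (sketch): raw memory from the internal allocator, with explicit
// per-element construction/destruction via placement-new (<new>).
Buffer *Buffers =
    static_cast<Buffer *>(InternalAlloc(N * sizeof(Buffer)));
for (size_t I = 0; I < N; ++I)
  new (&Buffers[I]) Buffer(); // construct in place
// ... use Buffers ...
for (size_t I = 0; I < N; ++I)
  Buffers[I].~Buffer(); // destroy in place
InternalFree(Buffers);
```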
Reviewers: eizan, kpw
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D47695
llvm-svn: 333982
Summary:
This change allows for handling the in-memory data associated with the
FDR mode implementation through the new `__xray_log_process_buffers`
API. With this change, users can now process the data in the process's
memory instead of having it written out to files.
This, for example, allows users to stream the data from the FDR logging
implementation through network sockets or other mechanisms, instead of
saving it to local files.
We introduce an FDR-specific flag, "no_file_flush", which lets the
flushing logic skip opening/writing to files.
This option can be defaulted to `true` when building the compiler-rt
XRay runtime through the `XRAY_FDR_OPTIONS` preprocessor macro.
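A usage sketch (hedged: the processor-callback shape shown here is our
reading of the API; see `xray_log_interface.h` for the authoritative
declarations):

```cpp
#include "xray/xray_log_interface.h" // compiler-rt's XRay log interface
#include <cstdio>

// Sketch only: a processor that consumes each in-memory buffer -- here
// it just reports sizes, but it could stream Buffer.Data over a socket.
void processBuffer(const char *Mode, XRayBuffer Buffer) {
  std::fprintf(stderr, "mode=%s bytes=%zu\n", Mode, Buffer.Size);
}

void flushInMemory() {
  __xray_log_finalize();
  // Walk the buffers in memory instead of (or in addition to) the
  // file-writing path, which "no_file_flush" can disable.
  __xray_log_process_buffers(processBuffer);
  __xray_log_flushLog();
}
```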
Reviewers: kpw, echristo, pelikan, eizan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46574
llvm-svn: 332208
Summary:
Before this change, the FDR mode implementation relied on thread-exit
handling to return buffers back to the (global) buffer queue. This
introduces issues with the initialisation of the thread_local objects,
which, even through the use of pthread_setspecific(...), may eventually
call into an allocation function. Similar to previous changes in this
line, we're finding that there is a huge potential for deadlocks when
initialising these thread-locals while the memory allocation
implementation is also xray-instrumented.
In this change, we limit the call to pthread_setspecific(...) to
providing a non-null value associated with the key created by
pthread_key_create(...). While this doesn't completely eliminate the
potential for the deadlock(s), it does still allow us to clean up at
thread exit when we need to. The upshot is that we no longer need to do
extra work when starting and ending a thread's lifetime. We also have a
test to make sure that we can safely recycle the buffers in case we end
up re-using the buffer(s) available from the queue across multiple
thread entries/exits.
This change cuts across both LLVM and compiler-rt to allow us to update
both the XRay runtime implementation as well as the library support for
loading these new versions of the FDR mode logging. Version 2 of the FDR
logging implementation makes the following changes:
* Introduction of a new 'BufferExtents' metadata record that lives
outside of the buffer's contents but is written before the actual
buffer. This data is associated with the Buffer handed out by the
BufferQueue, rather than being a record that occupies bytes in the
actual buffer.
* Removal of the "end of buffer" records. This is in line with the
changes described above, allowing for optimistic logging without
explicit record writing at thread exit.
The optimistic logging model operates under the following assumptions:
* Threads writing to the buffers will potentially race with the thread
attempting to flush the log. To prevent this situation from occurring,
we make sure that when we've finalized the logging implementation,
threads will see this finalization state on the next write, and either
choose not to write records the thread would have written, or write the
record(s) in two phases -- first write the record(s), then update the
extents metadata (see the sketch after this list).
* We change the buffer queue implementation so that once it's handed
out a buffer to a thread, we assume that buffer is marked "used", to be
able to capture partial writes. None of this will be safe if threads
are racing to write the extents records while the reader thread is
attempting to flush the log. The optimism comes from the finalization
routine being required to complete before we attempt to flush the log.
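A minimal sketch of the two-phase write mentioned above (illustrative
names, not the actual XRay types): records are written first, and only
then are the extents published, so a reader never observes incomplete
bytes.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Sketch only: phase 1 copies the record; phase 2 publishes the new
// extents with release ordering. A reader loading the extents with
// acquire ordering sees only fully written records. Bounds checks are
// elided for brevity.
struct DemoBuffer {
  std::atomic<uint64_t> Extents{0}; // bytes of Data that are valid
  uint8_t Data[4096];
};

void writeRecord(DemoBuffer &B, const void *Record, std::size_t Size) {
  uint64_t Offset = B.Extents.load(std::memory_order_relaxed);
  std::memcpy(B.Data + Offset, Record, Size); // phase 1: write bytes
  B.Extents.store(Offset + Size,
                  std::memory_order_release); // phase 2: publish
}

std::size_t readableBytes(const DemoBuffer &B) {
  // The flushing thread trusts only what the extents say is complete.
  return B.Extents.load(std::memory_order_acquire);
}
```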
This is a fairly significant semantics change for the FDR
implementation. This is why we've decided to update the version number
for FDR mode logs. The tools, however, still need to be able to support
older versions of the log until we finally deprecate those earlier
versions.
Reviewers: dblaikie, pelikan, kpw
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D39526
llvm-svn: 318733
Summary:
This change removes the dependency on C++ standard library
types/functions in the implementation of the buffer queue. This is an
incremental step in resolving llvm.org/PR32274.
Reviewers: dblaikie, pelikan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D39175
llvm-svn: 316406
Summary:
This change removes the dependency on using a std::deque<...> for the
storage of the buffers in the buffer queue. We instead implement a
fixed-size circular buffer that's resilient to exhaustion, and preserves
the semantics of the BufferQueue.
We're moving away from using std::deque<...> for two reasons:
- We want to remove dependencies on the STL for data structures.
- We want the data structure we use to not require re-allocation in
the normal course of operation.
The internal implementation of the buffer queue uses heap-allocated
arrays that are initialized once when the BufferQueue is created, and
re-uses slots in the buffer array as buffers are returned in order.
We also change the lock used in the implementation to a spinlock
instead of a blocking mutex. We reason that since the release
operations now take very little time in the critical section, a
spinlock would be appropriate.
This change is related to D38073.
This change is a re-submit with the following changes:
- Keeping track of the live buffers with a counter independent of the
pointers keeping track of the extents of the circular buffer.
- Additional documentation of what the data members are meant to
represent.
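For illustration, a toy version of the structure (hypothetical names;
the real implementation avoids the STL and guards these operations with
the spinlock described above), including the independent live-buffer
counter from this re-submit:

```cpp
#include <cstddef>

// Sketch only: a fixed-capacity ring allocated once, with wrap-around
// indices and a live counter tracked independently of Head/Tail.
template <typename T> class Ring {
  T *Slots;                  // heap-allocated once, never resized
  std::size_t Capacity;
  std::size_t Head = 0;      // next slot to hand out
  std::size_t Tail = 0;      // slot the next returned element goes into
  std::size_t LiveCount = 0; // buffers currently handed out

public:
  explicit Ring(std::size_t N) : Slots(new T[N]), Capacity(N) {}
  ~Ring() { delete[] Slots; }

  bool get(T &Out) {
    if (LiveCount == Capacity)
      return false; // exhausted; never wrap over live data
    Out = Slots[Head];
    Head = (Head + 1) % Capacity;
    ++LiveCount;
    return true;
  }

  void release(const T &In) {
    Slots[Tail] = In; // returned in order, re-using slots
    Tail = (Tail + 1) % Capacity;
    --LiveCount;
  }
};
```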
Reviewers: dblaikie, kpw, pelikan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38119
llvm-svn: 314877
Summary:
This change removes the dependency on using a std::deque<...> for the
storage of the buffers in the buffer queue. We instead implement a
fixed-size circular buffer that's resilient to exhaustion, and preserves
the semantics of the BufferQueue.
We're moving away from using std::deque<...> for two reasons:
- We want to remove dependencies on the STL for data structures.
- We want the data structure we use to not require re-allocation in
the normal course of operation.
The internal implementation of the buffer queue uses heap-allocated
arrays that are initialized once when the BufferQueue is created, and
re-uses slots in the buffer array as buffers are returned in order.
We also change the lock used in the implementation to a spinlock
instead of a blocking mutex. We reason that since the release
operations now take very little time in the critical section, a
spinlock would be appropriate.
This change is related to D38073.
Reviewers: dblaikie, kpw, pelikan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38119
llvm-svn: 314766
Summary:
Before this change we were seemingly not running the unit tests, and
therefore we set out to run them. In the process of making this happen,
we found a divergence between the implementation and the tests.
This includes changes to both the CMake files as well as the implementation and
headers of the XRay runtime. We've also updated documentation on the changed
functions.
Reviewers: kpw, eizan
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D37290
llvm-svn: 312202
Summary:
Currently the FDR log writer, upon flushing, dumps a sequence of buffers
from its freelist to disk. A reader can read the first buffer up to an
EOB record, but then it is unclear how far ahead to scan to find the
next thread's traces.
There are a few ways to handle this problem.
1. The reader has externalized knowledge of the buffer size.
2. The size of buffers is in the file header or otherwise encoded in the log.
3. Only write out the portion of the buffer with records. When released, the
buffers are marked with a size.
4. The reader looks for memory that matches a pattern and synchronizes on it.
2 and 3 seem the most flexible and 2 does not rule 3 out.
This is an implementation of 2.
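Concretely, option 2 amounts to something like this (illustrative field
names, not the exact XRay file-header layout):

```cpp
#include <cstdint>

// Sketch only: the fixed buffer size is recorded once in the log's file
// header, so a reader can step from one thread's buffer to the next
// without scanning for markers.
struct DemoFileHeader {
  uint16_t Version;
  uint16_t Type;
  uint32_t BufferSize; // every buffer in the file spans this many bytes
};
```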
In addition, the function handler for FDR mode more aggressively checks
for finalization and makes an attempt to release its buffer.
Reviewers: pelikan, dberris
Reviewed By: dberris
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31384
llvm-svn: 298982
Instead of std::atomic APIs for atomic operations, we use the APIs
included with sanitizer_common. This allows us to avoid depending, at
runtime, on potentially dynamically provided implementations of these
atomic operations.
Fixes http://llvm.org/PR32274.
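The substitution looks roughly like this (a sketch that only builds
inside compiler-rt, against sanitizer_common's atomic header):

```cpp
#include "sanitizer_common/sanitizer_atomic.h"

using namespace __sanitizer;

// Sketch only: sanitizer_common's atomic type and free functions in
// place of a std::atomic member and its methods.
atomic_uint64_t Extents; // was: std::atomic<uint64_t> Extents;

void publish(u64 NewSize) {
  // was: Extents.store(NewSize, std::memory_order_release);
  atomic_store(&Extents, NewSize, memory_order_release);
}

u64 observe() {
  // was: return Extents.load(std::memory_order_acquire);
  return atomic_load(&Extents, memory_order_acquire);
}
```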
llvm-svn: 298833
Summary:
Depending on C++11 <system_error> introduces a link-time requirement on
C++11 symbols. Removing it allows us to depend on header-only C++11
(and up) libraries.
Partially fixes http://llvm.org/PR32274 -- we know there's more invasive work
to be done, but we're doing it incrementally.
Reviewers: dblaikie, kpw, pelikan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31233
llvm-svn: 298480
Summary:
In this change we introduce the notion of a "flight data recorder" mode
for XRay logging, where XRay logs in memory first and writes out data
on demand as required (as opposed to the naive implementation that
keeps logging while tracing is "on"). This depends on D26232, where we
implement the core data structure for holding the buffers that threads
will be using to write out records of operation.
This implementation currently only works on x86_64 and depends heavily
on the TSC math to write out smaller records to the in-memory buffers.
Also, this implementation defines two different kinds of records with
different sizes (compared to the current naive implementation): a
MetadataRecord (16 bytes) and a FunctionRecord (8 bytes). MetadataRecord
entries are meant to write out information like the thread ID for which
the metadata record is defined, whether the execution of a thread moved
to a different CPU, etc., while a FunctionRecord represents the
different kinds of function call entry/exit records we might encounter
in the course of a thread's execution, along with a delta from the last
time the logging handler was called.
While this implementation is not exactly what is described in the
original XRay whitepaper, this one gives us an initial implementation
that we can iterate and build upon.
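For a sense of scale, the two record kinds look roughly like this (the
field breakdown is illustrative; the real FDR layout packs bits more
tightly):

```cpp
#include <cstdint>

// Sketch only: 16-byte metadata records carry out-of-band information
// (thread ID, CPU migration, ...), while 8-byte function records carry
// a packed kind/function id plus a TSC delta.
struct MetadataRecord {
  uint8_t TypeAndKind; // marks the record as metadata + which kind
  uint8_t Data[15];    // payload, e.g. the thread ID
};
static_assert(sizeof(MetadataRecord) == 16, "metadata records: 16 bytes");

struct FunctionRecord {
  uint32_t TypeKindAndFuncId; // entry/exit kind and function id, packed
  uint32_t TSCDelta;          // delta since the last handler invocation
};
static_assert(sizeof(FunctionRecord) == 8, "function records: 8 bytes");
```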
Reviewers: echristo, rSerge, majnemer
Subscribers: mehdi_amini, llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D27038
llvm-svn: 293015
Summary:
In this change we introduce the notion of a "flight data recorder" mode
for XRay logging, where XRay logs in memory first and writes out data
on demand as required (as opposed to the naive implementation that
keeps logging while tracing is "on"). This depends on D26232, where we
implement the core data structure for holding the buffers that threads
will be using to write out records of operation.
This implementation currently only works on x86_64 and depends heavily
on the TSC math to write out smaller records to the in-memory buffers.
Also, this implementation defines two different kinds of records with
different sizes (compared to the current naive implementation): a
MetadataRecord (16 bytes) and a FunctionRecord (8 bytes). MetadataRecord
entries are meant to write out information like the thread ID for which
the metadata record is defined, whether the execution of a thread moved
to a different CPU, etc., while a FunctionRecord represents the
different kinds of function call entry/exit records we might encounter
in the course of a thread's execution, along with a delta from the last
time the logging handler was called.
While this implementation is not exactly what is described in the
original XRay whitepaper, this one gives us an initial implementation
that we can iterate and build upon.
Reviewers: echristo, rSerge, majnemer
Subscribers: mehdi_amini, llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D27038
llvm-svn: 290852
This implements a simple buffer queue to manage a pre-allocated queue of
fixed-sized buffers to hold XRay records. We need this to support
Flight Data Recorder (FDR) mode. We also implement this as a sub-library
first to allow for development before actually using it in an
implementation.
Some important properties of the buffer queue:
- Thread-safe enqueueing/dequeueing of fixed-size buffers.
- Pre-allocation of buffers at construction.
This is a re-roll of the previous submission attempt, which caused
failures on arm and aarch64.
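A toy model of those two properties (hypothetical names; the eventual
in-tree version avoids the STL, as later changes in this log describe):

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Sketch only: every buffer is carved out at construction, and a lock
// makes handing buffers out and taking them back thread-safe.
class DemoBufferQueue {
  std::mutex Lock;
  std::vector<char *> Free; // pre-allocated fixed-size buffers
  std::size_t BufferSize;

public:
  DemoBufferQueue(std::size_t N, std::size_t Size) : BufferSize(Size) {
    for (std::size_t I = 0; I < N; ++I)
      Free.push_back(new char[Size]); // pre-allocation up front
  }
  ~DemoBufferQueue() {
    for (char *B : Free)
      delete[] B;
  }

  char *getBuffer() {
    std::lock_guard<std::mutex> G(Lock);
    if (Free.empty())
      return nullptr; // queue exhausted
    char *B = Free.back();
    Free.pop_back();
    return B;
  }

  void releaseBuffer(char *B) {
    std::lock_guard<std::mutex> G(Lock);
    Free.push_back(B);
  }
};
```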
Reviewers: majnemer, echristo, rSerge
Subscribers: tberghammer, danalbert, srhines, modocache, mehdi_amini, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D26232
llvm-svn: 288775
Summary:
This implements a simple buffer queue to manage a pre-allocated queue of
fixed-sized buffers to hold XRay records. We need this to support
Flight Data Recorder (FDR) mode. We also implement this as a sub-library
first to allow for development before actually using it in an
implementation.
Some important properties of the buffer queue:
- Thread-safe enqueueing/dequeueing of fixed-size buffers.
- Pre-allocation of buffers at construction.
Reviewers: majnemer, rSerge, echristo
Subscribers: mehdi_amini, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D26232
llvm-svn: 287910