Currently, VMOs in Zircon created using zx_vmo_create are resizable
by default, but this will change in the future, requiring an
explicit flag to make the VMO resizable.
Prepare for this change by passing the ZX_VMO_RESIZABLE option to all
zx_vmo_create calls that need a resizable VMO.
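A minimal sketch of the pattern being applied (the wrapper name is
hypothetical, error handling elided): the option is requested explicitly
at creation time rather than relying on the current default.
```
// Sketch only: request a resizable VMO explicitly instead of relying on
// the current default behaviour of zx_vmo_create.
#include <zircon/syscalls.h>

zx_status_t CreateResizableVmo(uint64_t size, zx_handle_t *out) {
  return zx_vmo_create(size, ZX_VMO_RESIZABLE, out);
}
```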
Differential Revision: https://reviews.llvm.org/D61450
llvm-svn: 359803
Add missing value "libcxxabi" and introduce SANITIZER_TEST_CXX for linking
unit tests. This needs to be a full C++ library and cannot be libcxxabi.
Recommit r354132 which I reverted in r354153 because it broke a sanitizer
bot. This was because of the "fixes" for pthread linking, so I've removed
these changes.
Differential Revision: https://reviews.llvm.org/D58012
llvm-svn: 354198
Add missing value "libcxxabi" and introduce SANITIZER_TEST_CXX for linking
unit tests. This needs to be a full C++ library and cannot be libcxxabi.
Differential Revision: https://reviews.llvm.org/D58012
llvm-svn: 354132
Summary:
A link error was encountered when using the Red Hat Developer Toolset.
In the RHDTS, `libstdc++.so` is a linker script that may resolve symbols
to a static library. This patch places `-lstdc++` later in the ordering.
Reviewers: sfertile, nemanjai, tstellar, dberris
Reviewed By: dberris
Subscribers: dberris, mgorny, delcypher, jdoerfert, #sanitizers, llvm-commits
Tags: #llvm, #sanitizers
Differential Revision: https://reviews.llvm.org/D58144
llvm-svn: 353905
Summary:
As reported on llvm-testers, during 8.0.0-rc1 testing I got errors while
building `XRayTest` during `check-all`:
```
[100%] Generating XRayTest-x86_64-Test
/home/dim/llvm/8.0.0/rc1/Phase3/Release/llvmCore-8.0.0-rc1.obj/./lib/libLLVMSupport.a(Signals.cpp.o): In function `llvm::sys::PrintStackTrace(llvm::raw_ostream&)':
Signals.cpp:(.text._ZN4llvm3sys15PrintStackTraceERNS_11raw_ostreamE+0x24): undefined reference to `backtrace'
Signals.cpp:(.text._ZN4llvm3sys15PrintStackTraceERNS_11raw_ostreamE+0x254): undefined reference to `llvm::itaniumDemangle(char const*, char*, unsigned long*, int*)'
clang-8: error: linker command failed with exit code 1 (use -v to see invocation)
gmake[3]: *** [projects/compiler-rt/lib/xray/tests/unit/CMakeFiles/TXRayTest-x86_64-Test.dir/build.make:73: projects/compiler-rt/lib/xray/tests/unit/XRayTest-x86_64-Test] Error 1
gmake[3]: Target 'projects/compiler-rt/lib/xray/tests/unit/CMakeFiles/TXRayTest-x86_64-Test.dir/build' not remade because of errors.
gmake[2]: *** [CMakeFiles/Makefile2:33513: projects/compiler-rt/lib/xray/tests/unit/CMakeFiles/TXRayTest-x86_64-Test.dir/all] Error 2
gmake[2]: Target 'CMakeFiles/check-all.dir/all' not remade because of errors.
gmake[1]: *** [CMakeFiles/Makefile2:737: CMakeFiles/check-all.dir/rule] Error 2
gmake[1]: Target 'check-all' not remade because of errors.
gmake: *** [Makefile:277: check-all] Error 2
[Release Phase3] check-all failed
```
This is because the `backtrace` function requires `-lexecinfo` on BSD
platforms. To fix this, detect the `execinfo` library in
`cmake/config-ix.cmake`, and add it to the unit test link flags.
Additionally, since the code in `sys::PrintStackTrace` makes use of
`itaniumDemangle`, also add `-lLLVMDemangle`. (Note that this is more
of a general problem with libLLVMSupport, but I'm looking for a quick
fix now so it can be merged to the 8.0 branch.)
Reviewers: dberris, hans, mgorny, samsonov
Reviewed By: dberris
Subscribers: krytarowski, delcypher, erik.pilkington, #sanitizers, emaste, llvm-commits
Differential Revision: https://reviews.llvm.org/D57181
llvm-svn: 352234
Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
Add a CheckMPROTECT() routine to detect when PaX MPROTECT is enabled
on NetBSD, and make XRay error out when it is. The solution is adapted
from the existing CheckASLR().
Differential Revision: https://reviews.llvm.org/D56049
llvm-svn: 350030
Disable enforcing alignas() for structs that are used as thread_local
data on NetBSD. The NetBSD ld.so implementation is buggy and does
not enforce correct alignment; however, clang seems to take it for
granted and generates instructions that segv on wrongly aligned objects.
Therefore, disable those alignas() statements on NetBSD until we can
establish a better fix.
Apparently, std::aligned_storage<> does not have any real effect
at the moment, so we can leave it as-is.
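A minimal sketch of this kind of conditional alignment (the macro name is
hypothetical; the actual guards in compiler-rt may differ):
```
// Make the alignment annotation a no-op on NetBSD, where ld.so does not
// honour the requested TLS alignment, and keep it everywhere else.
#if defined(__NetBSD__)
#define XRAY_TLS_ALIGNAS(x)
#else
#define XRAY_TLS_ALIGNAS(x) alignas(x)
#endif

struct XRAY_TLS_ALIGNAS(64) ThreadLocalData {
  char Buffer[64];
};

thread_local ThreadLocalData TLD;
```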
Differential Revision: https://reviews.llvm.org/D56000
llvm-svn: 350029
Add code to properly test for the presence of the LLVMTestingSupport library
when performing a stand-alone build, and skip tests requiring it when
it is not present. Since the library is not installed, llvm-config
reported empty --libs for it and the tests failed to link with undefined
references. Skipping the two fdr_* test files is better than failing to
build, and should be good enough until we find a better solution.
NB: both installing LLVMTestingSupport and building it automatically
from within compiler-rt sources are non-trivial. The former due to
dependency on gtest, the latter due to tight integration with LLVM
source tree.
Differential Revision: https://reviews.llvm.org/D55891
llvm-svn: 349899
Summary:
This change builds upon D54989, which removes memory allocation from the
critical path of the profiling implementation. This also changes the API
for the profile collection service, to take ownership of the memory and
associated data structures per-thread.
The consolidation of the memory allocation allows us to do two things:
- Limit the amount of memory used by the profiling implementation,
associating preallocated buffers instead of allocating memory
on-demand.
- Consolidate the memory initialisation and cleanup by relying on the
buffer queue's reference counting implementation.
We also found a number of places displaying problematic
behaviour, including:
- Off-by-factor bug in the allocator implementation.
- Unrolling semantics in cases of "memory exhausted" situations, when
managing the state of the function call trie.
We also add a few test cases which verify our understanding of the
behaviour of the system, with important edge-cases (especially for
memory-exhausted cases) in the segmented array and profile collector
unit tests.
Depends on D54989.
Reviewers: mboerger
Subscribers: dschuff, mgorny, dmgreen, jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D55249
llvm-svn: 348568
This reverts commit r348455, with some additional changes:
- Work around a deficiency of gcc-4.8 by duplicating the implementation of
`AppendEmplace` in `Append`, but instead of using brace-init for the
copy construction, use a placement new explicitly calling the copy
constructor.
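A minimal, self-contained sketch of the difference (type and function names
are stand-ins): brace-init inside a template can be misparsed by gcc-4.8 as
an initializer list, while placement new with parentheses calls the copy
constructor directly.
```
#include <new>

struct NodeAndTarget { int Node; int Target; };  // stand-in aggregate type

// Works on gcc-4.8: explicit copy construction via placement new, no brace-init.
template <class T> T *CopyInto(void *Slot, const T &Value) {
  return new (Slot) T(Value);
}

int main() {
  alignas(NodeAndTarget) unsigned char Storage[sizeof(NodeAndTarget)];
  NodeAndTarget Src{1, 2};
  NodeAndTarget *Dst = CopyInto(Storage, Src);
  return (Dst->Node == 1 && Dst->Target == 2) ? 0 : 1;
}
```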
llvm-svn: 348563
This is a follow-up to D54989.
Work around gcc-4.8 failing to handle brace-init for structs as implying
default-construction of an aggregate; it treats such brace-init as an
initialiser list instead.
llvm-svn: 348445
Continuation of D54989.
Additional changes:
- Use `.AppendEmplace(...)` instead of `.Append(Type{...})` to appease
GCC 4.8, which gets confused about when an initializer_list is used as
opposed to a temporary aggregate-initialized object.
llvm-svn: 348438
.. and also the follow-ups r348336 r348338.
It broke stand-alone compiler-rt builds with GCC 4.8:
In file included from /work/llvm/projects/compiler-rt/lib/xray/xray_function_call_trie.h:20:0,
from /work/llvm/projects/compiler-rt/lib/xray/xray_profile_collector.h:21,
from /work/llvm/projects/compiler-rt/lib/xray/xray_profile_collector.cc:15:
/work/llvm/projects/compiler-rt/lib/xray/xray_segmented_array.h: In instantiation of ‘T* __xray::Array<T>::AppendEmplace(Args&& ...) [with Args = {const __xray::FunctionCallTrie::mergeInto(__xray::FunctionCallTrie&) const::NodeAndTarget&}; T = __xray::FunctionCallTrie::mergeInto(__xray::FunctionCallTrie&) const::NodeAndTarget]’:
/work/llvm/projects/compiler-rt/lib/xray/xray_segmented_array.h:383:71: required from ‘T* __xray::Array<T>::Append(const T&) [with T = __xray::FunctionCallTrie::mergeInto(__xray::FunctionCallTrie&) const::NodeAndTarget]’
/work/llvm/projects/compiler-rt/lib/xray/xray_function_call_trie.h:517:54: required from here
/work/llvm/projects/compiler-rt/lib/xray/xray_segmented_array.h:378:5: error: could not convert ‘{std::forward<const __xray::FunctionCallTrie::mergeInto(__xray::FunctionCallTrie&) const::NodeAndTarget&>((* & args#0))}’ from ‘<brace-enclosed initializer list>’ to ‘__xray::FunctionCallTrie::mergeInto(__xray::FunctionCallTrie&) const::NodeAndTarget’
new (AlignedOffset) T{std::forward<Args>(args)...};
^
/work/llvm/projects/compiler-rt/lib/xray/xray_segmented_array.h: In instantiation of ‘T* __xray::Array<T>::AppendEmplace(Args&& ...) [with Args = {const __xray::profileCollectorService::{anonymous}::ThreadTrie&}; T = __xray::profileCollectorService::{anonymous}::ThreadTrie]’:
/work/llvm/projects/compiler-rt/lib/xray/xray_segmented_array.h:383:71: required from ‘T* __xray::Array<T>::Append(const T&) [with T = __xray::profileCollectorService::{anonymous}::ThreadTrie]’
/work/llvm/projects/compiler-rt/lib/xray/xray_profile_collector.cc:98:34: required from here
/work/llvm/projects/compiler-rt/lib/xray/xray_segmented_array.h:378:5: error: could not convert ‘{std::forward<const __xray::profileCollectorService::{anonymous}::ThreadTrie&>((* & args#0))}’ from
‘<brace-enclosed initializer list>’ to ‘__xray::profileCollectorService::{anonymous}::ThreadTrie’
/work/llvm/projects/compiler-rt/lib/xray/xray_segmented_array.h: In instantiation of ‘T* __xray::Array<T>::AppendEmplace(Args&& ...) [with Args = {const __xray::profileCollectorService::{anonymous}::ProfileBuffer&}; T = __xray::profileCollectorService::{anonymous}::ProfileBuffer]’:
/work/llvm/projects/compiler-rt/lib/xray/xray_segmented_array.h:383:71: required from ‘T* __xray::Array<T>::Append(const T&) [with T = __xray::profileCollectorService::{anonymous}::ProfileBuffer]’
/work/llvm/projects/compiler-rt/lib/xray/xray_profile_collector.cc:244:44: required from here
/work/llvm/projects/compiler-rt/lib/xray/xray_segmented_array.h:378:5: error: could not convert ‘{std::forward<const __xray::profileCollectorService::{anonymous}::ProfileBuffer&>((* & args#0))}’ from ‘<brace-enclosed initializer list>’ to ‘__xray::profileCollectorService::{anonymous}::ProfileBuffer’
> Summary:
> This change makes the allocator and function call trie implementations
> move-aware and removes the FunctionCallTrie's reliance on a
> heap-allocated set of allocators.
>
> The change makes it possible to always have storage associated with
> Allocator instances, without necessarily having heap-allocated memory
> obtainable from these allocator instances. We also use thread-local
> uninitialised storage.
>
> We've also re-worked the segmented array implementation to have more
> precondition and post-condition checks when built in debug mode. This
> enables us to better implement some of the operations with surrounding
> documentation as well. The `trim` algorithm now has more documentation
> on the implementation, reducing the requirement to handle special
> conditions, and being more rigorous on the computations involved.
>
> In this change we also introduce an initialisation guard, through which
> we prevent an initialisation operation from racing with a cleanup
> operation.
>
> We also ensure that the ThreadTries array is not destroyed while copies
> into the elements are still being performed by other threads submitting
> profiles.
>
> Note that this change still has an issue with accessing thread-local
> storage from signal handlers that are instrumented with XRay. We also
> learned while testing this patch that there are cases where mmap(...)
> (through internal_mmap(...)) might be called in signal handlers, even
> though it is not async-signal-safe. Subsequent patches will
> address this, by re-using the `BufferQueue` type used in the FDR mode
> implementation for pre-allocated memory segments per active, tracing
> thread.
>
> We still want to land this change despite the known issues, with fixes
> forthcoming.
>
> Reviewers: mboerger, jfb
>
> Subscribers: jfb, llvm-commits
>
> Differential Revision: https://reviews.llvm.org/D54989
llvm-svn: 348346
Summary:
This change makes the allocator and function call trie implementations
move-aware and removes the FunctionCallTrie's reliance on a
heap-allocated set of allocators.
The change makes it possible to always have storage associated with
Allocator instances, without necessarily having heap-allocated memory
obtainable from these allocator instances. We also use thread-local
uninitialised storage.
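A minimal sketch of thread-local uninitialised storage (names and sizes
hypothetical): a raw, aligned per-thread byte array into which the allocator
is placement-constructed on first use, so no heap allocation is needed.
```
#include <new>

struct Allocator {
  explicit Allocator(unsigned long MaxBytes) : MaxMemory(MaxBytes) {}
  unsigned long MaxMemory;
};

Allocator &getThreadLocalAllocator() {
  alignas(Allocator) thread_local unsigned char Storage[sizeof(Allocator)];
  thread_local Allocator *Instance = new (Storage) Allocator(1 << 16);
  return *Instance;
}
```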
We've also re-worked the segmented array implementation to have more
precondition and post-condition checks when built in debug mode. This
enables us to better implement some of the operations with surrounding
documentation as well. The `trim` algorithm now has more documentation
on the implementation, reducing the requirement to handle special
conditions, and being more rigorous on the computations involved.
In this change we also introduce an initialisation guard, through which
we prevent an initialisation operation from racing with a cleanup
operation.
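A minimal sketch of such a guard (states and names hypothetical): a single
atomic state machine so initialisation cannot start while a cleanup is in
flight, and vice versa.
```
#include <atomic>

enum GuardState : unsigned { Uninitialized, Initializing, Initialized, Finalizing };
std::atomic<unsigned> Guard{Uninitialized};

bool tryInitialize() {
  unsigned Expected = Uninitialized;
  // Only one thread wins the transition to Initializing; a concurrent
  // cleanup (Finalizing) or completed init (Initialized) makes this a no-op.
  if (!Guard.compare_exchange_strong(Expected, Initializing,
                                     std::memory_order_acq_rel))
    return false;
  // ... one-time setup goes here ...
  Guard.store(Initialized, std::memory_order_release);
  return true;
}
```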
We also ensure that the ThreadTries array is not destroyed while copies
into the elements are still being performed by other threads submitting
profiles.
Note that this change still has an issue with accessing thread-local
storage from signal handlers that are instrumented with XRay. We also
learned while testing this patch that there are cases where mmap(...)
(through internal_mmap(...)) might be called in signal handlers, even
though it is not async-signal-safe. Subsequent patches will
address this, by re-using the `BufferQueue` type used in the FDR mode
implementation for pre-allocated memory segments per active, tracing
thread.
We still want to land this change despite the known issues, with fixes
forthcoming.
Reviewers: mboerger, jfb
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D54989
llvm-svn: 348335
Use a more representative test of allocating small chunks for
oddly-sized (small) objects from an allocator that has a page's worth of
memory.
llvm-svn: 347286
Summary:
This change addresses an issue which shows up as a synchronisation race
between threads writing into a buffer and another thread reading the
buffer.
In a lot of cases, we cannot guarantee that threads will always see the
signal to finalise their buffers in time despite the grace periods and
state machine maintained through atomic variables. This change addresses
it by ensuring that the same instance being updated to indicate how much
of the buffer is "used" by the writing thread is the same instance being
read by the thread processing the buffer to be written out to disk or
handled through the iterators.
To do this, we ensure that all the "extents" instances live in their own
backing store, in a different contiguous page from the
buffer-specific backing store. We also take precautions to ensure that
the atomic variables are cache-line-sized to prevent false-sharing from
unnecessarily causing cache contention on unrelated writes/reads.
It's feasible that we may in the future be able to move the storage of
the extents objects into the single backing store, slightly changing the
way to compute the size(s) of the buffers, but in the meantime we'll
settle for the isolation afforded by having a different backing store
for the extents instances.
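A minimal sketch of the cache-line isolation (the 64-byte constant is an
assumption): each extents counter is padded out to its own cache line so
that frequent updates by the writer do not share a line with unrelated data.
```
#include <atomic>
#include <cstdint>

constexpr unsigned kCacheLineSize = 64;  // assumed cache-line size

struct alignas(kCacheLineSize) BufferExtents {
  std::atomic<uint64_t> Size{0};
  // alignas pads the struct to a full cache line, preventing false sharing.
};
```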
Reviewers: mboerger
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D54684
llvm-svn: 347280
This change adds a static check to ensure that all metadata record
payloads don't go past the available buffer space in Metadata records.
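A minimal sketch of the kind of compile-time check involved (the payload
type is hypothetical; the 15-byte metadata payload size is described
elsewhere in these notes):
```
#include <cstdint>

struct MetadataRecord {
  uint8_t RecordKind;
  char Data[15];  // fixed-size payload available to metadata records
};

struct NewCPUIdPayload {
  uint16_t CPUId;
  uint64_t TSC;
};

static_assert(sizeof(NewCPUIdPayload) <= sizeof(MetadataRecord::Data),
              "payload must fit within a metadata record");
```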
llvm-svn: 346476
Summary:
Before this change, we could run into a situation where we may try to
undo tail exit records after writing metadata records before a function
enter event. This change rectifies that by resetting the tail exit
counter after writing the metadata records.
Reviewers: mboerger
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D54292
llvm-svn: 346475
Summary:
We need these fences to ensure that other threads attempting to read
bytes in the buffer will see thw writes committed before the extents are
updated. Without these, the writes can be un-committed by the time the
buffer extents counter is updated -- the fences should ensure that the
records written into the log have completed by the time we observe the
buffer extents from different threads.
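A minimal sketch of the ordering described (names hypothetical, single
writer assumed, bounds checks elided): the writer commits the record bytes,
issues a release fence, then bumps the extents counter; the reader loads the
counter and issues an acquire fence before reading that many bytes.
```
#include <atomic>
#include <cstdint>
#include <cstring>

char Buffer[4096];
std::atomic<uint64_t> Extents{0};

void writeRecord(const char *Bytes, uint64_t N) {
  uint64_t Offset = Extents.load(std::memory_order_relaxed);
  std::memcpy(Buffer + Offset, Bytes, N);
  std::atomic_thread_fence(std::memory_order_release);  // commit bytes first
  Extents.store(Offset + N, std::memory_order_relaxed);
}

uint64_t readableBytes() {
  uint64_t N = Extents.load(std::memory_order_relaxed);
  std::atomic_thread_fence(std::memory_order_acquire);  // then read the bytes
  return N;
}
```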
Reviewers: mboerger
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D54291
llvm-svn: 346474
Summary:
This change covers a number of things spanning LLVM and compiler-rt,
which are related in a non-trivial way.
In LLVM, we have a library that handles the FDR mode event log loading,
which uses C++'s runtime polymorphism feature to better faithfully
represent the events that are written down by the FDR mode runtime. We
do this by interpreting a trace that's serialised in a common format
agreed upon by both the trace loading library and the FDR mode runtime.
This library is under active development, which consists of features
allowing us to reconstitute a higher-level event log.
This event log is used by the conversion and visualisation tools we have
for interpreting XRay traces.
One of the tools we have is a diagnostic tool in llvm-xray called
`fdr-dump` which we've been using to debug our expectations of what the
FDR runtime should be writing and what the logical FDR event log
structures are. We use this fairly extensively to reason about why some
non-trivial traces we're generating with FDR mode runtimes fail to
convert or fail to parse correctly.
One of these failures we've found in manual debugging of some of the
traces we've seen involve an inconsistency between the buffer extents (a
record indicating how many bytes to follow are part of a logical
thread's event log) and the record of the bytes written into the log --
sometimes it turns out the data could be garbage, due to buffers being
recycled, but sometimes we're seeing the buffer extent indicating a log
is "shorter" than the actual records associated with the buffer. This
case happens particularly with function entry records with a call
argument.
This change for now updates the FDR mode runtime to write the bytes for
the function call and arg record before updating the buffer extents
atomically, allowing multiple threads to see a consistent view of the
data in the buffer using the atomic counter associated with a buffer.
What we're trying to prevent here is partial updates where we see the
intermediary updates to the buffer extents (function record size then
call argument record size) becoming observable from another thread, for
instance, one doing the serialization/flushing.
To diagnose this issue properly, we need to be able to honour
the extents being set in the `BufferExtents` records marking the
beginning of the logical buffers when reading an FDR trace. Since LLVM
doesn't use C++'s RTTI mechanism, we instead follow the advice in the
documentation for LLVM Style RTTI
(https://llvm.org/docs/HowToSetUpLLVMStyleRTTI.html). We then rely on
this RTTI feature to ensure that our file-based record producer (our
streaming "deserializer") can honour the extents of individual buffers
as we interpret traces.
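A minimal sketch in the spirit of the linked LLVM-style RTTI document (class
names hypothetical): a kind tag plus a classof() predicate lets isa<> and
dyn_cast<> recover concrete record types without C++ RTTI.
```
class Record {
public:
  enum class RecordKind { Metadata, Function };
  explicit Record(RecordKind K) : Kind(K) {}
  RecordKind getRecordKind() const { return Kind; }
  virtual ~Record() = default;

private:
  const RecordKind Kind;
};

class BufferExtentsRecord : public Record {
public:
  BufferExtentsRecord() : Record(RecordKind::Metadata) {}
  // Used by llvm::isa<> / llvm::dyn_cast<> to recover the concrete type.
  static bool classof(const Record *R) {
    return R->getRecordKind() == RecordKind::Metadata;
  }
};
```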
This also sets us up to be able to eventually do smart
skipping/continuation of FDR logs, seeking instead to find BufferExtents
records in cases where we find potentially recoverable errors. In the
meantime, we make this change to operate in a strict mode when reading
logical buffers with extent records.
Reviewers: mboerger
Subscribers: hiraditya, llvm-commits, jfb
Differential Revision: https://reviews.llvm.org/D54201
llvm-svn: 346473
Summary:
This change updates the version number for FDR logs to 5, and updates the
trace processing to support changes in the custom event records.
In the runtime, since we're already writing down the record preamble to
handle CPU migrations and TSC wraparound, we can use the same TSC delta
encoding in the custom event and typed event records that we use in
function event records. We make the same change for typed events (which
were unsupported in the trace processing before this change), which now
show up in the trace.
Future changes should increase our testing coverage to make custom and
typed events as first class entities in the FDR mode log processing
tools.
This change is also a good example of how we end up supporting new
record types in the FDR mode implementation. This shows the places where
new record types are added and supported.
Depends on D54139.
Reviewers: mboerger
Subscribers: hiraditya, arphaman, jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D54140
llvm-svn: 346293
Summary:
For platforms without preinit support (such as NetBSD/amd64) the
initialization routine __xray_init() was called in a non-deterministic
order relative to other constructors. This caused failures, as XRay
routines attempted to execute code under the assumption of being
initialized, which was not always true.
Use the GCC/Clang extension to set maximal priority on the constructor
calling __xray_init(). This code switches away from the C++ lambda form,
as it did not allow specifying this compiler extension.
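A minimal sketch of the extension in question (the priority value and
wrapper name are assumptions):
```
extern "C" void __xray_init();

// A plain function can carry the constructor attribute with a priority,
// unlike the previous lambda-based initializer. The value 0 stands in for
// "maximal priority" here; the exact value used is an assumption.
__attribute__((constructor(0))) static void xrayInitializer() {
  __xray_init();
}
```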
Reviewers: dberris, joerg
Reviewed By: dberris
Subscribers: llvm-commits, mgorny, #sanitizers
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D54136
llvm-svn: 346222
Summary:
This change cuts across LLVM and compiler-rt to add support for
rendering custom events in the XRayRecord type, to allow for including
user-provided annotations in the output YAML (as raw bytes).
This work enables us to add custom event and typed event records into
the `llvm::xray::Trace` type for user-provided events. This can then be
programmatically handled through the C++ API and can be included in some
of the tooling as well. For now we support printing the raw data we
encounter in the custom events in the converted output.
Future work will allow us to start interpreting these custom and typed
events through a yet-to-be-defined API for extending the trace analysis
library.
Reviewers: mboerger
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D54139
llvm-svn: 346214
Summary:
Prior to this change, we can run into situations where the TSC we're
getting when exiting a function is less than the TSC we got when
entering it. This would sometimes cause the counter for cumulative call
times to overflow; it was also erroneously being stored as a signed
64-bit integer.
This change addresses both these issues while adding provisions for
tracking CPU migrations. We do this because, on some architectures, the
timestamp counters are not guaranteed to be synchronised across CPUs, so
moving from one CPU to another can make the TSC appear to go backwards.
For the moment, we
leave the provisions there until we can update the data format to
include the counting of CPU migrations we can catch.
We update the necessary tests as well, ensuring that our expectations
for the cycle accounting are met in case of counter wraparound.
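A minimal sketch of the accounting being described (function name
hypothetical): keep the cumulative counter unsigned and clamp the delta when
the exit TSC is observed to be earlier than the entry TSC, e.g. after a
migration to a CPU with an unsynchronised counter.
```
#include <cstdint>

uint64_t cycleDelta(uint64_t EntryTSC, uint64_t ExitTSC) {
  // Clamp instead of producing a huge (or negative, if signed) delta.
  return ExitTSC >= EntryTSC ? ExitTSC - EntryTSC : 0;
}
```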
Reviewers: mboerger
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D54088
llvm-svn: 346116
Summary:
Fix some issues discovered from mostly manual inspection of outputs from
the `llvm-xray fdr-dump` tool.
It turns out we haven't been writing the deltas properly, and have been
writing down zeros for the deltas of some records. This change fixes this
oversight introduced by the recent refactoring.
Reviewers: mboerger
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D54022
llvm-svn: 345954
Summary:
This is a follow-on change to D53858 which turns out to have had a TSC
accounting bug when writing out function exit records in FDR mode.
This change adds a number of tests to ensure that:
- We are handling the delta between the exit TSC and the last TSC we've
seen.
- We are writing the custom event and typed event records as a single
update to the buffer extents.
- We are able to catch boundary conditions when loading FDR logs.
We introduce a TSC matcher to the test helpers, which we use in the
testing/verification of the TSC accounting change.
Reviewers: mboerger
Subscribers: mgorny, hiraditya, jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D53967
llvm-svn: 345905
Summary:
This change cuts across compiler-rt and llvm, to increment the FDR log
version number to 4, and include the CPU ID in the custom event records.
This is a step towards allowing us to change the `llvm::xray::Trace`
object to start representing both custom and typed events in the stream
of records. Follow-on changes will allow us to change the kinds of
records we're presenting in the stream of traces, to incorporate the
data in custom/typed events.
A follow-on change will handle the typed event case, where it may not
fit within the 15-byte buffer for metadata records.
This work is part of the larger effort to enable writing analysis and
processing tools using a common in-memory representation of the events
found in traces. The work will focus on porting existing tools in LLVM
to use the common representation and informing the design of a
library/framework for expressing trace event analysis as C++ programs.
Reviewers: mboerger, eizan
Subscribers: hiraditya, mgrang, llvm-commits
Differential Revision: https://reviews.llvm.org/D53920
llvm-svn: 345798
Summary:
This change completes the refactoring of the FDR runtime to support the
following:
- Generational buffer management.
- Centralised and well-tested controller implementation.
In this change we've had to:
- Greatly simplify the code in xray_fdr_logging.cc to only implement the
glue code for calling into the controller.
- Implement the custom and typed event logging functions in the
FDRLogWriter.
- Imbue the `XRAY_NEVER_INSTRUMENT` attribute onto all functions in the
controller implementation.
Reviewers: mboerger, eizan, jfb
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D53858
llvm-svn: 345568
Summary:
Some calls to `postCurrentThreadFCT()` are not guarded by our
recursion guard. We've observed that these can sometimes lead to
deadlocks when some functions (like memcpy()) get outlined and the
XRay-instrumented version of memcpy is materialised by the compiler
in the implementation of lower-level components used by the
profiling runtime.
This change ensures that all calls to `postCurrentThreadFCT` are guarded
by our thread-recursion guard, to prevent deadlocks.
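A minimal sketch of such a guard (names hypothetical): a thread_local flag
wrapped in an RAII object, so re-entry from instrumented lower-level code
becomes a no-op instead of a deadlock.
```
struct RecursionGuard {
  bool &Flag;
  const bool AlreadyActive;
  explicit RecursionGuard(bool &F) : Flag(F), AlreadyActive(F) {
    if (!AlreadyActive)
      Flag = true;
  }
  explicit operator bool() const { return !AlreadyActive; }
  ~RecursionGuard() {
    if (!AlreadyActive)
      Flag = false;
  }
};

thread_local bool InProfilingHandler = false;

void profilingExitHandler() {
  RecursionGuard Guard(InProfilingHandler);
  if (!Guard)
    return;  // re-entered from instrumented code (e.g. an outlined memcpy)
  // ... guarded work, including postCurrentThreadFCT(), goes here ...
}
```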
Reviewers: mboerger, eizan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D53805
llvm-svn: 345489
Summary:
In D53560, we assumed a specific layout for memory without using an
explicit structure. This follow-up change uses more portable layout
control by using unions in a struct, and consolidating the memory
management code in the buffer queue.
We also take the opportunity to improve the documentation on the types
and operations, along with simplifying some of the logic in the buffer
queue implementation.
Reviewers: mboerger, eizan
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D53802
llvm-svn: 345485
Summary:
This change implements the ref-counting for backing stores associated
with generational buffer management. We do this as an implementation
detail of the buffer queue, instead of exposing this to the interface.
This change allows us to keep the buffer queue interface and usage model
the same.
Depends on D53551.
Reviewers: mboerger, eizan
Subscribers: jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D53560
llvm-svn: 345471
Summary:
This is an intermediary step in the full support for generational buffer
management in the FDR runtime. This change makes the FDR controller
aware of the new generation number in the buffers handed out by the
BufferQueue type.
In the process of making this change, we've realised that the cleanest
way of ensuring that the backing store for each generation stays live
while all the threads that need access to it still hold buffers is
reference counting, tying the backing store to the lifetime of all
threads that have a handle on buffers associated with that memory.
We also learned that we were missing the edge case in the function exit
handler's implementation where the first record written into the
buffer is a function exit; this is caught and fixed by the test for
generational buffer management.
We still haven't wired the controller into the FDR mode runtime, which
will need the reference counting on the backing store implemented to
ensure that we're being conservatively thread-safe with this approach.
Depends on D52974.
Reviewers: mboerger, eizan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D53551
llvm-svn: 345445
Change the assumption when releasing memory to a buffer queue that new
generations might not be able to re-use the memory mapped addresses.
llvm-svn: 344882
Summary:
This change updates the buffer queue implementation to support using a
generation number to identify the lifetime of buffers. This first part
introduces the notion of the generation number, without changing the way
we handle the buffers yet.
What's missing here is the cleanup of the buffers. Ideally we'll keep
the two most recent generations. We need to ensure that, before we do any
writes to the buffers, we check the generation number(s) first.
Those changes will follow on from this change.
Depends on D52588.
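A minimal sketch of the intended check (fields hypothetical): buffers carry
the generation under which they were handed out, and writers compare it
against the queue's current generation before touching the memory.
```
#include <atomic>
#include <cstddef>
#include <cstdint>

struct Buffer {
  uint64_t Generation = 0;
  char *Data = nullptr;
  size_t Size = 0;
};

struct BufferQueue {
  std::atomic<uint64_t> CurrentGeneration{0};
  uint64_t generation() const {
    return CurrentGeneration.load(std::memory_order_acquire);
  }
};

bool canWriteTo(const BufferQueue &Q, const Buffer &B) {
  return B.Generation == Q.generation();
}
```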
Reviewers: mboerger, eizan
Subscribers: llvm-commits, jfb
Differential Revision: https://reviews.llvm.org/D52974
llvm-svn: 344881
Summary:
This change allows us to handle allocator exhaustion properly in the
segmented array implementation. Before this change, we relied on the
caller of the `trim` function to provide a valid number of elements to
trim. This change allows us to do the right thing in case the number of
elements to trim is greater than the size of the container.
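A minimal sketch of the behaviour (container details elided): clamp the
requested trim count to the current size instead of trusting the caller.
```
#include <algorithm>
#include <cstddef>

template <class T> struct Array {
  size_t Size = 0;
  size_t trim(size_t Elements) {
    Elements = std::min(Elements, Size);  // handle over-large requests safely
    Size -= Elements;
    // ... release the now-unused segments back to the allocator ...
    return Elements;
  }
};
```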
Reviewers: mboerger, eizan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D53484
llvm-svn: 344880
Summary:
This change updates the buffer queue implementation to support using a
generation number to identify the lifetime of buffers. This first part
introduces the notion of the generation number, without changing the way
we handle the buffers yet.
What's missing here is the cleanup of the buffers. Ideally we'll keep
the two most recent generations. We need to ensure that, before we do any
writes to the buffers, we check the generation number(s) first.
Those changes will follow on from this change.
Depends on D52588.
Reviewers: mboerger, eizan
Subscribers: llvm-commits, jfb
Differential Revision: https://reviews.llvm.org/D52974
llvm-svn: 344670