This patch should fix a bug in PExpect.launch that occurs when color
support is not enabled.
In that case, we need to add the `--no-use-colors` flag to lldb's launch
argument list. However, previously, each character of the string was
appended separately to the `args` list. This patch solves that by adding
the whole string to the list as a single element.
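The difference is easy to see with plain Python lists; this is a minimal sketch of the bug rather than the actual lldbpexpect.py code:
```python
args = ["--no-lldbinit"]

# Buggy: += with a string extends the list with the string's characters.
args += "--no-use-colors"
# args is now ['--no-lldbinit', '-', '-', 'n', 'o', ...]

# Fixed: append the flag as a single argument.
args = ["--no-lldbinit"]
args.append("--no-use-colors")
# args is now ['--no-lldbinit', '--no-use-colors']
```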
This should fix the TestIOHandlerResize failure on GreenDragon.
Differential Revision: https://reviews.llvm.org/D126021
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
This should fix the issues introduced by d71d1a9, which skipped all the
test setup commands.
This also fixes the test failures happening in TestAutosuggestion.py.
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
This patch adds a new `use_colors` argument to the PExpect.launch
method.
As the name suggests, it allows the user to conditionally enable color
support in the debugger, which can be helpful for testing functionality
that relies on it, such as progress reporting. It defaults to False.
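A call from a pexpect-based test might look roughly like this (only the use_colors keyword comes from this patch; the surrounding test code is an illustrative sketch):
```python
from lldbsuite.test.lldbpexpect import PExpectTest

class TestColoredOutput(PExpectTest):
    def test_progress_with_colors(self):
        # Launch lldb with color support enabled so escape sequences
        # (e.g. for progress reporting) show up in the captured output.
        self.launch(use_colors=True)
        self.child.sendline("version")
        self.child.expect("lldb version")
```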
Differential Revision: https://reviews.llvm.org/D125915
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
This diff implements per-core tracing on lldb-server. It also includes tests that ensure that tracing can be initiated from the client and that the jLLDBGetState packet returns the list of trace buffers per core.
This doesn't include any decoder changes.
Finally, this makes some small changes here and there that improve the existing code.
A specific piece of code that can't be tested reliably is the case where
tracing per core fails due to permissions. In that case we add a
troubleshooting message, and this is the manual test:
```
/proc/sys/kernel/perf_event_paranoid set to 1
(lldb) process trace start --per-core-tracing
error: perf event syscall failed: Permission denied
You might need that /proc/sys/kernel/perf_event_paranoid has a value of 0 or -1.
```
Differential Revision: https://reviews.llvm.org/D124858
This updates the documentation of the gdb-remote protocol, as well as the help messages, to include the new --per-core-tracing option.
Differential Revision: https://reviews.llvm.org/D124640
distutils is deprecated and shutil.which is the suggested
replacement for distutils.spawn.find_executable.
https://peps.python.org/pep-0632/#migration-advice
https://docs.python.org/3/library/shutil.html#shutil.which
It was added in Python 3.3, but given that we're already using
shutil.which elsewhere, I think this is ok/no worse than before.
We do have our own find_executable in lldb/test/Shell/helper/build.py
but I'd rather leave that as is for now. Also we have our own versions
of which() but again, a change for another time.
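For reference, the replacement is essentially a drop-in rename (a sketch, not the actual call sites):
```python
import shutil

# Before (deprecated per PEP 632):
#   from distutils.spawn import find_executable
#   lldb_path = find_executable("lldb")

# After: shutil.which returns the full path, or None if not on PATH.
lldb_path = shutil.which("lldb")
```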
This work is part of #54337.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D124601
This patch replaces getargspec with getfullargspec in funcutils.py.
getargspec is deprecated and has been removed in the Python 3.11 release.
This is important for running the LLDB test suite on the Windows/Arm64
platform, where a native Python will only be available from that release
onwards.
Note: getfullargspec is not available in Python 2
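A sketch of the switch (the callback is illustrative, not the actual funcutils.py code):
```python
import inspect

def bp_callback(frame, bp_loc, extra_args, internal_dict):
    pass

# inspect.getargspec(bp_callback) no longer exists in Python 3.11;
# getfullargspec is the forward-compatible replacement.
spec = inspect.getfullargspec(bp_callback)
print(len(spec.args))  # -> 4
```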
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D121786
This changes the decorator helper `_match_decorator_property` to
treat an actual value of `None` as not a match. Using `None` for the
pattern continues to be considered a match.
I discovered the issue because marking a test as NO_DEBUG_INFO_TESTCASE
will cause the call to `self.getDebugInfo()` to return `None` and
incorrectly skip or XFAIL the corresponding test.
I used the above scenario to create a test for the decorators.
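The fixed semantics boil down to something like this (a sketch with illustrative names, not the exact decorator code):
```python
def matches(pattern, actual):
    # None as the pattern still means "don't care" and matches anything...
    if pattern is None:
        return True
    # ...but None as the actual value is never a match, e.g. when
    # getDebugInfo() returns None for a NO_DEBUG_INFO_TESTCASE.
    if actual is None:
        return False
    return actual == pattern
```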
Differential revision: https://reviews.llvm.org/D123401
This patch moves the platform creation and selection logic into the
per-debugger platform lists. I've tried to keep functional changes to a
minimum -- the main (only) observable difference in this change is that
APIs which select a platform by name (e.g.,
Debugger::SetCurrentPlatform) will not automatically pick up a platform
associated with another debugger (or no debugger at all).
I've also added several tests for this functionality -- one of the
pleasant consequences of the debugger isolation is that it is now
possible to test the platform selection and creation logic.
This is a product of the discussion at
<https://discourse.llvm.org/t/multiple-platforms-with-the-same-name/59594>.
Differential Revision: https://reviews.llvm.org/D120810
As noticed in D87637, when LLDB crashes, we only print stack traces if
LLDB is directly executed, not when used via Python bindings. Enabling
this by default may be undesirable (libraries shouldn't be messing with
signal handlers), so make this an explicit opt-in.
I "commandeered" this patch from Jordan Rupprecht who put this up for
review originally.
Differential revision: https://reviews.llvm.org/D91835
When opening core files (and also in some other situations) we could end
up with two vdso modules. This could happen because the vdso module is
very special, and over the years, we have accumulated various ways to
load it.
In D10800, we added one mechanism for loading it, which took the form of
a generic load-from-memory capability. Unfortunately loading an elf file
from memory is not possible (because the loader never loads the entire
file), and our attempts to do so were causing crashes. So, in D34352, we
partially reverted D10800 and implemented a custom mechanism specific to
the vdso.
Unfortunately, enough of D10800 remained such that, under the right
circumstances, it could end up loading a second (non-functional) copy of
the vdso module. This happened when the process plugin did not support
the extended MemoryRegionInfo query (added in D22219, to workaround a
different bug), which meant that the loader plugin was not able to
recognise that the linux-vdso.so.1 module (this is how the loader calls
it) is in fact the same as the [vdso] module (the name used in
/proc/$PID/maps) we loaded before. This typically happened in a core
file, as they don't store this kind of information.
This patch fixes the issue by completing the revert of D10800 -- the
memory loading code is removed completely. It also reduces the scope of
the hackaround introduced in D22219 -- it isn't completely sound and is
only relevant for fairly old (but still supported) versions of android.
I added the memory loading logic to the wasm dynamic loader, which has
since appeared and is relying on this feature (it even has a test). As
far as I can tell loading wasm modules from memory is possible and
reliable. MachO memory loading is not affected by this patch, as it uses
a completely different code path.
Since the scenarios/patches I described came without test cases, I have
created two new gdb-client test cases for them. They're not
particularly readable, but right now, this is the best way we can
simulate the behavior (bugs) of a particular dynamic linker.
Differential Revision: https://reviews.llvm.org/D122660
This recommits dddf4ce03, which was reverted because of a couple of test
failures on macos. The reason behind the failures was that the patch
inadvertently changed the value returned by the host platform from
"macosx" to "darwin". The new version fixes that.
Original commit message was:
The decision about which categories are relevant for a particular test run
happens very early in the test setup process. It uses the SBPlatform
object to determine which categories should be skipped. The platform
object created for this purpose transcends individual test runs.
This setup is not compatible with the direction discussed in
<https://discourse.llvm.org/t/multiple-platforms-with-the-same-name/59594>
-- when platform objects are tied to a specific (SB)Debugger, they need
to be created alongside it, which currently happens in the test setUp
method.
This patch is the first step in that direction -- it rewrites the
category skipping logic to avoid depending on a global SBPlatform
object. Fortunately, the skipping logic is fairly simple (and I believe
it ought to stay that way) and mainly consists of comparing the
platform name against some hardcoded lists. This patch bases this
comparison on the platform name instead of the os part of the triple (as
reported by the platform).
Differential Revision: https://reviews.llvm.org/D121605
This patch introduces 2 new lldb utility functions:
- lldbutil.start_listening_from: This can be called in the test setup to
create a listener and set it up for a specific event mask and add it
to the user-provided broadcaster's list.
- lldbutil.fetch_next_event: This will fetch a single event from the
provided listener and return it if it matches the provided broadcaster.
The motivation behind this is to easily test new kinds of events
(i.e. Swift type-system progress events). However, this patch also
updates `TestProgressReporting.py` and `TestDiagnosticReporting.py`
to make use of these new helper functions.
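Roughly, a test would use them like this (a sketch based on the description above; the exact signatures live in lldbutil.py, and the broadcast bit is illustrative):
```python
import lldb
from lldbsuite.test import lldbutil
from lldbsuite.test.lldbtest import TestBase

class TestProgressEvents(TestBase):
    def test_progress(self):
        broadcaster = self.dbg.GetBroadcaster()
        # Set up a listener for progress events in the test setup.
        listener = lldbutil.start_listening_from(
            broadcaster, lldb.SBDebugger.eBroadcastBitProgress)
        # ... do something that triggers progress reporting ...
        event = lldbutil.fetch_next_event(self, listener, broadcaster)
        self.assertTrue(event.IsValid())
```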
Differential Revision: https://reviews.llvm.org/D122193
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
By default these timeouts are extremely small (0.1s). This means that
100ms after sending an EOF, pexpect will start sending the process
increasingly aggressive signals, but the small timeouts mean that (on a
loaded machine) the kernel may not have enough time to process the
signal even if the overall effect of the signal is to kill the
application.
It turns out we were already relying on these signals (instead of regular
EOF quits) in our tests. In my experiments it was sufficient to block
SIGINT and SIGHUP to cause some tests to become flaky. This was most
likely the reason for a couple of flakes on the lldb-x86_64-debian bot,
and is probably the reason why the pexpect tests are flaky on several
other (e.g. asan) bots.
This patch increases the timeout to 6 seconds (a 60-fold increase), which
is hopefully sufficient to avoid flakes even in the most extreme
situations.
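For reference, these are standard pexpect knobs (the 6-second value mirrors the description above; the spawn arguments are illustrative):
```python
import pexpect

child = pexpect.spawn("lldb", ["--no-use-colors"])
# Give a loaded machine time to react to EOF/SIGHUP/SIGINT before pexpect
# escalates to harsher signals; the pexpect defaults are only 0.1 seconds.
child.delayafterclose = 6.0
child.delayafterterminate = 6.0
```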
This patch introduces a generic helper class that will listen for
events in a background thread and match them against a source broadcaster.
If the event received matches the source broadcaster, the event is
queued up in a list that the user can access later on.
The motivation behind this is to easily test new kinds of events
(i.e. Swift type-system progress events). However, this patch also
updates `TestProgressReporting.py` and `TestDiagnosticReporting.py`
to make use of this new helper class.
Differential Revision: https://reviews.llvm.org/D121977
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
The decision about which categories are relevant for a particular test run
happens very early in the test setup process. It uses the SBPlatform
object to determine which categories should be skipped. The platform
object created for this purpose transcends individual test runs.
This setup is not compatible with the direction discussed in
<https://discourse.llvm.org/t/multiple-platforms-with-the-same-name/59594>
-- when platform objects are tied to a specific (SB)Debugger, they need
to be created alongside it, which currently happens in the test setUp
method.
This patch is the first step in that direction -- it rewrites the
category skipping logic to avoid depending on a global SBPlatform
object. Fortunately, the skipping logic is fairly simple (and I believe
it ought to stay that way) and mainly consists of comparing the
platform name against some hardcoded lists. This patch bases this
comparison on the platform name instead of the os part of the triple (as
reported by the platform).
Differential Revision: https://reviews.llvm.org/D121605
Add `IsAggregateType` to the SB API.
I'd like to use this from tests, and there are numerous other `Is<X>Type`
predicates on `SBType`.
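From a test, the new predicate can be queried like the existing ones (a minimal sketch; the type name and the `target` fixture are assumed):
```python
import lldb

# Assumes `target` is an SBTarget from the usual test setup and that the
# debugged program defines a struct named MyStruct.
struct_type = target.FindFirstType("MyStruct")
assert struct_type.IsAggregateType()
assert not target.GetBasicType(lldb.eBasicTypeInt).IsAggregateType()
```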
Differential Revision: https://reviews.llvm.org/D121252
Our SIGTSTP handler was working, but that was mostly accidental.
The reason it worked is because lldb is multithreaded for most of its
lifetime and the OS is reasonably fast at responding to signals. So,
what happened was that the kill(SIGTSTP) which we sent from inside the
handler was delivered to another thread while the handler was still set
to SIG_DFL (which then correctly put the entire process to sleep).
Sometimes it happened that the other thread got the second signal after
the first thread had already restored the handler, in which case the
signal handler would run again, and it would again attempt to send the
SIGTSTP signal back to itself.
Normally it didn't take many iterations for the signal to be delivered
quickly enough. However, if you were unlucky (or were playing around
with pexpect) you could get SIGTSTP while lldb was single-threaded, and
in that case, lldb would go into an endless loop because the second
SIGTSTP could only be handled on the main thread, and only after the
handler for the first signal returned (and re-installed itself). In that
situation the handler would keep re-sending the signal to itself.
This patch fixes the issue by implementing the handler the way it is
supposed to be done:
- before sending the second SIGTSTP, we unblock the signal (it gets
automatically blocked upon entering the handler)
- we use raise to send the signal, which makes sure it gets delivered to
the thread which is running the handler
This also means we don't need the SIGCONT handler, as our TSTP handler
resumes right after the entire process is continued, and we can do the
required work there.
I also include a test case for the SIGTSTP flow. It uses pexpect, but it
includes a couple of extra twists. Specifically, I needed to create an
extra process on top of lldb, which will run lldb in a separate process
group and simulate the role of the shell. This is needed because SIGTSTP
is not effective on a session leader (the signal gets delivered, but it
does not cause a stop) -- normally there isn't anyone to notice the
stop.
Differential Revision: https://reviews.llvm.org/D120320
The race is between these two pieces of code that are executed in two separate
lldb-vscode threads (the first is in the main thread and another is in the
event-handling thread):
```
// lldb-vscode.cpp
g_vsc.debugger.SetAsync(false);
g_vsc.target.Launch(launch_info, error);
g_vsc.debugger.SetAsync(true);
```
```
// Target.cpp
bool old_async = debugger.GetAsyncExecution();
debugger.SetAsyncExecution(true);
debugger.GetCommandInterpreter().HandleCommands(GetCommands(), exc_ctx,
options, result);
debugger.SetAsyncExecution(old_async);
```
The sequence that leads to the bug is this one:
1. Main thread enables synchronous mode and launches the process.
2. When the process is launched, it generates the first stop event.
3. This stop event is caught by the event-handling thread and DoOnRemoval
is invoked.
4. Inside DoOnRemoval, this thread runs stop hooks. And before running stop
hooks, the current synchronization mode is stored into old_async (and
right now it is equal to "false").
5. The main thread finishes the launch and returns to lldb-vscode, the
synchronization mode is restored to asynchronous by lldb-vscode.
6. Event-handling thread finishes stop hooks processing and restores the
synchronization mode according to old_async (i.e. makes the mode synchronous)
7. And now the mode is synchronous while lldb-vscode expects it to be
asynchronous. Synchronous mode forbids the process to broadcast public stop
events, so, VS Code just hangs because lldb-vscode doesn't notify it about
stops.
So, this diff makes the target intercept the first stop event if the process is
launched in synchronous mode, thus preventing stop-hook execution.
The bug is only present on Windows because other platforms already
intercept this event using their own hijacking listeners.
So, this diff also fixes some problems with lldb-vscode tests on Windows to make
it possible to run the related test. Other tests still can't be enabled because
the debugged program prints something into stdout and LLDB can't intercept this
output and redirect it to lldb-vscode properly.
Reviewed By: jingham
Differential Revision: https://reviews.llvm.org/D119548
We discovered that when using "launchCommands" or "attachCommands" there was an issue where these commands were not being run synchronously. There were further problems in this case where we would get thread events for the process that was just launched or attached before the IDE was ready, which is after "configurationDone" was sent to lldb-vscode.
This fix introduces the ability to wait for the process to stop after the run or attach to ensure that we have a stopped process at the entry point that is ready for the debug session to proceed. This also allows us to run the normal launch or attach without needing to play with the async flag of the debugger. We spin up the thread that listens for process events before we start the launch or attach, but we stop the first eStateStopped (with stop ID of zero) event from being delivered through the DAP protocol because the "configurationDone" request handler will deliver it manually as the IDE expects a stop after configuration done. The request_configurationDone will also only deliver the stop packet if "stopOnEntry" is False in the launch configuration.
Also added a new "timeout" to the launch and attach launch configuration arguments that can be set and defaults to 30 seconds. Since we now poll to detect when the process is stopped, we need a timeout that can be changed in case certain workflows take longer than 30 seconds to attach. If the process is not stopped by the timeout, an error will be returned for the launch or attach.
Added a flag to the vscode.py protocol classes that detects and ensures that no "stopped" events are sent before "configurationDone" has been sent, and will raise an error if that does happen.
This should make our launching and attaching more reliable and avoid some deadlocks that were being seen (https://reviews.llvm.org/D119548).
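For illustration, the launch arguments the patch describes might look roughly like this (key names come from the description above; the program path and command are placeholders):
```python
# Rough shape of a "launch" request's arguments as seen by lldb-vscode.
launch_args = {
    "program": "/tmp/a.out",
    "stopOnEntry": False,
    "launchCommands": ["process launch --stop-at-entry"],
    # New: how long to wait for the process to stop before giving up.
    "timeout": 60,
}
```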
Differential Revision: https://reviews.llvm.org/D119797
Instead of using sleeps, have the inferior notify us (via a trap opcode) that
the requested number of threads have been created.
This allows us to get rid of some fairly dodgy test utility code --
wait_for_thread_count seemed like it was waiting for the threads to
appear, but it never actually let the inferior run, so it only succeeded
if the threads were already started when the function was called. Since
the function was called after a fairly small delay (1s, usually), this
is probably the reason why the tests were failing on some bots.
Differential Revision: https://reviews.llvm.org/D119167
This fixes TestGdbRemoteSingleStep.py and TestGdbRemote_vCont.py. This
patch updates the test to account for the possibility that the constants
are already materialized. This appears to behave differently between
embedded arm64 devices and Apple Silicon.
Implement the qXfer:siginfo:read packet that is used to read the siginfo_t
(extended signal information) for the current thread. This is currently
implemented on FreeBSD and Linux.
Differential Revision: https://reviews.llvm.org/D117113
Previously we would persist the flags indicating whether the remote side
supports a particular feature across reconnects, which is obviously not
a good idea.
I implement the clearing by nuking (it's the only way to be sure :) the
entire GDBRemoteCommunication object in the disconnect operation and
creating a new one upon connection. This allows us to maintain a nice
invariant that the GDBRemoteCommunication object (which is now a
pointer) exists only if it is connected. The downside to that is that a
lot of functions now need to check the validity of the pointer instead
of blindly accessing the object.
The process communication does not suffer from the same issue because we
always destroy the entire Process object for a relaunch.
Differential Revision: https://reviews.llvm.org/D116539
The lldbconfig module was necessary to run the LLDB test suite against a
reproducer. Since this functionality has been removed, the module is no
longer necessary.
It was being used only in some very old tests (which pass even without
it) and its implementation is highly questionable.
These days we have different mechanisms for requesting a build with a
particular kind of C++ library (USE_LIB(STD)CPP in the makefile).
This is an updated version of the https://reviews.llvm.org/D113789 patch with the following changes:
- We no longer modify modification times of the cache files
- Use LLVM caching and cache pruning instead of making a new cache mechanism (See DataFileCache.h/.cpp)
- Add signature to start of each file since we are not using modification times so we can tell when caches are stale and remove and re-create the cache file as files are changed
- Add settings to control the cache size, disk percentage and expiration in days to keep cache size under control
This patch enables symbol tables to be cached in the LLDB index cache directory. All cache files are in a single directory and the files use unique names to ensure that files from the same path will re-use the same file as files get modified. This means as files change, their cache files will be deleted and updated. The modification time of each of the cache files is not modified so that access based pruning of the cache can be implemented.
The symbol table cache files start with a signature that uniquely identifies a file on disk and contains one or more of the following items:
- object file UUID if available
- object file mod time if available
- object name for BSD archive .o files that are in .a files if available
If none of these signature items are available, then the file will not be cached. This keeps temporary object files from expressions from being cached.
When the cache files are loaded on subsequent debug sessions, the signature is compared and if the file has been modified (uuid changes, mod time changes, or object file mod time changes) then the cache file is deleted and re-created.
Module caching must be enabled by the user before this can be used:
symbols.enable-lldb-index-cache (boolean) = false
(lldb) settings set symbols.enable-lldb-index-cache true
There is also a setting that allows the user to specify a module cache directory that defaults to being next to the symbols.clang-modules-cache-path directory in a temp directory:
(lldb) settings show symbols.lldb-index-cache-path
/var/folders/9p/472sr0c55l9b20x2zg36b91h0000gn/C/lldb/IndexCache
If this setting is enabled, the finalized symbol tables will be serialized and saved to disk so they can be quickly loaded next time you debug.
Each module can cache one or more files in the index cache directory. The cache file names must be unique to a file on disk and its architecture and object name for .o files in BSD archives. This allows universal Mach-O files to support caching multiple architectures in the same module cache directory. Basing the file name on this info allows the cache file to be deleted and replaced when the file gets updated on disk. This keeps the cache from growing over time during the compile/edit/debug cycle and prevents out of space issues.
If the cache is enabled, the symbol table will be loaded from the cache the next time you debug if the module has not changed.
The cache also has settings to control the size of the cache on disk. Each time LLDB starts up with the index cache enabled, the cache will be pruned to ensure it stays within the user defined settings:
(lldb) settings set symbols.lldb-index-cache-expiration-days <days>
A value of zero will disable cache files from expiring when the cache is pruned. The default value is 7 currently.
(lldb) settings set symbols.lldb-index-cache-max-byte-size <size>
A value of zero will disable pruning based on a total byte size. The default value is zero currently.
(lldb) settings set symbols.lldb-index-cache-max-percent <percentage-of-disk-space>
A value of 100 will allow the disk to be filled to the max, a value of zero will disable percentage pruning. The default value is zero.
Reviewed By: labath, wallace
Differential Revision: https://reviews.llvm.org/D115324
Introduce a FreeBSDKernel plugin that provides the ability to read
FreeBSD kernel core dumps. The plugin utilizes libfbsdvmcore to provide
support for both "full memory dump" and minidump formats across variety
of architectures supported by FreeBSD. It provides the ability to read
kernel memory, as well as the crashed thread status with registers
on arm64, i386 and x86_64.
Differential Revision: https://reviews.llvm.org/D114911
727bd89b60 broke the UBSan decorator. The decorator compiles a custom
source code snippet that exposes UB and verifies the presence of a UBSan
symbol in the generated binary. The aforementioned commit broke both by
compiling a snippet without UB and discarding the result.
Make sure to add the PrivateFrameworks directory to the frameworks path
when using an internal SDK. This is necessary for the "on-device" test
suite.
rdar://84519268
Differential revision: https://reviews.llvm.org/D114742
This adds a new platform class, whose job is to enable running
(debugging) executables under qemu.
(For general information about qemu, I recommend reading the RFC thread
on lldb-dev
<https://lists.llvm.org/pipermail/lldb-dev/2021-October/017106.html>.)
This initial patch implements the necessary boilerplate as well as the
minimal amount of functionality needed to actually be able to do
something useful (which, in this case, means debugging a fully statically
linked executable).
The knobs necessary to emulate dynamically linked programs, as well as
to control other aspects of qemu operation (the emulated cpu, for
instance) will be added in subsequent patches. Same goes for the ability
to automatically bind to the executables of the emulated architecture.
Currently only two settings are available:
- architecture: the architecture that we should emulate
- emulator-path: the path to the emulator
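From the command line, these might be set roughly like so (a sketch that assumes the platform registers itself as qemu-user and exposes its settings under platform.plugin.qemu-user; the emulator path is a placeholder):
```
(lldb) platform select qemu-user
(lldb) settings set platform.plugin.qemu-user.architecture aarch64
(lldb) settings set platform.plugin.qemu-user.emulator-path /usr/bin/qemu-aarch64
```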
Even though this patch is relatively small, it doesn't lack subtleties
that are worth calling out explicitly:
- named sockets: qemu supports tcp and unix socket connections, both of
them in the "forward connect" mode (qemu listening, lldb connecting).
Forward TCP connections are impossible to realise in a race-free way.
This is the reason why I chose unix sockets as they have larger, more
structured names, which can guarantee that there are no collisions
between concurrent connection attempts.
- the above means that this code will not work on Windows. I don't think
that's an issue since user mode qemu does not support Windows anyway.
- Right now, I am leaving the code enabled for Windows, but maybe it
would be better to disable it (otoh, disabling it means Windows
developers can't check they don't break it)
- qemu-user also does not support macOS, so one could contemplate
disabling it there too. However, macOS does support named sockets, so
one can even run the (mock) qemu tests there, and I think it'd be a
shame to lose that.
Differential Revision: https://reviews.llvm.org/D114509
This is a preparatory commit to enable mocking of qemu startup. That
will involve running the mock server in a separate process, so there's
no need for multithreading.
Initialization is moved from the start function into the constructor
(which can then take an actual socket instead of a class), and the run
method is made public.
Depends on D114156.
Differential Revision: https://reviews.llvm.org/D114157
We were using the client socket close as a way to terminate the handler
thread. But this kind of concurrent access to the same socket is not
safe. It also complicates running the handler without a dedicated thread
(next patch).
Instead, here I add an explicit way for a packet handler to request
termination. Waiting for lldb to terminate the connection would almost
be sufficient, but in the pty test we want to keep the pty open so we
can examine its state. The ability to disconnect at an arbitrary point may
be useful for testing other aspects of lldb functionality as well.
The way this works is that now each packet handler can optionally return
a list of responses (instead of just one). One of those responses (it
only makes sense for it to be the last one) can be a special
RESPONSE_DISCONNECT object, which triggers a disconnection (via a new
TerminateConnectionException).
As the mock server now cleans up the connection whenever it disconnects,
the pty test needs to explicitly dup(2) the descriptors in order to
inspect the post-disconnect state.
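A handler using the new mechanism might look roughly like this (the responder class, packet method, and import path are illustrative; RESPONSE_DISCONNECT and the list-of-responses convention come from the description above):
```python
# Import path assumed; the mock server utilities live in the test suite.
from lldbsuite.test.gdbclientutils import MockGDBServerResponder, RESPONSE_DISCONNECT

class DisconnectingResponder(MockGDBServerResponder):
    def qfThreadInfo(self):
        # Send one normal reply, then ask the mock server to drop the
        # connection (which raises TerminateConnectionException internally).
        return ["m1", RESPONSE_DISCONNECT]
```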
Differential Revision: https://reviews.llvm.org/D114156