Rather than relying on CMake to substitute the full path to the lldb
source root, use the value set in config.lldb_src_root. This makes it
slightly easier to write a custom lit.site.cfg.py.
The destructor must be defined in the implementation class so that it
can be called, as Vedant Kumar pointed out in:
'''
What were your thoughts, re:
+class Trace : public PluginInterface {
+public:
+ ~Trace() override = default;
Does this need to be `virtual ~Trace() = ...`?
Otherwise, when a std::shared_ptr<Trace> is destroyed, the
destructor for the derived TraceIntelPT instance won't run.
'''
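For illustration, here is a minimal standalone sketch (with made-up TraceBase/TraceDerived names, not the actual LLDB classes) of why the base destructor has to be virtual when instances are destroyed through a pointer to the base:
```
#include <cstdio>
#include <memory>

// Hypothetical stand-ins for Trace / TraceIntelPT.
struct TraceBase {
  virtual ~TraceBase() = default; // without "virtual", deleting through a
                                  // TraceBase* would skip ~TraceDerived()
};

struct TraceDerived : TraceBase {
  ~TraceDerived() override { std::puts("derived cleanup ran"); }
};

int main() {
  std::unique_ptr<TraceBase> T = std::make_unique<TraceDerived>();
  // When T goes out of scope, ~TraceDerived() runs only because
  // ~TraceBase() is virtual.
}
```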
The `macos-setup-codesign.sh` script has been in place for over two years. If there are no known issues, it's a good time to drop the manual steps from the docs.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D88257
This reverts commit f775fe5964.
I fixed a return type error in the original patch that was causing a test failure.
Also added a REQUIRES: python to the shell test so we'll skip this for
people who build lldb w/o Python.
Also added another test for the error printing.
Configuring the variable in CMake isn't enough, because the build mode
can't be resolved until execution time, which requires the build mode to
be substituted by lit.
Add the flag in ProcessMachCore::DoLoadCore that stops additional
searches for the binaries when we have an LC_NOTE identifying the
firmware/standalone binary as the correct one & we have loaded it
successfully.
This enables support for writing LLDB documentation in markdown in
addition to reStructuredText. We already had documentation written in
markdown (StructuredDataPlugins and DarwinLog) which will now also be
available on the website.
With the recent patches to the ASTImporter that improve template type importing
(D87444), most of the import-std-module tests can now finally import the
type of the STL container they are testing. This patch removes most of the casts
that were added to simplify types to something the ASTImporter can import
(for example, std::vector<int>::size_type was cast to `size_t` until now).
This also adds the missing tests that require referencing the container type (for
example, simply printing the whole container), as there we couldn't use a casting
workaround.
The only casts that remain are in the forward_list tests that reference
the iterator and in the stack test. Both tests still fail to import the
respective container type correctly (or crash while trying to import it).
Usually when we enter a SWIG wrapper function from Python, SWIG automatically
adds a `Py_BEGIN_ALLOW_THREADS`/`Py_END_ALLOW_THREADS` around the call to the SB
API C++ function. This will ensure that Python's GIL is released when we enter
LLDB and locked again when we return to the wrapper code.
D51569 changed this behaviour but only for the generated `__str__` wrappers. The
added `nothreadallow` disables the injection of the GIL release/re-acquire code,
so the GIL is kept locked when entering LLDB and is expected to still be locked
when returning from the LLDB implementation. The main reason for this was that,
back when D51569 landed, the wrapper itself created a Python string. These days
it just creates a std::string, and SWIG itself takes care of acquiring the GIL
and creating the Python string from the std::string, so that workaround isn't
necessary anymore.
This patch just removes `nothreadallow` so that our `__str__` functions now
behave like all other wrapper functions in that they release the GIL when
calling into the SB API implementation.
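A rough sketch of the shape of the generated wrapper after this change (simplified, with made-up names; the real code is emitted by SWIG):
```
#include <Python.h>
#include <string>

// Stand-in for the C++ side of a __str__ implementation.
static std::string DescribeObject() { return "SBTarget: ..."; }

// Hypothetical, simplified __str__ wrapper: the GIL is released around the
// C++ call and re-acquired before building the Python string.
static PyObject *wrap___str__(PyObject * /*self*/, PyObject * /*args*/) {
  std::string Result;
  Py_BEGIN_ALLOW_THREADS            // release the GIL before the C++ call
  Result = DescribeObject();        // C++ work runs without holding the GIL
  Py_END_ALLOW_THREADS              // take the GIL back before touching Python
  return PyUnicode_FromStringAndSize(Result.data(),
                                     static_cast<Py_ssize_t>(Result.size()));
}
```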
The motivation here is actually to work around another potential bug in LLDB.
When one calls into the LLDB SB API while holding the GIL and that call causes
LLDB to interpret some Python script via `ScriptInterpreterPython`, then the GIL
will be unlocked when the control flow returns from the SB API. In the case of
the `__str__` wrapper, this meant that the next call to a Python function
requiring the GIL would fail (as SWIG does not try to reacquire the GIL, since
it isn't aware that LLDB released it).
The reason for this unexpected GIL release seems to be a workaround for recent
Python versions:
```
// The only case we should go further and acquire the GIL: it is unlocked.
if (PyGILState_Check())
return;
```
The early exit here causes `InitializePythonRAII::m_was_already_initialized` to
always be false, which in turn means that `InitializePythonRAII`'s destructor
always directly unlocks the GIL via `PyEval_SaveThread`. I'm investigating how to
properly fix this bug in a follow-up patch, but for now this straightforward
patch seems to be enough to unblock my other patches (and it also has the
benefit of removing this workaround).
The test for this is just a simple test for `std::deque` which has a synthetic
child provider implemented as a Python script. Inspecting the deque object will
cause `expect_expr` to create a string error message by calling
`str(deque_object)`. Printing the ValueObject causes the Python script for the
synthetic children to execute which then triggers the bug described above where
the GIL ends up being unlocked.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D88302
RegInfoBasedABI::GetRegisterInfoByName was failing because mips/mips64 ABIs
don't use ConstString in their register info array.
Reviewed By: #lldb, teemperor
Differential Revision: https://reviews.llvm.org/D88375
When a Mach-O corefile has an LC_NOTE "main bin spec" for a
standalone binary / firmware, with only a UUID and no load
address, try to locate the binary and dSYM by UUID and if
found, load it at offset 0 for the user.
Add a test case that tests a firmware/standalone corefile
with both the "kern ver str" and "main bin spec" LC_NOTEs.
<rdar://problem/68193804>
Differential Revision: https://reviews.llvm.org/D88282
Every call to the protected SBAddress constructor and the SetAddress
method takes the address of a valid object which means we might as well
pass it as a const reference instead of a pointer and drop the null
check.
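The change boils down to the following pattern (illustrative names, not the exact SBAddress declarations):
```
class Address {};

// Before: a pointer parameter forces a null check even though every caller
// passes the address of a valid object.
void SetAddressOld(const Address *addr) {
  if (!addr)
    return; // defensive check no caller actually needs
  // ... use *addr ...
}

// After: a const reference documents that a valid object is required and
// drops the dead null check.
void SetAddressNew(const Address &addr) {
  // ... use addr directly ...
}
```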
Differential revision: https://reviews.llvm.org/D88249
Recently https://reviews.llvm.org/D88103 introduced a nice API for
converting a JSON object into C++ types, which includes nice error
messaging.
I'm using that new functionality to perform the parsing in a much more
elegant way. As a result, the code looks simpler and more maintainable,
as we no longer parse individual fields manually.
I updated the test cases accordingly.
Differential Revision: https://reviews.llvm.org/D88264
The OS version field is generally not very helpful for non-Darwin
targets. On Linux, it identifies the kernel version, which moves out of
sync with userspace. On Windows, this field actually ends up
corresponding to the Visual Studio toolset version instead of the OS
version. Consider non-Darwin targets without an OS version to be fully
specified.
Differential Revision: https://reviews.llvm.org/D88181
Reviewed By: Jonas Devlieghere, Dave Lee
Translating between JSON objects and C++ structures is common.
From experience in clangd, fromJSON/ObjectMapper work well and save a lot of
code, but aren't adopted elsewhere at least partly due to total lack of error
reporting beyond "ok"/"bad".
The recently-added error model should be rich enough for most applications.
It requires tracking the path within the root object and reporting local
errors at appropriate places.
To do this, we exploit the fact that the call graph of recursive
parse functions mirrors the structure of the JSON itself.
The current path is represented as a linked list of segments, each of which is
on the stack as a parameter. Concretely, fromJSON now looks like:
bool fromJSON(const Value&, T&, Path);
Beyond the signature change, this is reasonably unobtrusive: building
the path segments is mostly handled by ObjectMapper and the vector<T> fromJSON.
However the root caller of fromJSON must now create a Root object to
store the errors, which is a little clunky.
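For example, a parser written against the new interface might look like this (a sketch using a made-up Point struct and "point" root label, not code from this patch):
```
#include "llvm/Support/Error.h"
#include "llvm/Support/JSON.h"
#include "llvm/Support/raw_ostream.h"

struct Point { int X = 0; int Y = 0; };

// The recursive parse function now threads a Path through, so errors can be
// reported against the exact location in the object tree.
bool fromJSON(const llvm::json::Value &E, Point &Out, llvm::json::Path P) {
  llvm::json::ObjectMapper O(E, P); // reports "expected object" on mismatch
  return O && O.map("x", Out.X) && O.map("y", Out.Y);
}

void parsePoint(llvm::StringRef Text) {
  auto Parsed = llvm::json::parse(Text);
  if (!Parsed) {
    llvm::consumeError(Parsed.takeError()); // the JSON itself was malformed
    return;
  }
  Point P;
  llvm::json::Path::Root Root("point"); // the root the caller must now create
  if (!fromJSON(*Parsed, P, Root))
    llvm::errs() << llvm::toString(Root.getError()) << "\n";
    // e.g. something like "expected integer at point.x"
}
```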
I've added a high-level parse<T>(StringRef) -> Expected<T>, but I don't think
it's general enough to be the primary interface (at least, it's not usable in
clangd).
All existing users (mostly just clangd) are updated in this patch;
making this change backwards-compatible would be a bit hairy.
Differential Revision: https://reviews.llvm.org/D88103
The minidump-sysroot test I added in commit 20f84257 compares two paths
using a string comparison. This causes the Windows buildbot to fail
because of mismatched forward slashes and backslashes. Use
os.path.normcase to normalize before comparing.
Add an optimal thread strategy to execute a specified number of tasks.
This strategy should prevent us from creating too many threads if we
occasionally have an unexpectedly small number of tasks.
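The core idea is simply to cap the thread count by the task count; a hedged sketch of that idea (illustrative helper, not the actual LLVM ThreadPoolStrategy API):
```
#include <algorithm>
#include <thread>

// Never spawn more threads than there are tasks, so a tiny batch of work
// doesn't pay for a full hardware_concurrency-sized pool.
unsigned threadsFor(unsigned NumTasks) {
  unsigned HW = std::max(1u, std::thread::hardware_concurrency());
  return std::min(HW, std::max(1u, NumTasks));
}
```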
Differential Revision: https://reviews.llvm.org/D87765
When the various methods of locating the module in GetRemoteSharedModule
fail, make sure we pass the original module spec to the bail-out call to
the provided resolver function.
Also make sure we consistently use the resolved module spec from the
various success paths.
Thanks to what appears to have been an accidentally inverted condition
(commit 85967fa applied the new condition to a path where GetModuleSpec
returns false, but should have applied it when GetModuleSpec returns
true), without this fix we only pass the original module spec in the
fallback if the original spec has no uuid (or has a uuid that somehow
matches the resolved module's uuid despite the call to GetModuleSpec
failing). This manifested as a bug when processing a minidump file with
a user-provided sysroot, since in that case the resolver call was being
applied to resolved_module_spec (despite resolution failing), which did
not have the path of its file_spec set.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D88099
Clang has some type sugar that only serves as a way to preserve the way a user
has typed a certain type in the source code. These types are currently not
unwrapped when we query the type name for a Clang type, which means that this
type sugar actually influences what formatters are picked for a certain type.
Currently, if a user decides to reference a type by writing `::GlobalDecl Var = 3;`,
the type formatter for `GlobalDecl` will not be used (as the type sugar
around the type gives it the name `::GlobalDecl`). The same goes for other ways
to spell out a type, such as `auto`.
With this patch most of this type sugar gets stripped when the full type name is
calculated. Typedefs are not getting desugared as that seems counterproductive.
I also don't desugar atomic types as that's technically not type sugar.
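To make the scenario concrete, here is the kind of source this is about (an illustrative snippet; all three variables have the same canonical type, but the differing spellings used to yield different type names in LLDB):
```
struct GlobalDecl { int Value; };

GlobalDecl Plain{3};        // type name computed by LLDB: "GlobalDecl"
::GlobalDecl Elaborated{3}; // previously named "::GlobalDecl", breaking
                            // formatter matching; now also "GlobalDecl"
auto Deduced = Plain;       // "auto" sugar is likewise stripped now
```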
Reviewed By: jarin
Differential Revision: https://reviews.llvm.org/D87481
There was a little thinko which meant that when stopped in a frame with
debug information but whose CU didn't have any global variables, we
reported:
no debug info for frame <N>
This patch fixes that error message to say the intended:
no global variables in current compile unit
<rdar://problem/69086361>
This is the first in a series of patches that will add a new processor trace plug-in to LLDB.
The idea for this first patch is to add the plug-in interface with simple commands for the trace files that can "load" and "dump" the trace information. We can test the functionality and ensure people are happy with the way things are done and how things are organized before moving on to adding more functionality.
Processor trace information can be viewed in a few different ways:
- post mortem where a trace is saved off that can be viewed later in the debugger
- gathered while a process is running, allowing the user to step back in time (with no variables, memory or registers) to see how each thread arrived at where it is currently stopped.
This patch attempts to start with the first solution of loading a trace file after the fact. The idea is that we will use a JSON file to load the trace information. JSON allows us to specify information about the trace like:
- plug-in name in LLDB
- path to trace file
- shared library load information so we can re-create a target and symbolicate the information in the trace
- any other info that the trace plug-in will need to be able to successfully parse the trace information
- cpu type
- version info
- ???
A new "trace" command was added at the top level of the LLDB commmands:
- "trace load"
- "trace dump"
I did this because if we load trace information we don't need to have a process, and we might end up creating a new target for the trace information that will become active. If anyone has any input on where this would be better suited, please let me know. Walter Erquinigo will end up filling in the Intel PT specific plug-in so that it works and is tested once we can agree that the direction of this patch is the correct one, so please feel free to chime in with ideas or comments!
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D85705
A few fixes while trying to figure out why tests are being skipped for arsenm:
- We check `$compiler -v`, but `-v` is `--verbose`, not `--version`. Use the long flag name.
- We check all lines matching `version ...`, but we should exit early for the first version string we see (which should be the main one). I'm not sure if this is the issue, but perhaps this is causing some users to skip some tests if another "version ..." is showing up later.
- Having `\.` in a Python string triggers pylint warnings, because it should be escaped as a regex string, e.g. `r'\.'`. However, `.` in a character class does not need to be escaped, as it matches only a literal `.` in that context.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D88051
When we fixed ImportDeclContext(...) in D71378 to make sure we complete each
FieldDecl of a RecordDecl when we are importing the definition, we missed the
case where a FieldDecl was an ArrayType whose ElementType is a record.
This fix was motivated by a codegen crash during LLDB expression parsing. Since
we were not importing the definition, we were crashing during layout, which
requires all the records to be defined.
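A minimal example of the shape of code that used to trigger this (illustrative types): importing Outer's definition must also complete Inner, even though the field's type is an array of Inner rather than Inner itself.
```
struct Inner { int X; };

struct Outer {
  Inner Elems[4]; // FieldDecl of ArrayType whose ElementType is a record;
                  // Inner must be completed for Outer's layout to succeed
};
```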
Differential Revision: https://reviews.llvm.org/D86660
Update some of the examples in the help string for `breakpoint command add`.
Python breakpoint commands have different output than what's shown in the help string.
Notes:
* Removed an example containing an inner function, as it seems more about a Python technique than about `command script add`
* Updated `print x` to `print(x)` to be Python 2/3 agnostic
Differential Revision: https://reviews.llvm.org/D87807