When a Mach-O corefile has an LC_NOTE "main bin spec" for a
standalone binary / firmware, with only a UUID and no load
address, try to locate the binary and dSYM by UUID and if
found, load it at offset 0 for the user.
Add a test case that tests a firmware/standalone corefile
with both the "kern ver str" and "main bin spec" LC_NOTEs.
<rdar://problem/68193804>
Differential Revision: https://reviews.llvm.org/D88282
Recently https://reviews.llvm.org/D88103 introduced a nice API for
converting a JSON object into C++ types, which includes nice error
messaging.
I'm using that new functionality to perform the parsing in a much more
elegant way. As a result, the code looks simpler and more maintainable,
as we no longer parse individual fields manually.
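As an illustration, a minimal fromJSON-style parser might look roughly like
this (the struct and field names are made up for the example, not the actual
LLDB types):

```
#include "llvm/Support/JSON.h"
#include <string>

// Hypothetical settings struct; fromJSON maps the fields and records a
// precise error path when a field is missing or has the wrong type.
struct TraceFileInfo {
  std::string file;
  int64_t version;
};

bool fromJSON(const llvm::json::Value &value, TraceFileInfo &info,
              llvm::json::Path path) {
  llvm::json::ObjectMapper mapper(value, path);
  return mapper && mapper.map("file", info.file) &&
         mapper.map("version", info.version);
}
```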
I updated the test cases accordingly.
Differential Revision: https://reviews.llvm.org/D88264
The minidump-sysroot test I added in commit 20f84257 compares two paths
using a string comparison. This causes the Windows buildbot to fail
because of mismatched forward slashes and backslashes. Use
os.path.normcase to normalize before comparing.
When the various methods of locating the module in GetRemoteSharedModule
fail, make sure we pass the original module spec to the bail-out call to
the provided resolver function.
Also make sure we consistently use the resolved module spec from the
various success paths.
Thanks to what appears to have been an accidentally inverted condition
(commit 85967fa applied the new condition to a path where GetModuleSpec
returns false, but should have applied it when GetModuleSpec returns
true), without this fix we only pass the original module spec in the
fallback if the original spec has no uuid (or has a uuid that somehow
matches the resolved module's uuid despite the call to GetModuleSpec
failing). This manifested as a bug when processing a minidump file with
a user-provided sysroot, since in that case the resolver call was being
applied to resolved_module_spec (despite resolution failing), which did
not have the path of its file_spec set.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D88099
Clang has some type sugar that only serves as a way to preserve the way a user
has typed a certain type in the source code. These types are currently not
unwrapped when we query the type name for a Clang type, which means that this
type sugar actually influences what formatters are picked for a certain type.
Currently if a user decides to reference a type by doing `::GlobalDecl Var = 3;`,
the type formatter for `GlobalDecl` will not be used (as the type sugar
around the type gives it the name `::GlobalDecl`). The same goes for other ways
to spell out a type, such as `auto`, etc.
With this patch most of this type sugar gets stripped when the full type name is
calculated. Typedefs are not getting desugared as that seems counterproductive.
I also don't desugar atomic types as that's technically not type sugar.
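For illustration, a small made-up example of the kind of spellings affected:

```
// Illustrative only: both variables have the underlying record type
// GlobalDecl, but the sugared spellings "::GlobalDecl" and "auto" used to
// hide that name from formatter lookup.
struct GlobalDecl { int x; };

::GlobalDecl WithElaboratedName{3};
auto WithAutoSugar = GlobalDecl{7};
```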
Reviewed By: jarin
Differential Revision: https://reviews.llvm.org/D87481
There was a little thinko which meant that, when stopped in a frame with
debug information but whose CU didn't have any global variables, we
reported:
no debug info for frame <N>
This patch fixes that error message to say the intended:
no global variables in current compile unit
<rdar://problem/69086361>
This is the first in a series of patches that will add a new processor trace plug-in to LLDB.
The idea for this first patch is to add the plug-in interface with simple commands for the trace files that can "load" and "dump" the trace information. We can test the functionality and ensure people are happy with the way things are done and how things are organized before moving on to adding more functionality.
Processor trace information can be viewed in a few different ways:
- post mortem where a trace is saved off that can be viewed later in the debugger
- gathered while a process is running and allow the user to step back in time (with no variables, memory or registers) to see how each thread arrived at where it is currently stopped.
This patch starts with the first approach: loading a trace file after the fact. The idea is that we will use a JSON file to load the trace information (a rough sketch follows the list below). JSON allows us to specify information about the trace like:
- plug-in name in LLDB
- path to trace file
- shared library load information so we can re-create a target and symbolicate the information in the trace
- any other info that the trace plug-in will need to be able to successfully parse the trace information
- cpu type
- version info
- ???
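As a rough sketch of what such a file could contain (all field names here are
hypothetical placeholders, not a final schema):

```
{
  "trace": {
    "plugin": "intel-pt",
    "traceFile": "/path/to/thread0.trace",
    "cpuInfo": { "family": 6, "model": 85 }
  },
  "processes": [
    {
      "pid": 1234,
      "triple": "x86_64-apple-macosx",
      "modules": [
        { "file": "/path/to/a.out", "loadAddress": "0x100000000" }
      ]
    }
  ]
}
```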
A new "trace" command was added at the top level of the LLDB commmands:
- "trace load"
- "trace dump"
I did this because if we load trace information we don't need to have a process and we might end up creating a new target for the trace information that will become active. If anyone has any input on where this would be better suited, please let me know. Walter Erquinigo will end up filling in the Intel PT specific plug-in so that it works and is tested once we can agree that the direction of this patch is the correct one, so please feel free to chime in with ideas on comments!
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D85705
When we fixed ImportDeclContext(...) in D71378 to make sure we complete each
FieldDecl of a RecordDecl when we are importing the definition, we missed the
case where a FieldDecl was an ArrayType whose ElementType is a record.
This fix was motivated by a codegen crash during LLDB expression parsing. Since
we were not importing the definition, we were crashing during layout, which
requires all the records to be defined.
Differential Revision: https://reviews.llvm.org/D86660
Previously when <addr> in "memory region <addr>" didn't
parse correctly, we'd print an error then also ask lldb-server
for a region containing LLDB_INVALID_ADDRESS.
(lldb) memory region not_an_address
error: invalid address argument "not_an_address"...
error: Server returned invalid range
Only send the command to lldb-server if the address
parsed correctly.
(lldb) memory region not_an_address
error: invalid address argument "not_an_address"...
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D87694
qemu refers to the "fp" and "lr" registers by their generic names
(x29/x30). This mismatch manifested itself as not being able to unwind
or display values of some local variables.
This patch adds a way to fetch breakpoint metadata as serialized
`StructuredData` (JSON). This can be used by IDEs to update
their UI when a breakpoint is set or modified from the console.
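For illustration, a hypothetical use from the SB API might look roughly like
this (assuming the serialization surfaces as something like
SBBreakpoint::SerializeToStructuredData; the exact names are my assumption,
not necessarily the API added here):

```
#include "lldb/API/LLDB.h"
#include <cstdio>

// Create a breakpoint, serialize its metadata, and print it as JSON text.
void PrintBreakpointJSON(lldb::SBTarget &target) {
  lldb::SBBreakpoint bp = target.BreakpointCreateByName("main");
  lldb::SBStructuredData data = bp.SerializeToStructuredData();
  lldb::SBStream stream;
  data.GetAsJSON(stream); // Render the breakpoint metadata as JSON.
  std::printf("%s\n", stream.GetData());
}
```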
rdar://11013798
Differential Revision: https://reviews.llvm.org/D87491
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
They are currently not being set correctly for the case of multi-config generators like Xcode and VS. There's also a typo in one of the CMake files.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D87466
This adds support for substituting std::pair instantiations when
import-std-module is enabled.
With the fixes in parent revisions we can currently substitute a single pair
(however, a result that returns a second pair currently causes LLDB to crash
while importing the second template instantiation).
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D85141
The ASTImporter has an `Imported(From, To)` callback that notifies subclasses
that a declaration has been imported in some way. LLDB uses this in the
`CompleteTagDeclsScope` to see which records have been imported into the scratch
context. If the record was declared inside the expression, then the
`CompleteTagDeclsScope` will forcibly import the full definition of that record
to the scratch context so that the expression AST can safely be disposed later
(otherwise we might end up going back to the deleted AST to complete the
minimally imported record). The way this is implemented is that there is a list
of decls that need to be imported (`m_decls_to_complete`) and we keep completing
the declarations inside that list until the list is empty. Every `To` Decl we
get via the `Imported` callback will be added to the list of Decls to be
completed.
There are some situations where the ASTImporter will actually give us two
`Imported` calls with the same `To` Decl. One way where this happens is if the
ASTImporter decides to merge an imported definition into an already imported
one. Another way is that the ASTImporter just happens to get two calls to
`ASTImporter::Import` for the same Decl. This for example happens when importing
the DeclContext of a Decl requires importing the Decl itself, such as when
importing a RecordDecl that was declared inside a function.
The bug addressed in this patch is that when we end up getting two `Imported`
calls for the same `To` Decl, then we would crash in the
`CompleteTagDeclsScope`. That's because the first time we complete the Decl we
remove the Origin tracking information (which maps the Decl back to where it
came from). The next time we try to complete the same `To` Decl the Origin
tracking information is gone and we hit the `to_context_md->getOrigin(decl).ctx
== m_src_ctx` assert (`getOrigin(decl).ctx` is a nullptr the second time as the
Origin was deleted).
This is actually a regression coming from D72495. Before D72495
`m_decls_to_complete` was actually a set so every declaration in there could
only be queued once to be completed. The set was changed to a vector to make the
iteration over it deterministic, but that also means we now potentially
end up trying to complete a Decl twice.
This patch essentially just reverts D72495 and makes the `CompleteTagDeclsScope`
use a SetVector for the list of declarations to be completed. The SetVector
should filter out the duplicates (as the original `set` did) and also ensure that
the completion order is deterministic. I actually couldn't find any way to cause
LLDB to reproduce this bug by merging declarations (this would require that we
for example declare two namespaces in a non-top-level expression which isn't
possible). But the bug reproduces very easily by just declaring a class in an
expression, so that's what the test is doing.
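A minimal sketch of the SetVector-based work list described above (simplified
and with made-up names, not the exact LLDB code):

```
#include "clang/AST/Decl.h"
#include "llvm/ADT/SetVector.h"

// Sketch of a CompleteTagDeclsScope-style work list: SetVector keeps
// insertion order (deterministic iteration) while rejecting duplicates, so a
// Decl reported twice via Imported() is only queued and completed once.
class CompleteDeclsScopeSketch {
  llvm::SetVector<clang::NamedDecl *> m_decls_to_complete;

public:
  void Imported(clang::NamedDecl *to) {
    // insert() returns false and does nothing if 'to' is already queued.
    m_decls_to_complete.insert(to);
  }

  void CompleteAll() {
    while (!m_decls_to_complete.empty()) {
      clang::NamedDecl *decl = m_decls_to_complete.pop_back_val();
      // ... complete 'decl' and drop its origin-tracking info exactly once ...
      (void)decl;
    }
  }
};
```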
Reviewed By: shafik
Differential Revision: https://reviews.llvm.org/D85648
SemaSourceWithPriorities is a special SemaSource that wraps our normal LLDB
ExternalASTSource and the ASTReader (which is used for the C++ module loading).
It's only active when the `import-std-module` setting is turned on.
The `CompleteType` function there in `SemaSourceWithPriorities` is looping over
all ExternalASTSources and asks each to complete the type. However, that loop is
in another loop that keeps doing that until the type is complete. If that
function is ever called on a type that is a forward decl then that causes LLDB
to go into an infinite loop.
I remember I added that second loop and the comment because I thought I saw a
similar pattern in some other Clang code, but after some grepping I can't find
that code anywhere and it seems the rest of the code base only calls
CompleteType once (it would also be kinda silly to call it multiple
times). So it seems that's just a silly mistake.
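A simplified sketch of the change, assuming the code is roughly a loop over
the external sources (abbreviated, not the verbatim LLDB implementation):

```
#include "clang/AST/Decl.h"
#include "clang/Sema/ExternalSemaSource.h"
#include "llvm/ADT/ArrayRef.h"

// Before (sketch): retrying until the type is complete spins forever when no
// source can actually provide a definition for the forward declaration:
//
//   while (!Tag->getDefinition())
//     for (clang::ExternalSemaSource *Source : Sources)
//       Source->CompleteType(Tag);
//
// After (sketch): ask each source once and stop as soon as one succeeds.
static void CompleteTypeOnce(llvm::ArrayRef<clang::ExternalSemaSource *> Sources,
                             clang::TagDecl *Tag) {
  for (clang::ExternalSemaSource *Source : Sources) {
    Source->CompleteType(Tag);
    if (Tag->getDefinition())
      return;
  }
}
```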
This is implicitly tested by importing `std::pair`, but I also added a simpler
dedicated test that creates a dummy libc++ module with some forward declarations
and then imports them into the scratch AST context. At some point the
ASTImporter will check if one of the forward decls could be completed by the
ExternalASTSource, which will cause the `SemaSourceWithPriorities` to go into an
infinite loop once it receives the `CompleteType` call.
Reviewed By: shafik
Differential Revision: https://reviews.llvm.org/D87289
The targetHasSVE helper function was added to test for the availability of SVE
support on the connected platform. We now intend to use this function in other
test cases, so I am moving it to a generic location in lldbtest.py to allow
usage by other upcoming test cases.
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D86872
TestCPP11EnumTypes is one of the most expensive tests on my system and takes
around 35 seconds to run. A relatively large amount of that time is actually
doing CPU intensive work it seems (and not waiting on timeouts like other
slow tests).
The main issue is that this test repeatedly compiles the same source files
with different compiler defines. The test is also including standard library
headers, so it will also build all system modules with the gmodules debug
info variant. This leads to the problem that this test ends up compiling all
system Clang modules 8 times (one for each subtest with a unique define). As
the system modules are quite large, this test spends most
of its runtime just recompiling all system modules on macOS.
There is also the small issue that this test is starting and stopping
the test process a few hundred times.
This rewrites the test to instead just use a macro to instantiate all the
enum types in a single source and uses global variables to test the values
(which means there is no more need to continue/stop or even start a process).
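A rough sketch of the macro-based approach (hypothetical names, not the actual
test source):

```
#include <cstdint>
#include <limits>

// Each instantiation defines an enum with the given underlying type plus two
// globals holding its extreme enumerators, so the test can read the values
// as global variables without launching (or repeatedly stopping) a process.
#define DEFINE_ENUM(NAME, TYPE)                                                \
  enum class NAME : TYPE {                                                     \
    Min = std::numeric_limits<TYPE>::min(),                                    \
    Max = std::numeric_limits<TYPE>::max()                                     \
  };                                                                           \
  NAME NAME##_min = NAME::Min;                                                 \
  NAME NAME##_max = NAME::Max;

DEFINE_ENUM(EnumInt8, int8_t)
DEFINE_ENUM(EnumUInt8, uint8_t)
DEFINE_ENUM(EnumInt64, int64_t)

int main() { return 0; }
```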
I kept running all the debug info variants (even though it doesn't seem really
relevant) to keep this as NFC as possible.
This reduced the test runtime to around 1.5 seconds on my system (or in relative
numbers, the runtime of this test decreases by 95%).
This is one of the most expensive tests and runs for nearly half a minute on
my machine. Besides this test just doing a lot of work by iterating 15k times on
one ValueObject (which seems to be the point), it also runs this for every
debug info variant, which doesn't seem relevant to just iterating a ValueObject.
This marks it as no_debug_info_test to only run one debug info variation
and cut down the runtime to around 7 seconds on my machine.
Caused by D86662. The fix is to only check some fields when the expect_debug_info_size flag is true. For some reason this was not failing on a local Linux machine.
This updates the errors reported by expect()
to something like:
```
Ran command:
"help"
Got output:
Debugger commands:
<...>
Expecting start string: "Debugger commands:" (was found)
Expecting end string: "foo" (was not found)
```
(see added tests for more examples)
This shows the user exactly what was run,
which checks passed and which failed, along with
whether each check was supposed to pass
(including what regex patterns matched).
These lines are also output to the test
trace file, whether the test passes or not.
Note that expect() will still fail at the first failed
check, in line with previous behaviour.
Also I have flipped the wording of the assert
message functions (.*_MSG) to describe failures
not successes. This makes more sense as they are
only shown on assert failures.
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D86792
This patch adds the ability to use a custom interpreter with the
`platform shell` command. If the user sets the `-s|--shell` option
with the path to a binary, lldb passes it down to the platform's
`RunShellProcess` method and sets it as the shell to use in
`ProcessLaunchInfo` to run commands.
Note that not all the Platforms support running shell commands with
custom interpreters (i.e. RemoteGDBServer is only expected to use the
default shell).
This patch also includes some refactoring and cleanup, like swapping
CString for StringRef when possible and updating `SBPlatformShellCommand`
with new methods and a new constructor.
rdar://67759256
Differential Revision: https://reviews.llvm.org/D86667
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
The Symbol Status in the modules view is simplified so that the status message is only displayed when the module has debug info and its size is non-zero. The symbol status message is renamed to debug info size, and flag messages like "Symbols not found" and "Symbols loaded" are deleted.
Differential Revision: https://reviews.llvm.org/D86662
1. Added a dedicated completion to class `CommandObjectTypeFormatterDelete`
which can be used by these commands: `type filter/format/summary/synthetic delete`;
2. Added a related test case.
Reviewed By: teemperor
Differential Revision: https://reviews.llvm.org/D84142
TestCompletion is randomly failing on some bots. The error message however states
that the computed completions actually do contain the expected pid we're
looking for, so there shouldn't be any test failure.
The reason for that turns out to be that complete_from_to is actually used
for testing two different features. It can be used for testing what the
common prefix for the list of completions is and *also* for checking all the
possible completions that are returned for a command. Which one of the two
things should be checked can't be specified via a parameter to the function, but
is instead guessed by the test method based on the results that were
returned. If there is a common prefix in all completions, then that prefix
is searched, and otherwise all completions are searched.
For TestCompletion's pid test, this behaviour leads to strange test failures.
If all the pids that our test LLDB can see have a common prefix (e.g., it
can only see pids [123, 122, 10004, 10000] -> common prefix '1'), then
complete_from_to checks that the common prefix contains our pid, which
always fails ('1' doesn't contain '123' or any other valid pid). If there
isn't a common prefix (e.g., pids are [123, 122, 10004, 777]), then
complete_from_to will check the list of completions instead, which works correctly.
This patch is fixing this by adding a simple check method that doesn't
have this behaviour and is simply searching the returned list of completions.
This should get the bots green while I'm working on a proper fix that fixes
complete_from_to.
The Length, AbbrOffset and Values fields of the debug_info section are
optional. This patch helps remove them and simplify test cases.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D86857
LIT uses a model where the test suite is configurable through a
lit.site.cfg file. Most of the time we use the lit.site.cfg with values
that match the current build configuration, generated by CMake.
Nothing prevents you from running the test suite with a different
configuration, either by overriding some of these values from the
command line, or by passing a different lit.site.cfg.
The latter is currently tedious. Many configuration values are optional
but they still need to be set because lit.cfg.py is accessing them
directly. This patch changes the code to use getattr to return the
attribute if it exists. This makes it possible to specify a minimal
lit.site.cfg with only the mandatory/desired configuration values.
Differential revision: https://reviews.llvm.org/D86821
This patch removes the rather confusing LLDB_LIB_DIR and LLDB_IMPLIB_DIR
environment variables. They are confusing because LLDB_LIB_DIR would
point to the bin subdirectory in the build root while LLDB_IMPLIB_DIR
would point to the lib subdirectory. The reason for this was
LLDB.framework, which gets built under bin.
This patch replaces their uses with configuration.lldb_framework_path
and configuration.lldb_libs_dir respectively.
Differential revision: https://reviews.llvm.org/D86817
TestStandardUnwind uses the full absolute path to a set of C/C++ files as the test case name, which in turn is used in the name of a log file. When the source file is long, and the directory where log files are stored is also long, this causes an OSError because the log filename is too long.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D86752
This test appears to have never worked on Linux but it seems none of the current
bots ever ran this test as it required enabling compiler-rt (otherwise it
would have just been skipped).
This just copies over the XFAIL decorators that are already on all other sanitizer
tests.
TestQueues is curiously failing for me as my queue for QOS_CLASS_UNSPECIFIED
is named "Utility" and not "User Initiated" or "Default". While debugging, this
I noticed that this test isn't actually using this API right from what I understand. The API documentation
for `dispatch_get_global_queue` specifies for the parameter: "You may specify the value
QOS_CLASS_USER_INTERACTIVE, QOS_CLASS_USER_INITIATED, QOS_CLASS_UTILITY, or QOS_CLASS_BACKGROUND."
QOS_CLASS_UNSPECIFIED isn't listed as one of the supported values. swift-corelibs-libdispatch
even checks for this value and returns a DISPATCH_BAD_INPUT. The
libdispatch shipped on macOS also seems to check for QOS_CLASS_UNSPECIFIED and seems to
instead cause a "client crash", but somehow this doesn't trigger in this test and instead we just
get whatever queue happens to be returned.
This patch just removes that part of the test as it appears the code is just incorrect.
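For reference, a minimal sketch of what I understand to be a supported use of
the API (as opposed to passing QOS_CLASS_UNSPECIFIED):

```
#include <dispatch/dispatch.h>
#include <cstdio>

int main() {
  // Use one of the documented QOS classes; QOS_CLASS_UNSPECIFIED is not a
  // supported argument to dispatch_get_global_queue.
  dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
  std::printf("queue label: %s\n", dispatch_queue_get_label(queue));
  return 0;
}
```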
Reviewed By: jasonmolenda
Differential Revision: https://reviews.llvm.org/D86211
psutil isn't really a dependency of the test suite, so it shouldn't be
unconditionally imported here. Instead just check for the process name
by looking for the "a.out" string to get the bots green again.
Breakpad will always have a UUID for binaries when it creates minidump files. If an ELF file has a GNU build ID, it will use that. If it doesn't, it will create one by hashing up to the first 4096 bytes of the .text section. LLDB was not able to load these binaries, even when we had the right binary, because the UUID didn't match. LLDB will use the GNU build ID first as the main UUID for a binary and fall back to an 8-byte CRC if a binary doesn't have one. With this fix, we will check for the Breakpad hash or the Facebook hash (a modified version of the Breakpad hash that collides a bit less) and accept binaries when these hashes match.
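A rough sketch of how such a text-section hash could be computed, assuming the
Breakpad scheme XOR-folds 16-byte chunks (my reading of Breakpad's ELF file-ID
code; an approximation, not the exact LLDB/Breakpad implementation):

```
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>

// Fold the first (up to) 4096 bytes of .text into a 16-byte identifier by
// XOR-ing consecutive 16-byte chunks, giving a stable pseudo-UUID for ELF
// files that lack a GNU build ID.
std::array<uint8_t, 16> BreakpadStyleTextHash(const uint8_t *text, size_t size) {
  std::array<uint8_t, 16> id{};
  const size_t length = std::min<size_t>(size, 4096);
  for (size_t offset = 0; offset < length; offset += 16)
    for (size_t i = 0; i < 16 && offset + i < length; ++i)
      id[i] ^= text[offset + i];
  return id;
}
```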
Differential Revision: https://reviews.llvm.org/D86261
1. Added a new common completion TypeCategoryNames to provide a list of category names for completion;
2. Applied the completion to these commands: type category delete/enable/disable/list/define;
3. Added a related test case;
4. Bound the completion to the arguments of the type 'eArgTypeName'.
Reviewed By: teemperor, JDevlieghere
Differential Revision: https://reviews.llvm.org/D84124