This seems to fail on Ubuntu 18.04.5 with Clang 9 due to:
Error output:
error: Couldn't lookup symbols:
std::__1::default_delete<int>::operator()(int) const
This test seems to randomly fail on Linux machines. It's only one part of the
test failing randomly, so let's just skip it instead of reverting the whole
patch (again).
Matt's change to the register allocator in 89baeaef2f changed where we
end up after the `finish`. Before we'd end up on line 4.
* thread #1, queue = 'com.apple.main-thread', stop reason = step out
Return value: (int) $0 = 1
frame #0: 0x0000000100003f7d a.out`main(argc=1, argv=0x00007ffeefbff630) at main.c:4:3
1 extern int func();
2
3 int main(int argc, char **argv) {
-> 4 func(); // Break here
5 func(); // Second
6 return 0;
7 }
Now, we end up on line 5.
* thread #1, queue = 'com.apple.main-thread', stop reason = step out
Return value: (int) $0 = 1
frame #0: 0x0000000100003f8d a.out`main(argc=1, argv=0x00007ffeefbff630) at main.c:5:3
2
3 int main(int argc, char **argv) {
4 func(); // Break here
-> 5 func(); // Second
6 return 0;
7 }
Given that this is not expected to be stable, I've made the test a
bit more lenient to accept both scenarios.
Also, use the StructuredData::Dump method to print the StructuredData if there
is no plugin, rather than just returning an error.
Differential Revision: https://reviews.llvm.org/D88266
Per the DAP spec for SetBreakpoints [1], the way to clear breakpoints is: `To clear all breakpoint for a source, specify an empty array.`
However, leaving the breakpoints field unset is also a well formed request (note the `breakpoints?:` in the `SetBreakpointsArguments` definition). If it's unset, we have a couple choices:
1. Crash (current behavior)
2. Clear breakpoints
3. Return an error response that the breakpoints field is missing.
I propose we do (2) instead of (1), and treat an unset breakpoints field the same as an empty breakpoints field.
[1] https://microsoft.github.io/debug-adapter-protocol/specification#Requests_SetBreakpoints
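To illustrate the proposal, here is a rough sketch (in Python dict form; the source path and field selection are illustrative, not taken from the actual test suite) of the two requests that would now behave identically:
```
# Purely illustrative sketch; the source path is a placeholder.
# Per the spec, an empty "breakpoints" array clears all breakpoints for the source:
clear_with_empty_array = {
    "command": "setBreakpoints",
    "arguments": {"source": {"path": "/tmp/main.c"}, "breakpoints": []},
}
# Leaving "breakpoints" unset is also well formed; with option (2) it clears them too:
clear_with_unset_field = {
    "command": "setBreakpoints",
    "arguments": {"source": {"path": "/tmp/main.c"}},
}
```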
Reviewed By: wallace, labath
Differential Revision: https://reviews.llvm.org/D88513
When running in an ipv6-only environment where `AF_INET` sockets are not available, many lldb tests (mostly gdb remote tests) fail because things like `127.0.0.1` don't work there.
Use `localhost` instead of `127.0.0.1` whenever possible, or include a fallback of creating `AF_INET6` sockets when `AF_INET` fails.
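As a rough sketch of the fallback approach (the function name and structure are illustrative, not the actual test-suite helper):
```
import socket

def connect_loopback(port):
    # Sketch only: prefer "localhost" so name resolution picks a working
    # address family, and fall back to an AF_INET6 socket when AF_INET
    # sockets are unavailable (e.g. in an ipv6-only environment).
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    except OSError:
        sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.connect(("localhost", port))
    return sock
```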
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D87333
Rather than relying on CMake to substitute the full path to the lldb
source root, use the value set in config.lldb_src_root. This makes it
slightly easier to write a custom lit.site.cfg.py.
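For example, a hand-written site config could now be as small as the following sketch (paths are placeholders, and `config` is the object lit provides to site configs; the exact set of required values depends on the suite):
```
# Hypothetical excerpt from a custom lit.site.cfg.py; paths are placeholders.
config.lldb_src_root = "/path/to/llvm-project/lldb"
# The main lit config can then derive source-relative paths from
# config.lldb_src_root instead of relying on CMake substitution.
```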
The destructor must be defined in the implementation class so that it
can be called, as Vedant Kumar pointed out in:
'''
What were your thoughts, re:
+class Trace : public PluginInterface {
+public:
+ ~Trace() override = default;
Does this need to be `virtual ~Trace() = ...`?
Otherwise, when a std::shared_ptr<Trace> is destroyed, the
destructor for the derived TraceIntelPT instance won't run.
'''
The `macos-setup-codesign.sh` script has been in place for over two years. If there are no known issues, it's a good time to drop the manual steps from the docs.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D88257
This reverts commit f775fe5964.
I fixed a return type error in the original patch that was causing a test failure.
Also added a REQUIRES: python to the shell test so we'll skip this for
people who build lldb w/o Python.
Also added another test for the error printing.
Configuring the variable in CMake isn't enough, because the build mode
can't be resolved until execution time, which requires the build mode to
be substituted by lit.
Add the flag in ProcessMachCore::DoLoadCore that stops additional
searches for the binaries when we have an LC_NOTE identifying the
firmware/standalone binary as the correct one & we have loaded it
successfully.
This enables support for writing LLDB documentation in markdown in
addition to reStructuredText. We already had documentation written in
markdown (StructuredDataPlugins and DarwinLog) which will now also be
available on the website.
With the recent patches to the ASTImporter that improve template type importing
(D87444), most of the import-std-module tests can now finally import the
type of the STL container they are testing. This patch removes most of the casts
that were added to simplify types to something the ASTImporter can import
(for example, std::vector<int>::size_type was cast to `size_t` until now).
Also adds the missing tests that require referencing the container type (for
example, simply printing the whole container), as here we couldn't use a casting
workaround.
The only casts that remain are in the forward_list tests that reference
the iterator and the stack test. Both tests are still failing to import the
respective container type correctly (or crash while trying to import).
Usually when we enter a SWIG wrapper function from Python, SWIG automatically
adds a `Py_BEGIN_ALLOW_THREADS`/`Py_END_ALLOW_THREADS` around the call to the SB
API C++ function. This will ensure that Python's GIL is released when we enter
LLDB and locked again when we return to the wrapper code.
D51569 changed this behaviour but only for the generated `__str__` wrappers. The
added `nothreadallow` disables the injection of the GIL release/re-acquire code
and the GIL is now kept locked when entering LLDB and is expected to be still
locked when returning from the LLDB implementation. The main reason for that was
that back when D51569 landed the wrapper itself created a Python string. These
days it just creates a std::string and SWIG itself takes care of getting the GIL
and creating the Python string from the std::string, so that workaround isn't
necessary anymore.
This patch just removes `nothreadallow` so that our `__str__` functions now
behave like all other wrapper functions in that they release the GIL when
calling into the SB API implementation.
The motivation here is actually to work around another potential bug in LLDB.
When one calls into the LLDB SB API while holding the GIL and that call causes
LLDB to interpret some Python script via `ScriptInterpreterPython`, then the GIL
will be unlocked when the control flow returns from the SB API. In the case of
the `__str__` wrapper this would cause the next call to a Python function
requiring the GIL to fail (as SWIG will not try to reacquire the GIL since it
isn't aware that LLDB released it).
The reason for this unexpected GIL release seems to be a workaround for recent
Python versions:
```
// The only case we should go further and acquire the GIL: it is unlocked.
if (PyGILState_Check())
return;
```
The early exit here causes `InitializePythonRAII::m_was_already_initialized` to
always be false, which in turn causes `InitializePythonRAII`'s destructor to
always directly unlock the GIL via `PyEval_SaveThread`. I'm investigating how to
properly fix this bug in a follow up patch, but for now this straightforward
patch seems to be enough to unblock my other patches (and it also has the
benefit of removing this workaround).
The test for this is just a simple test for `std::deque` which has a synthetic
child provider implemented as a Python script. Inspecting the deque object will
cause `expect_expr` to create a string error message by calling
`str(deque_object)`. Printing the ValueObject causes the Python script for the
synthetic children to execute which then triggers the bug described above where
the GIL ends up being unlocked.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D88302
RegInfoBasedABI::GetRegisterInfoByName was failing because mips/mips64 ABIs
don't use ConstString in their register info array.
Reviewed By: #lldb, teemperor
Differential Revision: https://reviews.llvm.org/D88375
When a Mach-O corefile has an LC_NOTE "main bin spec" for a
standalone binary / firmware, with only a UUID and no load
address, try to locate the binary and dSYM by UUID and if
found, load it at offset 0 for the user.
Add a test case that tests a firmware/standalone corefile
with both the "kern ver str" and "main bin spec" LC_NOTEs.
<rdar://problem/68193804>
Differential Revision: https://reviews.llvm.org/D88282
Every call to the protected SBAddress constructor and the SetAddress
method takes the address of a valid object, which means we might as well
pass it as a const reference instead of a pointer and drop the null
check.
Differential revision: https://reviews.llvm.org/D88249
Recently https://reviews.llvm.org/D88103 introduced a nice API for
converting a JSON object into C++ types, which includes nice error
messaging.
I'm using that new functionality to perform the parsing in a much more
elegant way. As a result, the code looks simpler and more maintainable,
as we no longer parse individual fields manually.
I updated the test cases accordingly.
Differential Revision: https://reviews.llvm.org/D88264
The OS version field is generally not very helpful for non-Darwin
targets. On Linux, it identifies the kernel version which moves
out-of-sync with the userspace. On Windows, this field actually ends up
corresponding to the Visual Studio toolset version instead of the OS
version. Consider non-Darwin targets without an OS version to be fully
specified.
Differential Revision: https://reviews.llvm.org/D88181
Reviewed By: Jonas Devlieghere, Dave Lee
Translating between JSON objects and C++ structures is common.
From experience in clangd, fromJSON/ObjectMapper work well and save a lot of
code, but aren't adopted elsewhere at least partly due to total lack of error
reporting beyond "ok"/"bad".
The recently-added error model should be rich enough for most applications.
It requires tracking the path within the root object and reporting local
errors at appropriate places.
To do this, we exploit the fact that the call graph of recursive
parse functions mirrors the structure of the JSON itself.
The current path is represented as a linked list of segments, each of which is
on the stack as a parameter. Concretely, fromJSON now looks like:
bool fromJSON(const Value&, T&, Path);
Beyond the signature change, this is reasonably unobtrusive: building
the path segments is mostly handled by ObjectMapper and the vector<T> fromJSON.
However the root caller of fromJSON must now create a Root object to
store the errors, which is a little clunky.
I've added high-level parse<T>(StringRef) -> Expected<T>, but it's not
general enough to be the primary interface I think (at least, not usable in
clangd).
All existing users (mostly just clangd) are updated in this patch;
making this change backwards-compatible would be a bit hairy.
Differential Revision: https://reviews.llvm.org/D88103
The minidump-sysroot test I added in commit 20f84257 compares two paths
using a string comparison. This causes the Windows buildbot to fail
because of mismatched forward slashes and backslashes. Use
os.path.normcase to normalize before comparing.
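A small sketch of the normalization (the function name and paths are illustrative):
```
import os.path

# Sketch: normalize before comparing so forward/backslash and case
# differences on Windows don't cause spurious mismatches.
def paths_match(expected, actual):
    return os.path.normcase(expected) == os.path.normcase(actual)
```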
Add an optimal thread strategy to execute a specified number of tasks.
This strategy should prevent us from creating too many threads if we
occasionally have an unexpectedly small number of tasks.
Differential Revision: https://reviews.llvm.org/D87765
When the various methods of locating the module in GetRemoteSharedModule
fail, make sure we pass the original module spec to the bail-out call to
the provided resolver function.
Also make sure we consistently use the resolved module spec from the
various success paths.
Thanks to what appears to have been an accidentally inverted condition
(commit 85967fa applied the new condition to a path where GetModuleSpec
returns false, but should have applied it when GetModuleSpec returns
true), without this fix we only pass the original module spec in the
fallback if the original spec has no uuid (or has a uuid that somehow
matches the resolved module's uuid despite the call to GetModuleSpec
failing). This manifested as a bug when processing a minidump file with
a user-provided sysroot, since in that case the resolver call was being
applied to resolved_module_spec (despite resolution failing), which did
not have the path of its file_spec set.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D88099
Clang has some type sugar that only serves as a way to preserve the way a user
has typed a certain type in the source code. These types are currently not
unwrapped when we query the type name for a Clang type, which means that this
type sugar actually influences what formatters are picked for a certain type.
Currently if a user decides to reference a type by doing `::GlobalDecl Var = 3;`,
the type formatter for `GlobalDecl` will not be used (as the type sugar
around the type gives it the name `::GlobalDecl`). The same goes for other ways
to spell out a type such as `auto` etc.
With this patch most of this type sugar gets stripped when the full type name is
calculated. Typedefs are not getting desugared as that seems counterproductive.
I also don't desugar atomic types as that's technically not type sugar.
Reviewed By: jarin
Differential Revision: https://reviews.llvm.org/D87481
There was a little thinko which meant that, when stopped in a frame with
debug information but whose CU didn't have any global variables, we would
report:
no debug info for frame <N>
This patch fixes that error message to say the intended:
no global variables in current compile unit
<rdar://problem/69086361>
This is the first in a series of patches that will add a new processor trace plug-in to LLDB.
The idea for this first patch is to add the plug-in interface with simple commands for the trace files that can "load" and "dump" the trace information. We can test the functionality and ensure people are happy with the way things are done and how things are organized before moving on to adding more functionality.
Processor trace information can be viewed in a few different ways:
- post mortem where a trace is saved off that can be viewed later in the debugger
- gathered while a process is running, allowing the user to step back in time (with no variables, memory or registers) to see how each thread arrived at where it is currently stopped.
This patch attempts to start with the first solution of loading a trace file after the fact. The idea is that we will use a JSON file to load the trace information. JSON allows us to specify information about the trace like:
- plug-in name in LLDB
- path to trace file
- shared library load information so we can re-create a target and symbolicate the information in the trace
- any other info that the trace plug-in will need to be able to successfully parse the trace information
- cpu type
- version info
- ???
A new "trace" command was added at the top level of the LLDB commmands:
- "trace load"
- "trace dump"
I did this because if we load trace information we don't need to have a process and we might end up creating a new target for the trace information that will become active. If anyone has any input on where this would be better suited, please let me know. Walter Erquinigo will end up filling in the Intel PT specific plug-in so that it works and is tested once we can agree that the direction of this patch is the correct one, so please feel free to chime in with ideas or comments!
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D85705
A few fixes while trying to figure out why tests are being skipped for arsenm:
- We check `$compiler -v`, but `-v` is `--verbose`, not `--version`. Use the long flag name.
- We check all lines matching `version ...`, but we should exit early for the first version string we see (which should be the main one). I'm not sure if this is the issue, but perhaps this is causing some users to skip some tests if another "version ..." is showing up later.
- Having `\.` in a python string is triggering pylint warnings, because it should be escaped as a regex string, e.g. `r'\.'`. However, `.` in a character class does not need to be escaped, as it matches only a literal `.` in that context. A brief sketch combining these points follows the list.
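The sketch below is illustrative only; the pattern and sample line are not the actual code from the test suite:
```
import re

# Illustrative: use a raw string so pylint doesn't flag "\.", note that "."
# inside a character class is already literal, and stop at the first match.
line = "clang version 9.0.1 (tags/RELEASE_901/final)"
match = re.search(r"version ([0-9.]+)", line)
if match:
    print(match.group(1))  # prints "9.0.1"
```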
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D88051
When we fixed ImportDeclContext(...) in D71378 to make sure we complete each
FieldDecl of a RecordDecl when we are importing the definition, we missed the
case where a FieldDecl was an ArrayType whose ElementType is a record.
This fix was motivated by a codegen crash during LLDB expression parsing. Since
we were not importing the definition we were crashing during layout, which
requires all the records to be defined.
Differential Revision: https://reviews.llvm.org/D86660
Update some of the examples in the help string for `breakpoint command add`.
Python breakpoint commands have different output than what's shown in the help string.
Notes:
* Removed an example containing an inner function, as it seems more about a Python technique than about `command script add`
* Updated `print x` to `print(x)` to be python 2/3 agnostic
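For reference, a minimal sketch of the kind of Python breakpoint command the updated examples describe (the function name and message are illustrative, not the exact example from the help text):
```
# Illustrative callback for a Python breakpoint command; names are placeholders.
def breakpoint_callback(frame, bp_loc, internal_dict):
    print("Hit breakpoint {} in {}".format(bp_loc.GetBreakpoint().GetID(),
                                           frame.GetFunctionName()))
    return True  # returning True stops at the breakpoint
```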
Differential Revision: https://reviews.llvm.org/D87807
This patch makes the include_directories, file_names and opcodes fields
of the line table optional. This helps us simplify some tests.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D87878
Previously, there was a little ambiguity about whether IsInlined should
return true for an inlined lexical block, since technically the lexical
block would not represent an inlined function (it'd just be contained
within one).
Edit suggested by Jim Ingham.
Previously when <addr> in "memory region <addr>" didn't
parse correctly, we'd print an error and then also ask lldb-server
for a region containing LLDB_INVALID_ADDRESS.
(lldb) memory region not_an_address
error: invalid address argument "not_an_address"...
error: Server returned invalid range
Only send the command to lldb-server if the address
parsed correctly.
(lldb) memory region not_an_address
error: invalid address argument "not_an_address"...
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D87694
Register the `faulthandler` module so we can see what lldb tests are doing when they misbehave (e.g. run under a test runner that sets a timeout). This will print a stack trace for the following signals:
- `SIGSEGV`, `SIGFPE`, `SIGABRT`, `SIGBUS`, and `SIGILL` (via `faulthandler.enable()`)
- `SIGTERM` (via `faulthandler.register(SIGTERM)`) [This is what our test runner sends when it times out].
The only signal we currently handle is `SIGINT` (via `unittest2.signals.installHandler()`) so there should be no overlap added by this patch.
Because this import is not available until python3, and the `register()` method is not available on Windows, this is enabled defensively.
This should have absolutely no effect when tests are passing (or even normally failing), but can be observed by running this while ninja is running:
```
kill -s SIGTERM $(ps aux | grep dotest.py | head -1 | awk '{print $2}')
```
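A rough sketch of the defensive registration described above (the structure is illustrative, not the exact dotest change):
```
import signal

try:
    import faulthandler
except ImportError:
    faulthandler = None  # not available before Python 3

if faulthandler is not None:
    # Dump tracebacks on SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL.
    faulthandler.enable()
    if hasattr(faulthandler, "register") and hasattr(signal, "SIGTERM"):
        # register() is unavailable on Windows; SIGTERM is what the test
        # runner sends on timeout. chain=True keeps any existing handler.
        faulthandler.register(signal.SIGTERM, chain=True)
```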
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D87637
Perform all error handling in ReadCode()
Add :help text describing “< path”, add extra line before Commands
Differential Revision: https://reviews.llvm.org/D87640
Since "generic type" has a precise meaning in some languages, reword the docstring of `CompilerType` to avoid ambiguity.
Differential Revision: https://reviews.llvm.org/D87633
Code was added that used llvm error checking to parse .debug_aranges, but the error check after parsing the DWARFDebugArangesSet was reversed and was causing no error to be returned with no valid address ranges being actually used. This meant we would always fall back onto creating our own address ranges by parsing the compile unit's ranges. This was causing problems for cases where the DW_TAG_compile_unit had a single address range expressed via a DW_AT_low_pc and DW_AT_high_pc attribute pair (not using a DW_AT_ranges attribute), but the .debug_aranges had correct split ranges. In this case we would end up using the single range for the compile unit that encompassed all of the ranges from the .debug_aranges section, which would cause address resolving issues in LLDB where address lookups would fail for certain addresses.
Differential Revision: https://reviews.llvm.org/D87626
Make it possible to run the script command with a different language
than currently selected.
$ ./bin/lldb -l python
(lldb) script -l lua
>>> io.stdout:write("Hello, World!\n")
Hello, World!
When passing the language option and a raw command, you need to separate
the flag from the script code with --.
$ ./bin/lldb -l python
(lldb) script -l lua -- io.stdout:write("Hello, World!\n")
Hello, World!
Differential revision: https://reviews.llvm.org/D86996
qemu refers to the "fp" and "lr" registers by their generic names
(x29/x30). This mismatch manifested itself as not being able to unwind
or display values of some local variables.
In the MinGW world, a UNIX-like `lib` prefix is preferred for libraries.
This patch adjusts the CMake files to do that.
Differential Revision: https://reviews.llvm.org/D87517
On macOS Big Sur the class descriptor contains the NSKVONotifying_
prefix. This is covered by TestDataFormatterObjCKVO.
Differential revision: https://reviews.llvm.org/D87545
This patch adds a way to fetch breakpoint metadata as a serialized
StructuredData format (JSON). This can be used by IDEs to update
their UI when a breakpoint is set or modified from the console.
rdar://11013798
Differential Revision: https://reviews.llvm.org/D87491
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
They are currently not being set correctly for the case of multi-config generators like Xcode and VS. There's also a typo in one of the CMake files.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D87466
Extract a function for turning `eLaunchFlavorDefault` into a concrete `eLaunchFlavor` value.
This new function encapsulates the few compile time variables involved, and also prevents clang unused code diagnostics.
Differential Revision: https://reviews.llvm.org/D87327
This adds support for substituting std::pair instantiations when
import-std-module is enabled.
With the fixes in parent revisions we can currently substitute a single pair
(however, a result that returns a second pair currently causes LLDB to crash
while importing the second template instantiation).
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D85141
The ASTImporter has an `Imported(From, To)` callback that notifies subclasses
that a declaration has been imported in some way. LLDB uses this in the
`CompleteTagDeclsScope` to see which records have been imported into the scratch
context. If the record was declared inside the expression, then the
`CompleteTagDeclsScope` will forcibly import the full definition of that record
to the scratch context so that the expression AST can safely be disposed later
(otherwise we might end up going back to the deleted AST to complete the
minimally imported record). The way this is implemented is that there is a list
of decls that need to be imported (`m_decls_to_complete`) and we keep completing
the declarations inside that list until the list is empty. Every `To` Decl we
get via the `Imported` callback will be added to the list of Decls to be
completed.
There are some situations where the ASTImporter will actually give us two
`Imported` calls with the same `To` Decl. One way where this happens is if the
ASTImporter decides to merge an imported definition into an already imported
one. Another way is that the ASTImporter just happens to get two calls to
`ASTImporter::Import` for the same Decl. This for example happens when importing
the DeclContext of a Decl requires importing the Decl itself, such as when
importing a RecordDecl that was declared inside a function.
The bug addressed in this patch is that when we end up getting two `Imported`
calls for the same `To` Decl, then we would crash in the
`CompleteTagDeclsScope`. That's because the first time we complete the Decl we
remove the Origin tracking information (that maps the Decl back to from where it
came from). The next time we try to complete the same `To` Decl the Origin
tracking information is gone and we hit the `to_context_md->getOrigin(decl).ctx
== m_src_ctx` assert (`getOrigin(decl).ctx` is a nullptr the second time as the
Origin was deleted).
This is actually a regression coming from D72495. Before D72495
`m_decls_to_complete` was actually a set so every declaration in there could
only be queued once to be completed. The set was changed to a vector to make the
iteration over it deterministic, but that also means that we now potentially
end up trying to complete a Decl twice.
This patch essentially just reverts D72495 and makes the `CompleteTagDeclsScope`
use a SetVector for the list of declarations to be completed. The SetVector
should filter out the duplicates (as the original `set` did) and also ensure that
the completion order is deterministic. I actually couldn't find any way to cause
LLDB to reproduce this bug by merging declarations (this would require that we
for example declare two namespaces in a non-top-level expression which isn't
possible). But the bug reproduces very easily by just declaring a class in an
expression, so that's what the test is doing.
Reviewed By: shafik
Differential Revision: https://reviews.llvm.org/D85648
SemaSourceWithPriorities is a special SemaSource that wraps our normal LLDB
ExternalASTSource and the ASTReader (which is used for the C++ module loading).
It's only active when the `import-std-module` setting is turned on.
The `CompleteType` function there in `SemaSourceWithPriorities` is looping over
all ExternalASTSources and asks each to complete the type. However, that loop is
in another loop that keeps doing that until the type is complete. If that
function is ever called on a type that is a forward decl then that causes LLDB
to go into an infinite loop.
I remember I added that second loop and the comment because I thought I saw a
similar pattern in some other Clang code, but after some grepping I can't find
that code anywhere and it seems the rest of the code base only calls
CompleteType once (it would also be kinda silly to call it multiple
times). So it seems that's just a silly mistake.
This is implicitly tested by importing `std::pair`, but I also added a simpler
dedicated test that creates a dummy libc++ module with some forward declarations
and then imports them into the scratch AST context. At some point the
ASTImporter will check if one of the forward decls could be completed by the
ExternalASTSource, which will cause the `SemaSourceWithPriorities` to go into an
infinite loop once it receives the `CompleteType` call.
Reviewed By: shafik
Differential Revision: https://reviews.llvm.org/D87289
This patch removes register set definitions and other redundant code from
NativeRegisterContextLinux/RegisterContextPOSIX*_arm. Register sets are now
moved under RegisterInfosPOSIX_arm which now uses RegisterInfoAndSetInterface.
This is similar to what we earlier did for AArch64.
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D86962
The targetHasSVE helper function was added to test for availability of SVE
support on the connected platform. We now intend to use this function in other
upcoming testcases, so I am moving it to a generic location in lldbtest.py.
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D86872
TestCPP11EnumTypes is one of the most expensive tests on my system and takes
around 35 seconds to run. A relatively large amount of that time is actually
spent doing CPU-intensive work it seems (and not waiting on timeouts like other
slow tests).
The main issue is that this test repeatedly compiles the same source files
with different compiler defines. The test is also including standard library
headers, so it will also build all system modules with the gmodules debug
info variant. This leads to the problem that this test ends up compiling all
system Clang modules 8 times (one for each subtest with a unique define). As
the system modules are quite large, this test ends up spending most
of its runtime just recompiling all system modules on macOS.
There is also the small issue that this test is starting and stopping
the test process a few hundred times.
This rewrites the test to instead just use a macro to instantiate all the
enum types in a single source and uses global variables to test the values
(which means there is no more need to continue/stop or even start a process).
I kept running all the debug info variants (even though it doesn't seem really
relevant) to keep this as NFC as possible.
This reduced the test runtime to around 1.5 seconds on my system (or in relative
numbers, the runtime of this test decreases by 95%).
This is one of the most expensive tests and runs for nearly half a minute on
my machine. Besides this test just doing a lot of work by iterating 15k times on
one ValueObject (which seems to be the point), it also runs this for every
debug info variant which doesn't seem relevant to just iterating ValueObject.
This marks it as no_debug_info_test to only run one debug info variation
and cut down the runtime to around 7 seconds on my machine.
This reverts commit f369d51896. The bug it fixed was already fixed by
1c5a0cb1c3 with the same approach, so the reverted commit was just giving the
variable a second fallback value.
The tests are unsupported on Linux, but they assert in
Thread::GetStopDescriptionRaw() because of an empty stop reason
description. And it is empty because
InstrumentationRuntimeTSan::NotifyBreakpointHit() fails
to get a report from InstrumentationRuntimeTSan::RetrieveReportData(),
which is possibly(?) the reason why this is unsupported on Linux.
Add a dummy stop reason description for this case, which changes
the test result from failing to unsupported.
Caused by D86662. The fix is to only check some fields when the expect_debug_info_size flag is true. For some reason this was not failing on a local Linux machine.
This updates the errors reported by expect()
to something like:
```
Ran command:
"help"
Got output:
Debugger commands:
<...>
Expecting start string: "Debugger commands:" (was found)
Expecting end string: "foo" (was not found)
```
(see added tests for more examples)
This shows the user exactly what was run,
what checks passed and which failed. Along with
whether that check was supposed to pass.
(including what regex patterns matched)
These lines are also output to the test
trace file, whether the test passes or not.
Note that expect() will still fail at the first failed
check, in line with previous behaviour.
Also I have flipped the wording of the assert
message functions (.*_MSG) to describe failures,
not successes. This makes more sense as they are
only shown on assert failures.
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D86792
Previously, before loading the REPL language-specific init file, lldb
checked the selected target language, which returns an unknown
language type for a REPL target.
Instead, the patch calls `Language::GetLanguagesSupportingREPLs` and
looks for the first element of that set. In case lldb was not configured
with a REPL language, it will just skip sourcing the REPL init
file and fall back to the original logic (continuing with the default
init file).
rdar://65836048
Differential Revision: https://reviews.llvm.org/D87076
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>