This is a pretty straightforward first pass at removing a number of uses of
Mutex in favor of std::mutex or std::recursive_mutex. The problem is that there
are interfaces which take Mutex::Locker & to lock internal locks. This patch
cleans up most of the easy cases. The only non-trivial change is in
CommandObjectTarget.cpp where a Mutex::Locker was split into two.
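For illustration only (not code from the patch), the typical mechanical change looks like this; the class and member names below are hypothetical:

#include <mutex>
#include <vector>

// The old pattern
//   Mutex::Locker locker(m_modules_mutex);
// becomes a std::lock_guard over std::mutex (or std::recursive_mutex where the
// old code relied on recursive locking).
class ModuleListSketch {
public:
  void Append(int module) {
    std::lock_guard<std::recursive_mutex> guard(m_modules_mutex);
    m_modules.push_back(module);
  }
private:
  std::vector<int> m_modules;
  mutable std::recursive_mutex m_modules_mutex;   // was lldb_private::Mutex
};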
llvm-svn: 269877
NPL now assumes it is running from a single thread, so its thread-safety is untested
anyway (and if that assumption is broken, we'll have bigger problems (due to ptrace restrictions)
than a couple of missing mutexes).
llvm-svn: 269640
Summary:
MonitorDebugServerProcess went to a lot of effort to make sure its asynchronous invocation does
not cause any mischief, but it was still not race-free. Specifically, in a quick stop-restart
sequence (like the one in TestAddressBreakpoints) the copying of the process shared pointer via
target_sp->GetProcessSP() was racing with the resetting of the pointer in DeleteCurrentProcess,
as they were both accessing the same shared_ptr object.
To avoid this, I simply pass in a weak_ptr to the process when the callback is created. Locking
this pointer is race-free as they are two separate objects even though they point to the same
process instance. This also removes the need for the complicated tap-dance around retrieving the
process pointer.
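A minimal sketch of the idea (names are illustrative, not the actual LLDB signatures):

#include <functional>
#include <memory>

struct Process { void HandleMonitorEvent(int status) { (void)status; } };

// Capture a weak_ptr when the callback is created. Locking it later does not
// race with another thread resetting the target's shared_ptr, because the
// weak_ptr is a separate object that merely refers to the same Process.
std::function<void(int)>
MakeMonitorCallback(const std::shared_ptr<Process> &process_sp) {
  std::weak_ptr<Process> process_wp = process_sp;
  return [process_wp](int status) {
    if (std::shared_ptr<Process> proc = process_wp.lock())
      proc->HandleMonitorEvent(status);
    // If the process is already gone, there is nothing to do.
  };
}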
Reviewers: clayborg
Subscribers: tberghammer, lldb-commits
Differential Revision: http://reviews.llvm.org/D20107
llvm-svn: 269281
Summary:
This replaces the C-style "void *" baton of the child process monitoring functions with a more
C++-like API taking a std::function. The motivation for this was that it was very difficult to
handle the ownership of the object passed into the callback function -- each caller ended up
implementing its own way of doing it, some doing it better than others. With the new API, one can
just pass a smart pointer into the callback and all of the lifetime management will be handled
automatically.
This has enabled me to simplify the rather complicated handshake in Host::RunShellCommand. I have
left handling of MonitorDebugServerProcess (my original motivation for this change) to a separate
commit to reduce the scope of this change.
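Roughly, the change swaps a baton-style callback for a std::function; the sketch below is illustrative rather than the exact LLDB declaration:

#include <cstdio>
#include <functional>
#include <memory>

// Old style: ownership of 'baton' is unclear and every caller handled it differently.
typedef bool (*MonitorChildProcessCallback)(void *baton, int pid, bool exited,
                                            int signal, int status);

// New style: the callable owns whatever state it needs (e.g. a smart pointer),
// so lifetime management happens automatically when the callback is destroyed.
using MonitorCallback =
    std::function<bool(int pid, bool exited, int signal, int status)>;

int main() {
  auto state = std::make_shared<int>(0);         // state the callback needs
  MonitorCallback cb = [state](int pid, bool exited, int, int status) {
    ++*state;                                    // 'state' lives as long as 'cb' does
    std::printf("pid %d exited=%d status=%d\n", pid, exited, status);
    return true;
  };
  cb(1234, true, 0, 0);
}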
Reviewers: clayborg, zturner, emaste, krytarowski
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D20106
llvm-svn: 269205
Summary:
If the remote uses svr4 packets to communicate library info,
the LoadUnload tests would fail, as lldb only used the basename
for modules, causing problems when two modules have the same basename.
Using the absolute path as sent by the remote will ensure that lldb
locates the module from the correct directory when there are overlapping
basenames. When debugging a remote process, LoadModuleAtAddress will still
fall back to using basename and module_search_paths, so we don't
need to worry about using absolute paths in this case.
Reviewers: ADodds, jasonmolenda, clayborg, ovyalov
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D19557
llvm-svn: 267741
Summary:
If the remote uses include features when communicating
xml register info back to lldb, the existing code would reset the
lldb register index at the beginning of each include node.
This would lead to multiple registers having the same lldb register index.
Since the lldb register numbers should be contiguous and unique,
maintain them across the parsing of all of the xml feature nodes.
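The gist of the fix, sketched with made-up types (the real parsing happens in ProcessGDBRemote):

#include <cstdint>
#include <string>
#include <vector>

struct FeatureNode { std::vector<std::string> registers; };

// Keep one running index across all feature nodes instead of resetting it per
// node, so every register gets a unique, contiguous eRegisterKindLLDB number.
std::vector<uint32_t>
AssignLldbRegisterNumbers(const std::vector<FeatureNode> &features) {
  std::vector<uint32_t> numbers;
  uint32_t lldb_reg_num = 0;                     // NOT reset inside the loop
  for (const FeatureNode &feature : features)
    for (const std::string &reg : feature.registers) {
      (void)reg;
      numbers.push_back(lldb_reg_num++);
    }
  return numbers;
}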
Reviewers: jingham, jasonmolenda, clayborg
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D19303
llvm-svn: 267468
Summary:
When we receive an svr4 packet from the remote, we check for new modules
and add them to the list of images in the target. However, we did not
do the same for modules which have been removed.
This was causing TestLoadUnload to fail when using ds2, which uses
svr4 packets to communicate all library info on Linux. This patch fixes
the failing test.
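Conceptually, the missing piece is a set difference between the previously loaded list and the list in the new svr4 response; a rough sketch (not the actual ProcessGDBRemote code):

#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Return the modules that were present before but are absent from the new
// library list reported by the remote, so they can be unloaded from the target.
std::vector<std::string>
FindRemovedModules(std::vector<std::string> previous, std::vector<std::string> current) {
  std::sort(previous.begin(), previous.end());
  std::sort(current.begin(), current.end());
  std::vector<std::string> removed;
  std::set_difference(previous.begin(), previous.end(),
                      current.begin(), current.end(),
                      std::back_inserter(removed));
  return removed;
}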
Reviewers: zturner, tfiala, ADodds
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D19230
llvm-svn: 267467
Summary:
eRegisterKindProcessPlugin is used to store the register
indices used by the remote, and eRegisterKindLLDB is used
to store the internal lldb register indices. However, we're currently
using the lldb indices instead of the process plugin indices
when sending p/P packets. This will break if the remote uses
non-contiguous register indices.
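Schematically (the kinds array mirrors lldb's RegisterInfo, the rest is illustrative):

#include <cstdint>
#include <cstdio>

enum RegisterKind { eRegisterKindProcessPlugin = 0, eRegisterKindLLDB = 1, kNumRegisterKinds = 2 };

struct RegisterInfoSketch {                  // trimmed-down stand-in for RegisterInfo
  uint32_t kinds[kNumRegisterKinds];
};

// When forming a 'p' packet, use the remote's own numbering rather than lldb's
// internal index, so non-contiguous remote register numbers keep working.
void SendReadRegisterPacket(const RegisterInfoSketch &reg_info) {
  uint32_t remote_regnum = reg_info.kinds[eRegisterKindProcessPlugin];
  std::printf("p%x\n", remote_regnum);       // sketch: real code builds a GDB-remote packet
}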
Reviewers: jasonmolenda, clayborg
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D19305
llvm-svn: 267466
RegisterContextLLDB::InitializeNonZerothFrame already has code to attempt
to detect and handle the case where the PC points beyond the end of a
function, but there are certain cases where this doesn't work correctly.
In fact, there are *two* different places where this detection is attempted,
and the failure is in fact a result of an unfortunate interaction between
those two separate attempts.
First, the ResolveSymbolContextForAddress routine is called with the
resolve_tail_call_address flag set to true. This causes the routine
to internally accept a PC pointing beyond the end of a function, and
still resolve the PC to that function symbol.
Second, the InitializeNonZerothFrame routine itself maintains a
"decr_pc_and_recompute_addr_range" flag and, if that turns out to
be true, itself decrements the PC by one and searches again for
a symbol at that new PC value.
Both approaches correctly identify the symbol associated with the PC.
However, the problem is now that later on, we also need to find the
DWARF CFI record associated with the PC. This is done in the
RegisterContextLLDB::GetFullUnwindPlanForFrame routine, and uses
the "m_current_offset_backed_up_one" member variable.
However, that variable only actually contains the PC "backed up by
one" if the *second* approach above was taken. If the function was
already identified via the first approach above, that member variable
is *not* backed up by one but simply points to the original PC.
This in turn causes GetEHFrameUnwindPlan to not correctly identify
the DWARF CFI record associated with the PC.
Now, in many cases, if the first method had to back up the PC by one,
we *still* use the second method too, because of this piece of code:
// Or if we're in the middle of the stack (and not "above" an asynchronous event like sigtramp),
// and our "current" pc is the start of a function...
if (m_sym_ctx_valid
    && GetNextFrame()->m_frame_type != eTrapHandlerFrame
    && GetNextFrame()->m_frame_type != eDebuggerFrame
    && addr_range.GetBaseAddress().IsValid()
    && addr_range.GetBaseAddress().GetSection() == m_current_pc.GetSection()
    && addr_range.GetBaseAddress().GetOffset() == m_current_pc.GetOffset())
{
    decr_pc_and_recompute_addr_range = true;
}
In many cases, when the PC is one beyond the end of the current function,
it will indeed then be exactly at the start of the next function. But this
is not always the case, e.g. if there happens to be alignment padding
between the end of one function and the start of the next.
In those cases, we may successfully look up the function symbol via
ResolveSymbolContextForAddress, but *not* set decr_pc_and_recompute_addr_range,
and therefore fail to find the correct DWARF CFI record.
A very simple fix for this problem is to just never use the first method.
Call ResolveSymbolContextForAddress with resolve_tail_call_address set
to false, which will cause it to fail if the PC is beyond the end of
the current function, or else to identify the next function if the PC
is also at the start of the next function. In either case, we will
then set the decr_pc_and_recompute_addr_range variable and back up the
PC anyway, but this time also find the correct DWARF CFI.
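In other words, the address used for both the symbol lookup and the CFI lookup is always derived from the backed-up PC for non-zeroth frames; roughly:

#include <cstdint>

// For a frame that is not the youngest one, the saved return address points one
// past the call instruction, which may be one past the end of the calling
// function. Back the PC up by one before resolving the symbol and the CFI row so
// that both lookups land inside the caller. (Illustrative only.)
uint64_t LookupAddressForCallerFrame(uint64_t return_address, bool is_frame_zero,
                                     bool behaves_like_frame_zero) {
  if (is_frame_zero || behaves_like_frame_zero)
    return return_address;       // e.g. sigtramp or debugger-injected frames
  return return_address - 1;     // decr_pc_and_recompute_addr_range in LLDB terms
}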
A related problem is that the ResolveSymbolContextForAddress sometimes
returns a "symbol" with empty name. This turns out to be an ELF section
symbol. Now, usually those get type eSymbolTypeInvalid. However, there
is code in ObjectFileELF::ParseSymbols that tries to change the type of
invalid symbols to eSymbolTypeCode or eSymbolTypeData if the symbol
lies within the code or data section.
Unfortunately, this check also hits the symbol for the code section
itself, which is then marked as eSymbolTypeCode. While the size of
the section symbol is 0 according to the ELF file, LLDB considers
this size invalid and attempts to figure out the "correct" size.
Depending on how this goes, we may end up with a symbol that overlays
part of the code section, even outside areas covered by real function
symbols.
Therefore, if we call ResolveSymbolContextForAddress with PC pointing
beyond the end of a function, we may get this bogus section symbol.
This again means InitializeNonZerothFrame thinks we have a valid PC,
but then we don't find any unwind info for it.
The fix for this problem is to simply always leave ELF section
symbols as type eSymbolTypeInvalid.
Differential Revision: http://reviews.llvm.org/D18975
llvm-svn: 267363
This patch adds support for Linux on SystemZ:
- A new ArchSpec value of eCore_s390x_generic
- A new directory Plugins/ABI/SysV-s390x providing an ABI implementation
- Register context support
- Native Linux support including watchpoint support
- ELF core file support
- Misc. support throughout the code base (e.g. breakpoint opcodes)
- Test case updates to support the platform
This should provide complete support for debugging the SystemZ platform.
Not yet supported are optional features like transaction support (zEC12)
or SIMD vector support (z13).
There is no instruction emulation, since our ABI requires that all code
provide correct DWARF CFI at all PC locations in .eh_frame to support
unwinding (i.e. -fasynchronous-unwind-tables is on by default).
The implementation follows existing platforms in a mostly straightforward
manner. A couple of things that are different:
- We do not use PTRACE_PEEKUSER / PTRACE_POKEUSER to access single registers,
since some registers (access register) reside at offsets in the user area
that are multiples of 4, but the PTRACE_PEEKUSER interface only allows
accessing aligned 8-byte blocks in the user area. Instead, we use the s390
specific ptrace interface PTRACE_PEEKUSR_AREA / PTRACE_POKEUSR_AREA, which
allows accessing a whole block of the user area in one go, in effect
allowing us to treat parts of the user area as register sets (see the sketch
after this list).
- SystemZ hardware does not provide any means to implement read watchpoints,
only write watchpoints. In fact, we can only support a *single* write
watchpoint (but this can span a range of arbitrary size). In LLDB this
means we support only a single watchpoint. I've set all test cases that
require read watchpoints (or multiple watchpoints) to expected failure
on the platform. [ Note that there were two test cases that install
a read/write watchpoint even though they nowhere rely on the "read"
property. I've changed those to simply use plain write watchpoints. ]
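Here is a sketch of the PTRACE_PEEKUSR_AREA access mentioned in the first bullet above, assuming the s390 ptrace_area layout from asm/ptrace.h; error handling is omitted and the helper name is made up:

#if defined(__linux__) && defined(__s390x__)
#include <asm/ptrace.h>    // ptrace_area, PTRACE_PEEKUSR_AREA (s390 only)
#include <sys/ptrace.h>
#include <sys/types.h>

// Read 'size' bytes of the user area starting at 'offset' into 'buf' with a
// single ptrace call, instead of one aligned 8-byte PTRACE_PEEKUSER per word.
long PeekUserArea(pid_t tid, unsigned int offset, void *buf, unsigned int size) {
  ptrace_area parea;
  parea.len = size;                          // how much of the user area to copy
  parea.kernel_addr = offset;                // offset within the user area
  parea.process_addr = (unsigned long)buf;   // destination in our address space
  return ptrace(static_cast<__ptrace_request>(PTRACE_PEEKUSR_AREA), tid, &parea, nullptr);
}
#endif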
Differential Revision: http://reviews.llvm.org/D18978
llvm-svn: 266308
If the UnwindPlan did not identify how to unwind the stack pointer
register, LLDB currently assumes it can determine the caller's SP
from the current frame's CFA. This is true on most platforms
where CFA is by definition equal to the incoming SP at function
entry.
However, on the s390x target, we instead define the CFA to equal
the incoming SP plus an offset of 160 bytes. This is because
our ABI defines that the caller has to provide a register save
area of size 160 bytes. This area is allocated by the caller,
but is considered part of the callee's stack frame, and therefore
the CFA is defined as pointing to the top of this area.
In order to make this work on s390x, this patch introduces a new
ABI callback GetFallbackRegisterLocation that provides platform-
specific fallback register locations for unwinding. The existing
code to handle SP unwinding as well as volatile registers is moved
into the default implementation of that ABI callback, to allow
targets where that implementation is incorrect to override it.
This patch in itself is a no-op for all existing platforms.
But it is a pre-requisite for adding s390x support.
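Numerically, the default rule and the s390x override differ only by a constant bias; a toy illustration (the real interface traffics in UnwindPlan register locations, not raw integers):

#include <cstdint>

// Default fallback: on most ABIs the CFA equals the caller's SP at the call site.
uint64_t DefaultCallerSPFromCFA(uint64_t cfa) { return cfa; }

// s390x: the CFA is defined as incoming SP + 160 (the caller-provided register
// save area), so the caller's SP is recovered by subtracting that bias.
uint64_t S390xCallerSPFromCFA(uint64_t cfa) { return cfa - 160; }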
Differential Revision: http://reviews.llvm.org/D18977
llvm-svn: 266307
The structure definitions are not provided, but we perform a sizeof operation on
them which causes a build failure. Include `asm/ptrace.h` to get the structure
definitions.
llvm-svn: 266042
os to "ios" or "macosx" if it is unspecified. For environments
where there genuinely is no os, we don't want to errantly
convert that to ios/macosx, e.g. bare board debugging.
Change PlatformRemoteiOS, PlatformRemoteAppleWatch, and
PlatformRemoteAppleTV to not create themselves if we have
an unspecified OS. Same problem - these are not appropriate
platforms for bare board debugging environments.
Have Process::Attach's logging take place if either
process or target logging is enabled.
<rdar://problem/25592378>
llvm-svn: 265732
It turns out this does make a functional change, in the case when the inferior hits an int3 that was
not placed by the debugger. Backing out for now.
llvm-svn: 265647
Summary:
SetThreadStopInfo was checking for a breakpoint at the current PC several times. This merges the
identical code into a separate function. I've left one breakpoint check alone, as it was doing
more complicated stuff, and I did not see a way to merge it without making the interface more
complicated. NFC.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18819
llvm-svn: 265560
Summary:
This resolves a similar problem as D16720 (which handled the case when we single-step onto a
breakpoint), but this one deals with involuntary stops: when we stop a thread (e.g. because
another thread has hit a breakpoint and we are doing a full stop), we can end up stopping it right
before it executes a breakpoint instruction. In this case, the stop reason will be empty, but we
will still step over the breakpoint when doing the next resume, thereby missing a breakpoint hit.
I have observed this happening in TestConcurrentEvents, but I have no idea how to reproduce this
behavior more reliably.
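The guard amounts to something like this (a simplification, not the actual NativeProcessLinux logic):

#include <cstdint>
#include <set>

// Before resuming, only step over the breakpoint at the current PC if this
// thread actually hit it. A thread that was stopped involuntarily just before a
// breakpoint instruction must execute it with the breakpoint still in place, or
// the hit would be missed.
bool ShouldStepOverBreakpointAtPC(uint64_t pc, bool thread_hit_breakpoint,
                                  const std::set<uint64_t> &breakpoint_sites) {
  return thread_hit_breakpoint && breakpoint_sites.count(pc) != 0;
}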
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18692
llvm-svn: 265525
Summary:
The logic to read modules from memory was added to LoadModuleAtAddress
in the dynamic loader, but not in process gdb remote. This means that when
the remote uses svr4 packets to give library info, libraries only present
on the remote will not be loaded.
This patch therefore involves some code duplication from LoadModuleAtAddress
in the dynamic loader, but removing this would require some amount of code
refactoring.
Reviewers: ADodds, tberghammer, tfiala, deepak2427, ted
Subscribers: tfiala, lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18531
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265418
Summary:
There was a bug in linux core file handling, where if there was a running process with the same
process id as the id in the core file, the core file debugging would fail, as we would pull some
pieces of information (ProcessInfo structure) from the running process instead of the core file.
I fix this by routing the ProcessInfo requests through the Process class and overriding it in
ProcessElfCore to return correct data.
A (slightly convoluted) test is included.
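The shape of the change, heavily simplified (the real methods deal in ProcessInstanceInfo):

#include <string>

struct ProcessInfoSketch {         // stand-in for ProcessInstanceInfo
  int pid = 0;
  std::string name;
};

struct ProcessSketch {             // stand-in for lldb_private::Process
  virtual ~ProcessSketch() = default;
  // Default behaviour: ask the host OS about the live process with this pid.
  virtual ProcessInfoSketch GetProcessInfo() { return QueryHost(); }
protected:
  ProcessInfoSketch QueryHost() { return ProcessInfoSketch(); }
};

// Core-file process: answer from the data recorded in the core file, so a live
// process that happens to reuse the same pid is never consulted by mistake.
struct ProcessElfCoreSketch : ProcessSketch {
  ProcessInfoSketch GetProcessInfo() override { return m_info_from_core; }
  ProcessInfoSketch m_info_from_core;
};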
Reviewers: clayborg, zturner
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18697
llvm-svn: 265391
rnb_err_t
RNBRemote::HandlePacket_stop_process (const char *p)
{
    if (!DNBProcessInterrupt(m_ctx.ProcessID()))
        HandlePacket_last_signal (NULL);
    return rnb_success;
}
In the call to DNBProcessInterrupt we did:
nub_bool_t
DNBProcessInterrupt(nub_process_t pid)
{
    MachProcessSP procSP;
    if (GetProcessSP (pid, procSP))
        return procSP->Interrupt();
    return false;
}
This would always return false. It would cause HandlePacket_stop_process to always call "HandlePacket_last_signal (NULL);" which would send an extra stop reply packet _if_ the process is stopped. On a machine with enough cores, it would call DNBProcessInterrupt(...) and then HandlePacket_last_signal(NULL) so quickly that it will never send out an extra stop reply packet. But if the machine is slow enough or doesn't have enough cores, it could cause the call to HandlePacket_last_signal() to actually succeed and send an extra stop reply packet. This would cause problems up in GDBRemoteCommunicationClient::SendContinuePacketAndWaitForResponse() where it would get the first stop reply packet and then possibly return or execute an async packet. If it returned, then the next packet that was sent will get the second stop reply as its response. If it executes an async packet, the async packet will get the wrong response.
To fix this I did the following:
1 - in debugserver, I fixed "bool MachProcess::Interrupt()" to return true if it sends the signal so we avoid sending the stop reply twice on slower machines
2 - Added a log line to RNBRemote::HandlePacket_stop_process() to say if we ever send an extra stop reply so we will see this in the darwin console output if this does happen
3 - Added response validators to StringExtractorGDBRemote so that we can verify some responses to some packets.
4 - Added validators to packets that often follow stop reply packets, like the "m" packet for memory reads and JSON packets, since "jThreadsInfo" is often sent immediately following a stop reply.
5 - Modified GDBRemoteCommunicationClient::SendPacketAndWaitForResponseNoLock() to validate responses. Any "StringExtractorGDBRemote &response" that contains a valid response verifier will verify the response and keep looking for correct responses up to 3 times. This will help us get back on track if we do get extra stop replies. If a StringExtractorGDBRemote does not have a response validator, it will accept any packet in response.
6 - In GDBRemoteCommunicationClient::SendPacketAndWaitForResponse we copy the response validator from the "response" argument over into m_async_response so that if we send the packet by interrupting the running process, we can validate the response we actually get in GDBRemoteCommunicationClient::SendContinuePacketAndWaitForResponse()
7 - Modified GDBRemoteCommunicationClient::SendContinuePacketAndWaitForResponse() to always check for an extra stop reply packet for 100ms when the process is interrupted. We were already doing this because we might interrupt a process with a \x03 packet, yet the process was in the process of stopping due to another reason. This race condition could cause an extra stop reply packet because the GDB remote protocol says if a \x03 packet is sent while the process is stopped, we should send a stop reply packet back. Now we always check for an extra stop reply packet when we manually interrupt a process.
The issue was showing up when our IDE would attempt to set a breakpoint while the process is running and this would happen:
--> \x03
<-- $T<stop reply 1>
--> z0,AAAAA,BB (set breakpoint)
<-- $T<stop reply 1> (incorrect extra stop reply packet)
--> c
<-- OK (response from z0 packet)
Now all packet traffic was off by one response. Since we now have a validator on the response for "z" packets, we do this:
--> \x03
<-- $T<stop reply 1>
--> z0,AAAAA,BB (set breakpoint)
<-- $T<stop reply 1> (Ignore this because this can't be the response to z0 packets)
<-- OK -- (we are back on track as this is a valid response to z0)
...
As time goes on we should add more packet validators.
<rdar://problem/22859505>
llvm-svn: 265086
Win32 API calls that are Unicode aware require wide character
strings, but LLDB uses UTF8 everywhere. This patch does conversions
wherever necessary when passing strings into and out of Win32 API
calls.
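A typical conversion at such a boundary looks roughly like this (Windows-only; it uses the Win32 MultiByteToWideChar API directly rather than whichever helper the patch actually added):

#ifdef _WIN32
#include <windows.h>
#include <string>

// Convert LLDB's internal UTF-8 string to UTF-16 before handing it to a
// Unicode-aware Win32 call such as CreateFileW.
std::wstring Utf8ToWide(const std::string &utf8) {
  if (utf8.empty())
    return std::wstring();
  int needed = MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(),
                                   nullptr, 0);
  std::wstring wide(needed, L'\0');
  MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), &wide[0], needed);
  return wide;
}
#endif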
Patch by Cameron
Differential Revision: http://reviews.llvm.org/D17107
Reviewed By: zturner, amccarth
llvm-svn: 264074
We want to do a better job presenting errors that occur when evaluating
expressions. Key to this effort is getting away from a model where all
errors are spat out onto a stream where the client has to take or leave
all of them.
To this end, this patch adds a new class, DiagnosticManager, which
contains errors produced by the compiler or by LLDB as an expression
is created. The DiagnosticManager can dump itself to a log as well as
to a string. Clients will (in the future) be able to filter out the
errors they're interested in by ID or present subsets of these errors
to the user.
This patch is not intended to change the *users* of errors - only to
thread DiagnosticManagers to all the places where streams are used. I
also attempt to standardize our use of errors a bit, removing trailing
newlines and making clients omit 'error:', 'warning:' etc. and instead
pass the Severity flag.
The patch is testsuite-neutral, with modifications to one part of the
MI tests because it relied on "error: error:" being erroneously
printed. This patch fixes the MI variable handling and the testcase.
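In spirit, the class looks something like this (a sketch with invented names, not the actual interface):

#include <string>
#include <vector>

enum class DiagnosticSeverity { Error, Warning, Remark };

struct DiagnosticSketch {
  DiagnosticSeverity severity;
  std::string message;   // stored without an "error:"/"warning:" prefix or trailing newline
  unsigned id = 0;       // lets future clients filter by id instead of string-matching
};

class DiagnosticManagerSketch {
public:
  void AddDiagnostic(DiagnosticSeverity severity, std::string message, unsigned id = 0) {
    m_diagnostics.push_back({severity, std::move(message), id});
  }
  // Severity prefixes are added only at presentation time, not baked into the text.
  std::string GetString() const {
    std::string result;
    for (const DiagnosticSketch &diag : m_diagnostics) {
      result += diag.severity == DiagnosticSeverity::Error     ? "error: "
                : diag.severity == DiagnosticSeverity::Warning ? "warning: "
                                                               : "note: ";
      result += diag.message;
      result += '\n';
    }
    return result;
  }
private:
  std::vector<DiagnosticSketch> m_diagnostics;
};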
<rdar://problem/22864976>
llvm-svn: 263859
Summary:
This also adds a basic smoke test for linux core file reading. I'm checking in the core files as
well, so that the tests can run on all platforms. With some tricks I was able to produce
reasonably-sized core files (~40K).
This fixes the first part of pr26322.
Reviewers: zturner
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18176
llvm-svn: 263628
Turns out that most of the code that runs expressions (e.g. the ObjC runtime grubber) on
behalf of the expression parser was using the currently selected thread. But sometimes,
e.g. when we are evaluating breakpoint conditions/commands, we don't select the thread
we're running on, we instead set the context for the interpreter, and explicitly pass
that to other callers. That wasn't getting communicated to these utility expressions, so
they would run on some other thread instead, and that could cause a variety of subtle and
hard to reproduce problems.
I also went through the commands and cleaned up the use of GetSelectedThread. All those
uses should have been trying the thread in the m_exe_ctx belonging to the command object
first. It would actually have been pretty hard to get misbehavior in these cases, but for
correctness sake it is good to make this usage consistent.
<rdar://problem/24978569>
llvm-svn: 263326
to each other. This should remove some infrequent teardown crashes when the
listener is not the debugger's listener.
Processes now need to take a ListenerSP, not a Listener&.
This required changing over the Process plugin class constructors to take a ListenerSP, instead
of a Listener&. Other than that there should be no functional change.
<rdar://problem/24580184> CrashTracer: [USER] Xcode at …ework: lldb_private::Listener::BroadcasterWillDestruct + 39
llvm-svn: 262863
This is a mechanical refactor. There should be no functional changes in this commit.
Instead of encapsulating just the Windows-specific data, ProcessWinMiniDump now uses a private implementation class. This reduces indirections (in the source). It makes it easier to add private helper methods without touching the header and allows them to have platform-specific types as parameters. The only trick was that the pimpl class needed a back pointer in order to call a couple methods.
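For reference, the pattern being applied, sketched generically (these are not the actual ProcessWinMiniDump declarations):

#include <memory>

// Header: only a forward declaration of the implementation, so platform-specific
// types never appear in the public interface.
class MiniDumpProcessSketch {
public:
  MiniDumpProcessSketch();
  ~MiniDumpProcessSketch();
  void DoReadMemory();
private:
  class Impl;                    // defined in the .cpp, free to use <windows.h>
  std::unique_ptr<Impl> m_impl;  // the single (source-level) indirection
};

// .cpp file:
class MiniDumpProcessSketch::Impl {
public:
  explicit Impl(MiniDumpProcessSketch &owner) : m_owner(owner) {}
  void ReadMemory() { /* platform-specific work; can call back via m_owner */ }
private:
  MiniDumpProcessSketch &m_owner;  // the "back pointer" mentioned above
};

MiniDumpProcessSketch::MiniDumpProcessSketch() : m_impl(new Impl(*this)) {}
MiniDumpProcessSketch::~MiniDumpProcessSketch() = default;
void MiniDumpProcessSketch::DoReadMemory() { m_impl->ReadMemory(); }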
llvm-svn: 262256
Additionally fix the type of some dwarf expressions where we had a
confusion between scalar and load address types after a dereference.
Differential revision: http://reviews.llvm.org/D17604
llvm-svn: 262014
32-bit processes on 64-bit Windows run in a layer called WoW64 (Windows-on-Windows64). If you capture a mini dump of such a process from a 32-bit debugger, you end up with a register context for the 64-bit WoW64 process rather than the 32-bit one you probably care about.
This detects WoW64 by looking to see if there's a module named wow64.dll loaded. For such processes, it then looks in the 64-bit Thread Environment Block (TEB) to locate a copy of the 32-bit CONTEXT record that the plugin needs for the register context.
Added some rudimentary tests. I'd like to improve these later once we figure out how to get the exception information from these mini dumps.
Differential Revision: http://reviews.llvm.org/D17465
llvm-svn: 261808
Summary:
On arm64, linux<=4.4 and Android<=M there is a bug, which prevents single-stepping from working when
the system comes back from suspend, because of incorrectly initialized CPUs. This did not really
affect Android<M, because it did not use software suspend, but it is a problem for M, which uses
suspend (doze) quite extensively. Fortunately, it seems that the first CPU is not affected by
this bug, so this commit implements a workaround by forcing the inferior to execute on the first
cpu whenever we are doing single stepping.
While inside, I have moved the implementations of Resume() and SingleStep() to the thread class
(instead of process).
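The workaround amounts to pinning the stepping thread to CPU 0 (Linux-only sketch; the real code saves and restores the original affinity mask, which is omitted here):

#if defined(__linux__)
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <sys/types.h>

// Force the inferior thread onto the first CPU before single-stepping, since on
// the affected arm64 kernels only that CPU single-steps reliably after a
// suspend/resume cycle.
int PinThreadToFirstCpu(pid_t tid) {
  cpu_set_t cpus;
  CPU_ZERO(&cpus);
  CPU_SET(0, &cpus);
  return sched_setaffinity(tid, sizeof(cpus), &cpus);
}
#endif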
Reviewers: tberghammer, ovyalov
Subscribers: aemerson, rengolin, tberghammer, danalbert, srhines, lldb-commits
Differential Revision: http://reviews.llvm.org/D17509
llvm-svn: 261636
Summary:
Signalfd is not used in the code anymore, and given that the same functionality can be achieved
with the new MainLoop class, it's unlikely we will need it in the future. Remove all traces of
it.
Reviewers: tberghammer, ovyalov
Subscribers: tberghammer, danalbert, srhines, lldb-commits
Differential Revision: http://reviews.llvm.org/D17510
llvm-svn: 261631
on attach uses the architecture it has figured out, rather than the Target's
architecture, which may not have been updated to the correct value yet.
<rdar://problem/24632895>
llvm-svn: 261279
This reverts commit 293c18e067d663e0fe93e6f3d800c2a4bfada2b0.
The BKPT instruction generates SIGBUS instead of SIGTRAP in the Linux
kernel on Nexus 6 - 5.1.1 (kernel version 3.10.40). Revert the CL
until we can figure out how we can handle the SIGBUS or how to get
back a SIGTRAP using the BKPT instruction.
llvm-svn: 260969
the xcode project file to catch switch statements that have a
case that falls through unintentionally.
Define LLVM_FALLTHROUGH to indicate instances where a case has code
and intends to fall through. This should be in llvm/Support/Compiler.h;
Peter Collingbourne originally checked in there (r237766), then
reverted (r237941) because he didn't have time to mark up all the
'case' statements that were intended to fall through. I put together
a patch to get this back in llvm http://reviews.llvm.org/D17063 but
it hasn't been approved in the past week. I added a new
lldb-private-defines.h to hold the definition for now.
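The definition is essentially the usual clang-attribute guard; treat this as an approximation of what lldb-private-defines.h contains rather than a verbatim copy:

// Marks a deliberately fall-through case label. With clang it expands to the
// [[clang::fallthrough]] attribute; elsewhere it expands to nothing.
#if defined(__clang__) && defined(__has_cpp_attribute)
#  if __has_cpp_attribute(clang::fallthrough)
#    define LLVM_FALLTHROUGH [[clang::fallthrough]]
#  endif
#endif
#ifndef LLVM_FALLTHROUGH
#  define LLVM_FALLTHROUGH
#endif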
Every place in lldb where there is a comment that the fall-through
is intentional, I added LLVM_FALLTHROUGH to silence the warning.
I haven't tried to identify whether the fallthrough is a bug or
not in the other places.
I haven't tried to add this to the cmake option build flags.
This warning will only work for clang.
This builds cleanly (with some new warnings) on macosx with clang
under xcodebuild, but if this causes problems for people on other
configurations, I'll back it out.
llvm-svn: 260930
case where a core file has a kernel binary and a user
process dyld in the same one. Without this, we were
always picking the dyld and trying to process it as a
kernel.
<rdar://problem/24446112>
llvm-svn: 260803
In some circumstances (notably, certain minidumps), the thread CONTEXT does not have values for the
control registers (EIP, ESP, EBP, EFLAGS). There are flags in the CONTEXT which indicate which
portions are valid, but those flags weren't checked. The old code would not detect this and give a
garbage value for the register. The new code will log the problem and return an error.
I consolidated the error checking and logging into a helper function, which makes the big switch
statement easier to read and verify.
Ran tests to ensure this doesn't break anything. Manually verified that a minidump without info on
the control registers now indicates the problem instead of giving bad information.
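The added validity check is conceptually this (Windows-only; the flag and field names come from the Win32 CONTEXT structure, the helper itself is illustrative):

#ifdef _WIN32
#include <windows.h>

// Only trust EIP/ESP/EBP/EFLAGS if the thread CONTEXT actually captured the
// control-register portion; some minidumps omit it, and reading the fields
// anyway would yield garbage.
bool HasControlRegisters(const CONTEXT &context) {
  return (context.ContextFlags & CONTEXT_CONTROL) == CONTEXT_CONTROL;
}
#endif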
Differential Revision: http://reviews.llvm.org/D17152
llvm-svn: 260559
The UDF instruction is deprecated in armv7, and in the case of the thumb2
instruction set it doesn't work well together with the IT instruction.
Differential revision: http://reviews.llvm.org/D16853
llvm-svn: 260367
user process dyld binary and/or a mach kernel binary image. By
default, it prefers the kernel if it finds both.
But if it finds two kernel binary images (which can happen when
random things are mapped into memory), it may pick the wrong
kernel image.
DynamicLoaderDarwinKernel has heuristics to find a kernel in memory;
once we've established that there is a kernel binary in memory,
call over to that class to see if it can find a kernel address via
its search methods. If it does, use that.
Some minor cleanups to DynamicLoaderDarwinKernel while I was at it.
<rdar://problem/24446112>
llvm-svn: 259983
reason to None when we stop due to a trace, then noticed that
we were on a breakpoint that was not valid for the current thread.
That should actually have set it back to trace.
This was pr26441 (<rdar://problem/24470203>)
llvm-svn: 259684
I don't understand how this worked before, but this fixes the recent test regressions on Windows in TestConsecutiveBreakpoints.py.
Differential Revision: http://reviews.llvm.org/D16825
llvm-svn: 259605
Summary:
r259344 introduced a bug, where we fail to perform a single step, when the instruction we are
stepping onto contains a breakpoint which is not valid for this thread. This fixes the problem
and adds a test case.
Reviewers: tberghammer, emaste
Subscribers: abhishek.aggarwal, lldb-commits, emaste
Differential Revision: http://reviews.llvm.org/D16767
llvm-svn: 259488