to find the solibs loaded in a process. Support two new ways of
sending the jGetLoadedDynamicLibrariesInfos packet to debugserver
and add a new jGetSharedCacheInfo packet. Update the documentation
for these packets as well. The changes to lldb to use these will
be a separate commit.
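As a rough illustration of the wire format, here is a minimal Python sketch of how a gdb-remote client might frame these requests. The $<payload>#<checksum> envelope is standard gdb-remote; the "fetch_all_solibs" argument shown for jGetLoadedDynamicLibrariesInfos is a hypothetical example, not the authoritative packet syntax.

    # Minimal sketch, assuming standard $<payload>#<checksum> framing.
    # The "fetch_all_solibs" argument below is illustrative only.
    def frame_packet(payload: str) -> bytes:
        checksum = sum(payload.encode()) % 256
        return b"$" + payload.encode() + b"#%02x" % checksum

    # No-argument query for the shared cache summary:
    print(frame_packet("jGetSharedCacheInfo"))

    # One of the argument-bearing forms, with a JSON argument:
    print(frame_packet('jGetLoadedDynamicLibrariesInfos:{"fetch_all_solibs":true}'))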
<rdar://problem/25251243>
llvm-svn: 274718
for TestNamespaceLookup.py; didn't see anything obviously wrong so I'll
need to look at this more closely before re-committing. (passed OK on
macOS ;)
llvm-svn: 273531
There are uses of "macosx" that will be trickier to
change, like in triples (e.g. "x86_64-apple-macosx10.11") -
for now I'm just updating source comments and strings printed
for humans.
llvm-svn: 273524
debugserver. thread-pcs has a comma separated list of base 16
addresses - the current pc value for every thread in the process.
It is a partner of the "threads:" key where a list of thread IDs
is given. The pc values in thread-pcs correspond one-to-one with
the thread IDs in the threads list.
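For clarity, a small sketch of how a client could pair the two keys (the helper name is made up for illustration):

    # Sketch: both keys carry comma-separated base-16 values and correspond
    # one-to-one, so pairing them is a simple zip.
    def pair_thread_pcs(threads_value: str, thread_pcs_value: str) -> dict:
        tids = [int(t, 16) for t in threads_value.split(",")]
        pcs = [int(pc, 16) for pc in thread_pcs_value.split(",")]
        return dict(zip(tids, pcs))

    pcs_by_tid = pair_thread_pcs("585db8,585dc7", "100001400,7fff8badc6de")
    # -> {5791160: 4294972416, 5791175: 140735536809694}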
This is a part of performance work. When lldb is instruction
stepping / fast stepping over a range of addresses for e.g. a "next"
command, and it steps in to another function, lldb will put a
breakpoint on the return address and continue the process. Before
it calls continue, it calls Thread::SetupForResume on all the
threads, and SetupForResume needs to get the current pc value for
every thread to see if any are at a breakpoint site.
The result is that issuing a "c" continue requires that we send
"read pc register" packets for every thread.
We may do this sequence of step-into-function / continue-to-get-out
many times for a single user-visible "next" or "step" command, and
with highly multithreaded programs, we are sending many extra
packets to get all the thread values.
I looked at including this data in the "jstopinfo" JSON that
we already have in the T packet. But there are three problems that
would make this increase the size of the T packet significantly.
First, numbers in JSON are base 10. Second, a proper JSON would
have something like "thread_pcs": { "34224331112":383772734222, ...}
for thread-id 34224331112 and pc 383772734222 - so we're including
a whole extra copy of the thread id in addition to the pc. Third,
the JSON text is hex-ascii'fied so the size of it is doubled.
In one example,
threads:585db8,585dc7,585dc8,585dc9,585dca,585dce;thread-pcs:100001400,7fff8badc6de,7fff8badcff6,7fff8badc6de,7fff8badc6de,7fff8badc6de;
The "thread-pcs" adds 86 characters - 136 characters for both
threads and thread-pcs. Doing this in JSON would look like
threads={"5791160":4294972416,"5791175":140735536809694,"5791176":140735536812022,"5791177":140735536809694,"5791178":140735536809694,"5791182":140735536809694}
or 160 characters -- or 320 characters once it is hex-asciified.
Given that it's 86 characters vs. 320, I went with the old style
approach. I've seen real world programs that have up to 60 threads
in them, so this could result in vastly larger packets if it
was all done in the JSON with hex-ascii expansion.
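The character counts above are easy to reproduce; a quick sketch (hex-ascii encoding turns every byte into two characters, hence the doubling):

    # Quick check of the sizes quoted above.
    compact = ("thread-pcs:100001400,7fff8badc6de,7fff8badcff6,"
               "7fff8badc6de,7fff8badc6de,7fff8badc6de;")
    json_style = ('threads={"5791160":4294972416,"5791175":140735536809694,'
                  '"5791176":140735536812022,"5791177":140735536809694,'
                  '"5791178":140735536809694,"5791182":140735536809694}')
    print(len(compact))          # 86
    print(len(json_style))       # 160
    print(len(json_style) * 2)   # 320 once hex-ascii'fied, two characters per byte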
If we had an all-JSON T packet, where we didn't need to hex-ascii
encode anything, that would have been the better approach. But
we'd already have a list of threads in JSON at that point so
the additional text wouldn't be too bad.
I'm working on finishing the patches to lldb to use this data;
will commit those once I've had a chance to test them more. But
I wanted to commit the debugserver bits which are more
straightforward.
<rdar://problem/21963031>
llvm-svn: 255711
The standard remote debugging workflow with gdb is to start the
application on the remote host under gdbserver (e.g.: gdbserver :5039
a.out) and then connect to it with gdb.
The same workflow is supported by debugserver/lldb-gdbserver with a very
similar syntax, but to access all features of lldb we also need to be
connected to an lldb-platform instance running on the target.
Before this change this had to be done manually by starting a separate
lldb-platform on the target machine and then connecting to it with lldb
before connecting to the process.
This change modifies the behavior of "platform connect" to automatically
connect to the process instance if it was started by the remote platform.
With this change, replacing gdbserver in a gdb-based workflow is usually
as simple as replacing the command to execute gdbserver with executing
lldb-platform.
Differential revision: http://reviews.llvm.org/D14952
llvm-svn: 255016
major, minor, and patchlevel in the qHostInfo reply.
Document that qHostInfo may report major/minor/patch
separately / in addition to the version: combination.
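A hedged sketch of parsing such a reply; the sample reply and the separate major/minor/patchlevel key names below are placeholders for illustration, only the "version:" key is taken from the text above.

    # Sketch: qHostInfo replies are semicolon-separated key:value pairs.
    # The sample reply and the major/minor/patchlevel key names are placeholders.
    reply = "ostype:macosx;version:10.10.4;os_major:10;os_minor:10;os_patchlevel:4;"
    info = dict(kv.split(":", 1) for kv in reply.rstrip(";").split(";"))
    major = int(info["os_major"])
    minor = int(info["os_minor"])
    patch = int(info["os_patchlevel"])
    print(major, minor, patch)   # 10 10 4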
<rdar://problem/22125465>
llvm-svn: 244716
Summary:
This commit adds initial support for the jThreadsInfo packet to lldb-server. The current
implementation does not expedite inferior memory. I have also added a description of the new
packet to our protocol documentation (mostly taken from Greg's earlier commit message).
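A hedged sketch of what consuming the packet could look like; the field names in the sample reply are illustrative, not taken from the packet documentation.

    import json

    # Sketch: jThreadsInfo returns a JSON array with one entry per thread.
    # The field names in this sample reply are illustrative only.
    sample_reply = '[{"tid": 1234, "reason": "breakpoint", "registers": {"0": "00b0060000000000"}}]'
    for thread in json.loads(sample_reply):
        print(thread["tid"], thread.get("reason"))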
Reviewers: clayborg, ovyalov, tberghammer
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D11187
llvm-svn: 242402
jGetLoadedDynamicLibrariesInfos. This packet is similar to
qXfer:libraries:read except that lldb supplies the number of solibs
that should be reported on, and the start address for the list
of them. At the initial process launch we'll read the full list
of solibs linked by the process -- at this point we could be using
qXfer:libraries:read -- but on subsequent solib-loaded notifications,
we'll be fetching a smaller number of solibs, often only one or two.
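A minimal sketch of how such a request could be built; the JSON key names here are assumptions for illustration, not the documented ones.

    import json

    # Sketch: lldb tells debugserver how many solibs to report and where the
    # list starts.  The key names below are assumed for illustration.
    def build_solib_info_request(image_count: int, image_list_address: int) -> str:
        args = {"image_count": image_count, "image_list_address": image_list_address}
        return "jGetLoadedDynamicLibrariesInfos:" + json.dumps(args)

    # e.g. after a solib-loaded notification that reported two new images:
    print(build_solib_info_request(2, 0x7fff5fc01028))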
A typical Mac/iOS GUI app may have a couple hundred different
solibs loaded - doing all of the loads via memory reads takes
a couple of megabytes of traffic between lldb and debugserver.
Having debugserver summarize the load addresses of all the solibs
and sending it in JSON requires a couple of hundred kilobytes
of traffic. It's a significant performance improvement when
communicating over a slower channel.
This patch leaves all of the logic for loading the libraries
in DynamicLoaderMacOSXDYLD -- it only calls over to ProcessGDBRemote
to get the JSON result.
If the jGetLoadedDynamicLibrariesInfos packet is not implemented,
the normal technique of using memory read packets to get all of
the details from the target will be used.
<rdar://problem/21007465>
llvm-svn: 241964
For some communication channels, sending large packets can be very
slow. In those cases, it may be faster to compress the contents of
the packet on the target device and decompress it on the debug host
system. For instance, communicating with a device using something
like Bluetooth may be an environment where this tradeoff is a good one.
This patch adds two new fields to the response to the "qSupported" packet
(which returns a "qXfer:features:" response) -- SupportedCompressions
and DefaultCompressionMinSize. These tell you what the remote
stub can support.
If lldb wants to enable compression and can handle one of those
algorithms, it can send a QEnableCompression packet specifying the
algorithm and, optionally, the minimum packet size to use compression
on. lldb may have better knowledge about the best tradeoff for
a given communication channel.
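A hedged sketch of the negotiation; the exact key/value syntax on both sides is assumed here, not quoted from the spec.

    # Sketch: pick an algorithm from the stub's advertised list and ask for it.
    # The reply layout and QEnableCompression syntax are assumptions.
    supported = "SupportedCompressions=lzfse,zlib-deflate,lz4,lzma;DefaultCompressionMinSize=384"
    fields = dict(kv.split("=", 1) for kv in supported.split(";"))
    algorithms = fields["SupportedCompressions"].split(",")
    min_size = int(fields["DefaultCompressionMinSize"])

    if "zlib-deflate" in algorithms:
        packet = "QEnableCompression:type:zlib-deflate;minsize:%d;" % min_size
        print(packet)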
I added support to debugserver and lldb to use the zlib APIs
(if -DHAVE_LIBZ=1 is in CFLAGS and -lz is in LDFLAGS) and the
libcompression APIs on Mac OS X 10.11 and later
(if -DHAVE_LIBCOMPRESSION=1). libz provides "zlib-deflate" compression.
libcompression can support deflate, lz4, lzma, and a proprietary
lzfse algorithm. libcompression has been hand-tuned for Apple
hardware so it should be preferred if available.
debugserver currently only adds the SupportedCompressions when
it is being run on an Apple watch (TARGET_OS_WATCH). Comment
that #if out from RNBRemote.cpp if you want to enable it to
see how it works. I haven't tested this on a native system
configuration but surely it will be slower to compress & decompress
the packets in a same-system debug session.
I haven't had a chance to add support for this to
GDBRemoteCommunicationServer.cpp yet.
<rdar://problem/21090180>
llvm-svn: 240066
With the previous implementation the protocol used by the client and the
server for the response was different and worked only by accident.
With this change the communication is fixed and the return code from
mkdir and chmod is correctly captured by lldb. The change also adds
documentation for the qPlatform_mkdir and qPlatform_chmod packets.
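For illustration, a sketch of the request/response shape, assuming the mode is sent in hex, the path is hex-ascii encoded, and the reply is "F" plus the return code; these encoding details are assumptions, not quotes from the documentation.

    # Sketch of a qPlatform_mkdir request; the encoding details are assumed.
    def build_mkdir_packet(mode: int, path: str) -> str:
        return "qPlatform_mkdir:%x,%s" % (mode, path.encode().hex())

    print(build_mkdir_packet(0o755, "/tmp/test"))  # qPlatform_mkdir:1ed,2f746d702f74657374
    # A reply of "F0" would mean the remote mkdir returned 0 (success).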
Differential revision: http://reviews.llvm.org/D7786
llvm-svn: 230213
This change brings in lldb-gdbserver (llgs) specifically for Linux x86_64.
(More architectures coming soon).
Not every debugserver option is covered yet. Currently
the lldb-gdbserver command line can start unattached,
start attached to a pid (process-name attach not supported yet),
or accept lldb attaching and launching a process or connecting
by process id.
The history of this large change can be found here:
https://github.com/tfiala/lldb/tree/dev-tfiala-native-protocol-linux-x86_64
Until mid/late April, I was not sharing the work and continued
to rebase it off of head (developed via id tfiala@google.com). I switched over to
user todd.fiala@gmail.com in the middle, and once I went to github, I did
merges rather than rebasing so I could share with others.
llvm-svn: 212069
ArchSpec now contains an optional distribution_id, with getters and
setters. Host::GetArchitecture () sets it on non-Apple platforms using
Host::GetDistributionId (). The distribution_id is ignored during
ArchSpec comparisons.
The gdb remote qHostInfo message transmits it, if set, via the
distribution_id={id-value} key/value pair. Updated gdb remote docs to
reflect this change.
As before, GetDistributionId () returns nothing on non-Linux platforms
at this time. On Linux, it is returned only if the lsb_release
command is installed (in /bin or /usr/bin), and only if the
distributor id key is returned by 'lsb_release -i'. This id is
lowercased, and whitespace is replaced with underscores.
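A small sketch of that normalization; the sample lsb output is made up for illustration.

    import re

    # Sketch of the normalization described above: lowercase the distributor
    # id and replace whitespace runs with underscores.
    def normalize_distribution_id(lsb_output: str) -> str:
        value = lsb_output.split(":", 1)[1].strip()
        return re.sub(r"\s+", "_", value.lower())

    print(normalize_distribution_id("Distributor ID:\tRed Hat Enterprise Linux"))
    # -> red_hat_enterprise_linux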
llvm-svn: 199539
the name of the remote gdb-protocol server, and get
a version number from it. This can be useful if lldb
needs to interoperate with a gdb-protocol server with
a known issue or bug.
llvm-svn: 191729
Summary:
This merge brings in the improved 'platform' command that knows how to
interface with remote machines; that is, query OS/kernel information, push
and pull files, run shell commands, etc., as well as an implementation of
the new communication packets that back that interface, at least on
Darwin-based operating systems via the POSIXPlatform class. Linux support
is coming soon.
Verified the test suite runs cleanly on Linux (x86_64); builds OK on Mac OS
X Mountain Lion.
Additional improvements (not in the source SVN branch 'lldb-platform-work'):
- cmake build scripts for lldb-platform
- cleanup test suite
- documentation stub for qPlatform_RunCommand
- use log class instead of printf() directly
- reverted work-in-progress-looking changes from test/types/TestAbstract.py that work towards running the test suite remotely.
- add new logging category 'platform'
Reviewers: Matt Kopec, Greg Clayton
Review: http://llvm-reviews.chandlerc.com/D1493
llvm-svn: 189295
reply to be hex encoded, not decimal.
Fix the whitespace in the container-regs/invalidate-regs
documentation, fix one ambiguous hex/decimal number in an
example.
llvm-svn: 173225
Update the debugserver "qProcessInfo" implementation to return the
cpu type, cpu subtype, OS and vendor information just like qHostInfo
does so lldb can create an ArchSpec based on the returned values.
Add a new GetProcessArchitecture to GDBRemoteCommunicationClient akin
to GetHostArchitecture. If the qProcessInfo packet is supported,
GetProcessArchitecture will return the cpu type / subtype of the
process -- e.g. a 32-bit user process running on a 64-bit x86_64 Mac
system.
Have ProcessGDBRemote set the Target's architecture based on the
GetProcessArchitecture when we've completed an attach/launch/connect.
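A hedged sketch of consuming such a reply; the sample reply is illustrative, and the hex encoding of the numeric values is an assumption.

    # Sketch: parse a qProcessInfo-style reply into architecture components.
    # The sample reply is illustrative; numeric encodings are assumptions.
    sample = "cputype:1000007;cpusubtype:3;ostype:macosx;vendor:apple;ptrsize:8;"
    info = dict(kv.split(":", 1) for kv in sample.rstrip(";").split(";"))
    cpu_type = int(info["cputype"], 16)        # 0x1000007 == CPU_TYPE_X86_64
    print(cpu_type, info["vendor"], info["ostype"])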
llvm-svn: 170491
This can be used by lldb to ask for information
about the process that debugserver is attached to or has launched.
Particularly useful on a 64-bit x86 Mac system which
can run 32-bit or 64-bit user-land processes.
llvm-svn: 170409
if this is a mapped/executable region of memory. If it isn't, we've jumped
through a bad pointer and we know how to unwind the stack correctly based
on the ABI.
Previously I had 0x0 special-cased, but if you jumped to 0x2 on x86_64 one
frame would be skipped because the unwinder would try using the x86_64
ArchDefaultUnwindPlan, which relied on rbp.
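A hedged sketch of the kind of check this enables; the key names in the sample replies are assumptions.

    # Sketch: decide whether a pc value points into executable memory using a
    # qMemoryRegionInfo-style reply.  Key names here are assumptions.
    def pc_looks_executable(region_reply: str) -> bool:
        fields = dict(kv.split(":", 1) for kv in region_reply.rstrip(";").split(";"))
        return "x" in fields.get("permissions", "")

    # A jump through a bad pointer (e.g. to 0x2) lands in an unmapped region:
    print(pc_looks_executable("start:0;size:100000;"))                        # False
    print(pc_looks_executable("start:100000000;size:1000;permissions:rx;"))   # True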
Fixes <rdar://problem/10508291>
llvm-svn: 146477
as well as attached a new priority description as to why and when you would
want to implement each packet.
Also documented the additions we have made to the stop reply packet and why
the extra information is necessary.
llvm-svn: 145357