STL upper_bound method instead of lower_bound - we were
failing to find some cached data in the L1 cache, resulting
in extra memory read packets while stepping.
The bug with the existing code looked like this: if the L1 cache
has 8 bytes at address 0x1000 and 8 bytes at address 0x2000 and we
are searching for 4 bytes at 0x2004, the use of lower_bound would
return the end() of the container, and so we would incorrectly
treat the memory as uncached.
(The L1 cache is memory seeded from debugserver in the T (aka
question mark) packet, where debugserver will send up the stack
memory that likely contains the caller's stack pointer and frame
pointer values.)
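To make the fix concrete, here is a minimal sketch in the spirit of the
change (illustrative only; not the actual lldb_private::Process code),
assuming the L1 cache is a std::map keyed by each block's start address:

    #include <cstdint>
    #include <map>
    #include <vector>

    using addr_t = uint64_t;
    using BlockMap = std::map<addr_t, std::vector<uint8_t>>;

    // Find the cached block containing addr, if any. lower_bound(addr)
    // returns the first block starting at or after addr, so searching for
    // 0x2004 with blocks at 0x1000 and 0x2000 returns end() and misses the
    // 0x2000 block. upper_bound(addr) returns the first block starting
    // strictly after addr; backing up one entry yields the only block that
    // could contain addr.
    const std::vector<uint8_t> *FindBlock(const BlockMap &cache, addr_t addr) {
      auto pos = cache.upper_bound(addr);
      if (pos == cache.begin())
        return nullptr; // every cached block starts beyond addr
      --pos; // the last block whose start address is <= addr
      if (addr < pos->first + pos->second.size())
        return &pos->second; // addr falls inside this block
      return nullptr; // addr lies in a gap between cached blocks
    }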
<rdar://problem/23869227>
llvm-svn: 255421
A few extras were fixed
- Symbol::GetAddress() now returns an Address object, not a reference. There were places where people were accessing the address of a symbol when the symbol's value wasn't an address. On MacOSX, undefined symbols have a value of zero, and some places were using the symbol's address and getting an absolute address of zero (since an Address object with no section and an m_offset whose value isn't LLDB_INVALID_ADDRESS is considered an absolute address). So fixing this required some changes to make sure people were getting what they expected.
- Since some places want to access the address as a reference, I added a few new functions to Symbol:
Address &Symbol::GetAddressRef();
const Address &Symbol::GetAddressRef() const;
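For illustration, the caller-side pattern this encourages looks roughly
like the fragment below (assumes lldb_private headers and a valid
Symbol *symbol; UseResolvedAddress is a hypothetical consumer):

    // GetAddress() now returns a value, so callers should validate it
    // rather than assume every symbol has a real section-relative address.
    // An Address with no section and an m_offset that isn't
    // LLDB_INVALID_ADDRESS reads back as an absolute address (e.g. 0 for
    // MacOSX undefined symbols).
    lldb_private::Address addr = symbol->GetAddress();
    if (addr.IsValid() && addr.GetSection())
      UseResolvedAddress(addr); // hypothetical consumer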
Linux test suite passes just fine now.
<rdar://problem/21494354>
llvm-svn: 240702
We have been working on reducing the packet count that is sent between LLDB and debugserver on MacOSX and iOS. Our approach was to reduce the packets required when debugging multiple threads: we currently make one qThreadStopInfoXXXX call (where XXXX is the thread ID in hex) for every thread except the one that stopped with a stop reply packet.

In order to return multiple thread infos in a single reply we need structured data, which means JSON. The new jThreadsInfo packet will attempt to retrieve all thread infos in a single packet. The data is very similar to the stop reply packets, but packaged in JSON and using JSON arrays where applicable. The JSON output looks like:
[
  { "tid":1580681,
    "metype":6,
    "medata":[2,0],
    "reason":"exception",
    "qaddr":140735118423168,
    "registers": {
      "0":"8000000000000000",
      "1":"0000000000000000",
      "2":"20fabf5fff7f0000",
      "3":"e8f8bf5fff7f0000",
      "4":"0100000000000000",
      "5":"d8f8bf5fff7f0000",
      "6":"b0f8bf5fff7f0000",
      "7":"20f4bf5fff7f0000",
      "8":"8000000000000000",
      "9":"61a8db78a61500db",
      "10":"3200000000000000",
      "11":"4602000000000000",
      "12":"0000000000000000",
      "13":"0000000000000000",
      "14":"0000000000000000",
      "15":"0000000000000000",
      "16":"960b000001000000",
      "17":"0202000000000000",
      "18":"2b00000000000000",
      "19":"0000000000000000",
      "20":"0000000000000000"},
    "memory":[
      {"address":140734799804592,"bytes":"c8f8bf5fff7f0000c9a59e8cff7f0000"},
      {"address":140734799804616,"bytes":"00000000000000000100000000000000"}
    ]
  }
]
It contains an array of dictionaries with all of the key/value pairs that are normally in the stop reply packet, including the expedited registers. Notice that it also contains expedited memory in the "memory" key. Any values in this memory get included in a new L1 cache in lldb_private::Process: if a memory read request fits entirely inside one of the L1 memory cache blocks, that cached data is used. If a memory request misses the L1 cache, it falls back to the L2 cache, which is the same block-sized caching we were using before these changes. This allows a process to expedite memory that you are likely to use, and it reduces packet count.

On MacOSX with debugserver, we expedite the frame pointer backchain for a thread (up to 256 entries) by reading 2 pointers worth of bytes at the frame pointer (for the previous FP and PC) and following the backchain. Most backtraces on MacOSX and iOS now don't require us to read any memory!
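As a rough sketch of the backchain expedition (simplified, with
illustrative names; not debugserver's actual code):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct MemoryEntry {
      uint64_t address;
      std::vector<uint8_t> bytes;
    };

    // Walk the frame pointer backchain: at each FP read two pointers (the
    // saved FP and the return PC), record those bytes as an expedited
    // memory entry, then hop to the saved FP, stopping after 256 frames or
    // at an implausible pointer.
    void ExpediteBackchain(uint64_t fp,
                           bool (*read_mem)(uint64_t addr, void *buf, size_t len),
                           std::vector<MemoryEntry> &out) {
      for (size_t i = 0; i < 256 && fp != 0; ++i) {
        uint64_t pair[2]; // pair[0] = previous FP, pair[1] = return PC
        if (!read_mem(fp, pair, sizeof pair))
          break; // unreadable frame, stop walking
        const uint8_t *p = reinterpret_cast<const uint8_t *>(pair);
        out.push_back({fp, std::vector<uint8_t>(p, p + sizeof pair)});
        if (pair[0] <= fp)
          break; // stacks grow down, so the caller's FP must be higher
        fp = pair[0];
      }
    }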
We will try these packets out and if successful, we should port these to lldb-server in the near future.
<rdar://problem/21494354>
llvm-svn: 240354
lldb's internal memory cache chunks that are read from the remote
system. For a remote connection that is especially slow, a user may
need to reduce it; reading a 512-byte chunk of memory whenever a
4-byte region is requested may not be the right decision in these
kinds of environments.
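As a usage sketch, assuming the setting in question is
target.process.memory-cache-line-size (check `settings list` in your build
to confirm the exact name), a user on a slow connection could shrink the
chunk size like so:

    (lldb) settings set target.process.memory-cache-line-size 64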
<rdar://problem/18175117>
llvm-svn: 217083
in all but one of the AllocatedBlocks that matched the requested permissions.
Over time this would considerably slow down expression evaluation.
Also added a little bit of logging that was helpful in resolving the issue.
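The commit doesn't show the code here, so the following is only a plausible
shape of the fix (illustrative names, not the actual
lldb_private::AllocatedMemoryCache API): allocation should consult every
existing block with matching permissions, rather than stranding free space
in all but one of them.

    #include <cstdint>
    #include <vector>

    struct AllocatedBlock {
      uint32_t permissions; // read/write/execute bits for this block
      uint64_t base;        // start of the block in the inferior
      uint64_t size;        // total bytes in the block
      uint64_t used;        // bytes already handed out

      // Carve 'bytes' out of this block; returns 0 when it doesn't fit.
      uint64_t Reserve(uint64_t bytes) {
        if (used + bytes > size)
          return 0;
        uint64_t addr = base + used;
        used += bytes;
        return addr;
      }
    };

    uint64_t AllocateMemory(std::vector<AllocatedBlock> &blocks,
                            uint64_t size, uint32_t permissions) {
      // Try every block whose permissions match, not just the first one.
      for (AllocatedBlock &block : blocks)
        if (block.permissions == permissions)
          if (uint64_t addr = block.Reserve(size))
            return addr;
      return 0; // caller falls back to allocating a fresh block (not shown)
    }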
<rdar://problem/17954438>
llvm-svn: 215239
data if it is available.
Change ProcessGDBRemote's maximum read/write packet size from a
fixed 512-byte value to asking the remote gdb stub what its maximum
is, using sizes up to 128 KB if that's allowed, and falling back
to 512 bytes if the remote gdb stub doesn't advertise a max packet size.
Add a new "process plugin packet xfer-size" command that can be used
to override the maximum packet size (although not exceeding any packet
size maximum published by the remote gdb stub).
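A sketch of the negotiation, assuming the stub advertises its limit via the
standard "PacketSize=<hex>" field of the qSupported reply (illustrative;
ProcessGDBRemote's real plumbing differs):

    #include <algorithm>
    #include <cstddef>
    #include <string>

    constexpr size_t kDefaultMaxPacket = 512;     // old fixed value, fallback
    constexpr size_t kPacketCeiling = 128 * 1024; // largest size we will use

    size_t ChooseMaxPacketSize(const std::string &qsupported_reply) {
      const std::string key = "PacketSize=";
      size_t pos = qsupported_reply.find(key);
      if (pos == std::string::npos)
        return kDefaultMaxPacket; // stub didn't advertise a maximum
      // The advertised value is hex per the gdb-remote protocol.
      size_t advertised =
          std::stoul(qsupported_reply.substr(pos + key.size()), nullptr, 16);
      return std::min(advertised, kPacketCeiling);
    }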
<rdar://problem/16032150>
llvm-svn: 208058
LLDB is crashing when logging is enabled from lldb-perf-clang. This has to do with the global destructor chain as the process and its threads are being torn down.
All logging channels now make one and only one instance that is kept in a global pointer which is never freed. This guarantees that logging can correctly continue as the process tears itself down.
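The pattern is the classic leak-on-purpose singleton; a minimal sketch (not
the actual LLDB logging code):

    class Log { /* channel state, stream, category mask, ... */ };

    // One instance per channel, created on first use and intentionally
    // never freed, so logging stays usable while static destructors run
    // at process exit.
    Log &GetChannelLog() {
      static Log *g_log = new Log(); // leaked on purpose; never destroyed
      return *g_log;
    }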
llvm-svn: 178191
to the __PAGEZERO segment on darwin. The dynamic loader now correctly doesn't
slide __PAGEZERO and it also registers it as an invalid region of memory. This
allows us to avoid making any memory requests from the local or remote debug
session for addresses in this region. Stepping performance can improve
when we try to read uninitialized local variables that point into
__PAGEZERO, since we won't even make the memory read or write request.
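A sketch of the invalid-region check with assumed names (the real dynamic
loader and Process interfaces differ):

    #include <cstdint>
    #include <vector>

    struct InvalidRegion {
      uint64_t base, size;
    };

    // Regions registered by the dynamic loader, e.g. {0x0, 0x100000000}
    // for the default 64-bit Darwin __PAGEZERO. Reads or writes that land
    // here fail immediately instead of generating a packet to the debug
    // session.
    bool IsAddressInvalid(const std::vector<InvalidRegion> &regions,
                          uint64_t addr) {
      for (const InvalidRegion &r : regions)
        if (addr >= r.base && addr - r.base < r.size)
          return true;
      return false;
    }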
llvm-svn: 151128
over when running JITed expressions. The allocated memory cache will
allocate memory a page at a time for each permission combination, divvy up
the memory, and hand it out in 16-byte increments.
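A minimal sketch of that divvying scheme with illustrative names (not the
actual AllocatedMemoryCache API; alloc_page stands in for whatever
allocates a page in the inferior):

    #include <cstdint>
    #include <map>

    constexpr uint64_t kPageSize = 4096;
    constexpr uint64_t kChunkAlign = 16;

    struct PermissionPage {
      uint64_t base = 0;   // remote address of the page, 0 until allocated
      uint64_t offset = 0; // bump pointer within the page
    };

    // Hand out 'size' bytes (rounded up to a 16-byte multiple) from the
    // page matching 'permissions', requesting one fresh page per permission
    // combination on first use via the caller-supplied alloc_page callback.
    uint64_t Allocate(std::map<uint32_t, PermissionPage> &pages,
                      uint32_t permissions, uint64_t size,
                      uint64_t (*alloc_page)(uint32_t permissions)) {
      PermissionPage &page = pages[permissions];
      if (page.base == 0)
        page.base = alloc_page(permissions); // one page per permission set
      uint64_t rounded = (size + kChunkAlign - 1) & ~(kChunkAlign - 1);
      if (page.offset + rounded > kPageSize)
        return 0; // page exhausted; the real cache would grab another page
      uint64_t addr = page.base + page.offset;
      page.offset += rounded;
      return addr;
    }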
llvm-svn: 131453