Added "mach_o.py" which is a mach-o parser that can dump mach-o file contents and also extract sections. It uses the "file_extract" module and the "dict_utils" module.
llvm-svn: 189959
Print five words of memory at the beginning of the stack frame so it's
easier to track where an incorrect saved-fp or saved-pc may have come from.
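A minimal sketch of the same idea using the lldb Python API (assuming an interactive session with a stopped process; the helper name is made up):

import lldb

def dump_frame_base_words(frame, count=5):
    # Print 'count' pointer-sized words starting at the frame pointer.
    process = frame.GetThread().GetProcess()
    ptr_size = process.GetAddressByteSize()
    error = lldb.SBError()
    for i in range(count):
        addr = frame.GetFP() + i * ptr_size
        value = process.ReadPointerFromMemory(addr, error)
        if not error.Success():
            print('0x%x: <unreadable>' % addr)
            break
        print('0x%x: 0x%x' % (addr, value))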
llvm-svn: 185903
The script pointed out how to save 40 bytes in each lldb_private::Section; the savings come from being very careful about where we need virtual destructors and from re-ordering members.
llvm-svn: 184364
Change the simple-minded stack walk to not depend on lldb to unwind
the first frame.
Collect a list of modules and addresses seen while backtracing (with
both methods), display the "image list" output for all of those modules,
and disassemble and "image show-unwind" any additional frames that
the simple backtrace was able to unwind through, not just the frames
from lldb's unwind algorithm.
Remove checks for older lldb versions that didn't support -a for disassemble
or for specifying the assembler syntax on x86 targets.
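The simple stack walk amounts to following the frame-pointer chain by hand; a minimal sketch of the idea (assuming an x86_64-style frame layout where the saved fp and saved pc sit at fp and fp + pointer-size) looks like this:

import lldb

def simple_backtrace(thread, max_frames=32):
    process = thread.GetProcess()
    ptr_size = process.GetAddressByteSize()
    error = lldb.SBError()
    frame0 = thread.GetFrameAtIndex(0)
    pc, fp = frame0.GetPC(), frame0.GetFP()
    frames = [(pc, fp)]
    # Follow the saved-fp chain instead of relying on lldb's unwinder.
    while fp != 0 and len(frames) < max_frames:
        saved_fp = process.ReadPointerFromMemory(fp, error)
        saved_pc = process.ReadPointerFromMemory(fp + ptr_size, error)
        if not error.Success() or saved_pc == 0:
            break
        frames.append((saved_pc, saved_fp))
        fp = saved_fp
    return frames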
llvm-svn: 184280
This module uses Python's sys.settrace() mechanism to set up a hook that can log every significant operation.
This is a first step toward providing a good debugging experience for Python embedded in LLDB.
The module comprises an OO infrastructure that wraps Python's tracing and inspecting mechanisms, plus a very simple logging tracer.
Output from this tracer looks like:
call print_keyword_args from <module> @ 243 args are kwargs are {'first_name': 'John', 'last_name': 'Doe'}
running print_keyword_args @ 228 locals are {'kwargs': {'first_name': 'John', 'last_name': 'Doe'}}
running print_keyword_args @ 229 locals are {'key': 'first_name', 'value': 'John', 'kwargs': {'first_name': 'John', 'last_name': 'Doe'}}
first_name = John
running print_keyword_args @ 228 locals are {'key': 'first_name', 'value': 'John', 'kwargs': {'first_name': 'John', 'last_name': 'Doe'}}
running print_keyword_args @ 229 locals are {'key': 'last_name', 'value': 'Doe', 'kwargs': {'first_name': 'John', 'last_name': 'Doe'}}
last_name = Doe
running print_keyword_args @ 228 locals are {'key': 'last_name', 'value': 'Doe', 'kwargs': {'first_name': 'John', 'last_name': 'Doe'}}
return from print_keyword_args value is None locals are {'key': 'last_name', 'value': 'Doe', 'kwargs': {'first_name': 'John', 'last_name': 'Doe'}}
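A standalone example of the sys.settrace() mechanism this module wraps (this is not the module's actual class hierarchy, just the bare hook):

import sys

def logging_tracer(frame, event, arg):
    name = frame.f_code.co_name
    line = frame.f_lineno
    if event == 'call':
        print('call %s @ %d locals are %s' % (name, line, frame.f_locals))
        return logging_tracer   # keep tracing line/return events in this frame
    if event == 'line':
        print('running %s @ %d locals are %s' % (name, line, frame.f_locals))
    elif event == 'return':
        print('return from %s value is %s locals are %s' % (name, arg, frame.f_locals))
    return logging_tracer

def print_keyword_args(**kwargs):
    for key, value in kwargs.items():
        print('%s = %s' % (key, value))

sys.settrace(logging_tracer)
print_keyword_args(first_name='John', last_name='Doe')
sys.settrace(None)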
llvm-svn: 181343
Update finish-swig-Python-LLDB.sh to create a new lldb.diagnose subdirectory
in the LLDB framework; the first diagnostic command in this directory
is diagnose-unwind. Others may be added in the future.
Users can load these diagnostic tools into their session with
"script import lldb.diagnose".
llvm-svn: 180768
Handle both the old lldb-179 version numberings and the new lldb-300 version numberings.
Remove the pretense that someone might run this from the command
line; it is only used from within a live lldb debug session. Fix
the loading so the script can be loaded via "script import lldb.macosx"
or individually via "command script import unwind_diagnose.py".
llvm-svn: 180085
Add a way to show the unwind instructions for the function/symbol which
contains a given address.
Update the unwind_diagnose.py script to use this instead of doing
image show-unwind by name to avoid cases where there are multiple
name definitions.
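A sketch of the address-based lookup this relies on, using the lldb Python API to find the function/symbol that contains a load address (rather than looking a symbol up by name, which can be ambiguous):

import lldb

def function_containing(target, load_addr):
    # Resolve a load address back to the function or symbol containing it.
    addr = target.ResolveLoadAddress(load_addr)
    context = target.ResolveSymbolContextForAddress(
        addr, lldb.eSymbolContextEverything)
    function = context.GetFunction()
    if function.IsValid():
        return function.GetName()
    symbol = context.GetSymbol()
    return symbol.GetName() if symbol.IsValid() else None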
llvm-svn: 180079
The unwind diagnostics script will be installed in the LLDB.framework and can be loaded with
(lldb) script import lldb.macosx
after which a "unwind-diagnose" command will be registered. Select
the thread which has a bad backtrace and run this command -- a lot
of information about the stack frames, and an alternate backtrace
algorithm, will be used. The information will often be sufficient
for a remote person to figure out why the backtrace failed.
<rdar://problem/13679300>
llvm-svn: 180077
crashlog.py was always subtracting 1 to point to the previous instruction when symbolicating ARM backtraces. ARM backtraces often have bit zero set to 1 to indicate Thumb mode, so we need to mask the address first and then back up by one for frames other than frame zero.
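The order of operations matters; a hedged sketch of the fix (not the crashlog.py code itself, and the function name is made up):

def address_to_symbolicate(frame_index, frame_pc):
    # Clear the ARM/Thumb mode bit first, then back up into the call
    # instruction, but only for frames above frame zero.
    pc = frame_pc & ~1
    if frame_index > 0:
        pc -= 1
    return pc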
llvm-svn: 178812
This is a very basic implementation of a library that makes it easy to drive LLDB.framework in order to write performance test cases.
This is separate from the LLDB test suite in test/ in that:
a) it uses C++ instead of Python to avoid the measurements being affected by SWIG
b) it is in very early development and needs lots of tweaking before it can be considered functionally complete
c) it is not meant to test correctness but to help catch performance regressions
There is a sample application built against the library (in darwin/sketch) that uses the famous sample app Sketch as an inferior to measure certain basic parameters of LLDB's behavior.
The resulting output is a PLIST much like the following:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
<dict>
<key>fetch-frames</key>
<real>0.13161715522222225</real>
</dict>
<dict>
<key>file-line-bkpt</key>
<real>0.029111678750000002</real>
</dict>
<dict>
<key>fetch-modules</key>
<real>0.00026376766666666668</real>
</dict>
<dict>
<key>fetch-vars</key>
<real>0.17820429311111111</real>
</dict>
<dict>
<key>run-expr</key>
<real>0.029676525769230768</real>
</dict>
</array>
</plist>
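For reference, a result file in this format can be consumed from Python along these lines (the file name here is made up):

import plistlib

with open('lldb-perf-results.plist', 'rb') as f:
    results = plistlib.load(f)

# Each array entry is a one-key dict mapping a metric name to seconds.
for entry in results:
    for metric, seconds in entry.items():
        print('%-15s %.6f s' % (metric, seconds))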
Areas for improvement:
- code cleanups (I will be out of the office for a couple days this coming week, but please keep ideas coming!)
- more metrics and test cases
- better error checking
This toolkit also comprises a simple event-loop-driven controller for LLDB, similar to, though much simpler than, what the Driver does to implement the lldb command-line tool.
llvm-svn: 176715
I used this to verify that the debug map line tables were the same as in previous LLDB releases, prior to my change to the DWARF in .o file linking.
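The comparison amounts to dumping the line tables from each build and diffing the output; a rough sketch of doing that with the lldb Python API (this helper is only illustrative, not the script referred to above):

import lldb

def dump_line_tables(module):
    # Print every line table entry in every compile unit of a module so
    # that the output from two different builds can be diffed.
    for cu_index in range(module.GetNumCompileUnits()):
        cu = module.GetCompileUnitAtIndex(cu_index)
        for i in range(cu.GetNumLineEntries()):
            entry = cu.GetLineEntryAtIndex(i)
            print('%s:%u addr=0x%x' % (entry.GetFileSpec().GetFilename(),
                                       entry.GetLine(),
                                       entry.GetStartAddress().GetFileAddress()))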
llvm-svn: 176610
Major fixes to allow reading files that are over 4GB. The main problems were that the DataExtractor was using 32-bit offsets as a data cursor, and since we mmap all of our object files, a very large core file that was over 4GB could run into the 4GB boundary.
So I defined a new "lldb::offset_t" which should be used for all file offsets.
After making this change, I temporarily enabled warnings for data loss and for unexpected implicit conversions and found a ton of things that I fixed.
Any function that takes an index internally should use "size_t" for indexes, and should also return "size_t" for any sizes of collections.
llvm-svn: 173463
Added the ability for OS plug-ins to lazily populate the thread list. The python OS plug-in classes can now implement the following method:
class OperatingSystemPlugin:
    def create_thread(self, tid, context):
        # Return a dictionary for a new thread to create it on demand
This will add a new thread to the thread list if it doesn't already exist. The example code in lldb/examples/python/operating_system.py has been updated to show how this call is used.
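A hedged sketch of what a create_thread() implementation might look like (the dictionary keys follow the pattern used in lldb/examples/python/operating_system.py, but treat them as illustrative):

class OperatingSystemPlugin(object):
    def __init__(self, process):
        self.process = process

    def create_thread(self, tid, context):
        # Return a thread description dictionary only for tids we know
        # about; returning None means no thread is created for that tid.
        if tid == 0x111111111:
            return {'tid': tid,
                    'name': 'one',
                    'queue': 'queue1',
                    'state': 'stopped',
                    'stop_reason': 'none'}
        return None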
Cleaned up the code in PythonDataObjects.cpp/h:
- renamed all classes that started with PythonData* to be Python*.
- renamed PythonArray to PythonList. Cleaned up the code to use inheritance where possible.
- Centralized the code that does ref counting in the PythonObject class to a single function.
- Made the "bool PythonObject::Reset(PyObject *)" function be virtual so each subclass can correctly check to ensure a PyObject is of the right type before adopting the object.
- Cleaned up all APIs and added new constructors for the Python* classes so they can all construct from:
- PyObject *
- const PythonObject &
- const lldb::ScriptInterpreterObjectSP &
Cleaned up code in ScriptInterpreterPython:
- Made calling python functions safer by templatizing the production of value formats. Python specifies the value formats based on built-in C types (long, long long, etc.), and code often uses typedefs for uint32_t, uint64_t, etc. when passing arguments down to python. We will now always produce correct value formats as the templatized code will "do the right thing" all the time.
- Fixed issues with the ScriptInterpreterPython::Locker where entering and leaving the session could cause the "lldb" module globals lldb.debugger, lldb.target, lldb.process, lldb.thread, and lldb.frame to not be initialized.
llvm-svn: 172873
Added a new setting that allows a Python OS plug-in to detect threads and provide registers for memory threads. To enable this, set:
settings set target.process.python-os-plugin-path lldb/examples/python/operating_system.py
Then run your program and see the extra threads.
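A minimal skeleton of such a plug-in class, modeled on the methods used in lldb/examples/python/operating_system.py (the bodies here are only placeholders; see that file for the real register-info and register-data formats):

class OperatingSystemPlugIn(object):
    def __init__(self, process):
        self.process = process

    def get_thread_info(self):
        # Return a list of thread description dictionaries (same format
        # as the create_thread() example above); an empty list means no
        # memory threads.
        return []

    def get_register_info(self):
        # Placeholder: see operating_system.py for the register set /
        # register description dictionary this should return.
        return None

    def get_register_data(self, tid):
        # Placeholder: should return the packed register bytes for 'tid'.
        return None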
llvm-svn: 166244