for use in the benchmark against lldb's disassembly speed. Note that the lldb
executable path can already be specified using the LLDB_EXEC env variable.
rdar://problem/7511194
llvm-svn: 146050
inferior program for the lldb debugger to operate on. The fixed lldb executable
corresponds to r142902.
Plus some minor modifications to the test benchmark to conform to the way bench.py
is meant to be invoked.
llvm-svn: 143075
An example (with /Developer/usr/bin/lldb vs. /usr/bin/gdb):
[13:05:04] johnny:/Volumes/data/lldb/svn/trunk/test $ ./dotest.py -v +b -n -p TestCompileRunToBreakpointTurnaround.py
1: test_run_lldb_then_gdb (TestCompileRunToBreakpointTurnaround.CompileRunToBreakpointBench)
Benchmark turnaround time with lldb vs. gdb. ...
lldb turnaround benchmark: Avg: 4.574600 (Laps: 3, Total Elapsed Time: 13.723799)
gdb turnaround benchmark: Avg: 7.966713 (Laps: 3, Total Elapsed Time: 23.900139)
lldb_avg/gdb_avg: 0.574214
ok
----------------------------------------------------------------------
Ran 1 test in 55.462s
OK
llvm-svn: 142949
Example (start the lldb inferior, break at the Driver::MainLoop() function, and
issue 'frame variable'):
$ ./dotest.py -v +b -x '-F Driver::MainLoop()' -n -p TestFrameVariableResponse.py
----------------------------------------------------------------------
Collected 1 test
1: test_startup_delay (TestFrameVariableResponse.FrameVariableResponseBench)
Test response time for the 'frame variable' command. ...
lldb frame variable benchmark: Avg: 1.636897 (Laps: 20, Total Elapsed Time: 32.737944)
ok
----------------------------------------------------------------------
Ran 1 test in 65.105s
OK
llvm-svn: 142678
o create a fresh target; and
o set the first breakpoint
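A minimal sketch of how these two delays might be timed against a pexpect-spawned lldb
(the lldb path, prompt string, inferior, and breakpoint below are illustrative, not the
actual test code):

    import time
    import pexpect

    prompt = '(lldb) '                                    # assumed lldb prompt
    inferior = '/Developer/usr/bin/lldb'                  # illustrative inferior to debug
    child = pexpect.spawn('lldb --no-lldbinit')           # lldb path/options are illustrative
    child.expect_exact(prompt)

    start = time.time()
    child.sendline('file %s' % inferior)                  # delay 1: create a fresh target
    child.expect_exact(prompt)
    print('create fresh target: %f' % (time.time() - start))

    start = time.time()
    child.sendline('breakpoint set -F Driver::MainLoop')  # delay 2: set the first breakpoint
    child.expect_exact(prompt)
    print('set first breakpoint: %f' % (time.time() - start))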
Example (using lldb to set a breakpoint on lldb's Driver::MainLoop function):
./dotest.py -v +b -x '-F Driver::MainLoop()' -p TestStartupDelays.py
...
1: test_startup_delay (TestStartupDelays.StartupDelaysBench)
Test start up delays creating a target and setting a breakpoint. ...
lldb startup delays benchmark:
create fresh target: Avg: 0.106732 (Laps: 15, Total Elapsed Time: 1.600985)
set first breakpoint: Avg: 0.102589 (Laps: 15, Total Elapsed Time: 1.538832)
ok
llvm-svn: 142628
Add a '-y count' option to the test driver for this purpose. An example:
$ ./dotest.py -v -y 25 +b -p TestDisassembly.py
...
----------------------------------------------------------------------
Collected 2 tests
1: test_run_gdb_then_lldb (TestDisassembly.DisassembleDriverMainLoop)
Test disassembly on a large function with lldb vs. gdb. ...
gdb benchmark: Avg: 0.226305 (Laps: 25, Total Elapsed Time: 5.657614)
lldb benchmark: Avg: 0.113864 (Laps: 25, Total Elapsed Time: 2.846606)
lldb_avg/gdb_avg: 0.503146
ok
2: test_run_lldb_then_gdb (TestDisassembly.DisassembleDriverMainLoop)
Test disassembly on a large function with lldb vs. gdb. ...
lldb benchmark: Avg: 0.113008 (Laps: 25, Total Elapsed Time: 2.825201)
gdb benchmark: Avg: 0.225240 (Laps: 25, Total Elapsed Time: 5.631001)
lldb_avg/gdb_avg: 0.501723
ok
----------------------------------------------------------------------
Ran 2 tests in 41.346s
OK
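A sketch of how a benchmark might honor the requested lap count; the attribute name
self.count and the surrounding class are illustrative, only the '-y' option itself comes
from the note above:

    import time

    class DisassembleBenchSketch(object):              # illustrative, not the real test class
        def run_lldb_disassembly(self, child, prompt):
            total = 0.0
            for _ in range(self.count):                # self.count assumed to hold the '-y' value
                start = time.time()
                child.sendline('disassemble -f')       # disassemble the current frame's function
                child.expect_exact(prompt)
                total += time.time() - start
            print('lldb benchmark: Avg: %f (Laps: %d, Total Elapsed Time: %f)'
                  % (total / self.count, self.count, total))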
llvm-svn: 142598
child=None, child_prompt=None, use_cmd_api=False
By default, expect a pexpect spawned child and child prompt to be
supplied (use_cmd_api=False). If use_cmd_api is True, ignore the child
and child prompt and use self.runCmd() to run the hooks one by one.
Modify the existing client to reflect the change.
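A sketch of the calling convention being described (the hook-list attribute below is an
assumption; only the keyword arguments come from the change itself):

    def runHooks(self, child=None, child_prompt=None, use_cmd_api=False):
        """Run the registered run hooks, either through a pexpect child
        (the default) or through self.runCmd() when use_cmd_api is True."""
        if use_cmd_api:
            # Ignore child/child_prompt and drive each hook via the command API.
            for hook in self.runhooks:                # attribute name is an assumption
                self.runCmd(hook)
        else:
            # Expect a pexpect-spawned child and its prompt to be supplied.
            for hook in self.runhooks:
                child.sendline(hook)
                child.expect_exact(child_prompt)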
llvm-svn: 142532
to be able to specify the runhook(s) to bring the debug session to a certain state
before running the benchmarking logic. An example:
./dotest.py -v -t +b -k 'process attach -n Mail' -k 'thread backtrace all' -p TestRunHooksThenSteppings.py
spawns lldb, attaches to the 'Mail' application, does a backtrace for all threads, and then
runs the benchmark to step the inferior multiple times.
llvm-svn: 141740
and the breakpoint specification for benchmark purposes. This is used by TestSteppingSpeed.py
to benchmark lldb's stepping speed. Without '-e' and '-x' specified, the test defaults to
running the built lldb against itself, stopping on Driver::MainLoop, and then stepping 50 times.
rdar://problem/7511193
llvm-svn: 141584
Set up self.lldbOption to be "--no-lldbinit" unless the env variable NO_LLDBINIT is defined and equals "NO".
Also add "-nx" to the spawned gdb.
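A sketch of that setup, roughly as it might appear in the test fixture (the surrounding
code is illustrative):

    import os
    import pexpect

    def setUp(self):                                    # fragment of a setUp() override, illustrative
        # Pass --no-lldbinit unless NO_LLDBINIT is defined and equals "NO".
        if os.environ.get('NO_LLDBINIT', 'YES') == 'NO':
            self.lldbOption = ''
        else:
            self.lldbOption = '--no-lldbinit'

    def spawn_gdb(exe):
        # Spawn gdb with -nx so it likewise skips its init files.
        return pexpect.spawn('gdb -nx %s' % exe)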
llvm-svn: 141384
built locally from the source tree. This is distinguished from self.lldbExec, which
can be used by test/benchmarks to measure performance against other debuggers.
You can use the LLDB_EXEC environment variable to specify self.lldbExec to the dotest.py
test driver; otherwise it is populated with self.lldbHere.
Modify the regular tests under the test dir, i.e., not test/benchmarks, to use self.lldbHere.
Also modify the benchmarks tests to use self.lldbHere when they need an 'lldb' executable
with debug info to do the performance measurements.
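A sketch of how the two attributes relate (the build path is illustrative; only LLDB_EXEC
and the fallback behavior come from the note above):

    import os

    # self.lldbHere: the lldb built locally from this source tree (has debug info),
    # used both to run the regular tests and as an inferior for the benchmarks.
    lldbHere = os.path.join(os.getcwd(), '..', 'build', 'Debug', 'lldb')  # illustrative path

    # self.lldbExec: the debugger whose performance is being measured; defaults to
    # lldbHere unless LLDB_EXEC points elsewhere (e.g. an installed /usr/bin/lldb).
    lldbExec = os.environ.get('LLDB_EXEC', lldbHere)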
llvm-svn: 138608
on lldb's Driver::MainLoop function, which is ~1190 lines of x86 assembly code. This file is not
exercised during a normal test suite run, i.e., when no +b option is specified, so it should be ok.
The following is the benchmark result on my MBP running OSX Lion:
[17:38:46] johnny:/Volumes/data/lldb/svn/trunk/test $ ./dotest.py -v +b -p TestFlintVsSlate
/Volumes/data/lldb/svn/trunk/build/Debug
LLDB-71
Path: /Volumes/data/lldb/svn/trunk
URL: https://johnny@llvm.org/svn/llvm-project/lldb/trunk
Repository Root: https://johnny@llvm.org/svn/llvm-project
Repository UUID: 91177308-0d34-0410-b5e6-96231b3b80d8
Revision: 137008
Node Kind: directory
Schedule: normal
Last Changed Author: gclayton
Last Changed Rev: 137008
Last Changed Date: 2011-08-05 17:50:36 -0700 (Fri, 05 Aug 2011)
Session logs for test failures/errors/unexpected successes will go into directory '2011-08-08-17_38_52'
Command invoked: python ./dotest.py -v +b -p TestFlintVsSlate
----------------------------------------------------------------------
Collected 2 tests
1: test_run_41_then_42 (TestFlintVsSlateGDBDisassembly.FlintVsSlateGDBDisassembly)
Test disassembly on a large function with 4.1 vs. 4.2's gdb. ...
4.1 gdb benchmark: Avg: 0.205623 (Laps: 5, Total Elapsed Time: 1.028113)
4.2 gdb benchmark: Avg: 0.201970 (Laps: 5, Total Elapsed Time: 1.009849)
gdb_42_avg/gdb_41_avg: 0.982236
ok
2: test_run_42_then_41 (TestFlintVsSlateGDBDisassembly.FlintVsSlateGDBDisassembly)
Test disassembly on a large function with 4.1 vs. 4.2's gdb. ...
4.2 gdb benchmark: Avg: 0.202602 (Laps: 5, Total Elapsed Time: 1.013012)
4.1 gdb benchmark: Avg: 0.204418 (Laps: 5, Total Elapsed Time: 1.022089)
gdb_42_avg/gdb_41_avg: 0.991119
ok
----------------------------------------------------------------------
Ran 2 tests in 15.688s
OK
llvm-svn: 137092
Sample run on my OSX Lion (MacBook Pro):
1: test_run_gdb_then_lldb (TestDisassembly.DisassembleDriverMainLoop)
Test disassembly on a large function with lldb vs. gdb. ...
gdb benchmark: Avg: 0.201802 (Laps: 5, Total Elapsed Time: 1.009008)
lldb benchmark: Avg: 0.109569 (Laps: 5, Total Elapsed Time: 0.547843)
lldb_avg/gdb_avg: 0.542952
ok
2: test_run_lldb_then_gdb (TestDisassembly.DisassembleDriverMainLoop)
Test disassembly on a large function with lldb vs. gdb. ...
lldb benchmark: Avg: 0.109580 (Laps: 5, Total Elapsed Time: 0.547902)
gdb benchmark: Avg: 0.201587 (Laps: 5, Total Elapsed Time: 1.007936)
lldb_avg/gdb_avg: 0.543588
ok
llvm-svn: 136931
Modify lldbbench.py so that the lldbtest.line_number() utility function is available to
BenchBase clients as just line_number(), and modify lldbtest.py so that self.lldbExec
(the full path of the 'lldb' executable) is available to BenchBase clients as well.
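A sketch of what a BenchBase client can now write; the class, source file, and breakpoint
comment are illustrative (and lldbbench is assumed to re-export the lldbtest helpers), while
line_number() and self.lldbExec are the names made available by this change:

    import pexpect
    from lldbbench import *        # BenchBase, line_number(), benchmarks_test, ...

    class RepeatedExprsSketch(BenchBase):
        def setUp(self):
            BenchBase.setUp(self)
            # line_number() is usable directly, without the lldbtest. prefix.
            self.line = line_number('main.cpp', '// Set breakpoint here.')

        @benchmarks_test
        def test_expr_speed(self):
            # self.lldbExec is the full path of the 'lldb' executable under test.
            child = pexpect.spawn('%s a.out' % self.lldbExec)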
An example run of the test case on my MacBook Pro running Lion:
1: test_compare_lldb_to_gdb (TestRepeatedExprs.RepeatedExprsCase)
Test repeated expressions with lldb vs. gdb. ...
lldb_avg: 0.204339
gdb_avg: 0.205721
lldb_avg/gdb_avg: 0.993284
ok
llvm-svn: 136740
Stopwatch (self.swatch) within BenchBase's setUp() instance method to be available
to all the child classes.
Use self.swatch to measure elapsed time in TestRepeatedExprs.py, which needs to be
modified later on to actually measure repeated expression evaluations within the
context of lldb as well as gdb.
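A sketch of the intended usage pattern, assuming a Stopwatch with start()/stop()/elapsed()
style methods (the real API is whatever lldbbench.py provides; class and command are illustrative):

    from lldbbench import *                      # BenchBase sets up self.swatch in setUp()

    class ExprEvalSketch(BenchBase):
        def measure_lldb_exprs(self, child, prompt, count):
            for _ in range(count):
                self.swatch.start()              # start()/stop()/elapsed() are assumed names
                child.sendline('expr ptr')       # the repeated expression, illustrative
                child.expect_exact(prompt)
                self.swatch.stop()
            print('lldb_avg: %f' % (self.swatch.elapsed() / count))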
llvm-svn: 136664
The test driver now takes a "+b" option which enables running just the benchmarks tests.
By default, tests decorated with the @benchmarks_test decorator do not get run.
Add an example benchmarks test directory which contains nothing for the time being,
just to demonstrate the @benchmarks_test concept.
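A minimal sketch of the decorator concept, assuming the driver records the "+b" state in a
module-level flag (the flag name here is an assumption):

    from functools import wraps

    just_do_benchmarks_test = False    # flipped to True by the test driver when '+b' is given

    def benchmarks_test(func):
        """Mark a test as a benchmarks test: skipped unless '+b' was specified."""
        @wraps(func)
        def wrapper(self, *args, **kwargs):
            if not just_do_benchmarks_test:
                self.skipTest("benchmarks tests")
            return func(self, *args, **kwargs)
        return wrapper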
For example,
$ ./dotest.py -v benchmarks
...
----------------------------------------------------------------------
Collected 2 tests
1: test_with_gdb (TestRepeatedExprs.RepeatedExprssCase)
Test repeated expressions with gdb. ... skipped 'benchmarks tests'
2: test_with_lldb (TestRepeatedExprs.RepeatedExprssCase)
Test repeated expressions with lldb. ... skipped 'benchmarks tests'
----------------------------------------------------------------------
Ran 2 tests in 0.047s
OK (skipped=2)
$ ./dotest.py -v +b benchmarks
...
----------------------------------------------------------------------
Collected 2 tests
1: test_with_gdb (TestRepeatedExprs.RepeatedExprssCase)
Test repeated expressions with gdb. ... running test_with_gdb
benchmarks result for test_with_gdb
ok
2: test_with_lldb (TestRepeatedExprs.RepeatedExprssCase)
Test repeated expressions with lldb. ... running test_with_lldb
benchmarks result for test_with_lldb
ok
----------------------------------------------------------------------
Ran 2 tests in 0.270s
OK
Also mark some Python API tests which are missing the @python_api_test decorator.
llvm-svn: 136553