- Change the build_dir variable name to lib_dir
- Set lib_dir to the correct location on Linux
- Set LD_EXTRAS to the actual lldb library
llvm-svn: 175664
- Add a "parsable" mode to dotest.py that outputs test results in exactly the same format as clang's lit tests
- Improve dosep script to output list of failing tests (output should look like clang test failure summaries)
- Cleanup lldb/test/Makefile to remove needless parameters and environment variables
- Switch makefile tests to use parsable-mode output; should make the buildbot results parsable
- Switch makefile tests to use dosep so crashing tests are caught and logged (instead of halting the test suite)
llvm-svn: 175309
Added a new line of information that reports the number of tests that pass, fail, or end in some other state.
Again, there is no flag to bring the dots back. If you care, let us know!
llvm-svn: 174784
The LLDB test suite now shows a progress bar instead of dots when not in verbose mode
If you crave the dots, make your Terminal window smaller than 10 columns :-)
(or ask for a flag to have the dots come back on demand)
llvm-svn: 174777
- now prints the correct PYTHONPATH
- update dotest.py to use lldb -P result correctly
- resolves TestPublicAPIHeaders test failure (on Linux)
llvm-svn: 171558
This feature allows us to group test cases into logical groups (categories), and to only run a subset of test cases based on these categories.
Each test-case can have a new method getCategories(self): which returns a list of strings that are the categories to which the test case belongs.
If a test-case does not provide its own categories, we will look for categories in the class that contains the test case.
If that fails too, the default implementation looks for a .categories file, which contains a comma-separated list of strings.
The test suite will recursively look for .categories files up to the top-level test directory (which is guaranteed to contain an empty .categories file).
The driver dotest.py has a new --category <foo> option, which can be repeated, and specifies which categories of tests you want to run.
(example: ./dotest.py --category objc --category expression)
All tests that do not belong to any specified category will be skipped. Other filtering options still exist and should not interfere with category filtering.
A few tests have been categorized. Feel free to categorize others, and to suggest new categories that we could want to use.
All categories need to be validly defined in dotest.py, or the test suite will refuse to run when you use them as arguments to --category.
In the end, failures will be reported on a per-category basis, as well as in the usual format.
This is the very first stage of this feature. Feel free to chime in with ideas for improvements!
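As an illustration (the class name and categories below are made up, not taken from a checked-in test), a test case could declare its categories like this:
    # Hypothetical sketch of per-test-case categorization.
    class MyExprTestCase(TestBase):
        def getCategories(self):
            # With '--category expression' (or '--category objc') on the
            # command line, this test runs; with other categories it is skipped.
            return ['expression', 'objc']
and a directory-level .categories file would simply contain a comma-separated list such as:
    expression,objc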
llvm-svn: 164403
Changed the '-A' option to also have a long form, '--arch'. The option can now be specified multiple times to select multiple architectures.
Old: -A i386^x86_64
New: -A i386 -A x86_64
--arch i386 --arch x86_64
Changed the '-C' option to also have a long form, '--compiler'. The option can now be specified multiple times to select multiple compilers.
Old: -C clang^gcc
New: -C clang -C gcc
--compiler clang --compiler gcc
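For instance, the long forms can be combined in one invocation (the test directory is illustrative):
./dotest.py --arch i386 --arch x86_64 --compiler clang --compiler gcc functionalities/watchpoint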
llvm-svn: 163141
before running the test suite. A usage example looks like this:
test $ ./dotest.py -A x86_64 -R /tmp/x86_64 &
test $ ./dotest.py -A i386 -R /tmp/i386 &
where we would want to run the x86_64 and i386 archs concurrently but relocate the test suite to different directory
hierarchies so that the two runs do not stomp on each other's intermediate files.
llvm-svn: 155491
rdar://problem/11283401
Example:
Collected 1 test
1: test_with_dwarf (TestCallStdStringFunction.ExprCommandCallFunctionTestCase)
Test calling std::String member function. ... FAIL
======================================================================
FAIL: test_with_dwarf (TestCallStdStringFunction.ExprCommandCallFunctionTestCase)
Test calling std::String member function.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Volumes/data/lldb/svn/ToT/test/lldbtest.py", line 427, in wrapper
return func(self, *args, **kwargs)
File "/Volumes/data/lldb/svn/ToT/test/expression_command/call-function/TestCallStdStringFunction.py", line 34, in test_with_dwarf
self.call_function()
File "/Volumes/data/lldb/svn/ToT/test/expression_command/call-function/TestCallStdStringFunction.py", line 48, in call_function
substrs = ['Hello world'])
File "/Volumes/data/lldb/svn/ToT/test/lldbtest.py", line 1235, in expect
msg if msg else EXP_MSG(str, exe))
AssertionError: False is not True : 'Hello world' returns expected result
Config=i386-clang
----------------------------------------------------------------------
Ran 1 test in 1.148s
FAILED (failures=1)
llvm-svn: 155157
the pre-flight code gets executed during setUp() after the debugger instance is available
and the post-flight code gets executed during tearDown() after the debugger instance has
finished killing the inferior and deleting all the target programs.
Example:
[11:32:48] johnny:/Volumes/data/lldb/svn/ToT/test $ ./dotest.py -A x86_64 -v -c ../examples/test/.lldb-pre-post-flight functionalities/watchpoint/hello_watchpoint
config: {'pre_flight': <function pre_flight at 0x1098541b8>, 'post_flight': <function post_flight at 0x109854230>}
LLDB build dir: /Volumes/data/lldb/svn/ToT/build/Debug
LLDB-139
Path: /Volumes/data/lldb/svn/ToT
URL: https://johnny@llvm.org/svn/llvm-project/lldb/trunk
Repository Root: https://johnny@llvm.org/svn/llvm-project
Repository UUID: 91177308-0d34-0410-b5e6-96231b3b80d8
Revision: 154753
Node Kind: directory
Schedule: normal
Last Changed Author: gclayton
Last Changed Rev: 154730
Last Changed Date: 2012-04-13 18:42:46 -0700 (Fri, 13 Apr 2012)
lldb.pre_flight: def pre_flight(test):
    __import__("lldb")
    __import__("lldbtest")
    print "\nRunning pre-flight function:"
    print "for test case:", test
lldb.post_flight: def post_flight(test):
    __import__("lldb")
    __import__("lldbtest")
    print "\nRunning post-flight function:"
    print "for test case:", test
Session logs for test failures/errors/unexpected successes will go into directory '2012-04-16-11_34_08'
Command invoked: python ./dotest.py -A x86_64 -v -c ../examples/test/.lldb-pre-post-flight functionalities/watchpoint/hello_watchpoint
compilers=['clang']
Configuration: arch=x86_64 compiler=clang
----------------------------------------------------------------------
Collected 2 tests
1: test_hello_watchpoint_with_dsym_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Test a simple sequence of watchpoint creation and watchpoint hit. ...
Running pre-flight function:
for test case: test_hello_watchpoint_with_dsym_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Running post-flight function:
for test case: test_hello_watchpoint_with_dsym_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
ok
2: test_hello_watchpoint_with_dwarf_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Test a simple sequence of watchpoint creation and watchpoint hit. ...
Running pre-flight function:
for test case: test_hello_watchpoint_with_dwarf_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Running post-flight function:
for test case: test_hello_watchpoint_with_dwarf_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
ok
----------------------------------------------------------------------
Ran 2 tests in 1.584s
OK
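Judging from the functions echoed in the output above, the file passed via '-c' is plain Python defining pre_flight(test) and post_flight(test); a minimal sketch (the print statements are purely illustrative):
    import lldb
    import lldbtest

    def pre_flight(test):
        # Runs during setUp(), once the debugger instance is available.
        print "\nRunning pre-flight function:"
        print "for test case:", test

    def post_flight(test):
        # Runs during tearDown(), after the inferior has been killed
        # and the targets deleted.
        print "\nRunning post-flight function:"
        print "for test case:", test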
llvm-svn: 154847
either @dsym_test or @dwarf_test to be executed during the testsuite run. There are still lots of
Test*.py files which have not been decorated with the new decorator.
An example:
# From TestMyFirstWatchpoint.py ->
class HelloWatchpointTestCase(TestBase):

    mydir = os.path.join("functionalities", "watchpoint", "hello_watchpoint")

    @dsym_test
    def test_hello_watchpoint_with_dsym_using_watchpoint_set(self):
        """Test a simple sequence of watchpoint creation and watchpoint hit."""
        self.buildDsym(dictionary=self.d)
        self.setTearDownCleanup(dictionary=self.d)
        self.hello_watchpoint()

    @dwarf_test
    def test_hello_watchpoint_with_dwarf_using_watchpoint_set(self):
        """Test a simple sequence of watchpoint creation and watchpoint hit."""
        self.buildDwarf(dictionary=self.d)
        self.setTearDownCleanup(dictionary=self.d)
        self.hello_watchpoint()
# Invocation ->
[17:50:14] johnny:/Volumes/data/lldb/svn/ToT/test $ ./dotest.py -N dsym -v -p TestMyFirstWatchpoint.py
LLDB build dir: /Volumes/data/lldb/svn/ToT/build/Debug
LLDB-137
Path: /Volumes/data/lldb/svn/ToT
URL: https://johnny@llvm.org/svn/llvm-project/lldb/trunk
Repository Root: https://johnny@llvm.org/svn/llvm-project
Repository UUID: 91177308-0d34-0410-b5e6-96231b3b80d8
Revision: 154133
Node Kind: directory
Schedule: normal
Last Changed Author: gclayton
Last Changed Rev: 154109
Last Changed Date: 2012-04-05 10:43:02 -0700 (Thu, 05 Apr 2012)
Session logs for test failures/errors/unexpected successes will go into directory '2012-04-05-17_50_49'
Command invoked: python ./dotest.py -N dsym -v -p TestMyFirstWatchpoint.py
compilers=['clang']
Configuration: arch=x86_64 compiler=clang
----------------------------------------------------------------------
Collected 2 tests
1: test_hello_watchpoint_with_dsym_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Test a simple sequence of watchpoint creation and watchpoint hit. ... skipped 'dsym tests'
2: test_hello_watchpoint_with_dwarf_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
----------------------------------------------------------------------
Ran 2 tests in 1.138s
OK (skipped=1)
Session logs for test failures/errors/unexpected successes can be found in directory '2012-04-05-17_50_49'
[17:50:50] johnny:/Volumes/data/lldb/svn/ToT/test $
llvm-svn: 154154
to pass to the toolchain in order to build the inferior programs to be run/debugged
during the test suite. The architecture might dictate some special CFLAGS which are
more easily specified in a central place (like the command line) instead of inside
make rules.
For Example,
./dotest.py -v -r /shared/phone -A armv7 -E "-isysroot your_sdk_root" functionalities/watchpoint/hello_watchpoint
will relocate the particular test directory ('functionalities/watchpoint/hello_watchpoint' in this case) to a
new directory named '/shared/phone'. The test support files for this particular architecture-compiler
combination are therefore found under:
/shared/phone.arch=armv7-compiler=clang/functionalities/watchpoint/hello_watchpoint
The building of the inferior programs under test is now working.
The actual launching/debugging of the inferior programs is not yet working,
nor is the setting of a watchpoint on the phone.
llvm-svn: 153070
to be debugged while running the test suite. By default, compilers is set to ['clang'] and can be overridden
using the "-C compilerA^compilerB" option.
llvm-svn: 152367
when building the inferior programs.
Example:
/Volumes/data/lldb/svn/ToT/test $ ./dotest.py -v functionalities/watchpoint
LLDB build dir: /Volumes/data/lldb/svn/ToT/build/Debug
LLDB-123
Path: /Volumes/data/lldb/svn/ToT
URL: https://johnny@llvm.org/svn/llvm-project/lldb/trunk
Repository Root: https://johnny@llvm.org/svn/llvm-project
Repository UUID: 91177308-0d34-0410-b5e6-96231b3b80d8
Revision: 152244
Node Kind: directory
Schedule: normal
Last Changed Author: gclayton
Last Changed Rev: 152244
Last Changed Date: 2012-03-07 13:03:09 -0800 (Wed, 07 Mar 2012)
Session logs for test failures/errors/unexpected successes will go into directory '2012-03-08-16_43_51'
Command invoked: python ./dotest.py -v functionalities/watchpoint
Configuration: arch=x86_64
----------------------------------------------------------------------
Collected 21 tests
1: test_hello_watchlocation_with_dsym (TestWatchLocation.HelloWatchLocationTestCase)
Test watching a location with '-x size' option. ... ok
2: test_hello_watchlocation_with_dwarf (TestWatchLocation.HelloWatchLocationTestCase)
Test watching a location with '-x size' option. ... ok
3: test_hello_watchpoint_with_dsym_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
4: test_hello_watchpoint_with_dwarf_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
5: test_watchpoint_multiple_threads_with_dsym (TestWatchpointMultipleThreads.WatchpointForMultipleThreadsTestCase)
Test that lldb watchpoint works for multiple threads. ... ok
6: test_watchpoint_multiple_threads_with_dwarf (TestWatchpointMultipleThreads.WatchpointForMultipleThreadsTestCase)
Test that lldb watchpoint works for multiple threads. ... ok
7: test_rw_disable_after_first_stop__with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint but disable it after the first stop. ... ok
8: test_rw_disable_after_first_stop_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint but disable it after the first stop. ... ok
9: test_rw_disable_then_enable_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint, disable initially, then enable it. ... ok
10: test_rw_disable_then_enable_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint, disable initially, then enable it. ... ok
11: test_rw_watchpoint_delete_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test delete watchpoint and expect not to stop for watchpoint. ... ok
12: test_rw_watchpoint_delete_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test delete watchpoint and expect not to stop for watchpoint. ... ok
13: test_rw_watchpoint_set_ignore_count_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test watchpoint ignore count and expect to not to stop at all. ... ok
14: test_rw_watchpoint_set_ignore_count_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test watchpoint ignore count and expect to not to stop at all. ... ok
15: test_rw_watchpoint_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint and expect to stop two times. ... ok
16: test_rw_watchpoint_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint and expect to stop two times. ... ok
17: test_watchpoint_cond_with_dsym (TestWatchpointConditionCmd.WatchpointConditionCmdTestCase)
Test watchpoint condition. ... ok
18: test_watchpoint_cond_with_dwarf (TestWatchpointConditionCmd.WatchpointConditionCmdTestCase)
Test watchpoint condition. ... ok
19: test_watchlocation_with_dsym_using_watchpoint_set (TestWatchLocationWithWatchSet.WatchLocationUsingWatchpointSetTestCase)
Test watching a location with 'watchpoint set expression -w write -x size' option. ... ok
20: test_watchlocation_with_dwarf_using_watchpoint_set (TestWatchLocationWithWatchSet.WatchLocationUsingWatchpointSetTestCase)
Test watching a location with 'watchpoint set expression -w write -x size' option. ... ok
21: test_error_cases_with_watchpoint_set (TestWatchpointSetErrorCases.WatchpointSetErrorTestCase)
Test error cases with the 'watchpoint set' command. ... ok
----------------------------------------------------------------------
Ran 21 tests in 74.590s
OK
Configuration: arch=i386
----------------------------------------------------------------------
Collected 21 tests
1: test_hello_watchlocation_with_dsym (TestWatchLocation.HelloWatchLocationTestCase)
Test watching a location with '-x size' option. ... ok
2: test_hello_watchlocation_with_dwarf (TestWatchLocation.HelloWatchLocationTestCase)
Test watching a location with '-x size' option. ... ok
3: test_hello_watchpoint_with_dsym_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
4: test_hello_watchpoint_with_dwarf_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
5: test_watchpoint_multiple_threads_with_dsym (TestWatchpointMultipleThreads.WatchpointForMultipleThreadsTestCase)
Test that lldb watchpoint works for multiple threads. ... ok
6: test_watchpoint_multiple_threads_with_dwarf (TestWatchpointMultipleThreads.WatchpointForMultipleThreadsTestCase)
Test that lldb watchpoint works for multiple threads. ... ok
7: test_rw_disable_after_first_stop__with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint but disable it after the first stop. ... ok
8: test_rw_disable_after_first_stop_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint but disable it after the first stop. ... ok
9: test_rw_disable_then_enable_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint, disable initially, then enable it. ... ok
10: test_rw_disable_then_enable_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint, disable initially, then enable it. ... ok
11: test_rw_watchpoint_delete_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test delete watchpoint and expect not to stop for watchpoint. ... ok
12: test_rw_watchpoint_delete_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test delete watchpoint and expect not to stop for watchpoint. ... ok
13: test_rw_watchpoint_set_ignore_count_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test watchpoint ignore count and expect to not to stop at all. ... ok
14: test_rw_watchpoint_set_ignore_count_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test watchpoint ignore count and expect to not to stop at all. ... ok
15: test_rw_watchpoint_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint and expect to stop two times. ... ok
16: test_rw_watchpoint_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
Test read_write watchpoint and expect to stop two times. ... ok
17: test_watchpoint_cond_with_dsym (TestWatchpointConditionCmd.WatchpointConditionCmdTestCase)
Test watchpoint condition. ... ok
18: test_watchpoint_cond_with_dwarf (TestWatchpointConditionCmd.WatchpointConditionCmdTestCase)
Test watchpoint condition. ... ok
19: test_watchlocation_with_dsym_using_watchpoint_set (TestWatchLocationWithWatchSet.WatchLocationUsingWatchpointSetTestCase)
Test watching a location with 'watchpoint set expression -w write -x size' option. ... ok
20: test_watchlocation_with_dwarf_using_watchpoint_set (TestWatchLocationWithWatchSet.WatchLocationUsingWatchpointSetTestCase)
Test watching a location with 'watchpoint set expression -w write -x size' option. ... ok
21: test_error_cases_with_watchpoint_set (TestWatchpointSetErrorCases.WatchpointSetErrorTestCase)
Test error cases with the 'watchpoint set' command. ... ok
----------------------------------------------------------------------
Ran 21 tests in 67.059s
OK
llvm-svn: 152357
environment variable before starting the test runner, which executes the test cases and
may spawn child processes. An example:
./dotest.py -u MY_ENV1 -u MY_ENV2 -v -p TestWatchLocationWithWatchSet.py
llvm-svn: 149304
Use this option with care as you would need to build the inferior(s) by hand
and build the executable(s) with the correct name(s). This option can be used
with '-# n' to stress test certain test cases n times.
An example:
[11:55:11] johnny:/Volumes/data/lldb/svn/trunk/test/python_api/value $ ls
Makefile TestValueAPI.pyc linked_list
TestValueAPI.py change_values main.c
[11:55:14] johnny:/Volumes/data/lldb/svn/trunk/test/python_api/value $ make EXE=test_with_dsym
clang -gdwarf-2 -O0 -arch x86_64 -c -o main.o main.c
clang -gdwarf-2 -O0 -arch x86_64 main.o -o "test_with_dsym"
/usr/bin/dsymutil -o "test_with_dsym.dSYM" "test_with_dsym"
[11:55:20] johnny:/Volumes/data/lldb/svn/trunk/test/python_api/value $ cd ../..
[11:55:24] johnny:/Volumes/data/lldb/svn/trunk/test $ ./dotest.py -v -# 10 -S -f ValueAPITestCase.test_with_dsym
LLDB build dir: /Volumes/data/lldb/svn/trunk/build/Debug
LLDB-89
Path: /Volumes/data/lldb/svn/trunk
URL: https://johnny@llvm.org/svn/llvm-project/lldb/trunk
Repository Root: https://johnny@llvm.org/svn/llvm-project
Repository UUID: 91177308-0d34-0410-b5e6-96231b3b80d8
Revision: 144914
Node Kind: directory
Schedule: normal
Last Changed Author: gclayton
Last Changed Rev: 144911
Last Changed Date: 2011-11-17 09:22:31 -0800 (Thu, 17 Nov 2011)
Session logs for test failures/errors/unexpected successes will go into directory '2011-11-17-11_55_29'
Command invoked: python ./dotest.py -v -# 10 -S -f ValueAPITestCase.test_with_dsym
----------------------------------------------------------------------
Collected 1 test
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 1.163s
OK
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.200s
OK
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.198s
OK
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.199s
OK
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.239s
OK
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 1.215s
OK
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.105s
OK
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.098s
OK
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.195s
OK
1: test_with_dsym (TestValueAPI.ValueAPITestCase)
Exercise some SBValue APIs. ... ok
----------------------------------------------------------------------
Ran 1 test in 1.197s
OK
[11:55:34] johnny:/Volumes/data/lldb/svn/trunk/test $
llvm-svn: 144919
An example (with /Developer/usr/bin/lldb vs. /usr/bin/gdb):
[13:05:04] johnny:/Volumes/data/lldb/svn/trunk/test $ ./dotest.py -v +b -n -p TestCompileRunToBreakpointTurnaround.py
1: test_run_lldb_then_gdb (TestCompileRunToBreakpointTurnaround.CompileRunToBreakpointBench)
Benchmark turnaround time with lldb vs. gdb. ...
lldb turnaround benchmark: Avg: 4.574600 (Laps: 3, Total Elapsed Time: 13.723799)
gdb turnaround benchmark: Avg: 7.966713 (Laps: 3, Total Elapsed Time: 23.900139)
lldb_avg/gdb_avg: 0.574214
ok
----------------------------------------------------------------------
Ran 1 test in 55.462s
OK
llvm-svn: 142949
Add a '-y count' option to the test driver for this purpose. An example:
$ ./dotest.py -v -y 25 +b -p TestDisassembly.py
...
----------------------------------------------------------------------
Collected 2 tests
1: test_run_gdb_then_lldb (TestDisassembly.DisassembleDriverMainLoop)
Test disassembly on a large function with lldb vs. gdb. ...
gdb benchmark: Avg: 0.226305 (Laps: 25, Total Elapsed Time: 5.657614)
lldb benchmark: Avg: 0.113864 (Laps: 25, Total Elapsed Time: 2.846606)
lldb_avg/gdb_avg: 0.503146
ok
2: test_run_lldb_then_gdb (TestDisassembly.DisassembleDriverMainLoop)
Test disassembly on a large function with lldb vs. gdb. ...
lldb benchmark: Avg: 0.113008 (Laps: 25, Total Elapsed Time: 2.825201)
gdb benchmark: Avg: 0.225240 (Laps: 25, Total Elapsed Time: 5.631001)
lldb_avg/gdb_avg: 0.501723
ok
----------------------------------------------------------------------
Ran 2 tests in 41.346s
OK
llvm-svn: 142598
for the debugger to execute for certain kinds of tests (for example, a benchmark).
A list of runhooks can be used to steer the debugger into the desired state before more
actions can be performed.
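For example, assuming the driver accepts a repeatable runhook option (spelled '-k' in later versions of dotest.py; treat the exact flag as an assumption here):
./dotest.py -v +b -k 'log enable gdb-remote packets' -k 'settings show' -p TestSteppingSpeed.py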
llvm-svn: 141626
and the breakpoint specification for the benchmark purpose. This is used by TestSteppingSpeed.py
to benchmark the lldb stepping speed. Without '-e' and '-x' specified, the test defaults to
running the built lldb against itself, stopping on Driver::MainLoop, and then stepping 50 times.
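A hypothetical invocation (the lldb path and the breakpoint specification are placeholders, assuming the spec is handed through to 'breakpoint set'):
./dotest.py -v +b -e /Developer/usr/bin/lldb -x '-F Driver::MainLoop()' -p TestSteppingSpeed.py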
rdar://problem/7511193
llvm-svn: 141584
built locally from the source tree. This is distinguished from self.lldbExec, which
can be used by test/benchmarks to measure performance against other debuggers.
You can use environment variable LLDB_EXEC to specify self.lldbExec to the dotest.py
test driver, otherwise it is going to be populated with self.lldbHere.
Modify the regular tests under test dir, i.e., not test/benchmarks, to use self.lldbHere.
Also modify the benchmarks tests to use self.lldbHere when it needs an 'lldb' executable
with debug info to do the performance measurements.
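For example (the path is illustrative):
LLDB_EXEC=/Developer/usr/bin/lldb ./dotest.py -v +b benchmarks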
llvm-svn: 138608
There should be nothing unwanted there and a simple main.cpp (generated from main.cpp.template)
which includes SB*.h should compile and link with the LLDB framework.
llvm-svn: 136894
The test driver now takes an option "+b" which enables to run just the benchmarks tests.
By default, tests decorated with the @benchmarks_test decorator do not get run.
Add an example benchmarks test directory which contains nothing for the time being,
just to demonstrate the @benchmarks_test concept.
For example,
$ ./dotest.py -v benchmarks
...
----------------------------------------------------------------------
Collected 2 tests
1: test_with_gdb (TestRepeatedExprs.RepeatedExprssCase)
Test repeated expressions with gdb. ... skipped 'benchmarks tests'
2: test_with_lldb (TestRepeatedExprs.RepeatedExprssCase)
Test repeated expressions with lldb. ... skipped 'benchmarks tests'
----------------------------------------------------------------------
Ran 2 tests in 0.047s
OK (skipped=2)
$ ./dotest.py -v +b benchmarks
...
----------------------------------------------------------------------
Collected 2 tests
1: test_with_gdb (TestRepeatedExprs.RepeatedExprssCase)
Test repeated expressions with gdb. ... running test_with_gdb
benchmarks result for test_with_gdb
ok
2: test_with_lldb (TestRepeatedExprs.RepeatedExprssCase)
Test repeated expressions with lldb. ... running test_with_lldb
benchmarks result for test_with_lldb
ok
----------------------------------------------------------------------
Ran 2 tests in 0.270s
OK
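As a sketch of the concept (the class and method names are hypothetical, and a real benchmark test may derive from a dedicated benchmark base class rather than TestBase):
    class MyBenchmarkCase(TestBase):

        @benchmarks_test
        def test_my_benchmark(self):
            """Time something interesting; skipped unless '+b' is passed to the driver."""
            print "benchmarks result for test_my_benchmark"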
Also mark some Python API tests which are missing the @python_api_test decorator.
llvm-svn: 136553
to find out the tests which failed/errored and need re-running. The dotest.py test driver
script is modified to allow specifying multiple -f testclass.testmethod in the command line
to accommodate the redo functionality.
An example,
$ ./redo.py -n 2011-07-29-11_50_14
adding filterspec: TargetAPITestCase.test_find_global_variables_with_dwarf
adding filterspec: DisasmAPITestCase.test_with_dsym
Running ./dotest.py -v -f TargetAPITestCase.test_find_global_variables_with_dwarf -f DisasmAPITestCase.test_with_dsym
...
----------------------------------------------------------------------
Collected 2 tests
1: test_with_dsym (TestDisasmAPI.DisasmAPITestCase)
Exercise getting SBAddress objects, disassembly, and SBAddress APIs. ... ok
2: test_find_global_variables_with_dwarf (TestTargetAPI.TargetAPITestCase)
Exercise SBTarget.FindGlobalVariables() API. ... ok
----------------------------------------------------------------------
Ran 2 tests in 15.328s
OK
llvm-svn: 136533
the CommandInterpreter where it was always being used.
Make sure that Modules can track their object file offsets correctly to
allow opening of sub object files (like the "__commpage" on darwin).
Modified the Platforms to be able to launch processes. The first part of this
move is that the platform will soon become the entity that launches your program,
and when it does, it uses a new ProcessLaunchInfo class which encapsulates
all process launching settings. This simplifies the internal APIs needed for
launching. I want to slowly phase out process launching from the process
classes, so for now we can still launch just as we used to, but eventually
the platform is the object that should do the launching.
Modified the Host::LaunchProcess in the MacOSX Host.mm to correctly be able
to launch processes with all of the new eLaunchFlag settings. Modified any
code that was manually launching processes to use the Host::LaunchProcess
functions.
Fixed an issue where lldb_private::Args had implicitly defined copy
constructors that could do the wrong thing. This has now been fixed by adding
an appropriate copy constructor and assignment operator.
Make sure we don't add empty ModuleSP entries to a module list.
Fixed the commpage module creation on MacOSX, but we still need to train
the MacOSX dynamic loader to not get rid of it when it doesn't have an entry
in the all image infos.
Abstracted many more calls in ProcessGDBRemote down into the
GDBRemoteCommunicationClient subclass to make the classes cleaner and more
efficient.
Fixed the default iOS ARM register context to be correct and also added support
for targets that don't support the qThreadStopInfo packet by selecting the
current thread (only if needed) and then sending a stop reply packet.
Debugserver can now start up with a --unix-socket (-u for short) and can
then bind to port zero and send the port it bound to to a listening process
on the other end. This allows the GDB remote platform to spawn new GDB server
instances (debugserver) to allow platform debugging.
llvm-svn: 129351
on the command line. For example, use '-A x86_64^i386' to launch the inferior using both x86_64
and i386.
This is an example of building the debuggee using both the clang and gcc compilers:
[17:30:46] johnny:/Volumes/data/lldb/svn/trunk/test $ ./dotest.py -C clang^gcc -v -f SourceManagerTestCase.test_modify_source_file_while_debugging
Session logs for test failures/errors will go into directory '2011-03-03-17_31_39'
Command invoked: python ./dotest.py -C clang^gcc -v -f SourceManagerTestCase.test_modify_source_file_while_debugging
Configuration: compiler=clang
----------------------------------------------------------------------
Collected 1 test
1: test_modify_source_file_while_debugging (TestSourceManager.SourceManagerTestCase)
Modify a source file while debugging the executable. ... Command 'run' failed!
original content: #include <stdio.h>
int main(int argc, char const *argv[]) {
printf("Hello world.\n"); // Set break point at this line.
return 0;
}
new content: #include <stdio.h>
int main(int argc, char const *argv[]) {
printf("Hello lldb.\n"); // Set break point at this line.
return 0;
}
os.path.getmtime() after writing new content: 1299202305.0
content restored to: #include <stdio.h>
int main(int argc, char const *argv[]) {
printf("Hello world.\n"); // Set break point at this line.
return 0;
}
os.path.getmtime() after restore: 1299202307.0
ok
----------------------------------------------------------------------
Ran 1 test in 8.259s
OK
Configuration: compiler=gcc
----------------------------------------------------------------------
Collected 1 test
1: test_modify_source_file_while_debugging (TestSourceManager.SourceManagerTestCase)
Modify a source file while debugging the executable. ... original content: #include <stdio.h>
int main(int argc, char const *argv[]) {
printf("Hello world.\n"); // Set break point at this line.
return 0;
}
new content: #include <stdio.h>
int main(int argc, char const *argv[]) {
printf("Hello lldb.\n"); // Set break point at this line.
return 0;
}
os.path.getmtime() after writing new content: 1299202307.0
content restored to: #include <stdio.h>
int main(int argc, char const *argv[]) {
printf("Hello world.\n"); // Set break point at this line.
return 0;
}
os.path.getmtime() after restore: 1299202309.0
ok
----------------------------------------------------------------------
Ran 1 test in 2.301s
OK
[17:31:49] johnny:/Volumes/data/lldb/svn/trunk/test $
llvm-svn: 126979
of Stephen Wilson's idea (thanks for the input Stephen!). What I ended up
doing was:
- Got rid of ArchSpec::CPU (which was a generic CPU enumeration that mimics
the contents of llvm::Triple::ArchType). We now rely upon the llvm::Triple
to give us the machine type from llvm::Triple::ArchType.
- There is a new ArchSpec::Core definition which further qualifies the CPU
core we are dealing with into a single enumeration. If you need support for
a new Core and want to debug it in LLDB, it must be added to this list. In
the future we can allow for dynamic core registration, but for now it is
hard coded.
- The ArchSpec can now be initialized with a llvm::Triple or with a C string
that represents the triple (it can just be an arch still like "i386").
- The ArchSpec can still initialize itself with a architecture type -- mach-o
with cpu type and subtype, or ELF with e_machine + e_flags -- and this will
then get translated into the internal llvm::Triple::ArchType + ArchSpec::Core.
The mach-o cpu type and subtype can be accessed using the getter functions:
uint32_t
ArchSpec::GetMachOCPUType () const;
uint32_t
ArchSpec::GetMachOCPUSubType () const;
But these functions are just converting our internal llvm::Triple::ArchType
+ ArchSpec::Core back into mach-o. Same goes for ELF.
All code has been updated to deal with the changes.
This should abstract us until later when the llvm::TargetSpec stuff gets
finalized and we can then adopt it.
llvm-svn: 126278
module.
On my MBP running SnowLeopard:
$ DOTEST_PROFILE=YES DOTEST_SCRIPT_DIR=/Volumes/data/lldb/svn/trunk/test /System/Library/Frameworks/Python.framework/Versions/Current/lib/python2.6/cProfile.py -o my.profile ./dotest.py -v -w 2> ~/Developer/Log/lldbtest.log
After that, I used the pstats.py module to browse the statistics recorded in the my.profile file.
llvm-svn: 123807
Add an attribute __python_api_test__ (set to True) to the @python_api_test decorated
test method to distinguish them from the lldb command line tests.
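A hedged sketch (not the actual lldbtest.py code) of how the decorator can attach this marker:
    def python_api_test(func):
        """Decorate a test that exercises the Python (SB) API only."""
        func.__python_api_test__ = True  # distinguishes it from lldb command line tests
        return func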
llvm-svn: 121500
Example:
@python_api_test
def test_evaluate_expression_python(self):
    """Test SBFrame.EvaluateExpression() API for evaluating an expression."""
    ...
The opposite of a Python-API-only test is an lldb command line test, which sends
commands to the lldb command interpreter. Add a '-a' option to the test driver
to skip Python API only tests.
Modify TestExprs.py to mark a test as @python_api_test and remove an @expectedFailure
decorator as the bug has been fixed.
llvm-svn: 121442
as the args and the envs to the launched process.
o lldbtest.py:
Forgot to check in some assertion messages changes for lldbtest.py.
o dotest.py:
Also add "api" category to the default lldb log option list.
llvm-svn: 121220