This change addresses a corner-case bug in the test
infrastructure where a test file times out *outside*
of any running test method. In those cases, the issue
was charged to the file, not to a test method within
the file. When such a file was re-run successfully,
none of the test-method-level successes would clear
the file-level issue.
This change fixes that: for every test file being
rerun (whether marked flaky or selected via the
--rerun-all-issues flag), we search for file-level
test issues charged to that file. Each file-level
issue found for a rerun file is then cleared.
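A rough sketch of that clearing step follows; the names
file_level_issues and rerun_file_set are illustrative, not the
actual dotest data structures:

    def clear_file_level_issues(file_level_issues, rerun_file_set):
        """Drop any file-level issue whose file is queued for rerun.

        file_level_issues: dict mapping test file path -> issue event
        rerun_file_set: set of test file paths queued for rerun
        """
        for test_file in list(file_level_issues):
            if test_file in rerun_file_set:
                # The rerun's per-method results replace this
                # file-level charge, so drop it now.
                del file_level_issues[test_file]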
A test of this feature is added to issue_verification,
using that directory's technique of moving the *.py.park
file to *.py to do an end-to-end validation.
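For reference, the park-file technique is just a rename before
the run and a rename back afterwards; a minimal sketch (the
helper name and paths are hypothetical, not the actual
issue_verification code):

    import os

    def with_parked_test(park_path, run_suite):
        """Activate a *.py.park test, run the suite, re-park it."""
        # e.g. TestFoo.py.park -> TestFoo.py
        active_path = park_path[:-len(".park")]
        os.rename(park_path, active_path)
        try:
            run_suite()
        finally:
            os.rename(active_path, park_path)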
This change also adds a .gitignore entry for pyenv
project-level files and fixes a few minor PEP 8
formatting violations in files I touched.
Fixes:
llvm.org/pr27423
llvm-svn: 282990
Summary:
This change enhances the LLDB test infrastructure to convert
load-time exceptions in a given Python test module into errors.
Before this change, specifying a non-existent test decorator,
or otherwise having a load-time error in a Python test module,
would not get flagged as an error.
With this change, typos and other load-time errors in a Python
test file are converted to errors and reported by the
test runner.
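Conceptually, the runner wraps the module import and turns any
exception into a reported error instead of letting it escape; a
simplified sketch, with report_error standing in for the real
event-reporting hook (not the actual dotest API):

    import importlib
    import traceback

    def load_test_module(module_name, report_error):
        """Import a test module; convert load-time exceptions
        into reported errors rather than crashing the runner."""
        try:
            return importlib.import_module(module_name)
        except Exception:
            # e.g. a typo'd decorator name raises NameError
            # at import time.
            report_error(module_name, traceback.format_exc())
            return None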
This change also includes test infrastructure tests covering
the new work here. I'm going to wait until these
infrastructure tests are runnable on the main platforms before
I work them into the normal testing workflows.
The test infrastructure tests can be run using the standard
Python unittest discovery mechanism:
cd packages/Python/lldbsuite/test_event
python -m unittest discover -s test/src -p 'Test*.py'
Those tests run the dotest inferior with a known-broken test
and verify that the errors are caught. These tests did not
pass until I modified dotest.py to capture them properly.
@zturner, if you get the chance, could you try the steps above
(the python -m unittest ... line) on Windows? That would let us
address any Python 2/3/Windows issues there. I don't think
there's anything fancy, but I didn't want to hook it into the
test flow until I know it works there.
I'll be slowly adding more tests covering other breakage I've
occasionally seen that didn't get collected as part of the
summarization. This is the biggest issue I'm aware of.
Reviewers: zturner, labath
Subscribers: zturner, lldb-commits
Differential Revision: http://reviews.llvm.org/D20193
llvm-svn: 269489
The race boiled down to this:
If a test worker queue is able to run the test inferior and
clean up before the dosep.py listener socket is spun up, and
the worker queue is the last one (as would be the case when
there's only one test rerunning in the rerun queue), then
the test suite will exit the main loop before having a chance
to process any test events coming from the test inferior or
the worker queue job control.
I found this race to be far more likely on fast hardware;
our Linux CI is one such example. While it shows
up primarily during meta test events generated by
a worker thread when a test inferior times out or
exits with an exceptional exit (e.g. a seg fault), it only
requires that the OS take longer to hook up the
listener socket than it takes for the final test inferior
and worker thread to shut down.
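The fix amounts to guaranteeing the listener socket is listening
before any worker can finish; a simplified ordering sketch (not
the actual dosep.py code):

    import socket
    import threading

    def start_listener_then_workers(run_workers):
        """Bind and listen *before* spawning workers, so no worker
        can exit before the listener is ready for its events."""
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("127.0.0.1", 0))  # OS-assigned port
        listener.listen(5)
        port = listener.getsockname()[1]
        # Workers connect to `port` to deliver test events; since
        # listen() has already succeeded, none can race past it.
        worker = threading.Thread(target=run_workers, args=(port,))
        worker.start()
        return listener, worker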
See:
http://reviews.llvm.org/D19214
Reviewed by:
Pavel Labath
llvm-svn: 266624
Summary:
This adds the ability to mark tests that do not complete due to hangs,
crashes, etc., as "expected", to avoid flagging the build red for a known
problem. Functionally, this extends the scope of the existing
expectedFailureXXX decorators to cover these states as well. Once this is
in, I will start replacing the magic list of failing tests in dosep.py
with our regular annotations, which should hopefully make the code
simpler.
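As an illustration of the idea only (not LLDB's actual decorator
implementation), the mark can be a simple attribute that the
results formatter consults when classifying a non-completing test:

    def expectedFailure(func):
        """Hypothetical sketch: tag a test so non-completion
        (hang, crash, etc.) is reported as expected."""
        func.__expected_failure__ = True
        return func

    def classify_outcome(test_func, outcome):
        # outcome is e.g. "timeout", "crash", or "fail"
        if getattr(test_func, "__expected_failure__", False):
            return "expected-" + outcome
        return outcome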
Reviewers: tfiala
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D15530
llvm-svn: 255763
Use of --rerun-all-issues enables any test method failure, not
just test methods marked with the flakey decorator, to be rerun.
Currently this does not change the flakey logic's immediate rerun
attempt. I want to make sure this doesn't cause any significant issues
before changing that part.
The rerun reporting is only known to work properly with the
default (new) BasicResultsFormatter reporting. Once we work out
any issues, I'll go back and make sure the curses output handles
it properly as well.
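The eligibility decision reduces to something like the following
sketch (the event field name marked_flakey is an assumption):

    def is_rerun_eligible(test_event, rerun_all_issues):
        """Decide whether a failed test method should be queued
        for rerun: any issue with --rerun-all-issues, otherwise
        only flakey-marked tests."""
        if rerun_all_issues:
            return True
        return test_event.get("marked_flakey", False)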
llvm-svn: 255543
Also adds full path info for exceptional exits and timeouts when
no test method is currently running.
Adds the --rerun-all-issues command-line arg. If specified, all
test issues are eligible for rerun. If not specified, only tests
marked flakey are eligible for rerun.
The actual rerunning will occur in an upcoming change. This
change just handles the accounting of what should be rerun.
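A minimal sketch of that accounting pass (field names like
test_filename and marked_flakey are illustrative, not the real
event schema):

    def collect_rerun_files(issue_events, rerun_all_issues):
        """Return the set of test files to requeue; no rerun
        happens here, just the bookkeeping."""
        rerun_files = set()
        for event in issue_events:
            if rerun_all_issues or event.get("marked_flakey", False):
                rerun_files.add(event["test_filename"])
        return rerun_files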
llvm-svn: 255438
Also adds enable.py/disable.py scripts to simplify turning the
issue_verification tests on and off, which is helpful when
testing a results formatter.
llvm-svn: 255161
This change is a trial balloon to verify that the default test summary
output produces the right output for the buildbot issue-detection script.
The effect of this change will be reverted after verifying the testbot
behavior. This change will not stay in as-is and will knowingly create
noise; see this thread:
http://lists.llvm.org/pipermail/lldb-dev/2015-December/009048.html
llvm-svn: 255131