- Feature Name: `libtest-json`
- Start Date: 2024-01-18
- Pre-RFC: Internals
- eRFC PR: rust-lang/rfcs#3558
- Tracking Issue: rust-lang/testing-devex-team
# Summary
This eRFC lays out a path for stabilizing programmatic output for libtest.
# Motivation
libtest is the test harness used by default for tests in cargo projects. It provides the CLI that cargo calls into, and it enumerates and runs the tests discovered in that binary. It ships with rustup and has the same compatibility guarantees as the standard library.
Before 1.70, anyone could pass `--format json` despite it being unstable. When this was fixed to require nightly, the reaction showed how much people had come to rely on programmatic output.
Cargo could also benefit from programmatic test output to improve user interactions, including:
- Wanting to run test binaries in parallel, like `cargo nextest`
- Lack of summary across all binaries
- Noisy test output (see also #5089)
- Confusing command-line interactions (see also #8903, #10392)
- Poor messaging when a filter doesn't match
- Smarter test execution order (see also #8685, #10673)
- JUnit output is incorrect when running multiple test binaries
- Lack of failure when test binaries exit unexpectedly
Most of that involves shifting responsibilities from the test harness to the test runner, which has the side effects of:
- Allowing more powerful experiments with custom test runners (e.g. `cargo nextest`) as they'll have more information to operate on
- Lowering the barrier for custom test harnesses (like `libtest-mimic`) as UI responsibilities are shifted to the test runner (`cargo test`)
# Guide-level explanation
The intended outcomes of this experiment are:
- Updates to libtest's unstable output
- A stabilization request to T-libs-api using the process of their choosing
Additional outcomes we hope for are:
- A change proposal for T-cargo for `cargo test` and `cargo bench` to provide their own UX on top of the programmatic output
- A change proposal for T-cargo to allow users of custom test harnesses to opt in to the new UX using programmatic output
While having a plan for evolution takes some burden off of the format, we should still do some due diligence in ensuring the format works well for our intended uses. Our rough plan for vetting a proposal is:
- Create an experimental test harness where each `--format <mode>` is a skin over a common internal `serde` structure (see the sketch following this list), emulating what `libtest` and `cargo`'s relationship will be like on a smaller scale for faster iteration
- Transition libtest to this proposed interface
- Add experimental support for cargo to interact with test binaries through the unstable programmatic output
- Create a stabilization report for programmatic output for T-libs-api and a cargo RFC for custom test harnesses to opt into this new protocol
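To make the "skin over a common structure" idea concrete, here is a minimal sketch, assuming `serde` and `serde_json` as dependencies; the `TestEvent` name, its variants, and its fields are hypothetical, not the proposed format.

```rust
use serde::Serialize;

// Hypothetical internal event structure shared by all format modes;
// names and fields are illustrative only.
#[derive(Serialize)]
#[serde(tag = "event", rename_all = "snake_case")]
enum TestEvent<'a> {
    Discovered { name: &'a str },
    Started { name: &'a str },
    Ok { name: &'a str, elapsed_s: f64 },
    Failed { name: &'a str, elapsed_s: f64, stdout: &'a str },
    Ignored { name: &'a str, reason: Option<&'a str> },
}

fn main() -> serde_json::Result<()> {
    let events = [
        TestEvent::Started { name: "tests::parses_empty_input" },
        TestEvent::Ok { name: "tests::parses_empty_input", elapsed_s: 0.002 },
    ];
    for event in &events {
        // A JSON mode would emit one object per line (JSON Lines);
        // human-oriented modes would render the same events as text.
        println!("{}", serde_json::to_string(event)?);
    }
    Ok(())
}
```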
It is expected that the experimental test harness have functional parity with `libtest`, including:
- Ignored tests
- Parallel running of tests
- Benches being both a bench and a test
- Test discovery
We should evaluate the design against the capabilities of test runners from different ecosystems to ensure the format is extensible enough for what people may do with custom test harnesses or `cargo test`, including:
- Ability to implement different format modes on top (as sketched after this list)
- Both test running and `--list` mode
- Ability to run test harnesses in parallel
- Tests with multiple failures
- Bench support
- Static and dynamic parameterized tests / test fixtures
- Static and dynamic test skipping
- Test markers
- doctests
- Test location (for IDEs)
- Collect metrics related to tests
  - Elapsed time
  - Temp dir sizes
  - RNG seed
Warning: this doesn't mean they'll all be supported in the initial stabilization, just that we feel confident the format can support them.
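To illustrate the first capability in the list above, here is a minimal sketch of a human-oriented format mode implemented on top of event values; `TestEvent` and `render_pretty` are hypothetical names echoing the earlier sketch.

```rust
// Hypothetical event type, mirroring the earlier sketch (serde derives
// omitted for brevity). `name` is carried on every event for real
// consumers; this tiny renderer only reads it on `Started`.
#[allow(dead_code)]
enum TestEvent<'a> {
    Started { name: &'a str },
    Ok { name: &'a str },
    Failed { name: &'a str, message: &'a str },
}

// A pretty-printing "skin": it owns all presentation, while a JSON
// mode would serialize the same events instead of formatting them.
fn render_pretty(event: &TestEvent) {
    match event {
        TestEvent::Started { name } => print!("test {name} ... "),
        TestEvent::Ok { .. } => println!("ok"),
        TestEvent::Failed { message, .. } => println!("FAILED\n{message}"),
    }
}

fn main() {
    let events = [
        TestEvent::Started { name: "tests::parses_empty_input" },
        TestEvent::Ok { name: "tests::parses_empty_input" },
        TestEvent::Started { name: "tests::rejects_bad_input" },
        TestEvent::Failed { name: "tests::rejects_bad_input", message: "assertion failed" },
    ];
    for event in &events {
        render_pretty(event);
    }
}
```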
We also need to evaluate how we'll support evolving the format.
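One way to support evolution (an assumption for this sketch, not a decided design) is additive changes paired with lenient consumers: serde ignores unknown JSON fields by default, so older consumers keep working when newer producers add fields. A minimal sketch:

```rust
use serde::Deserialize;

// A consumer's view of one event. Unknown fields are ignored (serde's
// default), and newer optional fields deserialize to `None` when an
// older producer omits them. All names here are hypothetical.
#[derive(Deserialize, Debug)]
struct TestOutcome {
    name: String,
    elapsed_s: Option<f64>,
}

fn main() -> serde_json::Result<()> {
    // A newer producer emits a field (`rng_seed`) this consumer predates.
    let line = r#"{"name":"tests::parses_empty_input","elapsed_s":0.002,"rng_seed":42}"#;
    let outcome: TestOutcome = serde_json::from_str(line)?;
    println!("{outcome:?}");
    Ok(())
}
```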
An important consideration is the compile-time burden we put on custom test harnesses, as that will be an important factor in people's willingness to pull them in; `libtest` comes pre-built today.
Custom test harnesses are important for this discussion because:
- Many already exist today, directly or shoe-horned on top of `libtest` (like `libtest-mimic`)
- The compatibility guarantees around libtest mean that development of new ideas is easier through custom test harnesses
# Reference-level explanation
## Resources
Comments made on libtest's format:
- Format is complex (see also 1)
- Benches need love
- Type field is overloaded
- Suite/child relationship is missing
- Lack of suite name makes it hard to use programmatic output from Cargo (see also 1)
- Format is underspecified
- Lacks ignored reason (resolved?)
- Lack of `rendered` field
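For reference, libtest's current unstable `--format json` output looks roughly like the following (recalled from nightly; exact fields vary by toolchain version). It illustrates several of the points above: the `type` field does double duty for suites and tests, and suite events carry no name, so the suite/test relationship is implied only by ordering.

```json
{ "type": "suite", "event": "started", "test_count": 1 }
{ "type": "test", "event": "started", "name": "tests::parses_empty_input" }
{ "type": "test", "name": "tests::parses_empty_input", "event": "ok" }
{ "type": "suite", "event": "ok", "passed": 1, "failed": 0, "ignored": 0, "measured": 0, "filtered_out": 0, "exec_time": 0.0005 }
```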
# Drawbacks
# Rationale and alternatives
See also:
- https://internals.rust-lang.org/t/alternate-libtest-output-format/6121
- https://internals.rust-lang.org/t/past-present-and-future-for-rust-testing/6354
# Prior art
## Existing formats
# Unresolved questions
# Future possibilities
## Improve custom test harness experience
With less of a burden being placed on custom test harnesses, we can more easily explore what is needed to make them a first-class experience.
See