
Implement test framework feature to check for expected stderr outputs
AbandonedPublic

Authored by freetrader on Jun 16 2017, 13:11.

Details

Reviewers
deadalnix
sickpig
kyuupichan
Mengerian
awemany
Group Reviewers
Restricted Project
Summary

As a by-product, this allows us to re-enable abc-cmdline.py as a first-class
test citizen.

Tests that want certain outputs verified against their stderr must call
the expected_stderr() method of BitcoinTestFramework (from which most tests
are derived). An example of its use can be seen in the adapted abc-cmdline.py.

If tests are run standalone without the wrapper, their stderr output cannot
be checked. In that case they are declared as

Tests completed (maybe successful - stderr checks pending)

instead of

Tests successful

By setting PYTHON_DEBUG, it is possible to have the expected traces printed
after the test has run.

We should stick to the current paradigm of executing tests through the wrapper
whenever possible.

Technically, stderr output collection is enabled between the wrapper and the
test by passing a new --exp-stderr=<file> option to the test, where <file>
is a temporary file. The test collects its expected stderr outputs and writes
them to this temporary file.
When the test terminates, the wrapper compares the stderr outputs actually
produced to the contents of the temporary file. If they match, there is no
problem and the temporary file is removed.
If there is a mismatch, the temporary file is kept for troubleshooting,
and the test causes an overall FAILED result.
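The handshake described above can be sketched in standalone Python. The file handling mirrors the description; the function names and the error message are illustrative, not taken from the actual wrapper or test framework:

```python
import os
import tempfile

def record_expected_stderr(exp_path, expected_lines):
    # Test side: append the expected stderr lines to the file named
    # by the --exp-stderr=<file> option.
    with open(exp_path, "a") as f:
        for line in expected_lines:
            f.write(line + "\n")

def check_stderr(exp_path, actual_stderr):
    # Wrapper side: after the test exits, compare the captured stderr
    # with the expected lines. On a match the temporary file is removed;
    # on a mismatch it is kept for troubleshooting and the run fails.
    with open(exp_path) as f:
        expected = f.read().splitlines()
    if expected == actual_stderr.splitlines():
        os.remove(exp_path)
        return True
    return False

# Simulated round trip with an illustrative error message.
fd, path = tempfile.mkstemp(prefix="exp-stderr-")
os.close(fd)
record_expected_stderr(path, ["Error: illustrative node message"])
matched = check_stderr(path, "Error: illustrative node message\n")
print(matched)                # True: stderr matched
print(os.path.exists(path))   # False: temp file was removed
```

On a mismatch, check_stderr() returns False and leaves the temporary file in place, which is what lets the mismatched expectations be inspected afterwards.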

Test Plan

qa/pull-tester/rpc-tests.py -extended

Diff Detail

Repository
rABC Bitcoin ABC
Branch
expected_stderr_checks_fix_abccmdline
Lint
No Lint Coverage
Unit
No Test Coverage
Build Status
Buildable 321
Build 321: arc lint + arc unit

Event Timeline

Herald added 1 blocking reviewer(s): Restricted Project. · View Herald Transcript · Jun 16 2017, 13:11

Minor fixes from self-review of diff on website

qa/pull-tester/rpc-tests.py
344 ↗(On Diff #526)

This launches the test and then checks stderr for the whole test. That doesn't look like what we are trying to do: we want to check stderr for the node process, not for the test process. The only reason we get the stderr here is that the test itself does nothing with it, which it should. It sounds like the node itself should be started with some object that receives the stream coming out of stderr, and what arrives on that stream should be checked from there. When the node is shut down, an exception is thrown if there are unprocessed things on stderr.

qa/rpc-tests/abc-cmdline.py
39 ↗(On Diff #526)

Make it clear in the name that it is related to stderr.

58 ↗(On Diff #526)

Don't catch blanket exception. Throw a specific exception and catch that specific type.
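The point about specific exceptions can be illustrated with a minimal sketch; the exception and function names here are invented for illustration and are not the ones used in abc-cmdline.py:

```python
# Define a specific exception type and catch exactly that, rather
# than using a blanket `except Exception` that would also swallow
# unrelated failures.
class ExcessiveBlockSizeError(ValueError):
    pass

def parse_excessive_block_size(value):
    # Illustrative validation: reject sizes of 1 MB or less.
    size = int(value)
    if size <= 1000000:
        raise ExcessiveBlockSizeError("excessiveblocksize must be > 1MB")
    return size

try:
    parse_excessive_block_size("1000")
except ExcessiveBlockSizeError as e:
    print("caught:", e)  # only this specific error type is handled
```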

freetrader planned changes to this revision.EditedJun 16 2017, 14:35

As discussed on Slack, it would be better to have the framework's node class deal with all aspects of collecting and verifying the stderr data.
The node class would be extended with methods to invoke stderr verification, but would perform it on node stop in any case.
If mismatches between expected and actual outputs are found, an exception would be raised to fail the test.

This would be a much cleaner design than the current one.
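A minimal sketch of that design, with all names assumed (the real node class and its methods may differ):

```python
class StderrMismatchError(AssertionError):
    pass

class TestNode:
    # Hypothetical node object that owns the node's stderr stream and
    # verifies it on stop, as proposed above.
    def __init__(self):
        self._expected = []
        self._captured = []

    def expect_stderr(self, line):
        # Register a line the node is expected to print on stderr.
        self._expected.append(line)

    def _on_stderr(self, line):
        # Called as stderr output arrives from the node process.
        self._captured.append(line)

    def stop(self):
        # Verification happens on node stop in any case; a mismatch
        # fails the test via an exception.
        if self._captured != self._expected:
            raise StderrMismatchError(
                "expected %r, got %r" % (self._expected, self._captured))

node = TestNode()
node.expect_stderr("Error: illustrative message")
node._on_stderr("Error: illustrative message")
node.stop()  # no exception: stderr matched expectations
```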

qa/rpc-tests/abc-cmdline.py
39 ↗(On Diff #526)

ok

58 ↗(On Diff #526)

Yeah, I was intending to catch MissionOptionError but forgot to put in the right exception.

qa/pull-tester/rpc-tests.py
344 ↗(On Diff #526)

Agreed on the need for a design change.

Have taken it back to 'Plan Changes'.

Abandoned since superseded by D230.