
test_runner.py should not list all the tests, but instead be able to find them.
Open, Normal, Public

Description

The case of assumevalid.py shows that the test framework should discover and run the tests by itself. If anything, tests should have to be explicitly blacklisted rather than whitelisted in order to run.

We want tests to run as soon as they are added, without the developer having to do anything else to enable them.

Event Timeline

deadalnix triaged this task as Normal priority. Jan 17 2018, 18:01
deadalnix created this task.

Tests are ordered by estimated runtime, to favor running tests in parallel (in the BASE_SCRIPTS and EXTENDED_SCRIPTS arrays).

Do you think we can add a priority as a prefix of the test name (e.g. 01_assumevalid.py, a la init.d style) in order to keep that order? Or should we just ignore the order? Any other idea, like adding the priority as a comment in the first line of the script?

Also, EXTENDED tests should be marked somehow, like E01_assumevalid.py.

Other than that, this task would be to 1) glob the right path for .py files, sort them, and filter extended tests if needed; 2) exclude blacklisted ones. Sounds OK?
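
A minimal sketch of that discovery logic, assuming the 'E' prefix convention for extended tests suggested above; the BLACKLIST set and function name are hypothetical, just for illustration:

import glob
import os

# Hypothetical exclusion list; the task only says tests should be
# explicitly blacklisted, not what the list is called.
BLACKLIST = {'example_disabled.py'}

def discover_tests(test_dir, include_extended=False):
    # Glob the test directory for .py files and sort them.
    scripts = sorted(
        os.path.basename(path)
        for path in glob.glob(os.path.join(test_dir, '*.py')))
    if not include_extended:
        # Assuming extended tests are marked with an 'E' prefix,
        # e.g. E01_assumevalid.py.
        scripts = [s for s in scripts if not s.startswith('E')]
    # Exclude blacklisted tests.
    return [s for s in scripts if s not in BLACKLIST]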

We can have a file describing timings, but the absence of a timing entry, or of the file itself, shouldn't mean the test doesn't run.

OK, what about having 3 paths:

/base/
/extended/
/disabled/

where the scripts are kept.

Then there is a timing.json file that has something like:

[
  {
    "name": "script1.py",
    "runtime": "32.32s"
  },
  ...
]

If the file is present, it is used to sort execution order; if not, we run the tests alphabetically.
Running the tests with `--update-timing-file` will update the timing file.
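
A minimal sketch of that ordering rule, assuming the timing.json format above and that "sort execution order" means longest-running first, as test_runner.py already does to favor parallelism:

import json
import os

def order_tests(scripts, timing_path='timing.json'):
    # No timing file: fall back to alphabetical order.
    if not os.path.exists(timing_path):
        return sorted(scripts)
    with open(timing_path) as f:
        # Entries look like {"name": "script1.py", "runtime": "32.32s"}.
        timings = {entry['name']: float(entry['runtime'].rstrip('s'))
                   for entry in json.load(f)}
    # Longest first favors parallel execution; a missing entry defaults
    # to 0.0 so the test still runs, just without a priority slot.
    return sorted(scripts, key=lambda s: timings.get(s, 0.0), reverse=True)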

That sounds like a plan that could work.

Thinking about that a bit more, I'm not sure we want to have different paths. We can just have a time-based cutoff, and if no time is provided for a test, we run it.

Is it OK to store timing.json in git (test/functional/timing.json)?

Running through a single, global list with a cutoff seems like a simple and effective solution. If no timing file is present, all tests are run. If a test is not listed in the file, it will always run.
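
A minimal sketch of that selection rule, assuming timings is the name-to-seconds dict from the previous sketch; the cutoff_secs and run_extended parameters are hypothetical names:

def select_tests(scripts, timings, cutoff_secs, run_extended=False):
    selected = []
    for script in scripts:
        runtime = timings.get(script)
        if runtime is None:
            # Not listed in timing.json: always run.
            selected.append(script)
        elif runtime <= cutoff_secs or run_extended:
            # Below the cutoff it is a base test; above it, it only
            # runs when extended tests are requested.
            selected.append(script)
    return selected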

With this solution, when adding a new *.py test, the developer must manually run with `--update-timing-file`. If they forget, the test will always run (until someone updates and commits the file).
contributing.md / test/functional/readme.md should be updated to reflect this.

Alternative solution: if a test is not present in timing.json, just fail it. The downside of this alternative: if a developer runs with `--update-timing-file`, the timings might not reflect the real values, because they will probably add more tests before committing. On the other hand, the same scenario might happen if a developer updates an existing test and forgets to run with `--update-timing-file`.
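
A minimal sketch of that alternative, under the same assumptions as the sketches above (function name hypothetical):

def check_timing_coverage(scripts, timings):
    # Fail fast when a discovered test has no entry in timing.json,
    # pointing the developer at --update-timing-file.
    missing = sorted(s for s in scripts if s not in timings)
    if missing:
        raise SystemExit(
            'No timing.json entry for: %s\n'
            'Run test_runner.py with --update-timing-file to add them.'
            % ', '.join(missing))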