
Write test timing information to the build tree when running test_runner.py
ClosedPublic

Authored by matra774 on Feb 21 2018, 22:09.

Details

Summary

Timing information is stored in 'timing.json'. It is not generated when the build and src trees point to the same location.

Test Plan

Preconditions: make sure that you do not build from the src tree:
mkdir build && cd build
../configure
make check

--> Test1: Run a single test and check that its timing is present in the build directory
rm build/timing.json
build/test/functional/test_runner.py uptime.py
cat build/timing.json

--> Test2: Run another test; its entry should be inserted in the timing file before the previous one
build/test/functional/test_runner.py abc-cmdline.py
cat build/timing.json

--> Test3: Run another test. A single entry should be added in the middle of timing.json
--> Note: this commit preserves the behavior where it is not possible to include
--> or exclude an instance of a test with additional parameters - if explicitly specified,
--> it will only run the test instance without parameters
build/test/functional/test_runner.py txn_doublespend.py
cat build/timing.json

--> Test4: Check that the timing file is not present in the src directory
find -name timing.json

--> Test5: Switch to building from the src tree; timing.json should not be generated there
rm build/timing.json
./configure
make check
test/functional/test_runner.py uptime.py
find -name timing.json
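
For reference, a minimal sketch of how the timing bookkeeping behind Tests 1-3 and Test 5 could work, assuming timing.json is a JSON list of {"name", "time"} entries; the method and helper names here are illustrative, not the actual diff:

import json
import os


class Timings:
    def __init__(self, timing_file):
        self.timing_file = timing_file
        try:
            with open(timing_file, encoding="utf8") as f:
                self.timings = json.load(f)
        except (IOError, ValueError):
            self.timings = []

    def save_timings(self, results):
        # results: iterable of (test_name, duration_in_seconds) pairs
        by_name = {entry["name"]: entry for entry in self.timings}
        for name, duration in results:
            by_name[name] = {"name": name, "time": int(duration)}
        # Entries are kept sorted by test name, which is why a new entry can
        # land before or between the existing ones (Test2 and Test3 above).
        self.timings = sorted(by_name.values(), key=lambda e: e["name"])
        with open(self.timing_file, "w", encoding="utf8") as f:
            json.dump(self.timings, f, indent=True)


def maybe_save_timings(src_dir, build_dir, results):
    # Skip writing when the build tree and the src tree are the same location
    # (Test5 above); otherwise write/refresh build_dir/timing.json.
    if os.path.realpath(src_dir) == os.path.realpath(build_dir):
        return
    Timings(os.path.join(build_dir, "timing.json")).save_timings(results)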

Diff Detail

Repository
rABC Bitcoin ABC
Lint
Lint Not Applicable
Unit
Tests Not Applicable

Event Timeline

Owners added a reviewer: Restricted Owners Package. Feb 21 2018, 22:09

This is wonderful. Gave it a first pass. Will give it another a bit later.

test/functional/test_runner.py
67 ↗(On Diff #2940)

Should we make this just a list of lists?

527 ↗(On Diff #2940)

What do you see these timings providing us with? The CI system should track them from junit as well, and the filesystems there are ephemeral anyways.

test/functional/test_runner.py
67 ↗(On Diff #2940)

Yes, this was my initial idea, but I later decided to keep the existing format: it's easier to type testname -p1 -p2 -p3 than testname ["-p1", "-p2", "-p3"].

I can change it to a list of lists. I am a strongly typed person myself and prefer typed data structures over string manipulation and parsing.

527 ↗(On Diff #2940)

It helps us distinguish between long and short running tests.
The differences between junit_results.xml and timing.json are the following:

  • junit_results.xml is not stored in git
  • junit_results.xml is updated on each run and contains only the tests that were executed as part of the last run
  • junit_results.xml contains additional information (such as stdout/stderr) from the last run

An alternative implementation could be: get rid of the Timings class, keep the EXTENDED_SCRIPTS list, and make the developers of new, slow tests responsible for manually updating it.

schancel added inline comments.
test/functional/test_runner.py
67 ↗(On Diff #2940)

Okay. We can change it later :) I just hate trying to parse and split things like this. A map to a list of parameters would avoid (t.split()[1:] for t in TEST_PARAMS if len(t.split()) > 1 and t.split()[0] == test_name)
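
For comparison, a minimal sketch of the two shapes under discussion; the identifiers are illustrative and not taken from the diff:

# The string-based format the diff keeps, and the lookup it forces on callers
# (mirrors the expression quoted above):
TEST_PARAMS = [
    "txn_doublespend.py --mineblock",
]
test_name = "txn_doublespend.py"
params = [t.split()[1:] for t in TEST_PARAMS
          if len(t.split()) > 1 and t.split()[0] == test_name]

# The map-to-a-list-of-parameters alternative; the lookup becomes a plain
# dict access with a default:
TEST_PARAMS_MAP = {
    "txn_doublespend.py": [["--mineblock"]],
}
params = TEST_PARAMS_MAP.get(test_name, [[]])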

133 ↗(On Diff #2940)

I'd prefer it if we formatted these before joining.

This revision is now accepted and ready to land.Feb 22 2018, 21:52
deadalnix requested changes to this revision.Feb 23 2018, 06:46
deadalnix added inline comments.
test/functional/README.md
47 ↗(On Diff #2940)

What happens if a test isn't in there? I don't think we should require people to do this.

test/functional/test_runner.py
67 ↗(On Diff #2940)

These tests need to run with and without these params. The different options do not seem to appear in the generated timing.

This revision now requires changes to proceed.Feb 23 2018, 06:46
test/functional/README.md
47 ↗(On Diff #2940)

If a test is not there, it is always run (its duration is assumed to be 0), as per your comment in T163.

We can have some file describing timing, but the absence of timing or listing shouldn't mean the test doesn't run.

Alternative implementation: if a test is not present, it is always run, and its duration is recorded in timing.json even if --updatetimings was NOT specified.
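
A minimal sketch of the lookup this would imply, assuming timing.json is a list of {"name", "time"} entries; the helper name is hypothetical:

def get_test_time(timings, test_name):
    # A test missing from timing.json is treated as taking 0 seconds, so it
    # is never filtered out as a long-running test and always runs.
    for entry in timings:
        if entry["name"] == test_name:
            return entry["time"]
    return 0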

test/functional/test_runner.py
67 ↗(On Diff #2940)

The old implementation always ran these tests with params. See line 419 in the old code.

If we need to execute same test multiple times, then I suggest the following:

  • if a test is not present in TEST_PARAMS, it is executed only once (without parameters)
  • if it is present, then it is executed for each occurrence in TEST_PARAMS
TEST_PARAMS = [
    # Some tests require parameters. When a test is listed here, those parameters
    # will automatically be added when executing the test.
    "txn_doublespend.py --mineblock",
    "txn_doublespend.py --some --other --combination --of --parameters",
    "anothertest.py --some",
]
This enables us to cover different combinations.

OK?
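
A minimal sketch of the proposed expansion; the expand() helper is hypothetical and not part of the diff:

TEST_PARAMS = [
    "txn_doublespend.py --mineblock",
    "txn_doublespend.py --some --other --combination --of --parameters",
]


def expand(test_name):
    # One run per occurrence in TEST_PARAMS; a single run without parameters
    # when the test is not listed at all.
    runs = [t.split()[1:] for t in TEST_PARAMS if t.split()[0] == test_name]
    return runs if runs else [[]]

# expand("txn_doublespend.py") -> [["--mineblock"], ["--some", "--other", ...]]
# expand("uptime.py")          -> [[]]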

See the two inline comments ensuring that txn_doublespend.py ran with and without the flag.

test/functional/test_runner.py
85 ↗(On Diff #2940)

here

149 ↗(On Diff #2940)

here

test/functional/test_runner.py
67 ↗(On Diff #2940)

I think we should just run the test without params always, and then have a run with params if appropriate. The run without params should be implicit.

Alternative implementation: if a test is not present, it is always run, and its duration is recorded in timing.json even if --updatetimings was NOT specified.

This makes sense to me. Let's go with this one?

OK, this is going to be hard to review as is because it is doing several things and there are a few details to nail down for each of them. It is generally advised to do small diffs, so one thing does not get gated on another.

Going forward:

  • You should split the timing generation part out of this diff. The timing file should be generated in the build tree, not in the source tree. The timing file can always be generated. This is diff 1.
  • Browse the filesystem to discover tests. Use the timing info from diff 1 to choose which ones are extended tests and which are regular ones. Read timing files from the build folder and fall back to the source tree if there isn't anything in the build folder (see the sketch after this list). This is diff 2.
  • Finally, add a script that generates the timing file and copies it into the source tree. Have a bot that updates it on a regular basis on a reference machine when tests are modified.

Splitting this update into smaller commits. This commit saves test timings in build_dir/timing.json.

Test plan:
--> Test1: Run a single test and check that its timing is present in the build directory
rm build/timing.json
build/test/functional/test_runner.py uptime.py
cat build/timing.json

--> Test2: Run another test; its entry should be inserted in the timing file before the previous one
build/test/functional/test_runner.py abc-cmdline.py
cat build/timing.json

--> Test3: Run another test. A single entry should be added in the middle of timing.json
--> Note: this commit preserves the behavior where it is not possible to include
--> or exclude an instance of a test with additional parameters
build/test/functional/test_runner.py txn_doublespend.py
cat build/timing.json

--> Test4: Check that the timing file is not present in the src directory
find -name timing.json

--> Test5: Switch to building from the src tree; timing.json should not be generated there
rm build/timing.json
./configure
make check
test/functional/test_runner.py uptime.py
find -name timing.json

matra774 retitled this revision from Remove BASE_SCRIPTS and use file system to locate all test scripts, use timings.json to separate long running tests from short running tests to Write test timing information to the build tree when running test_runner.py. Feb 25 2018, 11:23
matra774 edited the summary of this revision. (Show Details)
matra774 edited the test plan for this revision. (Show Details)
matra774 retitled this revision from Write test timing information to the build tree when running test_runner.py to Write test timing information to the build tree when running test_runner.py.

Minor fix: Renamed Timings.build_dir to dir

deadalnix added inline comments.
test/functional/test_runner.py
272 ↗(On Diff #2980)

likestamp

This revision is now accepted and ready to land.Feb 25 2018, 18:59
This revision was automatically updated to reflect the committed changes.