
[avalanche] Make peer availability score accessible via getavalanchepeerinfo
ClosedPublic

Authored by sdulfari on Mar 22 2023, 17:49.

Details

Summary

Useful for debugging and for generally inspecting the health of the avalanche network from your node's perspective.
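
For illustration, the new field could be read through the functional test framework's RPC proxy, much like the existing helpers in test_framework/avatools.py. This is a minimal sketch, not part of the diff; the 'availability_score' key is an assumed name taken from the diff title, so check the RPCResult added by this change for the exact field.

def peer_availability_scores(node):
    # `node` is a test-framework TestNode RPC proxy.
    # The 'availability_score' key is assumed; see the RPCResult in this diff.
    return {
        peer["proofid"]: peer.get("availability_score")
        for peer in node.getavalanchepeerinfo()
    }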

Test Plan

Run the test multiple times to check for flakiness:

ninja
for i in {0..20}; do ./test/functional/test_runner.py abc_rpc_getavalanchepeerinfo || break ; done

Diff Detail

Repository: rABC Bitcoin ABC
Lint: Not Applicable
Unit Tests: Not Applicable

Event Timeline

Failed tests logs:

====== Bitcoin ABC functional tests: abc_feature_proof_cleanup.py ======

------- Stdout: -------
2023-03-22T17:56:56.173000Z TestFramework (INFO): Initializing test directory /work/abc-ci-builds/build-without-wallet/test/tmp/test_runner_₿₵_  _20230322_175446/abc_feature_proof_cleanup_101
2023-03-22T17:56:57.765000Z TestFramework (INFO): No proof is cleaned before the timeout expires
2023-03-22T17:56:57.768000Z TestFramework (INFO): Check the proofs with attached nodes are not cleaned
2023-03-22T17:56:57.826000Z TestFramework (ERROR): Unexpected exception caught during testing
Traceback (most recent call last):
  File "/work/test/functional/test_framework/test_framework.py", line 137, in main
    self.run_test()
  File "/work/test/functional/abc_feature_proof_cleanup.py", line 96, in run_test
    self.wait_until(lambda: set(get_proof_ids(node)) == set(
  File "/work/test/functional/test_framework/test_framework.py", line 709, in wait_until
    return wait_until_helper(test_function, timeout=timeout,
  File "/work/test/functional/test_framework/util.py", line 268, in wait_until_helper
    if predicate():
  File "/work/test/functional/abc_feature_proof_cleanup.py", line 96, in <lambda>
    self.wait_until(lambda: set(get_proof_ids(node)) == set(
  File "/work/test/functional/test_framework/avatools.py", line 155, in get_proof_ids
    return [int(peer['proofid'], 16) for peer in node.getavalanchepeerinfo()]
  File "/work/test/functional/test_framework/coverage.py", line 47, in __call__
    return_val = self.auth_service_proxy_instance.__call__(*args, **kwargs)
  File "/work/test/functional/test_framework/authproxy.py", line 160, in __call__
    response, status = self._request(
  File "/work/test/functional/test_framework/authproxy.py", line 117, in _request
    return self._get_response()
  File "/work/test/functional/test_framework/authproxy.py", line 208, in _get_response
    response = json.loads(responsedata, parse_float=decimal.Decimal)
  File "/usr/lib/python3.9/json/__init__.py", line 359, in loads
    return cls(**kw).decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 55 (char 54)
2023-03-22T17:56:57.881000Z TestFramework (INFO): Stopping nodes
2023-03-22T17:56:58.035000Z TestFramework (WARNING): Not cleaning up dir /work/abc-ci-builds/build-without-wallet/test/tmp/test_runner_₿₵_  _20230322_175446/abc_feature_proof_cleanup_101
2023-03-22T17:56:58.036000Z TestFramework (ERROR): Test failed. Test logging available at /work/abc-ci-builds/build-without-wallet/test/tmp/test_runner_₿₵_  _20230322_175446/abc_feature_proof_cleanup_101/test_framework.log
2023-03-22T17:56:58.036000Z TestFramework (ERROR): 
2023-03-22T17:56:58.036000Z TestFramework (ERROR): Hint: Call /work/test/functional/combine_logs.py '/work/abc-ci-builds/build-without-wallet/test/tmp/test_runner_₿₵_  _20230322_175446/abc_feature_proof_cleanup_101' to consolidate all logs
2023-03-22T17:56:58.037000Z TestFramework (ERROR): 
2023-03-22T17:56:58.037000Z TestFramework (ERROR): If this failure happened unexpectedly or intermittently, please file a bug and provide a link or upload of the combined log.
2023-03-22T17:56:58.037000Z TestFramework (ERROR): https://github.com/Bitcoin-ABC/bitcoin-abc/issues
2023-03-22T17:56:58.037000Z TestFramework (ERROR):

Each failure log is accessible here:
Bitcoin ABC functional tests: abc_feature_proof_cleanup.py

Failed tests logs:

====== Bitcoin ABC functional tests: abc_feature_proof_cleanup.py ======

------- Stdout: -------
2023-03-22T18:02:28.167000Z TestFramework (INFO): Initializing test directory /work/abc-ci-builds/build-debug/test/tmp/test_runner_₿₵_  _20230322_175638/abc_feature_proof_cleanup_101
2023-03-22T18:02:30.288000Z TestFramework (INFO): No proof is cleaned before the timeout expires
2023-03-22T18:02:30.290000Z TestFramework (INFO): Check the proofs with attached nodes are not cleaned
2023-03-22T18:02:30.344000Z TestFramework (ERROR): Unexpected exception caught during testing
Traceback (most recent call last):
  File "/work/test/functional/test_framework/test_framework.py", line 137, in main
    self.run_test()
  File "/work/test/functional/abc_feature_proof_cleanup.py", line 96, in run_test
    self.wait_until(lambda: set(get_proof_ids(node)) == set(
  File "/work/test/functional/test_framework/test_framework.py", line 709, in wait_until
    return wait_until_helper(test_function, timeout=timeout,
  File "/work/test/functional/test_framework/util.py", line 268, in wait_until_helper
    if predicate():
  File "/work/test/functional/abc_feature_proof_cleanup.py", line 96, in <lambda>
    self.wait_until(lambda: set(get_proof_ids(node)) == set(
  File "/work/test/functional/test_framework/avatools.py", line 155, in get_proof_ids
    return [int(peer['proofid'], 16) for peer in node.getavalanchepeerinfo()]
  File "/work/test/functional/test_framework/coverage.py", line 47, in __call__
    return_val = self.auth_service_proxy_instance.__call__(*args, **kwargs)
  File "/work/test/functional/test_framework/authproxy.py", line 160, in __call__
    response, status = self._request(
  File "/work/test/functional/test_framework/authproxy.py", line 117, in _request
    return self._get_response()
  File "/work/test/functional/test_framework/authproxy.py", line 208, in _get_response
    response = json.loads(responsedata, parse_float=decimal.Decimal)
  File "/usr/lib/python3.9/json/__init__.py", line 359, in loads
    return cls(**kw).decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 55 (char 54)
2023-03-22T18:02:30.395000Z TestFramework (INFO): Stopping nodes
2023-03-22T18:02:30.546000Z TestFramework (WARNING): Not cleaning up dir /work/abc-ci-builds/build-debug/test/tmp/test_runner_₿₵_  _20230322_175638/abc_feature_proof_cleanup_101
2023-03-22T18:02:30.546000Z TestFramework (ERROR): Test failed. Test logging available at /work/abc-ci-builds/build-debug/test/tmp/test_runner_₿₵_  _20230322_175638/abc_feature_proof_cleanup_101/test_framework.log
2023-03-22T18:02:30.546000Z TestFramework (ERROR): 
2023-03-22T18:02:30.547000Z TestFramework (ERROR): Hint: Call /work/test/functional/combine_logs.py '/work/abc-ci-builds/build-debug/test/tmp/test_runner_₿₵_  _20230322_175638/abc_feature_proof_cleanup_101' to consolidate all logs
2023-03-22T18:02:30.547000Z TestFramework (ERROR): 
2023-03-22T18:02:30.547000Z TestFramework (ERROR): If this failure happened unexpectedly or intermittently, please file a bug and provide a link or upload of the combined log.
2023-03-22T18:02:30.547000Z TestFramework (ERROR): https://github.com/Bitcoin-ABC/bitcoin-abc/issues
2023-03-22T18:02:30.547000Z TestFramework (ERROR):

Each failure log is accessible here:
Bitcoin ABC functional tests: abc_feature_proof_cleanup.py

Failed tests logs:

====== Bitcoin ABC functional tests: abc_feature_proof_cleanup.py ======

------- Stdout: -------
2023-03-22T17:59:36.107000Z TestFramework (INFO): Initializing test directory /work/abc-ci-builds/build-diff/test/tmp/test_runner_₿₵_  _20230322_175536/abc_feature_proof_cleanup_101
2023-03-22T17:59:38.248000Z TestFramework (INFO): No proof is cleaned before the timeout expires
2023-03-22T17:59:38.250000Z TestFramework (INFO): Check the proofs with attached nodes are not cleaned
2023-03-22T17:59:38.303000Z TestFramework (ERROR): Unexpected exception caught during testing
Traceback (most recent call last):
  File "/work/test/functional/test_framework/test_framework.py", line 137, in main
    self.run_test()
  File "/work/test/functional/abc_feature_proof_cleanup.py", line 96, in run_test
    self.wait_until(lambda: set(get_proof_ids(node)) == set(
  File "/work/test/functional/test_framework/test_framework.py", line 709, in wait_until
    return wait_until_helper(test_function, timeout=timeout,
  File "/work/test/functional/test_framework/util.py", line 268, in wait_until_helper
    if predicate():
  File "/work/test/functional/abc_feature_proof_cleanup.py", line 96, in <lambda>
    self.wait_until(lambda: set(get_proof_ids(node)) == set(
  File "/work/test/functional/test_framework/avatools.py", line 155, in get_proof_ids
    return [int(peer['proofid'], 16) for peer in node.getavalanchepeerinfo()]
  File "/work/test/functional/test_framework/coverage.py", line 47, in __call__
    return_val = self.auth_service_proxy_instance.__call__(*args, **kwargs)
  File "/work/test/functional/test_framework/authproxy.py", line 160, in __call__
    response, status = self._request(
  File "/work/test/functional/test_framework/authproxy.py", line 117, in _request
    return self._get_response()
  File "/work/test/functional/test_framework/authproxy.py", line 208, in _get_response
    response = json.loads(responsedata, parse_float=decimal.Decimal)
  File "/usr/lib/python3.9/json/__init__.py", line 359, in loads
    return cls(**kw).decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 55 (char 54)
2023-03-22T17:59:38.355000Z TestFramework (INFO): Stopping nodes
2023-03-22T17:59:38.857000Z TestFramework (WARNING): Not cleaning up dir /work/abc-ci-builds/build-diff/test/tmp/test_runner_₿₵_  _20230322_175536/abc_feature_proof_cleanup_101
2023-03-22T17:59:38.857000Z TestFramework (ERROR): Test failed. Test logging available at /work/abc-ci-builds/build-diff/test/tmp/test_runner_₿₵_  _20230322_175536/abc_feature_proof_cleanup_101/test_framework.log
2023-03-22T17:59:38.857000Z TestFramework (ERROR): 
2023-03-22T17:59:38.857000Z TestFramework (ERROR): Hint: Call /work/test/functional/combine_logs.py '/work/abc-ci-builds/build-diff/test/tmp/test_runner_₿₵_  _20230322_175536/abc_feature_proof_cleanup_101' to consolidate all logs
2023-03-22T17:59:38.857000Z TestFramework (ERROR): 
2023-03-22T17:59:38.857000Z TestFramework (ERROR): If this failure happened unexpectedly or intermittently, please file a bug and provide a link or upload of the combined log.
2023-03-22T17:59:38.857000Z TestFramework (ERROR): https://github.com/Bitcoin-ABC/bitcoin-abc/issues
2023-03-22T17:59:38.858000Z TestFramework (ERROR):
====== Bitcoin ABC functional tests with the next upgrade activated: abc_feature_proof_cleanup.py ======

------- Stdout: -------
2023-03-22T18:04:01.585000Z TestFramework (INFO): Initializing test directory /work/abc-ci-builds/build-diff/test/tmp/test_runner_₿₵_  _20230322_180028/abc_feature_proof_cleanup_101
2023-03-22T18:04:02.970000Z TestFramework (INFO): No proof is cleaned before the timeout expires
2023-03-22T18:04:02.971000Z TestFramework (INFO): Check the proofs with attached nodes are not cleaned
2023-03-22T18:04:03.024000Z TestFramework (ERROR): Unexpected exception caught during testing
Traceback (most recent call last):
  File "/work/test/functional/test_framework/test_framework.py", line 137, in main
    self.run_test()
  File "/work/test/functional/abc_feature_proof_cleanup.py", line 96, in run_test
    self.wait_until(lambda: set(get_proof_ids(node)) == set(
  File "/work/test/functional/test_framework/test_framework.py", line 709, in wait_until
    return wait_until_helper(test_function, timeout=timeout,
  File "/work/test/functional/test_framework/util.py", line 268, in wait_until_helper
    if predicate():
  File "/work/test/functional/abc_feature_proof_cleanup.py", line 96, in <lambda>
    self.wait_until(lambda: set(get_proof_ids(node)) == set(
  File "/work/test/functional/test_framework/avatools.py", line 155, in get_proof_ids
    return [int(peer['proofid'], 16) for peer in node.getavalanchepeerinfo()]
  File "/work/test/functional/test_framework/coverage.py", line 47, in __call__
    return_val = self.auth_service_proxy_instance.__call__(*args, **kwargs)
  File "/work/test/functional/test_framework/authproxy.py", line 160, in __call__
    response, status = self._request(
  File "/work/test/functional/test_framework/authproxy.py", line 117, in _request
    return self._get_response()
  File "/work/test/functional/test_framework/authproxy.py", line 208, in _get_response
    response = json.loads(responsedata, parse_float=decimal.Decimal)
  File "/usr/lib/python3.9/json/__init__.py", line 359, in loads
    return cls(**kw).decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 55 (char 54)
2023-03-22T18:04:03.075000Z TestFramework (INFO): Stopping nodes
2023-03-22T18:04:03.227000Z TestFramework (WARNING): Not cleaning up dir /work/abc-ci-builds/build-diff/test/tmp/test_runner_₿₵_  _20230322_180028/abc_feature_proof_cleanup_101
2023-03-22T18:04:03.227000Z TestFramework (ERROR): Test failed. Test logging available at /work/abc-ci-builds/build-diff/test/tmp/test_runner_₿₵_  _20230322_180028/abc_feature_proof_cleanup_101/test_framework.log
2023-03-22T18:04:03.228000Z TestFramework (ERROR): 
2023-03-22T18:04:03.228000Z TestFramework (ERROR): Hint: Call /work/test/functional/combine_logs.py '/work/abc-ci-builds/build-diff/test/tmp/test_runner_₿₵_  _20230322_180028/abc_feature_proof_cleanup_101' to consolidate all logs
2023-03-22T18:04:03.228000Z TestFramework (ERROR): 
2023-03-22T18:04:03.228000Z TestFramework (ERROR): If this failure happened unexpectedly or intermittently, please file a bug and provide a link or upload of the combined log.
2023-03-22T18:04:03.229000Z TestFramework (ERROR): https://github.com/Bitcoin-ABC/bitcoin-abc/issues
2023-03-22T18:04:03.229000Z TestFramework (ERROR):

Each failure log is accessible here:
Bitcoin ABC functional tests: abc_feature_proof_cleanup.py
Bitcoin ABC functional tests with the next upgrade activated: abc_feature_proof_cleanup.py

Fabien requested changes to this revision. Mar 22 2023, 19:35
Fabien added a subscriber: Fabien.

It breaks a test, back to your queue.
You can also rebase on top of D13426.

This revision now requires changes to proceed. Mar 22 2023, 19:35
  • Add missing RPCResult entry
  • Fix division by zero when calculating average peer score
  • Rebase
Fabien requested changes to this revision. Mar 23 2023, 07:21
Fabien added inline comments.
src/avalanche/peermanager.h
394 ↗(On Diff #38784)

I missed that earlier, but it's not a good idea.
A peer with one 100% node and one 90% node is strictly better than a peer with a single 100% node, yet its score will end up lower.
Even worse if there is no congestion: if a node is never polled (which can be due to the stake being too low, but could also occur if you have a bazillion nodes), its score will be zero, so the extra nodes will contribute negatively to the peer score.
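
To make the objection concrete, here is an illustrative sketch (not code from this diff) of how averaging penalizes extra nodes:

def average(scores):
    # Average of the node availability scores; 0 when the peer has no nodes.
    return sum(scores) / len(scores) if scores else 0.0

peer_a = [1.0, 0.9]  # a fully available node plus a 90% node
peer_b = [1.0]       # a single fully available node

assert average(peer_a) < average(peer_b)  # 0.95 < 1.0, although peer A is strictly better

# An extra node that is never polled (score 0) drags the average down even
# though it cannot make the peer any less available:
assert average([1.0, 0.0]) < average([1.0])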

This revision now requires changes to proceed. Mar 23 2023, 07:21

Remove averaging in favor of summing the node scores to compute the peer score
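
As a rough sketch of the revised aggregation (illustrative only, not the actual C++ change):

def peer_score(node_scores):
    # Sum rather than average, so additional responsive nodes can only help.
    return sum(node_scores)

assert peer_score([1.0, 0.9]) > peer_score([1.0])

This also matches the peermanager_tests failures below: the reported scores are consistently about five times the per-node values the old assertions expect, consistent with a sum over what appears to be five nodes, which is why the unit test needed updating as well.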

Tail of the build log:

[426/483] Running utility command for check-bitcoin-checkpoints_tests
[427/483] bitcoin: testing sigencoding_tests
[428/483] bitcoin: testing scheduler_tests
[429/483] bitcoin: testing key_io_tests
[430/483] bitcoin: testing blockfilter_tests
[431/483] bitcoin: testing scriptpubkeyman_tests
[432/483] Running utility command for check-bitcoin-sigencoding_tests
[433/483] Running utility command for check-bitcoin-key_io_tests
[434/483] Running utility command for check-bitcoin-scheduler_tests
[435/483] Running utility command for check-bitcoin-blockfilter_tests
[436/483] bitcoin: testing coinstatsindex_tests
[437/483] Running utility command for check-bitcoin-scriptpubkeyman_tests
[438/483] Running utility command for check-bitcoin-coinstatsindex_tests
[439/483] bitcoin: testing txvalidationcache_tests
[440/483] Running utility command for check-bitcoin-txvalidationcache_tests
[441/483] bitcoin: testing script_tests
[442/483] Running utility command for check-bitcoin-script_tests
[443/483] bitcoin: testing walletdb_tests
[444/483] Running utility command for check-bitcoin-walletdb_tests
[445/483] bitcoin: testing blockindex_tests
[446/483] Running utility command for check-bitcoin-blockindex_tests
[447/483] bitcoin: testing miner_tests
[448/483] Running utility command for check-bitcoin-miner_tests
[449/483] bitcoin: testing init_tests
[450/483] bitcoin: testing cuckoocache_tests
[451/483] Running utility command for check-avalanche-voterecord_tests
[452/483] Running utility command for check-bitcoin-cuckoocache_tests
[453/483] Running utility command for check-bitcoin-init_tests
[454/483] bitcoin: testing uint256_tests
[455/483] Running utility command for check-avalanche-compactproofs_tests
[456/483] bitcoin: testing merkle_tests
[457/483] Running utility command for check-avalanche-processor_tests
[458/483] Running utility command for check-bitcoin-uint256_tests
[459/483] Running utility command for check-bitcoin-merkle_tests
[460/483] bitcoin: testing crypto_tests
[461/483] Running utility command for check-bitcoin-crypto_tests
[462/483] Linking CXX executable src/qt/test/test_bitcoin-qt
[463/483] bitcoin: testing rcu_tests
[464/483] Running utility command for check-bitcoin-rcu_tests
[465/483] bitcoin: testing wallet_crypto_tests
[466/483] Running utility command for check-bitcoin-wallet_crypto_tests
[467/483] bitcoin: testing blockcheck_tests
[468/483] Running utility command for check-bitcoin-blockcheck_tests
[469/483] bitcoin: testing txrequest_tests
[470/483] Running utility command for check-bitcoin-txrequest_tests
[471/483] bitcoin-qt: testing test_bitcoin-qt
[472/483] Running bitcoin-qt test suite
PASSED: bitcoin-qt test suite
[473/483] bitcoin: testing wallet_tests
[474/483] Running utility command for check-bitcoin-wallet_tests
[475/483] bitcoin: testing coinselector_tests
[476/483] Running utility command for check-bitcoin-coinselector_tests
[477/483] bitcoin: testing transaction_tests
[478/483] Running utility command for check-bitcoin-transaction_tests
[479/483] bitcoin: testing coins_tests
[480/483] Running utility command for check-bitcoin-coins_tests
[481/483] Running bitcoin test suite
PASSED: bitcoin test suite
ninja: build stopped: cannot make progress due to previous errors.
Build build-clang failed with exit code 1

Tail of the build log:

[456/476] bitcoin: testing wallet_tests
[457/476] Running utility command for check-bitcoin-coinselector_tests
[458/476] Running utility command for check-bitcoin-wallet_tests
[459/476] avalanche: testing voterecord_tests
[460/476] Running utility command for check-avalanche-voterecord_tests
[461/476] Building CXX object src/qt/test/CMakeFiles/test_bitcoin-qt.dir/__/__/wallet/test/wallet_test_fixture.cpp.o
[462/476] bitcoin: testing transaction_tests
[463/476] avalanche: testing processor_tests
[464/476] Running utility command for check-bitcoin-transaction_tests
[465/476] Running utility command for check-avalanche-processor_tests
[466/476] avalanche: testing peermanager_tests
FAILED: src/avalanche/test/CMakeFiles/check-avalanche-peermanager_tests 
cd /work/abc-ci-builds/build-clang-tidy/src/avalanche/test && /usr/bin/cmake -E make_directory /work/abc-ci-builds/build-clang-tidy/test/junit && /usr/bin/cmake -E make_directory /work/abc-ci-builds/build-clang-tidy/test/log && /usr/bin/cmake -E env /work/cmake/utils/log-and-print-on-failure.sh /work/abc-ci-builds/build-clang-tidy/test/log/avalanche-peermanager_tests.log /work/abc-ci-builds/build-clang-tidy/src/avalanche/test/test-avalanche --run_test=peermanager_tests --logger=HRF,message:JUNIT,message,avalanche-peermanager_tests.xml --catch_system_errors=no
Running 32 test cases...
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{3.1606027941427897} and -1 * std::expm1(-1. * i){0.63212055882855767} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.3233235838169373} and -1 * std::expm1(-1. * i){0.8646647167633873} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.7510646581606828} and -1 * std::expm1(-1. * i){0.95021293163213605} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9084218055563316} and -1 * std::expm1(-1. * i){0.98168436111126578} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9663102650045756} and -1 * std::expm1(-1. * i){0.99326205300091452} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9876062391166709} and -1 * std::expm1(-1. * i){0.99752124782333362} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9954405901722296} and -1 * std::expm1(-1. * i){0.99908811803444553} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9983226868604902} and -1 * std::expm1(-1. * i){0.99966453737209748} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9993829509795695} and -1 * std::expm1(-1. * i){0.99987659019591335} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9997730003511895} and -1 * std::expm1(-1. * i){0.99995460007023751} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2161): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511895} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2191): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511895} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2211): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{1.83931369735326} and 1. + std::expm1(-1. * i){0.36787944117144233} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2211): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.67664569512129613} and 1. + std::expm1(-1. * i){0.1353352832366127} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2211): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.24892404019228442} and 1. + std::expm1(-1. * i){0.049787068367863951} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2215): error: in "peermanager_tests/peer_availability_score": check previousScore < .05 has failed [0.24892404019228442 >= 0.050000000000000003]
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{3.1606027941427883} and -1 * std::expm1(-1. * i){0.63212055882855767} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.3233235838169364} and -1 * std::expm1(-1. * i){0.8646647167633873} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.7510646581606801} and -1 * std::expm1(-1. * i){0.95021293163213605} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9084218055563289} and -1 * std::expm1(-1. * i){0.98168436111126578} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.966310265004573} and -1 * std::expm1(-1. * i){0.99326205300091452} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9876062391166682} and -1 * std::expm1(-1. * i){0.99752124782333362} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9954405901722279} and -1 * std::expm1(-1. * i){0.99908811803444553} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9983226868604875} and -1 * std::expm1(-1. * i){0.99966453737209748} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9993829509795669} and -1 * std::expm1(-1. * i){0.99987659019591335} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2157): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9997730003511878} and -1 * std::expm1(-1. * i){0.99995460007023751} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2161): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511878} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2191): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511878} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2211): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{1.8393136973532604} and 1. + std::expm1(-1. * i){0.36787944117144233} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2211): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.67664569512129691} and 1. + std::expm1(-1. * i){0.1353352832366127} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2211): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.24892404019228484} and 1. + std::expm1(-1. * i){0.049787068367863951} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2215): error: in "peermanager_tests/peer_availability_score": check previousScore < .05 has failed [0.24892404019228484 >= 0.050000000000000003]

*** 32 failures are detected in the test module "Avalanche Test Suite"
[467/476] Building CXX object src/qt/test/CMakeFiles/test_bitcoin-qt.dir/paymentservertests.cpp.o
[468/476] bitcoin: testing coins_tests
[469/476] Running utility command for check-bitcoin-coins_tests
[470/476] Running bitcoin test suite
PASSED: bitcoin test suite
[471/476] Building CXX object src/qt/test/CMakeFiles/test_bitcoin-qt.dir/wallettests.cpp.o
[472/476] Linking CXX executable src/qt/test/test_bitcoin-qt
[473/476] bitcoin-qt: testing test_bitcoin-qt
[474/476] Running bitcoin-qt test suite
PASSED: bitcoin-qt test suite
ninja: build stopped: cannot make progress due to previous errors.
Build build-clang-tidy failed with exit code 1

Tail of the build log:

.....
----------------------------------------------------------------------
Ran 5 tests in 0.001s

OK
[120/444] Building CXX object src/test/CMakeFiles/test_bitcoin.dir/script_tests.cpp.o
In file included from /usr/include/boost/test/unit_test.hpp:19,
                 from ../../src/test/script_tests.cpp:30:
../../src/test/script_tests.cpp: In member function ‘void script_tests::script_build::test_method()’:
../../src/test/script_tests.cpp:540:22: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
  540 | BOOST_AUTO_TEST_CASE(script_build) {
      |                      ^~~~~~~~~~~~
[179/444] Running seeder test suite
PASSED: seeder test suite
[182/444] Running bitcoin-qt test suite
PASSED: bitcoin-qt test suite
[185/444] Running pow test suite
PASSED: pow test suite
[422/444] Running bitcoin test suite
PASSED: bitcoin test suite
[441/444] avalanche: testing peermanager_tests
FAILED: src/avalanche/test/CMakeFiles/check-avalanche-peermanager_tests 
cd /work/abc-ci-builds/build-without-wallet/src/avalanche/test && /usr/bin/cmake -E make_directory /work/abc-ci-builds/build-without-wallet/test/junit && /usr/bin/cmake -E make_directory /work/abc-ci-builds/build-without-wallet/test/log && /usr/bin/cmake -E env /work/cmake/utils/log-and-print-on-failure.sh /work/abc-ci-builds/build-without-wallet/test/log/avalanche-peermanager_tests.log /work/abc-ci-builds/build-without-wallet/src/avalanche/test/test-avalanche --run_test=peermanager_tests --logger=HRF,message:JUNIT,message,avalanche-peermanager_tests.xml --catch_system_errors=no
Running 32 test cases...
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{3.1606027941427897} and -1 * std::expm1(-1. * i){0.63212055882855767} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.3233235838169373} and -1 * std::expm1(-1. * i){0.8646647167633873} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.7510646581606828} and -1 * std::expm1(-1. * i){0.95021293163213605} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9084218055563316} and -1 * std::expm1(-1. * i){0.98168436111126578} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9663102650045756} and -1 * std::expm1(-1. * i){0.99326205300091452} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9876062391166709} and -1 * std::expm1(-1. * i){0.99752124782333362} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9954405901722296} and -1 * std::expm1(-1. * i){0.99908811803444553} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9983226868604902} and -1 * std::expm1(-1. * i){0.99966453737209748} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9993829509795695} and -1 * std::expm1(-1. * i){0.99987659019591335} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9997730003511895} and -1 * std::expm1(-1. * i){0.99995460007023751} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2161): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511895} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2191): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511895} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{1.83931369735326} and 1. + std::expm1(-1. * i){0.36787944117144233} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.67664569512129613} and 1. + std::expm1(-1. * i){0.1353352832366127} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.24892404019228442} and 1. + std::expm1(-1. * i){0.049787068367863951} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2215): error: in "peermanager_tests/peer_availability_score": check previousScore < .05 has failed [0.24892404019228442 >= 0.050000000000000003]
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{3.1606027941427883} and -1 * std::expm1(-1. * i){0.63212055882855767} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.3233235838169364} and -1 * std::expm1(-1. * i){0.8646647167633873} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.7510646581606801} and -1 * std::expm1(-1. * i){0.95021293163213605} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9084218055563289} and -1 * std::expm1(-1. * i){0.98168436111126578} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.966310265004573} and -1 * std::expm1(-1. * i){0.99326205300091452} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9876062391166682} and -1 * std::expm1(-1. * i){0.99752124782333362} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9954405901722279} and -1 * std::expm1(-1. * i){0.99908811803444553} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9983226868604875} and -1 * std::expm1(-1. * i){0.99966453737209748} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9993829509795669} and -1 * std::expm1(-1. * i){0.99987659019591335} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9997730003511878} and -1 * std::expm1(-1. * i){0.99995460007023751} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2161): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511878} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2191): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511878} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{1.8393136973532604} and 1. + std::expm1(-1. * i){0.36787944117144233} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.67664569512129691} and 1. + std::expm1(-1. * i){0.1353352832366127} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.24892404019228484} and 1. + std::expm1(-1. * i){0.049787068367863951} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2215): error: in "peermanager_tests/peer_availability_score": check previousScore < .05 has failed [0.24892404019228484 >= 0.050000000000000003]

*** 32 failures are detected in the test module "Avalanche Test Suite"
ninja: build stopped: cannot make progress due to previous errors.
Build build-without-wallet failed with exit code 1

Tail of the build log:

----------------------------------------------------------------------
Ran 3 tests in 0.037s

OK
[183/484] cd /work/contrib/devtools/chainparams && /usr/bin/python3.9 ./test_make_chainparams.py
.....
----------------------------------------------------------------------
Ran 5 tests in 0.001s

OK
[208/484] Running seeder test suite
PASSED: seeder test suite
[215/484] Running pow test suite
PASSED: pow test suite
[267/484] avalanche: testing peermanager_tests
FAILED: src/avalanche/test/CMakeFiles/check-avalanche-peermanager_tests 
cd /work/abc-ci-builds/build-debug/src/avalanche/test && /usr/bin/cmake -E make_directory /work/abc-ci-builds/build-debug/test/junit && /usr/bin/cmake -E make_directory /work/abc-ci-builds/build-debug/test/log && /usr/bin/cmake -E env /work/cmake/utils/log-and-print-on-failure.sh /work/abc-ci-builds/build-debug/test/log/avalanche-peermanager_tests.log /work/abc-ci-builds/build-debug/src/avalanche/test/test-avalanche --run_test=peermanager_tests --logger=HRF,message:JUNIT,message,avalanche-peermanager_tests.xml --catch_system_errors=no
Running 32 test cases...
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{3.1606027941427897} and -1 * std::expm1(-1. * i){0.63212055882855767} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.3233235838169373} and -1 * std::expm1(-1. * i){0.8646647167633873} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.7510646581606828} and -1 * std::expm1(-1. * i){0.95021293163213605} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9084218055563316} and -1 * std::expm1(-1. * i){0.98168436111126578} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9663102650045756} and -1 * std::expm1(-1. * i){0.99326205300091452} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9876062391166709} and -1 * std::expm1(-1. * i){0.99752124782333362} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9954405901722296} and -1 * std::expm1(-1. * i){0.99908811803444553} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9983226868604902} and -1 * std::expm1(-1. * i){0.99966453737209748} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9993829509795695} and -1 * std::expm1(-1. * i){0.99987659019591335} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9997730003511895} and -1 * std::expm1(-1. * i){0.99995460007023751} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2161): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511895} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2191): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511895} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{1.83931369735326} and 1. + std::expm1(-1. * i){0.36787944117144233} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.67664569512129613} and 1. + std::expm1(-1. * i){0.1353352832366127} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.24892404019228442} and 1. + std::expm1(-1. * i){0.049787068367863951} exceeds 1.001%
../../src/avalanche/test/peermanager_tests.cpp(2215): error: in "peermanager_tests/peer_availability_score": check previousScore < .05 has failed [0.24892404019228442 >= 0.050000000000000003]
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{3.1606027941427883} and -1 * std::expm1(-1. * i){0.63212055882855767} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.3233235838169364} and -1 * std::expm1(-1. * i){0.8646647167633873} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.7510646581606801} and -1 * std::expm1(-1. * i){0.95021293163213605} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9084218055563289} and -1 * std::expm1(-1. * i){0.98168436111126578} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.966310265004573} and -1 * std::expm1(-1. * i){0.99326205300091452} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9876062391166682} and -1 * std::expm1(-1. * i){0.99752124782333362} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9954405901722279} and -1 * std::expm1(-1. * i){0.99908811803444553} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9983226868604875} and -1 * std::expm1(-1. * i){0.99966453737209748} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9993829509795669} and -1 * std::expm1(-1. * i){0.99987659019591335} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2156): error: in "peermanager_tests/peer_availability_score": difference{4} between previousScore{4.9997730003511878} and -1 * std::expm1(-1. * i){0.99995460007023751} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2161): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511878} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2191): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{4.9997730003511878} and 1.{1} exceeds 0.01%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{1.8393136973532604} and 1. + std::expm1(-1. * i){0.36787944117144233} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.67664569512129691} and 1. + std::expm1(-1. * i){0.1353352832366127} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2210): error: in "peermanager_tests/peer_availability_score": difference{3.99977} between previousScore{0.24892404019228484} and 1. + std::expm1(-1. * i){0.049787068367863951} exceeds 10.01%
../../src/avalanche/test/peermanager_tests.cpp(2215): error: in "peermanager_tests/peer_availability_score": check previousScore < .05 has failed [0.24892404019228484 >= 0.050000000000000003]

*** 32 failures are detected in the test module "Avalanche Test Suite"
[458/484] Running bitcoin-qt test suite
PASSED: bitcoin-qt test suite
[474/484] Running secp256k1 test suite
PASSED: secp256k1 test suite
[481/484] Running bitcoin test suite
PASSED: bitcoin test suite
ninja: build stopped: cannot make progress due to previous errors.
Build build-debug failed with exit code 1

Tail of the build log:

wallet_avoidreuse.py                      | ✓ Passed  | 4 s
wallet_avoidreuse.py --descriptors        | ✓ Passed  | 5 s
wallet_backup.py                          | ✓ Passed  | 23 s
wallet_balance.py                         | ✓ Passed  | 5 s
wallet_balance.py --descriptors           | ✓ Passed  | 7 s
wallet_basic.py                           | ✓ Passed  | 15 s
wallet_coinbase_category.py               | ✓ Passed  | 1 s
wallet_create_tx.py                       | ✓ Passed  | 5 s
wallet_createwallet.py                    | ✓ Passed  | 2 s
wallet_createwallet.py --descriptors      | ✓ Passed  | 2 s
wallet_createwallet.py --usecli           | ✓ Passed  | 2 s
wallet_descriptor.py                      | ✓ Passed  | 6 s
wallet_disable.py                         | ✓ Passed  | 0 s
wallet_dump.py                            | ✓ Passed  | 5 s
wallet_encryption.py                      | ✓ Passed  | 5 s
wallet_encryption.py --descriptors        | ✓ Passed  | 5 s
wallet_groups.py                          | ✓ Passed  | 13 s
wallet_hd.py                              | ✓ Passed  | 6 s
wallet_hd.py --descriptors                | ✓ Passed  | 6 s
wallet_import_rescan.py                   | ✓ Passed  | 7 s
wallet_import_with_label.py               | ✓ Passed  | 1 s
wallet_importdescriptors.py               | ✓ Passed  | 4 s
wallet_importmulti.py                     | ✓ Passed  | 2 s
wallet_importprunedfunds.py               | ✓ Passed  | 2 s
wallet_importprunedfunds.py --descriptors | ✓ Passed  | 2 s
wallet_keypool.py                         | ✓ Passed  | 3 s
wallet_keypool_topup.py                   | ✓ Passed  | 4 s
wallet_keypool_topup.py --descriptors     | ✓ Passed  | 5 s
wallet_labels.py                          | ✓ Passed  | 1 s
wallet_labels.py --descriptors            | ✓ Passed  | 1 s
wallet_listreceivedby.py                  | ✓ Passed  | 5 s
wallet_listsinceblock.py                  | ✓ Passed  | 6 s
wallet_listsinceblock.py --descriptors    | ✓ Passed  | 7 s
wallet_listtransactions.py                | ✓ Passed  | 4 s
wallet_listtransactions.py --descriptors  | ✓ Passed  | 4 s
wallet_multiwallet.py --usecli            | ✓ Passed  | 9 s
wallet_reorgsrestore.py                   | ✓ Passed  | 3 s
wallet_resendwallettransactions.py        | ✓ Passed  | 1 s
wallet_send.py                            | ✓ Passed  | 7 s
wallet_startup.py                         | ✓ Passed  | 2 s
wallet_timelock.py                        | ✓ Passed  | 1 s
wallet_txn_clone.py                       | ✓ Passed  | 2 s
wallet_txn_clone.py --mineblock           | ✓ Passed  | 3 s
wallet_txn_doublespend.py                 | ✓ Passed  | 1 s
wallet_txn_doublespend.py --mineblock     | ✓ Passed  | 3 s
wallet_watchonly.py                       | ✓ Passed  | 1 s
wallet_watchonly.py --usecli              | ✓ Passed  | 1 s
chronik_block.py                          | ○ Skipped | 0 s
chronik_disallow_prune.py                 | ○ Skipped | 0 s
chronik_resync.py                         | ○ Skipped | 0 s
chronik_serve.py                          | ○ Skipped | 0 s
interface_usdt_net.py                     | ○ Skipped | 0 s
interface_usdt_utxocache.py               | ○ Skipped | 0 s
interface_usdt_validation.py              | ○ Skipped | 0 s

ALL                                       | ✓ Passed  | 1243 s (accumulated) 
Runtime: 249 s

ninja: build stopped: cannot make progress due to previous errors.
Build build-diff failed with exit code 1
Fabien requested changes to this revision. Mar 24 2023, 08:22

You forgot to update the test

This revision now requires changes to proceed. Mar 24 2023, 08:22
This revision is now accepted and ready to land. Mar 24 2023, 20:25