diff --git a/doc/functional-tests.md b/doc/functional-tests.md index 2d005f5ca..ae4084d7f 100644 --- a/doc/functional-tests.md +++ b/doc/functional-tests.md @@ -1,369 +1,373 @@ # Functional tests The [/test/](/test/) directory contains integration tests that test bitcoind and its utilities in their entirety. It does not contain unit tests, which can be found in [/src/test](/src/test), [/src/wallet/test](/src/wallet/test), etc. There are currently two sets of tests in the [/test/](/test/) directory: - [functional](/test/functional) which test the functionality of bitcoind and bitcoin-qt by interacting with them through the RPC and P2P interfaces. - [util](/test/util) which tests the bitcoin utilities, currently only bitcoin-tx. The util tests are run as part of the `make check` target. The functional tests are run by the Teamcity continuous build process whenever a diff is created or updated on Phabricator. Both sets of tests can also be run locally. # Running functional tests locally Build for your system first. Be sure to enable wallet, utils and daemon when you configure. Tests will not run otherwise. ### Functional tests #### Dependencies The ZMQ functional test requires a python ZMQ library. To install it: - On Unix, run `sudo apt-get install python3-zmq` - On macOS, run `pip3 install pyzmq` #### Running the tests Individual tests can be run by directly calling the test script, eg: ``` test/functional/example_test.py ``` or can be run through the test_runner harness, eg: ``` test/functional/test_runner.py example_test ``` You can run any combination (incl. duplicates) of tests by calling: ``` test/functional/test_runner.py <testname1> <testname2> <testname3> ... ``` Run the regression test suite with: ``` test/functional/test_runner.py ``` Run all possible tests with: ``` test/functional/test_runner.py --extended ``` By default, up to 4 tests will be run in parallel by test_runner. To specify how many jobs to run, append `--jobs=n`. The individual tests and the test_runner harness have many command-line options. Run `test_runner.py -h` to see them all. #### Troubleshooting and debugging test failures ##### Resource contention The P2P and RPC ports used by the bitcoind nodes-under-test are chosen to make conflicts with other processes unlikely. However, if there is another bitcoind process running on the system (perhaps from a previous test which hasn't successfully killed all its bitcoind nodes), then there may be a port conflict which will cause the test to fail. It is recommended that you run the tests on a system where no other bitcoind processes are running. On Linux, the test_framework will warn if there is another bitcoind process running when the tests are started. If there are zombie bitcoind processes after test failure, you can kill them by running the following commands. **Note that these commands will kill all bitcoind processes running on the system, so should not be used if any non-test bitcoind processes are being run.** ```bash killall bitcoind ``` or ```bash pkill -9 bitcoind ``` ##### Data directory cache A pre-mined blockchain with 200 blocks is generated the first time a functional test is run and is stored in test/cache. This speeds up test startup times since new blockchains don't need to be generated for each test. However, the cache may get into a bad state, in which case tests will fail.
If this happens, remove the cache directory (and make sure bitcoind processes are stopped as above): ```bash rm -rf cache killall bitcoind ``` ##### Test logging The tests contain logging at different levels (debug, info, warning, etc). By default: - When run through the test_runner harness, *all* logs are written to `test_framework.log` and no logs are output to the console. - When run directly, *all* logs are written to `test_framework.log` and INFO level and above are output to the console. - When run on Travis, no logs are output to the console. However, if a test fails, the `test_framework.log` and bitcoind `debug.log`s will all be dumped to the console to help troubleshooting. To change the level of logs output to the console, use the `-l` command line argument. `test_framework.log` and bitcoind `debug.log`s can be combined into a single aggregate log by running the `combine_logs.py` script. The output can be plain text, colorized text or HTML. For example: ``` combine_logs.py -c <test data directory> | less -r ``` will pipe the colorized logs from the test into less. Use `--tracerpc` to trace out all the RPC calls and responses to the console. For some tests (eg any that use `submitblock` to submit a full block over RPC), this can result in a lot of screen output. By default, the test data directory will be deleted after a successful run. Use `--nocleanup` to leave the test data directory intact. The test data directory is never deleted after a failed test. ##### Attaching a debugger A python debugger can be attached to tests at any point. Just add the line: ```py import pdb; pdb.set_trace() ``` anywhere in the test. You will then be able to inspect variables, as well as call methods that interact with the bitcoind nodes-under-test. If further introspection of the bitcoind instances themselves becomes necessary, this can be accomplished by first setting a pdb breakpoint at an appropriate location, running the test to that point, then using `gdb` (or `lldb` on macOS) to attach to the process and debug. For instance, to attach to `self.nodes[1]` during a run you can get the pid of the node within `pdb`. ``` (pdb) self.nodes[1].process.pid ``` Alternatively, you can find the pid by inspecting the temp folder for the specific test you are running. The path to that folder is printed at the beginning of every test run: ```bash 2017-06-27 14:13:56.686000 TestFramework (INFO): Initializing test directory /tmp/user/1000/testo9vsdjo3 ``` Use the path to find the pid file in the temp folder: ```bash cat /tmp/user/1000/testo9vsdjo3/node1/regtest/bitcoind.pid ``` Then you can use the pid to start `gdb`: ```bash gdb /home/example/bitcoind <pid> ``` Note: the gdb attach step may require `sudo`. To get rid of this, you can run: ```bash echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope ``` +Often while debugging RPC calls from functional tests, the test might reach its timeout before the +process can return a response. Use `--timeout-factor 0` to disable all RPC timeouts for that particular +functional test. Ex: `test/functional/test_runner.py wallet_hd --timeout-factor 0`. + ### Benchmarking and profiling with perf An easy way to profile node performance during functional tests is provided for Linux platforms using `perf`. Perf will sample the running node and will generate profile data in the node's datadir. The profile data can then be presented using `perf report` or a graphical tool like [hotspot](https://github.com/KDAB/hotspot).
There are two ways of invoking perf: one is to use the `--perf` flag when running tests, which will profile each node during the entire test run: perf begins to profile when the node starts and ends when it shuts down. The other way is to use the `profile_with_perf` context manager, e.g. ```python with node.profile_with_perf("send-big-msgs"): # Perform activity on the node you're interested in profiling, e.g.: for _ in range(10000): node.p2p.send_message(some_large_message) ``` To see useful textual output, run ```sh perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less ``` #### See also: - [Installing perf](https://askubuntu.com/q/50145) - [Perf examples](http://www.brendangregg.com/perf.html) - [Hotspot](https://github.com/KDAB/hotspot): a GUI for perf output analysis ##### Prevent using deprecated features Python will issue a `DeprecationWarning` when a deprecated feature is encountered in a script. By default, this warning message is ignored and not displayed to the user. This behavior can be changed by setting the environment variable `PYTHONWARNINGS` as follows: `PYTHONWARNINGS=default::DeprecationWarning` The warning message will now be printed to the `sys.stderr` output. ### Util tests Util tests can be run locally by running `test/util/bitcoin-util-test.py`. Use the `-v` option for verbose output. # Writing functional tests #### Example test The [example_test.py](/test/functional/example_test.py) is a heavily commented example of a test case that uses both the RPC and P2P interfaces. If you are writing your first test, copy that file and modify it to fit your needs. #### Coverage Running `test_runner.py` with the `--coverage` argument tracks which RPCs are called by the tests and prints a report of uncovered RPCs in the summary. This can be used (along with the `--extended` argument) to find out which RPCs we don't have test cases for. #### Style guidelines - Where possible, try to adhere to [PEP-8 guidelines](https://www.python.org/dev/peps/pep-0008/) - Use a python linter like flake8 before submitting PRs to catch common style nits (eg trailing whitespace, unused imports, etc) - Avoid wildcard imports where possible - Use a module-level docstring to describe what the test is testing, and how it is testing it. - When subclassing the BitcoinTestFramework, place overrides for the `set_test_params()`, `add_options()` and `setup_xxxx()` methods at the top of the subclass, then locally-defined helper methods, then the `run_test()` method, as in the sketch below.
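For illustration, a minimal skeleton following that ordering might look like this (the test class, the mined-chain check and the `mine_blocks()` helper are hypothetical, not an existing test):

```python
#!/usr/bin/env python3
"""Sketch of the recommended layout for a functional test.

The feature exercised here is made up purely for illustration.
"""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal


class ExampleLayoutTest(BitcoinTestFramework):
    # Framework overrides first...
    def set_test_params(self):
        self.num_nodes = 1
        self.setup_clean_chain = True

    # ...then locally-defined helper methods...
    def mine_blocks(self, count):
        """Mine `count` blocks with node 0 and return their hashes."""
        return self.nodes[0].generate(count)

    # ...and run_test() last.
    def run_test(self):
        self.log.info("Mine a short chain and check the height")
        self.mine_blocks(10)
        assert_equal(self.nodes[0].getblockcount(), 10)


if __name__ == '__main__':
    ExampleLayoutTest().main()
```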
#### Naming guidelines - Name the test `<area>_<test_name>.py`, where area can be one of the following: - `feature` for tests for full features that aren't wallet/mining/mempool, eg `feature_rbf.py` - `interface` for tests for other interfaces (REST, ZMQ, etc), eg `interface_rest.py` - `mempool` for tests for mempool behaviour, eg `mempool_reorg.py` - `mining` for tests for mining features, eg `mining_prioritisetransaction.py` - `p2p` for tests that explicitly test the p2p interface, eg `p2p_disconnect_ban.py` - `rpc` for tests for individual RPC methods or features, eg `rpc_listtransactions.py` - `tool` for tests for tools, eg `tool_wallet.py` - `wallet` for tests for wallet features, eg `wallet_keypool.py` - use an underscore to separate words - exception: for tests for specific RPCs or command line options which don't include underscores, name the test after the exact RPC or argument name, eg `rpc_decodescript.py`, not `rpc_decode_script.py` - Don't use the redundant word `test` in the name, eg `interface_zmq.py`, not `interface_zmq_test.py` #### General test-writing advice - Set `self.num_nodes` to the minimum number of nodes necessary for the test. Having additional unrequired nodes adds to the execution time of the test as well as memory/CPU/disk requirements (which is important when running tests in parallel or on Travis). - Avoid stop-starting the nodes multiple times during the test if possible. A stop-start takes several seconds, so doing it several times blows up the runtime of the test. - Set the `self.setup_clean_chain` variable in `set_test_params()` to control whether or not to use the cached data directories. The cached data directories contain a 200-block pre-mined blockchain and wallets for four nodes. Each node has 25 mature blocks (25x50=1250 BTC) in its wallet. - When calling RPCs with lots of arguments, consider using named keyword arguments instead of positional arguments to make the intent of the call clear to readers. - Many of the core test framework classes such as `CBlock` and `CTransaction` don't allow new attributes to be added to their objects at runtime like typical Python objects allow. This helps prevent unpredictable side effects from typographical errors or usage of the objects outside of their intended purpose. #### RPC and P2P definitions Test writers may find it helpful to refer to the definitions for the RPC and P2P messages. These can be found in the following source files: - `/src/rpc/*` for RPCs - `/src/wallet/rpc*` for wallet RPCs - `ProcessMessage()` in `/src/net_processing.cpp` for parsing P2P messages #### Using the P2P interface - `messages.py` contains all the definitions for objects that pass over the network (`CBlock`, `CTransaction`, etc, along with the network-level wrappers for them, `msg_block`, `msg_tx`, etc). - P2P tests have two threads. One thread handles all network communication with the bitcoind(s) being tested in a callback-based event loop; the other implements the test logic. - `P2PConnection` is the class used to connect to a bitcoind. `P2PInterface` contains the higher level logic for processing P2P payloads and connecting to the Bitcoin Core node application logic. For custom behaviour, subclass the P2PInterface object and override the callback methods, as sketched after this list. - Can be used to write tests where specific P2P protocol behavior is tested. Example tests are `p2p-acceptblock.py`, `p2p-compactblocks.py`.
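As a rough sketch of that subclassing approach, assuming a hypothetical test that wants to count `headers` messages (the class name and counter attribute are made up for the example):

```python
from test_framework.mininode import P2PInterface


class HeaderCountingPeer(P2PInterface):
    """Hypothetical P2PInterface subclass that counts received headers."""

    def __init__(self):
        super().__init__()
        self.headers_count = 0

    def on_headers(self, message):
        # Called on the network thread for every headers message;
        # mininode_lock is held while the callback runs.
        self.headers_count += len(message.headers)
```

Inside `run_test()` such a peer would typically be attached with `peer = self.nodes[0].add_p2p_connection(HeaderCountingPeer())` and flushed with `peer.sync_with_ping()` before asserting on `peer.headers_count`.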
#### Prototyping tests The [`TestShell`](test-shell.md) class exposes the BitcoinTestFramework functionality to interactive Python3 environments and can be used to prototype tests. This may be especially useful in a REPL environment with session logging utilities, such as [IPython](https://ipython.readthedocs.io/en/stable/interactive/reference.html#session-logging-and-restoring). The logs of such interactive sessions can later be adapted into permanent test cases. ### test-framework modules #### [test_framework/authproxy.py](/test/functional/test_framework/authproxy.py) Taken from the [python-bitcoinrpc repository](https://github.com/jgarzik/python-bitcoinrpc). #### [test_framework/test_framework.py](/test/functional/test_framework/test_framework.py) Base class for functional tests. #### [test_framework/util.py](/test/functional/test_framework/util.py) Generally useful functions. #### [test_framework/mininode.py](/test/functional/test_framework/mininode.py) Basic code to support P2P connectivity to a bitcoind. #### [test_framework/script.py](/test/functional/test_framework/script.py) Utilities for manipulating transaction scripts (originally from python-bitcoinlib) #### [test_framework/key.py](/test/functional/test_framework/key.py) Test-only secp256k1 elliptic curve implementation #### [test_framework/blocktools.py](/test/functional/test_framework/blocktools.py) Helper functions for creating blocks and transactions. diff --git a/test/functional/test_framework/mininode.py b/test/functional/test_framework/mininode.py index 5250675f4..81d93dac8 100755 --- a/test/functional/test_framework/mininode.py +++ b/test/functional/test_framework/mininode.py @@ -1,777 +1,777 @@ #!/usr/bin/env python3 # Copyright (c) 2010 ArtForz -- public domain half-a-node # Copyright (c) 2012 Jeff Garzik # Copyright (c) 2010-2019 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Bitcoin P2P network half-a-node. This python code was modified from ArtForz' public domain half-a-node, as found in the mini-node branch of http://github.com/jgarzik/pynode. 
P2PConnection: A low-level connection object to a node's P2P interface P2PInterface: A high-level interface object for communicating to a node over P2P P2PDataStore: A p2p interface class that keeps a store of transactions and blocks and can respond correctly to getdata and getheaders messages P2PTxInvStore: A p2p interface class that inherits from P2PDataStore, and keeps a count of how many times each txid has been announced.""" import asyncio from collections import defaultdict from io import BytesIO import logging import struct import sys import threading from test_framework.messages import ( CBlockHeader, MIN_VERSION_SUPPORTED, msg_addr, msg_avapoll, msg_tcpavaresponse, msg_avahello, msg_block, MSG_BLOCK, msg_blocktxn, msg_cfcheckpt, msg_cfheaders, msg_cfilter, msg_cmpctblock, msg_feefilter, msg_filteradd, msg_filterclear, msg_filterload, msg_getaddr, msg_getblocks, msg_getblocktxn, msg_getdata, msg_getheaders, msg_headers, msg_inv, msg_mempool, msg_merkleblock, msg_notfound, msg_ping, msg_pong, msg_sendcmpct, msg_sendheaders, msg_tx, MSG_TX, MSG_TYPE_MASK, msg_verack, msg_version, NODE_NETWORK, sha256, ) from test_framework.util import wait_until logger = logging.getLogger("TestFramework.mininode") MESSAGEMAP = { b"addr": msg_addr, b"avapoll": msg_avapoll, b"avaresponse": msg_tcpavaresponse, b"avahello": msg_avahello, b"block": msg_block, b"blocktxn": msg_blocktxn, b"cfcheckpt": msg_cfcheckpt, b"cfheaders": msg_cfheaders, b"cfilter": msg_cfilter, b"cmpctblock": msg_cmpctblock, b"feefilter": msg_feefilter, b"filteradd": msg_filteradd, b"filterclear": msg_filterclear, b"filterload": msg_filterload, b"getaddr": msg_getaddr, b"getblocks": msg_getblocks, b"getblocktxn": msg_getblocktxn, b"getdata": msg_getdata, b"getheaders": msg_getheaders, b"headers": msg_headers, b"inv": msg_inv, b"mempool": msg_mempool, b"merkleblock": msg_merkleblock, b"notfound": msg_notfound, b"ping": msg_ping, b"pong": msg_pong, b"sendcmpct": msg_sendcmpct, b"sendheaders": msg_sendheaders, b"tx": msg_tx, b"verack": msg_verack, b"version": msg_version, } MAGIC_BYTES = { "mainnet": b"\xe3\xe1\xf3\xe8", "testnet3": b"\xf4\xe5\xf3\xf4", "regtest": b"\xda\xb5\xbf\xfa", } class P2PConnection(asyncio.Protocol): """A low-level connection object to a node's P2P interface. This class is responsible for: - opening and closing the TCP connection to the node - reading bytes from and writing bytes to the socket - deserializing and serializing the P2P message header - logging messages as they are sent and received This class contains no logic for handing the P2P message payloads. It must be sub-classed and the on_message() callback overridden.""" def __init__(self): # The underlying transport of the connection. # Should only call methods on this from the NetworkThread, c.f. 
# call_soon_threadsafe self._transport = None @property def is_connected(self): return self._transport is not None - def peer_connect(self, dstaddr, dstport, *, net, factor): + def peer_connect(self, dstaddr, dstport, *, net, timeout_factor): assert not self.is_connected - self.factor = factor + self.timeout_factor = timeout_factor self.dstaddr = dstaddr self.dstport = dstport # The initial message to send after the connection was made: self.on_connection_send_msg = None self.on_connection_send_msg_is_raw = False self.recvbuf = b"" self.magic_bytes = MAGIC_BYTES[net] logger.debug('Connecting to Bitcoin Node: {}:{}'.format( self.dstaddr, self.dstport)) loop = NetworkThread.network_event_loop conn_gen_unsafe = loop.create_connection( lambda: self, host=self.dstaddr, port=self.dstport) def conn_gen(): return loop.call_soon_threadsafe( loop.create_task, conn_gen_unsafe) return conn_gen def peer_disconnect(self): # Connection could have already been closed by other end. NetworkThread.network_event_loop.call_soon_threadsafe( lambda: self._transport and self._transport.abort()) # Connection and disconnection methods def connection_made(self, transport): """asyncio callback when a connection is opened.""" assert not self._transport logger.debug("Connected & Listening: {}:{}".format( self.dstaddr, self.dstport)) self._transport = transport if self.on_connection_send_msg: if self.on_connection_send_msg_is_raw: self.send_raw_message(self.on_connection_send_msg) else: self.send_message(self.on_connection_send_msg) # Never used again self.on_connection_send_msg = None self.on_open() def connection_lost(self, exc): """asyncio callback when a connection is closed.""" if exc: logger.warning("Connection lost to {}:{} due to {}".format( self.dstaddr, self.dstport, exc)) else: logger.debug("Closed connection to: {}:{}".format( self.dstaddr, self.dstport)) self._transport = None self.recvbuf = b"" self.on_close() # Socket read methods def data_received(self, t): """asyncio callback when data is read from the socket.""" with mininode_lock: if len(t) > 0: self.recvbuf += t while True: msg = self._on_data() if msg is None: break self.on_message(msg) def _on_data(self): """Try to read P2P messages from the recv buffer. This method reads data from the buffer in a loop. It deserializes, parses and verifies the P2P header, then passes the P2P payload to the on_message callback for processing.""" try: with mininode_lock: if len(self.recvbuf) < 4: return None if self.recvbuf[:4] != self.magic_bytes: raise ValueError( "magic bytes mismatch: {} != {}".format( repr( self.magic_bytes), repr( self.recvbuf))) if len(self.recvbuf) < 4 + 12 + 4 + 4: return None msgtype = self.recvbuf[4:4 + 12].split(b"\x00", 1)[0] msglen = struct.unpack( " 500: log_message += "... (msg truncated)" logger.debug(log_message) class P2PInterface(P2PConnection): """A high-level P2P interface class for communicating with a Bitcoin Cash node. This class provides high-level callbacks for processing P2P message payloads, as well as convenience methods for interacting with the node over P2P. 
Individual testcases should subclass this and override the on_* methods if they want to alter message handling behaviour.""" def __init__(self): super().__init__() # Track number of messages of each type received and the most recent # message of each type self.message_count = defaultdict(int) self.last_message = {} # A count of the number of ping messages we've sent to the node self.ping_counter = 1 # The network services received from the peer self.nServices = 0 def peer_connect(self, *args, services=NODE_NETWORK, send_version=True, **kwargs): create_conn = super().peer_connect(*args, **kwargs) if send_version: # Send a version msg vt = msg_version() vt.nServices = services vt.addrTo.ip = self.dstaddr vt.addrTo.port = self.dstport vt.addrFrom.ip = "0.0.0.0" vt.addrFrom.port = 0 # Will be sent soon after connection_made self.on_connection_send_msg = vt return create_conn # Message receiving methods def on_message(self, message): """Receive message and dispatch message to appropriate callback. We keep a count of how many of each message type has been received and the most recent message of each type.""" with mininode_lock: try: msgtype = message.msgtype.decode('ascii') self.message_count[msgtype] += 1 self.last_message[msgtype] = message getattr(self, 'on_' + msgtype)(message) except Exception: print("ERROR delivering {} ({})".format( repr(message), sys.exc_info()[0])) raise # Callback methods. Can be overridden by subclasses in individual test # cases to provide custom message handling behaviour. def on_open(self): pass def on_close(self): pass def on_addr(self, message): pass def on_avapoll(self, message): pass def on_avaresponse(self, message): pass def on_avahello(self, message): pass def on_block(self, message): pass def on_blocktxn(self, message): pass def on_cfcheckpt(self, message): pass def on_cfheaders(self, message): pass def on_cfilter(self, message): pass def on_cmpctblock(self, message): pass def on_feefilter(self, message): pass def on_filteradd(self, message): pass def on_filterclear(self, message): pass def on_filterload(self, message): pass def on_getaddr(self, message): pass def on_getblocks(self, message): pass def on_getblocktxn(self, message): pass def on_getdata(self, message): pass def on_getheaders(self, message): pass def on_headers(self, message): pass def on_mempool(self, message): pass def on_merkleblock(self, message): pass def on_notfound(self, message): pass def on_pong(self, message): pass def on_sendcmpct(self, message): pass def on_sendheaders(self, message): pass def on_tx(self, message): pass def on_inv(self, message): want = msg_getdata() for i in message.inv: if i.type != 0: want.inv.append(i) if len(want.inv): self.send_message(want) def on_ping(self, message): self.send_message(msg_pong(message.nonce)) def on_verack(self, message): pass def on_version(self, message): assert message.nVersion >= MIN_VERSION_SUPPORTED, "Version {} received. 
Test framework only supports versions greater than {}".format( message.nVersion, MIN_VERSION_SUPPORTED) self.send_message(msg_verack()) self.nServices = message.nServices # Connection helper methods def wait_until(self, test_function, timeout): - wait_until(test_function, timeout=timeout, - lock=mininode_lock, factor=self.factor) + wait_until(test_function, timeout=timeout, lock=mininode_lock, + timeout_factor=self.timeout_factor) def wait_for_disconnect(self, timeout=60): def test_function(): return not self.is_connected self.wait_until(test_function, timeout=timeout) # Message receiving helper methods def wait_for_tx(self, txid, timeout=60): def test_function(): assert self.is_connected if not self.last_message.get('tx'): return False return self.last_message['tx'].tx.rehash() == txid self.wait_until(test_function, timeout=timeout) def wait_for_block(self, blockhash, timeout=60): def test_function(): assert self.is_connected return self.last_message.get( "block") and self.last_message["block"].block.rehash() == blockhash self.wait_until(test_function, timeout=timeout) def wait_for_header(self, blockhash, timeout=60): def test_function(): assert self.is_connected last_headers = self.last_message.get('headers') if not last_headers: return False return last_headers.headers[0].rehash() == int(blockhash, 16) self.wait_until(test_function, timeout=timeout) def wait_for_merkleblock(self, blockhash, timeout=60): def test_function(): assert self.is_connected last_filtered_block = self.last_message.get('merkleblock') if not last_filtered_block: return False return last_filtered_block.merkleblock.header.rehash() == int(blockhash, 16) self.wait_until(test_function, timeout=timeout) def wait_for_getdata(self, hash_list, timeout=60): """Waits for a getdata message. The object hashes in the inventory vector must match the provided hash_list.""" def test_function(): assert self.is_connected last_data = self.last_message.get("getdata") if not last_data: return False return [x.hash for x in last_data.inv] == hash_list self.wait_until(test_function, timeout=timeout) def wait_for_getheaders(self, timeout=60): """Waits for a getheaders message. Receiving any getheaders message will satisfy the predicate. the last_message["getheaders"] value must be explicitly cleared before calling this method, or this will return immediately with success. 
TODO: change this method to take a hash value and only return true if the correct block header has been requested.""" def test_function(): assert self.is_connected return self.last_message.get("getheaders") self.wait_until(test_function, timeout=timeout) def wait_for_inv(self, expected_inv, timeout=60): """Waits for an INV message and checks that the first inv object in the message was as expected.""" if len(expected_inv) > 1: raise NotImplementedError( "wait_for_inv() will only verify the first inv object") def test_function(): assert self.is_connected return self.last_message.get("inv") and \ self.last_message["inv"].inv[0].type == expected_inv[0].type and \ self.last_message["inv"].inv[0].hash == expected_inv[0].hash self.wait_until(test_function, timeout=timeout) def wait_for_verack(self, timeout=60): def test_function(): return self.message_count["verack"] self.wait_until(test_function, timeout=timeout) # Message sending helper functions def send_and_ping(self, message, timeout=60): self.send_message(message) self.sync_with_ping(timeout=timeout) # Sync up with the node def sync_with_ping(self, timeout=60): self.send_message(msg_ping(nonce=self.ping_counter)) def test_function(): assert self.is_connected return self.last_message.get( "pong") and self.last_message["pong"].nonce == self.ping_counter self.wait_until(test_function, timeout=timeout) self.ping_counter += 1 # One lock for synchronizing all data access between the networking thread (see # NetworkThread below) and the thread running the test logic. For simplicity, # P2PConnection acquires this lock whenever delivering a message to a P2PInterface. # This lock should be acquired in the thread running the test logic to synchronize # access to any data shared with the P2PInterface or P2PConnection. mininode_lock = threading.RLock() class NetworkThread(threading.Thread): network_event_loop = None def __init__(self): super().__init__(name="NetworkThread") # There is only one event loop and no more than one thread must be # created assert not self.network_event_loop NetworkThread.network_event_loop = asyncio.new_event_loop() def run(self): """Start the network thread.""" self.network_event_loop.run_forever() def close(self, timeout=10): """Close the connections and network event loop.""" self.network_event_loop.call_soon_threadsafe( self.network_event_loop.stop) wait_until(lambda: not self.network_event_loop.is_running(), timeout=timeout) self.network_event_loop.close() self.join(timeout) # Safe to remove event loop. NetworkThread.network_event_loop = None class P2PDataStore(P2PInterface): """A P2P data store class. Keeps a block and transaction store and responds correctly to getdata and getheaders requests.""" def __init__(self): super().__init__() # store of blocks. key is block hash, value is a CBlock object self.block_store = {} self.last_block_hash = '' # store of txs. 
key is txid, value is a CTransaction object self.tx_store = {} self.getdata_requests = [] def on_getdata(self, message): """Check for the tx/block in our stores and if found, reply with an inv message.""" for inv in message.inv: self.getdata_requests.append(inv.hash) if (inv.type & MSG_TYPE_MASK) == MSG_TX and inv.hash in self.tx_store.keys(): self.send_message(msg_tx(self.tx_store[inv.hash])) elif (inv.type & MSG_TYPE_MASK) == MSG_BLOCK and inv.hash in self.block_store.keys(): self.send_message(msg_block(self.block_store[inv.hash])) else: logger.debug( 'getdata message type {} received.'.format(hex(inv.type))) def on_getheaders(self, message): """Search back through our block store for the locator, and reply with a headers message if found.""" locator, hash_stop = message.locator, message.hashstop # Assume that the most recent block added is the tip if not self.block_store: return headers_list = [self.block_store[self.last_block_hash]] maxheaders = 2000 while headers_list[-1].sha256 not in locator.vHave: # Walk back through the block store, adding headers to headers_list # as we go. prev_block_hash = headers_list[-1].hashPrevBlock if prev_block_hash in self.block_store: prev_block_header = CBlockHeader( self.block_store[prev_block_hash]) headers_list.append(prev_block_header) if prev_block_header.sha256 == hash_stop: # if this is the hashstop header, stop here break else: logger.debug('block hash {} not found in block store'.format( hex(prev_block_hash))) break # Truncate the list if there are too many headers headers_list = headers_list[:-maxheaders - 1:-1] response = msg_headers(headers_list) if response is not None: self.send_message(response) def send_blocks_and_test(self, blocks, node, *, success=True, force_send=False, reject_reason=None, expect_disconnect=False, timeout=60): """Send blocks to test node and test whether the tip advances. - add all blocks to our block_store - send a headers message for the final block - the on_getheaders handler will ensure that any getheaders are responded to - if force_send is False: wait for getdata for each of the blocks. The on_getdata handler will ensure that any getdata messages are responded to. Otherwise send the full block unsolicited. - if success is True: assert that the node's tip advances to the most recent block - if success is False: assert that the node's tip doesn't advance - if reject_reason is set: assert that the correct reject message is logged""" with mininode_lock: for block in blocks: self.block_store[block.sha256] = block self.last_block_hash = block.sha256 def test(): if force_send: for b in blocks: self.send_message(msg_block(block=b)) else: self.send_message( msg_headers([CBlockHeader(block) for block in blocks])) self.wait_until( lambda: blocks[-1].sha256 in self.getdata_requests, timeout=timeout) if expect_disconnect: self.wait_for_disconnect(timeout=timeout) else: self.sync_with_ping(timeout=timeout) if success: self.wait_until(lambda: node.getbestblockhash() == blocks[-1].hash, timeout=timeout) else: assert node.getbestblockhash() != blocks[-1].hash if reject_reason: with node.assert_debug_log(expected_msgs=[reject_reason]): test() else: test() def send_txs_and_test(self, txs, node, *, success=True, expect_disconnect=False, reject_reason=None): """Send txs to test node and test whether they're accepted to the mempool. 
- add all txs to our tx_store - send tx messages for all txs - if success is True/False: assert that the txs are/are not accepted to the mempool - if expect_disconnect is True: Skip the sync with ping - if reject_reason is set: assert that the correct reject message is logged.""" with mininode_lock: for tx in txs: self.tx_store[tx.sha256] = tx def test(): for tx in txs: self.send_message(msg_tx(tx)) if expect_disconnect: self.wait_for_disconnect() else: self.sync_with_ping() raw_mempool = node.getrawmempool() if success: # Check that all txs are now in the mempool for tx in txs: assert tx.hash in raw_mempool, "{} not found in mempool".format( tx.hash) else: # Check that none of the txs are now in the mempool for tx in txs: assert tx.hash not in raw_mempool, "{} tx found in mempool".format( tx.hash) if reject_reason: with node.assert_debug_log(expected_msgs=[reject_reason]): test() else: test() class P2PTxInvStore(P2PInterface): """A P2PInterface which stores a count of how many times each txid has been announced.""" def __init__(self): super().__init__() self.tx_invs_received = defaultdict(int) def on_inv(self, message): # Send getdata in response. super().on_inv(message) # Store how many times invs have been received for each tx. for i in message.inv: if i.type == MSG_TX: # save txid self.tx_invs_received[i.hash] += 1 def get_invs(self): with mininode_lock: return list(self.tx_invs_received.keys()) def wait_for_broadcast(self, txns, timeout=60): """Waits for the txns (list of txids) to complete initial broadcast. The mempool should mark unbroadcast=False for these transactions. """ # Wait until invs have been received (and getdatas sent) for each txid. self.wait_until(lambda: set(self.get_invs()) == set( [int(tx, 16) for tx in txns]), timeout) # Flush messages and wait for the getdatas to be processed self.sync_with_ping() diff --git a/test/functional/test_framework/test_framework.py b/test/functional/test_framework/test_framework.py index a2732d26a..b01ad44e2 100755 --- a/test/functional/test_framework/test_framework.py +++ b/test/functional/test_framework/test_framework.py @@ -1,703 +1,707 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2019 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Base class for RPC testing.""" import argparse import configparser from enum import Enum import logging import os import pdb import random import shutil import sys import tempfile import time from .authproxy import JSONRPCException from . import coverage from .test_node import TestNode from .mininode import NetworkThread from .util import ( assert_equal, check_json_precision, connect_nodes, disconnect_nodes, get_datadir_path, initialize_datadir, MAX_NODES, p2p_port, PortSeed, rpc_port, sync_blocks, sync_mempools, ) class TestStatus(Enum): PASSED = 1 FAILED = 2 SKIPPED = 3 TEST_EXIT_PASSED = 0 TEST_EXIT_FAILED = 1 TEST_EXIT_SKIPPED = 77 # Timestamp is Dec. 1st, 2019 at 00:00:00 TIMESTAMP_IN_THE_PAST = 1575158400 TMPDIR_PREFIX = "bitcoin_func_test_" class SkipTest(Exception): """This exception is raised to skip a test""" def __init__(self, message): self.message = message class BitcoinTestMetaClass(type): """Metaclass for BitcoinTestFramework. Ensures that any attempt to register a subclass of `BitcoinTestFramework` adheres to a standard whereby the subclass overrides `set_test_params` and `run_test` but DOES NOT override either `__init__` or `main`. 
If any of those standards are violated, a ``TypeError`` is raised.""" def __new__(cls, clsname, bases, dct): if not clsname == 'BitcoinTestFramework': if not ('run_test' in dct and 'set_test_params' in dct): raise TypeError("BitcoinTestFramework subclasses must override " "'run_test' and 'set_test_params'") if '__init__' in dct or 'main' in dct: raise TypeError("BitcoinTestFramework subclasses may not override " "'__init__' or 'main'") return super().__new__(cls, clsname, bases, dct) class BitcoinTestFramework(metaclass=BitcoinTestMetaClass): """Base class for a bitcoin test script. Individual bitcoin test scripts should subclass this class and override the set_test_params() and run_test() methods. Individual tests can also override the following methods to customize the test setup: - add_options() - setup_chain() - setup_network() - setup_nodes() The __init__() and main() methods should not be overridden. This class also contains various public and private helper methods.""" def __init__(self): """Sets test framework defaults. Do not override this method. Instead, override the set_test_params() method""" self.chain = 'regtest' self.setup_clean_chain = False self.nodes = [] self.network_thread = None # Wait for up to 60 seconds for the RPC server to respond self.rpc_timeout = 60 self.supports_cli = True self.bind_to_localhost_only = True # We run parse_args before set_test_params for tests who need to # know the parser options during setup. self.parse_args() self.set_test_params() + if self.options.timeout_factor == 0: + self.options.timeout_factor = 99999 # optionally, increase timeout by a factor - self.rpc_timeout = int(self.rpc_timeout * self.options.factor) + self.rpc_timeout = int(self.rpc_timeout * self.options.timeout_factor) def main(self): """Main function. This should not be overridden by the subclass test scripts.""" assert hasattr( self, "num_nodes"), "Test must set self.num_nodes in set_test_params()" try: self.setup() self.run_test() except JSONRPCException: self.log.exception("JSONRPC error") self.success = TestStatus.FAILED except SkipTest as e: self.log.warning("Test Skipped: {}".format(e.message)) self.success = TestStatus.SKIPPED except AssertionError: self.log.exception("Assertion failed") self.success = TestStatus.FAILED except KeyError: self.log.exception("Key error") self.success = TestStatus.FAILED except Exception: self.log.exception("Unexpected exception caught during testing") self.success = TestStatus.FAILED except KeyboardInterrupt: self.log.warning("Exiting after keyboard interrupt") self.success = TestStatus.FAILED finally: exit_code = self.shutdown() sys.exit(exit_code) def parse_args(self): parser = argparse.ArgumentParser(usage="%(prog)s [options]") parser.add_argument("--nocleanup", dest="nocleanup", default=False, action="store_true", help="Leave bitcoinds and test.* datadir on exit or error") parser.add_argument("--noshutdown", dest="noshutdown", default=False, action="store_true", help="Don't stop bitcoinds after the test execution") parser.add_argument("--cachedir", dest="cachedir", default=os.path.abspath(os.path.dirname(os.path.realpath(__file__)) + "/../../cache"), help="Directory for caching pregenerated datadirs (default: %(default)s)") parser.add_argument("--tmpdir", dest="tmpdir", help="Root directory for datadirs") parser.add_argument("-l", "--loglevel", dest="loglevel", default="INFO", help="log events at this level and higher to the console. Can be set to DEBUG, INFO, WARNING, ERROR or CRITICAL. 
Passing --loglevel DEBUG will output all logs to console. Note that logs at all levels are always written to the test_framework.log file in the temporary test directory.") parser.add_argument("--tracerpc", dest="trace_rpc", default=False, action="store_true", help="Print out all RPC calls as they are made") parser.add_argument("--portseed", dest="port_seed", default=os.getpid(), type=int, help="The seed to use for assigning port numbers (default: current process id)") parser.add_argument("--coveragedir", dest="coveragedir", help="Write tested RPC commands into this directory") parser.add_argument("--configfile", dest="configfile", default=os.path.abspath(os.path.dirname(os.path.realpath( __file__)) + "/../../config.ini"), help="Location of the test framework config file (default: %(default)s)") parser.add_argument("--pdbonfailure", dest="pdbonfailure", default=False, action="store_true", help="Attach a python debugger if test fails") parser.add_argument("--usecli", dest="usecli", default=False, action="store_true", help="use bitcoin-cli instead of RPC for all commands") parser.add_argument("--perf", dest="perf", default=False, action="store_true", help="profile running nodes with perf for the duration of the test") parser.add_argument("--valgrind", dest="valgrind", default=False, action="store_true", help="run nodes under the valgrind memory error detector: expect at least a ~10x slowdown, valgrind 3.14 or later required") parser.add_argument("--randomseed", type=int, help="set a random seed for deterministically reproducing a previous test run") parser.add_argument("--with-axionactivation", dest="axionactivation", default=False, action="store_true", help="Activate axion update on timestamp {}".format(TIMESTAMP_IN_THE_PAST)) parser.add_argument( - '--factor', + '--timeout-factor', + dest="timeout_factor", type=float, default=1.0, - help='adjust test timeouts by a factor') + help='adjust test timeouts by a factor. ' + 'Setting it to 0 disables all timeouts') self.add_options(parser) self.options = parser.parse_args() def setup(self): """Call this method to start up the test framework object with options set.""" PortSeed.n = self.options.port_seed check_json_precision() self.options.cachedir = os.path.abspath(self.options.cachedir) config = configparser.ConfigParser() config.read_file(open(self.options.configfile, encoding='utf-8')) self.config = config fname_bitcoind = os.path.join( config["environment"]["BUILDDIR"], "src", "bitcoind" + config["environment"]["EXEEXT"] ) fname_bitcoincli = os.path.join( config["environment"]["BUILDDIR"], "src", "bitcoin-cli" + config["environment"]["EXEEXT"] ) self.options.bitcoind = os.getenv("BITCOIND", default=fname_bitcoind) self.options.bitcoincli = os.getenv( "BITCOINCLI", default=fname_bitcoincli) self.options.emulator = config["environment"]["EMULATOR"] or None os.environ['PATH'] = config['environment']['BUILDDIR'] + os.pathsep + \ config['environment']['BUILDDIR'] + os.path.sep + "qt" + os.pathsep + \ os.environ['PATH'] # Set up temp directory and start logging if self.options.tmpdir: self.options.tmpdir = os.path.abspath(self.options.tmpdir) os.makedirs(self.options.tmpdir, exist_ok=False) else: self.options.tmpdir = tempfile.mkdtemp(prefix=TMPDIR_PREFIX) self._start_logging() # Seed the PRNG. Note that test runs are reproducible if and only if # a single thread accesses the PRNG. For more information, see # https://docs.python.org/3/library/random.html#notes-on-reproducibility. # The network thread shouldn't access random. 
If we need to change the # network thread to access randomness, it should instantiate its own # random.Random object. seed = self.options.randomseed if seed is None: seed = random.randrange(sys.maxsize) else: self.log.debug("User supplied random seed {}".format(seed)) random.seed(seed) self.log.debug("PRNG seed is: {}".format(seed)) self.log.debug('Setting up network thread') self.network_thread = NetworkThread() self.network_thread.start() if self.options.usecli: if not self.supports_cli: raise SkipTest( "--usecli specified but test does not support using CLI") self.skip_if_no_cli() self.skip_test_if_missing_module() self.setup_chain() self.setup_network() self.success = TestStatus.PASSED def shutdown(self): """Call this method to shut down the test framework object.""" if self.success == TestStatus.FAILED and self.options.pdbonfailure: print("Testcase failed. Attaching python debugger. Enter ? for help") pdb.set_trace() self.log.debug('Closing down network thread') self.network_thread.close() if not self.options.noshutdown: self.log.info("Stopping nodes") if self.nodes: self.stop_nodes() else: for node in self.nodes: node.cleanup_on_exit = False self.log.info( "Note: bitcoinds were not stopped and may still be running") should_clean_up = ( not self.options.nocleanup and not self.options.noshutdown and self.success != TestStatus.FAILED and not self.options.perf ) if should_clean_up: self.log.info("Cleaning up {} on exit".format(self.options.tmpdir)) cleanup_tree_on_exit = True elif self.options.perf: self.log.warning( "Not cleaning up dir {} due to perf data".format( self.options.tmpdir)) cleanup_tree_on_exit = False else: self.log.warning( "Not cleaning up dir {}".format(self.options.tmpdir)) cleanup_tree_on_exit = False if self.success == TestStatus.PASSED: self.log.info("Tests successful") exit_code = TEST_EXIT_PASSED elif self.success == TestStatus.SKIPPED: self.log.info("Test skipped") exit_code = TEST_EXIT_SKIPPED else: self.log.error( "Test failed. Test logging available at {}/test_framework.log".format(self.options.tmpdir)) self.log.error("Hint: Call {} '{}' to consolidate all logs".format(os.path.normpath( os.path.dirname(os.path.realpath(__file__)) + "/../combine_logs.py"), self.options.tmpdir)) exit_code = TEST_EXIT_FAILED # Logging.shutdown will not remove stream- and filehandlers, so we must # do it explicitly. Handlers are removed so the next test run can apply # different log handler settings. # See: https://docs.python.org/3/library/logging.html#logging.shutdown for h in list(self.log.handlers): h.flush() h.close() self.log.removeHandler(h) rpc_logger = logging.getLogger("BitcoinRPC") for h in list(rpc_logger.handlers): h.flush() rpc_logger.removeHandler(h) if cleanup_tree_on_exit: shutil.rmtree(self.options.tmpdir) self.nodes.clear() return exit_code # Methods to override in subclass test scripts. 
def set_test_params(self): """Tests must this method to change default values for number of nodes, topology, etc""" raise NotImplementedError def add_options(self, parser): """Override this method to add command-line options to the test""" pass def skip_test_if_missing_module(self): """Override this method to skip a test if a module is not compiled""" pass def setup_chain(self): """Override this method to customize blockchain setup""" self.log.info("Initializing test directory " + self.options.tmpdir) if self.setup_clean_chain: self._initialize_chain_clean() else: self._initialize_chain() def setup_network(self): """Override this method to customize test network topology""" self.setup_nodes() # Connect the nodes as a "chain". This allows us # to split the network between nodes 1 and 2 to get # two halves that can work on competing chains. # # Topology looks like this: # node0 <-- node1 <-- node2 <-- node3 # # If all nodes are in IBD (clean chain from genesis), node0 is assumed to be the source of blocks (miner). To # ensure block propagation, all nodes will establish outgoing connections toward node0. # See fPreferredDownload in net_processing. # # If further outbound connections are needed, they can be added at the beginning of the test with e.g. # connect_nodes(self.nodes[1], 2) for i in range(self.num_nodes - 1): connect_nodes(self.nodes[i + 1], self.nodes[i]) self.sync_all() def setup_nodes(self): """Override this method to customize test node setup""" extra_args = None if hasattr(self, "extra_args"): extra_args = self.extra_args self.add_nodes(self.num_nodes, extra_args) self.start_nodes() self.import_deterministic_coinbase_privkeys() if not self.setup_clean_chain: for n in self.nodes: assert_equal(n.getblockchaininfo()["blocks"], 199) # To ensure that all nodes are out of IBD, the most recent block # must have a timestamp not too old (see IsInitialBlockDownload()). self.log.debug('Generate a block with current time') block_hash = self.nodes[0].generate(1)[0] block = self.nodes[0].getblock(blockhash=block_hash, verbosity=0) for n in self.nodes: n.submitblock(block) chain_info = n.getblockchaininfo() assert_equal(chain_info["blocks"], 200) assert_equal(chain_info["initialblockdownload"], False) def import_deterministic_coinbase_privkeys(self): for n in self.nodes: try: n.getwalletinfo() except JSONRPCException as e: assert str(e).startswith('Method not found') continue n.importprivkey( privkey=n.get_deterministic_priv_key().key, label='coinbase') def run_test(self): """Tests must override this method to define test logic""" raise NotImplementedError # Public helper methods. These can be accessed by the subclass test # scripts. def add_nodes(self, num_nodes, extra_args=None, *, host=None, binary=None): """Instantiate TestNode objects. 
Should only be called once after the nodes have been specified in set_test_params().""" if self.bind_to_localhost_only: extra_confs = [["bind=127.0.0.1"]] * num_nodes else: extra_confs = [[]] * num_nodes if extra_args is None: extra_args = [[]] * num_nodes if binary is None: binary = [self.options.bitcoind] * num_nodes assert_equal(len(extra_confs), num_nodes) assert_equal(len(extra_args), num_nodes) assert_equal(len(binary), num_nodes) for i in range(num_nodes): self.nodes.append(TestNode( i, get_datadir_path(self.options.tmpdir, i), chain=self.chain, host=host, rpc_port=rpc_port(i), p2p_port=p2p_port(i), timewait=self.rpc_timeout, - factor=self.options.factor, + timeout_factor=self.options.timeout_factor, bitcoind=binary[i], bitcoin_cli=self.options.bitcoincli, coverage_dir=self.options.coveragedir, cwd=self.options.tmpdir, extra_conf=extra_confs[i], extra_args=extra_args[i], use_cli=self.options.usecli, emulator=self.options.emulator, start_perf=self.options.perf, use_valgrind=self.options.valgrind, )) if self.options.axionactivation: self.nodes[i].extend_default_args( ["-axionactivationtime={}".format(TIMESTAMP_IN_THE_PAST)]) def start_node(self, i, *args, **kwargs): """Start a bitcoind""" node = self.nodes[i] node.start(*args, **kwargs) node.wait_for_rpc_connection() if self.options.coveragedir is not None: coverage.write_all_rpc_commands(self.options.coveragedir, node.rpc) def start_nodes(self, extra_args=None, *args, **kwargs): """Start multiple bitcoinds""" if extra_args is None: extra_args = [None] * self.num_nodes assert_equal(len(extra_args), self.num_nodes) try: for i, node in enumerate(self.nodes): node.start(extra_args[i], *args, **kwargs) for node in self.nodes: node.wait_for_rpc_connection() except BaseException: # If one node failed to start, stop the others self.stop_nodes() raise if self.options.coveragedir is not None: for node in self.nodes: coverage.write_all_rpc_commands( self.options.coveragedir, node.rpc) def stop_node(self, i, expected_stderr='', wait=0): """Stop a bitcoind test node""" self.nodes[i].stop_node(expected_stderr, wait=wait) def stop_nodes(self, wait=0): """Stop multiple bitcoind test nodes""" for node in self.nodes: # Issue RPC to stop nodes node.stop_node(wait=wait, wait_until_stopped=False) for node in self.nodes: # Wait for nodes to stop node.wait_until_stopped() def restart_node(self, i, extra_args=None): """Stop and start a test node""" self.stop_node(i) self.start_node(i, extra_args) def wait_for_node_exit(self, i, timeout): self.nodes[i].process.wait(timeout) def split_network(self): """ Split the network of four nodes into nodes 0/1 and 2/3. """ disconnect_nodes(self.nodes[1], self.nodes[2]) disconnect_nodes(self.nodes[2], self.nodes[1]) self.sync_all(self.nodes[:2]) self.sync_all(self.nodes[2:]) def join_network(self): """ Join the (previously split) network halves together. """ connect_nodes(self.nodes[1], self.nodes[2]) self.sync_all() def sync_blocks(self, nodes=None, **kwargs): sync_blocks(nodes or self.nodes, **kwargs) def sync_mempools(self, nodes=None, **kwargs): sync_mempools(nodes or self.nodes, **kwargs) def sync_all(self, nodes=None, **kwargs): self.sync_blocks(nodes, **kwargs) self.sync_mempools(nodes, **kwargs) # Private helper methods. These should not be accessed by the subclass # test scripts. 
def _start_logging(self): # Add logger and logging handlers self.log = logging.getLogger('TestFramework') self.log.setLevel(logging.DEBUG) # Create file handler to log all messages fh = logging.FileHandler( self.options.tmpdir + '/test_framework.log', encoding='utf-8') fh.setLevel(logging.DEBUG) # Create console handler to log messages to stderr. By default this # logs only error messages, but can be configured with --loglevel. ch = logging.StreamHandler(sys.stdout) # User can provide log level as a number or string (eg DEBUG). loglevel # was caught as a string, so try to convert it to an int ll = int(self.options.loglevel) if self.options.loglevel.isdigit( ) else self.options.loglevel.upper() ch.setLevel(ll) # Format logs the same as bitcoind's debug.log with microprecision (so # log files can be concatenated and sorted) formatter = logging.Formatter( fmt='%(asctime)s.%(msecs)03d000Z %(name)s (%(levelname)s): %(message)s', datefmt='%Y-%m-%dT%H:%M:%S') formatter.converter = time.gmtime fh.setFormatter(formatter) ch.setFormatter(formatter) # add the handlers to the logger self.log.addHandler(fh) self.log.addHandler(ch) if self.options.trace_rpc: rpc_logger = logging.getLogger("BitcoinRPC") rpc_logger.setLevel(logging.DEBUG) rpc_handler = logging.StreamHandler(sys.stdout) rpc_handler.setLevel(logging.DEBUG) rpc_logger.addHandler(rpc_handler) def _initialize_chain(self): """Initialize a pre-mined blockchain for use by the test. Create a cache of a 199-block-long chain Afterward, create num_nodes copies from the cache.""" # Use node 0 to create the cache for all other nodes CACHE_NODE_ID = 0 cache_node_dir = get_datadir_path(self.options.cachedir, CACHE_NODE_ID) assert self.num_nodes <= MAX_NODES if not os.path.isdir(cache_node_dir): self.log.debug( "Creating cache directory {}".format(cache_node_dir)) initialize_datadir( self.options.cachedir, CACHE_NODE_ID, self.chain) self.nodes.append( TestNode( CACHE_NODE_ID, cache_node_dir, chain=self.chain, extra_conf=["bind=127.0.0.1"], extra_args=['-disablewallet'], host=None, rpc_port=rpc_port(CACHE_NODE_ID), p2p_port=p2p_port(CACHE_NODE_ID), timewait=self.rpc_timeout, - factor=self.options.factor, + timeout_factor=self.options.timeout_factor, bitcoind=self.options.bitcoind, bitcoin_cli=self.options.bitcoincli, coverage_dir=None, cwd=self.options.tmpdir, emulator=self.options.emulator, )) if self.options.axionactivation: self.nodes[CACHE_NODE_ID].extend_default_args( ["-axionactivationtime={}".format(TIMESTAMP_IN_THE_PAST)]) self.start_node(CACHE_NODE_ID) cache_node = self.nodes[CACHE_NODE_ID] # Wait for RPC connections to be ready cache_node.wait_for_rpc_connection() # Set a time in the past, so that blocks don't end up in the future cache_node.setmocktime( cache_node.getblockheader( cache_node.getbestblockhash())['time']) # Create a 199-block-long chain; each of the 4 first nodes # gets 25 mature blocks and 25 immature. # The 4th node gets only 24 immature blocks so that the very last # block in the cache does not age too much (have an old tip age). # This is needed so that we are out of IBD when the test starts, # see the tip age check in IsInitialBlockDownload(). 
for i in range(8): cache_node.generatetoaddress( nblocks=25 if i != 7 else 24, address=TestNode.PRIV_KEYS[i % 4].address, ) assert_equal(cache_node.getblockchaininfo()["blocks"], 199) # Shut it down, and clean up cache directories: self.stop_nodes() self.nodes = [] def cache_path(*paths): return os.path.join(cache_node_dir, self.chain, *paths) # Remove empty wallets dir os.rmdir(cache_path('wallets')) for entry in os.listdir(cache_path()): # Only keep chainstate and blocks folder if entry not in ['chainstate', 'blocks']: os.remove(cache_path(entry)) for i in range(self.num_nodes): self.log.debug( "Copy cache directory {} to node {}".format( cache_node_dir, i)) to_dir = get_datadir_path(self.options.tmpdir, i) shutil.copytree(cache_node_dir, to_dir) # Overwrite port/rpcport in bitcoin.conf initialize_datadir(self.options.tmpdir, i, self.chain) def _initialize_chain_clean(self): """Initialize empty blockchain for use by the test. Create an empty blockchain and num_nodes wallets. Useful if a test case wants complete control over initialization.""" for i in range(self.num_nodes): initialize_datadir(self.options.tmpdir, i, self.chain) def skip_if_no_py3_zmq(self): """Attempt to import the zmq package and skip the test if the import fails.""" try: import zmq # noqa except ImportError: raise SkipTest("python3-zmq module not available.") def skip_if_no_bitcoind_zmq(self): """Skip the running test if bitcoind has not been compiled with zmq support.""" if not self.is_zmq_compiled(): raise SkipTest("bitcoind has not been built with zmq enabled.") def skip_if_no_wallet(self): """Skip the running test if wallet has not been compiled.""" if not self.is_wallet_compiled(): raise SkipTest("wallet has not been compiled.") def skip_if_no_wallet_tool(self): """Skip the running test if bitcoin-wallet has not been compiled.""" if not self.is_wallet_tool_compiled(): raise SkipTest("bitcoin-wallet has not been compiled") def skip_if_no_cli(self): """Skip the running test if bitcoin-cli has not been compiled.""" if not self.is_cli_compiled(): raise SkipTest("bitcoin-cli has not been compiled.") def is_cli_compiled(self): """Checks whether bitcoin-cli was compiled.""" return self.config["components"].getboolean("ENABLE_CLI") def is_wallet_compiled(self): """Checks whether the wallet module was compiled.""" return self.config["components"].getboolean("ENABLE_WALLET") def is_wallet_tool_compiled(self): """Checks whether bitcoin-wallet was compiled.""" return self.config["components"].getboolean("ENABLE_WALLET_TOOL") def is_zmq_compiled(self): """Checks whether the zmq module was compiled.""" return self.config["components"].getboolean("ENABLE_ZMQ") diff --git a/test/functional/test_framework/test_node.py b/test/functional/test_framework/test_node.py index be3c3755e..8af036b2f 100755 --- a/test/functional/test_framework/test_node.py +++ b/test/functional/test_framework/test_node.py @@ -1,887 +1,893 @@ #!/usr/bin/env python3 # Copyright (c) 2017-2019 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
"""Class for bitcoind node under test""" import contextlib import decimal from enum import Enum import errno import http.client import json import logging import os import re import subprocess import sys import tempfile import time import urllib.parse import collections import shlex from .authproxy import JSONRPCException from .descriptors import descsum_create from .messages import COIN, CTransaction, FromHex from .util import ( MAX_NODES, append_config, delete_cookie_file, get_auth_cookie, get_rpc_proxy, p2p_port, rpc_url, wait_until, EncodeDecimal, ) BITCOIND_PROC_WAIT_TIMEOUT = 60 class FailedToStartError(Exception): """Raised when a node fails to start correctly.""" class ErrorMatch(Enum): FULL_TEXT = 1 FULL_REGEX = 2 PARTIAL_REGEX = 3 class TestNode(): """A class for representing a bitcoind node under test. This class contains: - state about the node (whether it's running, etc) - a Python subprocess.Popen object representing the running process - an RPC connection to the node - one or more P2P connections to the node To make things easier for the test writer, any unrecognised messages will be dispatched to the RPC connection.""" - def __init__(self, i, datadir, *, chain, host, rpc_port, p2p_port, timewait, factor, bitcoind, bitcoin_cli, + def __init__(self, i, datadir, *, chain, host, rpc_port, p2p_port, timewait, timeout_factor, bitcoind, bitcoin_cli, coverage_dir, cwd, extra_conf=None, extra_args=None, use_cli=False, emulator=None, start_perf=False, use_valgrind=False): """ Kwargs: start_perf (bool): If True, begin profiling the node with `perf` as soon as the node starts. """ self.index = i self.datadir = datadir self.bitcoinconf = os.path.join(self.datadir, "bitcoin.conf") self.stdout_dir = os.path.join(self.datadir, "stdout") self.stderr_dir = os.path.join(self.datadir, "stderr") self.chain = chain self.host = host self.rpc_port = rpc_port self.p2p_port = p2p_port self.name = "testnode-{}".format(i) self.rpc_timeout = timewait self.binary = bitcoind if not os.path.isfile(self.binary): raise FileNotFoundError( "Binary '{}' could not be found.\nTry setting it manually:\n\tBITCOIND= {}".format(self.binary, sys.argv[0])) self.coverage_dir = coverage_dir self.cwd = cwd if extra_conf is not None: append_config(datadir, extra_conf) # Most callers will just need to add extra args to the default list # below. # For those callers that need more flexibility, they can access the # default args using the provided facilities. # Note that common args are set in the config file (see # initialize_datadir) self.extra_args = extra_args # Configuration for logging is set as command-line args rather than in the bitcoin.conf file. # This means that starting a bitcoind using the temp dir to debug a failed test won't # spam debug.log. 
self.default_args = [ "-datadir=" + self.datadir, "-logtimemicros", "-logthreadnames", "-debug", "-debugexclude=libevent", "-debugexclude=leveldb", "-uacomment=" + self.name, "-noprinttoconsole", ] if use_valgrind: default_suppressions_file = os.path.join( os.path.dirname(os.path.realpath(__file__)), "..", "..", "..", "contrib", "valgrind.supp") suppressions_file = os.getenv("VALGRIND_SUPPRESSIONS_FILE", default_suppressions_file) self.binary = "valgrind" self.bitcoind_args = [bitcoind] + self.default_args self.default_args = ["--suppressions={}".format(suppressions_file), "--gen-suppressions=all", "--exit-on-first-error=yes", "--error-exitcode=1", "--quiet"] + self.bitcoind_args if emulator is not None: if not os.path.isfile(emulator): raise FileNotFoundError( "Emulator '{}' could not be found.".format(emulator)) self.emulator = emulator if use_cli and not os.path.isfile(bitcoin_cli): raise FileNotFoundError( "Binary '{}' could not be found.\nTry setting it manually:\n\tBITCOINCLI= {}".format(bitcoin_cli, sys.argv[0])) self.cli = TestNodeCLI(bitcoin_cli, self.datadir, self.emulator) self.use_cli = use_cli self.start_perf = start_perf self.running = False self.process = None self.rpc_connected = False self.rpc = None self.url = None self.relay_fee_cache = None self.log = logging.getLogger('TestFramework.node{}'.format(i)) # Whether to kill the node when this object goes away self.cleanup_on_exit = True # Cache perf subprocesses here by their data output filename. self.perf_subprocesses = {} self.p2ps = [] - self.factor = factor + self.timeout_factor = timeout_factor AddressKeyPair = collections.namedtuple( 'AddressKeyPair', ['address', 'key']) PRIV_KEYS = [ # address , privkey AddressKeyPair( 'mjTkW3DjgyZck4KbiRusZsqTgaYTxdSz6z', 'cVpF924EspNh8KjYsfhgY96mmxvT6DgdWiTYMtMjuM74hJaU5psW'), AddressKeyPair( 'msX6jQXvxiNhx3Q62PKeLPrhrqZQdSimTg', 'cUxsWyKyZ9MAQTaAhUQWJmBbSvHMwSmuv59KgxQV7oZQU3PXN3KE'), AddressKeyPair( 'mnonCMyH9TmAsSj3M59DsbH8H63U3RKoFP', 'cTrh7dkEAeJd6b3MRX9bZK8eRmNqVCMH3LSUkE3dSFDyzjU38QxK'), AddressKeyPair( 'mqJupas8Dt2uestQDvV2NH3RU8uZh2dqQR', 'cVuKKa7gbehEQvVq717hYcbE9Dqmq7KEBKqWgWrYBa2CKKrhtRim'), AddressKeyPair( 'msYac7Rvd5ywm6pEmkjyxhbCDKqWsVeYws', 'cQDCBuKcjanpXDpCqacNSjYfxeQj8G6CAtH1Dsk3cXyqLNC4RPuh'), AddressKeyPair( 'n2rnuUnwLgXqf9kk2kjvVm8R5BZK1yxQBi', 'cQakmfPSLSqKHyMFGwAqKHgWUiofJCagVGhiB4KCainaeCSxeyYq'), AddressKeyPair( 'myzuPxRwsf3vvGzEuzPfK9Nf2RfwauwYe6', 'cQMpDLJwA8DBe9NcQbdoSb1BhmFxVjWD5gRyrLZCtpuF9Zi3a9RK'), AddressKeyPair( 'mumwTaMtbxEPUswmLBBN3vM9oGRtGBrys8', 'cSXmRKXVcoouhNNVpcNKFfxsTsToY5pvB9DVsFksF1ENunTzRKsy'), AddressKeyPair( 'mpV7aGShMkJCZgbW7F6iZgrvuPHjZjH9qg', 'cSoXt6tm3pqy43UMabY6eUTmR3eSUYFtB2iNQDGgb3VUnRsQys2k'), AddressKeyPair( 'mq4fBNdckGtvY2mijd9am7DRsbRB4KjUkf', 'cN55daf1HotwBAgAKWVgDcoppmUNDtQSfb7XLutTLeAgVc3u8hik'), AddressKeyPair( 'mpFAHDjX7KregM3rVotdXzQmkbwtbQEnZ6', 'cT7qK7g1wkYEMvKowd2ZrX1E5f6JQ7TM246UfqbCiyF7kZhorpX3'), AddressKeyPair( 'mzRe8QZMfGi58KyWCse2exxEFry2sfF2Y7', 'cPiRWE8KMjTRxH1MWkPerhfoHFn5iHPWVK5aPqjW8NxmdwenFinJ'), ] def get_deterministic_priv_key(self): """Return a deterministic priv key in base58, that only depends on the node's index""" assert len(self.PRIV_KEYS) == MAX_NODES return self.PRIV_KEYS[self.index] def _node_msg(self, msg: str) -> str: """Return a modified msg that identifies this node by its index as a debugging aid.""" return "[node {}] {}".format(self.index, msg) def _raise_assertion_error(self, msg: str): """Raise an AssertionError with msg modified to identify this node.""" raise 
AssertionError(self._node_msg(msg)) def __del__(self): # Ensure that we don't leave any bitcoind processes lying around after # the test ends if self.process and self.cleanup_on_exit: # Should only happen on test failure # Avoid using logger, as that may have already been shutdown when # this destructor is called. print(self._node_msg("Cleaning up leftover process")) self.process.kill() def __getattr__(self, name): """Dispatches any unrecognised messages to the RPC connection or a CLI instance.""" if self.use_cli: return getattr(RPCOverloadWrapper(self.cli, True), name) else: assert self.rpc is not None, self._node_msg( "Error: RPC not initialized") assert self.rpc_connected, self._node_msg( "Error: No RPC connection") return getattr(RPCOverloadWrapper(self.rpc), name) def clear_default_args(self): self.default_args.clear() def extend_default_args(self, args): self.default_args.extend(args) def remove_default_args(self, args): for rm_arg in args: # Remove all occurrences of rm_arg in self.default_args: # - if the arg is a flag (-flag), then the names must match # - if the arg is a value (-key=value) then the name must starts # with "-key=" (the '"' char is to avoid removing "-key_suffix" # arg is "-key" is the argument to remove). self.default_args = [def_arg for def_arg in self.default_args if rm_arg != def_arg and not def_arg.startswith(rm_arg + '=')] def start(self, extra_args=None, *, cwd=None, stdout=None, stderr=None, **kwargs): """Start the node.""" if extra_args is None: extra_args = self.extra_args # Add a new stdout and stderr file each time bitcoind is started if stderr is None: stderr = tempfile.NamedTemporaryFile( dir=self.stderr_dir, delete=False) if stdout is None: stdout = tempfile.NamedTemporaryFile( dir=self.stdout_dir, delete=False) self.stderr = stderr self.stdout = stdout if cwd is None: cwd = self.cwd # Delete any existing cookie file -- if such a file exists (eg due to # unclean shutdown), it will get overwritten anyway by bitcoind, and # potentially interfere with our attempt to authenticate delete_cookie_file(self.datadir, self.chain) # add environment variable LIBC_FATAL_STDERR_=1 so that libc errors are # written to stderr and not the terminal subp_env = dict(os.environ, LIBC_FATAL_STDERR_="1") p_args = [self.binary] + self.default_args + extra_args if self.emulator is not None: p_args = [self.emulator] + p_args self.process = subprocess.Popen( p_args, env=subp_env, stdout=stdout, stderr=stderr, cwd=cwd, **kwargs) self.running = True self.log.debug("bitcoind started, waiting for RPC to come up") if self.start_perf: self._start_perf() def wait_for_rpc_connection(self): """Sets up an RPC connection to the bitcoind process. Returns False if unable to connect.""" # Poll at a rate of four times per second poll_per_s = 4 for _ in range(poll_per_s * self.rpc_timeout): if self.process.poll() is not None: raise FailedToStartError(self._node_msg( 'bitcoind exited with status {} during initialization'.format(self.process.returncode))) try: rpc = get_rpc_proxy( rpc_url( self.datadir, self.chain, self.host, self.rpc_port), self.index, timeout=self.rpc_timeout, coveragedir=self.coverage_dir) rpc.getblockcount() # If the call to getblockcount() succeeds then the RPC # connection is up wait_until(lambda: rpc.getmempoolinfo()['loaded']) # Wait for the node to finish reindex, block import, and # loading the mempool. Usually importing happens fast or # even "immediate" when the node is started. However, there # is no guarantee and sometimes ThreadImport might finish # later. 
This is going to cause intermittent test failures, # because generally the tests assume the node is fully # ready after being started. # # For example, the node will reject block messages from p2p # when it is still importing with the error "Unexpected # block message received" # # The wait is done here to make tests as robust as possible # and prevent racy tests and intermittent failures as much # as possible. Some tests might not need this, but the - # overhead is trivial, and the added gurantees are worth + # overhead is trivial, and the added guarantees are worth # the minimal performance cost. self.log.debug("RPC successfully started") if self.use_cli: return self.rpc = rpc self.rpc_connected = True self.url = self.rpc.url return except JSONRPCException as e: # Initialization phase # -28 RPC in warmup # -342 Service unavailable, RPC server started but is shutting down due to error if e.error['code'] != -28 and e.error['code'] != -342: raise # unknown JSON RPC exception except ConnectionResetError: # This might happen when the RPC server is in warmup, but shut down before the call to getblockcount # succeeds. Try again to properly raise the FailedToStartError pass except OSError as e: if e.errno != errno.ECONNREFUSED: # Port not yet open? raise # unknown OS error except ValueError as e: # cookie file not found and no rpcuser or rpcpassword; # bitcoind is still starting if "No RPC credentials" not in str(e): raise time.sleep(1.0 / poll_per_s) self._raise_assertion_error("Unable to connect to bitcoind") def wait_for_cookie_credentials(self): """Ensures auth cookie credentials can be read, e.g. for testing CLI with -rpcwait before RPC connection is up.""" self.log.debug("Waiting for cookie credentials") # Poll at a rate of four times per second. poll_per_s = 4 for _ in range(poll_per_s * self.rpc_timeout): try: get_auth_cookie(self.datadir, self.chain) self.log.debug("Cookie credentials successfully retrieved") return except ValueError: # cookie file not found and no rpcuser or rpcpassword; # bitcoind is still starting so we continue polling until # RPC credentials are retrieved pass time.sleep(1.0 / poll_per_s) self._raise_assertion_error( "Unable to retrieve cookie credentials after {}s".format( self.rpc_timeout)) def generate(self, nblocks, maxtries=1000000): self.log.debug( "TestNode.generate() dispatches `generate` call to `generatetoaddress`") return self.generatetoaddress( nblocks=nblocks, address=self.get_deterministic_priv_key().address, maxtries=maxtries) def get_wallet_rpc(self, wallet_name): if self.use_cli: return RPCOverloadWrapper( self.cli("-rpcwallet={}".format(wallet_name)), True) else: assert self.rpc is not None, self._node_msg( "Error: RPC not initialized") assert self.rpc_connected, self._node_msg( "Error: RPC not connected") wallet_path = "wallet/{}".format(urllib.parse.quote(wallet_name)) return RPCOverloadWrapper(self.rpc / wallet_path) def stop_node(self, expected_stderr='', *, wait=0, wait_until_stopped=True): """Stop the node.""" if not self.running: return self.log.debug("Stopping node") try: self.stop(wait=wait) except http.client.CannotSendRequest: self.log.exception("Unable to stop node.") # If there are any running perf processes, stop them. 
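        # Iterate over a copy of the keys, since _stop_perf() pops entries
        # from self.perf_subprocesses while we loop.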
for profile_name in tuple(self.perf_subprocesses.keys()): self._stop_perf(profile_name) # Check that stderr is as expected self.stderr.seek(0) stderr = self.stderr.read().decode('utf-8').strip() if stderr != expected_stderr: raise AssertionError( "Unexpected stderr {} != {}".format(stderr, expected_stderr)) self.stdout.close() self.stderr.close() del self.p2ps[:] if wait_until_stopped: self.wait_until_stopped() def is_node_stopped(self): """Checks whether the node has stopped. Returns True if the node has stopped. False otherwise. This method is responsible for freeing resources (self.process).""" if not self.running: return True return_code = self.process.poll() if return_code is None: return False # process has stopped. Assert that it didn't return an error code. assert return_code == 0, self._node_msg( "Node returned non-zero exit code ({}) when stopping".format(return_code)) self.running = False self.process = None self.rpc_connected = False self.rpc = None self.log.debug("Node stopped") return True def wait_until_stopped(self, timeout=BITCOIND_PROC_WAIT_TIMEOUT): - wait_until(self.is_node_stopped, timeout=timeout, factor=self.factor) + wait_until( + self.is_node_stopped, + timeout=timeout, + timeout_factor=self.timeout_factor) @contextlib.contextmanager def assert_debug_log(self, expected_msgs, unexpected_msgs=None, timeout=2): """Assert that some debug messages are present within some timeout. Unexpected debug messages may be optionally provided to fail a test if they appear before expected messages. Note: expected_msgs must always be non-empty even if the goal is to check for unexpected_msgs. This provides a bounded scenario such that "we expect to reach some target resulting in expected_msgs without seeing unexpected_msgs. Otherwise, we are testing that something never happens, which is fundamentally not robust test logic. """ if not expected_msgs: raise AssertionError("Expected debug messages is empty") if unexpected_msgs is None: unexpected_msgs = [] - time_end = time.time() + timeout * self.factor + time_end = time.time() + timeout * self.timeout_factor debug_log = os.path.join(self.datadir, self.chain, 'debug.log') with open(debug_log, encoding='utf-8') as dl: dl.seek(0, 2) prev_size = dl.tell() yield while True: found = True with open(debug_log, encoding='utf-8') as dl: dl.seek(prev_size) log = dl.read() print_log = " - " + "\n - ".join(log.splitlines()) for unexpected_msg in unexpected_msgs: if re.search(re.escape(unexpected_msg), log, flags=re.MULTILINE): self._raise_assertion_error( 'Unexpected message "{}" partially matches log:\n\n{}\n\n'.format( unexpected_msg, print_log)) for expected_msg in expected_msgs: if re.search(re.escape(expected_msg), log, flags=re.MULTILINE) is None: found = False if found: return if time.time() >= time_end: break time.sleep(0.05) self._raise_assertion_error( 'Expected messages "{}" does not partially match log:\n\n{}\n\n'.format( str(expected_msgs), print_log)) @contextlib.contextmanager def profile_with_perf(self, profile_name): """ Context manager that allows easy profiling of node activity using `perf`. See `test/functional/README.md` for details on perf usage. Args: profile_name (str): This string will be appended to the profile data filename generated by perf. """ subp = self._start_perf(profile_name) yield if subp: self._stop_perf(profile_name) def _start_perf(self, profile_name=None): """Start a perf process to profile this node. 
Returns the subprocess running perf.""" subp = None def test_success(cmd): return subprocess.call( # shell=True required for pipe use below cmd, shell=True, stderr=subprocess.DEVNULL, stdout=subprocess.DEVNULL) == 0 if not sys.platform.startswith('linux'): self.log.warning( "Can't profile with perf; only availabe on Linux platforms") return None if not test_success('which perf'): self.log.warning( "Can't profile with perf; must install perf-tools") return None if not test_success( 'readelf -S {} | grep .debug_str'.format(shlex.quote(self.binary))): self.log.warning( "perf output won't be very useful without debug symbols compiled into bitcoind") output_path = tempfile.NamedTemporaryFile( dir=self.datadir, prefix="{}.perf.data.".format(profile_name or 'test'), delete=False, ).name cmd = [ 'perf', 'record', '-g', # Record the callgraph. # Compatibility for gcc's --fomit-frame-pointer. '--call-graph', 'dwarf', '-F', '101', # Sampling frequency in Hz. '-p', str(self.process.pid), '-o', output_path, ] subp = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.perf_subprocesses[profile_name] = subp return subp def _stop_perf(self, profile_name): """Stop (and pop) a perf subprocess.""" subp = self.perf_subprocesses.pop(profile_name) output_path = subp.args[subp.args.index('-o') + 1] subp.terminate() subp.wait(timeout=10) stderr = subp.stderr.read().decode() if 'Consider tweaking /proc/sys/kernel/perf_event_paranoid' in stderr: self.log.warning( "perf couldn't collect data! Try " "'sudo sysctl -w kernel.perf_event_paranoid=-1'") else: report_cmd = "perf report -i {}".format(output_path) self.log.info("See perf output by running '{}'".format(report_cmd)) def assert_start_raises_init_error( self, extra_args=None, expected_msg=None, match=ErrorMatch.FULL_TEXT, *args, **kwargs): """Attempt to start the node and expect it to raise an error. extra_args: extra arguments to pass through to bitcoind expected_msg: regex that stderr should match when bitcoind fails Will throw if bitcoind starts without an error. 
Will throw if an expected_msg is provided and it does not match bitcoind's stdout.""" with tempfile.NamedTemporaryFile(dir=self.stderr_dir, delete=False) as log_stderr, \ tempfile.NamedTemporaryFile(dir=self.stdout_dir, delete=False) as log_stdout: try: self.start(extra_args, stdout=log_stdout, stderr=log_stderr, *args, **kwargs) self.wait_for_rpc_connection() self.stop_node() self.wait_until_stopped() except FailedToStartError as e: self.log.debug('bitcoind failed to start: {}'.format(e)) self.running = False self.process = None # Check stderr for expected message if expected_msg is not None: log_stderr.seek(0) stderr = log_stderr.read().decode('utf-8').strip() if match == ErrorMatch.PARTIAL_REGEX: if re.search(expected_msg, stderr, flags=re.MULTILINE) is None: self._raise_assertion_error( 'Expected message "{}" does not partially match stderr:\n"{}"'.format(expected_msg, stderr)) elif match == ErrorMatch.FULL_REGEX: if re.fullmatch(expected_msg, stderr) is None: self._raise_assertion_error( 'Expected message "{}" does not fully match stderr:\n"{}"'.format(expected_msg, stderr)) elif match == ErrorMatch.FULL_TEXT: if expected_msg != stderr: self._raise_assertion_error( 'Expected message "{}" does not fully match stderr:\n"{}"'.format(expected_msg, stderr)) else: if expected_msg is None: assert_msg = "bitcoind should have exited with an error" else: assert_msg = "bitcoind should have exited with expected error " + expected_msg self._raise_assertion_error(assert_msg) def relay_fee(self, cached=True): if not self.relay_fee_cache or not cached: self.relay_fee_cache = self.getnetworkinfo()["relayfee"] return self.relay_fee_cache def calculate_fee(self, tx): """ Estimate the necessary fees (in sats) for an unsigned CTransaction assuming: - the current relayfee on node - all inputs are compressed-key p2pkh, and will be signed ecdsa or schnorr - all inputs currently unsigned (empty scriptSig) """ billable_size_estimate = tx.billable_size() # Add some padding for signatures / public keys # 107 = length of PUSH(longest_sig = 72 bytes), PUSH(pubkey = 33 bytes) billable_size_estimate += len(tx.vin) * 107 # relay_fee gives a value in BCH per kB. return int(self.relay_fee() / 1000 * billable_size_estimate * COIN) def calculate_fee_from_txid(self, txid): ctx = FromHex(CTransaction(), self.getrawtransaction(txid)) return self.calculate_fee(ctx) def add_p2p_connection(self, p2p_conn, *, wait_for_verack=True, **kwargs): """Add a p2p connection to the node. This method adds the p2p connection to the self.p2ps list and also returns the connection to the caller.""" if 'dstport' not in kwargs: kwargs['dstport'] = p2p_port(self.index) if 'dstaddr' not in kwargs: kwargs['dstaddr'] = '127.0.0.1' - p2p_conn.peer_connect(**kwargs, net=self.chain, factor=self.factor)() + p2p_conn.peer_connect( + **kwargs, + net=self.chain, + timeout_factor=self.timeout_factor)() self.p2ps.append(p2p_conn) if wait_for_verack: # Wait for the node to send us the version and verack p2p_conn.wait_for_verack() # At this point we have sent our version message and received the version and verack, however the full node # has not yet received the verack from us (in reply to their version). So, the connection is not yet fully # established (fSuccessfullyConnected). # # This shouldn't lead to any issues when sending messages, since the verack will be in-flight before the # message we send. However, it might lead to races where we are expecting to receive a message. E.g. 
a # transaction that will be added to the mempool as soon as we return here. # # So syncing here is redundant when we only want to send a message, but the cost is low (a few milliseconds) - # in comparision to the upside of making tests less fragile and + # in comparison to the upside of making tests less fragile and # unexpected intermittent errors less likely. p2p_conn.sync_with_ping() return p2p_conn @property def p2p(self): """Return the first p2p connection Convenience property - most tests only use a single p2p connection to each node, so this saves having to write node.p2ps[0] many times.""" assert self.p2ps, self._node_msg("No p2p connection") return self.p2ps[0] def disconnect_p2ps(self): """Close all p2p connections to the node.""" for p in self.p2ps: p.peer_disconnect() del self.p2ps[:] class TestNodeCLIAttr: def __init__(self, cli, command): self.cli = cli self.command = command def __call__(self, *args, **kwargs): return self.cli.send_cli(self.command, *args, **kwargs) def get_request(self, *args, **kwargs): return lambda: self(*args, **kwargs) def arg_to_cli(arg): if isinstance(arg, bool): return str(arg).lower() elif isinstance(arg, dict) or isinstance(arg, list): return json.dumps(arg, default=EncodeDecimal) else: return str(arg) class TestNodeCLI(): """Interface to bitcoin-cli for an individual node""" def __init__(self, binary, datadir, emulator=None): self.options = [] self.binary = binary self.datadir = datadir self.input = None self.log = logging.getLogger('TestFramework.bitcoincli') self.emulator = emulator def __call__(self, *options, input=None): # TestNodeCLI is callable with bitcoin-cli command-line options cli = TestNodeCLI(self.binary, self.datadir, self.emulator) cli.options = [str(o) for o in options] cli.input = input return cli def __getattr__(self, command): return TestNodeCLIAttr(self, command) def batch(self, requests): results = [] for request in requests: try: results.append(dict(result=request())) except JSONRPCException as e: results.append(dict(error=e)) return results def send_cli(self, command=None, *args, **kwargs): """Run bitcoin-cli command. 
Deserializes returned string as python object.""" pos_args = [arg_to_cli(arg) for arg in args] named_args = [str(key) + "=" + arg_to_cli(value) for (key, value) in kwargs.items()] assert not ( pos_args and named_args), "Cannot use positional arguments and named arguments in the same bitcoin-cli call" p_args = [self.binary, "-datadir=" + self.datadir] + self.options if named_args: p_args += ["-named"] if command is not None: p_args += [command] p_args += pos_args + named_args self.log.debug("Running bitcoin-cli {}".format(p_args[2:])) if self.emulator is not None: p_args = [self.emulator] + p_args process = subprocess.Popen(p_args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) cli_stdout, cli_stderr = process.communicate(input=self.input) returncode = process.poll() if returncode: match = re.match( r'error code: ([-0-9]+)\nerror message:\n(.*)', cli_stderr) if match: code, message = match.groups() raise JSONRPCException(dict(code=int(code), message=message)) # Ignore cli_stdout, raise with cli_stderr raise subprocess.CalledProcessError( returncode, self.binary, output=cli_stderr) try: return json.loads(cli_stdout, parse_float=decimal.Decimal) except (json.JSONDecodeError, decimal.InvalidOperation): return cli_stdout.rstrip("\n") class RPCOverloadWrapper(): def __init__(self, rpc, cli=False): self.rpc = rpc self.is_cli = cli def __getattr__(self, name): return getattr(self.rpc, name) def importprivkey(self, privkey, label=None, rescan=None): wallet_info = self.getwalletinfo() if self.is_cli: if label is None: label = 'null' if rescan is None: rescan = 'null' if 'descriptors' not in wallet_info or ( 'descriptors' in wallet_info and not wallet_info['descriptors']): return self.__getattr__('importprivkey')(privkey, label, rescan) desc = descsum_create('combo(' + privkey + ')') req = [{ 'desc': desc, 'timestamp': 0 if rescan else 'now', 'label': label if label else '' }] import_res = self.importdescriptors(req) if not import_res[0]['success']: raise JSONRPCException(import_res[0]['error']) def addmultisigaddress(self, nrequired, keys, label=None): wallet_info = self.getwalletinfo() if self.is_cli: if label is None: label = 'null' if 'descriptors' not in wallet_info or ( 'descriptors' in wallet_info and not wallet_info['descriptors']): return self.__getattr__('addmultisigaddress')( nrequired, keys, label) cms = self.createmultisig(nrequired, keys) req = [{ 'desc': cms['descriptor'], 'timestamp': 0, 'label': label if label else '' }] import_res = self.importdescriptors(req) if not import_res[0]['success']: raise JSONRPCException(import_res[0]['error']) return cms def importpubkey(self, pubkey, label=None, rescan=None): wallet_info = self.getwalletinfo() if self.is_cli: if label is None: label = 'null' if rescan is None: rescan = 'null' if 'descriptors' not in wallet_info or ( 'descriptors' in wallet_info and not wallet_info['descriptors']): return self.__getattr__('importpubkey')(pubkey, label, rescan) desc = descsum_create('combo(' + pubkey + ')') req = [{ 'desc': desc, 'timestamp': 0 if rescan else 'now', 'label': label if label else '' }] import_res = self.importdescriptors(req) if not import_res[0]['success']: raise JSONRPCException(import_res[0]['error']) def importaddress(self, address, label=None, rescan=None, p2sh=None): wallet_info = self.getwalletinfo() if self.is_cli: if label is None: label = 'null' if rescan is None: rescan = 'null' if p2sh is None: p2sh = 'null' if 'descriptors' not in wallet_info or ( 'descriptors' in wallet_info and 
not wallet_info['descriptors']): return self.__getattr__('importaddress')( address, label, rescan, p2sh) is_hex = False try: int(address, 16) is_hex = True desc = descsum_create('raw(' + address + ')') except BaseException: desc = descsum_create('addr(' + address + ')') reqs = [{ 'desc': desc, 'timestamp': 0 if rescan else 'now', 'label': label if label else '' }] if is_hex and p2sh: reqs.append({ 'desc': descsum_create('p2sh(raw(' + address + '))'), 'timestamp': 0 if rescan else 'now', 'label': label if label else '' }) import_res = self.importdescriptors(reqs) for res in import_res: if not res['success']: raise JSONRPCException(res['error']) diff --git a/test/functional/test_framework/util.py b/test/functional/test_framework/util.py index e366e7662..bf32a751e 100644 --- a/test/functional/test_framework/util.py +++ b/test/functional/test_framework/util.py @@ -1,634 +1,634 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2019 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Helpful routines for regression testing.""" from base64 import b64encode from binascii import unhexlify from decimal import Decimal, ROUND_DOWN from io import BytesIO from subprocess import CalledProcessError import inspect import json import logging import os import random import re import time from . import coverage from .authproxy import AuthServiceProxy, JSONRPCException logger = logging.getLogger("TestFramework.utils") # Assert functions ################## def assert_approx(v, vexp, vspan=0.00001): """Assert that `v` is within `vspan` of `vexp`""" if v < vexp - vspan: raise AssertionError("{} < [{}..{}]".format( str(v), str(vexp - vspan), str(vexp + vspan))) if v > vexp + vspan: raise AssertionError("{} > [{}..{}]".format( str(v), str(vexp - vspan), str(vexp + vspan))) def assert_fee_amount(fee, tx_size, fee_per_kB, wiggleroom=2): """ Assert the fee was in range wiggleroom defines an amount that the test expects the wallet to be off by when estimating fees. This can be due to the dummy signature that is added during fee calculation, or due to the wallet funding transactions using the ceiling of the calculated fee. """ target_fee = round(tx_size * fee_per_kB / 1000, 8) if fee < (tx_size - wiggleroom) * fee_per_kB / 1000: raise AssertionError( "Fee of {} BCH too low! (Should be {} BCH)".format(str(fee), str(target_fee))) if fee > (tx_size + wiggleroom) * fee_per_kB / 1000: raise AssertionError( "Fee of {} BCH too high! 
(Should be {} BCH)".format(str(fee), str(target_fee))) def assert_equal(thing1, thing2, *args): if thing1 != thing2 or any(thing1 != arg for arg in args): raise AssertionError("not({})".format(" == ".join(str(arg) for arg in (thing1, thing2) + args))) def assert_greater_than(thing1, thing2): if thing1 <= thing2: raise AssertionError("{} <= {}".format(str(thing1), str(thing2))) def assert_greater_than_or_equal(thing1, thing2): if thing1 < thing2: raise AssertionError("{} < {}".format(str(thing1), str(thing2))) def assert_raises(exc, fun, *args, **kwds): assert_raises_message(exc, None, fun, *args, **kwds) def assert_raises_message(exc, message, fun, *args, **kwds): try: fun(*args, **kwds) except JSONRPCException: raise AssertionError( "Use assert_raises_rpc_error() to test RPC failures") except exc as e: if message is not None and message not in e.error['message']: raise AssertionError( "Expected substring not found in error message:\nsubstring: '{}'\nerror message: '{}'.".format( message, e.error['message'])) except Exception as e: raise AssertionError( "Unexpected exception raised: " + type(e).__name__) else: raise AssertionError("No exception raised") def assert_raises_process_error(returncode, output, fun, *args, **kwds): """Execute a process and asserts the process return code and output. Calls function `fun` with arguments `args` and `kwds`. Catches a CalledProcessError and verifies that the return code and output are as expected. Throws AssertionError if no CalledProcessError was raised or if the return code and output are not as expected. Args: returncode (int): the process return code. output (string): [a substring of] the process output. fun (function): the function to call. This should execute a process. args*: positional arguments for the function. kwds**: named arguments for the function. """ try: fun(*args, **kwds) except CalledProcessError as e: if returncode != e.returncode: raise AssertionError( "Unexpected returncode {}".format(e.returncode)) if output not in e.output: raise AssertionError("Expected substring not found:" + e.output) else: raise AssertionError("No exception raised") def assert_raises_rpc_error(code, message, fun, *args, **kwds): """Run an RPC and verify that a specific JSONRPC exception code and message is raised. Calls function `fun` with arguments `args` and `kwds`. Catches a JSONRPCException and verifies that the error code and message are as expected. Throws AssertionError if no JSONRPCException was raised or if the error code/message are not as expected. Args: code (int), optional: the error code returned by the RPC call (defined in src/rpc/protocol.h). Set to None if checking the error code is not required. message (string), optional: [a substring of] the error string returned by the RPC call. Set to None if checking the error string is not required. fun (function): the function to call. This should be the name of an RPC. args*: positional arguments for the function. kwds**: named arguments for the function. """ assert try_rpc(code, message, fun, *args, **kwds), "No exception raised" def try_rpc(code, message, fun, *args, **kwds): """Tries to run an rpc command. Test against error code and message if the rpc fails. Returns whether a JSONRPCException was raised.""" try: fun(*args, **kwds) except JSONRPCException as e: # JSONRPCException was thrown as expected. Check the code and message # values are correct. 
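        # A (hypothetical) caller such as
        #   assert_raises_rpc_error(-8, "some error substring", node.some_rpc, arg)
        # lands here, where -8 is RPC_INVALID_PARAMETER from src/rpc/protocol.h.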
if (code is not None) and (code != e.error["code"]): raise AssertionError( "Unexpected JSONRPC error code {}".format(e.error["code"])) if (message is not None) and (message not in e.error['message']): raise AssertionError( "Expected substring not found in error message:\nsubstring: '{}'\nerror message: '{}'.".format( message, e.error['message'])) return True except Exception as e: raise AssertionError( "Unexpected exception raised: " + type(e).__name__) else: return False def assert_is_hex_string(string): try: int(string, 16) except Exception as e: raise AssertionError( "Couldn't interpret {!r} as hexadecimal; raised: {}".format(string, e)) def assert_is_hash_string(string, length=64): if not isinstance(string, str): raise AssertionError( "Expected a string, got type {!r}".format(type(string))) elif length and len(string) != length: raise AssertionError( "String of length {} expected; got {}".format(length, len(string))) elif not re.match('[abcdef0-9]+$', string): raise AssertionError( "String {!r} contains invalid characters for a hash.".format(string)) def assert_array_result(object_array, to_match, expected, should_not_find=False): """ Pass in array of JSON objects, a dictionary with key/value pairs to match against, and another dictionary with expected key/value pairs. If the should_not_find flag is true, to_match should not be found in object_array """ if should_not_find: assert_equal(expected, {}) num_matched = 0 for item in object_array: all_match = True for key, value in to_match.items(): if item[key] != value: all_match = False if not all_match: continue elif should_not_find: num_matched = num_matched + 1 for key, value in expected.items(): if item[key] != value: raise AssertionError("{} : expected {}={}".format( str(item), str(key), str(value))) num_matched = num_matched + 1 if num_matched == 0 and not should_not_find: raise AssertionError("No objects matched {}".format(str(to_match))) if num_matched > 0 and should_not_find: raise AssertionError("Objects were found {}".format(str(to_match))) # Utility functions ################### def check_json_precision(): """Make sure json library being used does not lose precision converting BCH values""" n = Decimal("20000000.00000003") satoshis = int(json.loads(json.dumps(float(n))) * 1.0e8) if satoshis != 2000000000000003: raise RuntimeError("JSON encode/decode loses precision") def EncodeDecimal(o): if isinstance(o, Decimal): return str(o) raise TypeError(repr(o) + " is not JSON serializable") def count_bytes(hex_string): return len(bytearray.fromhex(hex_string)) def hex_str_to_bytes(hex_str): return unhexlify(hex_str.encode('ascii')) def str_to_b64str(string): return b64encode(string.encode('utf-8')).decode('ascii') def satoshi_round(amount): return Decimal(amount).quantize(Decimal('0.00000001'), rounding=ROUND_DOWN) def wait_until(predicate, *, attempts=float('inf'), - timeout=float('inf'), lock=None, factor=1.0): + timeout=float('inf'), lock=None, timeout_factor=1.0): if attempts == float('inf') and timeout == float('inf'): timeout = 60 - timeout = timeout * factor + timeout = timeout * timeout_factor attempt = 0 time_end = time.time() + timeout while attempt < attempts and time.time() < time_end: if lock: with lock: if predicate(): return else: if predicate(): return attempt += 1 time.sleep(0.05) # Print the cause of the timeout predicate_source = "''''\n" + inspect.getsource(predicate) + "'''" logger.error("wait_until() failed. 
Predicate: {}".format(predicate_source)) if attempt >= attempts: raise AssertionError("Predicate {} not true after {} attempts".format( predicate_source, attempts)) elif time.time() >= time_end: raise AssertionError( "Predicate {} not true after {} seconds".format(predicate_source, timeout)) raise RuntimeError('Unreachable') # RPC/P2P connection constants and functions ############################################ # The maximum number of nodes a single test can spawn MAX_NODES = 12 # Don't assign rpc or p2p ports lower than this (for example: 18333 is the # default testnet port) PORT_MIN = int(os.getenv('TEST_RUNNER_PORT_MIN', default=20000)) # The number of ports to "reserve" for p2p and rpc, each PORT_RANGE = 5000 class PortSeed: # Must be initialized with a unique integer for each process n = None def get_rpc_proxy(url, node_number, *, timeout=None, coveragedir=None): """ Args: url (str): URL of the RPC server to call node_number (int): the node number (or id) that this calls to Kwargs: timeout (int): HTTP timeout in seconds coveragedir (str): Directory Returns: AuthServiceProxy. convenience object for making RPC calls. """ proxy_kwargs = {} if timeout is not None: proxy_kwargs['timeout'] = int(timeout) proxy = AuthServiceProxy(url, **proxy_kwargs) proxy.url = url # store URL on proxy for info coverage_logfile = coverage.get_filename( coveragedir, node_number) if coveragedir else None return coverage.AuthServiceProxyWrapper(proxy, coverage_logfile) def p2p_port(n): assert n <= MAX_NODES return PORT_MIN + n + \ (MAX_NODES * PortSeed.n) % (PORT_RANGE - 1 - MAX_NODES) def rpc_port(n): return PORT_MIN + PORT_RANGE + n + \ (MAX_NODES * PortSeed.n) % (PORT_RANGE - 1 - MAX_NODES) def rpc_url(datadir, chain, host, port): rpc_u, rpc_p = get_auth_cookie(datadir, chain) if host is None: host = '127.0.0.1' return "http://{}:{}@{}:{}".format(rpc_u, rpc_p, host, int(port)) # Node functions ################ def initialize_datadir(dirname, n, chain): datadir = get_datadir_path(dirname, n) if not os.path.isdir(datadir): os.makedirs(datadir) # Translate chain name to config name if chain == 'testnet3': chain_name_conf_arg = 'testnet' chain_name_conf_section = 'test' else: chain_name_conf_arg = chain chain_name_conf_section = chain with open(os.path.join(datadir, "bitcoin.conf"), 'w', encoding='utf8') as f: f.write("{}=1\n".format(chain_name_conf_arg)) f.write("[{}]\n".format(chain_name_conf_section)) f.write("port=" + str(p2p_port(n)) + "\n") f.write("rpcport=" + str(rpc_port(n)) + "\n") f.write("fallbackfee=0.0002\n") f.write("server=1\n") f.write("keypool=1\n") f.write("discover=0\n") f.write("dnsseed=0\n") f.write("listenonion=0\n") f.write("usecashaddr=1\n") f.write("shrinkdebugfile=0\n") os.makedirs(os.path.join(datadir, 'stderr'), exist_ok=True) os.makedirs(os.path.join(datadir, 'stdout'), exist_ok=True) return datadir def get_datadir_path(dirname, n): return os.path.join(dirname, "node" + str(n)) def append_config(datadir, options): with open(os.path.join(datadir, "bitcoin.conf"), 'a', encoding='utf8') as f: for option in options: f.write(option + "\n") def get_auth_cookie(datadir, chain): user = None password = None if os.path.isfile(os.path.join(datadir, "bitcoin.conf")): with open(os.path.join(datadir, "bitcoin.conf"), 'r', encoding='utf8') as f: for line in f: if line.startswith("rpcuser="): assert user is None # Ensure that there is only one rpcuser line user = line.split("=")[1].strip("\n") if line.startswith("rpcpassword="): assert password is None # Ensure that there is only one rpcpassword 
line password = line.split("=")[1].strip("\n") try: with open(os.path.join(datadir, chain, ".cookie"), 'r', encoding="ascii") as f: userpass = f.read() split_userpass = userpass.split(':') user = split_userpass[0] password = split_userpass[1] except OSError: pass if user is None or password is None: raise ValueError("No RPC credentials") return user, password # If a cookie file exists in the given datadir, delete it. def delete_cookie_file(datadir, chain): if os.path.isfile(os.path.join(datadir, chain, ".cookie")): logger.debug("Deleting leftover cookie file") os.remove(os.path.join(datadir, chain, ".cookie")) def set_node_times(nodes, t): for node in nodes: node.setmocktime(t) def disconnect_nodes(from_node, to_node): for peer_id in [peer['id'] for peer in from_node.getpeerinfo( ) if to_node.name in peer['subver']]: try: from_node.disconnectnode(nodeid=peer_id) except JSONRPCException as e: # If this node is disconnected between calculating the peer id # and issuing the disconnect, don't worry about it. # This avoids a race condition if we're mass-disconnecting peers. if e.error['code'] != -29: # RPC_CLIENT_NODE_NOT_CONNECTED raise # wait to disconnect wait_until(lambda: [peer['id'] for peer in from_node.getpeerinfo( ) if to_node.name in peer['subver']] == [], timeout=5) def connect_nodes(from_node, to_node): host = to_node.host if host is None: host = '127.0.0.1' ip_port = host + ':' + str(to_node.p2p_port) from_node.addnode(ip_port, "onetry") # poll until version handshake complete to avoid race conditions # with transaction relaying # See comments in net_processing: # * Must have a version message before anything else # * Must have a verack message before anything else wait_until( lambda: all( peer['version'] != 0 for peer in from_node.getpeerinfo())) wait_until( lambda: all( peer['bytesrecv_per_msg'].pop( 'verack', 0) == 24 for peer in from_node.getpeerinfo())) def sync_blocks(rpc_connections, *, wait=1, timeout=60): """ Wait until everybody has the same tip. sync_blocks needs to be called with an rpc_connections set that has least one node already synced to the latest, stable tip, otherwise there's a chance it might return before all nodes are stably synced. """ stop_time = time.time() + timeout while time.time() <= stop_time: best_hash = [x.getbestblockhash() for x in rpc_connections] if best_hash.count(best_hash[0]) == len(rpc_connections): return # Check that each peer has at least one connection assert (all([len(x.getpeerinfo()) for x in rpc_connections])) time.sleep(wait) raise AssertionError("Block sync timed out:{}".format( "".join("\n {!r}".format(b) for b in best_hash))) def sync_mempools(rpc_connections, *, wait=1, timeout=60, flush_scheduler=True): """ Wait until everybody has the same transactions in their memory pools """ stop_time = time.time() + timeout while time.time() <= stop_time: pool = [set(r.getrawmempool()) for r in rpc_connections] if pool.count(pool[0]) == len(rpc_connections): if flush_scheduler: for r in rpc_connections: r.syncwithvalidationinterfacequeue() return # Check that each peer has at least one connection assert (all([len(x.getpeerinfo()) for x in rpc_connections])) time.sleep(wait) raise AssertionError("Mempool sync timed out:{}".format( "".join("\n {!r}".format(m) for m in pool))) # Transaction/Block functions ############################# def find_output(node, txid, amount, *, blockhash=None): """ Return index to output of txid with value amount Raises exception if there is none. 
""" txdata = node.getrawtransaction(txid, 1, blockhash) for i in range(len(txdata["vout"])): if txdata["vout"][i]["value"] == amount: return i raise RuntimeError("find_output txid {} : {} not found".format( txid, str(amount))) def gather_inputs(from_node, amount_needed, confirmations_required=1): """ Return a random set of unspent txouts that are enough to pay amount_needed """ assert confirmations_required >= 0 utxo = from_node.listunspent(confirmations_required) random.shuffle(utxo) inputs = [] total_in = Decimal("0.00000000") while total_in < amount_needed and len(utxo) > 0: t = utxo.pop() total_in += t["amount"] inputs.append( {"txid": t["txid"], "vout": t["vout"], "address": t["address"]}) if total_in < amount_needed: raise RuntimeError("Insufficient funds: need {}, have {}".format( amount_needed, total_in)) return (total_in, inputs) def make_change(from_node, amount_in, amount_out, fee): """ Create change output(s), return them """ outputs = {} amount = amount_out + fee change = amount_in - amount if change > amount * 2: # Create an extra change output to break up big inputs change_address = from_node.getnewaddress() # Split change in two, being careful of rounding: outputs[change_address] = Decimal( change / 2).quantize(Decimal('0.00000001'), rounding=ROUND_DOWN) change = amount_in - amount - outputs[change_address] if change > 0: outputs[from_node.getnewaddress()] = change return outputs def random_transaction(nodes, amount, min_fee, fee_increment, fee_variants): """ Create a random transaction. Returns (txid, hex-encoded-transaction-data, fee) """ from_node = random.choice(nodes) to_node = random.choice(nodes) fee = min_fee + fee_increment * random.randint(0, fee_variants) (total_in, inputs) = gather_inputs(from_node, amount + fee) outputs = make_change(from_node, total_in, amount, fee) outputs[to_node.getnewaddress()] = float(amount) rawtx = from_node.createrawtransaction(inputs, outputs) signresult = from_node.signrawtransactionwithwallet(rawtx) txid = from_node.sendrawtransaction(signresult["hex"], 0) return (txid, signresult["hex"], fee) # Create large OP_RETURN txouts that can be appended to a transaction # to make it large (helper for constructing large transactions). def gen_return_txouts(): # Some pre-processing to create a bunch of OP_RETURN txouts to insert into transactions we create # So we have big transactions (and therefore can't fit very many into each block) # create one script_pubkey script_pubkey = "6a4d0200" # OP_RETURN OP_PUSH2 512 bytes for i in range(512): script_pubkey = script_pubkey + "01" # concatenate 128 txouts of above script_pubkey which we'll insert before # the txout for change txouts = [] from .messages import CTxOut txout = CTxOut() txout.nValue = 0 txout.scriptPubKey = hex_str_to_bytes(script_pubkey) for k in range(128): txouts.append(txout) return txouts # Create a spend of each passed-in utxo, splicing in "txouts" to each raw # transaction to make it large. See gen_return_txouts() above. 
def create_lots_of_big_transactions(node, txouts, utxos, num, fee): addr = node.getnewaddress() txids = [] from .messages import CTransaction for _ in range(num): t = utxos.pop() inputs = [{"txid": t["txid"], "vout": t["vout"]}] outputs = {} change = t['amount'] - fee outputs[addr] = satoshi_round(change) rawtx = node.createrawtransaction(inputs, outputs) tx = CTransaction() tx.deserialize(BytesIO(hex_str_to_bytes(rawtx))) for txout in txouts: tx.vout.append(txout) newtx = tx.serialize().hex() signresult = node.signrawtransactionwithwallet( newtx, None, "NONE|FORKID") txid = node.sendrawtransaction(signresult["hex"], 0) txids.append(txid) return txids def find_vout_for_address(node, txid, addr): """ Locate the vout index of the given transaction sending to the given address. Raises runtime error exception if not found. """ tx = node.getrawtransaction(txid, True) for i in range(len(tx["vout"])): if any([addr == a for a in tx["vout"][i]["scriptPubKey"]["addresses"]]): return i raise RuntimeError( "Vout not found for address: txid={}, addr={}".format(txid, addr))
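To illustrate how the helpers above are typically combined, here is a minimal, hypothetical test sketch (not part of the suite). It assumes a wallet-enabled build and uses only helpers shown in this patch plus the standard framework hooks (`set_test_params`, `skip_test_if_missing_module`, `run_test`):

```py
#!/usr/bin/env python3
"""Hypothetical example: locate the vout paying a wallet address."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal, find_vout_for_address


class FindVoutExample(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 1

    def skip_test_if_missing_module(self):
        self.skip_if_no_wallet()

    def run_test(self):
        node = self.nodes[0]
        # Mature one coinbase so the wallet has spendable funds (regtest
        # coinbase maturity is 100 blocks).
        node.generate(101)
        addr = node.getnewaddress()
        txid = node.sendtoaddress(addr, 1)
        vout = find_vout_for_address(node, txid, addr)
        assert_equal(
            node.getrawtransaction(txid, True)["vout"][vout]["value"], 1)


if __name__ == '__main__':
    FindVoutExample().main()
```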