diff --git a/doc/functional-tests.md b/doc/functional-tests.md
index abd32251a..d58f18eea 100644
--- a/doc/functional-tests.md
+++ b/doc/functional-tests.md
@@ -1,316 +1,316 @@
# Functional tests

The [/test/](/test/) directory contains integration tests that test bitcoind and its utilities in their entirety. It does not contain unit tests, which can be found in [/src/test](/src/test), [/src/wallet/test](/src/wallet/test), etc.

There are currently two sets of tests in the [/test/](/test/) directory:

- [functional](/test/functional) which test the functionality of bitcoind and bitcoin-qt by interacting with them through the RPC and P2P interfaces.
- [util](/test/util) which tests the bitcoin utilities, currently only bitcoin-tx.

The util tests are run as part of the `make check` target. The functional tests are run by the Teamcity continuous build process whenever a diff is created or updated on Phabricator. Both sets of tests can also be run locally.

# Running functional tests locally

Build for your system first. Be sure to enable wallet, utils and daemon when you configure. Tests will not run otherwise.

### Functional tests

#### Dependencies

The ZMQ functional test requires a python ZMQ library. To install it:

- on Unix, run `sudo apt-get install python3-zmq`
- on macOS, run `pip3 install pyzmq`

#### Running the tests

Individual tests can be run by directly calling the test script, eg:

```
test/functional/example_test.py
```

or can be run through the test_runner harness, eg:

```
test/functional/test_runner.py example_test
```

You can run any combination (incl. duplicates) of tests by calling:

```
test/functional/test_runner.py ...
```

Run the regression test suite with:

```
test/functional/test_runner.py
```

Run all possible tests with:

```
test/functional/test_runner.py --extended
```

By default, up to 4 tests will be run in parallel by test_runner. To specify how many jobs to run, append `--jobs=n`.

The individual tests and the test_runner harness have many command-line options. Run `test_runner.py -h` to see them all.

#### Troubleshooting and debugging test failures

##### Resource contention

The P2P and RPC ports used by the bitcoind nodes-under-test are chosen to make conflicts with other processes unlikely. However, if there is another bitcoind process running on the system (perhaps from a previous test which hasn't successfully killed all its bitcoind nodes), then there may be a port conflict which will cause the test to fail. It is recommended that you run the tests on a system where no other bitcoind processes are running.

On Linux, the test_framework will warn if there is another bitcoind process running when the tests are started.

If there are zombie bitcoind processes after test failure, you can kill them by running the following commands. **Note that these commands will kill all bitcoind processes running on the system, so should not be used if any non-test bitcoind processes are being run.**

```bash
killall bitcoind
```

or

```bash
pkill -9 bitcoind
```

##### Data directory cache

A pre-mined blockchain with 200 blocks is generated the first time a functional test is run and is stored in test/cache. This speeds up test startup times since new blockchains don't need to be generated for each test. However, the cache may get into a bad state, in which case tests will fail.
If this happens, remove the cache directory (and make sure bitcoind processes are stopped as above):

```bash
rm -rf cache
killall bitcoind
```

##### Test logging

The tests contain logging at different levels (debug, info, warning, etc). By default:

- when run through the test_runner harness, *all* logs are written to `test_framework.log` and no logs are output to the console.
- when run directly, *all* logs are written to `test_framework.log` and INFO level and above are output to the console.
- when run on Travis, no logs are output to the console. However, if a test fails, the `test_framework.log` and bitcoind `debug.log`s will all be dumped to the console to help troubleshooting.

To change the level of logs output to the console, use the `-l` command line argument.

`test_framework.log` and bitcoind `debug.log`s can be combined into a single aggregate log by running the `combine_logs.py` script. The output can be plain text, colorized text or html. For example:

```
combine_logs.py -c | less -r
```

will pipe the colorized logs from the test into less.

Use `--tracerpc` to trace out all the RPC calls and responses to the console. For some tests (eg any that use `submitblock` to submit a full block over RPC), this can result in a lot of screen output.

By default, the test data directory will be deleted after a successful run. Use `--nocleanup` to leave the test data directory intact. The test data directory is never deleted after a failed test.

##### Attaching a debugger

A python debugger can be attached to tests at any point. Just add the line:

```py
import pdb; pdb.set_trace()
```

anywhere in the test. You will then be able to inspect variables, as well as call methods that interact with the bitcoind nodes-under-test.

### Util tests

Util tests can be run locally by running `test/util/bitcoin-util-test.py`. Use the `-v` option for verbose output.

# Writing functional tests

#### Example test

The [example_test.py](example_test.py) is a heavily commented example of a test case that uses both the RPC and P2P interfaces. If you are writing your first test, copy that file and modify it to fit your needs.

#### Coverage

Running `test_runner.py` with the `--coverage` argument tracks which RPCs are called by the tests and prints a report of uncovered RPCs in the summary. This can be used (along with the `--extended` argument) to find out which RPCs we don't have test cases for.

#### Style guidelines

- Where possible, try to adhere to [PEP-8 guidelines](https://www.python.org/dev/peps/pep-0008/)
- Use a python linter like flake8 before submitting PRs to catch common style nits (eg trailing whitespace, unused imports, etc)
- Avoid wildcard imports where possible
- Use a module-level docstring to describe what the test is testing, and how it is testing it.
- When subclassing the BitcoinTestFramework, place overrides for the `set_test_params()`, `add_options()` and `setup_xxxx()` methods at the top of the subclass, then locally-defined helper methods, then the `run_test()` method.

#### General test-writing advice

- Set `self.num_nodes` to the minimum number of nodes necessary for the test. Having additional unrequired nodes adds to the execution time of the test as well as memory/CPU/disk requirements (which is important when running tests in parallel or on Travis).
- Avoid stop-starting the nodes multiple times during the test if possible. A stop-start takes several seconds, so doing it several times blows up the runtime of the test.
- Set the `self.setup_clean_chain` variable in `set_test_params()` to control whether or not to use the cached data directories. The cached data directories contain a 200-block pre-mined blockchain and wallets for four nodes. Each node has 25 mature blocks (25x50=1250 BTC) in its wallet.
- When calling RPCs with lots of arguments, consider using named keyword arguments instead of positional arguments to make the intent of the call clear to readers, as in the sketch below.
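As a minimal sketch of the last point (the RPC shown and its optional arguments are only illustrative; any call with several optional parameters benefits the same way):

```py
# Positional arguments: hard to tell what the trailing values mean.
node.sendtoaddress(address, 0.1, "", "", True)

# Named keyword arguments: the intent is clear at the call site.
node.sendtoaddress(address=address, amount=0.1, subtractfeefromamount=True)
```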
#### RPC and P2P definitions

Test writers may find it helpful to refer to the definitions for the RPC and P2P messages. These can be found in the following source files:

- `/src/rpc/*` for RPCs
- `/src/wallet/rpc*` for wallet RPCs
- `ProcessMessage()` in `/src/net_processing.cpp` for parsing P2P messages

#### Using the P2P interface

- `mininode.py` contains all the definitions for objects that pass over the network (`CBlock`, `CTransaction`, etc, along with the network-level wrappers for them, `msg_block`, `msg_tx`, etc).
- P2P tests have two threads. One thread handles all network communication with the bitcoind(s) being tested (using python's asyncore package); the other implements the test logic.
- `P2PConnection` is the class used to connect to a bitcoind. `P2PInterface` contains the higher level logic for processing P2P payloads and connecting to the Bitcoin Core node application logic. For custom behaviour, subclass the P2PInterface object and override the callback methods, as in the sketch below.
-- Call `NetworkThread.start()` after all `P2PInterface` objects are created to
+- Call `network_thread_start()` after all `P2PInterface` objects are created to
start the networking thread. (Continue with the test logic in your existing thread.)
- Can be used to write tests where specific P2P protocol behavior is tested. Example tests are `p2p-accept-block.py`, `p2p-compactblocks.py`.
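A minimal sketch of this pattern, using only names that appear elsewhere in this diff (`network_thread_start`, `p2p_port`, `peer_connect`, `wait_for_verack`); which callbacks you override depends on the messages your test cares about:

```py
from test_framework.mininode import P2PInterface, network_thread_start
from test_framework.util import p2p_port

class MyPeer(P2PInterface):
    def __init__(self):
        super().__init__()
        self.invs = []

    def on_inv(self, message):
        # Custom behaviour: record every inv announcement we receive.
        self.invs.append(message)

peer = MyPeer()
peer.peer_connect('127.0.0.1', p2p_port(0))
network_thread_start()  # start the networking thread
peer.wait_for_verack()  # handshake complete; test logic continues here
```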
#### Comptool

- Comptool is a testing framework for writing tests that compare the block/tx acceptance behavior of a bitcoind against 1 or more other bitcoind instances. It should not be used to write static tests with known outcomes, since that type of test is easier to write and maintain using the standard BitcoinTestFramework.
- Set the `num_nodes` variable (defined in `ComparisonTestFramework`) to start up 1 or more nodes. If using 1 node, then `--testbinary` can be used as a command line option to change the bitcoind binary used by the test. If using 2 or more nodes, then `--refbinary` can be optionally used to change the bitcoind that will be used on nodes 2 and up.
- Implement a (generator) function called `get_tests()` which yields `TestInstance`s. Each `TestInstance` consists of:
  - a list of `[object, outcome, hash]` entries
    * `object` is a `CBlock`, `CTransaction`, or `CBlockHeader`. `CBlock`s and `CTransaction`s are tested for acceptance. `CBlockHeader`s can be used so that the test runner can deliver complete headers-chains when requested from the bitcoind, to allow writing tests where blocks can be delivered out of order but still processed by headers-first bitcoinds.
    * `outcome` is `True`, `False`, or `None`. If `True` or `False`, the tip is compared with the expected tip -- either the block passed in, or the hash specified as the optional 3rd entry. If `None` is specified, then the test will compare all the bitcoinds being tested to see if they all agree on what the best tip is.
    * `hash` is the block hash of the tip to compare against. Optional to specify; if left out then the hash of the block passed in will be used as the expected tip. This allows for specifying an expected tip while testing the handling of either invalid blocks or blocks delivered out of order, which complete a longer chain.
  - `sync_every_block`: `True/False`. If `False`, then all blocks are inv'ed together, and the test runner waits until the node receives the last one, and tests only the last block for tip acceptance using the outcome and specified tip. If `True`, then each block is tested in sequence and synced (this is slower when processing many blocks).
  - `sync_every_transaction`: `True/False`. Analogous to `sync_every_block`, except if the outcome on the last tx is `None`, then the contents of the entire mempool are compared across all bitcoind connections. If `True` or `False`, then only the last tx's acceptance is tested against the given outcome.
- For examples of tests written in this framework, see `invalidblockrequest.py` and `p2p-fullblocktest.py`.

### test-framework modules

#### [test_framework/authproxy.py](test_framework/authproxy.py)
Taken from the [python-bitcoinrpc repository](https://github.com/jgarzik/python-bitcoinrpc).

#### [test_framework/test_framework.py](test_framework/test_framework.py)
Base class for functional tests.

#### [test_framework/util.py](test_framework/util.py)
Generally useful functions.

#### [test_framework/mininode.py](test_framework/mininode.py)
Basic code to support P2P connectivity to a bitcoind.

#### [test_framework/comptool.py](test_framework/comptool.py)
Framework for comparison-tool style P2P tests.

#### [test_framework/script.py](test_framework/script.py)
Utilities for manipulating transaction scripts (originally from python-bitcoinlib).

#### [test_framework/blockstore.py](test_framework/blockstore.py)
Implements disk-backed block and tx storage.

#### [test_framework/key.py](test_framework/key.py)
Wrapper around OpenSSL EC_Key (originally from python-bitcoinlib).

#### [test_framework/bignum.py](test_framework/bignum.py)
Helpers for script.py.

#### [test_framework/blocktools.py](test_framework/blocktools.py)
Helper functions for creating blocks and transactions.

diff --git a/test/functional/abc-mempool-accept-txn.py b/test/functional/abc-mempool-accept-txn.py
index 47a70f079..10e0d0f63 100755
--- a/test/functional/abc-mempool-accept-txn.py
+++ b/test/functional/abc-mempool-accept-txn.py
@@ -1,266 +1,265 @@
#!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ This test checks acceptance of transactions by the mempool It is derived from the much more complex p2p-fullblocktest. """ from test_framework.test_framework import ComparisonTestFramework from test_framework.util import (assert_raises_rpc_error, assert_equal) from test_framework.comptool import TestManager, TestInstance from test_framework.blocktools import * import time from test_framework.key import CECKey from test_framework.script import * import struct from test_framework.cdefs import MAX_STANDARD_TX_SIGOPS # Error for too many sigops in one TX TXNS_TOO_MANY_SIGOPS_ERROR = b'bad-txns-too-many-sigops' RPC_TXNS_TOO_MANY_SIGOPS_ERROR = "64: " + \ TXNS_TOO_MANY_SIGOPS_ERROR.decode("utf-8") class PreviousSpendableOutput(): def __init__(self, tx=CTransaction(), n=-1): self.tx = tx self.n = n # the output we're spending class FullBlockTest(ComparisonTestFramework): # Can either run this test as 1 node with expected answers, or two and compare them.
# Change the "outcome" variable from each TestInstance object to only do # the comparison. def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True self.block_heights = {} self.coinbase_key = CECKey() self.coinbase_key.set_secretbytes(b"horsebattery") self.coinbase_pubkey = self.coinbase_key.get_pubkey() self.tip = None self.blocks = {} def setup_network(self): self.extra_args = [['-norelaypriority']] self.add_nodes(self.num_nodes, self.extra_args) self.start_nodes() def add_options(self, parser): super().add_options(parser) parser.add_argument( "--runbarelyexpensive", dest="runbarelyexpensive", default=True) def run_test(self): self.test = TestManager(self, self.options.tmpdir) self.test.add_all_connections(self.nodes) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() self.test.run() def add_transactions_to_block(self, block, tx_list): [tx.rehash() for tx in tx_list] block.vtx.extend(tx_list) block.vtx = [block.vtx[0]] + \ sorted(block.vtx[1:], key=lambda tx: tx.get_id()) # this is a little handier to use than the version in blocktools.py def create_tx(self, spend_tx, n, value, script=CScript([OP_TRUE])): tx = create_transaction(spend_tx, n, b"", value, script) return tx # sign a transaction, using the key we know about # this signs input 0 in tx, which is assumed to be spending output n in # spend_tx def sign_tx(self, tx, spend_tx, n): scriptPubKey = bytearray(spend_tx.vout[n].scriptPubKey) if (scriptPubKey[0] == OP_TRUE): # an anyone-can-spend tx.vin[0].scriptSig = CScript() return sighash = SignatureHashForkId( spend_tx.vout[n].scriptPubKey, tx, 0, SIGHASH_ALL | SIGHASH_FORKID, spend_tx.vout[n].nValue) tx.vin[0].scriptSig = CScript( [self.coinbase_key.sign(sighash) + bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID]))]) def create_and_sign_transaction(self, spend_tx, n, value, script=CScript([OP_TRUE])): tx = self.create_tx(spend_tx, n, value, script) self.sign_tx(tx, spend_tx, n) tx.rehash() return tx def next_block(self, number, spend=None, additional_coinbase_value=0, script=CScript([OP_TRUE])): if self.tip == None: base_block_hash = self.genesis_hash block_time = int(time.time()) + 1 else: base_block_hash = self.tip.sha256 block_time = self.tip.nTime + 1 # First create the coinbase height = self.block_heights[base_block_hash] + 1 coinbase = create_coinbase(height, self.coinbase_pubkey) coinbase.vout[0].nValue += additional_coinbase_value coinbase.rehash() if spend == None: block = create_block(base_block_hash, coinbase, block_time) else: # all but one satoshi to fees coinbase.vout[0].nValue += spend.tx.vout[ spend.n].nValue - 1 coinbase.rehash() block = create_block(base_block_hash, coinbase, block_time) # spend 1 satoshi tx = create_transaction(spend.tx, spend.n, b"", 1, script) self.sign_tx(tx, spend.tx, spend.n) self.add_transactions_to_block(block, [tx]) block.hashMerkleRoot = block.calc_merkle_root() # Do PoW, which is very inexpensive on regnet block.solve() self.tip = block self.block_heights[block.sha256] = height assert number not in self.blocks self.blocks[number] = block return block def get_tests(self): self.genesis_hash = int(self.nodes[0].getbestblockhash(), 16) self.block_heights[self.genesis_hash] = 0 spendable_outputs = [] # save the current tip so it can be spent by a later block def save_spendable_output(): spendable_outputs.append(self.tip) # get an output that we previously marked as spendable def get_spendable_output(): return PreviousSpendableOutput(spendable_outputs.pop(0).vtx[0], 0) # returns 
a test case that asserts that the current tip was accepted def accepted(): return TestInstance([[self.tip, True]]) # returns a test case that asserts that the current tip was rejected def rejected(reject=None): if reject is None: return TestInstance([[self.tip, False]]) else: return TestInstance([[self.tip, reject]]) # move the tip back to a previous block def tip(number): self.tip = self.blocks[number] # adds transactions to the block and updates state def update_block(block_number, new_transactions): block = self.blocks[block_number] self.add_transactions_to_block(block, new_transactions) old_sha256 = block.sha256 block.hashMerkleRoot = block.calc_merkle_root() block.solve() # Update the internal state just like in next_block self.tip = block if block.sha256 != old_sha256: self.block_heights[ block.sha256] = self.block_heights[old_sha256] del self.block_heights[old_sha256] self.blocks[block_number] = block return block # shorthand for functions block = self.next_block create_tx = self.create_tx # shorthand for variables node = self.nodes[0] # Create a new block block(0) save_spendable_output() yield accepted() # Now we need that block to mature so we can spend the coinbase. test = TestInstance(sync_every_block=False) for i in range(99): block(5000 + i) test.blocks_and_transactions.append([self.tip, True]) save_spendable_output() yield test # Collect spendable outputs now to avoid cluttering the code later on out = [] for i in range(33): out.append(get_spendable_output()) # P2SH # Build the redeem script, hash it, use hash to create the p2sh script redeem_script = CScript([self.coinbase_pubkey] + [ OP_2DUP, OP_CHECKSIGVERIFY] * 5 + [OP_CHECKSIG]) redeem_script_hash = hash160(redeem_script) p2sh_script = CScript([OP_HASH160, redeem_script_hash, OP_EQUAL]) # Creates a new transaction using a p2sh transaction as input def spend_p2sh_tx(p2sh_tx_to_spend, output_script=CScript([OP_TRUE])): # Create the transaction spent_p2sh_tx = CTransaction() spent_p2sh_tx.vin.append( CTxIn(COutPoint(p2sh_tx_to_spend.sha256, 0), b'')) spent_p2sh_tx.vout.append(CTxOut(1, output_script)) # Sign the transaction using the redeem script sighash = SignatureHashForkId( redeem_script, spent_p2sh_tx, 0, SIGHASH_ALL | SIGHASH_FORKID, p2sh_tx_to_spend.vout[0].nValue) sig = self.coinbase_key.sign( sighash) + bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID])) spent_p2sh_tx.vin[0].scriptSig = CScript([sig, redeem_script]) spent_p2sh_tx.rehash() return spent_p2sh_tx # P2SH tests # Create a p2sh transaction p2sh_tx = self.create_and_sign_transaction( out[0].tx, out[0].n, 1, p2sh_script) # Add the transaction to the block block(1) update_block(1, [p2sh_tx]) yield accepted() # Sigops p2sh limit for the mempool test p2sh_sigops_limit_mempool = MAX_STANDARD_TX_SIGOPS - \ redeem_script.GetSigOpCount(True) # Too many sigops in one p2sh script too_many_p2sh_sigops_mempool = CScript( [OP_CHECKSIG] * (p2sh_sigops_limit_mempool + 1)) # A transaction with this output script can't get into the mempool assert_raises_rpc_error(-26, RPC_TXNS_TOO_MANY_SIGOPS_ERROR, node.sendrawtransaction, ToHex(spend_p2sh_tx(p2sh_tx, too_many_p2sh_sigops_mempool))) # The transaction is rejected, so the mempool should still be empty assert_equal(set(node.getrawmempool()), set()) # Max sigops in one p2sh txn max_p2sh_sigops_mempool = CScript( [OP_CHECKSIG] * (p2sh_sigops_limit_mempool)) # A transaction with this output script can get into the mempool max_p2sh_sigops_txn = spend_p2sh_tx(p2sh_tx, max_p2sh_sigops_mempool) max_p2sh_sigops_txn_id = 
node.sendrawtransaction( ToHex(max_p2sh_sigops_txn)) assert_equal(set(node.getrawmempool()), {max_p2sh_sigops_txn_id}) # Mine the transaction block(2, spend=out[1]) update_block(2, [max_p2sh_sigops_txn]) yield accepted() # The transaction has been mined, it's not in the mempool anymore assert_equal(set(node.getrawmempool()), set()) if __name__ == '__main__': FullBlockTest().main() diff --git a/test/functional/abc-p2p-compactblocks.py b/test/functional/abc-p2p-compactblocks.py index 3de05ebb3..af8fb8e45 100755 --- a/test/functional/abc-p2p-compactblocks.py +++ b/test/functional/abc-p2p-compactblocks.py @@ -1,358 +1,357 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ This test checks simple acceptance of bigger blocks via p2p. It is derived from the much more complex p2p-fullblocktest. The intention is that small tests can be derived from this one, or this one can be extended, to cover the checks done for bigger blocks (e.g. sigops limits). """ from test_framework.test_framework import ComparisonTestFramework from test_framework.util import * from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import time from test_framework.script import * from test_framework.cdefs import (ONE_MEGABYTE, LEGACY_MAX_BLOCK_SIZE, MAX_BLOCK_SIGOPS_PER_MB, MAX_TX_SIGOPS_COUNT) from collections import deque class PreviousSpendableOutput(): def __init__(self, tx=CTransaction(), n=-1): self.tx = tx self.n = n # the output we're spending # TestNode: A peer we use to send messages to bitcoind, and store responses. class TestNode(P2PInterface): def __init__(self): self.last_sendcmpct = None self.last_cmpctblock = None self.last_getheaders = None self.last_headers = None super().__init__() def on_sendcmpct(self, message): self.last_sendcmpct = message def on_cmpctblock(self, message): self.last_cmpctblock = message self.last_cmpctblock.header_and_shortids.header.calc_sha256() def on_getheaders(self, message): self.last_getheaders = message def on_headers(self, message): self.last_headers = message for x in self.last_headers.headers: x.calc_sha256() def clear_block_data(self): with mininode_lock: self.last_sendcmpct = None self.last_cmpctblock = None class FullBlockTest(ComparisonTestFramework): # Can either run this test as 1 node with expected answers, or two and compare them. # Change the "outcome" variable from each TestInstance object to only do # the comparison. 
def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True self.block_heights = {} self.tip = None self.blocks = {} self.excessive_block_size = 16 * ONE_MEGABYTE self.extra_args = [['-norelaypriority', '-whitelist=127.0.0.1', '-limitancestorcount=999999', '-limitancestorsize=999999', '-limitdescendantcount=999999', '-limitdescendantsize=999999', '-maxmempool=99999', "-excessiveblocksize=%d" % self.excessive_block_size]] def add_options(self, parser): super().add_options(parser) parser.add_argument( "--runbarelyexpensive", dest="runbarelyexpensive", default=True) def run_test(self): self.test = TestManager(self, self.options.tmpdir) self.test.add_all_connections(self.nodes) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() # Set the blocksize to 2MB as initial condition self.nodes[0].setexcessiveblock(self.excessive_block_size) self.test.run() def add_transactions_to_block(self, block, tx_list): [tx.rehash() for tx in tx_list] block.vtx.extend(tx_list) # this is a little handier to use than the version in blocktools.py def create_tx(self, spend_tx, n, value, script=CScript([OP_TRUE])): tx = create_transaction(spend_tx, n, b"", value, script) return tx def next_block(self, number, spend=None, script=CScript([OP_TRUE]), block_size=0, extra_txns=0): if self.tip == None: base_block_hash = self.genesis_hash block_time = int(time.time()) + 1 else: base_block_hash = self.tip.sha256 block_time = self.tip.nTime + 1 # First create the coinbase height = self.block_heights[base_block_hash] + 1 coinbase = create_coinbase(height) coinbase.rehash() if spend == None: # We need to have something to spend to fill the block. assert_equal(block_size, 0) block = create_block(base_block_hash, coinbase, block_time) else: # all but one satoshi to fees coinbase.vout[0].nValue += spend.tx.vout[spend.n].nValue - 1 coinbase.rehash() block = create_block(base_block_hash, coinbase, block_time) # Make sure we have plenty enough to spend going forward. spendable_outputs = deque([spend]) def get_base_transaction(): # Create the new transaction tx = CTransaction() # Spend from one of the spendable outputs spend = spendable_outputs.popleft() tx.vin.append(CTxIn(COutPoint(spend.tx.sha256, spend.n))) # Add spendable outputs for i in range(4): tx.vout.append(CTxOut(0, CScript([OP_TRUE]))) spendable_outputs.append(PreviousSpendableOutput(tx, i)) pad_tx(tx) return tx tx = get_base_transaction() # Make it the same format as transaction added for padding and save the size. # It's missing the padding output, so we add a constant to account for it. tx.rehash() base_tx_size = len(tx.serialize()) + 18 # If a specific script is required, add it. if script != None: tx.vout.append(CTxOut(1, script)) # Put some random data into the first transaction of the chain to randomize ids. tx.vout.append( CTxOut(0, CScript([random.randint(0, 256), OP_RETURN]))) # Add the transaction to the block self.add_transactions_to_block(block, [tx]) # Add transaction until we reach the expected transaction count for _ in range(extra_txns): self.add_transactions_to_block(block, [get_base_transaction()]) # If we have a block size requirement, just fill # the block until we get there current_block_size = len(block.serialize()) overage_bytes = 0 while current_block_size < block_size: # We will add a new transaction. That means the size of # the field enumerating how many transaction go in the block # may change. 
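# (A CompactSize encoding takes 1, 3, 5 or 9 bytes depending on the value, so the serialized length of the transaction count can grow as transactions are added; the two adjustments below account for that.)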
current_block_size -= len(ser_compact_size(len(block.vtx))) current_block_size += len(ser_compact_size(len(block.vtx) + 1)) # Add padding to fill the block. left_to_fill = block_size - current_block_size # Don't go over the 1 MB limit for a txn if left_to_fill > 500000: # Make sure we eat up non-divisible by 100 amounts quickly # Also keep transactions less than 1 MB left_to_fill = 500000 + left_to_fill % 100 # Create the new transaction tx = get_base_transaction() pad_tx(tx, left_to_fill - overage_bytes) if len(tx.serialize()) + current_block_size > block_size: # Our padding was too big, try again overage_bytes += 1 continue # Add the tx to the list of transactions to be included # in the block. self.add_transactions_to_block(block, [tx]) current_block_size += len(tx.serialize()) # Now that we added a bunch of transactions, we need to recompute # the merkle root. make_conform_to_ctor(block) block.hashMerkleRoot = block.calc_merkle_root() # Check that the block size is what's expected if block_size > 0: assert_equal(len(block.serialize()), block_size) # Do PoW, which is cheap on regtest block.solve() self.tip = block self.block_heights[block.sha256] = height assert number not in self.blocks self.blocks[number] = block return block def get_tests(self): self.genesis_hash = int(self.nodes[0].getbestblockhash(), 16) self.block_heights[self.genesis_hash] = 0 spendable_outputs = [] # save the current tip so it can be spent by a later block def save_spendable_output(): spendable_outputs.append(self.tip) # get an output that we previously marked as spendable def get_spendable_output(): return PreviousSpendableOutput(spendable_outputs.pop(0).vtx[0], 0) # returns a test case that asserts that the current tip was accepted def accepted(): return TestInstance([[self.tip, True]]) # returns a test case that asserts that the current tip was rejected def rejected(reject=None): if reject is None: return TestInstance([[self.tip, False]]) else: return TestInstance([[self.tip, reject]]) # move the tip back to a previous block def tip(number): self.tip = self.blocks[number] # shorthand for functions block = self.next_block # Create a new block block(0) save_spendable_output() yield accepted() # Now we need that block to mature so we can spend the coinbase. test = TestInstance(sync_every_block=False) for i in range(99): block(5000 + i) test.blocks_and_transactions.append([self.tip, True]) save_spendable_output() # Get to one block before the May 15, 2018 HF activation for i in range(6): block(5100 + i) test.blocks_and_transactions.append([self.tip, True]) # Send it all to the node at once. yield test # collect spendable outputs now to avoid cluttering the code later on out = [] for i in range(100): out.append(get_spendable_output()) # Check that compact blocks also work for big blocks node = self.nodes[0] peer = TestNode() peer.peer_connect('127.0.0.1', p2p_port(0)) # Wait for the connection to be established peer.wait_for_verack() # Wait for SENDCMPCT def received_sendcmpct(): return (peer.last_sendcmpct != None) wait_until(received_sendcmpct, timeout=30) sendcmpct = msg_sendcmpct() sendcmpct.version = 1 sendcmpct.announce = True peer.send_and_ping(sendcmpct) # Exchange headers def received_getheaders(): return (peer.last_getheaders != None) wait_until(received_getheaders, timeout=30) # Return the favor peer.send_message(peer.last_getheaders) # Wait for the header list def received_headers(): return (peer.last_headers != None) wait_until(received_headers, timeout=30) # It's like we know about the same headers!
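# (Echoing the headers back tells the node this peer already has its chain, which it presumably requires before announcing new blocks via cmpctblock.)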
peer.send_message(peer.last_headers) # Send a block b1 = block(1, spend=out[0], block_size=ONE_MEGABYTE + 1) yield accepted() # Check that the node forwards it via compact block def received_block(): return (peer.last_cmpctblock != None) wait_until(received_block, timeout=30) # Was it our block? cmpctblk_header = peer.last_cmpctblock.header_and_shortids.header cmpctblk_header.calc_sha256() assert(cmpctblk_header.sha256 == b1.sha256) # Send a large block with numerous transactions. peer.clear_block_data() b2 = block(2, spend=out[1], extra_txns=70000, block_size=self.excessive_block_size - 1000) yield accepted() # Check that the node forwards it via compact block wait_until(received_block, timeout=30) # Was it our block? cmpctblk_header = peer.last_cmpctblock.header_and_shortids.header cmpctblk_header.calc_sha256() assert(cmpctblk_header.sha256 == b2.sha256) # In order to avoid having to resend a ton of transactions, we invalidate # b2, which will send all its transactions back to the mempool. node.invalidateblock(node.getbestblockhash()) # Let's send a compact block and see if the node accepts it. # Let's modify b2 and use it so that we can reuse the mempool. tx = b2.vtx[0] tx.vout.append(CTxOut(0, CScript([random.randint(0, 256), OP_RETURN]))) tx.rehash() b2.vtx[0] = tx b2.hashMerkleRoot = b2.calc_merkle_root() b2.solve() # Now we create the compact block and send it comp_block = HeaderAndShortIDs() comp_block.initialize_from_block(b2) peer.send_and_ping(msg_cmpctblock(comp_block.to_p2p())) # Check that the compact block was accepted properly assert(int(node.getbestblockhash(), 16) == b2.sha256) if __name__ == '__main__': FullBlockTest().main() diff --git a/test/functional/abc-p2p-fullblocktest.py b/test/functional/abc-p2p-fullblocktest.py index 090fa8cf3..0b5ce6750 100755 --- a/test/functional/abc-p2p-fullblocktest.py +++ b/test/functional/abc-p2p-fullblocktest.py @@ -1,402 +1,401 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ This test checks simple acceptance of bigger blocks via p2p. It is derived from the much more complex p2p-fullblocktest. The intention is that small tests can be derived from this one, or this one can be extended, to cover the checks done for bigger blocks (e.g. sigops limits). """ from test_framework.test_framework import ComparisonTestFramework from test_framework.util import assert_equal, assert_raises_rpc_error from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import time from test_framework.key import CECKey from test_framework.script import * from test_framework.cdefs import (ONE_MEGABYTE, LEGACY_MAX_BLOCK_SIZE, MAX_BLOCK_SIGOPS_PER_MB, MAX_TX_SIGOPS_COUNT) from collections import deque REPLAY_PROTECTION_START_TIME = 2000000000 class PreviousSpendableOutput(): def __init__(self, tx=CTransaction(), n=-1): self.tx = tx self.n = n # the output we're spending class FullBlockTest(ComparisonTestFramework): # Can either run this test as 1 node with expected answers, or two and compare them. # Change the "outcome" variable from each TestInstance object to only do # the comparison.
def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True self.block_heights = {} self.tip = None self.blocks = {} self.excessive_block_size = 100 * ONE_MEGABYTE self.extra_args = [['-whitelist=127.0.0.1', "-replayprotectionactivationtime=%d" % REPLAY_PROTECTION_START_TIME, "-excessiveblocksize=%d" % self.excessive_block_size]] def add_options(self, parser): super().add_options(parser) parser.add_argument( "--runbarelyexpensive", dest="runbarelyexpensive", default=True) def run_test(self): self.test = TestManager(self, self.options.tmpdir) self.test.add_all_connections(self.nodes) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() # Set the excessive block size as initial condition self.nodes[0].setexcessiveblock(self.excessive_block_size) self.test.run() def add_transactions_to_block(self, block, tx_list): [tx.rehash() for tx in tx_list] block.vtx.extend(tx_list) # this is a little handier to use than the version in blocktools.py def create_tx(self, spend, value, script=CScript([OP_TRUE])): tx = create_transaction(spend.tx, spend.n, b"", value, script) return tx def next_block(self, number, spend=None, script=CScript([OP_TRUE]), block_size=0, extra_sigops=0): if self.tip == None: base_block_hash = self.genesis_hash block_time = int(time.time()) + 1 else: base_block_hash = self.tip.sha256 block_time = self.tip.nTime + 1 # First create the coinbase height = self.block_heights[base_block_hash] + 1 coinbase = create_coinbase(height) coinbase.rehash() if spend == None: # We need to have something to spend to fill the block. assert_equal(block_size, 0) block = create_block(base_block_hash, coinbase, block_time) else: # all but one satoshi to fees coinbase.vout[0].nValue += spend.tx.vout[spend.n].nValue - 1 coinbase.rehash() block = create_block(base_block_hash, coinbase, block_time) # Make sure we have plenty enough to spend going forward. spendable_outputs = deque([spend]) def get_base_transaction(): # Create the new transaction tx = CTransaction() # Spend from one of the spendable outputs spend = spendable_outputs.popleft() tx.vin.append(CTxIn(COutPoint(spend.tx.sha256, spend.n))) # Add spendable outputs for i in range(4): tx.vout.append(CTxOut(0, CScript([OP_TRUE]))) spendable_outputs.append(PreviousSpendableOutput(tx, i)) return tx tx = get_base_transaction() # Make it the same format as the transactions added for padding and save the size. # It's missing the padding output, so we add a constant to account for it. tx.rehash() base_tx_size = len(tx.serialize()) + 18 # If a specific script is required, add it. if script != None: tx.vout.append(CTxOut(1, script)) # Put some random data into the first transaction of the chain to randomize ids. tx.vout.append( CTxOut(0, CScript([random.randint(0, 256), OP_RETURN]))) # Add the transaction to the block self.add_transactions_to_block(block, [tx]) # If we have a block size requirement, just fill # the block until we get there current_block_size = len(block.serialize()) while current_block_size < block_size: # We will add a new transaction. That means the size of # the field enumerating how many transactions go in the block # may change. current_block_size -= len(ser_compact_size(len(block.vtx))) current_block_size += len(ser_compact_size(len(block.vtx) + 1)) # Create the new transaction tx = get_base_transaction() # Add padding to fill the block.
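# (The padding is a single large data push in an extra output script; base_tx_size, measured above, is the transaction's size without that padding output.)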
script_length = block_size - current_block_size - base_tx_size if script_length > 510000: if script_length < 1000000: # Make sure we don't find ourselves in a position where we # need to generate a transaction smaller than what we expected. script_length = script_length // 2 else: script_length = 500000 tx_sigops = min(extra_sigops, script_length, MAX_TX_SIGOPS_COUNT) extra_sigops -= tx_sigops script_pad_len = script_length - tx_sigops script_output = CScript( [b'\x00' * script_pad_len] + [OP_CHECKSIG] * tx_sigops) tx.vout.append(CTxOut(0, script_output)) # Add the tx to the list of transactions to be included # in the block. self.add_transactions_to_block(block, [tx]) current_block_size += len(tx.serialize()) # Now that we added a bunch of transaction, we need to recompute # the merkle root. make_conform_to_ctor(block) block.hashMerkleRoot = block.calc_merkle_root() # Check that the block size is what's expected if block_size > 0: assert_equal(len(block.serialize()), block_size) # Do PoW, which is cheap on regnet block.solve() self.tip = block self.block_heights[block.sha256] = height assert number not in self.blocks self.blocks[number] = block return block def get_tests(self): node = self.nodes[0] self.genesis_hash = int(node.getbestblockhash(), 16) self.block_heights[self.genesis_hash] = 0 spendable_outputs = [] # save the current tip so it can be spent by a later block def save_spendable_output(): spendable_outputs.append(self.tip) # get an output that we previously marked as spendable def get_spendable_output(): return PreviousSpendableOutput(spendable_outputs.pop(0).vtx[0], 0) # returns a test case that asserts that the current tip was accepted def accepted(): return TestInstance([[self.tip, True]]) # returns a test case that asserts that the current tip was rejected def rejected(reject=None): if reject is None: return TestInstance([[self.tip, False]]) else: return TestInstance([[self.tip, reject]]) # move the tip back to a previous block def tip(number): self.tip = self.blocks[number] # adds transactions to the block and updates state def update_block(block_number, new_transactions): block = self.blocks[block_number] self.add_transactions_to_block(block, new_transactions) old_sha256 = block.sha256 make_conform_to_ctor(block) block.hashMerkleRoot = block.calc_merkle_root() block.solve() # Update the internal state just like in next_block self.tip = block if block.sha256 != old_sha256: self.block_heights[ block.sha256] = self.block_heights[old_sha256] del self.block_heights[old_sha256] self.blocks[block_number] = block return block # shorthand for functions block = self.next_block # Create a new block block(0) save_spendable_output() yield accepted() # Now we need that block to mature so we can spend the coinbase. test = TestInstance(sync_every_block=False) for i in range(99): block(5000 + i) test.blocks_and_transactions.append([self.tip, True]) save_spendable_output() yield test # collect spendable outputs now to avoid cluttering the code later on out = [] for i in range(100): out.append(get_spendable_output()) # Let's build some blocks and test them. for i in range(16): n = i + 1 block(n, spend=out[i], block_size=n * ONE_MEGABYTE) yield accepted() # block of maximal size block(17, spend=out[16], block_size=self.excessive_block_size) yield accepted() # Reject oversized blocks with bad-blk-length error block(18, spend=out[17], block_size=self.excessive_block_size + 1) yield rejected(RejectResult(16, b'bad-blk-length')) # Rewind bad block. 
tip(17) # Accept many sigops lots_of_checksigs = CScript( [OP_CHECKSIG] * MAX_BLOCK_SIGOPS_PER_MB) block(19, spend=out[17], script=lots_of_checksigs, block_size=ONE_MEGABYTE) yield accepted() block(20, spend=out[18], script=lots_of_checksigs, block_size=ONE_MEGABYTE, extra_sigops=1) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(19) # Accept 40k sigops per block > 1MB and <= 2MB block(21, spend=out[18], script=lots_of_checksigs, extra_sigops=MAX_BLOCK_SIGOPS_PER_MB, block_size=ONE_MEGABYTE + 1) yield accepted() # Accept 40k sigops per block > 1MB and <= 2MB block(22, spend=out[19], script=lots_of_checksigs, extra_sigops=MAX_BLOCK_SIGOPS_PER_MB, block_size=2 * ONE_MEGABYTE) yield accepted() # Reject more than 40k sigops per block > 1MB and <= 2MB. block(23, spend=out[20], script=lots_of_checksigs, extra_sigops=MAX_BLOCK_SIGOPS_PER_MB + 1, block_size=ONE_MEGABYTE + 1) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(22) # Reject more than 40k sigops per block > 1MB and <= 2MB. block(24, spend=out[20], script=lots_of_checksigs, extra_sigops=MAX_BLOCK_SIGOPS_PER_MB + 1, block_size=2 * ONE_MEGABYTE) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(22) # Accept 60k sigops per block > 2MB and <= 3MB block(25, spend=out[20], script=lots_of_checksigs, extra_sigops=2 * MAX_BLOCK_SIGOPS_PER_MB, block_size=2 * ONE_MEGABYTE + 1) yield accepted() # Accept 60k sigops per block > 2MB and <= 3MB block(26, spend=out[21], script=lots_of_checksigs, extra_sigops=2 * MAX_BLOCK_SIGOPS_PER_MB, block_size=3 * ONE_MEGABYTE) yield accepted() # Reject more than 60k sigops per block > 2MB and <= 3MB. block(27, spend=out[22], script=lots_of_checksigs, extra_sigops=2 * MAX_BLOCK_SIGOPS_PER_MB + 1, block_size=2 * ONE_MEGABYTE + 1) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(26) # Reject more than 60k sigops per block > 2MB and <= 3MB.
block(28, spend=out[22], script=lots_of_checksigs, extra_sigops=2 * MAX_BLOCK_SIGOPS_PER_MB + 1, block_size=3 * ONE_MEGABYTE) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(26) # Too many sigops in one txn too_many_tx_checksigs = CScript( [OP_CHECKSIG] * (MAX_BLOCK_SIGOPS_PER_MB + 1)) block( 29, spend=out[22], script=too_many_tx_checksigs, block_size=ONE_MEGABYTE + 1) yield rejected(RejectResult(16, b'bad-txn-sigops')) # Rewind bad block tip(26) # Generate a key pair to test P2SH sigops count private_key = CECKey() private_key.set_secretbytes(b"fatstacks") public_key = private_key.get_pubkey() # P2SH # Build the redeem script, hash it, use hash to create the p2sh script redeem_script = CScript( [public_key] + [OP_2DUP, OP_CHECKSIGVERIFY] * 5 + [OP_CHECKSIG]) redeem_script_hash = hash160(redeem_script) p2sh_script = CScript([OP_HASH160, redeem_script_hash, OP_EQUAL]) # Create a p2sh transaction p2sh_tx = self.create_tx(out[22], 1, p2sh_script) # Add the transaction to the block block(30) update_block(30, [p2sh_tx]) yield accepted() # Creates a new transaction using the p2sh transaction included in the # last block def spend_p2sh_tx(output_script=CScript([OP_TRUE])): # Create the transaction spent_p2sh_tx = CTransaction() spent_p2sh_tx.vin.append(CTxIn(COutPoint(p2sh_tx.sha256, 0), b'')) spent_p2sh_tx.vout.append(CTxOut(1, output_script)) # Sign the transaction using the redeem script sighash = SignatureHashForkId( redeem_script, spent_p2sh_tx, 0, SIGHASH_ALL | SIGHASH_FORKID, p2sh_tx.vout[0].nValue) sig = private_key.sign(sighash) + \ bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID])) spent_p2sh_tx.vin[0].scriptSig = CScript([sig, redeem_script]) spent_p2sh_tx.rehash() return spent_p2sh_tx # Sigops p2sh limit p2sh_sigops_limit = MAX_BLOCK_SIGOPS_PER_MB - \ redeem_script.GetSigOpCount(True) # Too many sigops in one p2sh txn too_many_p2sh_sigops = CScript([OP_CHECKSIG] * (p2sh_sigops_limit + 1)) block(31, spend=out[23], block_size=ONE_MEGABYTE + 1) update_block(31, [spend_p2sh_tx(too_many_p2sh_sigops)]) yield rejected(RejectResult(16, b'bad-txn-sigops')) # Rewind bad block tip(30) # Max sigops in one p2sh txn max_p2sh_sigops = CScript([OP_CHECKSIG] * (p2sh_sigops_limit)) block(32, spend=out[23], block_size=ONE_MEGABYTE + 1) update_block(32, [spend_p2sh_tx(max_p2sh_sigops)]) yield accepted() # Submit a very large block via RPC large_block = block( 33, spend=out[24], block_size=self.excessive_block_size) node.submitblock(ToHex(large_block)) if __name__ == '__main__': FullBlockTest().main() diff --git a/test/functional/abc-replay-protection.py b/test/functional/abc-replay-protection.py index 435f61aac..a697a6bb7 100755 --- a/test/functional/abc-replay-protection.py +++ b/test/functional/abc-replay-protection.py @@ -1,282 +1,281 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ This test checks activation of UAHF and the different consensus related to this activation. It is derived from the much more complex p2p-fullblocktest. 
""" from test_framework.test_framework import ComparisonTestFramework from test_framework.util import assert_equal, assert_raises_rpc_error from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import time from test_framework.key import CECKey from test_framework.script import * # far into the future REPLAY_PROTECTION_START_TIME = 2000000000 # Error due to invalid signature INVALID_SIGNATURE_ERROR = b'mandatory-script-verify-flag-failed (Signature must be zero for failed CHECK(MULTI)SIG operation)' RPC_INVALID_SIGNATURE_ERROR = "16: " + \ INVALID_SIGNATURE_ERROR.decode("utf-8") class PreviousSpendableOutput(object): def __init__(self, tx=CTransaction(), n=-1): self.tx = tx self.n = n # the output we're spending class ReplayProtectionTest(ComparisonTestFramework): def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True self.block_heights = {} self.tip = None self.blocks = {} self.extra_args = [['-whitelist=127.0.0.1', "-replayprotectionactivationtime=%d" % REPLAY_PROTECTION_START_TIME]] def run_test(self): self.test = TestManager(self, self.options.tmpdir) self.test.add_all_connections(self.nodes) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() self.nodes[0].setmocktime(REPLAY_PROTECTION_START_TIME) self.test.run() def next_block(self, number): if self.tip == None: base_block_hash = self.genesis_hash block_time = int(time.time()) + 1 else: base_block_hash = self.tip.sha256 block_time = self.tip.nTime + 1 # First create the coinbase height = self.block_heights[base_block_hash] + 1 coinbase = create_coinbase(height) coinbase.rehash() block = create_block(base_block_hash, coinbase, block_time) # Do PoW, which is cheap on regnet block.solve() self.tip = block self.block_heights[block.sha256] = height assert number not in self.blocks self.blocks[number] = block return block def get_tests(self): self.genesis_hash = int(self.nodes[0].getbestblockhash(), 16) self.block_heights[self.genesis_hash] = 0 spendable_outputs = [] # save the current tip so it can be spent by a later block def save_spendable_output(): spendable_outputs.append(self.tip) # get an output that we previously marked as spendable def get_spendable_output(): return PreviousSpendableOutput(spendable_outputs.pop(0).vtx[0], 0) # returns a test case that asserts that the current tip was accepted def accepted(): return TestInstance([[self.tip, True]]) # returns a test case that asserts that the current tip was rejected def rejected(reject=None): if reject is None: return TestInstance([[self.tip, False]]) else: return TestInstance([[self.tip, reject]]) # move the tip back to a previous block def tip(number): self.tip = self.blocks[number] # adds transactions to the block and updates state def update_block(block_number, new_transactions): [tx.rehash() for tx in new_transactions] block = self.blocks[block_number] block.vtx.extend(new_transactions) old_sha256 = block.sha256 make_conform_to_ctor(block) block.hashMerkleRoot = block.calc_merkle_root() block.solve() # Update the internal state just like in next_block self.tip = block if block.sha256 != old_sha256: self.block_heights[ block.sha256] = self.block_heights[old_sha256] del self.block_heights[old_sha256] self.blocks[block_number] = block return block # shorthand for functions block = self.next_block node = self.nodes[0] # Create a new block block(0) save_spendable_output() yield accepted() # Now we need that block to mature so we can spend the coinbase. 
test = TestInstance(sync_every_block=False) for i in range(99): block(5000 + i) test.blocks_and_transactions.append([self.tip, True]) save_spendable_output() yield test # collect spendable outputs now to avoid cluttering the code later on out = [] for i in range(100): out.append(get_spendable_output()) # Generate a key pair to test P2SH sigops count private_key = CECKey() private_key.set_secretbytes(b"replayprotection") public_key = private_key.get_pubkey() # This is a little handier to use than the version in blocktools.py def create_fund_and_spend_tx(spend, forkvalue=0): # Fund transaction script = CScript([public_key, OP_CHECKSIG]) txfund = create_transaction( spend.tx, spend.n, b'', 50 * COIN, script) txfund.rehash() # Spend transaction txspend = CTransaction() txspend.vout.append(CTxOut(50 * COIN - 1000, CScript([OP_TRUE]))) txspend.vin.append(CTxIn(COutPoint(txfund.sha256, 0), b'')) # Sign the transaction sighashtype = (forkvalue << 8) | SIGHASH_ALL | SIGHASH_FORKID sighash = SignatureHashForkId( script, txspend, 0, sighashtype, 50 * COIN) sig = private_key.sign(sighash) + \ bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID])) txspend.vin[0].scriptSig = CScript([sig]) txspend.rehash() return [txfund, txspend] def send_transaction_to_mempool(tx): tx_id = node.sendrawtransaction(ToHex(tx)) assert(tx_id in set(node.getrawmempool())) return tx_id # Before the fork, no replay protection is required to get into the mempool. txns = create_fund_and_spend_tx(out[0]) send_transaction_to_mempool(txns[0]) send_transaction_to_mempool(txns[1]) # And txns get mined in a block properly. block(1) update_block(1, txns) yield accepted() # Replay protected transactions are rejected. replay_txns = create_fund_and_spend_tx(out[1], 0xffdead) send_transaction_to_mempool(replay_txns[0]) assert_raises_rpc_error(-26, RPC_INVALID_SIGNATURE_ERROR, node.sendrawtransaction, ToHex(replay_txns[1])) # And blocks containing them are rejected as well. block(2) update_block(2, replay_txns) yield rejected(RejectResult(16, b'blk-bad-inputs')) # Rewind bad block tip(1) # Create a block that would activate the replay protection. bfork = block(5555) bfork.nTime = REPLAY_PROTECTION_START_TIME - 1 update_block(5555, []) yield accepted() for i in range(5): block(5100 + i) test.blocks_and_transactions.append([self.tip, True]) yield test # Check that we are just before the activation time assert_equal(node.getblockheader(node.getbestblockhash())['mediantime'], REPLAY_PROTECTION_START_TIME - 1) # We are just before the fork; replay protected txns are still rejected assert_raises_rpc_error(-26, RPC_INVALID_SIGNATURE_ERROR, node.sendrawtransaction, ToHex(replay_txns[1])) block(3) update_block(3, replay_txns) yield rejected(RejectResult(16, b'blk-bad-inputs')) # Rewind bad block tip(5104) # Send some non replay protected txns into the mempool to check # they get cleaned at activation. txns = create_fund_and_spend_tx(out[2]) send_transaction_to_mempool(txns[0]) tx_id = send_transaction_to_mempool(txns[1]) # Activate the replay protection block(5556) yield accepted() # Non replay protected transactions are not valid anymore, # so they should be removed from the mempool. assert(tx_id not in set(node.getrawmempool())) # Good old transactions are now invalid.
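# (The fund transaction spends an anyone-can-spend coinbase output and carries no signature, so it still relays; only the signed spend below is rejected.)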
send_transaction_to_mempool(txns[0]) assert_raises_rpc_error(-26, RPC_INVALID_SIGNATURE_ERROR, node.sendrawtransaction, ToHex(txns[1])) # They also cannot be mined block(4) update_block(4, txns) yield rejected(RejectResult(16, b'blk-bad-inputs')) # Rewind bad block tip(5556) # The replay protected transaction is now valid send_transaction_to_mempool(replay_txns[0]) replay_tx_id = send_transaction_to_mempool(replay_txns[1]) # They can also be mined b5 = block(5) update_block(5, replay_txns) yield accepted() # Ok, now we check that a reorg works properly across the activation. postforkblockid = node.getbestblockhash() node.invalidateblock(postforkblockid) assert(replay_tx_id in set(node.getrawmempool())) # Deactivating replay protection. forkblockid = node.getbestblockhash() node.invalidateblock(forkblockid) assert(replay_tx_id not in set(node.getrawmempool())) # Check that we also do it properly on deeper reorg. node.reconsiderblock(forkblockid) node.reconsiderblock(postforkblockid) node.invalidateblock(forkblockid) assert(replay_tx_id not in set(node.getrawmempool())) if __name__ == '__main__': ReplayProtectionTest().main() diff --git a/test/functional/abc-sync-chain.py b/test/functional/abc-sync-chain.py index d09bf0075..fcd3221bb 100755 --- a/test/functional/abc-sync-chain.py +++ b/test/functional/abc-sync-chain.py @@ -1,81 +1,81 @@ #!/usr/bin/env python3 # Copyright (c) 2018 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ Test that a node receiving many (potentially out of order) blocks exits initial block download (IBD; this occurs once it has passed minimumchainwork) and continues to sync without stalling. """ import random from test_framework.blocktools import create_block, create_coinbase from test_framework.mininode import (CBlockHeader, - NetworkThread, + network_thread_start, P2PInterface, msg_block, msg_headers) from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal, wait_until, p2p_port NUM_IBD_BLOCKS = 50 class BaseNode(P2PInterface): def send_header(self, block): msg = msg_headers() msg.headers = [CBlockHeader(block)] self.send_message(msg) def send_block(self, block): self.send_message(msg_block(block)) class SyncChainTest(BitcoinTestFramework): def set_test_params(self): self.num_nodes = 1 # Setting minimumchainwork makes sure we test IBD as well as post-IBD self.extra_args = [ ["-minimumchainwork={:#x}".format(202 + 2 * NUM_IBD_BLOCKS)]] def run_test(self): node0conn = BaseNode() node0conn.peer_connect('127.0.0.1', p2p_port(0)) - NetworkThread().start() + network_thread_start() node0conn.wait_for_verack() node0 = self.nodes[0] tip = int(node0.getbestblockhash(), 16) height = node0.getblockcount() + 1 time = node0.getblock(node0.getbestblockhash())['time'] + 1 blocks = [] for i in range(NUM_IBD_BLOCKS * 2): block = create_block(tip, create_coinbase(height), time) block.solve() blocks.append(block) tip = block.sha256 height += 1 time += 1 # Headers need to be sent in-order for b in blocks: node0conn.send_header(b) # Send blocks in some random order for b in random.sample(blocks, len(blocks)): node0conn.send_block(b) # The node should eventually sync completely without getting stuck def node_synced(): return node0.getbestblockhash() == blocks[-1].hash wait_until(node_synced) if __name__ == '__main__': SyncChainTest().main() diff --git a/test/functional/abc-transaction-ordering.py
b/test/functional/abc-transaction-ordering.py index 9f46a6c43..6a75e30d7 100755 --- a/test/functional/abc-transaction-ordering.py +++ b/test/functional/abc-transaction-ordering.py @@ -1,247 +1,246 @@ #!/usr/bin/env python3 # Copyright (c) 2018 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ This test checks that the node software accepts transactions in non topological order once the feature is activated. """ from test_framework.test_framework import ComparisonTestFramework from test_framework.util import assert_equal, assert_raises_rpc_error from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import time from test_framework.key import CECKey from test_framework.script import * from collections import deque # far into the future REPLAY_PROTECTION_START_TIME = 2000000000 class PreviousSpendableOutput(): def __init__(self, tx=CTransaction(), n=-1): self.tx = tx self.n = n # the output we're spending class TransactionOrderingTest(ComparisonTestFramework): # Can either run this test as 1 node with expected answers, or two and compare them. # Change the "outcome" variable from each TestInstance object to only do # the comparison. def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True self.block_heights = {} self.tip = None self.blocks = {} self.extra_args = [['-whitelist=127.0.0.1', '-relaypriority=0', "-replayprotectionactivationtime={}".format(REPLAY_PROTECTION_START_TIME)]] def run_test(self): self.test = TestManager(self, self.options.tmpdir) self.test.add_all_connections(self.nodes) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() # Set the blocksize to 2MB as initial condition self.test.run() def add_transactions_to_block(self, block, tx_list): [tx.rehash() for tx in tx_list] block.vtx.extend(tx_list) def next_block(self, number, spend=None, tx_count=0): if self.tip == None: base_block_hash = self.genesis_hash block_time = int(time.time()) + 1 else: base_block_hash = self.tip.sha256 block_time = self.tip.nTime + 1 # First create the coinbase height = self.block_heights[base_block_hash] + 1 coinbase = create_coinbase(height) coinbase.rehash() if spend == None: # We need to have something to spend to fill the block. block = create_block(base_block_hash, coinbase, block_time) else: # all but one satoshi to fees coinbase.vout[0].nValue += spend.tx.vout[spend.n].nValue - 1 coinbase.rehash() block = create_block(base_block_hash, coinbase, block_time) # Make sure we have plenty enough to spend going forward. spendable_outputs = deque([spend]) def get_base_transaction(): # Create the new transaction tx = CTransaction() # Spend from one of the spendable outputs spend = spendable_outputs.popleft() tx.vin.append(CTxIn(COutPoint(spend.tx.sha256, spend.n))) # Add spendable outputs for i in range(4): tx.vout.append(CTxOut(0, CScript([OP_TRUE]))) spendable_outputs.append(PreviousSpendableOutput(tx, i)) # Put some random data into the transaction in order to randomize ids. # This also ensures that transaction are larger than 100 bytes. rand = random.getrandbits(256) tx.vout.append(CTxOut(0, CScript([rand, OP_RETURN]))) return tx tx = get_base_transaction() # Make it the same format as transaction added for padding and save the size. # It's missing the padding output, so we add a constant to account for it. 
tx.rehash() # Add the transaction to the block self.add_transactions_to_block(block, [tx]) # If we have a transaction count requirement, just fill the block until we get there while len(block.vtx) < tx_count: # Create the new transaction and add it. tx = get_base_transaction() self.add_transactions_to_block(block, [tx]) # Now that we added a bunch of transaction, we need to recompute # the merkle root. block.hashMerkleRoot = block.calc_merkle_root() if tx_count > 0: assert_equal(len(block.vtx), tx_count) # Do PoW, which is cheap on regnet block.solve() self.tip = block self.block_heights[block.sha256] = height assert number not in self.blocks self.blocks[number] = block return block def get_tests(self): node = self.nodes[0] self.genesis_hash = int(node.getbestblockhash(), 16) self.block_heights[self.genesis_hash] = 0 spendable_outputs = [] # save the current tip so it can be spent by a later block def save_spendable_output(): spendable_outputs.append(self.tip) # get an output that we previously marked as spendable def get_spendable_output(): return PreviousSpendableOutput(spendable_outputs.pop(0).vtx[0], 0) # returns a test case that asserts that the current tip was accepted def accepted(): return TestInstance([[self.tip, True]]) # returns a test case that asserts that the current tip was rejected def rejected(reject=None): if reject is None: return TestInstance([[self.tip, False]]) else: return TestInstance([[self.tip, reject]]) # move the tip back to a previous block def tip(number): self.tip = self.blocks[number] # adds transactions to the block and updates state def update_block(block_number, new_transactions=[]): block = self.blocks[block_number] self.add_transactions_to_block(block, new_transactions) old_sha256 = block.sha256 block.hashMerkleRoot = block.calc_merkle_root() block.solve() # Update the internal state just like in next_block self.tip = block if block.sha256 != old_sha256: self.block_heights[block.sha256] = self.block_heights[old_sha256] del self.block_heights[old_sha256] self.blocks[block_number] = block return block # shorthand for functions block = self.next_block # Create a new block block(0) save_spendable_output() yield accepted() # Now we need that block to mature so we can spend the coinbase. test = TestInstance(sync_every_block=False) for i in range(99): block(5000 + i) test.blocks_and_transactions.append([self.tip, True]) save_spendable_output() yield test # collect spendable outputs now to avoid cluttering the code later on out = [] for i in range(100): out.append(get_spendable_output()) # Let's build some blocks and test them. for i in range(17): n = i + 1 block(n) yield accepted() block(5556) yield accepted() # Block with regular ordering are now rejected. block(5557, out[17], tx_count=16) yield rejected(RejectResult(16, b'tx-ordering')) # Rewind bad block. tip(5556) # After we activate the Nov 15, 2018 HF, transaction order is enforced. def ordered_block(block_number, spend): b = block(block_number, spend=spend, tx_count=16) make_conform_to_ctor(b) update_block(block_number) return b # Now that the fork activated, we need to order transaction per txid. ordered_block(4445, out[17]) yield accepted() ordered_block(4446, out[18]) yield accepted() # Generate a block with a duplicated transaction. 
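        # A duplicated transaction duplicates its inputs as well, so the
        # mutated block below should trip the duplicate/double-spend checks
        # and be rejected with b'bad-txns-duplicate'.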
        double_tx_block = ordered_block(4447, out[19])
        assert_equal(len(double_tx_block.vtx), 16)
        double_tx_block.vtx = double_tx_block.vtx[:8] + \
            [double_tx_block.vtx[8]] + double_tx_block.vtx[8:]
        update_block(4447)
        yield rejected(RejectResult(16, b'bad-txns-duplicate'))

        # Rewind bad block.
        tip(4446)

        # Check over two blocks.
        proper_block = ordered_block(4448, out[20])
        yield accepted()

        replay_tx_block = ordered_block(4449, out[21])
        assert_equal(len(replay_tx_block.vtx), 16)
        replay_tx_block.vtx.append(proper_block.vtx[5])
        replay_tx_block.vtx = [replay_tx_block.vtx[0]] + \
            sorted(replay_tx_block.vtx[1:], key=lambda tx: tx.get_id())
        update_block(4449)
        yield rejected(RejectResult(16, b'bad-txns-BIP30'))


if __name__ == '__main__':
    TransactionOrderingTest().main()

diff --git a/test/functional/assumevalid.py b/test/functional/assumevalid.py
index ce087fcb7..d7def76f2 100755
--- a/test/functional/assumevalid.py
+++ b/test/functional/assumevalid.py
@@ -1,204 +1,204 @@
#!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test logic for skipping signature validation on old blocks.

Test logic for skipping signature validation on blocks which we've assumed
valid (https://github.com/bitcoin/bitcoin/pull/9484)

We build a chain that includes an invalid signature for one of the
transactions:

    0:        genesis block
    1:        block 1 with coinbase transaction output.
    2-101:    bury that block with 100 blocks so the coinbase transaction
              output can be spent
    102:      a block containing a transaction spending the coinbase
              transaction output. The transaction has an invalid signature.
    103-2202: bury the bad block with just over two weeks' worth of blocks
              (2100 blocks)

Start three nodes:

    - node0 has no -assumevalid parameter. Try to sync to block 2202. It
      will reject block 102 and only sync as far as block 101
    - node1 has -assumevalid set to the hash of block 102. Try to sync to
      block 2202. node1 will sync all the way to block 2202.
    - node2 has -assumevalid set to the hash of block 102. Try to sync to
      block 200. node2 will reject block 102: although it's assumed valid,
      it isn't buried by at least two weeks' work, so validation is not
      skipped.
"""
import time

from test_framework.blocktools import (create_block, create_coinbase)
from test_framework.key import CECKey
from test_framework.mininode import (CBlockHeader,
                                     COutPoint,
                                     CTransaction,
                                     CTxIn,
                                     CTxOut,
-                                     NetworkThread,
                                     P2PInterface,
                                     msg_block,
-                                     msg_headers)
+                                     msg_headers,
+                                     network_thread_start)
from test_framework.script import (CScript, OP_TRUE)
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal


class BaseNode(P2PInterface):
    def send_header_for_blocks(self, new_blocks):
        headers_message = msg_headers()
        headers_message.headers = [CBlockHeader(b) for b in new_blocks]
        self.send_message(headers_message)


class AssumeValidTest(BitcoinTestFramework):
    def set_test_params(self):
        self.setup_clean_chain = True
        self.num_nodes = 3

    def setup_network(self):
        self.add_nodes(3)
        # Start node0. We don't start the other nodes yet since
        # we need to pre-mine a block with an invalid transaction
        # signature so we can pass in the block hash as assumevalid.
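        # Roughly, -assumevalid=<hash> permits skipping script validation
        # for blocks that are ancestors of <hash>, as long as <hash> itself
        # is buried under enough work; that is why the hash of block 102 is
        # computed before the other nodes are started.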
        self.start_node(0)

    def send_blocks_until_disconnected(self, p2p_conn):
        """Keep sending blocks to the node until we're disconnected."""
        for i in range(len(self.blocks)):
            if p2p_conn.state != "connected":
                break
            try:
                p2p_conn.send_message(msg_block(self.blocks[i]))
            except IOError as e:
                assert str(e) == 'Not connected, no pushbuf'
                break

    def assert_blockchain_height(self, node, height):
        """Wait until the blockchain is no longer advancing and verify it's
        reached the expected height."""
        last_height = node.getblock(node.getbestblockhash())['height']
        timeout = 10
        while True:
            time.sleep(0.25)
            current_height = node.getblock(node.getbestblockhash())['height']
            if current_height != last_height:
                last_height = current_height
                if timeout < 0:
                    assert False, "blockchain too short after timeout: %d" % current_height
                timeout -= 0.25
                continue
            elif current_height > height:
                assert False, "blockchain too long: %d" % current_height
            elif current_height == height:
                break

    def run_test(self):
        # Connect to node0
        p2p0 = self.nodes[0].add_p2p_connection(BaseNode())

-        NetworkThread().start()  # Start up network handling in another thread
+        network_thread_start()
        self.nodes[0].p2p.wait_for_verack()

        # Build the blockchain
        self.tip = int(self.nodes[0].getbestblockhash(), 16)
        self.block_time = self.nodes[0].getblock(
            self.nodes[0].getbestblockhash())['time'] + 1

        self.blocks = []

        # Get a pubkey for the coinbase TXO
        coinbase_key = CECKey()
        coinbase_key.set_secretbytes(b"horsebattery")
        coinbase_pubkey = coinbase_key.get_pubkey()

        # Create the first block with a coinbase output to our key
        height = 1
        block = create_block(self.tip, create_coinbase(
            height, coinbase_pubkey), self.block_time)
        self.blocks.append(block)
        self.block_time += 1
        block.solve()
        # Save the coinbase for later
        self.block1 = block
        self.tip = block.sha256
        height += 1

        # Bury the block 100 deep so the coinbase output is spendable
        for i in range(100):
            block = create_block(
                self.tip, create_coinbase(height), self.block_time)
            block.solve()
            self.blocks.append(block)
            self.tip = block.sha256
            self.block_time += 1
            height += 1

        # Create a transaction spending the coinbase output with an invalid
        # (null) signature
        tx = CTransaction()
        tx.vin.append(
            CTxIn(COutPoint(self.block1.vtx[0].sha256, 0), scriptSig=b""))
        tx.vout.append(CTxOut(49 * 100000000, CScript([OP_TRUE])))
        tx.calc_sha256()

        block102 = create_block(
            self.tip, create_coinbase(height), self.block_time)
        self.block_time += 1
        block102.vtx.extend([tx])
        block102.hashMerkleRoot = block102.calc_merkle_root()
        block102.rehash()
        block102.solve()
        self.blocks.append(block102)
        self.tip = block102.sha256
        self.block_time += 1
        height += 1

        # Bury the assumed valid block 2100 deep
        for i in range(2100):
            block = create_block(
                self.tip, create_coinbase(height), self.block_time)
            block.nVersion = 4
            block.solve()
            self.blocks.append(block)
            self.tip = block.sha256
            self.block_time += 1
            height += 1

        # Start node1 and node2 with assumevalid so they accept a block
        # with a bad signature.
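        # Note that hex(block102.sha256) yields a "0x"-prefixed string; we
        # assume the -assumevalid parser tolerates that prefix, since the
        # same form is passed to both node1 and node2 below.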
self.start_node(1, extra_args=["-assumevalid=" + hex(block102.sha256)]) p2p1 = self.nodes[1].add_p2p_connection(BaseNode()) p2p1.wait_for_verack() self.start_node(2, extra_args=["-assumevalid=" + hex(block102.sha256)]) p2p2 = self.nodes[2].add_p2p_connection(BaseNode()) p2p2.wait_for_verack() # send header lists to all three nodes p2p0.send_header_for_blocks(self.blocks[0:2000]) p2p0.send_header_for_blocks(self.blocks[2000:]) p2p1.send_header_for_blocks(self.blocks[0:2000]) p2p1.send_header_for_blocks(self.blocks[2000:]) p2p2.send_header_for_blocks(self.blocks[0:200]) # Send blocks to node0. Block 102 will be rejected. self.send_blocks_until_disconnected(p2p0) self.assert_blockchain_height(self.nodes[0], 101) # Send all blocks to node1. All blocks will be accepted. for i in range(2202): p2p1.send_message(msg_block(self.blocks[i])) # Syncing 2200 blocks can take a while on slow systems. Give it plenty of time to sync. p2p1.sync_with_ping(120) assert_equal(self.nodes[1].getblock( self.nodes[1].getbestblockhash())['height'], 2202) # Send blocks to node2. Block 102 will be rejected. self.send_blocks_until_disconnected(p2p2) self.assert_blockchain_height(self.nodes[2], 101) if __name__ == '__main__': AssumeValidTest().main() diff --git a/test/functional/bip65-cltv-p2p.py b/test/functional/bip65-cltv-p2p.py index 7d438ddbc..343b4bc6a 100755 --- a/test/functional/bip65-cltv-p2p.py +++ b/test/functional/bip65-cltv-p2p.py @@ -1,247 +1,246 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test BIP65 (CHECKLOCKTIMEVERIFY). Test that the CHECKLOCKTIMEVERIFY soft-fork activates at (regtest) block height 1351. """ from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.mininode import * from test_framework.blocktools import create_coinbase, create_block from test_framework.script import CScript, CScriptNum, OP_1NEGATE, OP_CHECKLOCKTIMEVERIFY, OP_DROP, OP_TRUE from test_framework.txtools import pad_tx CLTV_HEIGHT = 1351 # Reject codes that we might receive in this test REJECT_INVALID = 16 REJECT_OBSOLETE = 17 REJECT_NONSTANDARD = 64 def cltv_lock_to_height(node, tx, height=-1): '''Modify the scriptPubKey to add an OP_CHECKLOCKTIMEVERIFY This transforms the script to anyone can spend (OP_TRUE) if the lock time condition is valid. Default height is -1 which leads CLTV to fail TODO: test more ways that transactions using CLTV could be invalid (eg locktime requirements fail, sequence time requirements fail, etc). 
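    For illustration: with height=1000 the resulting script is roughly
    "1000 OP_CHECKLOCKTIMEVERIFY OP_DROP OP_TRUE", spendable only by a
    transaction whose nLockTime is at least 1000 and whose input sequence
    is non-final (a sketch of the construction performed below).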
''' height_op = OP_1NEGATE if(height > 0): tx.vin[0].nSequence = 0 tx.nLockTime = height height_op = CScriptNum(height) tx.vout[0].scriptPubKey = CScript( [height_op, OP_CHECKLOCKTIMEVERIFY, OP_DROP, OP_TRUE]) tx.rehash() signed_result = node.signrawtransaction(ToHex(tx)) new_tx = FromHex(CTransaction(), signed_result['hex']) pad_tx(new_tx) new_tx.rehash() return new_tx def spend_from_coinbase(node, coinbase, to_address, amount): from_txid = node.getblock(coinbase)['tx'][0] inputs = [{"txid": from_txid, "vout": 0}] outputs = {to_address: amount} rawtx = node.createrawtransaction(inputs, outputs) signresult = node.signrawtransaction(rawtx) tx = FromHex(CTransaction(), signresult['hex']) return tx class BIP65Test(BitcoinTestFramework): def set_test_params(self): self.num_nodes = 1 self.extra_args = [ ['-promiscuousmempoolflags=1', '-whitelist=127.0.0.1']] self.setup_clean_chain = True def run_test(self): self.nodes[0].add_p2p_connection(P2PInterface()) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() # wait_for_verack ensures that the P2P connection is fully up. self.nodes[0].p2p.wait_for_verack() self.log.info("Mining %d blocks", CLTV_HEIGHT - 2) self.coinbase_blocks = self.nodes[0].generate(CLTV_HEIGHT - 2) self.nodeaddress = self.nodes[0].getnewaddress() self.log.info( "Test that an invalid-according-to-CLTV transaction can still appear in a block") spendtx = spend_from_coinbase(self.nodes[0], self.coinbase_blocks[0], self.nodeaddress, 50.0) spendtx = cltv_lock_to_height(self.nodes[0], spendtx) # Make sure the tx is valid self.nodes[0].sendrawtransaction(ToHex(spendtx)) tip = self.nodes[0].getbestblockhash() block_time = self.nodes[0].getblockheader(tip)['mediantime'] + 1 block = create_block(int(tip, 16), create_coinbase( CLTV_HEIGHT - 1), block_time) block.nVersion = 3 block.vtx.append(spendtx) block.hashMerkleRoot = block.calc_merkle_root() block.solve() self.nodes[0].p2p.send_and_ping(msg_block(block)) assert_equal(self.nodes[0].getbestblockhash(), block.hash) self.log.info("Test that blocks must now be at least version 4") tip = block.sha256 block_time += 1 block = create_block(tip, create_coinbase(CLTV_HEIGHT), block_time) block.nVersion = 3 block.solve() self.nodes[0].p2p.send_and_ping(msg_block(block)) assert_equal(int(self.nodes[0].getbestblockhash(), 16), tip) wait_until(lambda: "reject" in self.nodes[0].p2p.last_message.keys(), lock=mininode_lock) with mininode_lock: assert_equal( self.nodes[0].p2p.last_message["reject"].code, REJECT_OBSOLETE) assert_equal( self.nodes[0].p2p.last_message["reject"].reason, b'bad-version(0x00000003)') assert_equal( self.nodes[0].p2p.last_message["reject"].data, block.sha256) del self.nodes[0].p2p.last_message["reject"] self.log.info( "Test that invalid-according-to-cltv transactions cannot appear in a block") block.nVersion = 4 spendtx = spend_from_coinbase(self.nodes[0], self.coinbase_blocks[1], self.nodeaddress, 49.99) spendtx = cltv_lock_to_height(self.nodes[0], spendtx) # First we show that this tx is valid except for CLTV by getting it # accepted to the mempool (which we can achieve with # -promiscuousmempoolflags). 
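        # (-promiscuousmempoolflags relaxes the script flags used for
        # mempool acceptance, which is what lets a CLTV-violating
        # transaction get this far; block validation below still enforces
        # the full rules.)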
self.nodes[0].p2p.send_and_ping(msg_tx(spendtx)) assert spendtx.hash in self.nodes[0].getrawmempool() # Mine a block containing the funding transaction block.vtx.append(spendtx) block.hashMerkleRoot = block.calc_merkle_root() block.solve() self.nodes[0].p2p.send_and_ping(msg_block(block)) # This block is valid assert_equal(self.nodes[0].getbestblockhash(), block.hash) # But a block containing a transaction spending this utxo is not rawspendtx = self.nodes[0].decoderawtransaction(ToHex(spendtx)) inputs = [{ "txid": rawspendtx['txid'], "vout": rawspendtx['vout'][0]['n'] }] output = {self.nodeaddress: 49.98} rejectedtx_raw = self.nodes[0].createrawtransaction(inputs, output) rejectedtx_signed = self.nodes[0].signrawtransaction(rejectedtx_raw) # Couldn't complete signature due to CLTV assert(rejectedtx_signed['errors'][0]['error'] == 'Negative locktime') rejectedtx = FromHex(CTransaction(), rejectedtx_signed['hex']) pad_tx(rejectedtx) rejectedtx.rehash() tip = block.hash block_time += 1 block = create_block( block.sha256, create_coinbase(CLTV_HEIGHT+1), block_time) block.nVersion = 4 block.vtx.append(rejectedtx) block.hashMerkleRoot = block.calc_merkle_root() block.solve() self.nodes[0].p2p.send_and_ping(msg_block(block)) # This block is invalid assert_equal(self.nodes[0].getbestblockhash(), tip) wait_until(lambda: "reject" in self.nodes[0].p2p.last_message.keys(), lock=mininode_lock) with mininode_lock: assert self.nodes[0].p2p.last_message["reject"].code in [ REJECT_INVALID, REJECT_NONSTANDARD] assert_equal( self.nodes[0].p2p.last_message["reject"].data, block.sha256) if self.nodes[0].p2p.last_message["reject"].code == REJECT_INVALID: # Generic rejection when a block is invalid assert_equal( self.nodes[0].p2p.last_message["reject"].reason, b'blk-bad-inputs') else: assert b'Negative locktime' in self.nodes[0].p2p.last_message["reject"].reason self.log.info( "Test that a version 4 block with a valid-according-to-CLTV transaction is accepted") spendtx = spend_from_coinbase(self.nodes[0], self.coinbase_blocks[2], self.nodeaddress, 49.99) spendtx = cltv_lock_to_height(self.nodes[0], spendtx, CLTV_HEIGHT - 1) # Modify the transaction in the block to be valid against CLTV block.vtx.pop(1) block.vtx.append(spendtx) block.hashMerkleRoot = block.calc_merkle_root() block.solve() self.nodes[0].p2p.send_and_ping(msg_block(block)) # This block is now valid assert_equal(self.nodes[0].getbestblockhash(), block.hash) # A block containing a transaction spending this utxo is also valid # Build this transaction rawspendtx = self.nodes[0].decoderawtransaction(ToHex(spendtx)) inputs = [{ "txid": rawspendtx['txid'], "vout": rawspendtx['vout'][0]['n'], "sequence": 0 }] output = {self.nodeaddress: 49.98} validtx_raw = self.nodes[0].createrawtransaction( inputs, output, CLTV_HEIGHT) validtx = FromHex(CTransaction(), validtx_raw) # Signrawtransaction won't sign a non standard tx. 
# But the prevout being anyone can spend, scriptsig can be left empty validtx.vin[0].scriptSig = CScript() pad_tx(validtx) validtx.rehash() tip = block.sha256 block_time += 1 block = create_block(tip, create_coinbase(CLTV_HEIGHT+3), block_time) block.nVersion = 4 block.vtx.append(validtx) block.hashMerkleRoot = block.calc_merkle_root() block.solve() self.nodes[0].p2p.send_and_ping(msg_block(block)) # This block is valid assert_equal(self.nodes[0].getbestblockhash(), block.hash) if __name__ == '__main__': BIP65Test().main() diff --git a/test/functional/bip68-112-113-p2p.py b/test/functional/bip68-112-113-p2p.py index f48e8a007..16487d22a 100755 --- a/test/functional/bip68-112-113-p2p.py +++ b/test/functional/bip68-112-113-p2p.py @@ -1,693 +1,692 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test activation of the first version bits soft fork. This soft fork will activate the following BIPS: BIP 68 - nSequence relative lock times BIP 112 - CHECKSEQUENCEVERIFY BIP 113 - MedianTimePast semantics for nLockTime regtest lock-in with 108/144 block signalling activation after a further 144 blocks mine 82 blocks whose coinbases will be used to generate inputs for our tests mine 489 blocks and seed block chain with the 82 inputs will use for our tests at height 572 mine 3 blocks and verify still at LOCKED_IN and test that enforcement has not triggered mine 1 block and test that enforcement has triggered (which triggers ACTIVE) Test BIP 113 is enforced Mine 4 blocks so next height is 580 and test BIP 68 is enforced for time and height Mine 1 block so next height is 581 and test BIP 68 now passes time but not height Mine 1 block so next height is 582 and test BIP 68 now passes time and height Test that BIP 112 is enforced Various transactions will be used to test that the BIPs rules are not enforced before the soft fork activates And that after the soft fork activates transactions pass and fail as they should according to the rules. For each BIP, transactions of versions 1 and 2 will be tested. 
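(Version matters because BIP 68 only assigns relative lock-time meaning to
nSequence in transactions with nVersion >= 2; the version 1 variants are
expected to be unaffected.)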
---------------- BIP 113: bip113tx - modify the nLocktime variable BIP 68: bip68txs - 16 txs with nSequence relative locktime of 10 with various bits set as per the relative_locktimes below BIP 112: bip112txs_vary_nSequence - 16 txs with nSequence relative_locktimes of 10 evaluated against 10 OP_CSV OP_DROP bip112txs_vary_nSequence_9 - 16 txs with nSequence relative_locktimes of 9 evaluated against 10 OP_CSV OP_DROP bip112txs_vary_OP_CSV - 16 txs with nSequence = 10 evaluated against varying {relative_locktimes of 10} OP_CSV OP_DROP bip112txs_vary_OP_CSV_9 - 16 txs with nSequence = 9 evaluated against varying {relative_locktimes of 10} OP_CSV OP_DROP bip112tx_special - test negative argument to OP_CSV """ from test_framework.test_framework import ComparisonTestFramework from test_framework.util import * -from test_framework.mininode import ToHex, FromHex, CTransaction, NetworkThread, COIN +from test_framework.mininode import ToHex, FromHex, CTransaction, network_thread_start, COIN from test_framework.blocktools import create_coinbase, create_block, make_conform_to_ctor from test_framework.comptool import TestInstance, TestManager from test_framework.script import * import time base_relative_locktime = 10 seq_disable_flag = 1 << 31 seq_random_high_bit = 1 << 25 seq_type_flag = 1 << 22 seq_random_low_bit = 1 << 18 # b31,b25,b22,b18 represent the 31st, 25th, 22nd and 18th bits respectively in the nSequence field # relative_locktimes[b31][b25][b22][b18] is a base_relative_locktime with the indicated bits set if their indices are 1 relative_locktimes = [] for b31 in range(2): b25times = [] for b25 in range(2): b22times = [] for b22 in range(2): b18times = [] for b18 in range(2): rlt = base_relative_locktime if (b31): rlt = rlt | seq_disable_flag if (b25): rlt = rlt | seq_random_high_bit if (b22): rlt = rlt | seq_type_flag if (b18): rlt = rlt | seq_random_low_bit b18times.append(rlt) b22times.append(b18times) b25times.append(b22times) relative_locktimes.append(b25times) def all_rlt_txs(txarray): txs = [] for b31 in range(2): for b25 in range(2): for b22 in range(2): for b18 in range(2): txs.append(txarray[b31][b25][b22][b18]) return txs def get_csv_status(node): softforks = node.getblockchaininfo()['softforks'] for sf in softforks: if sf['id'] == 'csv' and sf['version'] == 5: return sf['reject']['status'] raise AssertionError('Cannot find CSV fork activation informations') class BIP68_112_113Test(ComparisonTestFramework): def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True self.extra_args = [['-whitelist=127.0.0.1', '-blockversion=4']] def run_test(self): test = TestManager(self, self.options.tmpdir) test.add_all_connections(self.nodes) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() test.run() def send_generic_input_tx(self, node, coinbases): amount = Decimal("49.99") return node.sendrawtransaction(ToHex(self.sign_transaction(node, self.create_transaction(node, node.getblock(coinbases.pop())['tx'][0], self.nodeaddress, amount)))) def create_transaction(self, node, txid, to_address, amount): inputs = [{"txid": txid, "vout": 0}] outputs = {to_address: amount} rawtx = node.createrawtransaction(inputs, outputs) tx = FromHex(CTransaction(), rawtx) return tx def sign_transaction(self, node, unsignedtx): rawtx = ToHex(unsignedtx) signresult = node.signrawtransaction(rawtx) tx = FromHex(CTransaction(), signresult['hex']) return tx def spend_tx(self, node, prev_tx): spendtx = self.create_transaction( node, prev_tx.hash, 
self.nodeaddress, (prev_tx.vout[0].nValue - 1000) / COIN) spendtx.nVersion = prev_tx.nVersion spendtx.rehash() return spendtx def generate_blocks(self, number): test_blocks = [] for i in range(number): block = self.create_test_block([]) test_blocks.append([block, True]) self.last_block_time += 600 self.tip = block.sha256 self.tipheight += 1 return test_blocks def create_test_block(self, txs, version=536870912): block = create_block(self.tip, create_coinbase( self.tipheight + 1), self.last_block_time + 600) block.nVersion = version block.vtx.extend(txs) make_conform_to_ctor(block) block.hashMerkleRoot = block.calc_merkle_root() block.rehash() block.solve() return block # Create a block with given txs, and spend these txs in the same block. # Spending utxos in the same block is OK as long as nSequence is not enforced. # Otherwise a number of intermediate blocks should be generated, and this # method should not be used. def create_test_block_spend_utxos(self, node, txs, version=536870912): block = self.create_test_block(txs, version) block.vtx.extend([self.spend_tx(node, tx) for tx in txs]) make_conform_to_ctor(block) block.hashMerkleRoot = block.calc_merkle_root() block.rehash() block.solve() return block def create_bip68txs(self, bip68inputs, txversion, locktime_delta=0): txs = [] assert(len(bip68inputs) >= 16) i = 0 for b31 in range(2): b25txs = [] for b25 in range(2): b22txs = [] for b22 in range(2): b18txs = [] for b18 in range(2): tx = self.create_transaction( self.nodes[0], bip68inputs[i], self.nodeaddress, Decimal("49.98")) i += 1 tx.nVersion = txversion tx.vin[0].nSequence = relative_locktimes[b31][b25][b22][b18] + \ locktime_delta b18txs.append(self.sign_transaction(self.nodes[0], tx)) b22txs.append(b18txs) b25txs.append(b22txs) txs.append(b25txs) return txs def create_bip112special(self, input, txversion): tx = self.create_transaction( self.nodes[0], input, self.nodeaddress, Decimal("49.98")) tx.nVersion = txversion tx.vout[0].scriptPubKey = CScript( [-1, OP_CHECKSEQUENCEVERIFY, OP_DROP, OP_TRUE]) tx.rehash() signtx = self.sign_transaction(self.nodes[0], tx) signtx.rehash() return signtx def create_bip112txs(self, bip112inputs, varyOP_CSV, txversion, locktime_delta=0): txs = [] assert(len(bip112inputs) >= 16) i = 0 for b31 in range(2): b25txs = [] for b25 in range(2): b22txs = [] for b22 in range(2): b18txs = [] for b18 in range(2): tx = self.create_transaction( self.nodes[0], bip112inputs[i], self.nodeaddress, Decimal("49.98")) i += 1 # if varying OP_CSV, nSequence is fixed if (varyOP_CSV): tx.vin[0].nSequence = base_relative_locktime + \ locktime_delta # vary nSequence instead, OP_CSV is fixed else: tx.vin[0].nSequence = relative_locktimes[b31][b25][b22][b18] + \ locktime_delta tx.nVersion = txversion if (varyOP_CSV): tx.vout[0].scriptPubKey = CScript( [relative_locktimes[b31][b25][b22][b18], OP_CHECKSEQUENCEVERIFY, OP_DROP, OP_TRUE]) else: tx.vout[0].scriptPubKey = CScript( [base_relative_locktime, OP_CHECKSEQUENCEVERIFY, OP_DROP, OP_TRUE]) tx.rehash() signtx = self.sign_transaction(self.nodes[0], tx) signtx.rehash() b18txs.append(signtx) b22txs.append(b18txs) b25txs.append(b22txs) txs.append(b25txs) return txs def get_tests(self): # Enough to build up to 1000 blocks 10 minutes apart without worrying # about getting into the future long_past_time = int(time.time()) - 600 * 1000 # Enough so that the generated blocks will still all be before long_past_time self.nodes[0].setmocktime(long_past_time - 100) # 82 blocks generated for inputs self.coinbase_blocks = 
self.nodes[0].generate(1 + 16 + 2 * 32 + 1) # Set time back to present so yielded blocks aren't in the future as # we advance last_block_time self.nodes[0].setmocktime(0) # height of the next block to build self.tipheight = 82 self.last_block_time = long_past_time self.tip = int("0x" + self.nodes[0].getbestblockhash(), 0) self.nodeaddress = self.nodes[0].getnewaddress() # CSV is not activated yet. assert_equal(get_csv_status(self.nodes[0]), False) # 489 more version 4 blocks test_blocks = self.generate_blocks(489) # Test #1 yield TestInstance(test_blocks, sync_every_block=False) # Still not activated. assert_equal(get_csv_status(self.nodes[0]), False) # Inputs at height = 572 # Put inputs for all tests in the chain at height 572 (tip now = 571) (time increases by 600s per block) # Note we reuse inputs for v1 and v2 txs so must test these separately # 16 normal inputs bip68inputs = [] for i in range(16): bip68inputs.append(self.send_generic_input_tx( self.nodes[0], self.coinbase_blocks)) # 2 sets of 16 inputs with 10 OP_CSV OP_DROP (actually will be prepended to spending scriptSig) bip112basicinputs = [] for j in range(2): inputs = [] for i in range(16): inputs.append(self.send_generic_input_tx( self.nodes[0], self.coinbase_blocks)) bip112basicinputs.append(inputs) # 2 sets of 16 varied inputs with (relative_lock_time) OP_CSV OP_DROP (actually will be prepended to spending scriptSig) bip112diverseinputs = [] for j in range(2): inputs = [] for i in range(16): inputs.append(self.send_generic_input_tx( self.nodes[0], self.coinbase_blocks)) bip112diverseinputs.append(inputs) # 1 special input with -1 OP_CSV OP_DROP (actually will be prepended to spending scriptSig) bip112specialinput = self.send_generic_input_tx( self.nodes[0], self.coinbase_blocks) # 1 normal input bip113input = self.send_generic_input_tx( self.nodes[0], self.coinbase_blocks) self.nodes[0].setmocktime(self.last_block_time + 600) # 1 block generated for inputs to be in chain at height 572 inputblockhash = self.nodes[0].generate(1)[0] self.nodes[0].setmocktime(0) self.tip = int("0x" + inputblockhash, 0) self.tipheight += 1 self.last_block_time += 600 assert_equal(len(self.nodes[0].getblock( inputblockhash, True)["tx"]), 82 + 1) # 2 more version 4 blocks test_blocks = self.generate_blocks(2) # Test #2 yield TestInstance(test_blocks, sync_every_block=False) # Not yet activated, height = 574 (will activate for block 576, not 575) assert_equal(get_csv_status(self.nodes[0]), False) # Test both version 1 and version 2 transactions for all tests # BIP113 test transaction will be modified before each use to # put in appropriate block time bip113tx_v1 = self.create_transaction( self.nodes[0], bip113input, self.nodeaddress, Decimal("49.98")) bip113tx_v1.vin[0].nSequence = 0xFFFFFFFE bip113tx_v1.nVersion = 1 bip113tx_v2 = self.create_transaction( self.nodes[0], bip113input, self.nodeaddress, Decimal("49.98")) bip113tx_v2.vin[0].nSequence = 0xFFFFFFFE bip113tx_v2.nVersion = 2 # For BIP68 test all 16 relative sequence locktimes bip68txs_v1 = self.create_bip68txs(bip68inputs, 1) bip68txs_v2 = self.create_bip68txs(bip68inputs, 2) # For BIP112 test: # 16 relative sequence locktimes of 10 against 10 OP_CSV OP_DROP inputs bip112txs_vary_nSequence_v1 = self.create_bip112txs( bip112basicinputs[0], False, 1) bip112txs_vary_nSequence_v2 = self.create_bip112txs( bip112basicinputs[0], False, 2) # 16 relative sequence locktimes of 9 against 10 OP_CSV OP_DROP inputs bip112txs_vary_nSequence_9_v1 = self.create_bip112txs( bip112basicinputs[1], False, 1, 
-1) bip112txs_vary_nSequence_9_v2 = self.create_bip112txs( bip112basicinputs[1], False, 2, -1) # sequence lock time of 10 against 16 (relative_lock_time) OP_CSV OP_DROP inputs bip112txs_vary_OP_CSV_v1 = self.create_bip112txs( bip112diverseinputs[0], True, 1) bip112txs_vary_OP_CSV_v2 = self.create_bip112txs( bip112diverseinputs[0], True, 2) # sequence lock time of 9 against 16 (relative_lock_time) OP_CSV OP_DROP inputs bip112txs_vary_OP_CSV_9_v1 = self.create_bip112txs( bip112diverseinputs[1], True, 1, -1) bip112txs_vary_OP_CSV_9_v2 = self.create_bip112txs( bip112diverseinputs[1], True, 2, -1) # -1 OP_CSV OP_DROP input bip112tx_special_v1 = self.create_bip112special(bip112specialinput, 1) bip112tx_special_v2 = self.create_bip112special(bip112specialinput, 2) ### TESTING ### ################################## ### Before Soft Forks Activate ### ################################## # All txs should pass ### Version 1 txs ### success_txs = [] # add BIP113 tx and -1 CSV tx # = MTP of prior block (not <) but < time put on current block bip113tx_v1.nLockTime = self.last_block_time - 600 * 5 bip113signed1 = self.sign_transaction(self.nodes[0], bip113tx_v1) success_txs.append(bip113signed1) success_txs.append(bip112tx_special_v1) success_txs.append(self.spend_tx(self.nodes[0], bip112tx_special_v1)) # add BIP 68 txs success_txs.extend(all_rlt_txs(bip68txs_v1)) # add BIP 112 with seq=10 txs success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_v1)) success_txs.extend([self.spend_tx(self.nodes[0], tx) for tx in all_rlt_txs(bip112txs_vary_nSequence_v1)]) success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_v1)) success_txs.extend([self.spend_tx(self.nodes[0], tx) for tx in all_rlt_txs(bip112txs_vary_OP_CSV_v1)]) # try BIP 112 with seq=9 txs success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v1)) success_txs.extend([self.spend_tx(self.nodes[0], tx) for tx in all_rlt_txs(bip112txs_vary_nSequence_9_v1)]) success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_9_v1)) success_txs.extend([self.spend_tx(self.nodes[0], tx) for tx in all_rlt_txs(bip112txs_vary_OP_CSV_9_v1)]) # Test #3 yield TestInstance([[self.create_test_block(success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) ### Version 2 txs ### success_txs = [] # add BIP113 tx and -1 CSV tx # = MTP of prior block (not <) but < time put on current block bip113tx_v2.nLockTime = self.last_block_time - 600 * 5 bip113signed2 = self.sign_transaction(self.nodes[0], bip113tx_v2) success_txs.append(bip113signed2) success_txs.append(bip112tx_special_v2) success_txs.append(self.spend_tx(self.nodes[0], bip112tx_special_v2)) # add BIP 68 txs success_txs.extend(all_rlt_txs(bip68txs_v2)) # add BIP 112 with seq=10 txs success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_v2)) success_txs.extend([self.spend_tx(self.nodes[0], tx) for tx in all_rlt_txs(bip112txs_vary_nSequence_v2)]) success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_v2)) success_txs.extend([self.spend_tx(self.nodes[0], tx) for tx in all_rlt_txs(bip112txs_vary_OP_CSV_v2)]) # try BIP 112 with seq=9 txs success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v2)) success_txs.extend([self.spend_tx(self.nodes[0], tx) for tx in all_rlt_txs(bip112txs_vary_nSequence_9_v2)]) success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_9_v2)) success_txs.extend([self.spend_tx(self.nodes[0], tx) for tx in all_rlt_txs(bip112txs_vary_OP_CSV_9_v2)]) # Test #4 yield TestInstance([[self.create_test_block(success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) 
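        # The yield-then-invalidate pattern used above repeats throughout
        # this test: mine a block containing the transactions under test,
        # then invalidate it so the same inputs can be reused by the next
        # TestInstance without re-seeding the chain.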
# 1 more version 4 block to get us to height 575 so the fork should # now be active for the next block test_blocks = self.generate_blocks(1) # Test #5 yield TestInstance(test_blocks, sync_every_block=False) assert_equal(get_csv_status(self.nodes[0]), False) self.nodes[0].generate(1) assert_equal(get_csv_status(self.nodes[0]), True) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) ################################# ### After Soft Forks Activate ### ################################# ### BIP 113 ### # BIP 113 tests should now fail regardless of version number # if nLockTime isn't satisfied by new rules # = MTP of prior block (not <) but < time put on current block bip113tx_v1.nLockTime = self.last_block_time - 600 * 5 bip113signed1 = self.sign_transaction(self.nodes[0], bip113tx_v1) # = MTP of prior block (not <) but < time put on current block bip113tx_v2.nLockTime = self.last_block_time - 600 * 5 bip113signed2 = self.sign_transaction(self.nodes[0], bip113tx_v2) for bip113tx in [bip113signed1, bip113signed2]: # Test #6, Test #7 yield TestInstance([[self.create_test_block([bip113tx]), False]]) # BIP 113 tests should now pass if the locktime is < MTP # < MTP of prior block bip113tx_v1.nLockTime = self.last_block_time - 600 * 5 - 1 bip113signed1 = self.sign_transaction(self.nodes[0], bip113tx_v1) # < MTP of prior block bip113tx_v2.nLockTime = self.last_block_time - 600 * 5 - 1 bip113signed2 = self.sign_transaction(self.nodes[0], bip113tx_v2) for bip113tx in [bip113signed1, bip113signed2]: # Test #8, Test #9 yield TestInstance([[self.create_test_block([bip113tx]), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Next block height = 580 after 4 blocks of random version test_blocks = self.generate_blocks(4) # Test #10 yield TestInstance(test_blocks, sync_every_block=False) ### BIP 68 ### ### Version 1 txs ### # All still pass success_txs = [] success_txs.extend(all_rlt_txs(bip68txs_v1)) # Test #11 yield TestInstance([[self.create_test_block(success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) ### Version 2 txs ### bip68success_txs = [] # All txs with SEQUENCE_LOCKTIME_DISABLE_FLAG set pass for b25 in range(2): for b22 in range(2): for b18 in range(2): bip68success_txs.append(bip68txs_v2[1][b25][b22][b18]) # Test #12 yield TestInstance([[self.create_test_block(bip68success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # All txs without flag fail as we are at delta height = 8 < 10 and # delta time = 8 * 600 < 10 * 512 bip68timetxs = [] for b25 in range(2): for b18 in range(2): bip68timetxs.append(bip68txs_v2[0][b25][1][b18]) for tx in bip68timetxs: # Test #13 - Test #16 yield TestInstance([[self.create_test_block([tx]), False]]) bip68heighttxs = [] for b25 in range(2): for b18 in range(2): bip68heighttxs.append(bip68txs_v2[0][b25][0][b18]) for tx in bip68heighttxs: # Test #17 - Test #20 yield TestInstance([[self.create_test_block([tx]), False]]) # Advance one block to 581 test_blocks = self.generate_blocks(1) # Test #21 yield TestInstance(test_blocks, sync_every_block=False) # Height txs should fail and time txs should now pass 9 * 600 > 10 * 512 bip68success_txs.extend(bip68timetxs) # Test #22 yield TestInstance([[self.create_test_block(bip68success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) for tx in bip68heighttxs: # Test #23 - Test #26 yield TestInstance([[self.create_test_block([tx]), False]]) # Advance one block to 582 test_blocks = self.generate_blocks(1) 
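        # At height 582 the height-based locks are finally satisfiable: the
        # spend delta reaches 10 blocks (>= the relative lock of 10), and on
        # the time side 10 * 600 = 6000s comfortably exceeds the required
        # 10 * 512 = 5120s.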
# Test #27 yield TestInstance(test_blocks, sync_every_block=False) # All BIP 68 txs should pass bip68success_txs.extend(bip68heighttxs) # Test #28 yield TestInstance([[self.create_test_block(bip68success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) ### BIP 112 ### ### Version 1 txs ### # -1 OP_CSV tx should fail # Test #29 yield TestInstance([[self.create_test_block_spend_utxos(self.nodes[0], [bip112tx_special_v1]), False]]) # If SEQUENCE_LOCKTIME_DISABLE_FLAG is set in argument to OP_CSV, # version 1 txs should still pass success_txs = [] for b25 in range(2): for b22 in range(2): for b18 in range(2): success_txs.append( bip112txs_vary_OP_CSV_v1[1][b25][b22][b18]) success_txs.append( bip112txs_vary_OP_CSV_9_v1[1][b25][b22][b18]) # Test #30 yield TestInstance([[self.create_test_block_spend_utxos(self.nodes[0], success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # If SEQUENCE_LOCKTIME_DISABLE_FLAG is unset in argument to OP_CSV, # version 1 txs should now fail fail_txs = [] fail_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_v1)) fail_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v1)) for b25 in range(2): for b22 in range(2): for b18 in range(2): fail_txs.append(bip112txs_vary_OP_CSV_v1[0][b25][b22][b18]) fail_txs.append( bip112txs_vary_OP_CSV_9_v1[0][b25][b22][b18]) for tx in fail_txs: # Test #31 - Test #78 yield TestInstance([[self.create_test_block_spend_utxos(self.nodes[0], [tx]), False]]) ### Version 2 txs ### # -1 OP_CSV tx should fail # Test #79 yield TestInstance([[self.create_test_block_spend_utxos(self.nodes[0], [bip112tx_special_v2]), False]]) # If SEQUENCE_LOCKTIME_DISABLE_FLAG is set in argument to OP_CSV, # version 2 txs should pass success_txs = [] for b25 in range(2): for b22 in range(2): for b18 in range(2): # 8/16 of vary_OP_CSV success_txs.append( bip112txs_vary_OP_CSV_v2[1][b25][b22][b18]) # 8/16 of vary_OP_CSV_9 success_txs.append( bip112txs_vary_OP_CSV_9_v2[1][b25][b22][b18]) # Test #80 yield TestInstance([[self.create_test_block_spend_utxos(self.nodes[0], success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) ## SEQUENCE_LOCKTIME_DISABLE_FLAG is unset in argument to OP_CSV for all remaining txs ## # All txs with nSequence 9 should fail either due to earlier mismatch # or failing the CSV check fail_txs = [] # 16/16 of vary_nSequence_9 fail_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v2)) for b25 in range(2): for b22 in range(2): for b18 in range(2): # 16/16 of vary_OP_CSV_9 fail_txs.append( bip112txs_vary_OP_CSV_9_v2[0][b25][b22][b18]) for tx in fail_txs: # Test #81 - Test #104 yield TestInstance([[self.create_test_block_spend_utxos(self.nodes[0], [tx]), False]]) # If SEQUENCE_LOCKTIME_DISABLE_FLAG is set in nSequence, tx should fail fail_txs = [] for b25 in range(2): for b22 in range(2): for b18 in range(2): # 8/16 of vary_nSequence fail_txs.append( bip112txs_vary_nSequence_v2[1][b25][b22][b18]) for tx in fail_txs: # Test #105 - Test #112 yield TestInstance([[self.create_test_block_spend_utxos(self.nodes[0], [tx]), False]]) # If sequencelock types mismatch, tx should fail fail_txs = [] for b25 in range(2): for b18 in range(2): # 12/16 of vary_nSequence fail_txs.append(bip112txs_vary_nSequence_v2[0][b25][1][b18]) # 12/16 of vary_OP_CSV fail_txs.append(bip112txs_vary_OP_CSV_v2[0][b25][1][b18]) for tx in fail_txs: # Test #113 - Test #120 yield TestInstance([[self.create_test_block_spend_utxos(self.nodes[0], [tx]), False]]) # Remaining txs should pass, just test 
masking works properly success_txs = [] for b25 in range(2): for b18 in range(2): # 16/16 of vary_nSequence success_txs.append(bip112txs_vary_nSequence_v2[0][b25][0][b18]) # 16/16 of vary_OP_CSV success_txs.append(bip112txs_vary_OP_CSV_v2[0][b25][0][b18]) # Test #121 yield TestInstance([[self.create_test_block(success_txs), True]]) # Spending the previous block utxos requires a difference of 10 blocks (nSequence = 10). # Generate 9 blocks then spend in the 10th block = self.nodes[0].getbestblockhash() self.last_block_time += 600 self.tip = int("0x" + block, 0) self.tipheight += 1 # Test #122 yield TestInstance(self.generate_blocks(9), sync_every_block=False) spend_txs = [] for tx in success_txs: raw_tx = self.spend_tx(self.nodes[0], tx) raw_tx.vin[0].nSequence = base_relative_locktime raw_tx.rehash() spend_txs.append(raw_tx) # Test #123 yield TestInstance([[self.create_test_block(spend_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Additional test, of checking that comparison of two time types works properly time_txs = [] for b25 in range(2): for b18 in range(2): tx = bip112txs_vary_OP_CSV_v2[0][b25][1][b18] signtx = self.sign_transaction(self.nodes[0], tx) time_txs.append(signtx) # Test #124 yield TestInstance([[self.create_test_block(time_txs), True]]) # Spending the previous block utxos requires a block time difference of # at least 10 * 512s (nSequence = 10). # Generate 8 blocks then spend in the 9th (9 * 600 > 10 * 512) block = self.nodes[0].getbestblockhash() self.last_block_time += 600 self.tip = int("0x" + block, 0) self.tipheight += 1 # Test #125 yield TestInstance(self.generate_blocks(8), sync_every_block=False) spend_txs = [] for tx in time_txs: raw_tx = self.spend_tx(self.nodes[0], tx) raw_tx.vin[0].nSequence = base_relative_locktime | seq_type_flag raw_tx.rehash() spend_txs.append(raw_tx) # Test #126 yield TestInstance([[self.create_test_block(spend_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Missing aspects of test # Testing empty stack fails if __name__ == '__main__': BIP68_112_113Test().main() diff --git a/test/functional/bipdersig-p2p.py b/test/functional/bipdersig-p2p.py index 332ed2f3c..c0f2873a4 100755 --- a/test/functional/bipdersig-p2p.py +++ b/test/functional/bipdersig-p2p.py @@ -1,146 +1,145 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test BIP66 (DER SIG). Test that the DERSIG soft-fork activates at (regtest) height 1251. """ from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.mininode import * from test_framework.blocktools import create_coinbase, create_block from test_framework.script import CScript from io import BytesIO DERSIG_HEIGHT = 1251 # Reject codes that we might receive in this test REJECT_INVALID = 16 REJECT_OBSOLETE = 17 REJECT_NONSTANDARD = 64 # A canonical signature consists of: # <30> <02> <02> def unDERify(tx): """ Make the signature in vin 0 of a tx non-DER-compliant, by adding padding after the S-value. 
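    For example (illustrative only): the one-byte pad turns a signature
    ending in <S> <hashtype> into <S> <0x00> <hashtype>, so the encoding no
    longer matches its declared DER lengths and script validation rejects
    it as non-canonical.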
""" scriptSig = CScript(tx.vin[0].scriptSig) newscript = [] for i in scriptSig: if (len(newscript) == 0): newscript.append(i[0:-1] + b'\0' + i[-1:]) else: newscript.append(i) tx.vin[0].scriptSig = CScript(newscript) def create_transaction(node, coinbase, to_address, amount): from_txid = node.getblock(coinbase)['tx'][0] inputs = [{"txid": from_txid, "vout": 0}] outputs = {to_address: amount} rawtx = node.createrawtransaction(inputs, outputs) signresult = node.signrawtransaction(rawtx) tx = CTransaction() tx.deserialize(BytesIO(hex_str_to_bytes(signresult['hex']))) return tx class BIP66Test(BitcoinTestFramework): def set_test_params(self): self.num_nodes = 1 self.extra_args = [ ['-promiscuousmempoolflags=1', '-whitelist=127.0.0.1']] self.setup_clean_chain = True def run_test(self): self.nodes[0].add_p2p_connection(P2PInterface()) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() # wait_for_verack ensures that the P2P connection is fully up. self.nodes[0].p2p.wait_for_verack() self.log.info("Mining %d blocks", DERSIG_HEIGHT - 1) self.coinbase_blocks = self.nodes[0].generate(DERSIG_HEIGHT - 1) self.nodeaddress = self.nodes[0].getnewaddress() self.log.info("Test that blocks must now be at least version 3") tip = self.nodes[0].getbestblockhash() block_time = self.nodes[0].getblockheader(tip)['mediantime'] + 1 block = create_block( int(tip, 16), create_coinbase(DERSIG_HEIGHT), block_time) block.nVersion = 2 block.rehash() block.solve() self.nodes[0].p2p.send_and_ping(msg_block(block)) assert_equal(self.nodes[0].getbestblockhash(), tip) wait_until(lambda: "reject" in self.nodes[0].p2p.last_message.keys(), lock=mininode_lock) with mininode_lock: assert_equal( self.nodes[0].p2p.last_message["reject"].code, REJECT_OBSOLETE) assert_equal( self.nodes[0].p2p.last_message["reject"].reason, b'bad-version(0x00000002)') assert_equal( self.nodes[0].p2p.last_message["reject"].data, block.sha256) del self.nodes[0].p2p.last_message["reject"] self.log.info( "Test that transactions with non-DER signatures cannot appear in a block") block.nVersion = 3 spendtx = create_transaction(self.nodes[0], self.coinbase_blocks[1], self.nodeaddress, 1.0) unDERify(spendtx) spendtx.rehash() # Now we verify that a block with this transaction is invalid. block.vtx.append(spendtx) block.hashMerkleRoot = block.calc_merkle_root() block.rehash() block.solve() self.nodes[0].p2p.send_and_ping(msg_block(block)) assert_equal(self.nodes[0].getbestblockhash(), tip) wait_until(lambda: "reject" in self.nodes[0].p2p.last_message.keys(), lock=mininode_lock) with mininode_lock: # We can receive different reject messages depending on whether # bitcoind is running with multiple script check threads. If script # check threads are not in use, then transaction script validation # happens sequentially, and bitcoind produces more specific reject # reasons. 
assert self.nodes[0].p2p.last_message["reject"].code in [ REJECT_INVALID, REJECT_NONSTANDARD] assert_equal( self.nodes[0].p2p.last_message["reject"].data, block.sha256) if self.nodes[0].p2p.last_message["reject"].code == REJECT_INVALID: # Generic rejection when a block is invalid assert_equal( self.nodes[0].p2p.last_message["reject"].reason, b'blk-bad-inputs') else: assert b'Non-canonical DER signature' in self.nodes[0].p2p.last_message["reject"].reason self.log.info( "Test that a version 3 block with a DERSIG-compliant transaction is accepted") block.vtx[1] = create_transaction(self.nodes[0], self.coinbase_blocks[1], self.nodeaddress, 1.0) block.hashMerkleRoot = block.calc_merkle_root() block.rehash() block.solve() self.nodes[0].p2p.send_and_ping(msg_block(block)) assert_equal(int(self.nodes[0].getbestblockhash(), 16), block.sha256) if __name__ == '__main__': BIP66Test().main() diff --git a/test/functional/example_test.py b/test/functional/example_test.py index dcfd8c86b..41dbb0241 100755 --- a/test/functional/example_test.py +++ b/test/functional/example_test.py @@ -1,224 +1,224 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """An example functional test The module-level docstring should include a high-level description of what the test is doing. It's the first thing people see when they open the file and should give the reader information about *what* the test is testing and *how* it's being tested """ # Imports should be in PEP8 ordering (std library first, then third party # libraries then local imports). from collections import defaultdict # Avoid wildcard * imports if possible from test_framework.blocktools import (create_block, create_coinbase) from test_framework.mininode import ( CInv, - NetworkThread, P2PInterface, mininode_lock, msg_block, msg_getdata, + network_thread_start, ) from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( assert_equal, connect_nodes, wait_until, ) # P2PInterface is a class containing callbacks to be executed when a P2P # message is received from the node-under-test. Subclass P2PInterface and # override the on_*() methods if you need custom behaviour. class BaseNode(P2PInterface): def __init__(self): """Initialize the P2PInterface Used to inialize custom properties for the Node that aren't included by default in the base class. Be aware that the P2PInterface base class already stores a counter for each P2P message type and the last received message of each type, which should be sufficient for the needs of most tests. Call super().__init__() first for standard initialization and then initialize custom properties.""" super().__init__() # Stores a dictionary of all blocks received self.block_receive_map = defaultdict(int) def on_block(self, message): """Override the standard on_block callback Store the hash of a received block in the dictionary.""" message.block.calc_sha256() self.block_receive_map[message.block.sha256] += 1 def on_inv(self, message): """Override the standard on_inv callback""" pass def custom_function(): """Do some custom behaviour If this function is more generally useful for other tests, consider moving it to a module in test_framework.""" # self.log.info("running custom_function") # Oops! 
Can't run self.log outside the BitcoinTestFramework pass class ExampleTest(BitcoinTestFramework): # Each functional test is a subclass of the BitcoinTestFramework class. # Override the set_test_params(), add_options(), setup_chain(), setup_network() # and setup_nodes() methods to customize the test setup as required. def set_test_params(self): """Override test parameters for your individual test. This method must be overridden and num_nodes must be exlicitly set.""" self.setup_clean_chain = True self.num_nodes = 3 # Use self.extra_args to change command-line arguments for the nodes self.extra_args = [[], ["-logips"], []] # self.log.info("I've finished set_test_params") # Oops! Can't run self.log before run_test() # Use add_options() to add specific command-line options for your test. # In practice this is not used very much, since the tests are mostly written # to be run in automated environments without command-line options. # def add_options() # pass # Use setup_chain() to customize the node data directories. In practice # this is not used very much since the default behaviour is almost always # fine # def setup_chain(): # pass def setup_network(self): """Setup the test network topology Often you won't need to override this, since the standard network topology (linear: node0 <-> node1 <-> node2 <-> ...) is fine for most tests. If you do override this method, remember to start the nodes, assign them to self.nodes, connect them and then sync.""" self.setup_nodes() # In this test, we're not connecting node2 to node0 or node1. Calls to # sync_all() should not include node2, since we're not expecting it to # sync. connect_nodes(self.nodes[0], self.nodes[1]) self.sync_all([self.nodes[0:1]]) # Use setup_nodes() to customize the node start behaviour (for example if # you don't want to start all nodes at the start of the test). # def setup_nodes(): # pass def custom_method(self): """Do some custom behaviour for this test Define it in a method here because you're going to use it repeatedly. If you think it's useful in general, consider moving it to the base BitcoinTestFramework class so other tests can use it.""" self.log.info("Running custom_method") def run_test(self): """Main test logic""" # Create a P2P connection to one of the nodes self.nodes[0].add_p2p_connection(BaseNode()) # Start up network handling in another thread. This needs to be called # after the P2P connections have been created. - NetworkThread().start() + network_thread_start() # wait_for_verack ensures that the P2P connection is fully up. self.nodes[0].p2p.wait_for_verack() # Generating a block on one of the nodes will get us out of IBD blocks = [int(self.nodes[0].generate(nblocks=1)[0], 16)] self.sync_all([self.nodes[0:1]]) # Notice above how we called an RPC by calling a method with the same # name on the node object. Notice also how we used a keyword argument # to specify a named RPC argument. Neither of those are defined on the # node object. Instead there's some __getattr__() magic going on under # the covers to dispatch unrecognised attribute calls to the RPC # interface. # Logs are nice. Do plenty of them. They can be used in place of comments for # breaking the test into sub-sections. 
self.log.info("Starting test!") self.log.info("Calling a custom function") custom_function() self.log.info("Calling a custom method") self.custom_method() self.log.info("Create some blocks") self.tip = int(self.nodes[0].getbestblockhash(), 16) self.block_time = self.nodes[0].getblock( self.nodes[0].getbestblockhash())['time'] + 1 height = 1 for i in range(10): # Use the mininode and blocktools functionality to manually build a block # Calling the generate() rpc is easier, but this allows us to exactly # control the blocks and transactions. block = create_block( self.tip, create_coinbase(height), self.block_time) block.solve() block_message = msg_block(block) # Send message is used to send a P2P message to the node over our P2PInterface self.nodes[0].p2p.send_message(block_message) self.tip = block.sha256 blocks.append(self.tip) self.block_time += 1 height += 1 self.log.info( "Wait for node1 to reach current tip (height 11) using RPC") self.nodes[1].waitforblockheight(11) self.log.info("Connect node2 and node1") connect_nodes(self.nodes[1], self.nodes[2]) self.log.info("Add P2P connection to node2") self.nodes[2].add_p2p_connection(BaseNode()) self.nodes[2].p2p.wait_for_verack() self.log.info( "Wait for node2 reach current tip. Test that it has propagated all the blocks to us") getdata_request = msg_getdata() for block in blocks: getdata_request.inv.append(CInv(2, block)) self.nodes[2].p2p.send_message(getdata_request) # wait_until() will loop until a predicate condition is met. Use it to test properties of the # P2PInterface objects. wait_until(lambda: sorted(blocks) == sorted( list(self.nodes[2].p2p.block_receive_map.keys())), timeout=5, lock=mininode_lock) self.log.info("Check that each block was received only once") # The network thread uses a global lock on data access to the P2PConnection objects when sending and receiving # messages. The test thread should acquire the global lock before accessing any P2PConnection data to avoid locking # and synchronization issues. Note wait_until() acquires this global lock when testing the predicate. with mininode_lock: for block in self.nodes[2].p2p.block_receive_map.values(): assert_equal(block, 1) if __name__ == '__main__': ExampleTest().main() diff --git a/test/functional/invalidblockrequest.py b/test/functional/invalidblockrequest.py index 05ecf74f1..c11b5b473 100755 --- a/test/functional/invalidblockrequest.py +++ b/test/functional/invalidblockrequest.py @@ -1,143 +1,144 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import ComparisonTestFramework from test_framework.util import * from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * +from test_framework.mininode import network_thread_start import copy import time ''' In this test we connect to one node over p2p, and test block requests: 1) Valid blocks should be requested and become chain tip. 2) Invalid block with duplicated transaction should be re-requested. 3) Invalid block with bad coinbase value should be rejected and not re-requested. ''' # Use the ComparisonTestFramework with 1 node: only use --testbinary. class InvalidBlockRequestTest(ComparisonTestFramework): ''' Can either run this test as 1 node with expected answers, or two and compare them. 
Change the "outcome" variable from each TestInstance object to only do the comparison. ''' def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True def run_test(self): test = TestManager(self, self.options.tmpdir) test.add_all_connections(self.nodes) self.tip = None self.block_time = None self.extra_args = [["-whitelist=127.0.0.1"]] - NetworkThread().start() # Start up network handling in another thread + network_thread_start() test.run() def get_tests(self): if self.tip is None: self.tip = int("0x" + self.nodes[0].getbestblockhash(), 0) self.block_time = int(time.time()) + 1 ''' Create a new block with an anyone-can-spend coinbase ''' height = 1 block = create_block( self.tip, create_coinbase(height), self.block_time) self.block_time += 1 block.solve() self.tip = block.sha256 height += 1 yield TestInstance([[block, True]]) block1 = block ''' Now we need that block to mature so we can spend the coinbase. ''' test = TestInstance(sync_every_block=False) for i in range(100): block = create_block( self.tip, create_coinbase(height), self.block_time) block.solve() self.tip = block.sha256 self.block_time += 1 test.blocks_and_transactions.append([block, True]) height += 1 yield test assert(block.sha256 == int(self.nodes[0].getbestblockhash(), 16)) ''' Now we use merkle-root malleability to generate an invalid block with same blockheader. Manufacture a block with 3 transactions (coinbase, spend of prior coinbase, spend of that spend). Duplicate the 3rd transaction to leave merkle root and blockheader unchanged but invalidate the block. ''' block2 = create_block( self.tip, create_coinbase(height), self.block_time) self.block_time += 1 # b'0x51' is OP_TRUE tx1 = create_transaction(block1.vtx[0], 0, b'', 50 * COIN) tx2 = create_transaction(tx1, 0, b'\x51', 50 * COIN) block2.vtx.extend([tx1, tx2]) block2.vtx = [block2.vtx[0]] + \ sorted(block2.vtx[1:], key=lambda tx: tx.get_id()) block2.hashMerkleRoot = block2.calc_merkle_root() block2.rehash() block2.solve() orig_hash = block2.sha256 block2_orig = copy.deepcopy(block2) # Mutate block 2 block2.vtx.append(block2.vtx[2]) assert_equal(block2.hashMerkleRoot, block2.calc_merkle_root()) assert_equal(orig_hash, block2.rehash()) assert(block2_orig.vtx != block2.vtx) self.tip = block2.sha256 yield TestInstance([[block2, RejectResult(16, b'bad-txns-duplicate')]]) yield TestInstance([[block2_orig, True]]) height += 1 # Check transactions for duplicate inputs self.log.info("Test duplicate input block.") block2_orig.vtx[2].vin.append(block2_orig.vtx[2].vin[0]) block2.vtx = [block2.vtx[0]] + \ sorted(block2.vtx[1:], key=lambda tx: tx.get_id()) block2_orig.vtx[2].rehash() block2_orig.hashMerkleRoot = block2_orig.calc_merkle_root() block2_orig.rehash() block2_orig.solve() yield TestInstance([[block2_orig, RejectResult(16, b'bad-txns-inputs-duplicate')]]) ''' Make sure that a totally screwed up block is not valid. ''' block3 = create_block( self.tip, create_coinbase(height), self.block_time) self.block_time += 1 block3.vtx[0].vout[0].nValue = 51 * COIN # Too high! 
block3.vtx[0].sha256 = None block3.vtx[0].calc_sha256() block3.hashMerkleRoot = block3.calc_merkle_root() block3.rehash() block3.solve() yield TestInstance([[block3, RejectResult(16, b'bad-cb-amount')]]) if __name__ == '__main__': InvalidBlockRequestTest().main() diff --git a/test/functional/invalidtxrequest.py b/test/functional/invalidtxrequest.py index ff5b01c7a..01f6caf3d 100755 --- a/test/functional/invalidtxrequest.py +++ b/test/functional/invalidtxrequest.py @@ -1,79 +1,79 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import ComparisonTestFramework from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import time ''' In this test we connect to one node over p2p, and test tx requests. ''' # Use the ComparisonTestFramework with 1 node: only use --testbinary. class InvalidTxRequestTest(ComparisonTestFramework): ''' Can either run this test as 1 node with expected answers, or two and compare them. Change the "outcome" variable from each TestInstance object to only do the comparison. ''' def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True def run_test(self): test = TestManager(self, self.options.tmpdir) test.add_all_connections(self.nodes) self.tip = None self.block_time = None - NetworkThread().start() # Start up network handling in another thread + network_thread_start() test.run() def get_tests(self): if self.tip is None: self.tip = int("0x" + self.nodes[0].getbestblockhash(), 0) self.block_time = int(time.time()) + 1 ''' Create a new block with an anyone-can-spend coinbase ''' height = 1 block = create_block( self.tip, create_coinbase(height), self.block_time) self.block_time += 1 block.solve() # Save the coinbase for later self.block1 = block self.tip = block.sha256 height += 1 yield TestInstance([[block, True]]) ''' Now we need that block to mature so we can spend the coinbase. ''' test = TestInstance(sync_every_block=False) for i in range(100): block = create_block( self.tip, create_coinbase(height), self.block_time) block.solve() self.tip = block.sha256 self.block_time += 1 test.blocks_and_transactions.append([block, True]) height += 1 yield test # b'\x64' is OP_NOTIF # Transaction will be rejected with code 16 (REJECT_INVALID) tx1 = create_transaction( self.block1.vtx[0], 0, b'\x64', 50 * COIN - 12000) yield TestInstance([[tx1, RejectResult(16, b'mandatory-script-verify-flag-failed')]]) # TODO: test further transactions... if __name__ == '__main__': InvalidTxRequestTest().main() diff --git a/test/functional/maxuploadtarget.py b/test/functional/maxuploadtarget.py index 89f64f9b5..80c11fd4b 100755 --- a/test/functional/maxuploadtarget.py +++ b/test/functional/maxuploadtarget.py @@ -1,185 +1,183 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. ''' Test behavior of -maxuploadtarget. * Verify that getdata requests for old blocks (>1week) are dropped if uploadtarget has been reached. * Verify that getdata requests for recent blocks are respected even if uploadtarget has been reached. * Verify that the upload counters are reset after 24 hours.
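The 200 MB target is applied as a daily byte budget: the test computes max_bytes_per_day = 200 * 1024 * 1024, reserves daily_buffer = 144 * LEGACY_MAX_BLOCK_SIZE for relaying new blocks, and expects roughly (max_bytes_per_day - daily_buffer) // old_block_size old-block requests to succeed (see success_count below).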
''' from collections import defaultdict import time from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.cdefs import LEGACY_MAX_BLOCK_SIZE from test_framework.blocktools import mine_big_block class TestNode(P2PInterface): def __init__(self): super().__init__() self.block_receive_map = defaultdict(int) def on_inv(self, message): pass def on_block(self, message): message.block.calc_sha256() self.block_receive_map[message.block.sha256] += 1 class MaxUploadTest(BitcoinTestFramework): def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 # Start a node with maxuploadtarget of 200 MB (/24h) self.extra_args = [["-maxuploadtarget=200"]] # Cache for utxos, as the listunspent may take a long time later in the # test self.utxo_cache = [] def run_test(self): # Before we connect anything, we first set the time on the node # to be in the past, otherwise things break because the CNode # time counters can't be reset backward after initialization old_time = int(time.time() - 2 * 60 * 60 * 24 * 7) self.nodes[0].setmocktime(old_time) # Generate some old blocks self.nodes[0].generate(130) # p2p_conns[0] will only request old blocks # p2p_conns[1] will only request new blocks # p2p_conns[2] will test resetting the counters p2p_conns = [] for _ in range(3): p2p_conns.append(self.nodes[0].add_p2p_connection(TestNode())) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() for p2pc in p2p_conns: p2pc.wait_for_verack() # Test logic begins here # Now mine a big block mine_big_block(self.nodes[0], self.utxo_cache) # Store the hash; we'll request this later big_old_block = self.nodes[0].getbestblockhash() old_block_size = self.nodes[0].getblock(big_old_block, True)['size'] big_old_block = int(big_old_block, 16) # Advance to two days ago self.nodes[0].setmocktime(int(time.time()) - 2 * 60 * 60 * 24) # Mine one more block, so that the prior block looks old mine_big_block(self.nodes[0], self.utxo_cache) # We'll be requesting this new block too big_new_block = self.nodes[0].getbestblockhash() big_new_block = int(big_new_block, 16) # p2p_conns[0] will test what happens if we just keep requesting # the same big old block too many times (expect: disconnect) getdata_request = msg_getdata() getdata_request.inv.append(CInv(2, big_old_block)) max_bytes_per_day = 200 * 1024 * 1024 daily_buffer = 144 * LEGACY_MAX_BLOCK_SIZE max_bytes_available = max_bytes_per_day - daily_buffer success_count = max_bytes_available // old_block_size # 144MB will be reserved for relaying new blocks, so expect this to # succeed for ~70 tries. for i in range(success_count): p2p_conns[0].send_message(getdata_request) p2p_conns[0].sync_with_ping() assert_equal(p2p_conns[0].block_receive_map[big_old_block], i + 1) assert_equal(len(self.nodes[0].getpeerinfo()), 3) # At most a couple more tries should succeed (depending on how long # the test has been running so far). for i in range(3): p2p_conns[0].send_message(getdata_request) p2p_conns[0].wait_for_disconnect() assert_equal(len(self.nodes[0].getpeerinfo()), 2) self.log.info( "Peer 0 disconnected after downloading old block too many times") # Requesting the current block on p2p_conns[1] should succeed indefinitely, # even when over the max upload target.
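# (Recent blocks are exempt from the upload target: as the module docstring
# says, only getdata requests for old blocks (>1 week) are dropped once the
# target is hit, which is what the big_new_block requests below rely on.)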
# We'll try 200 times getdata_request.inv = [CInv(2, big_new_block)] for i in range(200): p2p_conns[1].send_message(getdata_request) p2p_conns[1].sync_with_ping() assert_equal(p2p_conns[1].block_receive_map[big_new_block], i + 1) self.log.info("Peer 1 able to repeatedly download new block") # But if p2p_conns[1] tries for an old block, it gets disconnected # too. getdata_request.inv = [CInv(2, big_old_block)] p2p_conns[1].send_message(getdata_request) p2p_conns[1].wait_for_disconnect() assert_equal(len(self.nodes[0].getpeerinfo()), 1) self.log.info("Peer 1 disconnected after trying to download old block") self.log.info("Advancing system time on node to clear counters...") # If we advance the time by 24 hours, then the counters should reset, # and p2p_conns[2] should be able to retrieve the old block. self.nodes[0].setmocktime(int(time.time())) p2p_conns[2].sync_with_ping() p2p_conns[2].send_message(getdata_request) p2p_conns[2].sync_with_ping() assert_equal(p2p_conns[2].block_receive_map[big_old_block], 1) self.log.info("Peer 2 able to download old block") self.nodes[0].disconnect_p2ps() # stop and start node 0 with 1MB maxuploadtarget, whitelist 127.0.0.1 self.log.info("Restarting nodes with -whitelist=127.0.0.1") self.stop_node(0) self.start_node(0, ["-whitelist=127.0.0.1", "-maxuploadtarget=1", "-blockmaxsize=999000"]) # Reconnect to self.nodes[0] self.nodes[0].add_p2p_connection(TestNode()) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() self.nodes[0].p2p.wait_for_verack() # retrieve 20 blocks which should be enough to break the 1MB limit getdata_request.inv = [CInv(2, big_new_block)] for i in range(20): self.nodes[0].p2p.send_message(getdata_request) self.nodes[0].p2p.sync_with_ping() assert_equal( self.nodes[0].p2p.block_receive_map[big_new_block], i + 1) getdata_request.inv = [CInv(2, big_old_block)] self.nodes[0].p2p.send_and_ping(getdata_request) # node is still connected because of the whitelist assert_equal(len(self.nodes[0].getpeerinfo()), 1) self.log.info( "Peer still connected after trying to download old block (whitelisted)") if __name__ == '__main__': MaxUploadTest().main() diff --git a/test/functional/node_network_limited.py b/test/functional/node_network_limited.py index 8dd0efadd..cc14d7e8d 100755 --- a/test/functional/node_network_limited.py +++ b/test/functional/node_network_limited.py @@ -1,65 +1,65 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Tests NODE_NETWORK_LIMITED. Tests that a node configured with -prune=550 signals NODE_NETWORK_LIMITED correctly and that it responds to getdata requests for blocks correctly: - send a block within 288 + 2 of the tip - disconnect peers who request blocks older than that.""" from test_framework.messages import CInv, msg_getdata -from test_framework.mininode import NODE_BLOOM, NODE_NETWORK_LIMITED, NODE_BITCOIN_CASH, NetworkThread, P2PInterface +from test_framework.mininode import NODE_BLOOM, NODE_NETWORK_LIMITED, NODE_BITCOIN_CASH, network_thread_start, P2PInterface from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal class P2PIgnoreInv(P2PInterface): def on_inv(self, message): # The node will send us invs for other blocks. Ignore them. 
pass def send_getdata_for_block(self, blockhash): getdata_request = msg_getdata() getdata_request.inv.append(CInv(2, int(blockhash, 16))) self.send_message(getdata_request) class NodeNetworkLimitedTest(BitcoinTestFramework): def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 self.extra_args = [['-prune=550']] def run_test(self): node = self.nodes[0].add_p2p_connection(P2PIgnoreInv()) - NetworkThread().start() + network_thread_start() node.wait_for_verack() expected_services = NODE_BLOOM | NODE_BITCOIN_CASH | NODE_NETWORK_LIMITED self.log.info("Check that node has signalled expected services.") assert_equal(node.nServices, expected_services) self.log.info("Check that the localservices is as expected.") assert_equal(int(self.nodes[0].getnetworkinfo()[ 'localservices'], 16), expected_services) self.log.info( "Mine enough blocks to reach the NODE_NETWORK_LIMITED range.") blocks = self.nodes[0].generate(292) self.log.info("Make sure we can retrieve the block at tip-288.") # last block in valid range node.send_getdata_for_block(blocks[1]) node.wait_for_block(int(blocks[1], 16), timeout=3) self.log.info( "Requesting block at height 2 (tip-289) must fail (ignored).") # first block outside of the 288+2 limit node.send_getdata_for_block(blocks[0]) node.wait_for_disconnect(5) if __name__ == '__main__': NodeNetworkLimitedTest().main() diff --git a/test/functional/nulldummy.py b/test/functional/nulldummy.py index 022d47fc1..20d7b94e1 100755 --- a/test/functional/nulldummy.py +++ b/test/functional/nulldummy.py @@ -1,122 +1,122 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * -from test_framework.mininode import CTransaction, NetworkThread +from test_framework.mininode import CTransaction, network_thread_start from test_framework.blocktools import create_coinbase, create_block from test_framework.script import CScript from io import BytesIO import time NULLDUMMY_ERROR = "64: non-mandatory-script-verify-flag (Dummy CHECKMULTISIG argument must be zero)" def trueDummy(tx): scriptSig = CScript(tx.vin[0].scriptSig) newscript = [] for i in scriptSig: if (len(newscript) == 0): assert(len(i) == 0) newscript.append(b'\x51') else: newscript.append(i) tx.vin[0].scriptSig = CScript(newscript) tx.rehash() ''' This test is meant to exercise the NULLDUMMY softfork. Connect to a single node. Generate 2 blocks (save the coinbases for later). Generate 427 more blocks. [Policy/Consensus] Check that NULLDUMMY compliant transactions are accepted in the 430th block. [Policy] Check that non-NULLDUMMY transactions are rejected before activation. [Consensus] Check that the new NULLDUMMY rules are not enforced on the 431st block. [Policy/Consensus] Check that the new NULLDUMMY rules are enforced on the 432nd block.
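(Background for trueDummy() above: OP_CHECKMULTISIG pops one extra, unused stack element, and the NULLDUMMY rule requires that dummy element to be empty. Replacing the empty push with b'\x51' (OP_1) therefore produces the "Dummy CHECKMULTISIG argument must be zero" rejection tested below.)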
''' class NULLDUMMYTest(BitcoinTestFramework): def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True self.extra_args = [['-whitelist=127.0.0.1']] def run_test(self): self.address = self.nodes[0].getnewaddress() self.ms_address = self.nodes[0].addmultisigaddress(1, [self.address]) - NetworkThread().start() # Start up network handling in another thread + network_thread_start() self.coinbase_blocks = self.nodes[0].generate(2) # Block 2 coinbase_txid = [] for i in self.coinbase_blocks: coinbase_txid.append(self.nodes[0].getblock(i)['tx'][0]) self.nodes[0].generate(427) # Block 429 self.lastblockhash = self.nodes[0].getbestblockhash() self.tip = int("0x" + self.lastblockhash, 0) self.lastblockheight = 429 self.lastblocktime = int(time.time()) + 429 self.log.info( "Test 1: NULLDUMMY compliant base transactions should be accepted to mempool and mined before activation [430]") test1txs = [self.create_transaction( self.nodes[0], coinbase_txid[0], self.ms_address, 49)] txid1 = self.nodes[0].sendrawtransaction( bytes_to_hex_str(test1txs[0].serialize()), True) test1txs.append(self.create_transaction( self.nodes[0], txid1, self.ms_address, 48)) txid2 = self.nodes[0].sendrawtransaction( bytes_to_hex_str(test1txs[1].serialize()), True) self.block_submit(self.nodes[0], test1txs, True) self.log.info( "Test 2: Non-NULLDUMMY base multisig transaction should not be accepted to mempool before activation") test2tx = self.create_transaction( self.nodes[0], txid2, self.ms_address, 48) trueDummy(test2tx) assert_raises_rpc_error(-26, NULLDUMMY_ERROR, self.nodes[0].sendrawtransaction, bytes_to_hex_str(test2tx.serialize()), True) self.log.info( "Test 3: Non-NULLDUMMY base transactions should be accepted in a block before activation [431]") self.block_submit(self.nodes[0], [test2tx], True) def create_transaction(self, node, txid, to_address, amount): inputs = [{"txid": txid, "vout": 0}] outputs = {to_address: amount} rawtx = node.createrawtransaction(inputs, outputs) signresult = node.signrawtransaction(rawtx) tx = CTransaction() f = BytesIO(hex_str_to_bytes(signresult['hex'])) tx.deserialize(f) return tx def block_submit(self, node, txs, accept=False): block = create_block(self.tip, create_coinbase( self.lastblockheight + 1), self.lastblocktime + 1) block.nVersion = 4 for tx in txs: tx.rehash() block.vtx.append(tx) block.vtx = [block.vtx[0]] + \ sorted(block.vtx[1:], key=lambda tx: tx.get_id()) block.hashMerkleRoot = block.calc_merkle_root() block.rehash() block.solve() node.submitblock(bytes_to_hex_str(block.serialize())) if (accept): assert_equal(node.getbestblockhash(), block.hash) self.tip = block.sha256 self.lastblockhash = block.hash self.lastblocktime += 1 self.lastblockheight += 1 else: assert_equal(node.getbestblockhash(), self.lastblockhash) if __name__ == '__main__': NULLDUMMYTest().main() diff --git a/test/functional/p2p-acceptblock.py b/test/functional/p2p-acceptblock.py index 9380d1aea..62d37a7d5 100755 --- a/test/functional/p2p-acceptblock.py +++ b/test/functional/p2p-acceptblock.py @@ -1,349 +1,348 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * import time from test_framework.blocktools import create_block, create_coinbase, create_transaction ''' AcceptBlockTest -- test processing of unrequested blocks. Setup: two nodes, node0+node1, not connected to each other. Node1 will have nMinimumChainWork set to 0x10, so it won't process low-work unrequested blocks. We have one P2PInterface connection to node0 called test_node, and one to node1 called min_work_node. The test: 1. Generate one block on each node, to leave IBD. 2. Mine a new block on each tip, and deliver to each node from that node's peer. The tip should advance for node0, but node1 should skip processing due to nMinimumChainWork. Node1 is unused in tests 3-7: 3. Mine a block that forks from the genesis block, and deliver to test_node. Node0 should not process this block (just accept the header), because it is unrequested and doesn't have more or equal work to the tip. 4a,b. Send another two blocks that build on the forking block. Node0 should process the second block but be stuck on the shorter chain, because it's missing an intermediate block. 4c. Send 288 more blocks on the longer chain (the number of blocks ahead we currently store). Node0 should process all but the last block (too far ahead in height). 5. Send a duplicate of the block in #3 to Node0. Node0 should not process the block because it is unrequested, and stay on the shorter chain. 6. Send Node0 an inv for the height 3 block produced in #4 above. Node0 should figure out that it is missing the height 2 block and send a getdata. 7. Send Node0 the missing block again. Node0 should process and the tip should advance. 8. Create a fork which is invalid at a height longer than the current chain (ie to which the node will try to reorg) but which has headers built on top of the invalid block. Check that we get disconnected if we send more headers on the chain the node now knows to be invalid. 9. Test Node1 is able to sync when connected to node0 (which should have sufficient work on its chain). ''' class AcceptBlockTest(BitcoinTestFramework): def add_options(self, parser): parser.add_argument("--testbinary", dest="testbinary", default=os.getenv("BITCOIND", "bitcoind"), help="bitcoind binary to test") def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 self.extra_args = [["-noparkdeepreorg"], ["-minimumchainwork=0x10"]] def setup_network(self): # Node0 will be used to test behavior of processing unrequested blocks # from peers which are not whitelisted, while Node1 (which has # nMinimumChainWork set) will be used to test the interaction with # nMinimumChainWork. self.setup_nodes() def run_test(self): # Set up the p2p connections and start up the network thread. # test_node connects to node0 (not whitelisted) test_node = self.nodes[0].add_p2p_connection(P2PInterface()) # min_work_node connects to node1 (which has nMinimumChainWork set) min_work_node = self.nodes[1].add_p2p_connection(P2PInterface()) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() # Test logic begins here test_node.wait_for_verack() min_work_node.wait_for_verack() # 1. Have nodes mine a block (leave IBD) [n.generate(1) for n in self.nodes] tips = [int("0x" + n.getbestblockhash(), 0) for n in self.nodes] # 2. Send one block that builds on each tip.
# This should be accepted by node0 blocks_h2 = [] # the height 2 blocks on each node's chain block_time = int(time.time()) + 1 for i in range(2): blocks_h2.append(create_block( tips[i], create_coinbase(2), block_time)) blocks_h2[i].solve() block_time += 1 test_node.send_message(msg_block(blocks_h2[0])) min_work_node.send_message(msg_block(blocks_h2[1])) for x in [test_node, min_work_node]: x.sync_with_ping() assert_equal(self.nodes[0].getblockcount(), 2) assert_equal(self.nodes[1].getblockcount(), 1) self.log.info( "First height 2 block accepted by node0; correctly rejected by node1") # 3. Send another block that builds on genesis. block_h1f = create_block( int("0x" + self.nodes[0].getblockhash(0), 0), create_coinbase(1), block_time) block_time += 1 block_h1f.solve() test_node.send_message(msg_block(block_h1f)) test_node.sync_with_ping() tip_entry_found = False for x in self.nodes[0].getchaintips(): if x['hash'] == block_h1f.hash: assert_equal(x['status'], "headers-only") tip_entry_found = True assert(tip_entry_found) assert_raises_rpc_error(-1, "Block not found on disk", self.nodes[0].getblock, block_h1f.hash) # 4a. Send another two blocks that build on the fork. block_h2f = create_block( block_h1f.sha256, create_coinbase(2), block_time) block_time += 1 block_h2f.solve() test_node.send_message(msg_block(block_h2f)) test_node.sync_with_ping() # Since the earlier block was not processed by node, the new block # can't be fully validated. tip_entry_found = False for x in self.nodes[0].getchaintips(): if x['hash'] == block_h2f.hash: assert_equal(x['status'], "headers-only") tip_entry_found = True assert(tip_entry_found) # But this block should be accepted by node since it has equal work. self.nodes[0].getblock(block_h2f.hash) self.log.info("Second height 2 block accepted, but not reorg'ed to") # 4b. Now send another block that builds on the forking chain. block_h3 = create_block( block_h2f.sha256, create_coinbase(3), block_h2f.nTime+1) block_h3.solve() test_node.send_message(msg_block(block_h3)) test_node.sync_with_ping() # Since the earlier block was not processed by node, the new block # can't be fully validated. tip_entry_found = False for x in self.nodes[0].getchaintips(): if x['hash'] == block_h3.hash: assert_equal(x['status'], "headers-only") tip_entry_found = True assert(tip_entry_found) # But this block should be accepted by node since it has more work. self.nodes[0].getblock(block_h3.hash) self.log.info("Unrequested more-work block accepted") # 4c.
Now mine 288 more blocks and deliver; all should be processed but # the last (height-too-high) on node (as long as it's not missing any headers) tip = block_h3 all_blocks = [] for i in range(288): next_block = create_block( tip.sha256, create_coinbase(i + 4), tip.nTime+1) next_block.solve() all_blocks.append(next_block) tip = next_block # Now send the block at height 5 and check that it wasn't accepted (missing header) test_node.send_message(msg_block(all_blocks[1])) test_node.sync_with_ping() assert_raises_rpc_error(-5, "Block not found", self.nodes[0].getblock, all_blocks[1].hash) assert_raises_rpc_error(-5, "Block not found", self.nodes[0].getblockheader, all_blocks[1].hash) # The block at height 5 should be accepted if we provide the missing header, though headers_message = msg_headers() headers_message.headers.append(CBlockHeader(all_blocks[0])) test_node.send_message(headers_message) test_node.send_message(msg_block(all_blocks[1])) test_node.sync_with_ping() self.nodes[0].getblock(all_blocks[1].hash) # Now send the blocks in all_blocks for i in range(288): test_node.send_message(msg_block(all_blocks[i])) test_node.sync_with_ping() # Blocks 1-287 should be accepted, block 288 should be ignored because it's too far ahead for x in all_blocks[:-1]: self.nodes[0].getblock(x.hash) assert_raises_rpc_error( -1, "Block not found on disk", self.nodes[0].getblock, all_blocks[-1].hash) # 5. Test handling of unrequested block on the node that didn't process # Should still not be processed (even though it has a child that has more # work). # The node should have requested the blocks at some point, so # disconnect/reconnect first self.nodes[0].disconnect_p2ps() test_node = self.nodes[0].add_p2p_connection(P2PInterface()) test_node.wait_for_verack() test_node.send_message(msg_block(block_h1f)) test_node.sync_with_ping() assert_equal(self.nodes[0].getblockcount(), 2) self.log.info( "Unrequested block that would complete more-work chain was ignored") # 6. Try to get node to request the missing block. # Poke the node with an inv for block at height 3 and see if that # triggers a getdata on block 2 (it should if block 2 is missing). with mininode_lock: # Clear state so we can check the getdata request test_node.last_message.pop("getdata", None) test_node.send_message(msg_inv([CInv(2, block_h3.sha256)])) test_node.sync_with_ping() with mininode_lock: getdata = test_node.last_message["getdata"] # Check that the getdata includes the right block assert_equal(getdata.inv[0].hash, block_h1f.sha256) self.log.info("Inv at tip triggered getdata for unprocessed block") # 7. Send the missing block for the third time (now it is requested) test_node.send_message(msg_block(block_h1f)) test_node.sync_with_ping() assert_equal(self.nodes[0].getblockcount(), 290) self.nodes[0].getblock(all_blocks[286].hash) assert_equal(self.nodes[0].getbestblockhash(), all_blocks[286].hash) assert_raises_rpc_error(-1, "Block not found on disk", self.nodes[0].getblock, all_blocks[287].hash) self.log.info( "Successfully reorged to longer chain from non-whitelisted peer") # 8.
Create a chain which is invalid at a height longer than the # current chain, but which has more blocks on top of that block_289f = create_block( all_blocks[284].sha256, create_coinbase(289), all_blocks[284].nTime+1) block_289f.solve() block_290f = create_block( block_289f.sha256, create_coinbase(290), block_289f.nTime+1) block_290f.solve() block_291 = create_block( block_290f.sha256, create_coinbase(291), block_290f.nTime+1) # block_291 spends a coinbase below maturity! block_291.vtx.append(create_transaction( block_290f.vtx[0], 0, b"42", 1)) block_291.hashMerkleRoot = block_291.calc_merkle_root() block_291.solve() block_292 = create_block( block_291.sha256, create_coinbase(292), block_291.nTime+1) block_292.solve() # Now send all the headers on the chain and enough blocks to trigger reorg headers_message = msg_headers() headers_message.headers.append(CBlockHeader(block_289f)) headers_message.headers.append(CBlockHeader(block_290f)) headers_message.headers.append(CBlockHeader(block_291)) headers_message.headers.append(CBlockHeader(block_292)) test_node.send_message(headers_message) test_node.sync_with_ping() tip_entry_found = False for x in self.nodes[0].getchaintips(): if x['hash'] == block_292.hash: assert_equal(x['status'], "headers-only") tip_entry_found = True assert(tip_entry_found) assert_raises_rpc_error(-1, "Block not found on disk", self.nodes[0].getblock, block_292.hash) test_node.send_message(msg_block(block_289f)) test_node.send_message(msg_block(block_290f)) test_node.sync_with_ping() self.nodes[0].getblock(block_289f.hash) self.nodes[0].getblock(block_290f.hash) test_node.send_message(msg_block(block_291)) # At this point we've sent an obviously-bogus block, wait for full processing # without assuming whether we will be disconnected or not try: # Only wait a short while so the test doesn't take forever if we do get # disconnected test_node.sync_with_ping(timeout=1) except AssertionError: test_node.wait_for_disconnect() self.nodes[0].disconnect_p2ps() test_node = self.nodes[0].add_p2p_connection(P2PInterface()) - NetworkThread().start() # Start up network handling in another thread + network_thread_start() test_node.wait_for_verack() # We should have failed reorg and switched back to 290 (but have block 291) assert_equal(self.nodes[0].getblockcount(), 290) assert_equal(self.nodes[0].getbestblockhash(), all_blocks[286].hash) assert_equal(self.nodes[0].getblock( block_291.hash)["confirmations"], -1) # Now send a new header on the invalid chain, indicating we're forked off, and expect to get disconnected block_293 = create_block( block_292.sha256, create_coinbase(293), block_292.nTime+1) block_293.solve() headers_message = msg_headers() headers_message.headers.append(CBlockHeader(block_293)) test_node.send_message(headers_message) # FIXME: Uncomment this line once Core backport 015a525 is completed. # Current behavior does not ban peers that give us headers on invalid chains. # test_node.wait_for_disconnect() # 9. 
Connect node1 to node0 and ensure it is able to sync connect_nodes(self.nodes[0], self.nodes[1]) sync_blocks([self.nodes[0], self.nodes[1]]) self.log.info("Successfully synced nodes 1 and 0") if __name__ == '__main__': AcceptBlockTest().main() diff --git a/test/functional/p2p-compactblocks.py b/test/functional/p2p-compactblocks.py index 762fcf409..d0b553be5 100755 --- a/test/functional/p2p-compactblocks.py +++ b/test/functional/p2p-compactblocks.py @@ -1,899 +1,898 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.blocktools import create_block, create_coinbase from test_framework.script import CScript, OP_TRUE from test_framework.txtools import pad_tx ''' CompactBlocksTest -- test compact blocks (BIP 152) Only testing Version 1 compact blocks (txids) ''' # TestNode: A peer we use to send messages to bitcoind, and store responses. class TestNode(P2PInterface): def __init__(self): super().__init__() self.last_sendcmpct = [] self.block_announced = False # Store the hashes of blocks we've seen announced. # This is for synchronizing the p2p message traffic, # so we can eg wait until a particular block is announced. self.announced_blockhashes = set() def on_sendcmpct(self, message): self.last_sendcmpct.append(message) def on_cmpctblock(self, message): self.block_announced = True self.last_message["cmpctblock"].header_and_shortids.header.calc_sha256() self.announced_blockhashes.add( self.last_message["cmpctblock"].header_and_shortids.header.sha256) def on_headers(self, message): self.block_announced = True for x in self.last_message["headers"].headers: x.calc_sha256() self.announced_blockhashes.add(x.sha256) def on_inv(self, message): for x in self.last_message["inv"].inv: if x.type == 2: self.block_announced = True self.announced_blockhashes.add(x.hash) # Requires caller to hold mininode_lock def received_block_announcement(self): return self.block_announced def clear_block_announcement(self): with mininode_lock: self.block_announced = False self.last_message.pop("inv", None) self.last_message.pop("headers", None) self.last_message.pop("cmpctblock", None) def get_headers(self, locator, hashstop): msg = msg_getheaders() msg.locator.vHave = locator msg.hashstop = hashstop self.send_message(msg) def send_header_for_blocks(self, new_blocks): headers_message = msg_headers() headers_message.headers = [CBlockHeader(b) for b in new_blocks] self.send_message(headers_message) def request_headers_and_sync(self, locator, hashstop=0): self.clear_block_announcement() self.get_headers(locator, hashstop) wait_until(self.received_block_announcement, timeout=30, lock=mininode_lock) self.clear_block_announcement() # Block until a block announcement for a particular block hash is # received. def wait_for_block_announcement(self, block_hash, timeout=30): def received_hash(): return (block_hash in self.announced_blockhashes) wait_until(received_hash, timeout=timeout, lock=mininode_lock) def send_await_disconnect(self, message, timeout=30): """Send a message to the node and wait for disconnect.
This is used when we want to send a message into the node that we expect will get us disconnected, eg an invalid block.""" self.send_message(message) wait_until(lambda: self.state != "connected", timeout=timeout, lock=mininode_lock) class CompactBlocksTest(BitcoinTestFramework): def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 self.extra_args = [[], ["-txindex"]] self.utxos = [] def build_block_on_tip(self, node): height = node.getblockcount() tip = node.getbestblockhash() mtp = node.getblockheader(tip)['mediantime'] block = create_block( int(tip, 16), create_coinbase(height + 1), mtp + 1) block.nVersion = 4 block.solve() return block # Create 10 more anyone-can-spend utxo's for testing. def make_utxos(self): # Doesn't matter which node we use, just use node0. block = self.build_block_on_tip(self.nodes[0]) self.test_node.send_and_ping(msg_block(block)) assert(int(self.nodes[0].getbestblockhash(), 16) == block.sha256) self.nodes[0].generate(100) total_value = block.vtx[0].vout[0].nValue out_value = total_value // 10 tx = CTransaction() tx.vin.append(CTxIn(COutPoint(block.vtx[0].sha256, 0), b'')) for i in range(10): tx.vout.append(CTxOut(out_value, CScript([OP_TRUE]))) tx.rehash() block2 = self.build_block_on_tip(self.nodes[0]) block2.vtx.append(tx) block2.hashMerkleRoot = block2.calc_merkle_root() block2.solve() self.test_node.send_and_ping(msg_block(block2)) assert_equal(int(self.nodes[0].getbestblockhash(), 16), block2.sha256) self.utxos.extend([[tx.sha256, i, out_value] for i in range(10)]) return # Test "sendcmpct" (between peers preferring the same version): # - No compact block announcements unless sendcmpct is sent. # - If sendcmpct is sent with version > preferred_version, the message is ignored. # - If sendcmpct is sent with boolean 0, then block announcements are not # made with compact blocks. # - If sendcmpct is then sent with boolean 1, then new block announcements # are made with compact blocks. # If old_node is passed in, request compact blocks with version=preferred-1 # and verify that it receives block announcements via compact block. def test_sendcmpct(self, node, test_node, preferred_version, old_node=None): # Make sure we get a SENDCMPCT message from our peer def received_sendcmpct(): return (len(test_node.last_sendcmpct) > 0) wait_until(received_sendcmpct, timeout=30, lock=mininode_lock) with mininode_lock: # Check that the first version received is the preferred one assert_equal( test_node.last_sendcmpct[0].version, preferred_version) # And that we receive versions down to 1. assert_equal(test_node.last_sendcmpct[-1].version, 1) test_node.last_sendcmpct = [] tip = int(node.getbestblockhash(), 16) def check_announcement_of_new_block(node, peer, predicate): peer.clear_block_announcement() block_hash = int(node.generate(1)[0], 16) peer.wait_for_block_announcement(block_hash, timeout=30) assert(peer.block_announced) with mininode_lock: assert predicate(peer), ( "block_hash={!r}, cmpctblock={!r}, inv={!r}".format( block_hash, peer.last_message.get("cmpctblock", None), peer.last_message.get("inv", None))) # We shouldn't get any block announcements via cmpctblock yet. check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message) # Try one more time, this time after requesting headers. 
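# (Note: the headers re-sync below matters because the node will presumably
# only fast-announce a block via cmpctblock once it believes the peer
# already has the block's parent header.)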
test_node.request_headers_and_sync(locator=[tip]) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message and "inv" in p.last_message) # Test a few ways of using sendcmpct that should NOT # result in compact block announcements. # Before each test, sync the headers chain. test_node.request_headers_and_sync(locator=[tip]) # Now try a SENDCMPCT message with too-high version sendcmpct = msg_sendcmpct() sendcmpct.version = preferred_version + 1 sendcmpct.announce = True test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message) # Headers sync before next test. test_node.request_headers_and_sync(locator=[tip]) # Now try a SENDCMPCT message with valid version, but announce=False sendcmpct.version = preferred_version sendcmpct.announce = False test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message) # Headers sync before next test. test_node.request_headers_and_sync(locator=[tip]) # Finally, try a SENDCMPCT message with announce=True sendcmpct.version = preferred_version sendcmpct.announce = True test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" in p.last_message) # Try one more time (no headers sync should be needed!) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" in p.last_message) # Try one more time, after turning on sendheaders test_node.send_and_ping(msg_sendheaders()) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" in p.last_message) # Try one more time, after sending a version-1, announce=false message. sendcmpct.version = preferred_version - 1 sendcmpct.announce = False test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" in p.last_message) # Now turn off announcements sendcmpct.version = preferred_version sendcmpct.announce = False test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message and "headers" in p.last_message) if old_node is not None: # Verify that a peer using an older protocol version can receive # announcements from this node. sendcmpct.version = 1 # preferred_version-1 sendcmpct.announce = True old_node.send_and_ping(sendcmpct) # Header sync old_node.request_headers_and_sync(locator=[tip]) check_announcement_of_new_block( node, old_node, lambda p: "cmpctblock" in p.last_message) # This test actually causes bitcoind to (reasonably!) disconnect us, so do # this last. def test_invalid_cmpctblock_message(self): self.nodes[0].generate(101) block = self.build_block_on_tip(self.nodes[0]) cmpct_block = P2PHeaderAndShortIDs() cmpct_block.header = CBlockHeader(block) cmpct_block.prefilled_txn_length = 1 # This index will be too high prefilled_txn = PrefilledTransaction(1, block.vtx[0]) cmpct_block.prefilled_txn = [prefilled_txn] self.test_node.send_await_disconnect(msg_cmpctblock(cmpct_block)) assert_equal( int(self.nodes[0].getbestblockhash(), 16), block.hashPrevBlock) # Compare the generated shortids to what we expect based on BIP 152, given # bitcoind's choice of nonce. def test_compactblock_construction(self, node, test_node): # Generate a bunch of transactions.
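# (Sketch of the BIP 152 short ID scheme verified below: the SipHash keys
# k0, k1 come from the first 16 bytes of sha256(block header || nonce), and
# each short ID is the low 6 bytes of SipHash-2-4(k0, k1, txid); see
# get_siphash_keys() and calculate_shortid() used in
# check_compactblock_construction_from_block().)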
node.generate(101) num_transactions = 25 address = node.getnewaddress() for i in range(num_transactions): txid = node.sendtoaddress(address, 0.1) hex_tx = node.gettransaction(txid)["hex"] tx = FromHex(CTransaction(), hex_tx) # Wait until we've seen the block announcement for the resulting tip tip = int(node.getbestblockhash(), 16) test_node.wait_for_block_announcement(tip) # Make sure we will receive a fast-announce compact block self.request_cb_announcements(test_node, node) # Now mine a block, and look at the resulting compact block. test_node.clear_block_announcement() block_hash = int(node.generate(1)[0], 16) # Store the raw block in our internal format. block = FromHex(CBlock(), node.getblock("%02x" % block_hash, False)) for tx in block.vtx: tx.calc_sha256() block.rehash() # Wait until the block was announced (via compact blocks) wait_until(test_node.received_block_announcement, timeout=30, lock=mininode_lock) # Now fetch and check the compact block header_and_shortids = None with mininode_lock: assert("cmpctblock" in test_node.last_message) # Convert the on-the-wire representation to absolute indexes header_and_shortids = HeaderAndShortIDs( test_node.last_message["cmpctblock"].header_and_shortids) self.check_compactblock_construction_from_block( header_and_shortids, block_hash, block) # Now fetch the compact block using a normal non-announce getdata with mininode_lock: test_node.clear_block_announcement() inv = CInv(4, block_hash) # 4 == "CompactBlock" test_node.send_message(msg_getdata([inv])) wait_until(test_node.received_block_announcement, timeout=30, lock=mininode_lock) # Now fetch and check the compact block header_and_shortids = None with mininode_lock: assert("cmpctblock" in test_node.last_message) # Convert the on-the-wire representation to absolute indexes header_and_shortids = HeaderAndShortIDs( test_node.last_message["cmpctblock"].header_and_shortids) self.check_compactblock_construction_from_block( header_and_shortids, block_hash, block) def check_compactblock_construction_from_block(self, header_and_shortids, block_hash, block): # Check that we got the right block! header_and_shortids.header.calc_sha256() assert_equal(header_and_shortids.header.sha256, block_hash) # Make sure the prefilled_txn appears to have included the coinbase assert(len(header_and_shortids.prefilled_txn) >= 1) assert_equal(header_and_shortids.prefilled_txn[0].index, 0) # Check that all prefilled_txn entries match what's in the block. for entry in header_and_shortids.prefilled_txn: entry.tx.calc_sha256() # This checks the tx agree assert_equal(entry.tx.sha256, block.vtx[entry.index].sha256) # Check that the cmpctblock message announced all the transactions. assert_equal(len(header_and_shortids.prefilled_txn) + len(header_and_shortids.shortids), len(block.vtx)) # And now check that all the shortids are as expected as well. # Determine the siphash keys to use. [k0, k1] = header_and_shortids.get_siphash_keys() index = 0 while index < len(block.vtx): if (len(header_and_shortids.prefilled_txn) > 0 and header_and_shortids.prefilled_txn[0].index == index): # Already checked prefilled transactions above header_and_shortids.prefilled_txn.pop(0) else: tx_hash = block.vtx[index].sha256 shortid = calculate_shortid(k0, k1, tx_hash) assert_equal(shortid, header_and_shortids.shortids[0]) header_and_shortids.shortids.pop(0) index += 1 # Test that bitcoind requests compact blocks when we announce new blocks # via header or inv, and that responding to getblocktxn causes the block # to be successfully reconstructed. 
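# (The flow exercised below: we announce a block via header or inv, the node
# responds with a getdata whose inv type is 4 (compact block), we reply with
# a cmpctblock that omits the coinbase, the node asks for it via getblocktxn,
# and our blocktxn response lets it reconstruct the block and advance the tip.)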
def test_compactblock_requests(self, node, test_node, version): # Try announcing a block with an inv or header, expect a compactblock # request for announce in ["inv", "header"]: block = self.build_block_on_tip(node) with mininode_lock: test_node.last_message.pop("getdata", None) if announce == "inv": test_node.send_message(msg_inv([CInv(2, block.sha256)])) wait_until(lambda: "getheaders" in test_node.last_message, timeout=30, lock=mininode_lock) test_node.send_header_for_blocks([block]) else: test_node.send_header_for_blocks([block]) wait_until(lambda: "getdata" in test_node.last_message, timeout=30, lock=mininode_lock) assert_equal(len(test_node.last_message["getdata"].inv), 1) assert_equal(test_node.last_message["getdata"].inv[0].type, 4) assert_equal( test_node.last_message["getdata"].inv[0].hash, block.sha256) # Send back a compactblock message that omits the coinbase comp_block = HeaderAndShortIDs() comp_block.header = CBlockHeader(block) comp_block.nonce = 0 [k0, k1] = comp_block.get_siphash_keys() coinbase_hash = block.vtx[0].sha256 comp_block.shortids = [ calculate_shortid(k0, k1, coinbase_hash)] test_node.send_and_ping(msg_cmpctblock(comp_block.to_p2p())) assert_equal(int(node.getbestblockhash(), 16), block.hashPrevBlock) # Expect a getblocktxn message. with mininode_lock: assert("getblocktxn" in test_node.last_message) absolute_indexes = test_node.last_message["getblocktxn"].block_txn_request.to_absolute( ) assert_equal(absolute_indexes, [0]) # should be a coinbase request # Send the coinbase, and verify that the tip advances. msg = msg_blocktxn() msg.block_transactions.blockhash = block.sha256 msg.block_transactions.transactions = [block.vtx[0]] test_node.send_and_ping(msg) assert_equal(int(node.getbestblockhash(), 16), block.sha256) # Create a chain of transactions from given utxo, and add to a new block. # Note that num_transactions is number of transactions not including the # coinbase. def build_block_with_transactions(self, node, utxo, num_transactions): block = self.build_block_on_tip(node) for i in range(num_transactions): tx = CTransaction() tx.vin.append(CTxIn(COutPoint(utxo[0], utxo[1]), b'')) tx.vout.append(CTxOut(utxo[2] - 1000, CScript([OP_TRUE]))) pad_tx(tx) tx.rehash() utxo = [tx.sha256, 0, tx.vout[0].nValue] block.vtx.append(tx) ordered_txs = block.vtx block.vtx = [block.vtx[0]] + \ sorted(block.vtx[1:], key=lambda tx: tx.get_id()) block.hashMerkleRoot = block.calc_merkle_root() block.solve() return block, ordered_txs # Test that we only receive getblocktxn requests for transactions that the # node needs, and that responding to them causes the block to be # reconstructed. def test_getblocktxn_requests(self, node, test_node, version): def test_getblocktxn_response(compact_block, peer, expected_result): msg = msg_cmpctblock(compact_block.to_p2p()) peer.send_and_ping(msg) with mininode_lock: assert("getblocktxn" in peer.last_message) absolute_indexes = peer.last_message["getblocktxn"].block_txn_request.to_absolute( ) assert_equal(absolute_indexes, expected_result) def test_tip_after_message(node, peer, msg, tip): peer.send_and_ping(msg) assert_equal(int(node.getbestblockhash(), 16), tip) # First try announcing compactblocks that won't reconstruct, and verify # that we receive getblocktxn messages back. 
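# (On the wire, getblocktxn indexes are differentially encoded per BIP 152;
# block_txn_request.to_absolute(), used in the helper above, converts them
# back, e.g. wire indexes [1, 0, 0] decode to absolute indexes [1, 2, 3].)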
utxo = self.utxos.pop(0) block, ordered_txs = self.build_block_with_transactions(node, utxo, 5) self.utxos.append( [ordered_txs[-1].sha256, 0, ordered_txs[-1].vout[0].nValue]) comp_block = HeaderAndShortIDs() comp_block.initialize_from_block(block) test_getblocktxn_response(comp_block, test_node, [1, 2, 3, 4, 5]) msg_bt = msg_blocktxn() msg_bt.block_transactions = BlockTransactions( block.sha256, block.vtx[1:]) test_tip_after_message(node, test_node, msg_bt, block.sha256) utxo = self.utxos.pop(0) block, ordered_txs = self.build_block_with_transactions(node, utxo, 5) self.utxos.append( [ordered_txs[-1].sha256, 0, ordered_txs[-1].vout[0].nValue]) # Now try interspersing the prefilled transactions comp_block.initialize_from_block( block, prefill_list=[0, 1, 5]) test_getblocktxn_response(comp_block, test_node, [2, 3, 4]) msg_bt.block_transactions = BlockTransactions( block.sha256, block.vtx[2:5]) test_tip_after_message(node, test_node, msg_bt, block.sha256) # Now try giving one transaction ahead of time. utxo = self.utxos.pop(0) block, ordered_txs = self.build_block_with_transactions(node, utxo, 5) self.utxos.append( [ordered_txs[-1].sha256, 0, ordered_txs[-1].vout[0].nValue]) test_node.send_and_ping(msg_tx(ordered_txs[1])) assert(ordered_txs[1].hash in node.getrawmempool()) test_node.send_and_ping(msg_tx(ordered_txs[1])) # Prefill 4 out of the 6 transactions, and verify that only the one # that was not in the mempool is requested. prefill_list = [0, 1, 2, 3, 4, 5] prefill_list.remove(block.vtx.index(ordered_txs[1])) expected_index = block.vtx.index(ordered_txs[-1]) prefill_list.remove(expected_index) comp_block.initialize_from_block(block, prefill_list=prefill_list) test_getblocktxn_response(comp_block, test_node, [expected_index]) msg_bt.block_transactions = BlockTransactions( block.sha256, [ordered_txs[5]]) test_tip_after_message(node, test_node, msg_bt, block.sha256) # Now provide all transactions to the node before the block is # announced and verify reconstruction happens immediately. utxo = self.utxos.pop(0) block, ordered_txs = self.build_block_with_transactions(node, utxo, 10) self.utxos.append( [ordered_txs[-1].sha256, 0, ordered_txs[-1].vout[0].nValue]) for tx in block.vtx[1:]: test_node.send_message(msg_tx(tx)) test_node.sync_with_ping() # Make sure all transactions were accepted. mempool = node.getrawmempool() for tx in block.vtx[1:]: assert(tx.hash in mempool) # Clear out last request. with mininode_lock: test_node.last_message.pop("getblocktxn", None) # Send compact block comp_block.initialize_from_block(block, prefill_list=[0]) test_tip_after_message( node, test_node, msg_cmpctblock(comp_block.to_p2p()), block.sha256) with mininode_lock: # Shouldn't have gotten a request for any transaction assert("getblocktxn" not in test_node.last_message) # Incorrectly responding to a getblocktxn shouldn't cause the block to be # permanently failed. def test_incorrect_blocktxn_response(self, node, test_node, version): if (len(self.utxos) == 0): self.make_utxos() utxo = self.utxos.pop(0) block, ordered_txs = self.build_block_with_transactions(node, utxo, 10) self.utxos.append( [ordered_txs[-1].sha256, 0, ordered_txs[-1].vout[0].nValue]) # Relay the first 5 transactions from the block in advance for tx in ordered_txs[1:6]: test_node.send_message(msg_tx(tx)) test_node.sync_with_ping() # Make sure all transactions were accepted. 
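# (Because ordered_txs[1:6] are already in the mempool, the getblocktxn
# request triggered below should cover only the five transactions the node
# has not seen; see expected_indices.)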
mempool = node.getrawmempool() for tx in ordered_txs[1:6]: assert(tx.hash in mempool) # Send compact block comp_block = HeaderAndShortIDs() comp_block.initialize_from_block(block, prefill_list=[0]) test_node.send_and_ping(msg_cmpctblock(comp_block.to_p2p())) absolute_indices = [] with mininode_lock: assert("getblocktxn" in test_node.last_message) absolute_indices = test_node.last_message["getblocktxn"].block_txn_request.to_absolute( ) expected_indices = [] for i in [6, 7, 8, 9, 10]: expected_indices.append(block.vtx.index(ordered_txs[i])) assert_equal(absolute_indices, sorted(expected_indices)) # Now give an incorrect response. # Note that it's possible for bitcoind to be smart enough to know we're # lying, since it could check to see if the shortid matches what we're # sending, and eg disconnect us for misbehavior. If that behavior # change were made, we could just modify this test by having a # different peer provide the block further down, so that we're still # verifying that the block isn't marked bad permanently. This is good # enough for now. msg = msg_blocktxn() msg.block_transactions = BlockTransactions( block.sha256, [ordered_txs[5]] + ordered_txs[7:]) test_node.send_and_ping(msg) # Tip should not have updated assert_equal(int(node.getbestblockhash(), 16), block.hashPrevBlock) # We should receive a getdata request wait_until(lambda: "getdata" in test_node.last_message, timeout=10, lock=mininode_lock) assert_equal(len(test_node.last_message["getdata"].inv), 1) assert(test_node.last_message["getdata"].inv[0].type == 2) assert_equal( test_node.last_message["getdata"].inv[0].hash, block.sha256) # Deliver the block test_node.send_and_ping(msg_block(block)) assert_equal(int(node.getbestblockhash(), 16), block.sha256) def test_getblocktxn_handler(self, node, test_node, version): # bitcoind will not send blocktxn responses for blocks whose height is # more than 10 blocks deep. MAX_GETBLOCKTXN_DEPTH = 10 chain_height = node.getblockcount() current_height = chain_height while (current_height >= chain_height - MAX_GETBLOCKTXN_DEPTH): block_hash = node.getblockhash(current_height) block = FromHex(CBlock(), node.getblock(block_hash, False)) msg = msg_getblocktxn() msg.block_txn_request = BlockTransactionsRequest( int(block_hash, 16), []) num_to_request = random.randint(1, len(block.vtx)) msg.block_txn_request.from_absolute( sorted(random.sample(range(len(block.vtx)), num_to_request))) test_node.send_message(msg) wait_until(lambda: "blocktxn" in test_node.last_message, timeout=10, lock=mininode_lock) [tx.calc_sha256() for tx in block.vtx] with mininode_lock: assert_equal(test_node.last_message["blocktxn"].block_transactions.blockhash, int( block_hash, 16)) all_indices = msg.block_txn_request.to_absolute() for index in all_indices: tx = test_node.last_message["blocktxn"].block_transactions.transactions.pop( 0) tx.calc_sha256() assert_equal(tx.sha256, block.vtx[index].sha256) test_node.last_message.pop("blocktxn", None) current_height -= 1 # Next request should send a full block response, as we're past the # allowed depth for a blocktxn response. 
block_hash = node.getblockhash(current_height) msg.block_txn_request = BlockTransactionsRequest( int(block_hash, 16), [0]) with mininode_lock: test_node.last_message.pop("block", None) test_node.last_message.pop("blocktxn", None) test_node.send_and_ping(msg) with mininode_lock: test_node.last_message["block"].block.calc_sha256() assert_equal( test_node.last_message["block"].block.sha256, int(block_hash, 16)) assert "blocktxn" not in test_node.last_message def test_compactblocks_not_at_tip(self, node, test_node): # Test that requesting old compactblocks doesn't work. MAX_CMPCTBLOCK_DEPTH = 5 new_blocks = [] for i in range(MAX_CMPCTBLOCK_DEPTH + 1): test_node.clear_block_announcement() new_blocks.append(node.generate(1)[0]) wait_until(test_node.received_block_announcement, timeout=30, lock=mininode_lock) test_node.clear_block_announcement() test_node.send_message(msg_getdata([CInv(4, int(new_blocks[0], 16))])) wait_until(lambda: "cmpctblock" in test_node.last_message, timeout=30, lock=mininode_lock) test_node.clear_block_announcement() node.generate(1) wait_until(test_node.received_block_announcement, timeout=30, lock=mininode_lock) test_node.clear_block_announcement() with mininode_lock: test_node.last_message.pop("block", None) test_node.send_message(msg_getdata([CInv(4, int(new_blocks[0], 16))])) wait_until(lambda: "block" in test_node.last_message, timeout=30, lock=mininode_lock) with mininode_lock: test_node.last_message["block"].block.calc_sha256() assert_equal( test_node.last_message["block"].block.sha256, int(new_blocks[0], 16)) # Generate an old compactblock, and verify that it's not accepted. cur_height = node.getblockcount() hashPrevBlock = int(node.getblockhash(cur_height - 5), 16) block = self.build_block_on_tip(node) block.hashPrevBlock = hashPrevBlock block.solve() comp_block = HeaderAndShortIDs() comp_block.initialize_from_block(block) test_node.send_and_ping(msg_cmpctblock(comp_block.to_p2p())) tips = node.getchaintips() found = False for x in tips: if x["hash"] == block.hash: assert_equal(x["status"], "headers-only") found = True break assert(found) # Requesting this block via getblocktxn should silently fail # (to avoid fingerprinting attacks). msg = msg_getblocktxn() msg.block_txn_request = BlockTransactionsRequest(block.sha256, [0]) with mininode_lock: test_node.last_message.pop("blocktxn", None) test_node.send_and_ping(msg) with mininode_lock: assert "blocktxn" not in test_node.last_message def test_end_to_end_block_relay(self, node, listeners): utxo = self.utxos.pop(0) block, _ = self.build_block_with_transactions(node, utxo, 10) [l.clear_block_announcement() for l in listeners] node.submitblock(ToHex(block)) for l in listeners: wait_until(lambda: l.received_block_announcement(), timeout=30, lock=mininode_lock) with mininode_lock: for l in listeners: assert "cmpctblock" in l.last_message l.last_message["cmpctblock"].header_and_shortids.header.calc_sha256( ) assert_equal( l.last_message["cmpctblock"].header_and_shortids.header.sha256, block.sha256) # Test that we don't get disconnected if we relay a compact block with valid header, # but invalid transactions. def test_invalid_tx_in_compactblock(self, node, test_node): assert(len(self.utxos)) utxo = self.utxos[0] block, ordered_txs = self.build_block_with_transactions(node, utxo, 5) block.vtx.remove(ordered_txs[3]) block.hashMerkleRoot = block.calc_merkle_root() block.solve() # Now send the compact block with all transactions prefilled, and # verify that we don't get disconnected. 
comp_block = HeaderAndShortIDs() comp_block.initialize_from_block(block, prefill_list=[0, 1, 2, 3, 4]) msg = msg_cmpctblock(comp_block.to_p2p()) test_node.send_and_ping(msg) # Check that the tip didn't advance assert(int(node.getbestblockhash(), 16) != block.sha256) test_node.sync_with_ping() # Helper for enabling cb announcements # Send the sendcmpct request and sync headers def request_cb_announcements(self, peer, node, version=1): tip = node.getbestblockhash() peer.get_headers(locator=[int(tip, 16)], hashstop=0) msg = msg_sendcmpct() msg.version = version msg.announce = True peer.send_and_ping(msg) def test_compactblock_reconstruction_multiple_peers(self, node, stalling_peer, delivery_peer): assert(len(self.utxos)) def announce_cmpct_block(node, peer): utxo = self.utxos.pop(0) block, _ = self.build_block_with_transactions(node, utxo, 5) cmpct_block = HeaderAndShortIDs() cmpct_block.initialize_from_block(block) msg = msg_cmpctblock(cmpct_block.to_p2p()) peer.send_and_ping(msg) with mininode_lock: assert "getblocktxn" in peer.last_message return block, cmpct_block block, cmpct_block = announce_cmpct_block(node, stalling_peer) for tx in block.vtx[1:]: delivery_peer.send_message(msg_tx(tx)) delivery_peer.sync_with_ping() mempool = node.getrawmempool() for tx in block.vtx[1:]: assert(tx.hash in mempool) delivery_peer.send_and_ping(msg_cmpctblock(cmpct_block.to_p2p())) assert_equal(int(node.getbestblockhash(), 16), block.sha256) self.utxos.append( [block.vtx[-1].sha256, 0, block.vtx[-1].vout[0].nValue]) # Now test that delivering an invalid compact block won't break relay block, cmpct_block = announce_cmpct_block(node, stalling_peer) for tx in block.vtx[1:]: delivery_peer.send_message(msg_tx(tx)) delivery_peer.sync_with_ping() # TODO: modify txhash in a way that doesn't impact txid. delivery_peer.send_and_ping(msg_cmpctblock(cmpct_block.to_p2p())) # Because txhash isn't modified, we end up reconstructing the same block # assert(int(node.getbestblockhash(), 16) != block.sha256) msg = msg_blocktxn() msg.block_transactions.blockhash = block.sha256 msg.block_transactions.transactions = block.vtx[1:] stalling_peer.send_and_ping(msg) assert_equal(int(node.getbestblockhash(), 16), block.sha256) def run_test(self): # Setup the p2p connections and start up the network thread. self.test_node = self.nodes[0].add_p2p_connection(TestNode()) self.ex_softfork_node = self.nodes[1].add_p2p_connection( TestNode(), services=NODE_NETWORK) self.old_node = self.nodes[1].add_p2p_connection( TestNode(), services=NODE_NETWORK) - # Start up network handling in another thread - NetworkThread().start() + network_thread_start() self.test_node.wait_for_verack() # We will need UTXOs to construct transactions in later tests. self.make_utxos() self.log.info("Running tests:") self.log.info("\tTesting SENDCMPCT p2p message... ") self.test_sendcmpct(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_sendcmpct( self.nodes[1], self.ex_softfork_node, 1, old_node=self.old_node) sync_blocks(self.nodes) self.log.info("\tTesting compactblock construction...") self.test_compactblock_construction(self.nodes[0], self.test_node) sync_blocks(self.nodes) self.test_compactblock_construction( self.nodes[1], self.ex_softfork_node) sync_blocks(self.nodes) self.log.info("\tTesting compactblock requests... 
") self.test_compactblock_requests(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_compactblock_requests( self.nodes[1], self.ex_softfork_node, 2) sync_blocks(self.nodes) self.log.info("\tTesting getblocktxn requests...") self.test_getblocktxn_requests(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_getblocktxn_requests(self.nodes[1], self.ex_softfork_node, 2) sync_blocks(self.nodes) self.log.info("\tTesting getblocktxn handler...") self.test_getblocktxn_handler(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_getblocktxn_handler(self.nodes[1], self.ex_softfork_node, 2) self.test_getblocktxn_handler(self.nodes[1], self.old_node, 1) sync_blocks(self.nodes) self.log.info( "\tTesting compactblock requests/announcements not at chain tip...") self.test_compactblocks_not_at_tip(self.nodes[0], self.test_node) sync_blocks(self.nodes) self.test_compactblocks_not_at_tip( self.nodes[1], self.ex_softfork_node) self.test_compactblocks_not_at_tip(self.nodes[1], self.old_node) sync_blocks(self.nodes) self.log.info("\tTesting handling of incorrect blocktxn responses...") self.test_incorrect_blocktxn_response(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_incorrect_blocktxn_response( self.nodes[1], self.ex_softfork_node, 2) sync_blocks(self.nodes) # End-to-end block relay tests self.log.info("\tTesting end-to-end block relay...") self.request_cb_announcements(self.test_node, self.nodes[0]) self.request_cb_announcements(self.old_node, self.nodes[1]) self.request_cb_announcements( self.ex_softfork_node, self.nodes[1], version=2) self.test_end_to_end_block_relay( self.nodes[0], [self.ex_softfork_node, self.test_node, self.old_node]) self.test_end_to_end_block_relay( self.nodes[1], [self.ex_softfork_node, self.test_node, self.old_node]) self.log.info("\tTesting handling of invalid compact blocks...") self.test_invalid_tx_in_compactblock(self.nodes[0], self.test_node) self.test_invalid_tx_in_compactblock( self.nodes[1], self.ex_softfork_node) self.test_invalid_tx_in_compactblock(self.nodes[1], self.old_node) self.log.info( "\tTesting reconstructing compact blocks from all peers...") self.test_compactblock_reconstruction_multiple_peers( self.nodes[1], self.ex_softfork_node, self.old_node) sync_blocks(self.nodes) self.log.info("\tTesting invalid index in cmpctblock message...") self.test_invalid_cmpctblock_message() if __name__ == '__main__': CompactBlocksTest().main() diff --git a/test/functional/p2p-feefilter.py b/test/functional/p2p-feefilter.py index 6404f5573..5a673f88a 100755 --- a/test/functional/p2p-feefilter.py +++ b/test/functional/p2p-feefilter.py @@ -1,106 +1,106 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
# from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * import time ''' FeeFilterTest -- test processing of feefilter messages ''' def hashToHex(hash): return format(hash, '064x') # Wait up to 60 secs to see if the testnode has received all the expected invs def allInvsMatch(invsExpected, testnode): for x in range(60): with mininode_lock: if (sorted(invsExpected) == sorted(testnode.txinvs)): return True time.sleep(1) return False class TestNode(P2PInterface): def __init__(self): super().__init__() self.txinvs = [] def on_inv(self, message): for i in message.inv: if (i.type == 1): self.txinvs.append(hashToHex(i.hash)) def clear_invs(self): with mininode_lock: self.txinvs = [] class FeeFilterTest(BitcoinTestFramework): def set_test_params(self): self.num_nodes = 2 def run_test(self): node1 = self.nodes[1] node0 = self.nodes[0] # Get out of IBD node1.generate(1) sync_blocks(self.nodes) # Setup the p2p connections and start up the network thread. self.nodes[0].add_p2p_connection(TestNode()) - NetworkThread().start() + network_thread_start() self.nodes[0].p2p.wait_for_verack() # Test that invs are received for all txs at feerate of 20 sat/byte node1.settxfee(Decimal("0.00020000")) txids = [node1.sendtoaddress(node1.getnewaddress(), 1) for x in range(3)] assert(allInvsMatch(txids, self.nodes[0].p2p)) self.nodes[0].p2p.clear_invs() # Set a filter of 15 sat/byte self.nodes[0].p2p.send_and_ping(msg_feefilter(15000)) # Test that txs are still being received (paying 20 sat/byte) txids = [node1.sendtoaddress(node1.getnewaddress(), 1) for x in range(3)] assert(allInvsMatch(txids, self.nodes[0].p2p)) self.nodes[0].p2p.clear_invs() # Change tx fee rate to 10 sat/byte and test they are no longer # received node1.settxfee(Decimal("0.00010000")) [node1.sendtoaddress(node1.getnewaddress(), 1) for x in range(3)] sync_mempools(self.nodes) # must be sure node 0 has received all txs # Send one transaction from node0 that should be received, so that # we can sync the test on receipt (if node1's txs were relayed, they'd # be received by the time this node0 tx is received). This is # unfortunately reliant on the current relay behavior where we batch up # to 35 entries in an inv, which means that when this next transaction # is eligible for relay, the prior transactions from node1 are eligible # as well. node0.settxfee(Decimal("0.00020000")) txids = [node0.sendtoaddress(node0.getnewaddress(), 1)] assert(allInvsMatch(txids, self.nodes[0].p2p)) self.nodes[0].p2p.clear_invs() # Remove fee filter and check that txs are received again self.nodes[0].p2p.send_and_ping(msg_feefilter(0)) txids = [node1.sendtoaddress(node1.getnewaddress(), 1) for x in range(3)] assert(allInvsMatch(txids, self.nodes[0].p2p)) self.nodes[0].p2p.clear_invs() if __name__ == '__main__': FeeFilterTest().main() diff --git a/test/functional/p2p-fingerprint.py b/test/functional/p2p-fingerprint.py index 2300b8189..1af78750f 100755 --- a/test/functional/p2p-fingerprint.py +++ b/test/functional/p2p-fingerprint.py @@ -1,160 +1,160 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test various fingerprinting protections. If a stale block more than a month old or its header is requested by a peer, the node should pretend that it does not have it to avoid fingerprinting. 
""" import time from test_framework.blocktools import (create_block, create_coinbase) from test_framework.mininode import ( CInv, - NetworkThread, P2PInterface, msg_headers, msg_block, msg_getdata, msg_getheaders, + network_thread_start, wait_until, ) from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( assert_equal, p2p_port, ) class P2PFingerprintTest(BitcoinTestFramework): def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 # Build a chain of blocks on top of given one def build_chain(self, nblocks, prev_hash, prev_height, prev_median_time): blocks = [] for _ in range(nblocks): coinbase = create_coinbase(prev_height + 1) block_time = prev_median_time + 1 block = create_block(int(prev_hash, 16), coinbase, block_time) block.solve() blocks.append(block) prev_hash = block.hash prev_height += 1 prev_median_time = block_time return blocks # Send a getdata request for a given block hash def send_block_request(self, block_hash, node): msg = msg_getdata() # 2 == "Block" msg.inv.append(CInv(2, block_hash)) node.send_message(msg) # Send a getheaders request for a given single block hash def send_header_request(self, block_hash, node): msg = msg_getheaders() msg.hashstop = block_hash node.send_message(msg) # Check whether last block received from node has a given hash def last_block_equals(self, expected_hash, node): block_msg = node.last_message.get("block") return block_msg and block_msg.block.rehash() == expected_hash # Check whether last block header received from node has a given hash def last_header_equals(self, expected_hash, node): headers_msg = node.last_message.get("headers") return (headers_msg and headers_msg.headers and headers_msg.headers[0].rehash() == expected_hash) # Checks that stale blocks timestamped more than a month ago are not served # by the node while recent stale blocks and old active chain blocks are. # This does not currently test that stale blocks timestamped within the # last month but that have over a month's worth of work are also withheld. 
def run_test(self): node0 = self.nodes[0].add_p2p_connection(P2PInterface()) - NetworkThread().start() + network_thread_start() node0.wait_for_verack() # Set node time to 60 days ago self.nodes[0].setmocktime(int(time.time()) - 60 * 24 * 60 * 60) # Generating a chain of 10 blocks block_hashes = self.nodes[0].generate(nblocks=10) # Create longer chain starting 2 blocks before current tip height = len(block_hashes) - 2 block_hash = block_hashes[height - 1] block_time = self.nodes[0].getblockheader(block_hash)["mediantime"] + 1 new_blocks = self.build_chain(5, block_hash, height, block_time) # Force reorg to a longer chain node0.send_message(msg_headers(new_blocks)) node0.wait_for_getdata() for block in new_blocks: node0.send_and_ping(msg_block(block)) # Check that reorg succeeded assert_equal(self.nodes[0].getblockcount(), 13) stale_hash = int(block_hashes[-1], 16) # Check that getdata request for stale block succeeds self.send_block_request(stale_hash, node0) def test_function(): return self.last_block_equals(stale_hash, node0) wait_until(test_function, timeout=3) # Check that getheader request for stale block header succeeds self.send_header_request(stale_hash, node0) def test_function(): return self.last_header_equals(stale_hash, node0) wait_until(test_function, timeout=3) # Longest chain is extended so stale is much older than chain tip self.nodes[0].setmocktime(0) tip = self.nodes[0].generate(nblocks=1)[0] assert_equal(self.nodes[0].getblockcount(), 14) # Send getdata & getheaders to refresh last received getheader message block_hash = int(tip, 16) self.send_block_request(block_hash, node0) self.send_header_request(block_hash, node0) node0.sync_with_ping() # Request for very old stale block should now fail self.send_block_request(stale_hash, node0) time.sleep(3) assert not self.last_block_equals(stale_hash, node0) # Request for very old stale block header should now fail self.send_header_request(stale_hash, node0) time.sleep(3) assert not self.last_header_equals(stale_hash, node0) # Verify we can fetch very old blocks and headers on the active chain block_hash = int(block_hashes[2], 16) self.send_block_request(block_hash, node0) self.send_header_request(block_hash, node0) node0.sync_with_ping() self.send_block_request(block_hash, node0) def test_function(): return self.last_block_equals(block_hash, node0) wait_until(test_function, timeout=3) self.send_header_request(block_hash, node0) def test_function(): return self.last_header_equals(block_hash, node0) wait_until(test_function, timeout=3) if __name__ == '__main__': P2PFingerprintTest().main() diff --git a/test/functional/p2p-fullblocktest.py b/test/functional/p2p-fullblocktest.py index 47d8e1a9f..ef730335b 100755 --- a/test/functional/p2p-fullblocktest.py +++ b/test/functional/p2p-fullblocktest.py @@ -1,1313 +1,1313 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
from test_framework.test_framework import ComparisonTestFramework from test_framework.util import * from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import time from test_framework.key import CECKey from test_framework.script import * +from test_framework.mininode import network_thread_start import struct from test_framework.cdefs import LEGACY_MAX_BLOCK_SIZE, MAX_BLOCK_SIGOPS_PER_MB class PreviousSpendableOutput(): def __init__(self, tx=CTransaction(), n=-1): self.tx = tx self.n = n # the output we're spending ''' This reimplements tests from the bitcoinj/FullBlockTestGenerator used by the pull-tester. We use the testing framework in which we expect a particular answer from each test. ''' # Use this class for tests that require behavior other than normal "mininode" behavior. # For now, it is used to serialize a bloated varint (b64). class CBrokenBlock(CBlock): def __init__(self, header=None): super(CBrokenBlock, self).__init__(header) def initialize(self, base_block): self.vtx = copy.deepcopy(base_block.vtx) self.hashMerkleRoot = self.calc_merkle_root() def serialize(self): r = b"" r += super(CBlock, self).serialize() r += struct.pack(" b1 (0) -> b2 (1) block(1, spend=out[0]) save_spendable_output() yield accepted() block(2, spend=out[1]) yield accepted() save_spendable_output() # so fork like this: # # genesis -> b1 (0) -> b2 (1) # \-> b3 (1) # # Nothing should happen at this point. We saw b2 first so it takes # priority. tip(1) b3 = block(3, spend=out[1]) txout_b3 = PreviousSpendableOutput(b3.vtx[1], 0) yield rejected() # Now we add another block to make the alternative chain longer. # # genesis -> b1 (0) -> b2 (1) # \-> b3 (1) -> b4 (2) block(4, spend=out[2]) yield accepted() # ... and back to the first chain. # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b3 (1) -> b4 (2) tip(2) block(5, spend=out[2]) save_spendable_output() yield rejected() block(6, spend=out[3]) yield accepted() # Try to create a fork that double-spends # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b7 (2) -> b8 (4) # \-> b3 (1) -> b4 (2) tip(5) block(7, spend=out[2]) yield rejected() block(8, spend=out[4]) yield rejected() # Try to create a block that has too much fee # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b9 (4) # \-> b3 (1) -> b4 (2) tip(6) block(9, spend=out[4], additional_coinbase_value=1) yield rejected(RejectResult(16, b'bad-cb-amount')) # Create a fork that ends in a block with too much fee (the one that causes the reorg) # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b10 (3) -> b11 (4) # \-> b3 (1) -> b4 (2) tip(5) block(10, spend=out[3]) yield rejected() block(11, spend=out[4], additional_coinbase_value=1) yield rejected(RejectResult(16, b'bad-cb-amount')) # Try again, but with a valid fork first # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b14 (5) # (b12 added last) # \-> b3 (1) -> b4 (2) tip(5) b12 = block(12, spend=out[3]) save_spendable_output() b13 = block(13, spend=out[4]) # Deliver the block header for b12, and the block b13. # b13 should be accepted but the tip won't advance until b12 is # delivered. yield TestInstance([[CBlockHeader(b12), None], [b13, False]]) save_spendable_output() # b14 is invalid, but the node won't know that until it tries to connect # Tip still can't advance because b12 is missing block(14, spend=out[5], additional_coinbase_value=1) yield rejected() yield TestInstance([[b12, True, b13.sha256]]) # New tip should be b13. 
# Add a block with MAX_BLOCK_SIGOPS_PER_MB and one with one more sigop # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b16 (6) # \-> b3 (1) -> b4 (2) # Test that a block with a lot of checksigs is okay lots_of_checksigs = CScript( [OP_CHECKSIG] * (MAX_BLOCK_SIGOPS_PER_MB - 1)) tip(13) block(15, spend=out[5], script=lots_of_checksigs) yield accepted() save_spendable_output() # Test that a block with too many checksigs is rejected too_many_checksigs = CScript([OP_CHECKSIG] * (MAX_BLOCK_SIGOPS_PER_MB)) block(16, spend=out[6], script=too_many_checksigs) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Attempt to spend a transaction created on a different fork # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b17 (b3.vtx[1]) # \-> b3 (1) -> b4 (2) tip(15) block(17, spend=txout_b3) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # Attempt to spend a transaction created on a different fork (on a fork this time) # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) # \-> b18 (b3.vtx[1]) -> b19 (6) # \-> b3 (1) -> b4 (2) tip(13) block(18, spend=txout_b3) yield rejected() block(19, spend=out[6]) yield rejected() # Attempt to spend a coinbase at depth too low # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b20 (7) # \-> b3 (1) -> b4 (2) tip(15) block(20, spend=out[7]) yield rejected(RejectResult(16, b'bad-txns-premature-spend-of-coinbase')) # Attempt to spend a coinbase at depth too low (on a fork this time) # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) # \-> b21 (6) -> b22 (5) # \-> b3 (1) -> b4 (2) tip(13) block(21, spend=out[6]) yield rejected() block(22, spend=out[5]) yield rejected() # Create a block on either side of LEGACY_MAX_BLOCK_SIZE and make sure its accepted/rejected # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b23 (6) # \-> b24 (6) -> b25 (7) # \-> b3 (1) -> b4 (2) tip(15) b23 = block(23, spend=out[6]) tx = CTransaction() script_length = LEGACY_MAX_BLOCK_SIZE - len(b23.serialize()) - 69 script_output = CScript([b'\x00' * script_length]) tx.vout.append(CTxOut(0, script_output)) tx.vin.append(CTxIn(COutPoint(b23.vtx[1].sha256, 0))) b23 = update_block(23, [tx]) # Make sure the math above worked out to produce a max-sized block assert_equal(len(b23.serialize()), LEGACY_MAX_BLOCK_SIZE) yield accepted() save_spendable_output() # Create blocks with a coinbase input script size out of range # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b23 (6) -> b30 (7) # \-> ... (6) -> ... (7) # \-> b3 (1) -> b4 (2) tip(15) b26 = block(26, spend=out[6]) b26.vtx[0].vin[0].scriptSig = b'\x00' b26.vtx[0].rehash() # update_block causes the merkle root to get updated, even with no new # transactions, and updates the required state. b26 = update_block(26, []) yield rejected(RejectResult(16, b'bad-cb-length')) # Extend the b26 chain to make sure bitcoind isn't accepting b26 block(27, spend=out[7]) yield rejected(False) # Now try a too-large-coinbase script tip(15) b28 = block(28, spend=out[6]) b28.vtx[0].vin[0].scriptSig = b'\x00' * 101 b28.vtx[0].rehash() b28 = update_block(28, []) yield rejected(RejectResult(16, b'bad-cb-length')) # Extend the b28 chain to make sure bitcoind isn't accepting b28 block(29, spend=out[7]) yield rejected(False) # b30 has a max-sized coinbase scriptSig. 
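# Context for b26/b28/b30: consensus requires the coinbase scriptSig to be
# between 2 and 100 bytes inclusive (otherwise bad-cb-length). b26 used a
# 1-byte scriptSig (too small), b28 used 101 bytes (too large); b30 below
# uses exactly 100 bytes, the largest size that is still valid.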
tip(23) b30 = block(30) b30.vtx[0].vin[0].scriptSig = b'\x00' * 100 b30.vtx[0].rehash() b30 = update_block(30, []) yield accepted() save_spendable_output() # b31 - b35 - check sigops of OP_CHECKMULTISIG / OP_CHECKMULTISIGVERIFY / OP_CHECKSIGVERIFY # # genesis -> ... -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10) # \-> b36 (11) # \-> b34 (10) # \-> b32 (9) # # MULTISIG: each op code counts as 20 sigops. To create the edge case, # pack another 19 sigops at the end. lots_of_multisigs = CScript([OP_CHECKMULTISIG] * ( (MAX_BLOCK_SIGOPS_PER_MB - 1) // 20) + [OP_CHECKSIG] * 19) b31 = block(31, spend=out[8], script=lots_of_multisigs) assert_equal(get_legacy_sigopcount_block(b31), MAX_BLOCK_SIGOPS_PER_MB) yield accepted() save_spendable_output() # this goes over the limit because the coinbase has one sigop too_many_multisigs = CScript( [OP_CHECKMULTISIG] * (MAX_BLOCK_SIGOPS_PER_MB // 20)) b32 = block(32, spend=out[9], script=too_many_multisigs) assert_equal(get_legacy_sigopcount_block( b32), MAX_BLOCK_SIGOPS_PER_MB + 1) yield rejected(RejectResult(16, b'bad-blk-sigops')) # CHECKMULTISIGVERIFY tip(31) lots_of_multisigs = CScript([OP_CHECKMULTISIGVERIFY] * ( (MAX_BLOCK_SIGOPS_PER_MB - 1) // 20) + [OP_CHECKSIG] * 19) block(33, spend=out[9], script=lots_of_multisigs) yield accepted() save_spendable_output() too_many_multisigs = CScript( [OP_CHECKMULTISIGVERIFY] * (MAX_BLOCK_SIGOPS_PER_MB // 20)) block(34, spend=out[10], script=too_many_multisigs) yield rejected(RejectResult(16, b'bad-blk-sigops')) # CHECKSIGVERIFY tip(33) lots_of_checksigs = CScript( [OP_CHECKSIGVERIFY] * (MAX_BLOCK_SIGOPS_PER_MB - 1)) b35 = block(35, spend=out[10], script=lots_of_checksigs) yield accepted() save_spendable_output() too_many_checksigs = CScript( [OP_CHECKSIGVERIFY] * (MAX_BLOCK_SIGOPS_PER_MB)) block(36, spend=out[11], script=too_many_checksigs) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Check spending of a transaction in a block which failed to connect # # b6 (3) # b12 (3) -> b13 (4) -> b15 (5) -> b23 (6) -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10) # \-> b37 (11) # \-> b38 (11/37) # # save 37's spendable output, but then double-spend out11 to invalidate # the block tip(35) b37 = block(37, spend=out[11]) txout_b37 = PreviousSpendableOutput(b37.vtx[1], 0) tx = create_and_sign_tx(out[11].tx, out[11].n, 0) b37 = update_block(37, [tx]) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # attempt to spend b37's first non-coinbase tx, at which point b37 was # still considered valid tip(35) block(38, spend=txout_b37) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # Check P2SH SigOp counting # # # 13 (4) -> b15 (5) -> b23 (6) -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b41 (12) # \-> b40 (12) # # b39 - create some P2SH outputs that will require 6 sigops to spend: # # redeem_script = COINBASE_PUBKEY, (OP_2DUP+OP_CHECKSIGVERIFY) * 5, OP_CHECKSIG # p2sh_script = OP_HASH160, ripemd160(sha256(script)), OP_EQUAL # tip(35) b39 = block(39) b39_outputs = 0 b39_sigops_per_output = 6 # Build the redeem script, hash it, use hash to create the p2sh script redeem_script = CScript([self.coinbase_pubkey] + [ OP_2DUP, OP_CHECKSIGVERIFY] * 5 + [OP_CHECKSIG]) redeem_script_hash = hash160(redeem_script) p2sh_script = CScript([OP_HASH160, redeem_script_hash, OP_EQUAL]) # Create a transaction that spends one satoshi to the p2sh_script, the rest to OP_TRUE # This must be signed because it is spending a coinbase spend = out[11] tx = create_tx(spend.tx, spend.n, 1, p2sh_script) 
tx.vout.append( CTxOut(spend.tx.vout[spend.n].nValue - 1, CScript([OP_TRUE]))) self.sign_tx(tx, spend.tx, spend.n) tx.rehash() b39 = update_block(39, [tx]) b39_outputs += 1 # Until block is full, add tx's with 1 satoshi to p2sh_script, the rest # to OP_TRUE tx_new = None tx_last = tx tx_last_n = len(tx.vout) - 1 total_size = len(b39.serialize()) while(total_size < LEGACY_MAX_BLOCK_SIZE): tx_new = create_tx(tx_last, tx_last_n, 1, p2sh_script) tx_new.vout.append( CTxOut(tx_last.vout[tx_last_n].nValue - 1, CScript([OP_TRUE]))) tx_new.rehash() total_size += len(tx_new.serialize()) if total_size >= LEGACY_MAX_BLOCK_SIZE: break b39.vtx.append(tx_new) # add tx to block tx_last = tx_new tx_last_n = len(tx_new.vout) - 1 b39_outputs += 1 b39 = update_block(39, []) yield accepted() save_spendable_output() # Test sigops in P2SH redeem scripts # # b40 creates 3333 tx's spending the 6-sigop P2SH outputs from b39 for a total of 19998 sigops. # The first tx has one sigop and then at the end we add 2 more to put us just over the max. # # b41 does the same, less one, so it has the maximum sigops permitted. # tip(39) b40 = block(40, spend=out[12]) sigops = get_legacy_sigopcount_block(b40) numTxs = (MAX_BLOCK_SIGOPS_PER_MB - sigops) // b39_sigops_per_output assert_equal(numTxs <= b39_outputs, True) lastOutpoint = COutPoint(b40.vtx[1].sha256, 0) lastAmount = b40.vtx[1].vout[0].nValue new_txs = [] for i in range(1, numTxs + 1): tx = CTransaction() tx.vout.append(CTxOut(1, CScript([OP_TRUE]))) tx.vin.append(CTxIn(lastOutpoint, b'')) # second input is corresponding P2SH output from b39 tx.vin.append(CTxIn(COutPoint(b39.vtx[i].sha256, 0), b'')) # Note: must pass the redeem_script (not p2sh_script) to the # signature hash function sighash = SignatureHashForkId( redeem_script, tx, 1, SIGHASH_ALL | SIGHASH_FORKID, lastAmount) sig = self.coinbase_key.sign( sighash) + bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID])) scriptSig = CScript([sig, redeem_script]) tx.vin[1].scriptSig = scriptSig pad_tx(tx) tx.rehash() new_txs.append(tx) lastOutpoint = COutPoint(tx.sha256, 0) lastAmount = tx.vout[0].nValue b40_sigops_to_fill = MAX_BLOCK_SIGOPS_PER_MB - \ (numTxs * b39_sigops_per_output + sigops) + 1 tx = CTransaction() tx.vin.append(CTxIn(lastOutpoint, b'')) tx.vout.append(CTxOut(1, CScript([OP_CHECKSIG] * b40_sigops_to_fill))) pad_tx(tx) tx.rehash() new_txs.append(tx) update_block(40, new_txs) yield rejected(RejectResult(16, b'bad-blk-sigops')) # same as b40, but one less sigop tip(39) block(41, spend=None) update_block(41, [b40tx for b40tx in b40.vtx[1:] if b40tx != tx]) b41_sigops_to_fill = b40_sigops_to_fill - 1 tx = CTransaction() tx.vin.append(CTxIn(lastOutpoint, b'')) tx.vout.append(CTxOut(1, CScript([OP_CHECKSIG] * b41_sigops_to_fill))) pad_tx(tx) update_block(41, [tx]) yield accepted() # Fork off of b39 to create a constant base again # # b23 (6) -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) # \-> b41 (12) # tip(39) block(42, spend=out[12]) yield rejected() save_spendable_output() block(43, spend=out[13]) yield accepted() save_spendable_output() # Test a number of really invalid scenarios # # -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) -> b44 (14) # \-> ??? (15) # The next few blocks are going to be created "by hand" since they'll do funky things, such as having # the first transaction be non-coinbase, etc. The purpose of b44 is to # make sure this works. 
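# Notes on the hand-rolled header fields used in the next few blocks:
# nBits = 0x207fffff encodes regtest's easiest-allowed target, nTime must
# move past the tip's time, and solve() grinds nNonce until the block hash
# is at or below the target implied by nBits.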
height = self.block_heights[self.tip.sha256] + 1 coinbase = create_coinbase(height, self.coinbase_pubkey) b44 = CBlock() b44.nTime = self.tip.nTime + 1 b44.hashPrevBlock = self.tip.sha256 b44.nBits = 0x207fffff b44.vtx.append(coinbase) b44.hashMerkleRoot = b44.calc_merkle_root() b44.solve() self.tip = b44 self.block_heights[b44.sha256] = height self.blocks[44] = b44 yield accepted() # A block with a non-coinbase as the first tx non_coinbase = create_tx(out[15].tx, out[15].n, 1) b45 = CBlock() b45.nTime = self.tip.nTime + 1 b45.hashPrevBlock = self.tip.sha256 b45.nBits = 0x207fffff b45.vtx.append(non_coinbase) b45.hashMerkleRoot = b45.calc_merkle_root() b45.calc_sha256() b45.solve() self.block_heights[b45.sha256] = self.block_heights[ self.tip.sha256] + 1 self.tip = b45 self.blocks[45] = b45 yield rejected(RejectResult(16, b'bad-cb-missing')) # A block with no txns tip(44) b46 = CBlock() b46.nTime = b44.nTime + 1 b46.hashPrevBlock = b44.sha256 b46.nBits = 0x207fffff b46.vtx = [] b46.hashMerkleRoot = 0 b46.solve() self.block_heights[b46.sha256] = self.block_heights[b44.sha256] + 1 self.tip = b46 assert 46 not in self.blocks self.blocks[46] = b46 s = ser_uint256(b46.hashMerkleRoot) yield rejected(RejectResult(16, b'bad-cb-missing')) # A block with invalid work tip(44) b47 = block(47, solve=False) target = uint256_from_compact(b47.nBits) while b47.sha256 < target: # changed > to < b47.nNonce += 1 b47.rehash() yield rejected(RejectResult(16, b'high-hash')) # A block with timestamp > 2 hrs in the future tip(44) b48 = block(48, solve=False) b48.nTime = int(time.time()) + 60 * 60 * 3 b48.solve() yield rejected(RejectResult(16, b'time-too-new')) # A block with an invalid merkle hash tip(44) b49 = block(49) b49.hashMerkleRoot += 1 b49.solve() yield rejected(RejectResult(16, b'bad-txnmrklroot')) # A block with an incorrect POW limit tip(44) b50 = block(50) b50.nBits = b50.nBits - 1 b50.solve() yield rejected(RejectResult(16, b'bad-diffbits')) # A block with two coinbase txns tip(44) b51 = block(51) cb2 = create_coinbase(51, self.coinbase_pubkey) b51 = update_block(51, [cb2]) yield rejected(RejectResult(16, b'bad-tx-coinbase')) # A block w/ duplicate txns tip(44) b52 = block(52, spend=out[15]) b52 = update_block(52, [b52.vtx[1]]) yield rejected(RejectResult(16, b'tx-duplicate')) # Test block timestamps # -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) # \-> b54 (15) # tip(43) block(53, spend=out[14]) yield rejected() # rejected since b44 is at same height save_spendable_output() # invalid timestamp (b35 is 5 blocks back, so its time is # MedianTimePast) b54 = block(54, spend=out[15]) b54.nTime = b35.nTime - 1 b54.solve() yield rejected(RejectResult(16, b'time-too-old')) # valid timestamp tip(53) b55 = block(55, spend=out[15]) b55.nTime = b35.nTime update_block(55, []) yield accepted() save_spendable_output() # Test CVE-2012-2459 # # -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57p2 (16) # \-> b57 (16) # \-> b56p2 (16) # \-> b56 (16) # # Merkle tree malleability (CVE-2012-2459): repeating sequences of transactions in a block without # affecting the merkle root of a block, while still invalidating it. # See: src/consensus/merkle.h # # b57 has three txns: coinbase, tx, tx1. The merkle root computation will duplicate tx. # Result: OK # # b56 copies b57 but duplicates tx1 and does not recalculate the block hash. So it has a valid merkle # root but duplicate transactions. 
# Result: Fails # # b57p2 has six transactions in its merkle tree: # - coinbase, tx, tx1, tx2, tx3, tx4 # Merkle root calculation will duplicate as necessary. # Result: OK. # # b56p2 copies b57p2 but adds both tx3 and tx4. The purpose of the test is to make sure the code catches # duplicate txns that are not next to one another with the "bad-txns-duplicate" error (which indicates # that the error was caught early, avoiding a DOS vulnerability.) # b57 - a good block with 2 txs, don't submit until end tip(55) b57 = block(57) tx = create_and_sign_tx(out[16].tx, out[16].n, 1) tx1 = create_tx(tx, 0, 1) b57 = update_block(57, [tx, tx1]) # b56 - copy b57, add a duplicate tx tip(55) b56 = copy.deepcopy(b57) self.blocks[56] = b56 assert_equal(len(b56.vtx), 3) b56 = update_block(56, [b57.vtx[2]]) assert_equal(b56.hash, b57.hash) yield rejected(RejectResult(16, b'bad-txns-duplicate')) # b57p2 - a good block with 6 tx'es, don't submit until end tip(55) b57p2 = block("57p2") tx = create_and_sign_tx(out[16].tx, out[16].n, 1) tx1 = create_tx(tx, 0, 1) tx2 = create_tx(tx1, 0, 1) tx3 = create_tx(tx2, 0, 1) tx4 = create_tx(tx3, 0, 1) b57p2 = update_block("57p2", [tx, tx1, tx2, tx3, tx4]) # b56p2 - copy b57p2, duplicate two non-consecutive tx's tip(55) b56p2 = copy.deepcopy(b57p2) self.blocks["b56p2"] = b56p2 assert_equal(len(b56p2.vtx), 6) b56p2 = update_block("b56p2", b56p2.vtx[4:6], reorder=False) assert_equal(b56p2.hash, b57p2.hash) yield rejected(RejectResult(16, b'bad-txns-duplicate')) tip("57p2") yield accepted() tip(57) yield rejected() # rejected because 57p2 seen first save_spendable_output() # Test a few invalid tx types # # -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) # \-> ??? (17) # # tx with prevout.n out of range tip(57) b58 = block(58, spend=out[17]) tx = CTransaction() assert(len(out[17].tx.vout) < 42) tx.vin.append( CTxIn(COutPoint(out[17].tx.sha256, 42), CScript([OP_TRUE]), 0xffffffff)) tx.vout.append(CTxOut(0, b"")) pad_tx(tx) tx.calc_sha256() b58 = update_block(58, [tx]) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # tx with output value > input value out of range tip(57) b59 = block(59) tx = create_and_sign_tx(out[17].tx, out[17].n, 51 * COIN) b59 = update_block(59, [tx]) yield rejected(RejectResult(16, b'bad-txns-in-belowout')) # reset to good chain tip(57) b60 = block(60, spend=out[17]) yield accepted() save_spendable_output() # Test BIP30 # # -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) # \-> b61 (18) # # Blocks are not allowed to contain a transaction whose id matches that of an earlier, # not-fully-spent transaction in the same chain. To test, make identical coinbases; # the second one should be rejected. 
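# Why the coinbases can be made to collide: after copying b60's scriptSig
# into b61's coinbase, the two transactions serialize byte-for-byte
# identically (same input, outputs and subsidy), so they share one txid.
# BIP30 forbids a new transaction whose txid matches an earlier,
# not-fully-spent one, since its outputs would overwrite the old UTXOs.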
# tip(60) b61 = block(61, spend=out[18]) b61.vtx[0].vin[0].scriptSig = b60.vtx[ 0].vin[0].scriptSig # equalize the coinbases b61.vtx[0].rehash() b61 = update_block(61, []) assert_equal(b60.vtx[0].serialize(), b61.vtx[0].serialize()) yield rejected(RejectResult(16, b'bad-txns-BIP30')) # Test tx.isFinal is properly rejected (not an exhaustive tx.isFinal test, that should be in data-driven transaction tests) # # -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) # \-> b62 (18) # tip(60) b62 = block(62) tx = CTransaction() tx.nLockTime = 0xffffffff # this locktime is non-final assert(out[18].n < len(out[18].tx.vout)) tx.vin.append( CTxIn(COutPoint(out[18].tx.sha256, out[18].n))) # don't set nSequence tx.vout.append(CTxOut(0, CScript([OP_TRUE]))) assert(tx.vin[0].nSequence < 0xffffffff) tx.calc_sha256() b62 = update_block(62, [tx]) yield rejected(RejectResult(16, b'bad-txns-nonfinal')) # Test a non-final coinbase is also rejected # # -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) # \-> b63 (-) # tip(60) b63 = block(63) b63.vtx[0].nLockTime = 0xffffffff b63.vtx[0].vin[0].nSequence = 0xDEADBEEF b63.vtx[0].rehash() b63 = update_block(63, []) yield rejected(RejectResult(16, b'bad-txns-nonfinal')) # This checks that a block with a bloated VARINT between the block_header and the array of tx such that # the block is > LEGACY_MAX_BLOCK_SIZE with the bloated varint, but <= LEGACY_MAX_BLOCK_SIZE without the bloated varint, # does not cause a subsequent, identical block with canonical encoding to be rejected. The test does not # care whether the bloated block is accepted or rejected; it only cares that the second block is accepted. # # What matters is that the receiving node should not reject the bloated block, and then reject the canonical # block on the basis that it's the same as an already-rejected block (which would be a consensus failure.) 
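# Sketch of the two encodings of the tx-count CompactSize for, say, a
# two-transaction block:
#   canonical: 0x02                                  (1 byte)
#   bloated:   0xff + 0x0200000000000000 (LE uint64) (9 bytes)
# Both decode to the same count, but the bloated form is 8 bytes larger,
# which is why b64a below serializes to LEGACY_MAX_BLOCK_SIZE + 8.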
# # -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) # \ # b64a (18) # b64a is a bloated block (non-canonical varint) # b64 is a good block (same as b64a but w/ canonical varint) # tip(60) regular_block = block("64a", spend=out[18]) # make it a "broken_block," with non-canonical serialization b64a = CBrokenBlock(regular_block) b64a.initialize(regular_block) self.blocks["64a"] = b64a self.tip = b64a tx = CTransaction() # use canonical serialization to calculate size script_length = LEGACY_MAX_BLOCK_SIZE - \ len(b64a.normal_serialize()) - 69 script_output = CScript([b'\x00' * script_length]) tx.vout.append(CTxOut(0, script_output)) tx.vin.append(CTxIn(COutPoint(b64a.vtx[1].sha256, 0))) b64a = update_block("64a", [tx]) assert_equal(len(b64a.serialize()), LEGACY_MAX_BLOCK_SIZE + 8) yield TestInstance([[self.tip, None]]) # comptool workaround: to make sure b64 is delivered, manually erase # b64a from blockstore self.test.block_store.erase(b64a.sha256) tip(60) b64 = CBlock(b64a) b64.vtx = copy.deepcopy(b64a.vtx) assert_equal(b64.hash, b64a.hash) assert_equal(len(b64.serialize()), LEGACY_MAX_BLOCK_SIZE) self.blocks[64] = b64 update_block(64, []) yield accepted() save_spendable_output() # Spend an output created in the block itself # # -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19) # tip(64) block(65) tx1 = create_and_sign_tx( out[19].tx, out[19].n, out[19].tx.vout[0].nValue) tx2 = create_and_sign_tx(tx1, 0, 0) update_block(65, [tx1, tx2]) yield accepted() save_spendable_output() # Attempt to double-spend a transaction created in a block # # -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19) # \-> b67 (20) # # tip(65) block(67) tx1 = create_and_sign_tx( out[20].tx, out[20].n, out[20].tx.vout[0].nValue) tx2 = create_and_sign_tx(tx1, 0, 1) tx3 = create_and_sign_tx(tx1, 0, 2) update_block(67, [tx1, tx2, tx3]) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # More tests of block subsidy # # -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19) -> b69 (20) # \-> b68 (20) # # b68 - coinbase with an extra 10 satoshis, # creates a tx that has 9 satoshis from out[20] go to fees # this fails because the coinbase is trying to claim 1 satoshi too much in fees # # b69 - coinbase with extra 10 satoshis, and a tx that gives a 10 satoshi fee # this succeeds # tip(65) block(68, additional_coinbase_value=10) tx = create_and_sign_tx( out[20].tx, out[20].n, out[20].tx.vout[0].nValue - 9) update_block(68, [tx]) yield rejected(RejectResult(16, b'bad-cb-amount')) tip(65) b69 = block(69, additional_coinbase_value=10) tx = create_and_sign_tx( out[20].tx, out[20].n, out[20].tx.vout[0].nValue - 10) update_block(69, [tx]) yield accepted() save_spendable_output() # Test spending the outpoint of a non-existent transaction # # -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19) -> b69 (20) # \-> b70 (21) # tip(69) block(70, spend=out[21]) bogus_tx = CTransaction() bogus_tx.sha256 = uint256_from_str( b"23c70ed7c0506e9178fc1a987f40a33946d4ad4c962b5ae3a52546da53af0c5c") tx = CTransaction() tx.vin.append(CTxIn(COutPoint(bogus_tx.sha256, 0), b"", 0xffffffff)) tx.vout.append(CTxOut(1, b"")) pad_tx(tx) tx.rehash() update_block(70, [tx]) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # Test accepting an invalid block which has the same hash as a valid one (via merkle tree tricks) # # -> b53 (14) -> b55 (15) -> b57 (16) 
-> b60 (17) -> b64 (18) -> b65 (19) -> b69 (20) -> b72 (21) # \-> b71 (21) # # b72 is a good block. # b71 is a copy of b72, but re-adds one of its transactions. However, it has the same hash as b72. # tip(69) b72 = block(72) tx1 = create_and_sign_tx(out[21].tx, out[21].n, 2) tx2 = create_and_sign_tx(tx1, 0, 1) b72 = update_block(72, [tx1, tx2]) # now tip is 72 b71 = copy.deepcopy(b72) # add duplicate last transaction b71.vtx.append(b72.vtx[-1]) # b71 builds off b69 self.block_heights[b71.sha256] = self.block_heights[b69.sha256] + 1 self.blocks[71] = b71 assert_equal(len(b71.vtx), 4) assert_equal(len(b72.vtx), 3) assert_equal(b72.sha256, b71.sha256) tip(71) yield rejected(RejectResult(16, b'bad-txns-duplicate')) tip(72) yield accepted() save_spendable_output() # Test some invalid scripts and MAX_BLOCK_SIGOPS_PER_MB # # -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19) -> b69 (20) -> b72 (21) # \-> b** (22) # # b73 - tx with excessive sigops that are placed after an excessively large script element. # The purpose of the test is to make sure those sigops are counted. # # script is a bytearray of size 20,526 # # bytearray[0-19,998] : OP_CHECKSIG # bytearray[19,999] : OP_PUSHDATA4 # bytearray[20,000-20,003]: 521 (max_script_element_size+1, in little-endian format) # bytearray[20,004-20,524]: unread data (script_element) # bytearray[20,525] : OP_CHECKSIG (this puts us over the limit) # tip(72) b73 = block(73) size = MAX_BLOCK_SIGOPS_PER_MB - 1 + \ MAX_SCRIPT_ELEMENT_SIZE + 1 + 5 + 1 a = bytearray([OP_CHECKSIG] * size) a[MAX_BLOCK_SIGOPS_PER_MB - 1] = int("4e", 16) # OP_PUSHDATA4 element_size = MAX_SCRIPT_ELEMENT_SIZE + 1 a[MAX_BLOCK_SIGOPS_PER_MB] = element_size % 256 a[MAX_BLOCK_SIGOPS_PER_MB + 1] = element_size // 256 a[MAX_BLOCK_SIGOPS_PER_MB + 2] = 0 a[MAX_BLOCK_SIGOPS_PER_MB + 3] = 0 tx = create_and_sign_tx(out[22].tx, 0, 1, CScript(a)) b73 = update_block(73, [tx]) assert_equal( get_legacy_sigopcount_block(b73), MAX_BLOCK_SIGOPS_PER_MB + 1) yield rejected(RejectResult(16, b'bad-blk-sigops')) # b74/75 - if we push an invalid script element, all previous sigops are counted, # but sigops after the element are not counted. # # The invalid script element is that the push_data indicates that # there will be a large amount of data (0xfffffffe bytes), but we only # provide a much smaller number. These bytes are CHECKSIGS so they would # cause b75 to fail for excessive sigops, if those bytes were counted. 
# # b74 fails because we put MAX_BLOCK_SIGOPS_PER_MB+1 before the element # b75 succeeds because we put MAX_BLOCK_SIGOPS_PER_MB before the element # # tip(72) b74 = block(74) size = MAX_BLOCK_SIGOPS_PER_MB - 1 + \ MAX_SCRIPT_ELEMENT_SIZE + 42 # total = 20,561 a = bytearray([OP_CHECKSIG] * size) a[MAX_BLOCK_SIGOPS_PER_MB] = 0x4e a[MAX_BLOCK_SIGOPS_PER_MB + 1] = 0xfe a[MAX_BLOCK_SIGOPS_PER_MB + 2] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 3] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 4] = 0xff tx = create_and_sign_tx(out[22].tx, 0, 1, CScript(a)) b74 = update_block(74, [tx]) yield rejected(RejectResult(16, b'bad-blk-sigops')) tip(72) b75 = block(75) size = MAX_BLOCK_SIGOPS_PER_MB - 1 + MAX_SCRIPT_ELEMENT_SIZE + 42 a = bytearray([OP_CHECKSIG] * size) a[MAX_BLOCK_SIGOPS_PER_MB - 1] = 0x4e a[MAX_BLOCK_SIGOPS_PER_MB] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 1] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 2] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 3] = 0xff tx = create_and_sign_tx(out[22].tx, 0, 1, CScript(a)) b75 = update_block(75, [tx]) yield accepted() save_spendable_output() # Check that if we push an element filled with CHECKSIGs, they are not # counted tip(75) b76 = block(76) size = MAX_BLOCK_SIGOPS_PER_MB - 1 + MAX_SCRIPT_ELEMENT_SIZE + 1 + 5 a = bytearray([OP_CHECKSIG] * size) a[MAX_BLOCK_SIGOPS_PER_MB - 1] = 0x4e # PUSHDATA4, but leave the following bytes as just checksigs tx = create_and_sign_tx(out[23].tx, 0, 1, CScript(a)) b76 = update_block(76, [tx]) yield accepted() save_spendable_output() # Test transaction resurrection # # -> b77 (24) -> b78 (25) -> b79 (26) # \-> b80 (25) -> b81 (26) -> b82 (27) # # b78 creates a tx, which is spent in b79. After b82, both should be in mempool # # The tx'es must be unsigned and pass the node's mempool policy. It is unsigned for the # rather obscure reason that the Python signature code does not distinguish between # Low-S and High-S values (whereas the bitcoin code has custom code which does so); # as a result of which, the odds are 50% that the python code will use the right # value and the transaction will be accepted into the mempool. Until we modify the # test framework to support low-S signing, we are out of luck. # # To get around this issue, we construct transactions which are not signed and which # spend to OP_TRUE. If the standard-ness rules change, this test would need to be # updated. (Perhaps to spend to a P2SH OP_TRUE script) # tip(76) block(77) tx77 = create_and_sign_tx(out[24].tx, out[24].n, 10 * COIN) update_block(77, [tx77]) yield accepted() save_spendable_output() block(78) tx78 = create_tx(tx77, 0, 9 * COIN) update_block(78, [tx78]) yield accepted() block(79) tx79 = create_tx(tx78, 0, 8 * COIN) update_block(79, [tx79]) yield accepted() # mempool should be empty assert_equal(len(self.nodes[0].getrawmempool()), 0) tip(77) block(80, spend=out[25]) yield rejected() save_spendable_output() block(81, spend=out[26]) yield rejected() # other chain is same length save_spendable_output() block(82, spend=out[27]) yield accepted() # now this chain is longer, triggers re-org save_spendable_output() # now check that tx78 and tx79 have been put back into the peer's # mempool mempool = self.nodes[0].getrawmempool() assert_equal(len(mempool), 2) assert(tx78.hash in mempool) assert(tx79.hash in mempool) # Test invalid opcodes in dead execution paths. 
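# Script-interpreter background for the next test: an unknown/invalid
# opcode only fails a script when it is actually executed, so one sitting
# in an untaken OP_IF branch is harmless. With a scriptSig of OP_FALSE,
# the script below takes the OP_ELSE arm:
#   OP_IF OP_INVALIDOPCODE OP_ELSE OP_TRUE OP_ENDIF  ->  evaluates true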
# # -> b81 (26) -> b82 (27) -> b83 (28) # block(83) op_codes = [OP_IF, OP_INVALIDOPCODE, OP_ELSE, OP_TRUE, OP_ENDIF] script = CScript(op_codes) tx1 = create_and_sign_tx( out[28].tx, out[28].n, out[28].tx.vout[0].nValue, script) tx2 = create_and_sign_tx(tx1, 0, 0, CScript([OP_TRUE])) tx2.vin[0].scriptSig = CScript([OP_FALSE]) tx2.rehash() update_block(83, [tx1, tx2]) yield accepted() save_spendable_output() # Reorg on/off blocks that have OP_RETURN in them (and try to spend them) # # -> b81 (26) -> b82 (27) -> b83 (28) -> b84 (29) -> b87 (30) -> b88 (31) # \-> b85 (29) -> b86 (30) \-> b89a (32) # # block(84) tx1 = create_tx(out[29].tx, out[29].n, 0, CScript([OP_RETURN])) vout_offset = len(tx1.vout) tx1.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx1.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx1.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx1.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx1.calc_sha256() self.sign_tx(tx1, out[29].tx, out[29].n) tx1.rehash() tx2 = create_tx(tx1, vout_offset, 0, CScript([OP_RETURN])) tx2.vout.append(CTxOut(0, CScript([OP_RETURN]))) tx3 = create_tx(tx1, vout_offset + 1, 0, CScript([OP_RETURN])) tx3.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx4 = create_tx(tx1, vout_offset + 2, 0, CScript([OP_TRUE])) tx4.vout.append(CTxOut(0, CScript([OP_RETURN]))) tx5 = create_tx(tx1, vout_offset + 3, 0, CScript([OP_RETURN])) update_block(84, [tx1, tx2, tx3, tx4, tx5]) yield accepted() save_spendable_output() tip(83) block(85, spend=out[29]) yield rejected() block(86, spend=out[30]) yield accepted() tip(84) block(87, spend=out[30]) yield rejected() save_spendable_output() block(88, spend=out[31]) yield accepted() save_spendable_output() # trying to spend the OP_RETURN output is rejected block("89a", spend=out[32]) tx = create_tx(tx1, 0, 0, CScript([OP_TRUE])) update_block("89a", [tx]) yield rejected() # Test re-org of a week's worth of blocks (1088 blocks) # This test takes a minute or two and can be accomplished in memory # if self.options.runbarelyexpensive: tip(88) LARGE_REORG_SIZE = 1088 test1 = TestInstance(sync_every_block=False) spend = out[32] for i in range(89, LARGE_REORG_SIZE + 89): b = block(i, spend) tx = CTransaction() script_length = LEGACY_MAX_BLOCK_SIZE - len(b.serialize()) - 69 script_output = CScript([b'\x00' * script_length]) tx.vout.append(CTxOut(0, script_output)) tx.vin.append(CTxIn(COutPoint(b.vtx[1].sha256, 0))) b = update_block(i, [tx]) assert_equal(len(b.serialize()), LEGACY_MAX_BLOCK_SIZE) test1.blocks_and_transactions.append([self.tip, True]) save_spendable_output() spend = get_spendable_output() yield test1 chain1_tip = i # now create alt chain of same length tip(88) test2 = TestInstance(sync_every_block=False) for i in range(89, LARGE_REORG_SIZE + 89): block("alt" + str(i)) test2.blocks_and_transactions.append([self.tip, False]) yield test2 # extend alt chain to trigger re-org block("alt" + str(chain1_tip + 1)) yield accepted() # ... and re-org back to the first chain tip(chain1_tip) block(chain1_tip + 1) yield rejected() block(chain1_tip + 2) yield accepted() chain1_tip += 2 if __name__ == '__main__': FullBlockTest().main() diff --git a/test/functional/p2p-leaktests.py b/test/functional/p2p-leaktests.py index 9d1eb9ecc..2f23ccc07 100755 --- a/test/functional/p2p-leaktests.py +++ b/test/functional/p2p-leaktests.py @@ -1,161 +1,161 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
"""Test message sending before handshake completion. A node should never send anything other than VERSION/VERACK/REJECT until it's received a VERACK. This test connects to a node and sends it a few messages, trying to intice it into sending us something it shouldn't.""" from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * banscore = 10 class CLazyNode(P2PInterface): def __init__(self): super().__init__() self.unexpected_msg = False self.ever_connected = False def bad_message(self, message): self.unexpected_msg = True self.log.info("should not have received message: %s" % message.command) def on_open(self): self.ever_connected = True def on_version(self, message): self.bad_message(message) def on_verack(self, message): self.bad_message(message) def on_reject(self, message): self.bad_message(message) def on_inv(self, message): self.bad_message(message) def on_addr(self, message): self.bad_message(message) def on_getdata(self, message): self.bad_message(message) def on_getblocks(self, message): self.bad_message(message) def on_tx(self, message): self.bad_message(message) def on_block(self, message): self.bad_message(message) def on_getaddr(self, message): self.bad_message(message) def on_headers(self, message): self.bad_message(message) def on_getheaders(self, message): self.bad_message(message) def on_ping(self, message): self.bad_message(message) def on_mempool(self, message): self.bad_message(message) def on_pong(self, message): self.bad_message(message) def on_feefilter(self, message): self.bad_message(message) def on_sendheaders(self, message): self.bad_message(message) def on_sendcmpct(self, message): self.bad_message(message) def on_cmpctblock(self, message): self.bad_message(message) def on_getblocktxn(self, message): self.bad_message(message) def on_blocktxn(self, message): self.bad_message(message) # Node that never sends a version. We'll use this to send a bunch of messages # anyway, and eventually get disconnected. class CNodeNoVersionBan(CLazyNode): # send a bunch of veracks without sending a message. This should get us disconnected. # NOTE: implementation-specific check here. Remove if bitcoind ban behavior changes def on_open(self): super().on_open() for i in range(banscore): self.send_message(msg_verack()) def on_reject(self, message): pass # Node that never sends a version. This one just sits idle and hopes to receive # any message (it shouldn't!) class CNodeNoVersionIdle(CLazyNode): def __init__(self): super().__init__() # Node that sends a version but not a verack. class CNodeNoVerackIdle(CLazyNode): def __init__(self): self.version_received = False super().__init__() def on_reject(self, message): pass def on_verack(self, message): pass # When version is received, don't reply with a verack. Instead, see if the # node will give us a message that it shouldn't. This is not an exhaustive # list! 
def on_version(self, message): self.version_received = True self.send_message(msg_ping()) self.send_message(msg_getaddr()) class P2PLeakTest(BitcoinTestFramework): def set_test_params(self): self.num_nodes = 1 self.extra_args = [['-banscore=' + str(banscore)]] def run_test(self): no_version_bannode = self.nodes[0].add_p2p_connection( CNodeNoVersionBan(), send_version=False) no_version_idlenode = self.nodes[0].add_p2p_connection( CNodeNoVersionIdle(), send_version=False) no_verack_idlenode = self.nodes[0].add_p2p_connection( CNodeNoVerackIdle()) - NetworkThread().start() # Start up network handling in another thread + network_thread_start() wait_until(lambda: no_version_bannode.ever_connected, timeout=10, lock=mininode_lock) wait_until(lambda: no_version_idlenode.ever_connected, timeout=10, lock=mininode_lock) wait_until(lambda: no_verack_idlenode.version_received, timeout=10, lock=mininode_lock) # Mine a block and make sure that it's not sent to the connected nodes self.nodes[0].generate(1) # Give the node enough time to possibly leak out a message time.sleep(5) # This node should have been banned assert no_version_bannode.state != "connected" self.nodes[0].disconnect_p2ps() # Wait until all connections are closed wait_until(lambda: len(self.nodes[0].getpeerinfo()) == 0) # Make sure no unexpected messages came in assert(no_version_bannode.unexpected_msg == False) assert(no_version_idlenode.unexpected_msg == False) assert(no_verack_idlenode.unexpected_msg == False) if __name__ == '__main__': P2PLeakTest().main() diff --git a/test/functional/p2p-mempool.py b/test/functional/p2p-mempool.py index 65938d631..a1f29fdae 100755 --- a/test/functional/p2p-mempool.py +++ b/test/functional/p2p-mempool.py @@ -1,32 +1,32 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class P2PMempoolTests(BitcoinTestFramework): def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 self.extra_args = [["-peerbloomfilters=0"]] def run_test(self): # Add a p2p connection self.nodes[0].add_p2p_connection(P2PInterface()) - NetworkThread().start() + network_thread_start() self.nodes[0].p2p.wait_for_verack() # request mempool self.nodes[0].p2p.send_message(msg_mempool()) self.nodes[0].p2p.wait_for_disconnect() # mininode must be disconnected at this point assert_equal(len(self.nodes[0].getpeerinfo()), 0) if __name__ == '__main__': P2PMempoolTests().main() diff --git a/test/functional/p2p-timeouts.py b/test/functional/p2p-timeouts.py index 0bd1ec971..8156c63f4 100755 --- a/test/functional/p2p-timeouts.py +++ b/test/functional/p2p-timeouts.py @@ -1,80 +1,80 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ TimeoutsTest -- test various net timeouts (only in extended tests) - Create three bitcoind nodes: no_verack_node - we never send a verack in response to their version no_version_node - we never send a version (only a ping) no_send_node - we never send any P2P message. 
- Start all three nodes - Wait 1 second - Assert that we're connected - Send a ping to no_verack_node and no_version_node - Wait 30 seconds - Assert that we're still connected - Send a ping to no_verack_node and no_version_node - Wait 31 seconds - Assert that we're no longer connected (timeout to receive version/verack is 60 seconds) """ from time import sleep from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class TestNode(P2PInterface): def on_version(self, message): # Don't send a verack in response pass class TimeoutsTest(BitcoinTestFramework): def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 def run_test(self): # Setup the p2p connections and start up the network thread. no_verack_node = self.nodes[0].add_p2p_connection(TestNode()) no_version_node = self.nodes[0].add_p2p_connection( TestNode(), send_version=False) no_send_node = self.nodes[0].add_p2p_connection( TestNode(), send_version=False) - NetworkThread().start() # Start up network handling in another thread + network_thread_start() sleep(1) assert no_verack_node.connected assert no_version_node.connected assert no_send_node.connected no_verack_node.send_message(msg_ping()) no_version_node.send_message(msg_ping()) sleep(30) assert "version" in no_verack_node.last_message assert no_verack_node.connected assert no_version_node.connected assert no_send_node.connected no_verack_node.send_message(msg_ping()) no_version_node.send_message(msg_ping()) sleep(31) assert not no_verack_node.connected assert not no_version_node.connected assert not no_send_node.connected if __name__ == '__main__': TimeoutsTest().main() diff --git a/test/functional/sendheaders.py b/test/functional/sendheaders.py index f238a4451..76729b5da 100755 --- a/test/functional/sendheaders.py +++ b/test/functional/sendheaders.py @@ -1,618 +1,617 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test behavior of headers messages to announce blocks. Setup: - Two nodes, two p2p connections to node0. One p2p connection should only ever receive inv's (omitted from testing description below, this is our control). Second node is used for creating reorgs. test_null_locators ================== Sends two getheaders requests with null locator values. First request's hashstop value refers to validated block, while second request's hashstop value refers to a block which hasn't been validated. Verifies only the first request returns headers. test_nonnull_locators ===================== Part 1: No headers announcements before "sendheaders" a. node mines a block [expect: inv] send getdata for the block [expect: block] b. node mines another block [expect: inv] send getheaders and getdata [expect: headers, then block] c. node mines another block [expect: inv] peer mines a block, announces with header [expect: getdata] d. node mines another block [expect: inv] Part 2: After "sendheaders", headers announcements should generally work. a. peer sends sendheaders [expect: no response] peer sends getheaders with current tip [expect: no response] b. node mines a block [expect: tip header] c. 
diff --git a/test/functional/sendheaders.py b/test/functional/sendheaders.py
index f238a4451..76729b5da 100755
--- a/test/functional/sendheaders.py
+++ b/test/functional/sendheaders.py
@@ -1,618 +1,617 @@
#!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test behavior of headers messages to announce blocks.

Setup:

- Two nodes, two p2p connections to node0. One p2p connection should only ever
  receive inv's (omitted from testing description below, this is our control).
  Second node is used for creating reorgs.

test_null_locators
==================

Sends two getheaders requests with null locator values. First request's
hashstop value refers to validated block, while second request's hashstop
value refers to a block which hasn't been validated. Verifies only the first
request returns headers.

test_nonnull_locators
=====================

Part 1: No headers announcements before "sendheaders"
a. node mines a block [expect: inv]
   send getdata for the block [expect: block]
b. node mines another block [expect: inv]
   send getheaders and getdata [expect: headers, then block]
c. node mines another block [expect: inv]
   peer mines a block, announces with header [expect: getdata]
d. node mines another block [expect: inv]

Part 2: After "sendheaders", headers announcements should generally work.
a. peer sends sendheaders [expect: no response]
   peer sends getheaders with current tip [expect: no response]
b. node mines a block [expect: tip header]
c. for N in 1, ..., 10:
   * for announce-type in {inv, header}
     - peer mines N blocks, announces with announce-type
       [ expect: getheaders/getdata or getdata, deliver block(s) ]
     - node mines a block [ expect: 1 header ]

Part 3: Headers announcements stop after large reorg and resume after
getheaders or inv from peer.
- For response-type in {inv, getheaders}
  * node mines a 7 block reorg [ expect: headers announcement of 8 blocks ]
  * node mines an 8-block reorg [ expect: inv at tip ]
  * peer responds with getblocks/getdata [expect: inv, blocks ]
  * node mines another block [ expect: inv at tip, peer sends getdata,
    expect: block ]
  * node mines another block at tip [ expect: inv ]
  * peer responds with getheaders with an old hashstop more than 8 blocks
    back [expect: headers]
  * peer requests block [ expect: block ]
  * node mines another block at tip [ expect: inv, peer sends getdata,
    expect: block ]
  * peer sends response-type [expect headers if getheaders,
    getheaders/getdata if mining new block]
  * node mines 1 block [expect: 1 header, peer responds with getdata]

Part 4: Test direct fetch behavior
a. Announce 2 old block headers.
   Expect: no getdata requests.
b. Announce 3 new blocks via 1 headers message.
   Expect: one getdata request for all 3 blocks.
   (Send blocks.)
c. Announce 1 header that forks off the last two blocks.
   Expect: no response.
d. Announce 1 more header that builds on that fork.
   Expect: one getdata request for two blocks.
e. Announce 16 more headers that build on that fork.
   Expect: getdata request for 14 more blocks.
f. Announce 1 more header that builds on that fork.
   Expect: no response.

Part 5: Test handling of headers that don't connect.
a. Repeat 10 times:
   1. Announce a header that doesn't connect.
      Expect: getheaders message
   2. Send headers chain.
      Expect: getdata for the missing blocks, tip update.
b. Then send 9 more headers that don't connect.
   Expect: getheaders message each time.
c. Announce a header that does connect.
   Expect: no response.
d. Announce 49 headers that don't connect.
   Expect: getheaders message each time.
e. Announce one more that doesn't connect.
   Expect: disconnect.
"""
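# Not part of the original file: the Part 2 exchange above, reduced to its
# essential calls (names as defined further down in this file):
#
#   test_node.send_message(msg_sendheaders())        # opt in to headers announcements
#   tip = self.mine_blocks(1)                        # node announces its new tip
#   test_node.check_last_announcement(headers=[tip]) # expect headers, not inv
#
# The control connection (inv_node) never sends sendheaders, so it should
# keep receiving plain inv announcements throughout.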
""" from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.blocktools import create_block, create_coinbase direct_fetch_response_time = 0.05 class TestNode(P2PInterface): def __init__(self): super().__init__() self.block_announced = False self.last_blockhash_announced = None def clear_last_announcement(self): with mininode_lock: self.block_announced = False self.last_message.pop("inv", None) self.last_message.pop("headers", None) # Request data for a list of block hashes def get_data(self, block_hashes): msg = msg_getdata() for x in block_hashes: msg.inv.append(CInv(2, x)) self.send_message(msg) def get_headers(self, locator, hashstop): msg = msg_getheaders() msg.locator.vHave = locator msg.hashstop = hashstop self.send_message(msg) def send_block_inv(self, blockhash): msg = msg_inv() msg.inv = [CInv(2, blockhash)] self.send_message(msg) def on_inv(self, message): self.block_announced = True self.last_blockhash_announced = message.inv[-1].hash def on_headers(self, message): if len(message.headers): self.block_announced = True message.headers[-1].calc_sha256() self.last_blockhash_announced = message.headers[-1].sha256 # Test whether the last announcement we received had the # right header or the right inv # inv and headers should be lists of block hashes def check_last_announcement(self, headers=None, inv=None): expect_headers = headers if headers != None else [] expect_inv = inv if inv != None else [] def test_function(): return self.block_announced wait_until(test_function, timeout=60, lock=mininode_lock) with mininode_lock: self.block_announced = False success = True compare_inv = [] if "inv" in self.last_message: compare_inv = [x.hash for x in self.last_message["inv"].inv] if compare_inv != expect_inv: success = False hash_headers = [] if "headers" in self.last_message: # treat headers as a list of block hashes hash_headers = [ x.sha256 for x in self.last_message["headers"].headers] if hash_headers != expect_headers: success = False self.last_message.pop("inv", None) self.last_message.pop("headers", None) return success def wait_for_getdata(self, hash_list, timeout=60): if hash_list == []: return def test_function(): return "getdata" in self.last_message and [ x.hash for x in self.last_message["getdata"].inv] == hash_list wait_until(test_function, timeout=timeout, lock=mininode_lock) return def wait_for_block_announcement(self, block_hash, timeout=60): def test_function(): return self.last_blockhash_announced == block_hash wait_until(test_function, timeout=timeout, lock=mininode_lock) return def send_header_for_blocks(self, new_blocks): headers_message = msg_headers() headers_message.headers = [CBlockHeader(b) for b in new_blocks] self.send_message(headers_message) def send_getblocks(self, locator): getblocks_message = msg_getblocks() getblocks_message.locator.vHave = locator self.send_message(getblocks_message) class SendHeadersTest(BitcoinTestFramework): def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 self.extra_args = [["-noparkdeepreorg"], ["-noparkdeepreorg"]] # mine count blocks and return the new tip def mine_blocks(self, count): # Clear out last block announcement from each p2p listener [x.clear_last_announcement() for x in self.nodes[0].p2ps] self.nodes[0].generate(count) return int(self.nodes[0].getbestblockhash(), 16) # mine a reorg that invalidates length blocks (replacing them with # length+1 blocks). 
class SendHeadersTest(BitcoinTestFramework):
    def set_test_params(self):
        self.setup_clean_chain = True
        self.num_nodes = 2
        self.extra_args = [["-noparkdeepreorg"], ["-noparkdeepreorg"]]

    # mine count blocks and return the new tip
    def mine_blocks(self, count):
        # Clear out last block announcement from each p2p listener
        [x.clear_last_announcement() for x in self.nodes[0].p2ps]
        self.nodes[0].generate(count)
        return int(self.nodes[0].getbestblockhash(), 16)

    # mine a reorg that invalidates length blocks (replacing them with
    # length+1 blocks).
    # Note: we clear the state of our p2p connections after the
    # to-be-reorged-out blocks are mined, so that we don't break later tests.
    # return the list of block hashes newly mined
    def mine_reorg(self, length):
        # make sure all invalidated blocks are node0's
        self.nodes[0].generate(length)
        sync_blocks(self.nodes, wait=0.1)
        for x in self.nodes[0].p2ps:
            x.wait_for_block_announcement(
                int(self.nodes[0].getbestblockhash(), 16))
            x.clear_last_announcement()

        tip_height = self.nodes[1].getblockcount()
        hash_to_invalidate = self.nodes[1].getblockhash(
            tip_height - (length - 1))
        self.nodes[1].invalidateblock(hash_to_invalidate)
        # Must be longer than the orig chain
        all_hashes = self.nodes[1].generate(length + 1)
        sync_blocks(self.nodes, wait=0.1)
        return [int(x, 16) for x in all_hashes]

    def run_test(self):
        # Setup the p2p connections and start up the network thread.
        inv_node = self.nodes[0].add_p2p_connection(TestNode())
        # Set nServices to 0 for test_node, so no block download will occur
        # outside of direct fetching
        test_node = self.nodes[0].add_p2p_connection(
            TestNode(), services=0)

-        # Start up network handling in another thread
-        NetworkThread().start()
+        network_thread_start()

        # Test logic begins here
        inv_node.wait_for_verack()
        test_node.wait_for_verack()

        # Ensure verack's have been processed by our peer
        inv_node.sync_with_ping()
        test_node.sync_with_ping()

        self.test_null_locators(test_node)
        self.test_nonnull_locators(test_node, inv_node)

    def test_null_locators(self, test_node):
        tip = self.nodes[0].getblockheader(self.nodes[0].generate(1)[0])
        tip_hash = int(tip["hash"], 16)

        self.log.info(
            "Verify getheaders with null locator and valid hashstop returns headers.")
        test_node.clear_last_announcement()
        test_node.get_headers(locator=[], hashstop=tip_hash)
        assert_equal(test_node.check_last_announcement(
            headers=[tip_hash]), True)

        self.log.info(
            "Verify getheaders with null locator and invalid hashstop does not return headers.")
        block = create_block(int(tip["hash"], 16), create_coinbase(
            tip["height"] + 1), tip["mediantime"] + 1)
        block.solve()
        test_node.send_header_for_blocks([block])
        test_node.clear_last_announcement()
        test_node.get_headers(locator=[], hashstop=int(block.hash, 16))
        test_node.sync_with_ping()
        assert_equal(test_node.block_announced, False)
        test_node.send_message(msg_block(block))
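    # Not part of the original file: with a null locator the node replies with
    # just the header matching hashstop, and only if that block has been
    # validated, so the two requests above reduce to:
    #
    #   test_node.get_headers(locator=[], hashstop=validated_hash)    # -> headers
    #   test_node.get_headers(locator=[], hashstop=unvalidated_hash)  # -> no reply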
    def test_nonnull_locators(self, test_node, inv_node):
        tip = int(self.nodes[0].getbestblockhash(), 16)

        # PART 1
        # 1. Mine a block; expect inv announcements each time
        self.log.info(
            "Part 1: headers don't start before sendheaders message...")
        for i in range(4):
            old_tip = tip
            tip = self.mine_blocks(1)
            assert_equal(inv_node.check_last_announcement(inv=[tip]), True)
            assert_equal(test_node.check_last_announcement(inv=[tip]), True)
            # Try a few different responses; none should affect next
            # announcement
            if i == 0:
                # first request the block
                test_node.get_data([tip])
                test_node.wait_for_block(tip)
            elif i == 1:
                # next try requesting header and block
                test_node.get_headers(locator=[old_tip], hashstop=tip)
                test_node.get_data([tip])
                test_node.wait_for_block(tip)
                # since we requested headers...
                test_node.clear_last_announcement()
            elif i == 2:
                # this time announce own block via headers
                height = self.nodes[0].getblockcount()
                last_time = self.nodes[0].getblock(
                    self.nodes[0].getbestblockhash())['time']
                block_time = last_time + 1
                new_block = create_block(
                    tip, create_coinbase(height + 1), block_time)
                new_block.solve()
                test_node.send_header_for_blocks([new_block])
                test_node.wait_for_getdata([new_block.sha256])
                test_node.send_message(msg_block(new_block))
                # make sure this block is processed
                test_node.sync_with_ping()
                inv_node.clear_last_announcement()
                test_node.clear_last_announcement()

        self.log.info("Part 1: success!")
        self.log.info(
            "Part 2: announce blocks with headers after sendheaders message...")
        # PART 2
        # 2. Send a sendheaders message and test that headers announcements
        # commence and keep working.
        test_node.send_message(msg_sendheaders())
        prev_tip = int(self.nodes[0].getbestblockhash(), 16)
        test_node.get_headers(locator=[prev_tip], hashstop=0)
        test_node.sync_with_ping()

        # Now that we've synced headers, headers announcements should work
        tip = self.mine_blocks(1)
        assert_equal(inv_node.check_last_announcement(inv=[tip]), True)
        assert_equal(test_node.check_last_announcement(headers=[tip]), True)

        height = self.nodes[0].getblockcount() + 1
        block_time += 10  # Advance far enough ahead
        for i in range(10):
            # Mine i blocks, and alternate announcing either via
            # inv (of tip) or via headers. After each, new blocks
            # mined by the node should successfully be announced
            # with block header, even though the blocks are never requested
            for j in range(2):
                blocks = []
                for b in range(i + 1):
                    blocks.append(create_block(
                        tip, create_coinbase(height), block_time))
                    blocks[-1].solve()
                    tip = blocks[-1].sha256
                    block_time += 1
                    height += 1
                if j == 0:
                    # Announce via inv
                    test_node.send_block_inv(tip)
                    # Should have received a getheaders now
                    test_node.wait_for_getheaders()
                    test_node.send_header_for_blocks(blocks)
                    # Test that duplicate inv's won't result in duplicate
                    # getdata requests, or duplicate headers announcements
                    [inv_node.send_block_inv(x.sha256) for x in blocks]
                    test_node.wait_for_getdata([x.sha256 for x in blocks])
                    inv_node.sync_with_ping()
                else:
                    # Announce via headers
                    test_node.send_header_for_blocks(blocks)
                    test_node.wait_for_getdata([x.sha256 for x in blocks])
                    # Test that duplicate headers won't result in duplicate
                    # getdata requests (the check is further down)
                    inv_node.send_header_for_blocks(blocks)
                    inv_node.sync_with_ping()
                [test_node.send_message(msg_block(x)) for x in blocks]
                test_node.sync_with_ping()
                inv_node.sync_with_ping()
            # This block should not be announced to the inv node (since it
            # also broadcast it)
            assert "inv" not in inv_node.last_message
            assert "headers" not in inv_node.last_message
            tip = self.mine_blocks(1)
            assert_equal(inv_node.check_last_announcement(inv=[tip]), True)
            assert_equal(test_node.check_last_announcement(
                headers=[tip]), True)
            height += 1
            block_time += 1

        self.log.info("Part 2: success!")
        self.log.info(
            "Part 3: headers announcements can stop after large reorg, and resume after headers/inv from peer...")
        # PART 3.  Headers announcements can stop after large reorg, and
        # resume after getheaders or inv from peer.
        for j in range(2):
            # First try mining a reorg that can propagate with header
            # announcement
            new_block_hashes = self.mine_reorg(length=7)
            tip = new_block_hashes[-1]
            assert_equal(inv_node.check_last_announcement(inv=[tip]), True)
            assert_equal(test_node.check_last_announcement(
                headers=new_block_hashes), True)

            block_time += 8

            # Mine a too-large reorg, which should be announced with a single
            # inv
            new_block_hashes = self.mine_reorg(length=8)
            tip = new_block_hashes[-1]
            assert_equal(inv_node.check_last_announcement(inv=[tip]), True)
            assert_equal(test_node.check_last_announcement(inv=[tip]), True)

            block_time += 9

            # getblock expects the full-width hex string, so pad the hash out
            # to 64 characters ("%02x" would drop leading zeroes)
            fork_point = self.nodes[0].getblock("%064x" % new_block_hashes[0])[
                "previousblockhash"]
            fork_point = int(fork_point, 16)

            # Use getblocks/getdata
            test_node.send_getblocks(locator=[fork_point])
            assert_equal(test_node.check_last_announcement(
                inv=new_block_hashes), True)
            test_node.get_data(new_block_hashes)
            test_node.wait_for_block(new_block_hashes[-1])

            for i in range(3):
                # Mine another block, still should get only an inv
                tip = self.mine_blocks(1)
                assert_equal(inv_node.check_last_announcement(inv=[tip]), True)
                assert_equal(
                    test_node.check_last_announcement(inv=[tip]), True)
                if i == 0:
                    # Just get the data -- shouldn't cause headers
                    # announcements to resume
                    test_node.get_data([tip])
                    test_node.wait_for_block(tip)
                elif i == 1:
                    # Send a getheaders message that shouldn't trigger headers
                    # announcements to resume (best header sent will be too
                    # old)
                    test_node.get_headers(
                        locator=[fork_point], hashstop=new_block_hashes[1])
                    test_node.get_data([tip])
                    test_node.wait_for_block(tip)
                elif i == 2:
                    test_node.get_data([tip])
                    test_node.wait_for_block(tip)
                    # This time, try sending either a getheaders to trigger
                    # resumption of headers announcements, or mine a new block
                    # and inv it, also triggering resumption of headers
                    # announcements.
                    if j == 0:
                        test_node.get_headers(locator=[tip], hashstop=0)
                        test_node.sync_with_ping()
                    else:
                        test_node.send_block_inv(tip)
                        test_node.sync_with_ping()
            # New blocks should now be announced with header
            tip = self.mine_blocks(1)
            assert_equal(inv_node.check_last_announcement(inv=[tip]), True)
            assert_equal(test_node.check_last_announcement(
                headers=[tip]), True)

        self.log.info("Part 3: success!")
        self.log.info("Part 4: Testing direct fetch behavior...")
        tip = self.mine_blocks(1)
        height = self.nodes[0].getblockcount() + 1
        last_time = self.nodes[0].getblock(
            self.nodes[0].getbestblockhash())['time']
        block_time = last_time + 1

        # Create 2 blocks. Send the blocks, then send the headers.
        blocks = []
        for b in range(2):
            blocks.append(create_block(
                tip, create_coinbase(height), block_time))
            blocks[-1].solve()
            tip = blocks[-1].sha256
            block_time += 1
            height += 1
            inv_node.send_message(msg_block(blocks[-1]))

        inv_node.sync_with_ping()  # Make sure blocks are processed
        test_node.last_message.pop("getdata", None)
        test_node.send_header_for_blocks(blocks)
        test_node.sync_with_ping()
        # should not have received any getdata messages
        with mininode_lock:
            assert "getdata" not in test_node.last_message

        # This time, direct fetch should work
        blocks = []
        for b in range(3):
            blocks.append(create_block(
                tip, create_coinbase(height), block_time))
            blocks[-1].solve()
            tip = blocks[-1].sha256
            block_time += 1
            height += 1

        test_node.send_header_for_blocks(blocks)
        test_node.sync_with_ping()
        test_node.wait_for_getdata(
            [x.sha256 for x in blocks], timeout=direct_fetch_response_time)
        [test_node.send_message(msg_block(x)) for x in blocks]
        test_node.sync_with_ping()

        # Now announce a header that forks the last two blocks
        tip = blocks[0].sha256
        height -= 1
        blocks = []

        # Create extra blocks for later
        for b in range(20):
            blocks.append(create_block(
                tip, create_coinbase(height), block_time))
            blocks[-1].solve()
            tip = blocks[-1].sha256
            block_time += 1
            height += 1

        # Announcing one block on fork should not trigger direct fetch
        # (less work than tip)
        test_node.last_message.pop("getdata", None)
        test_node.send_header_for_blocks(blocks[0:1])
        test_node.sync_with_ping()
        with mininode_lock:
            assert "getdata" not in test_node.last_message

        # Announcing one more block on fork should trigger direct fetch for
        # both blocks (same work as tip)
        test_node.send_header_for_blocks(blocks[1:2])
        test_node.sync_with_ping()
        test_node.wait_for_getdata(
            [x.sha256 for x in blocks[0:2]], timeout=direct_fetch_response_time)

        # Announcing 16 more headers should trigger direct fetch for 14 more
        # blocks
        test_node.send_header_for_blocks(blocks[2:18])
        test_node.sync_with_ping()
        test_node.wait_for_getdata(
            [x.sha256 for x in blocks[2:16]], timeout=direct_fetch_response_time)

        # Announcing 1 more header should not trigger any response
        test_node.last_message.pop("getdata", None)
        test_node.send_header_for_blocks(blocks[18:19])
        test_node.sync_with_ping()
        with mininode_lock:
            assert "getdata" not in test_node.last_message

        self.log.info("Part 4: success!")

        # Now deliver all those blocks we announced.
        [test_node.send_message(msg_block(x)) for x in blocks]

        self.log.info("Part 5: Testing handling of unconnecting headers")
        # First we test that receipt of an unconnecting header doesn't prevent
        # chain sync.
        for i in range(10):
            test_node.last_message.pop("getdata", None)
            blocks = []
            # Create two more blocks.
            for j in range(2):
                blocks.append(create_block(
                    tip, create_coinbase(height), block_time))
                blocks[-1].solve()
                tip = blocks[-1].sha256
                block_time += 1
                height += 1
            # Send the header of the second block -> this won't connect.
            with mininode_lock:
                test_node.last_message.pop("getheaders", None)
            test_node.send_header_for_blocks([blocks[1]])
            test_node.wait_for_getheaders()
            test_node.send_header_for_blocks(blocks)
            test_node.wait_for_getdata([x.sha256 for x in blocks])
            [test_node.send_message(msg_block(x)) for x in blocks]
            test_node.sync_with_ping()
            assert_equal(
                int(self.nodes[0].getbestblockhash(), 16), blocks[1].sha256)

        blocks = []
        # Now we test that if we repeatedly don't send connecting headers, we
        # don't go into an infinite loop trying to get them to connect.
        MAX_UNCONNECTING_HEADERS = 10
        for j in range(MAX_UNCONNECTING_HEADERS + 1):
            blocks.append(create_block(
                tip, create_coinbase(height), block_time))
            blocks[-1].solve()
            tip = blocks[-1].sha256
            block_time += 1
            height += 1

        for i in range(1, MAX_UNCONNECTING_HEADERS):
            # Send a header that doesn't connect, check that we get a
            # getheaders.
            with mininode_lock:
                test_node.last_message.pop("getheaders", None)
            test_node.send_header_for_blocks([blocks[i]])
            test_node.wait_for_getheaders()

        # Next header will connect, should re-set our count:
        test_node.send_header_for_blocks([blocks[0]])

        # Remove the first two entries (blocks[1] would connect):
        blocks = blocks[2:]

        # Now try to see how many unconnecting headers we can send
        # before we get disconnected. Should be 5*MAX_UNCONNECTING_HEADERS
        for i in range(5 * MAX_UNCONNECTING_HEADERS - 1):
            # Send a header that doesn't connect, check that we get a
            # getheaders.
            with mininode_lock:
                test_node.last_message.pop("getheaders", None)
            test_node.send_header_for_blocks([blocks[i % len(blocks)]])
            test_node.wait_for_getheaders()

        # Eventually this stops working.
        test_node.send_header_for_blocks([blocks[-1]])

        # Should get disconnected
        test_node.wait_for_disconnect()

        self.log.info("Part 5: success!")

        # Finally, check that the inv node never received a getdata request,
        # throughout the test
        assert "getdata" not in inv_node.last_message


if __name__ == '__main__':
    SendHeadersTest().main()
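The `5 * MAX_UNCONNECTING_HEADERS` figure probed in Part 5 follows from the node's misbehavior accounting, paraphrased here from memory rather than quoted from the node's source: each batch of `MAX_UNCONNECTING_HEADERS` unconnecting headers costs the peer misbehavior points, and the peer is disconnected once the points reach the ban threshold. A back-of-the-envelope check, with the point values as assumptions:

```py
# Assumed values: 20 points per round of unconnecting headers, ban at 100.
MAX_UNCONNECTING_HEADERS = 10
POINTS_PER_ROUND = 20
BAN_THRESHOLD = 100

rounds_before_disconnect = BAN_THRESHOLD // POINTS_PER_ROUND  # 5
headers_before_disconnect = rounds_before_disconnect * MAX_UNCONNECTING_HEADERS
assert headers_before_disconnect == 50  # matches 5 * MAX_UNCONNECTING_HEADERS
```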
diff --git a/test/functional/test_framework/mininode.py b/test/functional/test_framework/mininode.py
index 17704f2f5..35ff6a020 100755
--- a/test/functional/test_framework/mininode.py
+++ b/test/functional/test_framework/mininode.py
@@ -1,453 +1,477 @@
#!/usr/bin/env python3
# Copyright (c) 2010 ArtForz -- public domain half-a-node
# Copyright (c) 2012 Jeff Garzik
# Copyright (c) 2010-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Bitcoin P2P network half-a-node.

This python code was modified from ArtForz' public domain half-a-node, as
found in the mini-node branch of http://github.com/jgarzik/pynode.

P2PConnection: A low-level connection object to a node's P2P interface
P2PInterface: A high-level interface object for communicating to a node over P2P"""
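# Not part of the patch: the intended division of labor, sketched as a
# minimal subclass (the on_inv override shown is illustrative only):
#
#   class MyPeer(P2PInterface):
#       def on_inv(self, message):
#           # react to inv announcements; P2PConnection handles the socket,
#           # message framing and checksums underneath
#           pass
#
#   node.add_p2p_connection(MyPeer())
#   network_thread_start()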
import asyncore
from collections import defaultdict
from io import BytesIO
import logging
import socket
import struct
import sys
-from threading import RLock, Thread
+import threading

from test_framework.messages import *
from test_framework.util import wait_until

logger = logging.getLogger("TestFramework.mininode")

MESSAGEMAP = {
    b"addr": msg_addr,
    b"block": msg_block,
    b"blocktxn": msg_blocktxn,
    b"cmpctblock": msg_cmpctblock,
    b"feefilter": msg_feefilter,
    b"getaddr": msg_getaddr,
    b"getblocks": msg_getblocks,
    b"getblocktxn": msg_getblocktxn,
    b"getdata": msg_getdata,
    b"getheaders": msg_getheaders,
    b"headers": msg_headers,
    b"inv": msg_inv,
    b"mempool": msg_mempool,
    b"ping": msg_ping,
    b"pong": msg_pong,
    b"reject": msg_reject,
    b"sendcmpct": msg_sendcmpct,
    b"sendheaders": msg_sendheaders,
    b"tx": msg_tx,
    b"verack": msg_verack,
    b"version": msg_version,
}

MAGIC_BYTES = {
    "mainnet": b"\xe3\xe1\xf3\xe8",
    "testnet3": b"\xf4\xe5\xf3\xf4",
    "regtest": b"\xda\xb5\xbf\xfa",
}


class P2PConnection(asyncore.dispatcher):
    """A low-level connection object to a node's P2P interface.

    This class is responsible for:

    - opening and closing the TCP connection to the node
    - reading bytes from and writing bytes to the socket
    - deserializing and serializing the P2P message header
    - logging messages as they are sent and received

    This class contains no logic for handing the P2P message payloads. It
    must be sub-classed and the on_message() callback overridden."""

    def __init__(self):
        super().__init__(map=mininode_socket_map)

    def peer_connect(self, dstaddr, dstport, net="regtest"):
        self.dstaddr = dstaddr
        self.dstport = dstport
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        self.sendbuf = b""
        self.recvbuf = b""
        self.state = "connecting"
        self.network = net
        self.disconnect = False

        logger.info('Connecting to Bitcoin Node: %s:%d' %
                    (self.dstaddr, self.dstport))

        try:
            self.connect((dstaddr, dstport))
        except:
            self.handle_close()

    def peer_disconnect(self):
        # Connection could have already been closed by other end.
        if self.state == "connected":
            self.disconnect_node()

    # Connection and disconnection methods

    def handle_connect(self):
        """asyncore callback when a connection is opened."""
        if self.state != "connected":
            logger.debug("Connected & Listening: %s:%d" %
                         (self.dstaddr, self.dstport))
            self.state = "connected"
            self.on_open()

    def handle_close(self):
        """asyncore callback when a connection is closed."""
        logger.debug("Closing connection to: %s:%d" %
                     (self.dstaddr, self.dstport))
        self.state = "closed"
        self.recvbuf = b""
        self.sendbuf = b""
        try:
            self.close()
        except:
            pass
        self.on_close()

    def disconnect_node(self):
        """Disconnect the p2p connection.

        Called by the test logic thread. Causes the p2p connection
        to be disconnected on the next iteration of the asyncore loop."""
        self.disconnect = True

    # Socket read methods

    def handle_read(self):
        """asyncore callback when data is read from the socket."""
        with mininode_lock:
            t = self.recv(READ_BUFFER_SIZE)
            if len(t) > 0:
                self.recvbuf += t

        while True:
            msg = self._on_data()
            if msg == None:
                break
            self.on_message(msg)

    def _on_data(self):
        """Try to read P2P messages from the recv buffer.

        This method reads data from the buffer in a loop. It deserializes,
        parses and verifies the P2P header, then passes the P2P payload to
        the on_message callback for processing."""
        # The body of this method was mangled in transit; it is reconstructed
        # here from the standard mininode.py message-framing logic.
        try:
            with mininode_lock:
                if len(self.recvbuf) < 4:
                    return None
                if self.recvbuf[:4] != MAGIC_BYTES[self.network]:
                    raise ValueError("got garbage %s" % repr(self.recvbuf))
                if len(self.recvbuf) < 4 + 12 + 4 + 4:
                    return
                command = self.recvbuf[4:4+12].split(b"\x00", 1)[0]
                msglen = struct.unpack("<i", self.recvbuf[4+12:4+12+4])[0]
                checksum = self.recvbuf[4+12+4:4+12+4+4]
                if len(self.recvbuf) < 4 + 12 + 4 + 4 + msglen:
                    return None
                msg = self.recvbuf[4+12+4+4:4+12+4+4+msglen]
                th = sha256(msg)
                h = sha256(th)
                if checksum != h[:4]:
                    raise ValueError(
                        "got bad checksum " + repr(self.recvbuf))
                self.recvbuf = self.recvbuf[4+12+4+4+msglen:]
                if command not in MESSAGEMAP:
                    raise ValueError("Received unknown command from %s:%d: '%s' %s" % (
                        self.dstaddr, self.dstport, command, repr(msg)))
                f = BytesIO(msg)
                m = MESSAGEMAP[command]()
                m.deserialize(f)
                return m
        except Exception as e:
            logger.exception('Error reading message: %s', repr(e))
            raise

    def on_message(self, message):
        """Callback for processing a P2P payload. Must be overridden by derived class."""
        raise NotImplementedError

    # Socket write methods

    def writable(self):
        """asyncore method to determine whether the handle_write() callback should be called on the next loop."""
        with mininode_lock:
            pre_connection = self.state == "connecting"
            length = len(self.sendbuf)
        return (length > 0 or pre_connection)

    def handle_write(self):
        """asyncore callback when data should be written to the socket."""
        with mininode_lock:
            # asyncore does not expose socket connection, only the first
            # read/write event, thus we must check connection manually here
            # to know when we actually connect
            if self.state == "connecting":
                self.handle_connect()
            if not self.writable():
                return

            try:
                sent = self.send(self.sendbuf)
            except:
                self.handle_close()
                return
            self.sendbuf = self.sendbuf[sent:]
    def send_message(self, message, pushbuf=False):
        """Send a P2P message over the socket.

        This method takes a P2P payload, builds the P2P header and adds
        the message to the send buffer to be sent over the socket."""
        if self.state != "connected" and not pushbuf:
            raise IOError('Not connected, no pushbuf')
        self._log_message("send", message)
        command = message.command
        data = message.serialize()
        tmsg = MAGIC_BYTES[self.network]
        tmsg += command
        tmsg += b"\x00" * (12 - len(command))
        # The remainder of this method was mangled in transit; it is
        # reconstructed here from the standard mininode.py framing logic:
        # length, double-SHA256 checksum, then payload.
        tmsg += struct.pack("<I", len(data))
        th = sha256(data)
        h = sha256(th)
        tmsg += h[:4]
        tmsg += data
        with mininode_lock:
            if (len(self.sendbuf) == 0 and not pushbuf):
                try:
                    sent = self.send(tmsg)
                    self.sendbuf = tmsg[sent:]
                except BlockingIOError:
                    self.sendbuf = tmsg
            else:
                self.sendbuf += tmsg

    # Class utility methods

    def _log_message(self, direction, msg):
        """Logs a message being sent or received over the connection."""
        if direction == "send":
            log_message = "Send message to "
        elif direction == "receive":
            log_message = "Received message from "
        log_message += "%s:%d: %s" % (self.dstaddr,
                                      self.dstport, repr(msg)[:500])
        if len(log_message) > 500:
            log_message += "... (msg truncated)"
        logger.debug(log_message)


class P2PInterface(P2PConnection):
    """A high-level P2P interface class for communicating with a Bitcoin Cash node.

    This class provides high-level callbacks for processing P2P message
    payloads, as well as convenience methods for interacting with the
    node over P2P.

    Individual testcases should subclass this and override the on_* methods
    if they want to alter message handling behaviour."""

    def __init__(self):
        super().__init__()

        # Track number of messages of each type received and the most recent
        # message of each type
        self.message_count = defaultdict(int)
        self.last_message = {}

        # A count of the number of ping messages we've sent to the node
        self.ping_counter = 1

        # The network services received from the peer
        self.nServices = 0

    def peer_connect(self, *args, services=NODE_NETWORK, send_version=True, **kwargs):
        super().peer_connect(*args, **kwargs)

        if send_version:
            # Send a version msg
            vt = msg_version()
            vt.nServices = services
            vt.addrTo.ip = self.dstaddr
            vt.addrTo.port = self.dstport
            vt.addrFrom.ip = "0.0.0.0"
            vt.addrFrom.port = 0
            self.send_message(vt, True)

    # Message receiving methods

    def on_message(self, message):
        """Receive message and dispatch message to appropriate callback.

        We keep a count of how many of each message type has been received
        and the most recent message of each type."""
        with mininode_lock:
            try:
                command = message.command.decode('ascii')
                self.message_count[command] += 1
                self.last_message[command] = message
                getattr(self, 'on_' + command)(message)
            except:
                print("ERROR delivering %s (%s)" % (repr(message),
                                                    sys.exc_info()[0]))
                raise

    # Callback methods. Can be overridden by subclasses in individual test
    # cases to provide custom message handling behaviour.

    def on_open(self):
        pass

    def on_close(self):
        pass

    def on_addr(self, message): pass

    def on_block(self, message): pass

    def on_blocktxn(self, message): pass

    def on_cmpctblock(self, message): pass

    def on_feefilter(self, message): pass

    def on_getaddr(self, message): pass

    def on_getblocks(self, message): pass

    def on_getblocktxn(self, message): pass

    def on_getdata(self, message): pass

    def on_getheaders(self, message): pass

    def on_headers(self, message): pass

    def on_mempool(self, message): pass

    def on_pong(self, message): pass

    def on_reject(self, message): pass

    def on_sendcmpct(self, message): pass

    def on_sendheaders(self, message): pass

    def on_tx(self, message): pass

    def on_inv(self, message):
        want = msg_getdata()
        for i in message.inv:
            if i.type != 0:
                want.inv.append(i)
        if len(want.inv):
            self.send_message(want)

    def on_ping(self, message):
        self.send_message(msg_pong(message.nonce))

    def on_verack(self, message):
        self.verack_received = True
    def on_version(self, message):
        assert message.nVersion >= MIN_VERSION_SUPPORTED, "Version {} received. Test framework only supports versions greater than {}".format(
            message.nVersion, MIN_VERSION_SUPPORTED)
        self.send_message(msg_verack())
        self.nServices = message.nServices

    # Connection helper methods

    def wait_for_disconnect(self, timeout=60):
        def test_function():
            return self.state != "connected"
        wait_until(test_function, timeout=timeout, lock=mininode_lock)

    # Message receiving helper methods

    def wait_for_block(self, blockhash, timeout=60):
        def test_function():
            return self.last_message.get(
                "block") and self.last_message["block"].block.rehash() == blockhash
        wait_until(test_function, timeout=timeout, lock=mininode_lock)

    def wait_for_getdata(self, timeout=60):
        def test_function():
            return self.last_message.get("getdata")
        wait_until(test_function, timeout=timeout, lock=mininode_lock)

    def wait_for_getheaders(self, timeout=60):
        def test_function():
            return self.last_message.get("getheaders")
        wait_until(test_function, timeout=timeout, lock=mininode_lock)

    def wait_for_inv(self, expected_inv, timeout=60):
        """Waits for an INV message and checks that the first inv object in the message was as expected."""
        if len(expected_inv) > 1:
            raise NotImplementedError(
                "wait_for_inv() will only verify the first inv object")

        def test_function():
            return self.last_message.get("inv") and \
                self.last_message["inv"].inv[0].type == expected_inv[0].type and \
                self.last_message["inv"].inv[0].hash == expected_inv[0].hash
        wait_until(test_function, timeout=timeout, lock=mininode_lock)

    def wait_for_verack(self, timeout=60):
        def test_function():
            return self.message_count["verack"]
        wait_until(test_function, timeout=timeout, lock=mininode_lock)

    # Message sending helper functions

    def send_and_ping(self, message):
        self.send_message(message)
        self.sync_with_ping()

    # Sync up with the node
    def sync_with_ping(self, timeout=60):
        self.send_message(msg_ping(nonce=self.ping_counter))

        def test_function():
            if not self.last_message.get("pong"):
                return False
            return self.last_message["pong"].nonce == self.ping_counter
        wait_until(test_function, timeout=timeout, lock=mininode_lock)
        self.ping_counter += 1


# Keep our own socket map for asyncore, so that we can track disconnects
# ourselves (to workaround an issue with closing an asyncore socket when
# using select)
mininode_socket_map = dict()

# One lock for synchronizing all data access between the networking thread (see
# NetworkThread below) and the thread running the test logic. For simplicity,
# P2PConnection acquires this lock whenever delivering a message to a
# P2PInterface, and whenever adding anything to the send buffer (in
# send_message()). This lock should be acquired in the thread running the test
# logic to synchronize access to any data shared with the P2PInterface or
# P2PConnection.
-mininode_lock = RLock()
+mininode_lock = threading.RLock()


-class NetworkThread(Thread):
+class NetworkThread(threading.Thread):
+    def __init__(self):
+        super().__init__(name="NetworkThread")

    def run(self):
        while mininode_socket_map:
            # We check for whether to disconnect outside of the asyncore
            # loop to workaround the behavior of asyncore when using
            # select
            disconnected = []
            for fd, obj in mininode_socket_map.items():
                if obj.disconnect:
                    disconnected.append(obj)
            [obj.handle_close() for obj in disconnected]
            asyncore.loop(0.1, use_poll=True, map=mininode_socket_map, count=1)
        logger.debug("Network thread closing")
+
+
+def network_thread_start():
+    """Start the network thread."""
+    NetworkThread().start()
+
+
+def network_thread_running():
+    """Return whether the network thread is running."""
+    return any([thread.name == "NetworkThread" for thread in threading.enumerate()])
+
+
+def network_thread_join(timeout=10):
+    """Wait timeout seconds for the network thread to terminate.
+
+    Throw if the network thread doesn't terminate in timeout seconds."""
+    network_threads = [
+        thread for thread in threading.enumerate() if thread.name == "NetworkThread"]
+    assert len(network_threads) <= 1
+    for thread in network_threads:
+        thread.join(timeout)
+        assert not thread.is_alive()
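The three helpers added above give tests a uniform lifecycle for the network thread: start it once connections are registered, and join it (with a bounded wait) during teardown. A sketch of the typical sequence in a test, where `node` stands for a test node handle such as `self.nodes[0]` (illustrative only; nothing in this patch yet calls `network_thread_join`):

```py
network_thread_start()              # begin servicing mininode sockets
node.p2p.wait_for_verack()
# ... exercise the node over P2P ...
node.disconnect_p2ps()
network_thread_join()               # asserts the thread exits within 10s
assert not network_thread_running()
```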