diff --git a/test/functional/README.md b/test/functional/README.md
index 96fe0becc..6ad0b58b5 100644
--- a/test/functional/README.md
+++ b/test/functional/README.md
@@ -1,154 +1,154 @@
# Functional tests

### Writing Functional Tests

#### Example test

The [example_test.py](example_test.py) is a heavily commented example of a test case that uses both the RPC and P2P interfaces. If you are writing your first test, copy that file and modify it to fit your needs.

#### Coverage

Running `test_runner.py` with the `--coverage` argument tracks which RPCs are called by the tests and prints a report of uncovered RPCs in the summary. This can be used (along with the `--extended` argument) to find out which RPCs we don't have test cases for.

#### Style guidelines

- Where possible, try to adhere to [PEP-8 guidelines](https://www.python.org/dev/peps/pep-0008/)
- Use a python linter like flake8 before submitting PRs to catch common style nits (e.g. trailing whitespace, unused imports, etc.)
- Avoid wildcard imports where possible
- Use a module-level docstring to describe what the test is testing, and how it is testing it.
- When subclassing the BitcoinTestFramework, place overrides for the
- `__init__()`, and `setup_xxxx()` methods at the top of the subclass, then
- locally-defined helper methods, then the `run_test()` method.
+ `set_test_params()`, `add_options()` and `setup_xxxx()` methods at the top of
+ the subclass, then locally-defined helper methods, then the `run_test()` method.

#### General test-writing advice

- Set `self.num_nodes` to the minimum number of nodes necessary for the test. Having additional unrequired nodes adds to the execution time of the test as well as memory/CPU/disk requirements (which is important when running tests in parallel or on Travis).
- Avoid stop-starting the nodes multiple times during the test if possible. A stop-start takes several seconds, so doing it several times blows up the runtime of the test.
-- Set the `self.setup_clean_chain` variable in `__init__()` to control whether
+- Set the `self.setup_clean_chain` variable in `set_test_params()` to control whether
  or not to use the cached data directories. The cached data directories contain a 200-block pre-mined blockchain and wallets for four nodes. Each node has 25 mature blocks (25x50=1250 BTC) in its wallet.
- When calling RPCs with lots of arguments, consider using named keyword arguments instead of positional arguments to make the intent of the call clear to readers.

#### RPC and P2P definitions

Test writers may find it helpful to refer to the definitions for the RPC and P2P messages. These can be found in the following source files:

- `/src/rpc/*` for RPCs
- `/src/wallet/rpc*` for wallet RPCs
- `ProcessMessage()` in `/src/net_processing.cpp` for parsing P2P messages

#### Using the P2P interface

- `mininode.py` contains all the definitions for objects that pass over the network (`CBlock`, `CTransaction`, etc, along with the network-level wrappers for them, `msg_block`, `msg_tx`, etc).
- P2P tests have two threads. One thread handles all network communication with the bitcoind(s) being tested (using python's asyncore package); the other implements the test logic.
- `NodeConn` is the class used to connect to a bitcoind. If you implement a callback class that derives from `NodeConnCB` and pass that to the `NodeConn` object, your code will receive the appropriate callbacks when events of interest arrive.
- Call `NetworkThread.start()` after all `NodeConn` objects are created to start the networking thread. (Continue with the test logic in your existing thread.) A minimal connection skeleton is sketched below.
- Can be used to write tests where specific P2P protocol behavior is tested. Example tests are `p2p-accept-block.py`, `p2p-compactblocks.py`.
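For reference, a minimal sketch of that connection pattern (the class name and parameters are illustrative; the idiom follows `assumevalid.py` and `bip65-cltv-p2p.py` in this directory):

```python
from test_framework.test_framework import BitcoinTestFramework
from test_framework.mininode import NetworkThread, NodeConn, NodeConnCB
from test_framework.util import p2p_port


class ExampleP2PTest(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 1
        self.setup_clean_chain = True

    def run_test(self):
        # Create a callback object and attach it to a connection to node0.
        node0 = NodeConnCB()
        connection = NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], node0)
        node0.add_connection(connection)

        # Start the networking thread after all NodeConn objects exist,
        # then continue the test logic in this thread.
        NetworkThread().start()
        node0.wait_for_verack()


if __name__ == '__main__':
    ExampleP2PTest().main()
```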
#### Comptool

- Comptool is a testing framework for writing tests that compare the block/tx acceptance behavior of a bitcoind against 1 or more other bitcoind instances. It should not be used to write static tests with known outcomes, since that type of test is easier to write and maintain using the standard BitcoinTestFramework.
- Set the `num_nodes` variable (defined in `ComparisonTestFramework`) to start up 1 or more nodes. If using 1 node, then `--testbinary` can be used as a command line option to change the bitcoind binary used by the test. If using 2 or more nodes, then `--refbinary` can be optionally used to change the bitcoind that will be used on nodes 2 and up.
- Implement a (generator) function called `get_tests()` which yields `TestInstance`s. Each `TestInstance` consists of:
  - a list of `[object, outcome, hash]` entries
    * `object` is a `CBlock`, `CTransaction`, or `CBlockHeader`. `CBlock`s and `CTransaction`s are tested for acceptance. `CBlockHeader`s can be used so that the test runner can deliver complete headers-chains when requested from the bitcoind, to allow writing tests where blocks can be delivered out of order but still processed by headers-first bitcoinds.
    * `outcome` is `True`, `False`, or `None`. If `True` or `False`, the tip is compared with the expected tip -- either the block passed in, or the hash specified as the optional 3rd entry. If `None` is specified, then the test will compare all the bitcoinds being tested to see if they all agree on what the best tip is.
    * `hash` is the block hash of the tip to compare against. Optional to specify; if left out then the hash of the block passed in will be used as the expected tip. This allows for specifying an expected tip while testing the handling of either invalid blocks or blocks delivered out of order, which complete a longer chain.
  - `sync_every_block`: `True/False`. If `False`, then all blocks are inv'ed together, and the test runner waits until the node receives the last one, and tests only the last block for tip acceptance using the outcome and specified tip. If `True`, then each block is tested in sequence and synced (this is slower when processing many blocks).
  - `sync_every_transaction`: `True/False`. Analogous to `sync_every_block`, except if the outcome on the last tx is `None`, then the contents of the entire mempool are compared across all bitcoind connections. If `True` or `False`, then only the last tx's acceptance is tested against the given outcome.
- For examples of tests written in this framework, see `invalidblockrequest.py` and `p2p-fullblocktest.py`. A minimal skeleton follows.
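As a hedged sketch of the shape such a test takes (the class name is illustrative; it builds a single block on the genesis block and expects it to become the tip):

```python
import time

from test_framework.test_framework import ComparisonTestFramework
from test_framework.comptool import TestManager, TestInstance
from test_framework.blocktools import create_block, create_coinbase
from test_framework.mininode import NetworkThread


class ExampleComparisonTest(ComparisonTestFramework):
    def set_test_params(self):
        self.num_nodes = 1
        self.setup_clean_chain = True

    def run_test(self):
        self.test = TestManager(self, self.options.tmpdir)
        self.test.add_all_connections(self.nodes)
        NetworkThread().start()
        self.test.run()

    def get_tests(self):
        # Build one block on top of the genesis block; the [object, outcome]
        # entry asserts that it is accepted as the new tip (no explicit hash).
        genesis_hash = int(self.nodes[0].getbestblockhash(), 16)
        block = create_block(genesis_hash, create_coinbase(1),
                             int(time.time()) + 1)
        block.solve()
        yield TestInstance([[block, True]])


if __name__ == '__main__':
    ExampleComparisonTest().main()
```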
### test-framework modules

#### [test_framework/authproxy.py](test_framework/authproxy.py)
Taken from the [python-bitcoinrpc repository](https://github.com/jgarzik/python-bitcoinrpc).

#### [test_framework/test_framework.py](test_framework/test_framework.py)
Base class for functional tests.

#### [test_framework/util.py](test_framework/util.py)
Generally useful functions.

#### [test_framework/mininode.py](test_framework/mininode.py)
Basic code to support P2P connectivity to a bitcoind.

#### [test_framework/comptool.py](test_framework/comptool.py)
Framework for comparison-tool style P2P tests.

#### [test_framework/script.py](test_framework/script.py)
Utilities for manipulating transaction scripts (originally from python-bitcoinlib)

#### [test_framework/blockstore.py](test_framework/blockstore.py)
Implements disk-backed block and tx storage.

#### [test_framework/key.py](test_framework/key.py)
Wrapper around OpenSSL EC_Key (originally from python-bitcoinlib)

#### [test_framework/bignum.py](test_framework/bignum.py)
Helpers for script.py

#### [test_framework/blocktools.py](test_framework/blocktools.py)
Helper functions for creating blocks and transactions.

diff --git a/test/functional/abandonconflict.py b/test/functional/abandonconflict.py index 9b2fdf125..2d29b0d8a 100755 --- a/test/functional/abandonconflict.py +++ b/test/functional/abandonconflict.py @@ -1,176 +1,170 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * import urllib.parse class AbandonConflictTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 - self.setup_clean_chain = False self.extra_args = [["-minrelaytxfee=0.00001"], []] def run_test(self): self.nodes[1].generate(100) sync_blocks(self.nodes) balance = self.nodes[0].getbalance() txA = self.nodes[0].sendtoaddress( self.nodes[0].getnewaddress(), Decimal("10")) txB = self.nodes[0].sendtoaddress( self.nodes[0].getnewaddress(), Decimal("10")) txC = self.nodes[0].sendtoaddress( self.nodes[0].getnewaddress(), Decimal("10")) sync_mempools(self.nodes) self.nodes[1].generate(1) sync_blocks(self.nodes) newbalance = self.nodes[0].getbalance() # no more than fees lost assert(balance - newbalance < Decimal("0.001")) balance = newbalance url = urllib.parse.urlparse(self.nodes[1].url) self.nodes[0].disconnectnode(url.hostname + ":" + str(p2p_port(1))) # Identify the 10btc outputs nA = next(i for i, vout in enumerate( self.nodes[0].getrawtransaction(txA, 1)["vout"]) if vout["value"] == Decimal("10")) nB = next(i for i, vout in enumerate( self.nodes[0].getrawtransaction(txB, 1)["vout"]) if vout["value"] == Decimal("10")) nC = next(i for i, vout in enumerate( self.nodes[0].getrawtransaction(txC, 1)["vout"]) if vout["value"] == Decimal("10")) inputs = [] # spend 10btc outputs from txA and txB inputs.append({"txid": txA, "vout": nA}) inputs.append({"txid": txB, "vout": nB}) outputs = {} outputs[self.nodes[0].getnewaddress()] = Decimal("14.99998") outputs[self.nodes[1].getnewaddress()] = Decimal("5") signed = self.nodes[0].signrawtransaction( self.nodes[0].createrawtransaction(inputs, outputs), None, None, "ALL|FORKID") txAB1 = self.nodes[0].sendrawtransaction(signed["hex"]) # Identify the 14.99998btc output nAB = next(i for i, vout in enumerate(self.nodes[0].getrawtransaction( txAB1, 1)["vout"]) if vout["value"] == Decimal("14.99998")) # Create a child tx spending AB1 and C inputs = [] inputs.append({"txid": txAB1, "vout": nAB}) inputs.append({"txid": txC, "vout": nC}) outputs = {} outputs[self.nodes[0].getnewaddress()] = Decimal("24.9996") signed2 = self.nodes[0].signrawtransaction( self.nodes[0].createrawtransaction(inputs, outputs), None, None, "ALL|FORKID") txABC2 = self.nodes[0].sendrawtransaction(signed2["hex"]) # In mempool txs from self should increase balance from change newbalance = self.nodes[0].getbalance() assert_equal(newbalance, balance - 
Decimal("30") + Decimal("24.9996")) balance = newbalance # Restart the node with a higher min relay fee so the parent tx is no longer in mempool # TODO: redo with eviction self.stop_node(0) - self.nodes[0] = self.start_node(0, self.options.tmpdir, [ - "-minrelaytxfee=0.0001"]) + self.start_node(0, extra_args=["-minrelaytxfee=0.0001"]) # Verify txs no longer in mempool assert_equal(len(self.nodes[0].getrawmempool()), 0) # Not in mempool txs from self should only reduce balance # inputs are still spent, but change not received newbalance = self.nodes[0].getbalance() assert_equal(newbalance, balance - Decimal("24.9996")) # Unconfirmed received funds that are not in mempool, also shouldn't show # up in unconfirmed balance unconfbalance = self.nodes[ 0].getunconfirmedbalance() + self.nodes[0].getbalance() assert_equal(unconfbalance, newbalance) # Also shouldn't show up in listunspent assert(not txABC2 in [utxo["txid"] for utxo in self.nodes[0].listunspent(0)]) balance = newbalance # Abandon original transaction and verify inputs are available again # including that the child tx was also abandoned self.nodes[0].abandontransaction(txAB1) newbalance = self.nodes[0].getbalance() assert_equal(newbalance, balance + Decimal("30")) balance = newbalance # Verify that even with a low min relay fee, the tx is not reaccepted from wallet on startup once abandoned self.stop_node(0) - self.nodes[0] = self.start_node(0, self.options.tmpdir, [ - "-minrelaytxfee=0.00001"]) + self.start_node(0, extra_args=["-minrelaytxfee=0.00001"]) assert_equal(len(self.nodes[0].getrawmempool()), 0) assert_equal(self.nodes[0].getbalance(), balance) # But if its received again then it is unabandoned # And since now in mempool, the change is available # But its child tx remains abandoned self.nodes[0].sendrawtransaction(signed["hex"]) newbalance = self.nodes[0].getbalance() assert_equal(newbalance, balance - Decimal("20") + Decimal("14.99998")) balance = newbalance # Send child tx again so its unabandoned self.nodes[0].sendrawtransaction(signed2["hex"]) newbalance = self.nodes[0].getbalance() assert_equal(newbalance, balance - Decimal("10") - Decimal("14.99998") + Decimal("24.9996")) balance = newbalance # Remove using high relay fee again self.stop_node(0) - self.nodes[0] = self.start_node(0, self.options.tmpdir, [ - "-minrelaytxfee=0.0001"]) + self.start_node(0, extra_args=["-minrelaytxfee=0.0001"]) assert_equal(len(self.nodes[0].getrawmempool()), 0) newbalance = self.nodes[0].getbalance() assert_equal(newbalance, balance - Decimal("24.9996")) balance = newbalance # Create a double spend of AB1 by spending again from only A's 10 output # Mine double spend from node 1 inputs = [] inputs.append({"txid": txA, "vout": nA}) outputs = {} outputs[self.nodes[1].getnewaddress()] = Decimal("9.9999") tx = self.nodes[0].createrawtransaction(inputs, outputs) signed = self.nodes[0].signrawtransaction(tx, None, None, "ALL|FORKID") self.nodes[1].sendrawtransaction(signed["hex"]) self.nodes[1].generate(1) connect_nodes(self.nodes[0], 1) sync_blocks(self.nodes) # Verify that B and C's 10 BTC outputs are available for spending again # because AB1 is now conflicted newbalance = self.nodes[0].getbalance() assert_equal(newbalance, balance + Decimal("20")) balance = newbalance # There is currently a minor bug around this and so this test doesn't work. 
See Issue #7315 # Invalidate the block with the double spend and B's 10 BTC output should no longer be available # Don't think C's should either self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) newbalance = self.nodes[0].getbalance() # assert_equal(newbalance, balance - Decimal("10")) self.log.info( "If balance has not declined after invalidateblock then out of mempool wallet tx which is no longer") self.log.info( "conflicted has not resumed causing its inputs to be seen as spent. See Issue #7315") self.log.info(str(balance) + " -> " + str(newbalance) + " ?") if __name__ == '__main__': AbandonConflictTest().main() diff --git a/test/functional/abc-cmdline.py b/test/functional/abc-cmdline.py index bd75fb257..ab6d20e5e 100755 --- a/test/functional/abc-cmdline.py +++ b/test/functional/abc-cmdline.py @@ -1,72 +1,71 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ Exercise the command line functions specific to ABC functionality. Currently: -excessiveblocksize= """ import re from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal from test_framework.cdefs import LEGACY_MAX_BLOCK_SIZE, DEFAULT_MAX_BLOCK_SIZE MAX_GENERATED_BLOCK_SIZE_ERROR = ( 'Max generated block size (blockmaxsize) cannot exceed the excessive block size (excessiveblocksize)') class ABC_CmdLine_Test (BitcoinTestFramework): - def __init__(self): - super(ABC_CmdLine_Test, self).__init__() + def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = False def check_excessive(self, expected_value): 'Check that the excessiveBlockSize is as expected' getsize = self.nodes[0].getexcessiveblock() ebs = getsize['excessiveBlockSize'] assert_equal(ebs, expected_value) def check_subversion(self, pattern_str): 'Check that the subversion is set as expected' netinfo = self.nodes[0].getnetworkinfo() subversion = netinfo['subversion'] pattern = re.compile(pattern_str) assert(pattern.match(subversion)) def excessiveblocksize_test(self): self.log.info("Testing -excessiveblocksize") self.log.info(" Set to twice the default, i.e. 
%d bytes" % (2 * LEGACY_MAX_BLOCK_SIZE)) self.stop_node(0) - self.nodes[0] = self.start_node(0, self.options.tmpdir, [ - "-excessiveblocksize=%d" % (2 * LEGACY_MAX_BLOCK_SIZE)]) + self.start_node(0, ["-excessiveblocksize=%d" % + (2 * LEGACY_MAX_BLOCK_SIZE)]) self.check_excessive(2 * LEGACY_MAX_BLOCK_SIZE) # Check for EB correctness in the subver string self.check_subversion("/Bitcoin ABC:.*\(EB2\.0; .*\)/") self.log.info(" Attempt to set below legacy limit of 1MB - try %d bytes" % LEGACY_MAX_BLOCK_SIZE) self.stop_node(0) - self.assert_start_raises_init_error(0, self.options.tmpdir, [ - "-excessiveblocksize=%d" % LEGACY_MAX_BLOCK_SIZE], 'Error: Excessive block size must be > 1,000,000 bytes (1MB)') + self.assert_start_raises_init_error( + 0, ["-excessiveblocksize=%d" % LEGACY_MAX_BLOCK_SIZE], 'Error: Excessive block size must be > 1,000,000 bytes (1MB)') self.log.info(" Attempt to set below blockmaxsize (mining limit)") - self.assert_start_raises_init_error(0, self.options.tmpdir, [ - '-blockmaxsize=1500000', '-excessiveblocksize=1300000'], 'Error: ' + MAX_GENERATED_BLOCK_SIZE_ERROR) + self.assert_start_raises_init_error( + 0, ['-blockmaxsize=1500000', '-excessiveblocksize=1300000'], 'Error: ' + MAX_GENERATED_BLOCK_SIZE_ERROR) # Make sure we leave the test with a node running as this is what thee # framework expects. - self.nodes[0] = self.start_node(0, self.options.tmpdir, []) + self.start_node(0, []) def run_test(self): # Run tests on -excessiveblocksize option self.excessiveblocksize_test() if __name__ == '__main__': ABC_CmdLine_Test().main() diff --git a/test/functional/abc-p2p-fullblocktest.py b/test/functional/abc-p2p-fullblocktest.py index 15495e11b..9aaa265ed 100755 --- a/test/functional/abc-p2p-fullblocktest.py +++ b/test/functional/abc-p2p-fullblocktest.py @@ -1,504 +1,504 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ This test checks simple acceptance of bigger blocks via p2p. It is derived from the much more complex p2p-fullblocktest. The intention is that small tests can be derived from this one, or this one can be extended, to cover the checks done for bigger blocks (e.g. sigops limits). """ from test_framework.test_framework import ComparisonTestFramework from test_framework.util import * from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import time from test_framework.key import CECKey from test_framework.script import * from test_framework.cdefs import (ONE_MEGABYTE, LEGACY_MAX_BLOCK_SIZE, MAX_BLOCK_SIGOPS_PER_MB, MAX_TX_SIGOPS_COUNT) class PreviousSpendableOutput(): def __init__(self, tx=CTransaction(), n=-1): self.tx = tx self.n = n # the output we're spending # TestNode: A peer we use to send messages to bitcoind, and store responses. 
class TestNode(NodeConnCB): def __init__(self): self.last_sendcmpct = None self.last_cmpctblock = None self.last_getheaders = None self.last_headers = None super().__init__() def on_sendcmpct(self, conn, message): self.last_sendcmpct = message def on_cmpctblock(self, conn, message): self.last_cmpctblock = message self.last_cmpctblock.header_and_shortids.header.calc_sha256() def on_getheaders(self, conn, message): self.last_getheaders = message def on_headers(self, conn, message): self.last_headers = message for x in self.last_headers.headers: x.calc_sha256() def clear_block_data(self): with mininode_lock: self.last_sendcmpct = None self.last_cmpctblock = None class FullBlockTest(ComparisonTestFramework): # Can either run this test as 1 node with expected answers, or two and compare them. # Change the "outcome" variable from each TestInstance object to only do # the comparison. - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 + self.setup_clean_chain = True self.block_heights = {} self.coinbase_key = CECKey() self.coinbase_key.set_secretbytes(b"fatstacks") self.coinbase_pubkey = self.coinbase_key.get_pubkey() self.tip = None self.blocks = {} self.excessive_block_size = 16 * ONE_MEGABYTE self.extra_args = [['-norelaypriority', '-whitelist=127.0.0.1', '-limitancestorcount=9999', '-limitancestorsize=9999', '-limitdescendantcount=9999', '-limitdescendantsize=9999', '-maxmempool=999', "-excessiveblocksize=%d" % self.excessive_block_size]] def add_options(self, parser): super().add_options(parser) parser.add_option( "--runbarelyexpensive", dest="runbarelyexpensive", default=True) def run_test(self): self.test = TestManager(self, self.options.tmpdir) self.test.add_all_connections(self.nodes) # Start up network handling in another thread NetworkThread().start() # Set the blocksize to 2MB as initial condition self.nodes[0].setexcessiveblock(self.excessive_block_size) self.test.run() def add_transactions_to_block(self, block, tx_list): [tx.rehash() for tx in tx_list] block.vtx.extend(tx_list) # this is a little handier to use than the version in blocktools.py def create_tx(self, spend_tx, n, value, script=CScript([OP_TRUE])): tx = create_transaction(spend_tx, n, b"", value, script) return tx # sign a transaction, using the key we know about # this signs input 0 in tx, which is assumed to be spending output n in # spend_tx def sign_tx(self, tx, spend_tx, n): scriptPubKey = bytearray(spend_tx.vout[n].scriptPubKey) if (scriptPubKey[0] == OP_TRUE): # an anyone-can-spend tx.vin[0].scriptSig = CScript() return sighash = SignatureHashForkId( spend_tx.vout[n].scriptPubKey, tx, 0, SIGHASH_ALL | SIGHASH_FORKID, spend_tx.vout[n].nValue) tx.vin[0].scriptSig = CScript( [self.coinbase_key.sign(sighash) + bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID]))]) def create_and_sign_transaction(self, spend_tx, n, value, script=CScript([OP_TRUE])): tx = self.create_tx(spend_tx, n, value, script) self.sign_tx(tx, spend_tx, n) tx.rehash() return tx def next_block(self, number, spend=None, additional_coinbase_value=0, script=None, extra_sigops=0, block_size=0, solve=True): """ Create a block on top of self.tip, and advance self.tip to point to the new block if spend is specified, then 1 satoshi will be spent from that to an anyone-can-spend output, and rest will go to fees. 
""" if self.tip == None: base_block_hash = self.genesis_hash block_time = int(time.time()) + 1 else: base_block_hash = self.tip.sha256 block_time = self.tip.nTime + 1 # First create the coinbase height = self.block_heights[base_block_hash] + 1 coinbase = create_coinbase(height, self.coinbase_pubkey) coinbase.vout[0].nValue += additional_coinbase_value if (spend != None): coinbase.vout[0].nValue += spend.tx.vout[ spend.n].nValue - 1 # all but one satoshi to fees coinbase.rehash() block = create_block(base_block_hash, coinbase, block_time) spendable_output = None if (spend != None): tx = CTransaction() # no signature yet tx.vin.append( CTxIn(COutPoint(spend.tx.sha256, spend.n), b"", 0xffffffff)) # We put some random data into the first transaction of the chain # to randomize ids tx.vout.append( CTxOut(0, CScript([random.randint(0, 255), OP_DROP, OP_TRUE]))) if script == None: tx.vout.append(CTxOut(1, CScript([OP_TRUE]))) else: tx.vout.append(CTxOut(1, script)) spendable_output = PreviousSpendableOutput(tx, 0) # Now sign it if necessary scriptSig = b"" scriptPubKey = bytearray(spend.tx.vout[spend.n].scriptPubKey) if (scriptPubKey[0] == OP_TRUE): # looks like an anyone-can-spend scriptSig = CScript([OP_TRUE]) else: # We have to actually sign it sighash = SignatureHashForkId( spend.tx.vout[spend.n].scriptPubKey, tx, 0, SIGHASH_ALL | SIGHASH_FORKID, spend.tx.vout[spend.n].nValue) scriptSig = CScript( [self.coinbase_key.sign(sighash) + bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID]))]) tx.vin[0].scriptSig = scriptSig # Now add the transaction to the block self.add_transactions_to_block(block, [tx]) block.hashMerkleRoot = block.calc_merkle_root() if spendable_output != None and block_size > 0: while len(block.serialize()) < block_size: tx = CTransaction() script_length = block_size - len(block.serialize()) - 79 if script_length > 510000: script_length = 500000 tx_sigops = min( extra_sigops, script_length, MAX_TX_SIGOPS_COUNT) extra_sigops -= tx_sigops script_pad_len = script_length - tx_sigops script_output = CScript( [b'\x00' * script_pad_len] + [OP_CHECKSIG] * tx_sigops) tx.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx.vout.append(CTxOut(0, script_output)) tx.vin.append( CTxIn(COutPoint(spendable_output.tx.sha256, spendable_output.n))) spendable_output = PreviousSpendableOutput(tx, 0) self.add_transactions_to_block(block, [tx]) block.hashMerkleRoot = block.calc_merkle_root() # Make sure the math above worked out to produce the correct block size # (the math will fail if there are too many transactions in the block) assert_equal(len(block.serialize()), block_size) # Make sure all the requested sigops have been included assert_equal(extra_sigops, 0) if solve: block.solve() self.tip = block self.block_heights[block.sha256] = height assert number not in self.blocks self.blocks[number] = block return block def get_tests(self): self.genesis_hash = int(self.nodes[0].getbestblockhash(), 16) self.block_heights[self.genesis_hash] = 0 spendable_outputs = [] # save the current tip so it can be spent by a later block def save_spendable_output(): spendable_outputs.append(self.tip) # get an output that we previously marked as spendable def get_spendable_output(): return PreviousSpendableOutput(spendable_outputs.pop(0).vtx[0], 0) # returns a test case that asserts that the current tip was accepted def accepted(): return TestInstance([[self.tip, True]]) # returns a test case that asserts that the current tip was rejected def rejected(reject=None): if reject is None: return TestInstance([[self.tip, False]]) else: 
return TestInstance([[self.tip, reject]]) # move the tip back to a previous block def tip(number): self.tip = self.blocks[number] # adds transactions to the block and updates state def update_block(block_number, new_transactions): block = self.blocks[block_number] self.add_transactions_to_block(block, new_transactions) old_sha256 = block.sha256 block.hashMerkleRoot = block.calc_merkle_root() block.solve() # Update the internal state just like in next_block self.tip = block if block.sha256 != old_sha256: self.block_heights[ block.sha256] = self.block_heights[old_sha256] del self.block_heights[old_sha256] self.blocks[block_number] = block return block # shorthand for functions block = self.next_block # Create a new block block(0) save_spendable_output() yield accepted() # Now we need that block to mature so we can spend the coinbase. test = TestInstance(sync_every_block=False) for i in range(99): block(5000 + i) test.blocks_and_transactions.append([self.tip, True]) save_spendable_output() yield test # collect spendable outputs now to avoid cluttering the code later on out = [] for i in range(100): out.append(get_spendable_output()) # Let's build some blocks and test them. for i in range(16): n = i + 1 block(n, spend=out[i], block_size=n * ONE_MEGABYTE) yield accepted() # block of maximal size block(17, spend=out[16], block_size=self.excessive_block_size) yield accepted() # Reject oversized blocks with bad-blk-length error block(18, spend=out[17], block_size=self.excessive_block_size + 1) yield rejected(RejectResult(16, b'bad-blk-length')) # Rewind bad block. tip(17) # Accept many sigops lots_of_checksigs = CScript( [OP_CHECKSIG] * (MAX_BLOCK_SIGOPS_PER_MB - 1)) block( 19, spend=out[17], script=lots_of_checksigs, block_size=ONE_MEGABYTE) yield accepted() too_many_blk_checksigs = CScript( [OP_CHECKSIG] * MAX_BLOCK_SIGOPS_PER_MB) block( 20, spend=out[18], script=too_many_blk_checksigs, block_size=ONE_MEGABYTE) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(19) # Accept 40k sigops per block > 1MB and <= 2MB block(21, spend=out[18], script=lots_of_checksigs, extra_sigops=MAX_BLOCK_SIGOPS_PER_MB, block_size=ONE_MEGABYTE + 1) yield accepted() # Accept 40k sigops per block > 1MB and <= 2MB block(22, spend=out[19], script=lots_of_checksigs, extra_sigops=MAX_BLOCK_SIGOPS_PER_MB, block_size=2 * ONE_MEGABYTE) yield accepted() # Reject more than 40k sigops per block > 1MB and <= 2MB. block(23, spend=out[20], script=lots_of_checksigs, extra_sigops=MAX_BLOCK_SIGOPS_PER_MB + 1, block_size=ONE_MEGABYTE + 1) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(22) # Reject more than 40k sigops per block > 1MB and <= 2MB. block(24, spend=out[20], script=lots_of_checksigs, extra_sigops=MAX_BLOCK_SIGOPS_PER_MB + 1, block_size=2 * ONE_MEGABYTE) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(22) # Accept 60k sigops per block > 2MB and <= 3MB block(25, spend=out[20], script=lots_of_checksigs, extra_sigops=2 * MAX_BLOCK_SIGOPS_PER_MB, block_size=2 * ONE_MEGABYTE + 1) yield accepted() # Accept 60k sigops per block > 2MB and <= 3MB block(26, spend=out[21], script=lots_of_checksigs, extra_sigops=2 * MAX_BLOCK_SIGOPS_PER_MB, block_size=3 * ONE_MEGABYTE) yield accepted() # Reject more than 60k sigops per block > 2MB and <= 3MB. 
block(27, spend=out[22], script=lots_of_checksigs, extra_sigops=2 * MAX_BLOCK_SIGOPS_PER_MB + 1, block_size=2 * ONE_MEGABYTE + 1) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(26) # Reject more than 60k sigops per block > 2MB and <= 3MB. block(28, spend=out[22], script=lots_of_checksigs, extra_sigops=2 * MAX_BLOCK_SIGOPS_PER_MB + 1, block_size=3 * ONE_MEGABYTE) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Rewind bad block tip(26) # Too many sigops in one txn too_many_tx_checksigs = CScript( [OP_CHECKSIG] * (MAX_BLOCK_SIGOPS_PER_MB + 1)) block( 29, spend=out[22], script=too_many_tx_checksigs, block_size=ONE_MEGABYTE + 1) yield rejected(RejectResult(16, b'bad-txn-sigops')) # Rewind bad block tip(26) # P2SH # Build the redeem script, hash it, use hash to create the p2sh script redeem_script = CScript([self.coinbase_pubkey] + [ OP_2DUP, OP_CHECKSIGVERIFY] * 5 + [OP_CHECKSIG]) redeem_script_hash = hash160(redeem_script) p2sh_script = CScript([OP_HASH160, redeem_script_hash, OP_EQUAL]) # Create a p2sh transaction p2sh_tx = self.create_and_sign_transaction( out[22].tx, out[22].n, 1, p2sh_script) # Add the transaction to the block block(30) update_block(30, [p2sh_tx]) yield accepted() # Creates a new transaction using the p2sh transaction included in the # last block def spend_p2sh_tx(output_script=CScript([OP_TRUE])): # Create the transaction spent_p2sh_tx = CTransaction() spent_p2sh_tx.vin.append(CTxIn(COutPoint(p2sh_tx.sha256, 0), b'')) spent_p2sh_tx.vout.append(CTxOut(1, output_script)) # Sign the transaction using the redeem script sighash = SignatureHashForkId( redeem_script, spent_p2sh_tx, 0, SIGHASH_ALL | SIGHASH_FORKID, p2sh_tx.vout[0].nValue) sig = self.coinbase_key.sign(sighash) + bytes( bytearray([SIGHASH_ALL | SIGHASH_FORKID])) spent_p2sh_tx.vin[0].scriptSig = CScript([sig, redeem_script]) spent_p2sh_tx.rehash() return spent_p2sh_tx # Sigops p2sh limit p2sh_sigops_limit = MAX_BLOCK_SIGOPS_PER_MB - \ redeem_script.GetSigOpCount(True) # Too many sigops in one p2sh txn too_many_p2sh_sigops = CScript([OP_CHECKSIG] * (p2sh_sigops_limit + 1)) block(31, spend=out[23], block_size=ONE_MEGABYTE + 1) update_block(31, [spend_p2sh_tx(too_many_p2sh_sigops)]) yield rejected(RejectResult(16, b'bad-txn-sigops')) # Rewind bad block tip(30) # Max sigops in one p2sh txn max_p2sh_sigops = CScript([OP_CHECKSIG] * (p2sh_sigops_limit)) block(32, spend=out[23], block_size=ONE_MEGABYTE + 1) update_block(32, [spend_p2sh_tx(max_p2sh_sigops)]) yield accepted() # Check that compact blocks also work for big blocks node = self.nodes[0] peer = TestNode() peer.add_connection(NodeConn('127.0.0.1', p2p_port(0), node, peer)) # Start up network handling in another thread and wait for connection # to be established NetworkThread().start() peer.wait_for_verack() # Wait for SENDCMPCT def received_sendcmpct(): return (peer.last_sendcmpct != None) wait_until(received_sendcmpct, timeout=30) sendcmpct = msg_sendcmpct() sendcmpct.version = 1 sendcmpct.announce = True peer.send_and_ping(sendcmpct) # Exchange headers def received_getheaders(): return (peer.last_getheaders != None) wait_until(received_getheaders, timeout=30) # Return the favor peer.send_message(peer.last_getheaders) # Wait for the header list def received_headers(): return (peer.last_headers != None) wait_until(received_headers, timeout=30) # It's like we know about the same headers ! 
peer.send_message(peer.last_headers) # Send a block b33 = block(33, spend=out[24], block_size=ONE_MEGABYTE + 1) yield accepted() # Check that the node forwards it via compact block def received_block(): return (peer.last_cmpctblock != None) wait_until(received_block, timeout=30) # Was it our block? cmpctblk_header = peer.last_cmpctblock.header_and_shortids.header cmpctblk_header.calc_sha256() assert(cmpctblk_header.sha256 == b33.sha256) # Send a bigger block peer.clear_block_data() b34 = block(34, spend=out[25], block_size=8 * ONE_MEGABYTE) yield accepted() # Check that the node forwards it via compact block wait_until(received_block, timeout=30) # Was it our block? cmpctblk_header = peer.last_cmpctblock.header_and_shortids.header cmpctblk_header.calc_sha256() assert(cmpctblk_header.sha256 == b34.sha256) # Let's send a compact block and see if the node accepts it. # First, we generate the block and send all transactions to the mempool b35 = block(35, spend=out[26], block_size=8 * ONE_MEGABYTE) for i in range(1, len(b35.vtx)): node.sendrawtransaction(ToHex(b35.vtx[i]), True) # Now we create the compact block and send it comp_block = HeaderAndShortIDs() comp_block.initialize_from_block(b35) peer.send_and_ping(msg_cmpctblock(comp_block.to_p2p())) # Check that the compact block is accepted properly assert(int(node.getbestblockhash(), 16) == b35.sha256) if __name__ == '__main__': FullBlockTest().main() diff --git a/test/functional/abc-rpc.py b/test/functional/abc-rpc.py index 86ee915c2..c0c91cd4d 100755 --- a/test/functional/abc-rpc.py +++ b/test/functional/abc-rpc.py @@ -1,100 +1,99 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # Exercise the Bitcoin ABC RPC calls. 
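# Specifically: the getexcessiveblock/setexcessiveblock limits, the EB value # advertised in the subversion string, and the NODE_BITCOIN_CASH service bit # reported in getnetworkinfo's 'localservices'.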
import time import random import re from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.mininode import NODE_BITCOIN_CASH from test_framework.cdefs import (ONE_MEGABYTE, LEGACY_MAX_BLOCK_SIZE, DEFAULT_MAX_BLOCK_SIZE) class ABC_RPC_Test (BitcoinTestFramework): - def __init__(self): - super(ABC_RPC_Test, self).__init__() + def set_test_params(self): self.num_nodes = 1 self.tip = None self.setup_clean_chain = True self.extra_args = [['-norelaypriority', '-whitelist=127.0.0.1']] def check_subversion(self, pattern_str): # Check that the subversion is set as expected netinfo = self.nodes[0].getnetworkinfo() subversion = netinfo['subversion'] pattern = re.compile(pattern_str) assert(pattern.match(subversion)) def test_excessiveblock(self): # Check that we start with DEFAULT_MAX_BLOCK_SIZE getsize = self.nodes[0].getexcessiveblock() ebs = getsize['excessiveBlockSize'] assert_equal(ebs, DEFAULT_MAX_BLOCK_SIZE) # Check that setting to legacy size is ok self.nodes[0].setexcessiveblock(LEGACY_MAX_BLOCK_SIZE + 1) getsize = self.nodes[0].getexcessiveblock() ebs = getsize['excessiveBlockSize'] assert_equal(ebs, LEGACY_MAX_BLOCK_SIZE + 1) # Check that going below legacy size is not accepted try: self.nodes[0].setexcessiveblock(LEGACY_MAX_BLOCK_SIZE) except JSONRPCException as e: assert("Invalid parameter, excessiveblock must be larger than %d" % LEGACY_MAX_BLOCK_SIZE in e.error['message']) else: raise AssertionError( "Must not accept excessiveblock values <= %d bytes" % LEGACY_MAX_BLOCK_SIZE) getsize = self.nodes[0].getexcessiveblock() ebs = getsize['excessiveBlockSize'] assert_equal(ebs, LEGACY_MAX_BLOCK_SIZE + 1) # Check setting to 2MB self.nodes[0].setexcessiveblock(2 * ONE_MEGABYTE) getsize = self.nodes[0].getexcessiveblock() ebs = getsize['excessiveBlockSize'] assert_equal(ebs, 2 * ONE_MEGABYTE) # Check for EB correctness in the subver string self.check_subversion("/Bitcoin ABC:.*\(EB2\.0; .*\)/") # Check setting to 13MB self.nodes[0].setexcessiveblock(13 * ONE_MEGABYTE) getsize = self.nodes[0].getexcessiveblock() ebs = getsize['excessiveBlockSize'] assert_equal(ebs, 13 * ONE_MEGABYTE) # Check for EB correctness in the subver string self.check_subversion("/Bitcoin ABC:.*\(EB13\.0; .*\)/") # Check setting to 13.14MB self.nodes[0].setexcessiveblock(13140000) getsize = self.nodes[0].getexcessiveblock() ebs = getsize['excessiveBlockSize'] assert_equal(ebs, 13.14 * ONE_MEGABYTE) # check for EB correctness in the subver string self.check_subversion("/Bitcoin ABC:.*\(EB13\.1; .*\)/") def test_cashservicebit(self): # Check that NODE_BITCOIN_CASH bit is set. # This can be seen in the 'localservices' entry of getnetworkinfo RPC. node = self.nodes[0] nw_info = node.getnetworkinfo() assert_equal(int(nw_info['localservices'], 16) & NODE_BITCOIN_CASH, NODE_BITCOIN_CASH) def run_test(self): self.genesis_hash = int(self.nodes[0].getbestblockhash(), 16) self.test_excessiveblock() self.test_cashservicebit() if __name__ == '__main__': ABC_RPC_Test().main() diff --git a/test/functional/assumevalid.py b/test/functional/assumevalid.py index ecd5e1cf4..45e5262d9 100755 --- a/test/functional/assumevalid.py +++ b/test/functional/assumevalid.py @@ -1,215 +1,213 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test logic for skipping signature validation on old blocks. 
Test logic for skipping signature validation on blocks which we've assumed valid (https://github.com/bitcoin/bitcoin/pull/9484) We build a chain that includes an invalid signature for one of the transactions: 0: genesis block 1: block 1 with coinbase transaction output. 2-101: bury that block with 100 blocks so the coinbase transaction output can be spent 102: a block containing a transaction spending the coinbase transaction output. The transaction has an invalid signature. 103-2202: bury the bad block with just over two weeks' worth of blocks (2100 blocks) Start three nodes: - node0 has no -assumevalid parameter. Try to sync to block 2202. It will reject block 102 and only sync as far as block 101. - node1 has -assumevalid set to the hash of block 102. Try to sync to block 2202. node1 will sync all the way to block 2202. - node2 has -assumevalid set to the hash of block 102. Try to sync to block 200. node2 will reject block 102 since, although it's assumed valid, it isn't buried by at least two weeks' work. """ import time from test_framework.blocktools import (create_block, create_coinbase) from test_framework.key import CECKey from test_framework.mininode import (CBlockHeader, COutPoint, CTransaction, CTxIn, CTxOut, NetworkThread, NodeConn, NodeConnCB, msg_block, msg_headers) from test_framework.script import (CScript, OP_TRUE) from test_framework.test_framework import BitcoinTestFramework from test_framework.util import (p2p_port, assert_equal) class BaseNode(NodeConnCB): def send_header_for_blocks(self, new_blocks): headers_message = msg_headers() headers_message.headers = [CBlockHeader(b) for b in new_blocks] self.send_message(headers_message) class AssumeValidTest(BitcoinTestFramework): - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 3 def setup_network(self): + self.add_nodes(3) # Start node0. We don't start the other nodes yet since # we need to pre-mine a block with an invalid transaction # signature so we can pass in the block hash as assumevalid. 
- self.nodes = [self.start_node(0, self.options.tmpdir)] + self.start_node(0) def send_blocks_until_disconnected(self, node): """Keep sending blocks to the node until we're disconnected.""" for i in range(len(self.blocks)): try: node.send_message(msg_block(self.blocks[i])) except IOError as e: assert str(e) == 'Not connected, no pushbuf' break def assert_blockchain_height(self, node, height): """Wait until the blockchain is no longer advancing and verify it's reached the expected height.""" last_height = node.getblock(node.getbestblockhash())['height'] timeout = 10 while True: time.sleep(0.25) current_height = node.getblock(node.getbestblockhash())['height'] if current_height != last_height: last_height = current_height if timeout < 0: assert False, "blockchain too short after timeout: %d" % current_height timeout -= 0.25 continue elif current_height > height: assert False, "blockchain too long: %d" % current_height elif current_height == height: break def run_test(self): # Connect to node0 node0 = BaseNode() connections = [] connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], node0)) node0.add_connection(connections[0]) NetworkThread().start() # Start up network handling in another thread node0.wait_for_verack() # Build the blockchain self.tip = int(self.nodes[0].getbestblockhash(), 16) self.block_time = self.nodes[0].getblock( self.nodes[0].getbestblockhash())['time'] + 1 self.blocks = [] # Get a pubkey for the coinbase TXO coinbase_key = CECKey() coinbase_key.set_secretbytes(b"horsebattery") coinbase_pubkey = coinbase_key.get_pubkey() # Create the first block with a coinbase output to our key height = 1 block = create_block(self.tip, create_coinbase( height, coinbase_pubkey), self.block_time) self.blocks.append(block) self.block_time += 1 block.solve() # Save the coinbase for later self.block1 = block self.tip = block.sha256 height += 1 # Bury the block 100 deep so the coinbase output is spendable for i in range(100): block = create_block( self.tip, create_coinbase(height), self.block_time) block.solve() self.blocks.append(block) self.tip = block.sha256 self.block_time += 1 height += 1 # Create a transaction spending the coinbase output with an invalid (null) signature tx = CTransaction() tx.vin.append( CTxIn(COutPoint(self.block1.vtx[0].sha256, 0), scriptSig=b"")) tx.vout.append(CTxOut(49 * 100000000, CScript([OP_TRUE]))) tx.calc_sha256() block102 = create_block( self.tip, create_coinbase(height), self.block_time) self.block_time += 1 block102.vtx.extend([tx]) block102.hashMerkleRoot = block102.calc_merkle_root() block102.rehash() block102.solve() self.blocks.append(block102) self.tip = block102.sha256 self.block_time += 1 height += 1 # Bury the assumed valid block 2100 deep for i in range(2100): block = create_block( self.tip, create_coinbase(height), self.block_time) block.nVersion = 4 block.solve() self.blocks.append(block) self.tip = block.sha256 self.block_time += 1 height += 1 # Start node1 and node2 with assumevalid so they accept a block with a bad signature. 
- self.nodes.append(self.start_node(1, self.options.tmpdir, - ["-assumevalid=" + hex(block102.sha256)])) + self.start_node(1, extra_args=["-assumevalid=" + hex(block102.sha256)]) node1 = BaseNode() # connects to node1 connections.append( NodeConn('127.0.0.1', p2p_port(1), self.nodes[1], node1)) node1.add_connection(connections[1]) node1.wait_for_verack() - self.nodes.append(self.start_node(2, self.options.tmpdir, - ["-assumevalid=" + hex(block102.sha256)])) + self.start_node(2, extra_args=["-assumevalid=" + hex(block102.sha256)]) node2 = BaseNode() # connects to node2 connections.append( NodeConn('127.0.0.1', p2p_port(2), self.nodes[2], node2)) node2.add_connection(connections[2]) node2.wait_for_verack() # send header lists to all three nodes node0.send_header_for_blocks(self.blocks[0:2000]) node0.send_header_for_blocks(self.blocks[2000:]) node1.send_header_for_blocks(self.blocks[0:2000]) node1.send_header_for_blocks(self.blocks[2000:]) node2.send_header_for_blocks(self.blocks[0:200]) # Send blocks to node0. Block 102 will be rejected. self.send_blocks_until_disconnected(node0) self.assert_blockchain_height(self.nodes[0], 101) # Send all blocks to node1. All blocks will be accepted. for i in range(2202): node1.send_message(msg_block(self.blocks[i])) # Syncing 2200 blocks can take a while on slow systems. Give it plenty of time to sync. node1.sync_with_ping(120) assert_equal(self.nodes[1].getblock( self.nodes[1].getbestblockhash())['height'], 2202) # Send blocks to node2. Block 102 will be rejected. self.send_blocks_until_disconnected(node2) self.assert_blockchain_height(self.nodes[2], 101) if __name__ == '__main__': AssumeValidTest().main() diff --git a/test/functional/bip65-cltv-p2p.py b/test/functional/bip65-cltv-p2p.py index 21d3b053b..880986a3c 100755 --- a/test/functional/bip65-cltv-p2p.py +++ b/test/functional/bip65-cltv-p2p.py @@ -1,181 +1,179 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test BIP65 (CHECKLOCKTIMEVERIFY). Test that the CHECKLOCKTIMEVERIFY soft-fork activates at (regtest) block height 1351. """ from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.mininode import * from test_framework.blocktools import create_coinbase, create_block from test_framework.script import CScript, OP_1NEGATE, OP_CHECKLOCKTIMEVERIFY, OP_DROP, CScriptNum from io import BytesIO CLTV_HEIGHT = 1351 # Reject codes that we might receive in this test REJECT_INVALID = 16 REJECT_OBSOLETE = 17 REJECT_NONSTANDARD = 64 def cltv_invalidate(tx): '''Modify the signature in vin 0 of the tx to fail CLTV Prepends -1 CLTV DROP in the scriptSig itself. TODO: test more ways that transactions using CLTV could be invalid (eg locktime requirements fail, sequence time requirements fail, etc). 
''' tx.vin[0].scriptSig = CScript([OP_1NEGATE, OP_CHECKLOCKTIMEVERIFY, OP_DROP] + list(CScript(tx.vin[0].scriptSig))) def cltv_validate(node, tx, height): '''Modify the signature in vin 0 of the tx to pass CLTV Prepends CLTV DROP in the scriptSig, and sets the locktime to height''' tx.vin[0].nSequence = 0 tx.nLockTime = height # Need to re-sign, since nSequence and nLockTime changed signed_result = node.signrawtransaction( ToHex(tx), None, None, "ALL|FORKID") new_tx = CTransaction() new_tx.deserialize(BytesIO(hex_str_to_bytes(signed_result['hex']))) new_tx.vin[0].scriptSig = CScript([CScriptNum(height), OP_CHECKLOCKTIMEVERIFY, OP_DROP] + list(CScript(new_tx.vin[0].scriptSig))) return new_tx def create_transaction(node, coinbase, to_address, amount): from_txid = node.getblock(coinbase)['tx'][0] inputs = [{"txid": from_txid, "vout": 0}] outputs = {to_address: amount} rawtx = node.createrawtransaction(inputs, outputs) signresult = node.signrawtransaction(rawtx, None, None, "ALL|FORKID") tx = CTransaction() tx.deserialize(BytesIO(hex_str_to_bytes(signresult['hex']))) return tx class BIP65Test(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 self.extra_args = [ ['-promiscuousmempoolflags=1', '-whitelist=127.0.0.1']] self.setup_clean_chain = True def run_test(self): node0 = NodeConnCB() connections = [] connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], node0)) node0.add_connection(connections[0]) NetworkThread().start() # Start up network handling in another thread # wait_for_verack ensures that the P2P connection is fully up. node0.wait_for_verack() self.log.info("Mining %d blocks", CLTV_HEIGHT - 2) self.coinbase_blocks = self.nodes[0].generate(CLTV_HEIGHT - 2) self.nodeaddress = self.nodes[0].getnewaddress() self.log.info( "Test that an invalid-according-to-CLTV transaction can still appear in a block") spendtx = create_transaction(self.nodes[0], self.coinbase_blocks[0], self.nodeaddress, 1.0) cltv_invalidate(spendtx) spendtx.rehash() tip = self.nodes[0].getbestblockhash() block_time = self.nodes[0].getblockheader(tip)['mediantime'] + 1 block = create_block(int(tip, 16), create_coinbase( CLTV_HEIGHT - 1), block_time) block.nVersion = 3 block.vtx.append(spendtx) block.hashMerkleRoot = block.calc_merkle_root() block.solve() node0.send_and_ping(msg_block(block)) assert_equal(self.nodes[0].getbestblockhash(), block.hash) self.log.info("Test that blocks must now be at least version 4") tip = block.sha256 block_time += 1 block = create_block(tip, create_coinbase(CLTV_HEIGHT), block_time) block.nVersion = 3 block.solve() node0.send_and_ping(msg_block(block)) assert_equal(int(self.nodes[0].getbestblockhash(), 16), tip) wait_until(lambda: "reject" in node0.last_message.keys(), lock=mininode_lock) with mininode_lock: assert_equal(node0.last_message["reject"].code, REJECT_OBSOLETE) assert_equal( node0.last_message["reject"].reason, b'bad-version(0x00000003)') assert_equal(node0.last_message["reject"].data, block.sha256) del node0.last_message["reject"] self.log.info( "Test that invalid-according-to-cltv transactions cannot appear in a block") block.nVersion = 4 spendtx = create_transaction(self.nodes[0], self.coinbase_blocks[1], self.nodeaddress, 1.0) cltv_invalidate(spendtx) spendtx.rehash() # First we show that this tx is valid except for CLTV by getting it # accepted to the mempool (which we can achieve with # -promiscuousmempoolflags). 
node0.send_and_ping(msg_tx(spendtx)) assert spendtx.hash in self.nodes[0].getrawmempool() # Now we verify that a block with this transaction is invalid. block.vtx.append(spendtx) block.hashMerkleRoot = block.calc_merkle_root() block.solve() node0.send_and_ping(msg_block(block)) assert_equal(int(self.nodes[0].getbestblockhash(), 16), tip) wait_until(lambda: "reject" in node0.last_message.keys(), lock=mininode_lock) with mininode_lock: assert node0.last_message["reject"].code in [ REJECT_INVALID, REJECT_NONSTANDARD] assert_equal(node0.last_message["reject"].data, block.sha256) if node0.last_message["reject"].code == REJECT_INVALID: # Generic rejection when a block is invalid assert_equal( node0.last_message["reject"].reason, b'blk-bad-inputs') else: assert b'Negative locktime' in node0.last_message["reject"].reason self.log.info( "Test that a version 4 block with a valid-according-to-CLTV transaction is accepted") spendtx = cltv_validate(self.nodes[0], spendtx, CLTV_HEIGHT - 1) spendtx.rehash() block.vtx.pop(1) block.vtx.append(spendtx) block.hashMerkleRoot = block.calc_merkle_root() block.solve() node0.send_and_ping(msg_block(block)) assert_equal(int(self.nodes[0].getbestblockhash(), 16), block.sha256) if __name__ == '__main__': BIP65Test().main() diff --git a/test/functional/bip68-112-113-p2p.py b/test/functional/bip68-112-113-p2p.py index 2fe1130f1..89b7867b4 100755 --- a/test/functional/bip68-112-113-p2p.py +++ b/test/functional/bip68-112-113-p2p.py @@ -1,629 +1,628 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import ComparisonTestFramework from test_framework.util import * from test_framework.mininode import ToHex, CTransaction, NetworkThread from test_framework.blocktools import create_coinbase, create_block from test_framework.comptool import TestInstance, TestManager from test_framework.script import * from io import BytesIO import time ''' This test is meant to exercise activation of the first version bits soft fork This soft fork will activate the following BIPS: BIP 68 - nSequence relative lock times BIP 112 - CHECKSEQUENCEVERIFY BIP 113 - MedianTimePast semantics for nLockTime regtest lock-in with 108/144 block signalling activation after a further 144 blocks mine 82 blocks whose coinbases will be used to generate inputs for our tests mine 61 blocks to transition from DEFINED to STARTED mine 144 blocks, only 100 of which are signaling readiness, in order to fail to change state this period mine 144 blocks with 108 signaling and verify STARTED->LOCKED_IN mine 140 blocks and seed block chain with the 82 inputs we will use for our tests at height 572 mine 3 blocks and verify still at LOCKED_IN and test that enforcement has not triggered mine 1 block and test that enforcement has triggered (which triggers ACTIVE) Test BIP 113 is enforced Mine 4 blocks so next height is 580 and test BIP 68 is enforced for time and height Mine 1 block so next height is 581 and test BIP 68 now passes time but not height Mine 1 block so next height is 582 and test BIP 68 now passes time and height Test that BIP 112 is enforced Various transactions will be used to test that the BIPs' rules are not enforced before the soft fork activates And that after the soft fork activates transactions pass and fail as they should according to the rules. For each BIP, transactions of versions 1 and 2 will be tested. 
---------------- BIP 113: bip113tx - modify the nLocktime variable BIP 68: bip68txs - 16 txs with nSequence relative locktime of 10 with various bits set as per the relative_locktimes below BIP 112: bip112txs_vary_nSequence - 16 txs with nSequence relative_locktimes of 10 evaluated against 10 OP_CSV OP_DROP bip112txs_vary_nSequence_9 - 16 txs with nSequence relative_locktimes of 9 evaluated against 10 OP_CSV OP_DROP bip112txs_vary_OP_CSV - 16 txs with nSequence = 10 evaluated against varying {relative_locktimes of 10} OP_CSV OP_DROP bip112txs_vary_OP_CSV_9 - 16 txs with nSequence = 9 evaluated against varying {relative_locktimes of 10} OP_CSV OP_DROP bip112tx_special - test negative argument to OP_CSV ''' base_relative_locktime = 10 seq_disable_flag = 1 << 31 seq_random_high_bit = 1 << 25 seq_type_flag = 1 << 22 seq_random_low_bit = 1 << 18 # b31,b25,b22,b18 represent the 31st, 25th, 22nd and 18th bits respectively in the nSequence field # relative_locktimes[b31][b25][b22][b18] is a base_relative_locktime with # the indicated bits set if their indices are 1 relative_locktimes = [] for b31 in range(2): b25times = [] for b25 in range(2): b22times = [] for b22 in range(2): b18times = [] for b18 in range(2): rlt = base_relative_locktime if (b31): rlt = rlt | seq_disable_flag if (b25): rlt = rlt | seq_random_high_bit if (b22): rlt = rlt | seq_type_flag if (b18): rlt = rlt | seq_random_low_bit b18times.append(rlt) b22times.append(b18times) b25times.append(b22times) relative_locktimes.append(b25times) def all_rlt_txs(txarray): txs = [] for b31 in range(2): for b25 in range(2): for b22 in range(2): for b18 in range(2): txs.append(txarray[b31][b25][b22][b18]) return txs class BIP68_112_113Test(ComparisonTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 + self.setup_clean_chain = True self.extra_args = [['-whitelist=127.0.0.1', '-blockversion=4']] def run_test(self): test = TestManager(self, self.options.tmpdir) test.add_all_connections(self.nodes) NetworkThread().start() # Start up network handling in another thread test.run() def send_generic_input_tx(self, node, coinbases): amount = Decimal("49.99") return node.sendrawtransaction(ToHex(self.sign_transaction(node, self.create_transaction(node, node.getblock(coinbases.pop())['tx'][0], self.nodeaddress, amount)))) def create_transaction(self, node, txid, to_address, amount): inputs = [{"txid": txid, "vout": 0}] outputs = {to_address: amount} rawtx = node.createrawtransaction(inputs, outputs) tx = CTransaction() f = BytesIO(hex_str_to_bytes(rawtx)) tx.deserialize(f) return tx def sign_transaction(self, node, unsignedtx): rawtx = ToHex(unsignedtx) signresult = node.signrawtransaction(rawtx, None, None, "ALL|FORKID") tx = CTransaction() f = BytesIO(hex_str_to_bytes(signresult['hex'])) tx.deserialize(f) return tx def generate_blocks(self, number, version, test_blocks=[]): for i in range(number): block = self.create_test_block([], version) test_blocks.append([block, True]) self.last_block_time += 600 self.tip = block.sha256 self.tipheight += 1 return test_blocks def create_test_block(self, txs, version=536870912): block = create_block(self.tip, create_coinbase( self.tipheight + 1), self.last_block_time + 600) block.nVersion = version block.vtx.extend(txs) block.hashMerkleRoot = block.calc_merkle_root() block.rehash() block.solve() return block def create_bip68txs(self, bip68inputs, txversion, locktime_delta=0): txs = [] assert(len(bip68inputs) >= 16) i = 0 for b31 in range(2): b25txs = [] for b25 
in range(2): b22txs = [] for b22 in range(2): b18txs = [] for b18 in range(2): tx = self.create_transaction( self.nodes[0], bip68inputs[i], self.nodeaddress, Decimal("49.98")) i += 1 tx.nVersion = txversion tx.vin[0].nSequence = relative_locktimes[ b31][b25][b22][b18] + locktime_delta b18txs.append(self.sign_transaction(self.nodes[0], tx)) b22txs.append(b18txs) b25txs.append(b22txs) txs.append(b25txs) return txs def create_bip112special(self, input, txversion): tx = self.create_transaction( self.nodes[0], input, self.nodeaddress, Decimal("49.98")) tx.nVersion = txversion signtx = self.sign_transaction(self.nodes[0], tx) signtx.vin[0].scriptSig = CScript( [-1, OP_CHECKSEQUENCEVERIFY, OP_DROP] + list(CScript(signtx.vin[0].scriptSig))) return signtx def create_bip112txs(self, bip112inputs, varyOP_CSV, txversion, locktime_delta=0): txs = [] assert(len(bip112inputs) >= 16) i = 0 for b31 in range(2): b25txs = [] for b25 in range(2): b22txs = [] for b22 in range(2): b18txs = [] for b18 in range(2): tx = self.create_transaction( self.nodes[0], bip112inputs[i], self.nodeaddress, Decimal("49.98")) i += 1 if (varyOP_CSV): # if varying OP_CSV, nSequence is fixed tx.vin[ 0].nSequence = base_relative_locktime + locktime_delta else: # vary nSequence instead, OP_CSV is fixed tx.vin[0].nSequence = relative_locktimes[ b31][b25][b22][b18] + locktime_delta tx.nVersion = txversion signtx = self.sign_transaction(self.nodes[0], tx) if (varyOP_CSV): signtx.vin[0].scriptSig = CScript( [relative_locktimes[b31][b25][b22][b18], OP_CHECKSEQUENCEVERIFY, OP_DROP] + list(CScript(signtx.vin[0].scriptSig))) else: signtx.vin[0].scriptSig = CScript( [base_relative_locktime, OP_CHECKSEQUENCEVERIFY, OP_DROP] + list(CScript(signtx.vin[0].scriptSig))) b18txs.append(signtx) b22txs.append(b18txs) b25txs.append(b22txs) txs.append(b25txs) return txs def get_tests(self): long_past_time = int(time.time()) - 600 * \ 1000 # enough to build up to 1000 blocks 10 minutes apart without worrying about getting into the future self.nodes[0].setmocktime(long_past_time - 100) # enough so that the generated blocks will # still all be before long_past_time self.coinbase_blocks = self.nodes[0].generate( 1 + 16 + 2 * 32 + 1) # 82 blocks generated for inputs self.nodes[0].setmocktime( 0) # set time back to present so yielded blocks aren't in the future as we advance last_block_time self.tipheight = 82 # height of the next block to build self.last_block_time = long_past_time self.tip = int("0x" + self.nodes[0].getbestblockhash(), 0) self.nodeaddress = self.nodes[0].getnewaddress() assert_equal( get_bip9_status(self.nodes[0], 'csv')['status'], 'defined') test_blocks = self.generate_blocks(61, 4) yield TestInstance(test_blocks, sync_every_block=False) # 1 # Advanced from DEFINED to STARTED, height = 143 assert_equal( get_bip9_status(self.nodes[0], 'csv')['status'], 'started') # Fail to achieve LOCKED_IN 100 out of 144 signal bit 0 # using a variety of bits to simulate multiple parallel softforks # 0x20000001 (signalling ready) test_blocks = self.generate_blocks(50, 536870913) # 0x00000004 (signalling not) test_blocks = self.generate_blocks(20, 4, test_blocks) # 0x20000101 (signalling ready) test_blocks = self.generate_blocks(50, 536871169, test_blocks) # 0x20010000 (signalling not) test_blocks = self.generate_blocks(24, 536936448, test_blocks) yield TestInstance(test_blocks, sync_every_block=False) # 2 # Failed to advance past STARTED, height = 287 assert_equal( get_bip9_status(self.nodes[0], 'csv')['status'], 'started') # 108 out of 144 signal bit 0 to 
achieve lock-in # using a variety of bits to simulate multiple parallel softforks # 0x20000001 (signalling ready) test_blocks = self.generate_blocks(58, 536870913) # 0x00000004 (signalling not) test_blocks = self.generate_blocks(26, 4, test_blocks) # 0x20000101 (signalling ready) test_blocks = self.generate_blocks(50, 536871169, test_blocks) # 0x20010000 (signalling not) test_blocks = self.generate_blocks(10, 536936448, test_blocks) yield TestInstance(test_blocks, sync_every_block=False) # 3 # Advanced from STARTED to LOCKED_IN, height = 431 assert_equal( get_bip9_status(self.nodes[0], 'csv')['status'], 'locked_in') # 140 more version 4 blocks test_blocks = self.generate_blocks(140, 4) yield TestInstance(test_blocks, sync_every_block=False) # 4 # Inputs at height = 572 # Put inputs for all tests in the chain at height 572 (tip now = 571) (time increases by 600s per block) # Note we reuse inputs for v1 and v2 txs so must test these separately # 16 normal inputs bip68inputs = [] for i in range(16): bip68inputs.append( self.send_generic_input_tx(self.nodes[0], self.coinbase_blocks)) # 2 sets of 16 inputs with 10 OP_CSV OP_DROP (actually will be # prepended to spending scriptSig) bip112basicinputs = [] for j in range(2): inputs = [] for i in range(16): inputs.append(self.send_generic_input_tx( self.nodes[0], self.coinbase_blocks)) bip112basicinputs.append(inputs) # 2 sets of 16 varied inputs with (relative_lock_time) OP_CSV OP_DROP # (actually will be prepended to spending scriptSig) bip112diverseinputs = [] for j in range(2): inputs = [] for i in range(16): inputs.append(self.send_generic_input_tx( self.nodes[0], self.coinbase_blocks)) bip112diverseinputs.append(inputs) # 1 special input with -1 OP_CSV OP_DROP (actually will be prepended to # spending scriptSig) bip112specialinput = self.send_generic_input_tx( self.nodes[0], self.coinbase_blocks) # 1 normal input bip113input = self.send_generic_input_tx( self.nodes[0], self.coinbase_blocks) self.nodes[0].setmocktime(self.last_block_time + 600) inputblockhash = self.nodes[0].generate(1)[ 0] # 1 block generated for inputs to be in chain at height 572 self.nodes[0].setmocktime(0) self.tip = int("0x" + inputblockhash, 0) self.tipheight += 1 self.last_block_time += 600 assert_equal( len(self.nodes[0].getblock(inputblockhash, True)["tx"]), 82 + 1) # 2 more version 4 blocks test_blocks = self.generate_blocks(2, 4) yield TestInstance(test_blocks, sync_every_block=False) # 5 # Not yet advanced to ACTIVE, height = 574 (will activate for block # 576, not 575) assert_equal( get_bip9_status(self.nodes[0], 'csv')['status'], 'locked_in') # Test both version 1 and version 2 transactions for all tests # BIP113 test transaction will be modified before each use to put in # appropriate block time bip113tx_v1 = self.create_transaction( self.nodes[0], bip113input, self.nodeaddress, Decimal("49.98")) bip113tx_v1.vin[0].nSequence = 0xFFFFFFFE bip113tx_v1.nVersion = 1 bip113tx_v2 = self.create_transaction( self.nodes[0], bip113input, self.nodeaddress, Decimal("49.98")) bip113tx_v2.vin[0].nSequence = 0xFFFFFFFE bip113tx_v2.nVersion = 2 # For BIP68 test all 16 relative sequence locktimes bip68txs_v1 = self.create_bip68txs(bip68inputs, 1) bip68txs_v2 = self.create_bip68txs(bip68inputs, 2) # For BIP112 test: # 16 relative sequence locktimes of 10 against 10 OP_CSV OP_DROP inputs bip112txs_vary_nSequence_v1 = self.create_bip112txs( bip112basicinputs[0], False, 1) bip112txs_vary_nSequence_v2 = self.create_bip112txs( bip112basicinputs[0], False, 2) # 16 relative sequence 
locktimes of 9 against 10 OP_CSV OP_DROP inputs bip112txs_vary_nSequence_9_v1 = self.create_bip112txs( bip112basicinputs[1], False, 1, -1) bip112txs_vary_nSequence_9_v2 = self.create_bip112txs( bip112basicinputs[1], False, 2, -1) # sequence lock time of 10 against 16 (relative_lock_time) OP_CSV # OP_DROP inputs bip112txs_vary_OP_CSV_v1 = self.create_bip112txs( bip112diverseinputs[0], True, 1) bip112txs_vary_OP_CSV_v2 = self.create_bip112txs( bip112diverseinputs[0], True, 2) # sequence lock time of 9 against 16 (relative_lock_time) OP_CSV # OP_DROP inputs bip112txs_vary_OP_CSV_9_v1 = self.create_bip112txs( bip112diverseinputs[1], True, 1, -1) bip112txs_vary_OP_CSV_9_v2 = self.create_bip112txs( bip112diverseinputs[1], True, 2, -1) # -1 OP_CSV OP_DROP input bip112tx_special_v1 = self.create_bip112special(bip112specialinput, 1) bip112tx_special_v2 = self.create_bip112special(bip112specialinput, 2) # TESTING ### # # Before Soft Forks Activate ### # # All txs should pass # Version 1 txs ### success_txs = [] # add BIP113 tx and -1 CSV tx bip113tx_v1.nLockTime = self.last_block_time - 600 * \ 5 # = MTP of prior block (not <) but < time put on current block bip113signed1 = self.sign_transaction(self.nodes[0], bip113tx_v1) success_txs.append(bip113signed1) success_txs.append(bip112tx_special_v1) # add BIP 68 txs success_txs.extend(all_rlt_txs(bip68txs_v1)) # add BIP 112 with seq=10 txs success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_v1)) success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_v1)) # try BIP 112 with seq=9 txs success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v1)) success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_9_v1)) yield TestInstance([[self.create_test_block(success_txs), True]]) # 6 self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Version 2 txs ### success_txs = [] # add BIP113 tx and -1 CSV tx bip113tx_v2.nLockTime = self.last_block_time - 600 * \ 5 # = MTP of prior block (not <) but < time put on current block bip113signed2 = self.sign_transaction(self.nodes[0], bip113tx_v2) success_txs.append(bip113signed2) success_txs.append(bip112tx_special_v2) # add BIP 68 txs success_txs.extend(all_rlt_txs(bip68txs_v2)) # add BIP 112 with seq=10 txs success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_v2)) success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_v2)) # try BIP 112 with seq=9 txs success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v2)) success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_9_v2)) yield TestInstance([[self.create_test_block(success_txs), True]]) # 7 self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # 1 more version 4 block to get us to height 575 so the fork should now # be active for the next block test_blocks = self.generate_blocks(1, 4) yield TestInstance(test_blocks, sync_every_block=False) # 8 assert_equal(get_bip9_status(self.nodes[0], 'csv')['status'], 'active') # # After Soft Forks Activate ### # # BIP 113 ### # BIP 113 tests should now fail regardless of version number if # nLockTime isn't satisfied by new rules bip113tx_v1.nLockTime = self.last_block_time - 600 * \ 5 # = MTP of prior block (not <) but < time put on current block bip113signed1 = self.sign_transaction(self.nodes[0], bip113tx_v1) bip113tx_v2.nLockTime = self.last_block_time - 600 * \ 5 # = MTP of prior block (not <) but < time put on current block bip113signed2 = self.sign_transaction(self.nodes[0], bip113tx_v2) for bip113tx in [bip113signed1, bip113signed2]: # 9,10 yield TestInstance([[self.create_test_block([bip113tx]), False]]) # 
BIP 113 tests should now pass if the locktime is < MTP bip113tx_v1.nLockTime = self.last_block_time - \ 600 * 5 - 1 # < MTP of prior block bip113signed1 = self.sign_transaction(self.nodes[0], bip113tx_v1) bip113tx_v2.nLockTime = self.last_block_time - \ 600 * 5 - 1 # < MTP of prior block bip113signed2 = self.sign_transaction(self.nodes[0], bip113tx_v2) for bip113tx in [bip113signed1, bip113signed2]: # 11,12 yield TestInstance([[self.create_test_block([bip113tx]), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Next block height = 580 after 4 blocks of random version test_blocks = self.generate_blocks(4, 1234) yield TestInstance(test_blocks, sync_every_block=False) # 13 # BIP 68 ### # Version 1 txs ### # All still pass success_txs = [] success_txs.extend(all_rlt_txs(bip68txs_v1)) yield TestInstance([[self.create_test_block(success_txs), True]]) # 14 self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Version 2 txs ### bip68success_txs = [] # All txs with SEQUENCE_LOCKTIME_DISABLE_FLAG set pass for b25 in range(2): for b22 in range(2): for b18 in range(2): bip68success_txs.append(bip68txs_v2[1][b25][b22][b18]) # 15 yield TestInstance([[self.create_test_block(bip68success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # All txs without flag fail as we are at delta height = 8 < 10 and # delta time = 8 * 600 < 10 * 512 bip68timetxs = [] for b25 in range(2): for b18 in range(2): bip68timetxs.append(bip68txs_v2[0][b25][1][b18]) for tx in bip68timetxs: # 16 - 19 yield TestInstance([[self.create_test_block([tx]), False]]) bip68heighttxs = [] for b25 in range(2): for b18 in range(2): bip68heighttxs.append(bip68txs_v2[0][b25][0][b18]) for tx in bip68heighttxs: # 20 - 23 yield TestInstance([[self.create_test_block([tx]), False]]) # Advance one block to 581 test_blocks = self.generate_blocks(1, 1234) yield TestInstance(test_blocks, sync_every_block=False) # 24 # Height txs should fail and time txs should now pass 9 * 600 > 10 * # 512 bip68success_txs.extend(bip68timetxs) # 25 yield TestInstance([[self.create_test_block(bip68success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) for tx in bip68heighttxs: # 26 - 29 yield TestInstance([[self.create_test_block([tx]), False]]) # Advance one block to 582 test_blocks = self.generate_blocks(1, 1234) yield TestInstance(test_blocks, sync_every_block=False) # 30 # All BIP 68 txs should pass bip68success_txs.extend(bip68heighttxs) # 31 yield TestInstance([[self.create_test_block(bip68success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # BIP 112 ### # Version 1 txs ### # -1 OP_CSV tx should fail # 32 yield TestInstance([[self.create_test_block([bip112tx_special_v1]), False]]) # If SEQUENCE_LOCKTIME_DISABLE_FLAG is set in argument to OP_CSV, # version 1 txs should still pass success_txs = [] for b25 in range(2): for b22 in range(2): for b18 in range(2): success_txs.append( bip112txs_vary_OP_CSV_v1[1][b25][b22][b18]) success_txs.append( bip112txs_vary_OP_CSV_9_v1[1][b25][b22][b18]) yield TestInstance([[self.create_test_block(success_txs), True]]) # 33 self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # If SEQUENCE_LOCKTIME_DISABLE_FLAG is unset in argument to OP_CSV, # version 1 txs should now fail fail_txs = [] fail_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_v1)) fail_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v1)) for b25 in range(2): for b22 in range(2): for b18 in range(2): 
fail_txs.append(bip112txs_vary_OP_CSV_v1[0][b25][b22][b18]) fail_txs.append( bip112txs_vary_OP_CSV_9_v1[0][b25][b22][b18]) for tx in fail_txs: # 34 - 81 yield TestInstance([[self.create_test_block([tx]), False]]) # Version 2 txs ### # -1 OP_CSV tx should fail # 82 yield TestInstance([[self.create_test_block([bip112tx_special_v2]), False]]) # If SEQUENCE_LOCKTIME_DISABLE_FLAG is set in argument to OP_CSV, # version 2 txs should pass (all sequence locks are met) success_txs = [] for b25 in range(2): for b22 in range(2): for b18 in range(2): success_txs.append(bip112txs_vary_OP_CSV_v2[ 1][b25][b22][b18]) # 8/16 of vary_OP_CSV success_txs.append(bip112txs_vary_OP_CSV_9_v2[ 1][b25][b22][b18]) # 8/16 of vary_OP_CSV_9 yield TestInstance([[self.create_test_block(success_txs), True]]) # 83 self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # SEQUENCE_LOCKTIME_DISABLE_FLAG is unset in argument to OP_CSV for all remaining txs ## # All txs with nSequence 9 should fail either due to earlier mismatch # or failing the CSV check fail_txs = [] fail_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v2)) # 16/16 of vary_nSequence_9 for b25 in range(2): for b22 in range(2): for b18 in range(2): fail_txs.append(bip112txs_vary_OP_CSV_9_v2[ 0][b25][b22][b18]) # 16/16 of vary_OP_CSV_9 for tx in fail_txs: # 84 - 107 yield TestInstance([[self.create_test_block([tx]), False]]) # If SEQUENCE_LOCKTIME_DISABLE_FLAG is set in nSequence, tx should fail fail_txs = [] for b25 in range(2): for b22 in range(2): for b18 in range(2): fail_txs.append(bip112txs_vary_nSequence_v2[ 1][b25][b22][b18]) # 8/16 of vary_nSequence for tx in fail_txs: # 108-115 yield TestInstance([[self.create_test_block([tx]), False]]) # If sequencelock types mismatch, tx should fail fail_txs = [] for b25 in range(2): for b18 in range(2): fail_txs.append(bip112txs_vary_nSequence_v2[ 0][b25][1][b18]) # 12/16 of vary_nSequence fail_txs.append(bip112txs_vary_OP_CSV_v2[ 0][b25][1][b18]) # 12/16 of vary_OP_CSV for tx in fail_txs: # 116-123 yield TestInstance([[self.create_test_block([tx]), False]]) # Remaining txs should pass, just test masking works properly success_txs = [] for b25 in range(2): for b18 in range(2): success_txs.append(bip112txs_vary_nSequence_v2[ 0][b25][0][b18]) # 16/16 of vary_nSequence success_txs.append(bip112txs_vary_OP_CSV_v2[ 0][b25][0][b18]) # 16/16 of vary_OP_CSV # 124 yield TestInstance([[self.create_test_block(success_txs), True]]) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Additional test, of checking that comparison of two time types works # properly time_txs = [] for b25 in range(2): for b18 in range(2): tx = bip112txs_vary_OP_CSV_v2[0][b25][1][b18] tx.vin[0].nSequence = base_relative_locktime | seq_type_flag signtx = self.sign_transaction(self.nodes[0], tx) time_txs.append(signtx) yield TestInstance([[self.create_test_block(time_txs), True]]) # 125 self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Missing aspects of test # Testing empty stack fails if __name__ == '__main__': BIP68_112_113Test().main() diff --git a/test/functional/bip68-sequence.py b/test/functional/bip68-sequence.py index a24d6f9a5..fa9492cb0 100755 --- a/test/functional/bip68-sequence.py +++ b/test/functional/bip68-sequence.py @@ -1,451 +1,448 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
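# A small illustrative sketch (helper name made up, not used by any test) of
# the BIP68 nSequence layout that the SEQUENCE_LOCKTIME_* constants below
# encode; it just spells out the bit layout these tests exercise.
def describe_bip68_sequence(n_sequence):
    """Roughly classify a BIP68 nSequence value (illustration only)."""
    if n_sequence & (1 << 31):            # SEQUENCE_LOCKTIME_DISABLE_FLAG
        return "relative lock-time disabled"
    value = n_sequence & 0x0000ffff       # SEQUENCE_LOCKTIME_MASK
    if n_sequence & (1 << 22):            # SEQUENCE_LOCKTIME_TYPE_FLAG
        # time-based lock: units of 2**9 = 512 seconds
        return "time lock of %d * 512 seconds" % value
    return "height lock of %d blocks" % value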
# # Test BIP68 implementation # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.script import * from test_framework.mininode import * from test_framework.blocktools import * SEQUENCE_LOCKTIME_DISABLE_FLAG = (1 << 31) SEQUENCE_LOCKTIME_TYPE_FLAG = (1 << 22) # this means use time (0 means height) SEQUENCE_LOCKTIME_GRANULARITY = 9 # this is a bit-shift SEQUENCE_LOCKTIME_MASK = 0x0000ffff # RPC error for non-BIP68 final transactions NOT_FINAL_ERROR = "64: non-BIP68-final" class BIP68Test(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 - self.setup_clean_chain = False self.extra_args = [["-blockprioritypercentage=0"], - ["-acceptnonstdtxn=0", "-blockprioritypercentage=0"]] + ["-blockprioritypercentage=0", "-acceptnonstdtxn=0"]] def run_test(self): self.relayfee = self.nodes[0].getnetworkinfo()["relayfee"] # Generate some coins self.nodes[0].generate(110) self.log.info("Running test disable flag") self.test_disable_flag() self.log.info("Running test sequence-lock-confirmed-inputs") self.test_sequence_lock_confirmed_inputs() self.log.info("Running test sequence-lock-unconfirmed-inputs") self.test_sequence_lock_unconfirmed_inputs() self.log.info( "Running test BIP68 not consensus before versionbits activation") self.test_bip68_not_consensus() self.log.info("Verifying nVersion=2 transactions aren't standard") self.test_version2_relay(before_activation=True) self.log.info("Activating BIP68 (and 112/113)") self.activateCSV() self.log.info("Verifying nVersion=2 transactions are now standard") self.test_version2_relay(before_activation=False) self.log.info("Passed") # Test that BIP68 is not in effect if tx version is 1, or if # the first sequence bit is set. def test_disable_flag(self): # Create some unconfirmed inputs new_addr = self.nodes[0].getnewaddress() self.nodes[0].sendtoaddress(new_addr, 2) # send 2 BTC utxos = self.nodes[0].listunspent(0, 0) assert(len(utxos) > 0) utxo = utxos[0] tx1 = CTransaction() value = int(satoshi_round(utxo["amount"] - self.relayfee) * COIN) # Check that the disable flag disables relative locktime. # If sequence locks were used, this would require 1 block for the # input to mature. sequence_value = SEQUENCE_LOCKTIME_DISABLE_FLAG | 1 tx1.vin = [ CTxIn(COutPoint(int(utxo["txid"], 16), utxo["vout"]), nSequence=sequence_value)] tx1.vout = [CTxOut(value, CScript([b'a']))] tx1_signed = self.nodes[0].signrawtransaction( ToHex(tx1), None, None, "ALL|FORKID")["hex"] tx1_id = self.nodes[0].sendrawtransaction(tx1_signed) tx1_id = int(tx1_id, 16) # This transaction will enable sequence-locks, so this transaction should # fail tx2 = CTransaction() tx2.nVersion = 2 sequence_value = sequence_value & 0x7fffffff tx2.vin = [CTxIn(COutPoint(tx1_id, 0), nSequence=sequence_value)] tx2.vout = [CTxOut(int(value - self.relayfee * COIN), CScript([b'a']))] tx2.rehash() assert_raises_jsonrpc(-26, NOT_FINAL_ERROR, self.nodes[0].sendrawtransaction, ToHex(tx2)) # Setting the version back down to 1 should disable the sequence lock, # so this should be accepted. tx2.nVersion = 1 self.nodes[0].sendrawtransaction(ToHex(tx2)) # Calculate the median time past of a prior block ("confirmations" before # the current tip). 
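    # ("Median time past" is the BIP113 notion used throughout these tests:
    # the median of the timestamps of the previous 11 blocks, exposed by
    # bitcoind as the "mediantime" field of getblockheader, which is what the
    # helper below reads.)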
def get_median_time_past(self, confirmations): block_hash = self.nodes[0].getblockhash( self.nodes[0].getblockcount() - confirmations) return self.nodes[0].getblockheader(block_hash)["mediantime"] # Test that sequence locks are respected for transactions spending # confirmed inputs. def test_sequence_lock_confirmed_inputs(self): # Create lots of confirmed utxos, and use them to generate lots of random # transactions. max_outputs = 50 addresses = [] while len(addresses) < max_outputs: addresses.append(self.nodes[0].getnewaddress()) while len(self.nodes[0].listunspent()) < 200: import random random.shuffle(addresses) num_outputs = random.randint(1, max_outputs) outputs = {} for i in range(num_outputs): outputs[addresses[i]] = random.randint(1, 20) * 0.01 self.nodes[0].sendmany("", outputs) self.nodes[0].generate(1) utxos = self.nodes[0].listunspent() # Try creating a lot of random transactions. # Each time, choose a random number of inputs, and randomly set # some of those inputs to be sequence locked (and randomly choose # between height/time locking). Small random chance of making the locks # all pass. for i in range(400): # Randomly choose up to 10 inputs num_inputs = random.randint(1, 10) random.shuffle(utxos) # Track whether any sequence locks used should fail should_pass = True # Track whether this transaction was built with sequence locks using_sequence_locks = False tx = CTransaction() tx.nVersion = 2 value = 0 for j in range(num_inputs): sequence_value = 0xfffffffe # this disables sequence locks # 50% chance we enable sequence locks if random.randint(0, 1): using_sequence_locks = True # 10% of the time, make the input sequence value pass input_will_pass = (random.randint(1, 10) == 1) sequence_value = utxos[j]["confirmations"] if not input_will_pass: sequence_value += 1 should_pass = False # Figure out what the median-time-past was for the confirmed input # Note that if an input has N confirmations, we're going back N blocks # from the tip so that we're looking up MTP of the block # PRIOR to the one the input appears in, as per the BIP68 # spec. 
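                # (Worked example of the shift arithmetic used below:
                # time-based locks count in units of
                # 2**SEQUENCE_LOCKTIME_GRANULARITY = 512 seconds, so a lock
                # value of 10 requires roughly 10 * 512 = 5120 seconds of
                # input age, whereas a height lock of 10 requires 10
                # confirmations.)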
orig_time = self.get_median_time_past( utxos[j]["confirmations"]) cur_time = self.get_median_time_past(0) # MTP of the tip # can only timelock this input if it's not too old -- # otherwise use height can_time_lock = True if ((cur_time - orig_time) >> SEQUENCE_LOCKTIME_GRANULARITY) >= SEQUENCE_LOCKTIME_MASK: can_time_lock = False # if time-lockable, then 50% chance we make this a time # lock if random.randint(0, 1) and can_time_lock: # Find first time-lock value that fails, or latest one # that succeeds time_delta = sequence_value << SEQUENCE_LOCKTIME_GRANULARITY if input_will_pass and time_delta > cur_time - orig_time: sequence_value = ( (cur_time - orig_time) >> SEQUENCE_LOCKTIME_GRANULARITY) elif (not input_will_pass and time_delta <= cur_time - orig_time): sequence_value = ( (cur_time - orig_time) >> SEQUENCE_LOCKTIME_GRANULARITY) + 1 sequence_value |= SEQUENCE_LOCKTIME_TYPE_FLAG tx.vin.append( CTxIn(COutPoint(int(utxos[j]["txid"], 16), utxos[j]["vout"]), nSequence=sequence_value)) value += utxos[j]["amount"] * COIN # Overestimate the size of the tx - signatures should be less than # 120 bytes, and leave 50 for the output tx_size = len(ToHex(tx)) // 2 + 120 * num_inputs + 50 tx.vout.append( CTxOut(int(value - self.relayfee * tx_size * COIN / 1000), CScript([b'a']))) rawtx = self.nodes[0].signrawtransaction( ToHex(tx), None, None, "ALL|FORKID")["hex"] if (using_sequence_locks and not should_pass): # This transaction should be rejected assert_raises_jsonrpc(-26, NOT_FINAL_ERROR, self.nodes[0].sendrawtransaction, rawtx) else: # This raw transaction should be accepted self.nodes[0].sendrawtransaction(rawtx) utxos = self.nodes[0].listunspent() # Test that sequence locks on unconfirmed inputs must have nSequence # height or time of 0 to be accepted. # Then test that BIP68-invalid transactions are removed from the mempool # after a reorg. def test_sequence_lock_unconfirmed_inputs(self): # Store height so we can easily reset the chain at the end of the test cur_height = self.nodes[0].getblockcount() # Create a mempool tx. txid = self.nodes[0].sendtoaddress(self.nodes[0].getnewaddress(), 2) tx1 = FromHex(CTransaction(), self.nodes[0].getrawtransaction(txid)) tx1.rehash() # Anyone-can-spend mempool tx. # Sequence lock of 0 should pass. tx2 = CTransaction() tx2.nVersion = 2 tx2.vin = [CTxIn(COutPoint(tx1.sha256, 0), nSequence=0)] tx2.vout = [ CTxOut(int(tx1.vout[0].nValue - self.relayfee * COIN), CScript([b'a']))] tx2_raw = self.nodes[0].signrawtransaction( ToHex(tx2), None, None, "ALL|FORKID")["hex"] tx2 = FromHex(tx2, tx2_raw) tx2.rehash() self.nodes[0].sendrawtransaction(tx2_raw) # Create a spend of the 0th output of orig_tx with a sequence lock # of 1, and test what happens when submitting. 
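        # (The expected behaviour, spelled out: a non-zero BIP68 lock can
        # never be satisfied while the parent is still unconfirmed, so if
        # orig_tx is in the mempool the spend must be rejected with
        # NOT_FINAL_ERROR; once orig_tx is mined, one confirmation satisfies
        # a lock of 1 and the spend is accepted.)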
# orig_tx.vout[0] must be an anyone-can-spend output def test_nonzero_locks(orig_tx, node, relayfee, use_height_lock): sequence_value = 1 if not use_height_lock: sequence_value |= SEQUENCE_LOCKTIME_TYPE_FLAG tx = CTransaction() tx.nVersion = 2 tx.vin = [ CTxIn(COutPoint(orig_tx.sha256, 0), nSequence=sequence_value)] tx.vout = [ CTxOut(int(orig_tx.vout[0].nValue - relayfee * COIN), CScript([b'a']))] tx.rehash() if (orig_tx.hash in node.getrawmempool()): # sendrawtransaction should fail if the tx is in the mempool assert_raises_jsonrpc(-26, NOT_FINAL_ERROR, node.sendrawtransaction, ToHex(tx)) else: # sendrawtransaction should succeed if the tx is not in the mempool node.sendrawtransaction(ToHex(tx)) return tx test_nonzero_locks( tx2, self.nodes[0], self.relayfee, use_height_lock=True) test_nonzero_locks( tx2, self.nodes[0], self.relayfee, use_height_lock=False) # Now mine some blocks, but make sure tx2 doesn't get mined. # Use prioritisetransaction to lower the effective feerate to 0 self.nodes[0].prioritisetransaction( tx2.hash, -1e15, int(-self.relayfee * COIN)) cur_time = int(time.time()) for i in range(10): self.nodes[0].setmocktime(cur_time + 600) self.nodes[0].generate(1) cur_time += 600 assert(tx2.hash in self.nodes[0].getrawmempool()) test_nonzero_locks( tx2, self.nodes[0], self.relayfee, use_height_lock=True) test_nonzero_locks( tx2, self.nodes[0], self.relayfee, use_height_lock=False) # Mine tx2, and then try again self.nodes[0].prioritisetransaction( tx2.hash, 1e15, int(self.relayfee * COIN)) # Advance the time on the node so that we can test timelocks self.nodes[0].setmocktime(cur_time + 600) self.nodes[0].generate(1) assert(tx2.hash not in self.nodes[0].getrawmempool()) # Now that tx2 is not in the mempool, a sequence locked spend should # succeed tx3 = test_nonzero_locks( tx2, self.nodes[0], self.relayfee, use_height_lock=False) assert(tx3.hash in self.nodes[0].getrawmempool()) self.nodes[0].generate(1) assert(tx3.hash not in self.nodes[0].getrawmempool()) # One more test, this time using height locks tx4 = test_nonzero_locks( tx3, self.nodes[0], self.relayfee, use_height_lock=True) assert(tx4.hash in self.nodes[0].getrawmempool()) # Now try combining confirmed and unconfirmed inputs tx5 = test_nonzero_locks( tx4, self.nodes[0], self.relayfee, use_height_lock=True) assert(tx5.hash not in self.nodes[0].getrawmempool()) utxos = self.nodes[0].listunspent() tx5.vin.append( CTxIn(COutPoint(int(utxos[0]["txid"], 16), utxos[0]["vout"]), nSequence=1)) tx5.vout[0].nValue += int(utxos[0]["amount"] * COIN) raw_tx5 = self.nodes[0].signrawtransaction( ToHex(tx5), None, None, "ALL|FORKID")["hex"] assert_raises_jsonrpc(-26, NOT_FINAL_ERROR, self.nodes[0].sendrawtransaction, raw_tx5) # Test mempool-BIP68 consistency after reorg # # State of the transactions in the last blocks: # ... -> [ tx2 ] -> [ tx3 ] # tip-1 tip # And currently tx4 is in the mempool. # # If we invalidate the tip, tx3 should get added to the mempool, causing # tx4 to be removed (fails sequence-lock). self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) assert(tx4.hash not in self.nodes[0].getrawmempool()) assert(tx3.hash in self.nodes[0].getrawmempool()) # Now mine 2 empty blocks to reorg out the current tip (labeled tip-1 in # diagram above). # This would cause tx2 to be added back to the mempool, which in turn causes # tx3 to be removed. 
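        # (Why tx4 fails: it spends tx3 with a non-zero relative lock, and
        # such a lock cannot be satisfied against an unconfirmed parent, so
        # as soon as tx3 re-enters the mempool tx4 has to be evicted; the
        # same reasoning removes tx3 once tx2 is disconnected by the reorg
        # below.)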
tip = int(self.nodes[0].getblockhash( self.nodes[0].getblockcount() - 1), 16) height = self.nodes[0].getblockcount() for i in range(2): block = create_block(tip, create_coinbase(height), cur_time) block.nVersion = 3 block.rehash() block.solve() tip = block.sha256 height += 1 self.nodes[0].submitblock(ToHex(block)) cur_time += 1 mempool = self.nodes[0].getrawmempool() assert(tx3.hash not in mempool) assert(tx2.hash in mempool) # Reset the chain and get rid of the mocktimed-blocks self.nodes[0].setmocktime(0) self.nodes[0].invalidateblock( self.nodes[0].getblockhash(cur_height + 1)) self.nodes[0].generate(10) # Make sure that BIP68 isn't being used to validate blocks, prior to # versionbits activation. If more blocks are mined prior to this test # being run, then it's possible the test has activated the soft fork, and # this test should be moved to run earlier, or deleted. def test_bip68_not_consensus(self): assert(get_bip9_status(self.nodes[0], 'csv')['status'] != 'active') txid = self.nodes[0].sendtoaddress(self.nodes[0].getnewaddress(), 2) tx1 = FromHex(CTransaction(), self.nodes[0].getrawtransaction(txid)) tx1.rehash() # Make an anyone-can-spend transaction tx2 = CTransaction() tx2.nVersion = 1 tx2.vin = [CTxIn(COutPoint(tx1.sha256, 0), nSequence=0)] tx2.vout = [ CTxOut(int(tx1.vout[0].nValue - self.relayfee * COIN), CScript([b'a']))] # sign tx2 tx2_raw = self.nodes[0].signrawtransaction( ToHex(tx2), None, None, "ALL|FORKID")["hex"] tx2 = FromHex(tx2, tx2_raw) tx2.rehash() self.nodes[0].sendrawtransaction(ToHex(tx2)) # Now make an invalid spend of tx2 according to BIP68 sequence_value = 100 # 100 block relative locktime tx3 = CTransaction() tx3.nVersion = 2 tx3.vin = [CTxIn(COutPoint(tx2.sha256, 0), nSequence=sequence_value)] tx3.vout = [ CTxOut(int(tx2.vout[0].nValue - self.relayfee * COIN), CScript([b'a']))] tx3.rehash() assert_raises_jsonrpc(-26, NOT_FINAL_ERROR, self.nodes[0].sendrawtransaction, ToHex(tx3)) # make a block that violates bip68; ensure that the tip updates tip = int(self.nodes[0].getbestblockhash(), 16) block = create_block( tip, create_coinbase(self.nodes[0].getblockcount() + 1)) block.nVersion = 3 block.vtx.extend([tx1, tx2, tx3]) block.hashMerkleRoot = block.calc_merkle_root() block.rehash() block.solve() self.nodes[0].submitblock(ToHex(block)) assert_equal(self.nodes[0].getbestblockhash(), block.hash) def activateCSV(self): # activation should happen at block height 432 (3 periods) min_activation_height = 432 height = self.nodes[0].getblockcount() assert(height < 432) self.nodes[0].generate(432 - height) assert(get_bip9_status(self.nodes[0], 'csv')['status'] == 'active') sync_blocks(self.nodes) # Use self.nodes[1] to test standardness relay policy def test_version2_relay(self, before_activation): inputs = [] outputs = {self.nodes[1].getnewaddress(): 1.0} rawtx = self.nodes[1].createrawtransaction(inputs, outputs) rawtxfund = self.nodes[1].fundrawtransaction(rawtx)['hex'] tx = FromHex(CTransaction(), rawtxfund) tx.nVersion = 2 tx_signed = self.nodes[1].signrawtransaction( ToHex(tx), None, None, "ALL|FORKID")["hex"] try: tx_id = self.nodes[1].sendrawtransaction(tx_signed) assert(before_activation == False) except: assert(before_activation) if __name__ == '__main__': BIP68Test().main() diff --git a/test/functional/bip9-softforks.py b/test/functional/bip9-softforks.py index 9944d7dd9..8e06f6115 100755 --- a/test/functional/bip9-softforks.py +++ b/test/functional/bip9-softforks.py @@ -1,258 +1,258 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core 
developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
'''
This test is meant to exercise BIP forks
Connect to a single node.
regtest lock-in with 108/144 block signalling
activation after a further 144 blocks
mine 2 blocks and save coinbases for later use
mine 141 blocks to transition from DEFINED to STARTED
mine 100 blocks signalling readiness and 44 not in order to fail to change state this period
mine 108 blocks signalling readiness and 36 blocks not signalling readiness (STARTED->LOCKED_IN)
mine a further 143 blocks (LOCKED_IN)
test that enforcement has not triggered (which triggers ACTIVE)
test that enforcement has triggered
'''

from test_framework.test_framework import ComparisonTestFramework
from test_framework.util import *
from test_framework.mininode import CTransaction, NetworkThread
from test_framework.blocktools import create_coinbase, create_block
from test_framework.comptool import TestInstance, TestManager
from test_framework.script import CScript, OP_1NEGATE, OP_CHECKSEQUENCEVERIFY, OP_DROP
from io import BytesIO
import time
import shutil
import itertools


class BIP9SoftForksTest(ComparisonTestFramework):
-
-    def __init__(self):
-        super().__init__()
+    def set_test_params(self):
        self.num_nodes = 1
        self.extra_args = [['-whitelist=127.0.0.1']]
+        self.setup_clean_chain = True

    def run_test(self):
        self.test = TestManager(self, self.options.tmpdir)
        self.test.add_all_connections(self.nodes)
        NetworkThread().start()  # Start up network handling in another thread
        self.test.run()

    def create_transaction(self, node, coinbase, to_address, amount):
        from_txid = node.getblock(coinbase)['tx'][0]
        inputs = [{"txid": from_txid, "vout": 0}]
        outputs = {to_address: amount}
        rawtx = node.createrawtransaction(inputs, outputs)
        tx = CTransaction()
        f = BytesIO(hex_str_to_bytes(rawtx))
        tx.deserialize(f)
        tx.nVersion = 2
        return tx

    def sign_transaction(self, node, tx):
        signresult = node.signrawtransaction(
            bytes_to_hex_str(tx.serialize()), None, None, "ALL|FORKID")
        tx = CTransaction()
        f = BytesIO(hex_str_to_bytes(signresult['hex']))
        tx.deserialize(f)
        return tx

    # Use test_blocks=None rather than a mutable [] default: a shared default
    # list would silently accumulate blocks across the generate_blocks calls
    # made by test_BIP below.
    def generate_blocks(self, number, version, test_blocks=None):
        if test_blocks is None:
            test_blocks = []
        for i in range(number):
            block = create_block(
                self.tip, create_coinbase(self.height), self.last_block_time + 1)
            block.nVersion = version
            block.rehash()
            block.solve()
            test_blocks.append([block, True])
            self.last_block_time += 1
            self.tip = block.sha256
            self.height += 1
        return test_blocks

    def get_bip9_status(self, key):
        info = self.nodes[0].getblockchaininfo()
        return info['bip9_softforks'][key]

    def test_BIP(self, bipName, activated_version, invalidate,
                 invalidatePostSignature, bitno):
        assert_equal(self.get_bip9_status(bipName)['status'], 'defined')
        assert_equal(self.get_bip9_status(bipName)['since'], 0)

        # generate some coins for later
        self.coinbase_blocks = self.nodes[0].generate(2)
        self.height = 3  # height of the next block to build
        self.tip = int("0x" + self.nodes[0].getbestblockhash(), 0)
        self.nodeaddress = self.nodes[0].getnewaddress()
        self.last_block_time = int(time.time())

        assert_equal(self.get_bip9_status(bipName)['status'], 'defined')
        assert_equal(self.get_bip9_status(bipName)['since'], 0)
        tmpl = self.nodes[0].getblocktemplate({})
        assert(bipName not in tmpl['rules'])
        assert(bipName not in tmpl['vbavailable'])
        assert_equal(tmpl['vbrequired'], 0)
        assert_equal(tmpl['version'], 0x20000000)
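        # A minimal sketch (illustration only, following BIP9; the helper
        # name is made up) of what "signalling" means for the block versions
        # used below: the top three version bits must be 001 and the
        # deployment's bit must be set.
        def bip9_signals(version, bitno):
            return (version >> 29) == 0b001 and bool(version & (1 << bitno))
        # e.g. bip9_signals(0x20000001, 0) -> True   (activated_version here)
        #      bip9_signals(4, 0)          -> False  (plain version 4 blocks)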
        # Test 1
        # Advance from DEFINED to STARTED
        test_blocks = self.generate_blocks(141, 4)
        yield TestInstance(test_blocks, sync_every_block=False)

        assert_equal(self.get_bip9_status(bipName)['status'], 'started')
        assert_equal(self.get_bip9_status(bipName)['since'], 144)
        tmpl = self.nodes[0].getblocktemplate({})
        assert(bipName not in tmpl['rules'])
        assert_equal(tmpl['vbavailable'][bipName], bitno)
        assert_equal(tmpl['vbrequired'], 0)
        assert(tmpl['version'] & activated_version)

        # Test 2
        # Fail to achieve LOCKED_IN: only 100 out of 144 blocks signal the
        # deployment bit, using a variety of bits to simulate multiple
        # parallel softforks
        test_blocks = self.generate_blocks(
            50, activated_version)  # 0x20000001 (signalling ready)
        test_blocks = self.generate_blocks(
            20, 4, test_blocks)  # 0x00000004 (signalling not)
        test_blocks = self.generate_blocks(
            50, activated_version, test_blocks)  # 0x20000001 (signalling ready)
        test_blocks = self.generate_blocks(
            24, 4, test_blocks)  # 0x00000004 (signalling not)
        yield TestInstance(test_blocks, sync_every_block=False)

        assert_equal(self.get_bip9_status(bipName)['status'], 'started')
        assert_equal(self.get_bip9_status(bipName)['since'], 144)
        tmpl = self.nodes[0].getblocktemplate({})
        assert(bipName not in tmpl['rules'])
        assert_equal(tmpl['vbavailable'][bipName], bitno)
        assert_equal(tmpl['vbrequired'], 0)
        assert(tmpl['version'] & activated_version)

        # Test 3
        # 108 out of 144 blocks signal the deployment bit to achieve
        # LOCKED_IN, using a variety of bits to simulate multiple parallel
        # softforks
        # 0x20000001 (signalling ready)
        test_blocks = self.generate_blocks(58, activated_version)
        # 0x00000004 (signalling not)
        test_blocks = self.generate_blocks(26, 4, test_blocks)
        # 0x20000001 (signalling ready)
        test_blocks = self.generate_blocks(50, activated_version, test_blocks)
        # 0x00000004 (signalling not)
        test_blocks = self.generate_blocks(10, 4, test_blocks)
        yield TestInstance(test_blocks, sync_every_block=False)

        assert_equal(self.get_bip9_status(bipName)['status'], 'locked_in')
        assert_equal(self.get_bip9_status(bipName)['since'], 432)
        tmpl = self.nodes[0].getblocktemplate({})
        assert(bipName not in tmpl['rules'])

        # Test 4
        # 143 more version 4 blocks (waiting period-1)
        test_blocks = self.generate_blocks(143, 4)
        yield TestInstance(test_blocks, sync_every_block=False)

        assert_equal(self.get_bip9_status(bipName)['status'], 'locked_in')
        assert_equal(self.get_bip9_status(bipName)['since'], 432)
        tmpl = self.nodes[0].getblocktemplate({})
        assert(bipName not in tmpl['rules'])

        # Test 5
        # Check that the new rule is enforced
        spendtx = self.create_transaction(self.nodes[0],
                                          self.coinbase_blocks[0], self.nodeaddress, 1.0)
        invalidate(spendtx)
        spendtx = self.sign_transaction(self.nodes[0], spendtx)
        spendtx.rehash()
        invalidatePostSignature(spendtx)
        spendtx.rehash()
        block = create_block(
            self.tip, create_coinbase(self.height), self.last_block_time + 1)
        block.nVersion = activated_version
        block.vtx.append(spendtx)
        block.hashMerkleRoot = block.calc_merkle_root()
        block.rehash()
        block.solve()

        self.last_block_time += 1
        self.tip = block.sha256
        self.height += 1
        yield TestInstance([[block, True]])

        assert_equal(self.get_bip9_status(bipName)['status'], 'active')
        assert_equal(self.get_bip9_status(bipName)['since'], 576)
        tmpl = self.nodes[0].getblocktemplate({})
        assert(bipName in tmpl['rules'])
        assert(bipName not in tmpl['vbavailable'])
        assert_equal(tmpl['vbrequired'], 0)
        assert(not (tmpl['version'] & (1 << bitno)))
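        # (An aside on the invalidate/invalidatePostSignature hooks used in
        # these tests: the callers pass sequence_lock_invalidate,
        # mtp_invalidate or csv_invalidate, defined toward the end of this
        # class. csv_invalidate, for instance, prepends OP_1NEGATE
        # OP_CHECKSEQUENCEVERIFY OP_DROP to the scriptSig, and BIP112
        # requires OP_CHECKSEQUENCEVERIFY to fail on a negative operand
        # before it even compares anything against the input's nSequence.)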
        # Test 6
        # Check that the new sequence lock rules are enforced
        spendtx = self.create_transaction(self.nodes[0],
                                          self.coinbase_blocks[1], self.nodeaddress, 1.0)
        invalidate(spendtx)
        spendtx = self.sign_transaction(self.nodes[0], spendtx)
        spendtx.rehash()
        invalidatePostSignature(spendtx)
        spendtx.rehash()

        block = create_block(
            self.tip, create_coinbase(self.height), self.last_block_time + 1)
        block.nVersion = 5
        block.vtx.append(spendtx)
        block.hashMerkleRoot = block.calc_merkle_root()
        block.rehash()
        block.solve()
        self.last_block_time += 1
        yield TestInstance([[block, False]])

        # Restart all
        self.test.clear_all_connections()
        self.stop_nodes()
+        self.nodes = []
        shutil.rmtree(self.options.tmpdir + "/node0")
        self.setup_chain()
        self.setup_network()
        self.test.add_all_connections(self.nodes)
        NetworkThread().start()
        self.test.test_nodes[0].wait_for_verack()

    def get_tests(self):
        for test in itertools.chain(
                self.test_BIP(
                    'csv', 0x20000001, self.sequence_lock_invalidate, self.donothing, 0),
                self.test_BIP(
                    'csv', 0x20000001, self.mtp_invalidate, self.donothing, 0),
                self.test_BIP(
                    'csv', 0x20000001, self.donothing, self.csv_invalidate, 0)
        ):
            yield test

    def donothing(self, tx):
        return

    def csv_invalidate(self, tx):
        '''Modify the signature in vin 0 of the tx to fail CSV
        Prepends -1 CSV DROP in the scriptSig itself.
        '''
        tx.vin[0].scriptSig = CScript([OP_1NEGATE, OP_CHECKSEQUENCEVERIFY, OP_DROP] +
                                      list(CScript(tx.vin[0].scriptSig)))

    def sequence_lock_invalidate(self, tx):
        '''Modify the nSequence to make it fail once the sequence lock rule
        is activated (high timespan)
        '''
        tx.vin[0].nSequence = 0x00FFFFFF
        tx.nLockTime = 0

    def mtp_invalidate(self, tx):
        '''Modify the nLockTime to make it fail once the MTP rule is
        activated
        '''
        # Disable Sequence lock, Activate nLockTime
        tx.vin[0].nSequence = 0x90FFFFFF
        tx.nLockTime = self.last_block_time

if __name__ == '__main__':
    BIP9SoftForksTest().main()
diff --git a/test/functional/bipdersig-p2p.py b/test/functional/bipdersig-p2p.py
index 9a30f1679..56ebeca8a 100755
--- a/test/functional/bipdersig-p2p.py
+++ b/test/functional/bipdersig-p2p.py
@@ -1,147 +1,145 @@
#!/usr/bin/env python3
# Copyright (c) 2015-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test BIP66 (DER SIG).

Test that the DERSIG soft-fork activates at (regtest) height 1251.
"""

from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *
from test_framework.mininode import *
from test_framework.blocktools import create_coinbase, create_block
from test_framework.script import CScript
from io import BytesIO

DERSIG_HEIGHT = 1251

# Reject codes that we might receive in this test
REJECT_INVALID = 16
REJECT_OBSOLETE = 17
REJECT_NONSTANDARD = 64

# A canonical signature consists of:
# <30> <total len> <02> <len R> <R> <02> <len S> <S> <hashtype>


def unDERify(tx):
    """
    Make the signature in vin 0 of a tx non-DER-compliant,
    by adding padding after the S-value.
""" scriptSig = CScript(tx.vin[0].scriptSig) newscript = [] for i in scriptSig: if (len(newscript) == 0): newscript.append(i[0:-1] + b'\0' + i[-1:]) else: newscript.append(i) tx.vin[0].scriptSig = CScript(newscript) def create_transaction(node, coinbase, to_address, amount): from_txid = node.getblock(coinbase)['tx'][0] inputs = [{"txid": from_txid, "vout": 0}] outputs = {to_address: amount} rawtx = node.createrawtransaction(inputs, outputs) signresult = node.signrawtransaction(rawtx, None, None, "ALL|FORKID") tx = CTransaction() tx.deserialize(BytesIO(hex_str_to_bytes(signresult['hex']))) return tx class BIP66Test(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 self.extra_args = [ ['-promiscuousmempoolflags=1', '-whitelist=127.0.0.1']] self.setup_clean_chain = True def run_test(self): node0 = NodeConnCB() connections = [] connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], node0)) node0.add_connection(connections[0]) NetworkThread().start() # Start up network handling in another thread # wait_for_verack ensures that the P2P connection is fully up. node0.wait_for_verack() self.log.info("Mining %d blocks", DERSIG_HEIGHT - 1) self.coinbase_blocks = self.nodes[0].generate(DERSIG_HEIGHT - 1) self.nodeaddress = self.nodes[0].getnewaddress() self.log.info("Test that blocks must now be at least version 3") tip = self.nodes[0].getbestblockhash() block_time = self.nodes[0].getblockheader(tip)['mediantime'] + 1 block = create_block( int(tip, 16), create_coinbase(DERSIG_HEIGHT), block_time) block.nVersion = 2 block.rehash() block.solve() node0.send_and_ping(msg_block(block)) assert_equal(self.nodes[0].getbestblockhash(), tip) wait_until(lambda: "reject" in node0.last_message.keys(), lock=mininode_lock) with mininode_lock: assert_equal(node0.last_message["reject"].code, REJECT_OBSOLETE) assert_equal( node0.last_message["reject"].reason, b'bad-version(0x00000002)') assert_equal(node0.last_message["reject"].data, block.sha256) del node0.last_message["reject"] self.log.info( "Test that transactions with non-DER signatures cannot appear in a block") block.nVersion = 3 spendtx = create_transaction(self.nodes[0], self.coinbase_blocks[1], self.nodeaddress, 1.0) unDERify(spendtx) spendtx.rehash() # Now we verify that a block with this transaction is invalid. block.vtx.append(spendtx) block.hashMerkleRoot = block.calc_merkle_root() block.rehash() block.solve() node0.send_and_ping(msg_block(block)) assert_equal(self.nodes[0].getbestblockhash(), tip) wait_until(lambda: "reject" in node0.last_message.keys(), lock=mininode_lock) with mininode_lock: # We can receive different reject messages depending on whether # bitcoind is running with multiple script check threads. If script # check threads are not in use, then transaction script validation # happens sequentially, and bitcoind produces more specific reject # reasons. 
            assert node0.last_message["reject"].code in [
                REJECT_INVALID, REJECT_NONSTANDARD]
            assert_equal(node0.last_message["reject"].data, block.sha256)
            if node0.last_message["reject"].code == REJECT_INVALID:
                # Generic rejection when a block is invalid
                assert_equal(
                    node0.last_message["reject"].reason, b'blk-bad-inputs')
            else:
                assert b'Non-canonical DER signature' in node0.last_message["reject"].reason

        self.log.info(
            "Test that a version 3 block with a DERSIG-compliant transaction is accepted")
        block.vtx[1] = create_transaction(self.nodes[0],
                                          self.coinbase_blocks[1], self.nodeaddress, 1.0)
        block.hashMerkleRoot = block.calc_merkle_root()
        block.rehash()
        block.solve()

        node0.send_and_ping(msg_block(block))
        assert_equal(int(self.nodes[0].getbestblockhash(), 16), block.sha256)

if __name__ == '__main__':
    BIP66Test().main()
diff --git a/test/functional/bitcoin_cli.py b/test/functional/bitcoin_cli.py
index 3e00e4482..48a201e2b 100755
--- a/test/functional/bitcoin_cli.py
+++ b/test/functional/bitcoin_cli.py
@@ -1,48 +1,47 @@
#!/usr/bin/env python3
# Copyright (c) 2017 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test bitcoin-cli"""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal, assert_raises_process_error, get_auth_cookie


class TestBitcoinCli(BitcoinTestFramework):

-    def __init__(self):
-        super().__init__()
+    def set_test_params(self):
        self.setup_clean_chain = True
        self.num_nodes = 1

    def run_test(self):
        """Main test logic"""

        self.log.info(
            "Compare responses from getwalletinfo RPC and `bitcoin-cli getwalletinfo`")
        cli_response = self.nodes[0].cli.getwalletinfo()
        rpc_response = self.nodes[0].getwalletinfo()
        assert_equal(cli_response, rpc_response)

        self.log.info(
            "Compare responses from getblockchaininfo RPC and `bitcoin-cli getblockchaininfo`")
        cli_response = self.nodes[0].cli.getblockchaininfo()
        rpc_response = self.nodes[0].getblockchaininfo()
        assert_equal(cli_response, rpc_response)

        user, password = get_auth_cookie(self.nodes[0].datadir)

        self.log.info("Test -stdinrpcpass option")
        assert_equal(0, self.nodes[0].cli(
            '-rpcuser=%s' % user, '-stdinrpcpass', input=password).getblockcount())
        assert_raises_process_error(1, "incorrect rpcuser or rpcpassword", self.nodes[0].cli(
            '-rpcuser=%s' % user, '-stdinrpcpass', input="foo").echo)

        self.log.info("Test -stdin and -stdinrpcpass")
        assert_equal(["foo", "bar"], self.nodes[0].cli('-rpcuser=%s' % user,
                                                       '-stdin', '-stdinrpcpass', input=password + "\nfoo\nbar").echo())
        assert_raises_process_error(1, "incorrect rpcuser or rpcpassword", self.nodes[0].cli(
            '-rpcuser=%s' % user, '-stdin', '-stdinrpcpass', input="foo").echo)

if __name__ == '__main__':
    TestBitcoinCli().main()
diff --git a/test/functional/blockchain.py b/test/functional/blockchain.py
index f7348fcf5..101670ff7 100755
--- a/test/functional/blockchain.py
+++ b/test/functional/blockchain.py
@@ -1,182 +1,180 @@
#!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test RPCs related to blockchainstate.

Test the following RPCs:
    - gettxoutsetinfo
    - getdifficulty
    - getbestblockhash
    - getblockhash
    - getblockheader
    - getchaintxstats
    - getnetworkhashps
    - verifychain

Tests correspond to code in rpc/blockchain.cpp.
""" from decimal import Decimal import http.client import subprocess from test_framework.test_framework import ( BitcoinTestFramework, BITCOIND_PROC_WAIT_TIMEOUT) from test_framework.util import ( assert_equal, assert_raises, assert_raises_jsonrpc, assert_is_hex_string, assert_is_hash_string, ) class BlockchainTest(BitcoinTestFramework): - def __init__(self): - super().__init__() - self.setup_clean_chain = False + def set_test_params(self): self.num_nodes = 1 self.extra_args = [['-stopatheight=207']] def run_test(self): self._test_getchaintxstats() self._test_gettxoutsetinfo() self._test_getblockheader() self._test_getdifficulty() self._test_getnetworkhashps() self._test_stopatheight() assert self.nodes[0].verifychain(4, 0) def _test_getchaintxstats(self): chaintxstats = self.nodes[0].getchaintxstats(1) # 200 txs plus genesis tx assert_equal(chaintxstats['txcount'], 201) # tx rate should be 1 per 10 minutes, or 1/600 # we have to round because of binary math assert_equal(round(chaintxstats['txrate'] * 600, 10), Decimal(1)) b1 = self.nodes[0].getblock(self.nodes[0].getblockhash(1)) b200 = self.nodes[0].getblock(self.nodes[0].getblockhash(200)) time_diff = b200['mediantime'] - b1['mediantime'] chaintxstats = self.nodes[0].getchaintxstats() assert_equal(chaintxstats['time'], b200['time']) assert_equal(chaintxstats['txcount'], 201) assert_equal(chaintxstats['window_block_count'], 199) assert_equal(chaintxstats['window_tx_count'], 199) assert_equal(chaintxstats['window_interval'], time_diff) assert_equal( round(chaintxstats['txrate'] * time_diff, 10), Decimal(199)) chaintxstats = self.nodes[0].getchaintxstats(blockhash=b1['hash']) assert_equal(chaintxstats['time'], b1['time']) assert_equal(chaintxstats['txcount'], 2) assert_equal(chaintxstats['window_block_count'], 0) assert('window_tx_count' not in chaintxstats) assert('window_interval' not in chaintxstats) assert('txrate' not in chaintxstats) assert_raises_jsonrpc(-8, "Invalid block count: should be between 0 and the block's height - 1", self.nodes[0].getchaintxstats, 201) def _test_gettxoutsetinfo(self): node = self.nodes[0] res = node.gettxoutsetinfo() assert_equal(res['total_amount'], Decimal('8725.00000000')) assert_equal(res['transactions'], 200) assert_equal(res['height'], 200) assert_equal(res['txouts'], 200) assert_equal(res['bogosize'], 17000), assert_equal(res['bestblock'], node.getblockhash(200)) size = res['disk_size'] assert size > 6400 assert size < 64000 assert_equal(len(res['bestblock']), 64) assert_equal(len(res['hash_serialized']), 64) self.log.info( "Test that gettxoutsetinfo() works for blockchain with just the genesis block") b1hash = node.getblockhash(1) node.invalidateblock(b1hash) res2 = node.gettxoutsetinfo() assert_equal(res2['transactions'], 0) assert_equal(res2['total_amount'], Decimal('0')) assert_equal(res2['height'], 0) assert_equal(res2['txouts'], 0) assert_equal(res2['bogosize'], 0), assert_equal(res2['bestblock'], node.getblockhash(0)) assert_equal(len(res2['hash_serialized']), 64) self.log.info( "Test that gettxoutsetinfo() returns the same result after invalidate/reconsider block") node.reconsiderblock(b1hash) res3 = node.gettxoutsetinfo() assert_equal(res['total_amount'], res3['total_amount']) assert_equal(res['transactions'], res3['transactions']) assert_equal(res['height'], res3['height']) assert_equal(res['txouts'], res3['txouts']) assert_equal(res['bogosize'], res3['bogosize']) assert_equal(res['bestblock'], res3['bestblock']) assert_equal(res['hash_serialized'], res3['hash_serialized']) def 
_test_getblockheader(self): node = self.nodes[0] assert_raises_jsonrpc(-5, "Block not found", node.getblockheader, "nonsense") besthash = node.getbestblockhash() secondbesthash = node.getblockhash(199) header = node.getblockheader(besthash) assert_equal(header['hash'], besthash) assert_equal(header['height'], 200) assert_equal(header['confirmations'], 1) assert_equal(header['previousblockhash'], secondbesthash) assert_is_hex_string(header['chainwork']) assert_is_hash_string(header['hash']) assert_is_hash_string(header['previousblockhash']) assert_is_hash_string(header['merkleroot']) assert_is_hash_string(header['bits'], length=None) assert isinstance(header['time'], int) assert isinstance(header['mediantime'], int) assert isinstance(header['nonce'], int) assert isinstance(header['version'], int) assert isinstance(int(header['versionHex'], 16), int) assert isinstance(header['difficulty'], Decimal) def _test_getdifficulty(self): difficulty = self.nodes[0].getdifficulty() # 1 hash in 2 should be valid, so difficulty should be 1/2**31 # binary => decimal => binary math is why we do this check assert abs(difficulty * 2**31 - 1) < 0.0001 def _test_getnetworkhashps(self): hashes_per_second = self.nodes[0].getnetworkhashps() # This should be 2 hashes every 10 minutes or 1/300 assert abs(hashes_per_second * 300 - 1) < 0.0001 def _test_stopatheight(self): assert_equal(self.nodes[0].getblockcount(), 200) self.nodes[0].generate(6) assert_equal(self.nodes[0].getblockcount(), 206) self.log.debug('Node should not stop at this height') assert_raises(subprocess.TimeoutExpired, lambda: self.nodes[0].process.wait(timeout=3)) try: self.nodes[0].generate(1) except (ConnectionError, http.client.BadStatusLine): pass # The node already shut down before response self.log.debug('Node should stop at this height...') self.nodes[0].process.wait(timeout=BITCOIND_PROC_WAIT_TIMEOUT) - self.nodes[0] = self.start_node(0, self.options.tmpdir) + self.start_node(0) assert_equal(self.nodes[0].getblockcount(), 207) if __name__ == '__main__': BlockchainTest().main() diff --git a/test/functional/create_cache.py b/test/functional/create_cache.py index 6c6b53510..0fdd5c607 100755 --- a/test/functional/create_cache.py +++ b/test/functional/create_cache.py @@ -1,32 +1,29 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ Creating a cache of the blockchain speeds up test execution when running multiple functional tests. This helper script is executed by test_runner when multiple tests are being run in parallel. """ from test_framework.test_framework import BitcoinTestFramework class CreateCache(BitcoinTestFramework): + # Test network and test nodes are not required: - def __init__(self): - super().__init__() - - # Test network and test nodes are not required: + def set_test_params(self): self.num_nodes = 0 - self.nodes = [] def setup_network(self): pass def run_test(self): pass if __name__ == '__main__': CreateCache().main() diff --git a/test/functional/dbcrash.py b/test/functional/dbcrash.py index 8daa371b3..e60fb9988 100755 --- a/test/functional/dbcrash.py +++ b/test/functional/dbcrash.py @@ -1,307 +1,305 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test recovery from a crash during chainstate writing. 
- 4 nodes * node0, node1, and node2 will have different dbcrash ratios, and different dbcache sizes * node3 will be a regular node, with no crashing. * The nodes will not connect to each other. - use default test framework starting chain. initialize starting_tip_height to tip height. - Main loop: * generate lots of transactions on node3, enough to fill up a block. * uniformly randomly pick a tip height from starting_tip_height to tip_height; with probability 1/(height_difference+4), invalidate this block. * mine enough blocks to overtake tip_height at start of loop. * for each node in [node0,node1,node2]: - for each mined block: * submit block to node * if node crashed on/after submitting: - restart until recovery succeeds - check that utxo matches node3 using gettxoutsetinfo""" import errno import http.client import random import sys import time from test_framework.mininode import * from test_framework.script import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * HTTP_DISCONNECT_ERRORS = [http.client.CannotSendRequest] try: HTTP_DISCONNECT_ERRORS.append(http.client.RemoteDisconnected) except AttributeError: pass class ChainstateWriteCrashTest(BitcoinTestFramework): - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 4 self.setup_clean_chain = False # Set -maxmempool=0 to turn off mempool memory sharing with dbcache # Set -rpcservertimeout=900 to reduce socket disconnects in this # long-running test self.base_args = ["-limitdescendantsize=0", "-maxmempool=0", "-rpcservertimeout=900", "-dbbatchsize=200000"] # Set different crash ratios and cache sizes. Note that not all of # -dbcache goes to pcoinsTip. self.node0_args = ["-dbcrashratio=8", "-dbcache=4"] + self.base_args self.node1_args = ["-dbcrashratio=16", "-dbcache=8"] + self.base_args self.node2_args = ["-dbcrashratio=24", "-dbcache=16"] + self.base_args # Node3 is a normal node with default args, except will mine full blocks self.node3_args = ["-blockmaxweight=4000000"] self.extra_args = [self.node0_args, self.node1_args, self.node2_args, self.node3_args] def setup_network(self): # Need a bit of extra time for the nodes to start up for this test - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, self.extra_args, timewait=90) + self.add_nodes(self.num_nodes, extra_args=self.extra_args, timewait=90) + self.start_nodes() # Leave them unconnected, we'll use submitblock directly in this test def restart_node(self, node_index, expected_tip): """Start up a given node id, wait for the tip to reach the given block hash, and calculate the utxo hash. Exceptions on startup should indicate node crash (due to -dbcrashratio), in which case we try again. Give up after 60 seconds. Returns the utxo hash of the given node.""" time_start = time.time() while time.time() - time_start < 120: try: # Any of these RPC calls could throw due to node crash - self.nodes[node_index] = self.start_node( - node_index, self.options.tmpdir, self.extra_args[node_index], timewait=90) + self.start_node(node_index) self.nodes[node_index].waitforblock(expected_tip) utxo_hash = self.nodes[node_index].gettxoutsetinfo()[ 'hash_serialized'] return utxo_hash except: # An exception here should mean the node is about to crash. # If bitcoind exits, then try again. wait_for_node_exit() # should raise an exception if bitcoind doesn't exit. 
self.wait_for_node_exit(node_index, timeout=10) self.crashed_on_restart += 1 time.sleep(1) # If we got here, bitcoind isn't coming back up on restart. Could be a # bug in bitcoind, or we've gotten unlucky with our dbcrash ratio -- # perhaps we generated a test case that blew up our cache? # TODO: If this happens a lot, we should try to restart without -dbcrashratio # and make sure that recovery happens. raise AssertionError( "Unable to successfully restart node %d in allotted time", node_index) def submit_block_catch_error(self, node_index, block): """Try submitting a block to the given node. Catch any exceptions that indicate the node has crashed. Returns true if the block was submitted successfully; false otherwise.""" try: self.nodes[node_index].submitblock(block) return True except http.client.BadStatusLine as e: # Prior to 3.5 BadStatusLine('') was raised for a remote disconnect error. if sys.version_info[0] == 3 and sys.version_info[1] < 5 and e.line == "''": self.log.debug( "node %d submitblock raised exception: %s", node_index, e) return False else: raise except tuple(HTTP_DISCONNECT_ERRORS) as e: self.log.debug( "node %d submitblock raised exception: %s", node_index, e) return False except OSError as e: self.log.debug( "node %d submitblock raised OSError exception: errno=%s", node_index, e.errno) if e.errno in [errno.EPIPE, errno.ECONNREFUSED, errno.ECONNRESET]: # The node has likely crashed return False else: # Unexpected exception, raise raise def sync_node3blocks(self, block_hashes): """Use submitblock to sync node3's chain with the other nodes If submitblock fails, restart the node and get the new utxo hash. If any nodes crash while updating, we'll compare utxo hashes to ensure recovery was successful.""" node3_utxo_hash = self.nodes[3].gettxoutsetinfo()['hash_serialized'] # Retrieve all the blocks from node3 blocks = [] for block_hash in block_hashes: blocks.append( [block_hash, self.nodes[3].getblock(block_hash, False)]) # Deliver each block to each other node for i in range(3): nodei_utxo_hash = None self.log.debug("Syncing blocks to node %d", i) for (block_hash, block) in blocks: # Get the block from node3, and submit to node_i self.log.debug("submitting block %s", block_hash) if not self.submit_block_catch_error(i, block): # TODO: more carefully check that the crash is due to -dbcrashratio # (change the exit code perhaps, and check that here?) self.wait_for_node_exit(i, timeout=30) self.log.debug( "Restarting node %d after block hash %s", i, block_hash) nodei_utxo_hash = self.restart_node(i, block_hash) assert nodei_utxo_hash is not None self.restart_counts[i] += 1 else: # Clear it out after successful submitblock calls -- the cached # utxo hash will no longer be correct nodei_utxo_hash = None # Check that the utxo hash matches node3's utxo set # NOTE: we only check the utxo set if we had to restart the node # after the last block submitted: # - checking the utxo hash causes a cache flush, which we don't # want to do every time; so # - we only update the utxo cache after a node restart, since flushing # the cache is a no-op at that point if nodei_utxo_hash is not None: self.log.debug("Checking txoutsetinfo matches for node %d", i) assert_equal(nodei_utxo_hash, node3_utxo_hash) def verify_utxo_hash(self): """Verify that the utxo hash of each node matches node3. 
Restart any nodes that crash while querying.""" node3_utxo_hash = self.nodes[3].gettxoutsetinfo()['hash_serialized'] self.log.info("Verifying utxo hash matches for all nodes") for i in range(3): try: nodei_utxo_hash = self.nodes[i].gettxoutsetinfo()[ 'hash_serialized'] except OSError: # probably a crash on db flushing nodei_utxo_hash = self.restart_node( i, self.nodes[3].getbestblockhash()) assert_equal(nodei_utxo_hash, node3_utxo_hash) def generate_small_transactions(self, node, count, utxo_list): FEE = 1000 # TODO: replace this with node relay fee based calculation num_transactions = 0 random.shuffle(utxo_list) while len(utxo_list) >= 2 and num_transactions < count: tx = CTransaction() input_amount = 0 for i in range(2): utxo = utxo_list.pop() tx.vin.append( CTxIn(COutPoint(int(utxo['txid'], 16), utxo['vout']))) input_amount += int(utxo['amount'] * COIN) output_amount = (input_amount - FEE) // 3 if output_amount <= 0: # Sanity check -- if we chose inputs that are too small, skip continue for i in range(3): tx.vout.append( CTxOut(output_amount, hex_str_to_bytes(utxo['scriptPubKey']))) # Sign and send the transaction to get into the mempool tx_signed_hex = node.signrawtransaction(ToHex(tx))['hex'] node.sendrawtransaction(tx_signed_hex) num_transactions += 1 def run_test(self): # Track test coverage statistics self.restart_counts = [0, 0, 0] # Track the restarts for nodes 0-2 self.crashed_on_restart = 0 # Track count of crashes during recovery # Start by creating a lot of utxos on node3 initial_height = self.nodes[3].getblockcount() utxo_list = create_confirmed_utxos(self.nodes[3].getnetworkinfo()[ 'relayfee'], self.nodes[3], 5000) self.log.info("Prepped %d utxo entries", len(utxo_list)) # Sync these blocks with the other nodes block_hashes_to_sync = [] for height in range(initial_height + 1, self.nodes[3].getblockcount() + 1): block_hashes_to_sync.append(self.nodes[3].getblockhash(height)) self.log.debug("Syncing %d blocks with other nodes", len(block_hashes_to_sync)) # Syncing the blocks could cause nodes to crash, so the test begins here. self.sync_node3blocks(block_hashes_to_sync) starting_tip_height = self.nodes[3].getblockcount() # Main test loop: # each time through the loop, generate a bunch of transactions, # and then either mine a single new block on the tip, or some-sized reorg. for i in range(40): self.log.info( "Iteration %d, generating 2500 transactions %s", i, self.restart_counts) # Generate a bunch of small-ish transactions self.generate_small_transactions(self.nodes[3], 2500, utxo_list) # Pick a random block between current tip, and starting tip current_height = self.nodes[3].getblockcount() random_height = random.randint(starting_tip_height, current_height) self.log.debug("At height %d, considering height %d", current_height, random_height) if random_height > starting_tip_height: # Randomly reorg from this point with some probability (1/4 for # tip, 1/5 for tip-1, ...) 
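# i.e. P(invalidate) = 1 / (depth + 4), where depth = current_height - random_height, # matching the 1/(height_difference+4) rule in the module docstring.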
if random.random() < 1.0 / (current_height + 4 - random_height): self.log.debug( "Invalidating block at height %d", random_height) self.nodes[3].invalidateblock( self.nodes[3].getblockhash(random_height)) # Now generate new blocks until we pass the old tip height self.log.debug("Mining longer tip") block_hashes = [] while current_height + 1 > self.nodes[3].getblockcount(): block_hashes.extend(self.nodes[3].generate( min(10, current_height + 1 - self.nodes[3].getblockcount()))) self.log.debug("Syncing %d new blocks...", len(block_hashes)) self.sync_node3blocks(block_hashes) utxo_list = self.nodes[3].listunspent() self.log.debug("Node3 utxo count: %d", len(utxo_list)) # Check that the utxo hashes agree with node3 # Useful side effect: each utxo cache gets flushed here, so that we # won't get crashes on shutdown at the end of the test. self.verify_utxo_hash() # Check the test coverage self.log.info("Restarted nodes: %s; crashes on restart: %d", self.restart_counts, self.crashed_on_restart) # If no nodes were restarted, we didn't test anything. assert self.restart_counts != [0, 0, 0] # Make sure we tested the case of crash-during-recovery. assert self.crashed_on_restart > 0 # Warn if any of the nodes escaped restart. for i in range(3): if self.restart_counts[i] == 0: self.log.warn("Node %d never crashed during utxo flush!", i) if __name__ == "__main__": ChainstateWriteCrashTest().main() diff --git a/test/functional/decodescript.py b/test/functional/decodescript.py index 00dfc5df8..6537afe21 100755 --- a/test/functional/decodescript.py +++ b/test/functional/decodescript.py @@ -1,226 +1,222 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.mininode import * from io import BytesIO class DecodeScriptTest(BitcoinTestFramework): - - """Tests decoding scripts via RPC command "decodescript".""" - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 def decodescript_script_sig(self): signature = '304502207fa7a6d1e0ee81132a269ad84e68d695483745cde8b541e3bf630749894e342a022100c1f7ab20e13e22fb95281a870f3dcf38d782e53023ee313d741ad0cfbc0c509001' push_signature = '48' + signature public_key = '03b0da749730dc9b4b1f4a14d6902877a92541f5368778853d9c4a0cb7802dcfb2' push_public_key = '21' + public_key # below are test cases for all of the standard transaction types # 1) P2PK scriptSig # the scriptSig of a public key scriptPubKey simply pushes a signature # onto the stack rpc_result = self.nodes[0].decodescript(push_signature) assert_equal(signature, rpc_result['asm']) # 2) P2PKH scriptSig rpc_result = self.nodes[0].decodescript( push_signature + push_public_key) assert_equal(signature + ' ' + public_key, rpc_result['asm']) # 3) multisig scriptSig # this also tests the leading portion of a P2SH multisig scriptSig # OP_0 rpc_result = self.nodes[0].decodescript( '00' + push_signature + push_signature) assert_equal('0 ' + signature + ' ' + signature, rpc_result['asm']) # 4) P2SH scriptSig # an empty P2SH redeemScript is valid and makes for a very simple test case. # thus, such a spending scriptSig would just need to pass the outer redeemScript # hash test and leave true on the top of the stack. 
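# '51' (OP_1) supplies the true value and '00' pushes the empty redeemScript, # so decodescript renders the script as the asm '1 0'.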
rpc_result = self.nodes[0].decodescript('5100') assert_equal('1 0', rpc_result['asm']) # 5) null data scriptSig - no such thing because null data scripts can not be spent. # thus, no test case for that standard transaction type is here. def decodescript_script_pub_key(self): public_key = '03b0da749730dc9b4b1f4a14d6902877a92541f5368778853d9c4a0cb7802dcfb2' push_public_key = '21' + public_key public_key_hash = '11695b6cd891484c2d49ec5aa738ec2b2f897777' push_public_key_hash = '14' + public_key_hash # below are test cases for all of the standard transaction types # 1) P2PK scriptPubKey # OP_CHECKSIG rpc_result = self.nodes[0].decodescript(push_public_key + 'ac') assert_equal(public_key + ' OP_CHECKSIG', rpc_result['asm']) # 2) P2PKH scriptPubKey # OP_DUP OP_HASH160 OP_EQUALVERIFY OP_CHECKSIG rpc_result = self.nodes[0].decodescript( '76a9' + push_public_key_hash + '88ac') assert_equal('OP_DUP OP_HASH160 ' + public_key_hash + ' OP_EQUALVERIFY OP_CHECKSIG', rpc_result['asm']) # 3) multisig scriptPubKey # OP_CHECKMULTISIG # just imagine that the pub keys used below are different. # for our purposes here it does not matter that they are the same even # though it is unrealistic. rpc_result = self.nodes[0].decodescript( '52' + push_public_key + push_public_key + push_public_key + '53ae') assert_equal('2 ' + public_key + ' ' + public_key + ' ' + public_key + ' 3 OP_CHECKMULTISIG', rpc_result['asm']) # 4) P2SH scriptPubKey # OP_HASH160 OP_EQUAL. # push_public_key_hash here should actually be the hash of a redeem script. # but this works the same for purposes of this test. rpc_result = self.nodes[0].decodescript( 'a9' + push_public_key_hash + '87') assert_equal( 'OP_HASH160 ' + public_key_hash + ' OP_EQUAL', rpc_result['asm']) # 5) null data scriptPubKey # use a signature look-alike here to make sure that we do not decode random data as a signature. # this matters if/when signature sighash decoding comes along. # would want to make sure that no such decoding takes place in this # case. signature_imposter = '48304502207fa7a6d1e0ee81132a269ad84e68d695483745cde8b541e3bf630749894e342a022100c1f7ab20e13e22fb95281a870f3dcf38d782e53023ee313d741ad0cfbc0c509001' # OP_RETURN rpc_result = self.nodes[0].decodescript('6a' + signature_imposter) assert_equal('OP_RETURN ' + signature_imposter[2:], rpc_result['asm']) # 6) a CLTV redeem script. redeem scripts are in-effect scriptPubKey scripts, so adding a test here. # OP_NOP2 is also known as OP_CHECKLOCKTIMEVERIFY. # just imagine that the pub keys used below are different. # for our purposes here it does not matter that they are the same even though it is unrealistic. # # OP_IF # OP_CHECKSIGVERIFY # OP_ELSE # OP_CHECKLOCKTIMEVERIFY OP_DROP # OP_ENDIF # OP_CHECKSIG # # lock until block 500,000 rpc_result = self.nodes[0].decodescript( '63' + push_public_key + 'ad670320a107b17568' + push_public_key + 'ac') assert_equal('OP_IF ' + public_key + ' OP_CHECKSIGVERIFY OP_ELSE 500000 OP_CHECKLOCKTIMEVERIFY OP_DROP OP_ENDIF ' + public_key + ' OP_CHECKSIG', rpc_result['asm']) def decoderawtransaction_asm_sighashtype(self): """Tests decoding scripts via RPC command "decoderawtransaction". This test is in with the "decodescript" tests because they are testing the same "asm" script decodes. 
""" # this test case uses a random plain vanilla mainnet transaction with a # single P2PKH input and output tx = '0100000001696a20784a2c70143f634e95227dbdfdf0ecd51647052e70854512235f5986ca010000008a47304402207174775824bec6c2700023309a168231ec80b82c6069282f5133e6f11cbb04460220570edc55c7c5da2ca687ebd0372d3546ebc3f810516a002350cac72dfe192dfb014104d3f898e6487787910a690410b7a917ef198905c27fb9d3b0a42da12aceae0544fc7088d239d9a48f2828a15a09e84043001f27cc80d162cb95404e1210161536ffffffff0100e1f505000000001976a914eb6c6e0cdb2d256a32d97b8df1fc75d1920d9bca88ac00000000' rpc_result = self.nodes[0].decoderawtransaction(tx) assert_equal( '304402207174775824bec6c2700023309a168231ec80b82c6069282f5133e6f11cbb04460220570edc55c7c5da2ca687ebd0372d3546ebc3f810516a002350cac72dfe192dfb[ALL] 04d3f898e6487787910a690410b7a917ef198905c27fb9d3b0a42da12aceae0544fc7088d239d9a48f2828a15a09e84043001f27cc80d162cb95404e1210161536', rpc_result['vin'][0]['scriptSig']['asm']) # this test case uses a mainnet transaction that has a P2SH input and both P2PKH and P2SH outputs. # it's from James D'Angelo's awesome introductory videos about multisig: https://www.youtube.com/watch?v=zIbUSaZBJgU and https://www.youtube.com/watch?v=OSA1pwlaypc # verify that we have not altered scriptPubKey decoding. tx = '01000000018d1f5635abd06e2c7e2ddf58dc85b3de111e4ad6e0ab51bb0dcf5e84126d927300000000fdfe0000483045022100ae3b4e589dfc9d48cb82d41008dc5fa6a86f94d5c54f9935531924602730ab8002202f88cf464414c4ed9fa11b773c5ee944f66e9b05cc1e51d97abc22ce098937ea01483045022100b44883be035600e9328a01b66c7d8439b74db64187e76b99a68f7893b701d5380220225bf286493e4c4adcf928c40f785422572eb232f84a0b83b0dea823c3a19c75014c695221020743d44be989540d27b1b4bbbcfd17721c337cb6bc9af20eb8a32520b393532f2102c0120a1dda9e51a938d39ddd9fe0ebc45ea97e1d27a7cbd671d5431416d3dd87210213820eb3d5f509d7438c9eeecb4157b2f595105e7cd564b3cdbb9ead3da41eed53aeffffffff02611e0000000000001976a914dc863734a218bfe83ef770ee9d41a27f824a6e5688acee2a02000000000017a9142a5edea39971049a540474c6a99edf0aa4074c588700000000' rpc_result = self.nodes[0].decoderawtransaction(tx) assert_equal( '8e3730608c3b0bb5df54f09076e196bc292a8e39a78e73b44b6ba08c78f5cbb0', rpc_result['txid']) assert_equal( '0 3045022100ae3b4e589dfc9d48cb82d41008dc5fa6a86f94d5c54f9935531924602730ab8002202f88cf464414c4ed9fa11b773c5ee944f66e9b05cc1e51d97abc22ce098937ea[ALL] 3045022100b44883be035600e9328a01b66c7d8439b74db64187e76b99a68f7893b701d5380220225bf286493e4c4adcf928c40f785422572eb232f84a0b83b0dea823c3a19c75[ALL] 5221020743d44be989540d27b1b4bbbcfd17721c337cb6bc9af20eb8a32520b393532f2102c0120a1dda9e51a938d39ddd9fe0ebc45ea97e1d27a7cbd671d5431416d3dd87210213820eb3d5f509d7438c9eeecb4157b2f595105e7cd564b3cdbb9ead3da41eed53ae', rpc_result['vin'][0]['scriptSig']['asm']) assert_equal( 'OP_DUP OP_HASH160 dc863734a218bfe83ef770ee9d41a27f824a6e56 OP_EQUALVERIFY OP_CHECKSIG', rpc_result['vout'][0]['scriptPubKey']['asm']) assert_equal( 'OP_HASH160 2a5edea39971049a540474c6a99edf0aa4074c58 OP_EQUAL', rpc_result['vout'][1]['scriptPubKey']['asm']) txSave = CTransaction() txSave.deserialize(BytesIO(hex_str_to_bytes(tx))) # make sure that a specifically crafted op_return value will not pass # all the IsDERSignature checks and then get decoded as a sighash type tx = 
'01000000015ded05872fdbda629c7d3d02b194763ce3b9b1535ea884e3c8e765d42e316724020000006b48304502204c10d4064885c42638cbff3585915b322de33762598321145ba033fc796971e2022100bb153ad3baa8b757e30a2175bd32852d2e1cb9080f84d7e32fcdfd667934ef1b012103163c0ff73511ea1743fb5b98384a2ff09dd06949488028fd819f4d83f56264efffffffff0200000000000000000b6a0930060201000201000180380100000000001976a9141cabd296e753837c086da7a45a6c2fe0d49d7b7b88ac00000000' rpc_result = self.nodes[0].decoderawtransaction(tx) assert_equal('OP_RETURN 300602010002010001', rpc_result['vout'][0]['scriptPubKey']['asm']) # verify that we have not altered scriptPubKey processing even of a # specially crafted P2PKH pubkeyhash and P2SH redeem script hash that # is made to pass the der signature checks tx = '01000000018d1f5635abd06e2c7e2ddf58dc85b3de111e4ad6e0ab51bb0dcf5e84126d927300000000fdfe0000483045022100ae3b4e589dfc9d48cb82d41008dc5fa6a86f94d5c54f9935531924602730ab8002202f88cf464414c4ed9fa11b773c5ee944f66e9b05cc1e51d97abc22ce098937ea01483045022100b44883be035600e9328a01b66c7d8439b74db64187e76b99a68f7893b701d5380220225bf286493e4c4adcf928c40f785422572eb232f84a0b83b0dea823c3a19c75014c695221020743d44be989540d27b1b4bbbcfd17721c337cb6bc9af20eb8a32520b393532f2102c0120a1dda9e51a938d39ddd9fe0ebc45ea97e1d27a7cbd671d5431416d3dd87210213820eb3d5f509d7438c9eeecb4157b2f595105e7cd564b3cdbb9ead3da41eed53aeffffffff02611e0000000000001976a914301102070101010101010102060101010101010188acee2a02000000000017a91430110207010101010101010206010101010101018700000000' rpc_result = self.nodes[0].decoderawtransaction(tx) assert_equal( 'OP_DUP OP_HASH160 3011020701010101010101020601010101010101 OP_EQUALVERIFY OP_CHECKSIG', rpc_result['vout'][0]['scriptPubKey']['asm']) assert_equal( 'OP_HASH160 3011020701010101010101020601010101010101 OP_EQUAL', rpc_result['vout'][1]['scriptPubKey']['asm']) # some more full transaction tests of varying specific scriptSigs. used instead of # tests in decodescript_script_sig because the decodescript RPC is specifically # for working on scriptPubKeys (argh!). push_signature = bytes_to_hex_str( txSave.vin[0].scriptSig)[2:(0x48 * 2 + 4)] signature = push_signature[2:] der_signature = signature[:-2] signature_sighash_decoded = der_signature + '[ALL]' signature_2 = der_signature + '82' push_signature_2 = '48' + signature_2 signature_2_sighash_decoded = der_signature + '[NONE|ANYONECANPAY]' # 1) P2PK scriptSig txSave.vin[0].scriptSig = hex_str_to_bytes(push_signature) rpc_result = self.nodes[0].decoderawtransaction( bytes_to_hex_str(txSave.serialize())) assert_equal(signature_sighash_decoded, rpc_result['vin'][0]['scriptSig']['asm']) # make sure that the sighash decodes come out correctly for a more # complex / lesser used case. txSave.vin[0].scriptSig = hex_str_to_bytes(push_signature_2) rpc_result = self.nodes[0].decoderawtransaction( bytes_to_hex_str(txSave.serialize())) assert_equal(signature_2_sighash_decoded, rpc_result['vin'][0]['scriptSig']['asm']) # 2) multisig scriptSig txSave.vin[0].scriptSig = hex_str_to_bytes( '00' + push_signature + push_signature_2) rpc_result = self.nodes[0].decoderawtransaction( bytes_to_hex_str(txSave.serialize())) assert_equal('0 ' + signature_sighash_decoded + ' ' + signature_2_sighash_decoded, rpc_result['vin'][0]['scriptSig']['asm']) # 3) test a scriptSig that contains more than push operations. # in fact, it contains an OP_RETURN with data specially crafted to # cause improper decode if the code does not catch it. 
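# '6a' is OP_RETURN and '14' pushes the following 20 bytes, which begin with # 0x30 (a DER sequence tag) so a careless decoder could misread them as a signature.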
txSave.vin[0].scriptSig = hex_str_to_bytes( '6a143011020701010101010101020601010101010101') rpc_result = self.nodes[0].decoderawtransaction( bytes_to_hex_str(txSave.serialize())) assert_equal('OP_RETURN 3011020701010101010101020601010101010101', rpc_result['vin'][0]['scriptSig']['asm']) def run_test(self): self.decodescript_script_sig() self.decodescript_script_pub_key() self.decoderawtransaction_asm_sighashtype() if __name__ == '__main__': DecodeScriptTest().main() diff --git a/test/functional/disablewallet.py b/test/functional/disablewallet.py index dce8b1d67..d62a5da3d 100755 --- a/test/functional/disablewallet.py +++ b/test/functional/disablewallet.py @@ -1,39 +1,37 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Exercise API with -disablewallet. # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class DisableWalletTest (BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 self.extra_args = [["-disablewallet"]] def run_test(self): # Check regression: # https://github.com/bitcoin/bitcoin/issues/6963#issuecomment-154548880 x = self.nodes[0].validateaddress('3J98t1WpEZ73CNmQviecrnyiWrnqRhWNLy') assert(x['isvalid'] == False) x = self.nodes[0].validateaddress('mneYUmWYsuk7kySiURxCi3AGxrAqZxLgPZ') assert(x['isvalid'] == True) # Checking mining to an address without a wallet. Generating to a valid address should succeed # but generating to an invalid address will fail. self.nodes[0].generatetoaddress( 1, 'mneYUmWYsuk7kySiURxCi3AGxrAqZxLgPZ') assert_raises_jsonrpc(-5, "Invalid address", self.nodes[0].generatetoaddress, 1, '3J98t1WpEZ73CNmQviecrnyiWrnqRhWNLy') if __name__ == '__main__': DisableWalletTest().main() diff --git a/test/functional/disconnect_ban.py b/test/functional/disconnect_ban.py index 590f30b74..14b3a11d5 100755 --- a/test/functional/disconnect_ban.py +++ b/test/functional/disconnect_ban.py @@ -1,127 +1,124 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
"""Test node disconnect and ban behavior""" from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( assert_equal, assert_raises_jsonrpc, connect_nodes_bi, wait_until, ) class DisconnectBanTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 - self.setup_clean_chain = False def run_test(self): self.log.info("Test setban and listbanned RPCs") self.log.info("setban: successfully ban single IP address") # node1 should have 2 connections to node0 at this point assert_equal(len(self.nodes[1].getpeerinfo()), 2) self.nodes[1].setban("127.0.0.1", "add") wait_until(lambda: len(self.nodes[1].getpeerinfo()) == 0, timeout=10) # all nodes must be disconnected at this point assert_equal(len(self.nodes[1].getpeerinfo()), 0) assert_equal(len(self.nodes[1].listbanned()), 1) self.log.info("clearbanned: successfully clear ban list") self.nodes[1].clearbanned() assert_equal(len(self.nodes[1].listbanned()), 0) self.nodes[1].setban("127.0.0.0/24", "add") self.log.info("setban: fail to ban an already banned subnet") assert_equal(len(self.nodes[1].listbanned()), 1) assert_raises_jsonrpc( -23, "IP/Subnet already banned", self.nodes[1].setban, "127.0.0.1", "add") self.log.info("setban: fail to ban an invalid subnet") assert_raises_jsonrpc( -30, "Error: Invalid IP/Subnet", self.nodes[1].setban, "127.0.0.1/42", "add") # still only one banned ip because 127.0.0.1 is within the range of # 127.0.0.0/24 assert_equal(len(self.nodes[1].listbanned()), 1) self.log.info("setban remove: fail to unban a non-banned subnet") assert_raises_jsonrpc( -30, "Error: Unban failed", self.nodes[1].setban, "127.0.0.1", "remove") assert_equal(len(self.nodes[1].listbanned()), 1) self.log.info("setban remove: successfully unban subnet") self.nodes[1].setban("127.0.0.0/24", "remove") assert_equal(len(self.nodes[1].listbanned()), 0) self.nodes[1].clearbanned() assert_equal(len(self.nodes[1].listbanned()), 0) self.log.info("setban: test persistence across node restart") self.nodes[1].setban("127.0.0.0/32", "add") self.nodes[1].setban("127.0.0.0/24", "add") # ban for 1 seconds self.nodes[1].setban("192.168.0.1", "add", 1) # ban for 1000 seconds self.nodes[1].setban( "2001:4d48:ac57:400:cacf:e9ff:fe1d:9c63/19", "add", 1000) listBeforeShutdown = self.nodes[1].listbanned() assert_equal("192.168.0.1/32", listBeforeShutdown[2]['address']) wait_until(lambda: len(self.nodes[1].listbanned()) == 3) self.stop_node(1) + self.start_node(1) - self.nodes[1] = self.start_node(1, self.options.tmpdir) listAfterShutdown = self.nodes[1].listbanned() assert_equal("127.0.0.0/24", listAfterShutdown[0]['address']) assert_equal("127.0.0.0/32", listAfterShutdown[1]['address']) assert_equal("/19" in listAfterShutdown[2]['address'], True) # Clear ban lists self.nodes[1].clearbanned() connect_nodes_bi(self.nodes, 0, 1) self.log.info("Test disconnectnode RPCs") self.log.info( "disconnectnode: fail to disconnect when calling with address and nodeid") address1 = self.nodes[0].getpeerinfo()[0]['addr'] node1 = self.nodes[0].getpeerinfo()[0]['addr'] assert_raises_jsonrpc( -32602, "Only one of address and nodeid should be provided.", self.nodes[0].disconnectnode, address=address1, nodeid=node1) self.log.info( "disconnectnode: fail to disconnect when calling with junk address") assert_raises_jsonrpc(-29, "Node not found in connected nodes", self.nodes[0].disconnectnode, address="221B Baker Street") self.log.info( "disconnectnode: successfully disconnect node by address") 
address1 = self.nodes[0].getpeerinfo()[0]['addr'] self.nodes[0].disconnectnode(address=address1) wait_until(lambda: len(self.nodes[0].getpeerinfo()) == 1, timeout=10) assert not [node for node in self.nodes[0].getpeerinfo() if node['addr'] == address1] self.log.info("disconnectnode: successfully reconnect node") # reconnect the node connect_nodes_bi(self.nodes, 0, 1) assert_equal(len(self.nodes[0].getpeerinfo()), 2) assert [node for node in self.nodes[0] .getpeerinfo() if node['addr'] == address1] self.log.info( "disconnectnode: successfully disconnect node by node id") id1 = self.nodes[0].getpeerinfo()[0]['id'] self.nodes[0].disconnectnode(nodeid=id1) wait_until(lambda: len(self.nodes[0].getpeerinfo()) == 1, timeout=10) assert not [node for node in self.nodes[0].getpeerinfo() if node['id'] == id1] if __name__ == '__main__': DisconnectBanTest().main() diff --git a/test/functional/example_test.py b/test/functional/example_test.py index e48499b20..d0d404873 100755 --- a/test/functional/example_test.py +++ b/test/functional/example_test.py @@ -1,231 +1,229 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """An example functional test The module-level docstring should include a high-level description of what the test is doing. It's the first thing people see when they open the file and should give the reader information about *what* the test is testing and *how* it's being tested """ # Imports should be in PEP8 ordering (std library first, then third party # libraries then local imports). from collections import defaultdict # Avoid wildcard * imports if possible from test_framework.blocktools import (create_block, create_coinbase) from test_framework.mininode import ( CInv, NetworkThread, NodeConn, NodeConnCB, mininode_lock, msg_block, msg_getdata, ) from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( assert_equal, connect_nodes, p2p_port, wait_until, ) # NodeConnCB is a class containing callbacks to be executed when a P2P # message is received from the node-under-test. Subclass NodeConnCB and # override the on_*() methods if you need custom behaviour. class BaseNode(NodeConnCB): def __init__(self): """Initialize the NodeConnCB Used to initialize custom properties for the Node that aren't included by default in the base class. Be aware that the NodeConnCB base class already stores a counter for each P2P message type and the last received message of each type, which should be sufficient for the needs of most tests. Call super().__init__() first for standard initialization and then initialize custom properties.""" super().__init__() # Stores a dictionary of all blocks received self.block_receive_map = defaultdict(int) def on_block(self, conn, message): """Override the standard on_block callback Store the hash of a received block in the dictionary.""" message.block.calc_sha256() self.block_receive_map[message.block.sha256] += 1 def custom_function(): """Do some custom behaviour If this function is more generally useful for other tests, consider moving it to a module in test_framework.""" # self.log.info("running custom_function") # Oops! Can't run self.log outside the BitcoinTestFramework pass class ExampleTest(BitcoinTestFramework): # Each functional test is a subclass of the BitcoinTestFramework class.
- # Override the __init__(), add_options(), setup_chain(), setup_network() + # Override the set_test_params(), add_options(), setup_chain(), setup_network() # and setup_nodes() methods to customize the test setup as required. - def __init__(self): - """Initialize the test + def set_test_params(self): + """Override test parameters for your individual test. - Call super().__init__() first, and then override any test parameters - for your individual test.""" - super().__init__() + This method must be overridden and num_nodes must be explicitly set.""" self.setup_clean_chain = True self.num_nodes = 3 # Use self.extra_args to change command-line arguments for the nodes self.extra_args = [[], ["-logips"], []] - # self.log.info("I've finished __init__") # Oops! Can't run self.log before run_test() + # self.log.info("I've finished set_test_params") # Oops! Can't run self.log before run_test() # Use add_options() to add specific command-line options for your test. # In practice this is not used very much, since the tests are mostly written # to be run in automated environments without command-line options. # def add_options() # pass # Use setup_chain() to customize the node data directories. In practice # this is not used very much since the default behaviour is almost always # fine # def setup_chain(): # pass def setup_network(self): """Setup the test network topology Often you won't need to override this, since the standard network topology (linear: node0 <-> node1 <-> node2 <-> ...) is fine for most tests. If you do override this method, remember to start the nodes, assign them to self.nodes, connect them and then sync.""" self.setup_nodes() # In this test, we're not connecting node2 to node0 or node1. Calls to # sync_all() should not include node2, since we're not expecting it to # sync. connect_nodes(self.nodes[0], 1) self.sync_all([self.nodes[0:1]]) # Use setup_nodes() to customize the node start behaviour (for example if # you don't want to start all nodes at the start of the test). # def setup_nodes(): # pass def custom_method(self): """Do some custom behaviour for this test Define it in a method here because you're going to use it repeatedly. If you think it's useful in general, consider moving it to the base BitcoinTestFramework class so other tests can use it.""" self.log.info("Running custom_method") def run_test(self): """Main test logic""" # Create a P2P connection to one of the nodes node0 = BaseNode() connections = [] connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], node0)) node0.add_connection(connections[0]) # Start up network handling in another thread. This needs to be called # after the P2P connections have been created. NetworkThread().start() # wait_for_verack ensures that the P2P connection is fully up. node0.wait_for_verack() # Generating a block on one of the nodes will get us out of IBD blocks = [int(self.nodes[0].generate(nblocks=1)[0], 16)] self.sync_all([self.nodes[0:1]]) # Notice above how we called an RPC by calling a method with the same # name on the node object. Notice also how we used a keyword argument # to specify a named RPC argument. Neither of those are defined on the # node object. Instead there's some __getattr__() magic going on under # the covers to dispatch unrecognised attribute calls to the RPC # interface. # Logs are nice. Do plenty of them. They can be used in place of comments for # breaking the test into sub-sections.
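# self.log is a standard Python logger, so the usual levels are available: # self.log.debug() output normally lands only in the per-test log file, while # info() and above also reach the console.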
self.log.info("Starting test!") self.log.info("Calling a custom function") custom_function() self.log.info("Calling a custom method") self.custom_method() self.log.info("Create some blocks") self.tip = int(self.nodes[0].getbestblockhash(), 16) self.block_time = self.nodes[0].getblock( self.nodes[0].getbestblockhash())['time'] + 1 height = 1 for i in range(10): # Use the mininode and blocktools functionality to manually build a block # Calling the generate() rpc is easier, but this allows us to exactly # control the blocks and transactions. block = create_block( self.tip, create_coinbase(height), self.block_time) block.solve() block_message = msg_block(block) # Send message is used to send a P2P message to the node over our NodeConn connection node0.send_message(block_message) self.tip = block.sha256 blocks.append(self.tip) self.block_time += 1 height += 1 self.log.info( "Wait for node1 to reach current tip (height 11) using RPC") self.nodes[1].waitforblockheight(11) self.log.info("Connect node2 and node1") connect_nodes(self.nodes[1], 2) self.log.info("Add P2P connection to node2") node2 = BaseNode() connections.append( NodeConn('127.0.0.1', p2p_port(2), self.nodes[2], node2)) node2.add_connection(connections[1]) node2.wait_for_verack() self.log.info( "Wait for node2 reach current tip. Test that it has propagated all the blocks to us") for block in blocks: getdata_request = msg_getdata() getdata_request.inv.append(CInv(2, block)) node2.send_message(getdata_request) # wait_until() will loop until a predicate condition is met. Use it to test properties of the # NodeConnCB objects. wait_until(lambda: sorted(blocks) == sorted( list(node2.block_receive_map.keys())), timeout=5, lock=mininode_lock) self.log.info("Check that each block was received only once") # The network thread uses a global lock on data access to the NodeConn objects when sending and receiving # messages. The test thread should acquire the global lock before accessing any NodeConn data to avoid locking # and synchronization issues. Note wait_until() acquires this global lock when testing the predicate. with mininode_lock: for block in node2.block_receive_map.values(): assert_equal(block, 1) if __name__ == '__main__': ExampleTest().main() diff --git a/test/functional/forknotify.py b/test/functional/forknotify.py index bff8ab736..76f745978 100755 --- a/test/functional/forknotify.py +++ b/test/functional/forknotify.py @@ -1,77 +1,67 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
# # Test -alertnotify # """Test the -alertnotify option.""" import os import time from test_framework.test_framework import BitcoinTestFramework -from test_framework.util import * class ForkNotifyTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 - self.setup_clean_chain = False def setup_network(self): - self.nodes = [] self.alert_filename = os.path.join(self.options.tmpdir, "alert.txt") with open(self.alert_filename, 'w', encoding='utf8'): pass # Just open then close to create zero-length file - self.nodes.append(self.start_node(0, self.options.tmpdir, - ["-blockversion=2", "-alertnotify=echo %s >> \"" + self.alert_filename + "\""])) - # Node1 mines block.version=211 blocks - self.nodes.append(self.start_node(1, self.options.tmpdir, - ["-blockversion=211"])) - connect_nodes(self.nodes[1], 0) - - self.sync_all() + self.extra_args = [["-blockversion=2", "-alertnotify=echo %s >> \"" + self.alert_filename + "\""], + ["-blockversion=211"]] + super().setup_network() def run_test(self): # Mine 51 up-version blocks self.nodes[1].generate(51) self.sync_all() # -alertnotify should trigger on the 51st, # but mine and sync another to give # -alertnotify time to write self.nodes[1].generate(1) self.sync_all() # Give bitcoind 10 seconds to write the alert notification timeout = 10.0 while timeout > 0: if os.path.exists(self.alert_filename) and os.path.getsize(self.alert_filename): break time.sleep(0.1) timeout -= 0.1 else: assert False, "-alertnotify did not warn of up-version blocks" with open(self.alert_filename, 'r', encoding='utf8') as f: alert_text = f.read() # Mine more up-version blocks, should not get more alerts: self.nodes[1].generate(1) self.sync_all() self.nodes[1].generate(1) self.sync_all() with open(self.alert_filename, 'r', encoding='utf8') as f: alert_text2 = f.read() if alert_text != alert_text2: raise AssertionError( "-alertnotify excessive warning of up-version blocks") if __name__ == '__main__': ForkNotifyTest().main() diff --git a/test/functional/fundrawtransaction.py b/test/functional/fundrawtransaction.py index 672bfa5a1..4f47dd41d 100755 --- a/test/functional/fundrawtransaction.py +++ b/test/functional/fundrawtransaction.py @@ -1,777 +1,775 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import BitcoinTestFramework, BITCOIND_PROC_WAIT_TIMEOUT from test_framework.util import * def get_unspent(listunspent, amount): for utx in listunspent: if utx['amount'] == amount: return utx raise AssertionError( 'Could not find unspent with amount={}'.format(amount)) class RawTransactionsTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() - self.setup_clean_chain = True + def set_test_params(self): self.num_nodes = 4 + self.setup_clean_chain = True def setup_network(self, split=False): self.setup_nodes() connect_nodes_bi(self.nodes, 0, 1) connect_nodes_bi(self.nodes, 1, 2) connect_nodes_bi(self.nodes, 0, 2) connect_nodes_bi(self.nodes, 0, 3) def run_test(self): min_relay_tx_fee = self.nodes[0].getnetworkinfo()['relayfee'] # This test is not meant to test fee estimation and we'd like # to be sure all txs are sent at a consistent desired feerate for node in self.nodes: node.settxfee(min_relay_tx_fee) # if the fee's positive delta is higher than this value tests will fail, # neg.
delta always fail the tests. # The size of the signature of every input may be at most 2 bytes larger # than a minimum sized signature. # = 2 bytes * minRelayTxFeePerByte feeTolerance = 2 * min_relay_tx_fee / 1000 self.nodes[2].generate(1) self.sync_all() self.nodes[0].generate(121) self.sync_all() watchonly_address = self.nodes[0].getnewaddress() watchonly_pubkey = self.nodes[ 0].validateaddress(watchonly_address)["pubkey"] watchonly_amount = Decimal(200) self.nodes[3].importpubkey(watchonly_pubkey, "", True) watchonly_txid = self.nodes[0].sendtoaddress( watchonly_address, watchonly_amount) self.nodes[0].sendtoaddress( self.nodes[3].getnewaddress(), watchonly_amount / 10) self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 1.5) self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 1.0) self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 5.0) self.nodes[0].generate(1) self.sync_all() # # simple test # # inputs = [] outputs = {self.nodes[0].getnewaddress(): 1.0} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) fee = rawtxfund['fee'] dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) assert(len(dec_tx['vin']) > 0) # test that we have enough inputs # # simple test with two coins # # inputs = [] outputs = {self.nodes[0].getnewaddress(): 2.2} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) fee = rawtxfund['fee'] dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) assert(len(dec_tx['vin']) > 0) # test if we have enough inputs # # simple test with two coins # # inputs = [] outputs = {self.nodes[0].getnewaddress(): 2.6} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) fee = rawtxfund['fee'] dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) assert(len(dec_tx['vin']) > 0) assert_equal(dec_tx['vin'][0]['scriptSig']['hex'], '') # # simple test with two outputs # # inputs = [] outputs = { self.nodes[0].getnewaddress(): 2.6, self.nodes[1].getnewaddress(): 2.5} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) fee = rawtxfund['fee'] dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) totalOut = 0 for out in dec_tx['vout']: totalOut += out['value'] assert(len(dec_tx['vin']) > 0) assert_equal(dec_tx['vin'][0]['scriptSig']['hex'], '') # # test a fundrawtransaction with a VIN greater than the required amount # # utx = get_unspent(self.nodes[2].listunspent(), 5) inputs = [{'txid': utx['txid'], 'vout': utx['vout']}] outputs = {self.nodes[0].getnewaddress(): 1.0} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_equal(utx['txid'], dec_tx['vin'][0]['txid']) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) fee = rawtxfund['fee'] dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) totalOut = 0 for out in dec_tx['vout']: totalOut += out['value'] # compare vin total and totalout+fee assert_equal(fee + totalOut, utx['amount']) # # test a fundrawtransaction with which will not get a change output # # utx = get_unspent(self.nodes[2].listunspent(), 5) inputs = [{'txid': utx['txid'], 'vout': utx['vout']}] outputs = { 
self.nodes[0].getnewaddress(): Decimal(5.0) - fee - feeTolerance} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_equal(utx['txid'], dec_tx['vin'][0]['txid']) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) fee = rawtxfund['fee'] dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) totalOut = 0 for out in dec_tx['vout']: totalOut += out['value'] assert_equal(rawtxfund['changepos'], -1) assert_equal(fee + totalOut, utx['amount']) # compare vin total and totalout+fee # # test a fundrawtransaction with an invalid option # # utx = get_unspent(self.nodes[2].listunspent(), 5) inputs = [{'txid': utx['txid'], 'vout': utx['vout']}] outputs = {self.nodes[0].getnewaddress(): Decimal(4.0)} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_equal(utx['txid'], dec_tx['vin'][0]['txid']) assert_raises_jsonrpc(-3, "Unexpected key foo", self.nodes[ 2].fundrawtransaction, rawtx, {'foo': 'bar'}) # # test a fundrawtransaction with an invalid change address # # utx = get_unspent(self.nodes[2].listunspent(), 5) inputs = [{'txid': utx['txid'], 'vout': utx['vout']}] outputs = {self.nodes[0].getnewaddress(): Decimal(4.0)} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_equal(utx['txid'], dec_tx['vin'][0]['txid']) assert_raises_jsonrpc( -5, "changeAddress must be a valid bitcoin address", self.nodes[2].fundrawtransaction, rawtx, {'changeAddress': 'foobar'}) # # test a fundrawtransaction with a provided change address # # utx = get_unspent(self.nodes[2].listunspent(), 5) inputs = [{'txid': utx['txid'], 'vout': utx['vout']}] outputs = {self.nodes[0].getnewaddress(): Decimal(4.0)} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_equal(utx['txid'], dec_tx['vin'][0]['txid']) change = self.nodes[2].getnewaddress() assert_raises_jsonrpc(-8, "changePosition out of bounds", self.nodes[ 2].fundrawtransaction, rawtx, {'changeAddress': change, 'changePosition': 2}) rawtxfund = self.nodes[2].fundrawtransaction( rawtx, {'changeAddress': change, 'changePosition': 0}) dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) out = dec_tx['vout'][0] assert_equal(change, out['scriptPubKey']['addresses'][0]) # # test a fundrawtransaction with a VIN smaller than the required amount # # utx = get_unspent(self.nodes[2].listunspent(), 1) inputs = [{'txid': utx['txid'], 'vout': utx['vout']}] outputs = {self.nodes[0].getnewaddress(): 1.0} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) # 4-byte version + 1-byte vin count + 36-byte prevout then script_len rawtx = rawtx[:82] + "0100" + rawtx[84:] dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_equal(utx['txid'], dec_tx['vin'][0]['txid']) assert_equal("00", dec_tx['vin'][0]['scriptSig']['hex']) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) fee = rawtxfund['fee'] dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) totalOut = 0 matchingOuts = 0 for i, out in enumerate(dec_tx['vout']): totalOut += out['value'] if out['scriptPubKey']['addresses'][0] in outputs: matchingOuts += 1 else: assert_equal(i, rawtxfund['changepos']) assert_equal(utx['txid'], dec_tx['vin'][0]['txid']) assert_equal("00", dec_tx['vin'][0]['scriptSig']['hex']) assert_equal(matchingOuts, 1) assert_equal(len(dec_tx['vout']), 2) # # test a fundrawtransaction with two VINs # # utx = 
get_unspent(self.nodes[2].listunspent(), 1) utx2 = get_unspent(self.nodes[2].listunspent(), 5) inputs = [{'txid': utx['txid'], 'vout': utx['vout']}, {'txid': utx2['txid'], 'vout': utx2['vout']}] outputs = {self.nodes[0].getnewaddress(): 6.0} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_equal(utx['txid'], dec_tx['vin'][0]['txid']) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) fee = rawtxfund['fee'] dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) totalOut = 0 matchingOuts = 0 for out in dec_tx['vout']: totalOut += out['value'] if out['scriptPubKey']['addresses'][0] in outputs: matchingOuts += 1 assert_equal(matchingOuts, 1) assert_equal(len(dec_tx['vout']), 2) matchingIns = 0 for vinOut in dec_tx['vin']: for vinIn in inputs: if vinIn['txid'] == vinOut['txid']: matchingIns += 1 # we now must see two vins identical to vins given as params assert_equal(matchingIns, 2) # # test a fundrawtransaction with two VINs and two vOUTs # # utx = get_unspent(self.nodes[2].listunspent(), 1) utx2 = get_unspent(self.nodes[2].listunspent(), 5) inputs = [{'txid': utx['txid'], 'vout': utx['vout']}, {'txid': utx2['txid'], 'vout': utx2['vout']}] outputs = { self.nodes[0].getnewaddress(): 6.0, self.nodes[0].getnewaddress(): 1.0} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_equal(utx['txid'], dec_tx['vin'][0]['txid']) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) fee = rawtxfund['fee'] dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) totalOut = 0 matchingOuts = 0 for out in dec_tx['vout']: totalOut += out['value'] if out['scriptPubKey']['addresses'][0] in outputs: matchingOuts += 1 assert_equal(matchingOuts, 2) assert_equal(len(dec_tx['vout']), 3) # # test a fundrawtransaction with invalid vin # # listunspent = self.nodes[2].listunspent() inputs = [ {'txid': "1c7f966dab21119bac53213a2bc7532bff1fa844c124fd750a7d0b1332440bd1", 'vout': 0}] # invalid vin! 
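# Funding is expected to fail below: the wallet cannot look up the unknown # input, so it cannot treat it as providing any value.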
outputs = {self.nodes[0].getnewaddress(): 1.0} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_raises_jsonrpc( -4, "Insufficient funds", self.nodes[2].fundrawtransaction, rawtx) # # compare fee of a standard pubkeyhash transaction inputs = [] outputs = {self.nodes[1].getnewaddress(): 1.1} rawTx = self.nodes[0].createrawtransaction(inputs, outputs) fundedTx = self.nodes[0].fundrawtransaction(rawTx) # create same transaction over sendtoaddress txId = self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 1.1) signedFee = self.nodes[0].getrawmempool(True)[txId]['fee'] # compare fee feeDelta = Decimal(fundedTx['fee']) - Decimal(signedFee) assert(feeDelta >= 0 and feeDelta <= feeTolerance) # # # compare fee of a standard pubkeyhash transaction with multiple # outputs inputs = [] outputs = {self.nodes[1].getnewaddress(): 1.1, self.nodes[1].getnewaddress(): 1.2, self.nodes[1].getnewaddress(): 0.1, self.nodes[ 1].getnewaddress(): 1.3, self.nodes[1].getnewaddress(): 0.2, self.nodes[1].getnewaddress(): 0.3} rawTx = self.nodes[0].createrawtransaction(inputs, outputs) fundedTx = self.nodes[0].fundrawtransaction(rawTx) # create same transaction over sendtoaddress txId = self.nodes[0].sendmany("", outputs) signedFee = self.nodes[0].getrawmempool(True)[txId]['fee'] # compare fee feeDelta = Decimal(fundedTx['fee']) - Decimal(signedFee) assert(feeDelta >= 0 and feeDelta <= feeTolerance) # # # compare fee of a 2of2 multisig p2sh transaction # create 2of2 addr addr1 = self.nodes[1].getnewaddress() addr2 = self.nodes[1].getnewaddress() addr1Obj = self.nodes[1].validateaddress(addr1) addr2Obj = self.nodes[1].validateaddress(addr2) mSigObj = self.nodes[1].addmultisigaddress( 2, [addr1Obj['pubkey'], addr2Obj['pubkey']]) inputs = [] outputs = {mSigObj: 1.1} rawTx = self.nodes[0].createrawtransaction(inputs, outputs) fundedTx = self.nodes[0].fundrawtransaction(rawTx) # create same transaction over sendtoaddress txId = self.nodes[0].sendtoaddress(mSigObj, 1.1) signedFee = self.nodes[0].getrawmempool(True)[txId]['fee'] # compare fee feeDelta = Decimal(fundedTx['fee']) - Decimal(signedFee) assert(feeDelta >= 0 and feeDelta <= feeTolerance) # # # compare fee of a standard pubkeyhash transaction # create 4of5 addr addr1 = self.nodes[1].getnewaddress() addr2 = self.nodes[1].getnewaddress() addr3 = self.nodes[1].getnewaddress() addr4 = self.nodes[1].getnewaddress() addr5 = self.nodes[1].getnewaddress() addr1Obj = self.nodes[1].validateaddress(addr1) addr2Obj = self.nodes[1].validateaddress(addr2) addr3Obj = self.nodes[1].validateaddress(addr3) addr4Obj = self.nodes[1].validateaddress(addr4) addr5Obj = self.nodes[1].validateaddress(addr5) mSigObj = self.nodes[1].addmultisigaddress( 4, [addr1Obj['pubkey'], addr2Obj['pubkey'], addr3Obj['pubkey'], addr4Obj['pubkey'], addr5Obj['pubkey']]) inputs = [] outputs = {mSigObj: 1.1} rawTx = self.nodes[0].createrawtransaction(inputs, outputs) fundedTx = self.nodes[0].fundrawtransaction(rawTx) # create same transaction over sendtoaddress txId = self.nodes[0].sendtoaddress(mSigObj, 1.1) signedFee = self.nodes[0].getrawmempool(True)[txId]['fee'] # compare fee feeDelta = Decimal(fundedTx['fee']) - Decimal(signedFee) assert(feeDelta >= 0 and feeDelta <= feeTolerance) # # # spend a 2of2 multisig transaction over fundraw # create 2of2 addr addr1 = self.nodes[2].getnewaddress() addr2 = self.nodes[2].getnewaddress() addr1Obj = self.nodes[2].validateaddress(addr1) addr2Obj = self.nodes[2].validateaddress(addr2) mSigObj = 
self.nodes[2].addmultisigaddress( 2, [addr1Obj['pubkey'], addr2Obj['pubkey']]) # send 1.2 BTC to msig addr txId = self.nodes[0].sendtoaddress(mSigObj, 1.2) self.sync_all() self.nodes[1].generate(1) self.sync_all() oldBalance = self.nodes[1].getbalance() inputs = [] outputs = {self.nodes[1].getnewaddress(): 1.1} rawTx = self.nodes[2].createrawtransaction(inputs, outputs) fundedTx = self.nodes[2].fundrawtransaction(rawTx) signedTx = self.nodes[2].signrawtransaction( fundedTx['hex'], None, None, "ALL|FORKID") txId = self.nodes[2].sendrawtransaction(signedTx['hex']) self.sync_all() self.nodes[1].generate(1) self.sync_all() # make sure funds are received at node1 assert_equal( oldBalance + Decimal('1.10000000'), self.nodes[1].getbalance()) # # locked wallet test self.stop_node(0) + self.nodes[1].node_encrypt_wallet("test") self.stop_node(2) self.stop_node(3) - self.nodes[1].node_encrypt_wallet("test") - self.nodes = self.start_nodes(self.num_nodes, self.options.tmpdir) + self.start_nodes() # This test is not meant to test fee estimation and we'd like # to be sure all txs are sent at a consistent desired feerate for node in self.nodes: node.settxfee(min_relay_tx_fee) connect_nodes_bi(self.nodes, 0, 1) connect_nodes_bi(self.nodes, 1, 2) connect_nodes_bi(self.nodes, 0, 2) connect_nodes_bi(self.nodes, 0, 3) self.sync_all() # drain the keypool self.nodes[1].getnewaddress() self.nodes[1].getrawchangeaddress() inputs = [] outputs = {self.nodes[0].getnewaddress(): 1.1} rawTx = self.nodes[1].createrawtransaction(inputs, outputs) # fund a transaction that requires a new key for the change output # creating the key must be impossible because the wallet is locked assert_raises_jsonrpc( -4, "Insufficient funds", self.nodes[1].fundrawtransaction, rawTx) # refill the keypool self.nodes[1].walletpassphrase("test", 100) # need to refill the keypool to get an internal change address self.nodes[1].keypoolrefill(8) self.nodes[1].walletlock() assert_raises_jsonrpc(-13, "walletpassphrase", self.nodes[ 1].sendtoaddress, self.nodes[0].getnewaddress(), 1.2) oldBalance = self.nodes[0].getbalance() inputs = [] outputs = {self.nodes[0].getnewaddress(): 1.1} rawTx = self.nodes[1].createrawtransaction(inputs, outputs) fundedTx = self.nodes[1].fundrawtransaction(rawTx) # now we need to unlock self.nodes[1].walletpassphrase("test", 600) signedTx = self.nodes[1].signrawtransaction( fundedTx['hex'], None, None, "ALL|FORKID") txId = self.nodes[1].sendrawtransaction(signedTx['hex']) self.nodes[1].generate(1) self.sync_all() # make sure funds are received at node0 assert_equal( oldBalance + Decimal('51.10000000'), self.nodes[0].getbalance()) # # multiple (~19) inputs tx test | Compare fee # # # empty node1, send some small coins from node0 to node1 self.nodes[1].sendtoaddress( self.nodes[0].getnewaddress(), self.nodes[1].getbalance(), "", "", True) self.sync_all() self.nodes[0].generate(1) self.sync_all() for i in range(0, 20): self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 0.01) self.nodes[0].generate(1) self.sync_all() # fund a tx with ~20 small inputs inputs = [] outputs = { self.nodes[0].getnewaddress(): 0.15, self.nodes[0].getnewaddress(): 0.04} rawTx = self.nodes[1].createrawtransaction(inputs, outputs) fundedTx = self.nodes[1].fundrawtransaction(rawTx) # create same transaction over sendtoaddress txId = self.nodes[1].sendmany("", outputs) signedFee = self.nodes[1].getrawmempool(True)[txId]['fee'] # compare fee feeDelta = Decimal(fundedTx['fee']) - Decimal(signedFee) assert(feeDelta >= 0 and feeDelta <=
feeTolerance * 19) # ~19 inputs # # multiple (~19) inputs tx test | sign/send # # # again, empty node1, send some small coins from node0 to node1 self.nodes[1].sendtoaddress( self.nodes[0].getnewaddress(), self.nodes[1].getbalance(), "", "", True) self.sync_all() self.nodes[0].generate(1) self.sync_all() for i in range(0, 20): self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 0.01) self.nodes[0].generate(1) self.sync_all() # fund a tx with ~20 small inputs oldBalance = self.nodes[0].getbalance() inputs = [] outputs = { self.nodes[0].getnewaddress(): 0.15, self.nodes[0].getnewaddress(): 0.04} rawTx = self.nodes[1].createrawtransaction(inputs, outputs) fundedTx = self.nodes[1].fundrawtransaction(rawTx) fundedAndSignedTx = self.nodes[1].signrawtransaction( fundedTx['hex'], None, None, "ALL|FORKID") txId = self.nodes[1].sendrawtransaction(fundedAndSignedTx['hex']) self.sync_all() self.nodes[0].generate(1) self.sync_all() assert_equal(oldBalance + Decimal('50.19000000'), self.nodes[0].getbalance()) # 0.19+block reward # # test fundrawtransaction with OP_RETURN and no vin # # rawtx = "0100000000010000000000000000066a047465737400000000" dec_tx = self.nodes[2].decoderawtransaction(rawtx) assert_equal(len(dec_tx['vin']), 0) assert_equal(len(dec_tx['vout']), 1) rawtxfund = self.nodes[2].fundrawtransaction(rawtx) dec_tx = self.nodes[2].decoderawtransaction(rawtxfund['hex']) assert_greater_than(len(dec_tx['vin']), 0) # at least one vin assert_equal(len(dec_tx['vout']), 2) # one change output added # # test a fundrawtransaction using only watchonly # # inputs = [] outputs = {self.nodes[2].getnewaddress(): watchonly_amount / 2} rawtx = self.nodes[3].createrawtransaction(inputs, outputs) result = self.nodes[3].fundrawtransaction( rawtx, {'includeWatching': True}) res_dec = self.nodes[0].decoderawtransaction(result["hex"]) assert_equal(len(res_dec["vin"]), 1) assert_equal(res_dec["vin"][0]["txid"], watchonly_txid) assert("fee" in result.keys()) assert_greater_than(result["changepos"], -1) # # test fundrawtransaction using the entirety of watched funds # # inputs = [] outputs = {self.nodes[2].getnewaddress(): watchonly_amount} rawtx = self.nodes[3].createrawtransaction(inputs, outputs) # Backward compatibility test (2nd param is includeWatching) result = self.nodes[3].fundrawtransaction(rawtx, True) res_dec = self.nodes[0].decoderawtransaction(result["hex"]) assert_equal(len(res_dec["vin"]), 2) assert(res_dec["vin"][0]["txid"] == watchonly_txid or res_dec[ "vin"][1]["txid"] == watchonly_txid) assert_greater_than(result["fee"], 0) assert_greater_than(result["changepos"], -1) assert_equal(result["fee"] + res_dec["vout"][ result["changepos"]]["value"], watchonly_amount / 10) signedtx = self.nodes[3].signrawtransaction( result["hex"], None, None, "ALL|FORKID") assert(not signedtx["complete"]) signedtx = self.nodes[0].signrawtransaction( signedtx["hex"], None, None, "ALL|FORKID") assert(signedtx["complete"]) self.nodes[0].sendrawtransaction(signedtx["hex"]) self.nodes[0].generate(1) self.sync_all() # # Test feeRate option # # # Make sure there is exactly one input so coin selection can't skew the # result assert_equal(len(self.nodes[3].listunspent(1)), 1) inputs = [] outputs = {self.nodes[3].getnewaddress(): 1} rawtx = self.nodes[3].createrawtransaction(inputs, outputs) result = self.nodes[3].fundrawtransaction( rawtx) # uses min_relay_tx_fee (set by settxfee) result2 = self.nodes[3].fundrawtransaction( rawtx, {"feeRate": 2 * min_relay_tx_fee}) result3 = self.nodes[3].fundrawtransaction( rawtx, 
{"feeRate": 10 * min_relay_tx_fee}) result_fee_rate = result['fee'] * 1000 / count_bytes(result['hex']) assert_fee_amount( result2['fee'], count_bytes(result2['hex']), 2 * result_fee_rate) assert_fee_amount( result3['fee'], count_bytes(result3['hex']), 10 * result_fee_rate) # # Test address reuse option # # result3 = self.nodes[3].fundrawtransaction( rawtx, {"reserveChangeKey": False}) res_dec = self.nodes[0].decoderawtransaction(result3["hex"]) changeaddress = "" for out in res_dec['vout']: if out['value'] > 1.0: changeaddress += out['scriptPubKey']['addresses'][0] assert(changeaddress != "") nextaddr = self.nodes[3].getrawchangeaddress() # frt should not have removed the key from the keypool assert(changeaddress == nextaddr) result3 = self.nodes[3].fundrawtransaction(rawtx) res_dec = self.nodes[0].decoderawtransaction(result3["hex"]) changeaddress = "" for out in res_dec['vout']: if out['value'] > 1.0: changeaddress += out['scriptPubKey']['addresses'][0] assert(changeaddress != "") nextaddr = self.nodes[3].getnewaddress() # Now the change address key should be removed from the keypool assert(changeaddress != nextaddr) # # Test subtractFeeFromOutputs option # # # Make sure there is exactly one input so coin selection can't skew the # result assert_equal(len(self.nodes[3].listunspent(1)), 1) inputs = [] outputs = {self.nodes[2].getnewaddress(): 1} rawtx = self.nodes[3].createrawtransaction(inputs, outputs) result = [self.nodes[3].fundrawtransaction(rawtx), # uses min_relay_tx_fee (set by settxfee) self.nodes[3].fundrawtransaction( rawtx, {"subtractFeeFromOutputs": []}), # empty subtraction list self.nodes[3].fundrawtransaction( rawtx, {"subtractFeeFromOutputs": [0]}), # uses min_relay_tx_fee (set by settxfee) self.nodes[3].fundrawtransaction( rawtx, {"feeRate": 2 * min_relay_tx_fee}), self.nodes[3].fundrawtransaction(rawtx, {"feeRate": 2 * min_relay_tx_fee, "subtractFeeFromOutputs": [0]})] dec_tx = [self.nodes[3].decoderawtransaction(tx['hex']) for tx in result] output = [d['vout'][1 - r['changepos']]['value'] for d, r in zip(dec_tx, result)] change = [d['vout'][r['changepos']]['value'] for d, r in zip(dec_tx, result)] assert_equal(result[0]['fee'], result[1]['fee'], result[2]['fee']) assert_equal(result[3]['fee'], result[4]['fee']) assert_equal(change[0], change[1]) assert_equal(output[0], output[1]) assert_equal(output[0], output[2] + result[2]['fee']) assert_equal(change[0] + result[0]['fee'], change[2]) assert_equal(output[3], output[4] + result[4]['fee']) assert_equal(change[3] + result[3]['fee'], change[4]) inputs = [] outputs = { self.nodes[2].getnewaddress(): value for value in (1.0, 1.1, 1.2, 1.3)} keys = list(outputs.keys()) rawtx = self.nodes[3].createrawtransaction(inputs, outputs) result = [self.nodes[3].fundrawtransaction(rawtx), # split the fee between outputs 0, 2, and 3, but not output 1 self.nodes[3].fundrawtransaction(rawtx, {"subtractFeeFromOutputs": [0, 2, 3]})] dec_tx = [self.nodes[3].decoderawtransaction(result[0]['hex']), self.nodes[3].decoderawtransaction(result[1]['hex'])] # Nested list of non-change output amounts for each transaction output = [[out['value'] for i, out in enumerate(d['vout']) if i != r['changepos']] for d, r in zip(dec_tx, result)] # List of differences in output amounts between normal and subtractFee # transactions share = [o0 - o1 for o0, o1 in zip(output[0], output[1])] # output 1 is the same in both transactions assert_equal(share[1], 0) # the other 3 outputs are smaller as a result of subtractFeeFromOutputs assert_greater_than(share[0], 0) 
assert_greater_than(share[2], 0) assert_greater_than(share[3], 0) # outputs 2 and 3 take the same share of the fee assert_equal(share[2], share[3]) # output 0 takes at least as much share of the fee, and no more than 2 # satoshis more, than outputs 2 and 3 assert_greater_than_or_equal(share[0], share[2]) assert_greater_than_or_equal(share[2] + Decimal(2e-8), share[0]) # the fee is the same in both transactions assert_equal(result[0]['fee'], result[1]['fee']) # the total subtracted from the outputs is equal to the fee assert_equal(share[0] + share[2] + share[3], result[0]['fee']) if __name__ == '__main__': RawTransactionsTest().main() diff --git a/test/functional/getblocktemplate_longpoll.py b/test/functional/getblocktemplate_longpoll.py index 8d0fbac6d..7b6a2769d 100755 --- a/test/functional/getblocktemplate_longpoll.py +++ b/test/functional/getblocktemplate_longpoll.py @@ -1,90 +1,83 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * import threading class LongpollThread(threading.Thread): def __init__(self, node): threading.Thread.__init__(self) # query current longpollid templat = node.getblocktemplate() self.longpollid = templat['longpollid'] # create a new connection to the node, we can't use the same # connection from two threads self.node = get_rpc_proxy( node.url, 1, timeout=600, coveragedir=node.coverage_dir) def run(self): self.node.getblocktemplate({'longpollid': self.longpollid}) class GetBlockTemplateLPTest(BitcoinTestFramework): - - ''' - Test longpolling with getblocktemplate. - ''' - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 - self.setup_clean_chain = False def run_test(self): self.log.info( "Warning: this test will take about 70 seconds in the best case. 
Be patient.") self.nodes[0].generate(10) templat = self.nodes[0].getblocktemplate() longpollid = templat['longpollid'] # longpollid should not change between successive invocations if # nothing else happens templat2 = self.nodes[0].getblocktemplate() assert(templat2['longpollid'] == longpollid) # Test 1: test that the longpolling wait if we do nothing thr = LongpollThread(self.nodes[0]) thr.start() # check that thread still lives # wait 5 seconds or until thread exits thr.join(5) assert(thr.is_alive()) # Test 2: test that longpoll will terminate if another node generates a block # generate a block on another node self.nodes[1].generate(1) # check that thread will exit now that new transaction entered mempool # wait 5 seconds or until thread exits thr.join(5) assert(not thr.is_alive()) # Test 3: test that longpoll will terminate if we generate a block # ourselves thr = LongpollThread(self.nodes[0]) thr.start() # generate a block on another node self.nodes[0].generate(1) # wait 5 seconds or until thread exits thr.join(5) assert(not thr.is_alive()) # Test 4: test that introducing a new transaction into the mempool will # terminate the longpoll thr = LongpollThread(self.nodes[0]) thr.start() # generate a random transaction and submit it (txid, txhex, fee) = random_transaction(self.nodes, Decimal("1.1"), Decimal("0.0"), Decimal("0.001"), 20) # after one minute, every 10 seconds the mempool is probed, so in 80 # seconds it should have returned thr.join(60 + 20) assert(not thr.is_alive()) if __name__ == '__main__': GetBlockTemplateLPTest().main() diff --git a/test/functional/getchaintips.py b/test/functional/getchaintips.py index 1a762face..98bc67324 100755 --- a/test/functional/getchaintips.py +++ b/test/functional/getchaintips.py @@ -1,65 +1,61 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # Exercise the getchaintips API. We introduce a network split, work # on chains of different lengths, and join the network together again. # This gives us two tips, verify that it works. from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal class GetChainTipsTest (BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 4 - self.setup_clean_chain = False def run_test(self): - tips = self.nodes[0].getchaintips() assert_equal(len(tips), 1) assert_equal(tips[0]['branchlen'], 0) assert_equal(tips[0]['height'], 200) assert_equal(tips[0]['status'], 'active') # Split the network and build two chains of different lengths. self.split_network() self.nodes[0].generate(10) self.nodes[2].generate(20) self.sync_all([self.nodes[:2], self.nodes[2:]]) tips = self.nodes[1].getchaintips() assert_equal(len(tips), 1) shortTip = tips[0] assert_equal(shortTip['branchlen'], 0) assert_equal(shortTip['height'], 210) assert_equal(tips[0]['status'], 'active') tips = self.nodes[3].getchaintips() assert_equal(len(tips), 1) longTip = tips[0] assert_equal(longTip['branchlen'], 0) assert_equal(longTip['height'], 220) assert_equal(tips[0]['status'], 'active') # Join the network halves and check that we now have two tips # (at least at the nodes that previously had the short chain). 
self.join_network() tips = self.nodes[0].getchaintips() assert_equal(len(tips), 2) assert_equal(tips[0], longTip) assert_equal(tips[1]['branchlen'], 10) assert_equal(tips[1]['status'], 'valid-fork') tips[1]['branchlen'] = 0 tips[1]['status'] = 'active' assert_equal(tips[1], shortTip) if __name__ == '__main__': GetChainTipsTest().main() diff --git a/test/functional/high_priority_transaction.py b/test/functional/high_priority_transaction.py index 04a805bdf..132884bd8 100755 --- a/test/functional/high_priority_transaction.py +++ b/test/functional/high_priority_transaction.py @@ -1,105 +1,104 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test HighPriorityTransaction code # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.mininode import COIN from test_framework.cdefs import LEGACY_MAX_BLOCK_SIZE, COINBASE_MATURITY class HighPriorityTransactionTest(BitcoinTestFramework): - def __init__(self): - super().__init__() - self.setup_clean_chain = True + def set_test_params(self): self.num_nodes = 1 self.extra_args = [["-blockprioritypercentage=0", "-limitfreerelay=2"]] def create_small_transactions(self, node, utxos, num, fee): addr = node.getnewaddress() txids = [] for _ in range(num): t = utxos.pop() inputs = [{"txid": t["txid"], "vout": t["vout"]}] outputs = {} change = t['amount'] - fee outputs[addr] = satoshi_round(change) rawtx = node.createrawtransaction(inputs, outputs) signresult = node.signrawtransaction( rawtx, None, None, "NONE|FORKID") txid = node.sendrawtransaction(signresult["hex"], True) txids.append(txid) return txids def generate_high_priotransactions(self, node, count): # generate a bunch of spendable utxos self.txouts = gen_return_txouts() # create 150 simple one input one output hi prio txns hiprio_utxo_count = 150 age = 250 # be sure to make this utxo aged enough hiprio_utxos = create_confirmed_utxos( self.relayfee, node, hiprio_utxo_count, age) txids = [] # Create hiprio_utxo_count number of txns with 0 fee range_size = [0, hiprio_utxo_count] start_range = range_size[0] end_range = range_size[1] txids = self.create_small_transactions( node, hiprio_utxos[start_range:end_range], end_range - start_range, 0) return txids def run_test(self): # this is the priority cut off as defined in AllowFreeThreshold() (see: src/txmempool.h) # anything above that value is considered an high priority transaction hiprio_threshold = COIN * 144 / 250 self.relayfee = self.nodes[0].getnetworkinfo()['relayfee'] # first test step: 0 reserved prio space in block txids = self.generate_high_priotransactions(self.nodes[0], 150) mempool_size_pre = self.nodes[0].getmempoolinfo()['bytes'] mempool = self.nodes[0].getrawmempool(True) # assert that all the txns are in the mempool and that all of them are hi prio for i in txids: assert(i in mempool) assert(mempool[i]['currentpriority'] > hiprio_threshold) # mine one block self.nodes[0].generate(1) self.log.info( "Assert that all high prio transactions haven't been mined") assert_equal(self.nodes[0].getmempoolinfo()['bytes'], mempool_size_pre) # restart with default blockprioritypercentage self.stop_nodes() - self.nodes = self.start_nodes(self.num_nodes, self.options.tmpdir, - [["-limitfreerelay=2"]]) + self.nodes = [] + self.add_nodes(self.num_nodes, [["-limitfreerelay=2"]]) + self.start_nodes() # second test step: default reserved prio 
space in block (100K). # the mempool size is about 25K this means that all txns will be # included in the soon to be mined block txids = self.generate_high_priotransactions(self.nodes[0], 150) mempool_size_pre = self.nodes[0].getmempoolinfo()['bytes'] mempool = self.nodes[0].getrawmempool(True) # assert that all the txns are in the mempool and that all of them are hiprio for i in txids: assert(i in mempool) assert(mempool[i]['currentpriority'] > hiprio_threshold) # mine one block self.nodes[0].generate(1) self.log.info("Assert that all high prio transactions have been mined") assert(self.nodes[0].getmempoolinfo()['bytes'] == 0) if __name__ == '__main__': HighPriorityTransactionTest().main() diff --git a/test/functional/httpbasics.py b/test/functional/httpbasics.py index af709a088..3105fd70b 100755 --- a/test/functional/httpbasics.py +++ b/test/functional/httpbasics.py @@ -1,128 +1,125 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test rpc http basics # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * import http.client import urllib.parse class HTTPBasicsTest (BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 3 - self.setup_clean_chain = False def setup_network(self): self.setup_nodes() def run_test(self): # # lowlevel check for http persistent connection # # url = urllib.parse.urlparse(self.nodes[0].url) authpair = url.username + ':' + url.password headers = {"Authorization": "Basic " + str_to_b64str(authpair)} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) out1 = conn.getresponse().read() assert(b'"error":null' in out1) assert(conn.sock != None) # according to http/1.1 connection must still be open! # send 2nd request without closing connection conn.request('POST', '/', '{"method": "getchaintips"}', headers) out1 = conn.getresponse().read() assert(b'"error":null' in out1) # must also response with a correct json-rpc message assert(conn.sock != None) # according to http/1.1 connection must still be open! conn.close() # same should be if we add keep-alive because this should be the std. # behaviour headers = {"Authorization": "Basic " + str_to_b64str(authpair), "Connection": "keep-alive"} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) out1 = conn.getresponse().read() assert(b'"error":null' in out1) assert(conn.sock != None) # according to http/1.1 connection must still be open! # send 2nd request without closing connection conn.request('POST', '/', '{"method": "getchaintips"}', headers) out1 = conn.getresponse().read() assert(b'"error":null' in out1) # must also response with a correct json-rpc message assert(conn.sock != None) # according to http/1.1 connection must still be open! 
conn.close() # now do the same with "Connection: close" headers = {"Authorization": "Basic " + str_to_b64str(authpair), "Connection": "close"} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) out1 = conn.getresponse().read() assert(b'"error":null' in out1) assert(conn.sock == None) # now the connection must be closed after the response # node1 (2nd node) is running with disabled keep-alive option urlNode1 = urllib.parse.urlparse(self.nodes[1].url) authpair = urlNode1.username + ':' + urlNode1.password headers = {"Authorization": "Basic " + str_to_b64str(authpair)} conn = http.client.HTTPConnection(urlNode1.hostname, urlNode1.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) out1 = conn.getresponse().read() assert(b'"error":null' in out1) # node2 (third node) is running with standard keep-alive parameters # which means keep-alive is on urlNode2 = urllib.parse.urlparse(self.nodes[2].url) authpair = urlNode2.username + ':' + urlNode2.password headers = {"Authorization": "Basic " + str_to_b64str(authpair)} conn = http.client.HTTPConnection(urlNode2.hostname, urlNode2.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) out1 = conn.getresponse().read() assert(b'"error":null' in out1) assert(conn.sock != None) # connection must be closed because bitcoind should use # keep-alive by default # Check excessive request size conn = http.client.HTTPConnection(urlNode2.hostname, urlNode2.port) conn.connect() conn.request('GET', '/' + ('x' * 1000), '', headers) out1 = conn.getresponse() assert_equal(out1.status, http.client.NOT_FOUND) conn = http.client.HTTPConnection(urlNode2.hostname, urlNode2.port) conn.connect() conn.request('GET', '/' + ('x' * 10000), '', headers) out1 = conn.getresponse() assert_equal(out1.status, http.client.BAD_REQUEST) if __name__ == '__main__': HTTPBasicsTest().main() diff --git a/test/functional/import-rescan.py b/test/functional/import-rescan.py index 8316969b6..8c6f7abdf 100755 --- a/test/functional/import-rescan.py +++ b/test/functional/import-rescan.py @@ -1,207 +1,205 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. ''' Test rescan behavior of importaddress, importpubkey, importprivkey, and importmulti RPCs with different types of keys and rescan options. In the first part of the test, node 0 creates an address for each type of import RPC call and node 0 sends BTC to it. Then other nodes import the addresses, and the test makes listtransactions and getbalance calls to confirm that the importing node either did or did not execute rescans picking up the send transactions. In the second part of the test, node 0 sends more BTC to each address, and the test makes more listtransactions and getbalance calls to confirm that the importing nodes pick up the new transactions regardless of whether rescans happened previously. 
''' from test_framework.authproxy import JSONRPCException from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( connect_nodes, sync_blocks, assert_equal, set_node_times) import collections import enum import itertools Call = enum.Enum("Call", "single multi") Data = enum.Enum("Data", "address pub priv") Rescan = enum.Enum("Rescan", "no yes late_timestamp") class Variant(collections.namedtuple("Variant", "call data rescan prune")): """Helper for importing one key and verifying scanned transactions.""" def do_import(self, timestamp): """Call one key import RPC.""" if self.call == Call.single: if self.data == Data.address: response, error = try_rpc( self.node.importaddress, self.address[ "address"], self.label, self.rescan == Rescan.yes) elif self.data == Data.pub: response, error = try_rpc( self.node.importpubkey, self.address["pubkey"], self.label, self.rescan == Rescan.yes) elif self.data == Data.priv: response, error = try_rpc( self.node.importprivkey, self.key, self.label, self.rescan == Rescan.yes) assert_equal(response, None) assert_equal( error, {'message': 'Rescan is disabled in pruned mode', 'code': -4} if self.expect_disabled else None) elif self.call == Call.multi: response = self.node.importmulti([{ "scriptPubKey": { "address": self.address["address"] }, "timestamp": timestamp + TIMESTAMP_WINDOW + (1 if self.rescan == Rescan.late_timestamp else 0), "pubkeys": [self.address["pubkey"]] if self.data == Data.pub else [], "keys": [self.key] if self.data == Data.priv else [], "label": self.label, "watchonly": self.data != Data.priv }], {"rescan": self.rescan in (Rescan.yes, Rescan.late_timestamp)}) assert_equal(response, [{"success": True}]) def check(self, txid=None, amount=None, confirmations=None): """Verify that getbalance/listtransactions return expected values.""" balance = self.node.getbalance(self.label, 0, True) assert_equal(balance, self.expected_balance) txs = self.node.listtransactions(self.label, 10000, 0, True) assert_equal(len(txs), self.expected_txs) if txid is not None: tx, = [tx for tx in txs if tx["txid"] == txid] assert_equal(tx["account"], self.label) assert_equal(tx["address"], self.address["address"]) assert_equal(tx["amount"], amount) assert_equal(tx["category"], "receive") assert_equal(tx["label"], self.label) assert_equal(tx["txid"], txid) assert_equal(tx["confirmations"], confirmations) assert_equal("trusted" not in tx, True) # Verify the transaction is correctly marked watchonly depending on # whether the transaction pays to an imported public key or # imported private key. The test setup ensures that transaction # inputs will not be from watchonly keys (important because # involvesWatchonly will be true if either the transaction output # or inputs are watchonly). if self.data != Data.priv: assert_equal(tx["involvesWatchonly"], True) else: assert_equal("involvesWatchonly" not in tx, True) # List of Variants for each way a key or address could be imported. IMPORT_VARIANTS = [Variant(*variants) for variants in itertools.product(Call, Data, Rescan, (False, True))] # List of nodes to import keys to. Half the nodes will have pruning disabled, # half will have it enabled. Different nodes will be used for imports that are # expected to cause rescans, and imports that are not expected to cause # rescans, in order to prevent rescans during later imports picking up # transactions associated with earlier imports. This makes it easier to keep # track of expected balances and transactions. 
ImportNode = collections.namedtuple("ImportNode", "prune rescan") IMPORT_NODES = [ImportNode(*fields) for fields in itertools.product((False, True), repeat=2)] # Rescans start at the earliest block up to 2 hours before the key timestamp. TIMESTAMP_WINDOW = 2 * 60 * 60 class ImportRescanTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 + len(IMPORT_NODES) def setup_network(self): extra_args = [[] for _ in range(self.num_nodes)] for i, import_node in enumerate(IMPORT_NODES, 2): if import_node.prune: extra_args[i] += ["-prune=1"] - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, extra_args) + self.add_nodes(self.num_nodes, extra_args) + self.start_nodes() for i in range(1, self.num_nodes): connect_nodes(self.nodes[i], 0) def run_test(self): # Create one transaction on node 0 with a unique amount and label for # each possible type of wallet import RPC. for i, variant in enumerate(IMPORT_VARIANTS): variant.label = "label {} {}".format(i, variant) variant.address = self.nodes[1].validateaddress( self.nodes[1].getnewaddress(variant.label)) variant.key = self.nodes[1].dumpprivkey(variant.address["address"]) variant.initial_amount = 10 - (i + 1) / 4.0 variant.initial_txid = self.nodes[0].sendtoaddress( variant.address["address"], variant.initial_amount) # Generate a block containing the initial transactions, then another # block further in the future (past the rescan window). self.nodes[0].generate(1) assert_equal(self.nodes[0].getrawmempool(), []) timestamp = self.nodes[0].getblockheader( self.nodes[0].getbestblockhash())["time"] set_node_times(self.nodes, timestamp + TIMESTAMP_WINDOW + 1) self.nodes[0].generate(1) sync_blocks(self.nodes) # For each variation of wallet key import, invoke the import RPC and # check the results from getbalance and listtransactions. for variant in IMPORT_VARIANTS: variant.expect_disabled = variant.rescan == Rescan.yes and variant.prune and variant.call == Call.single expect_rescan = variant.rescan == Rescan.yes and not variant.expect_disabled variant.node = self.nodes[ 2 + IMPORT_NODES.index(ImportNode(variant.prune, expect_rescan))] variant.do_import(timestamp) if expect_rescan: variant.expected_balance = variant.initial_amount variant.expected_txs = 1 variant.check(variant.initial_txid, variant.initial_amount, 2) else: variant.expected_balance = 0 variant.expected_txs = 0 variant.check() # Create new transactions sending to each address. fee = self.nodes[0].getnetworkinfo()["relayfee"] for i, variant in enumerate(IMPORT_VARIANTS): variant.sent_amount = 10 - (2 * i + 1) / 8.0 variant.sent_txid = self.nodes[0].sendtoaddress( variant.address["address"], variant.sent_amount) # Generate a block containing the new transactions. self.nodes[0].generate(1) assert_equal(self.nodes[0].getrawmempool(), []) sync_blocks(self.nodes) # Check the latest results from getbalance and listtransactions. 
for variant in IMPORT_VARIANTS: if not variant.expect_disabled: variant.expected_balance += variant.sent_amount variant.expected_txs += 1 variant.check(variant.sent_txid, variant.sent_amount, 1) else: variant.check() def try_rpc(func, *args, **kwargs): try: return func(*args, **kwargs), None except JSONRPCException as e: return None, e.error if __name__ == "__main__": ImportRescanTest().main() diff --git a/test/functional/importmulti.py b/test/functional/importmulti.py index 76d73aba5..5bdb44c44 100755 --- a/test/functional/importmulti.py +++ b/test/functional/importmulti.py @@ -1,505 +1,503 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class ImportMultiTest (BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 self.setup_clean_chain = True def setup_network(self, split=False): self.setup_nodes() def run_test(self): self.log.info("Mining blocks...") self.nodes[0].generate(1) self.nodes[1].generate(1) timestamp = self.nodes[1].getblock( self.nodes[1].getbestblockhash())['mediantime'] # keyword definition PRIV_KEY = 'privkey' PUB_KEY = 'pubkey' ADDRESS_KEY = 'address' SCRIPT_KEY = 'script' node0_address1 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) node0_address2 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) node0_address3 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) # Check only one address assert_equal(node0_address1['ismine'], True) # Node 1 sync test assert_equal(self.nodes[1].getblockcount(), 1) # Address Test - before import address_info = self.nodes[1].validateaddress(node0_address1['address']) assert_equal(address_info['iswatchonly'], False) assert_equal(address_info['ismine'], False) # RPC importmulti ----------------------------------------------- # Bitcoin Address self.log.info("Should import an address") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": address['address'] }, "timestamp": "now", }]) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], True) assert_equal(address_assert['ismine'], False) assert_equal(address_assert['timestamp'], timestamp) watchonly_address = address['address'] watchonly_timestamp = timestamp self.log.info("Should not import an invalid address") result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": "not valid address", }, "timestamp": "now", }]) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -5) assert_equal(result[0]['error']['message'], 'Invalid address') # ScriptPubKey + internal self.log.info("Should import a scriptPubKey with internal flag") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": address['scriptPubKey'], "timestamp": "now", "internal": True }]) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], True) assert_equal(address_assert['ismine'], False) assert_equal(address_assert['timestamp'], timestamp) # ScriptPubKey + !internal self.log.info("Should not 
import a scriptPubKey without internal flag") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": address['scriptPubKey'], "timestamp": "now", }]) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -8) assert_equal(result[0]['error']['message'], 'Internal must be set for hex scriptPubKey') address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], False) assert_equal('timestamp' in address_assert, False) # Address + Public key + !Internal self.log.info("Should import an address with public key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": address['address'] }, "timestamp": "now", "pubkeys": [address['pubkey']] }]) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], True) assert_equal(address_assert['ismine'], False) assert_equal(address_assert['timestamp'], timestamp) # ScriptPubKey + Public key + internal self.log.info( "Should import a scriptPubKey with internal and with public key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) request = [{ "scriptPubKey": address['scriptPubKey'], "timestamp": "now", "pubkeys": [address['pubkey']], "internal": True }] result = self.nodes[1].importmulti(request) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], True) assert_equal(address_assert['ismine'], False) assert_equal(address_assert['timestamp'], timestamp) # ScriptPubKey + Public key + !internal self.log.info( "Should not import a scriptPubKey without internal and with public key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) request = [{ "scriptPubKey": address['scriptPubKey'], "timestamp": "now", "pubkeys": [address['pubkey']] }] result = self.nodes[1].importmulti(request) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -8) assert_equal(result[0]['error']['message'], 'Internal must be set for hex scriptPubKey') address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], False) assert_equal('timestamp' in address_assert, False) # Address + Private key + !watchonly self.log.info("Should import an address with private key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": address['address'] }, "timestamp": "now", "keys": [self.nodes[0].dumpprivkey(address['address'])] }]) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], True) assert_equal(address_assert['timestamp'], timestamp) self.log.info( "Should not import an address with private key if is already imported") result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": address['address'] }, "timestamp": "now", "keys": [self.nodes[0].dumpprivkey(address['address'])] }]) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -4) assert_equal(result[0]['error']['message'], 'The wallet already contains the 
private key for this address or script') # Address + Private key + watchonly self.log.info( "Should not import an address with private key and with watchonly") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": address['address'] }, "timestamp": "now", "keys": [self.nodes[0].dumpprivkey(address['address'])], "watchonly": True }]) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -8) assert_equal(result[0]['error']['message'], 'Incompatibility found between watchonly and keys') address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], False) assert_equal('timestamp' in address_assert, False) # ScriptPubKey + Private key + internal self.log.info( "Should import a scriptPubKey with internal and with private key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": address['scriptPubKey'], "timestamp": "now", "keys": [self.nodes[0].dumpprivkey(address['address'])], "internal": True }]) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], True) assert_equal(address_assert['timestamp'], timestamp) # ScriptPubKey + Private key + !internal self.log.info( "Should not import a scriptPubKey without internal and with private key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": address['scriptPubKey'], "timestamp": "now", "keys": [self.nodes[0].dumpprivkey(address['address'])] }]) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -8) assert_equal(result[0]['error']['message'], 'Internal must be set for hex scriptPubKey') address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], False) assert_equal('timestamp' in address_assert, False) # P2SH address sig_address_1 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) sig_address_2 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) sig_address_3 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) multi_sig_script = self.nodes[0].createmultisig( 2, [sig_address_1['address'], sig_address_2['address'], sig_address_3['pubkey']]) self.nodes[1].generate(100) transactionid = self.nodes[1].sendtoaddress( multi_sig_script['address'], 10.00) self.nodes[1].generate(1) timestamp = self.nodes[1].getblock( self.nodes[1].getbestblockhash())['mediantime'] transaction = self.nodes[1].gettransaction(transactionid) self.log.info("Should import a p2sh") result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": multi_sig_script['address'] }, "timestamp": "now", }]) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress( multi_sig_script['address']) assert_equal(address_assert['isscript'], True) assert_equal(address_assert['iswatchonly'], True) assert_equal(address_assert['timestamp'], timestamp) p2shunspent = self.nodes[1].listunspent( 0, 999999, [multi_sig_script['address']])[0] assert_equal(p2shunspent['spendable'], False) assert_equal(p2shunspent['solvable'], False) # P2SH + Redeem script sig_address_1 = self.nodes[0].validateaddress( 
self.nodes[0].getnewaddress()) sig_address_2 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) sig_address_3 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) multi_sig_script = self.nodes[0].createmultisig( 2, [sig_address_1['address'], sig_address_2['address'], sig_address_3['pubkey']]) self.nodes[1].generate(100) transactionid = self.nodes[1].sendtoaddress( multi_sig_script['address'], 10.00) self.nodes[1].generate(1) timestamp = self.nodes[1].getblock( self.nodes[1].getbestblockhash())['mediantime'] transaction = self.nodes[1].gettransaction(transactionid) self.log.info("Should import a p2sh with respective redeem script") result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": multi_sig_script['address'] }, "timestamp": "now", "redeemscript": multi_sig_script['redeemScript'] }]) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress( multi_sig_script['address']) assert_equal(address_assert['timestamp'], timestamp) p2shunspent = self.nodes[1].listunspent( 0, 999999, [multi_sig_script['address']])[0] assert_equal(p2shunspent['spendable'], False) assert_equal(p2shunspent['solvable'], True) # P2SH + Redeem script + Private Keys + !Watchonly sig_address_1 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) sig_address_2 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) sig_address_3 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) multi_sig_script = self.nodes[0].createmultisig( 2, [sig_address_1['address'], sig_address_2['address'], sig_address_3['pubkey']]) self.nodes[1].generate(100) transactionid = self.nodes[1].sendtoaddress( multi_sig_script['address'], 10.00) self.nodes[1].generate(1) timestamp = self.nodes[1].getblock( self.nodes[1].getbestblockhash())['mediantime'] transaction = self.nodes[1].gettransaction(transactionid) self.log.info( "Should import a p2sh with respective redeem script and private keys") result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": multi_sig_script['address'] }, "timestamp": "now", "redeemscript": multi_sig_script['redeemScript'], "keys": [self.nodes[0].dumpprivkey(sig_address_1['address']), self.nodes[0].dumpprivkey(sig_address_2['address'])] }]) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress( multi_sig_script['address']) assert_equal(address_assert['timestamp'], timestamp) p2shunspent = self.nodes[1].listunspent( 0, 999999, [multi_sig_script['address']])[0] assert_equal(p2shunspent['spendable'], False) assert_equal(p2shunspent['solvable'], True) # P2SH + Redeem script + Private Keys + Watchonly sig_address_1 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) sig_address_2 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) sig_address_3 = self.nodes[0].validateaddress( self.nodes[0].getnewaddress()) multi_sig_script = self.nodes[0].createmultisig( 2, [sig_address_1['address'], sig_address_2['address'], sig_address_3['pubkey']]) self.nodes[1].generate(100) transactionid = self.nodes[1].sendtoaddress( multi_sig_script['address'], 10.00) self.nodes[1].generate(1) timestamp = self.nodes[1].getblock( self.nodes[1].getbestblockhash())['mediantime'] transaction = self.nodes[1].gettransaction(transactionid) self.log.info( "Should import a p2sh with respective redeem script and private keys") result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": multi_sig_script['address'] }, "timestamp": "now", "redeemscript": multi_sig_script['redeemScript'], "keys": 
[self.nodes[0].dumpprivkey(sig_address_1['address']), self.nodes[0].dumpprivkey(sig_address_2['address'])], "watchonly": True }]) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -8) assert_equal(result[0]['error']['message'], 'Incompatibility found between watchonly and keys') # Address + Public key + !Internal + Wrong pubkey self.log.info("Should not import an address with a wrong public key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) address2 = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": address['address'] }, "timestamp": "now", "pubkeys": [address2['pubkey']] }]) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -5) assert_equal(result[0]['error']['message'], 'Consistency check failed') address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], False) assert_equal('timestamp' in address_assert, False) # ScriptPubKey + Public key + internal + Wrong pubkey self.log.info( "Should not import a scriptPubKey with internal and with a wrong public key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) address2 = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) request = [{ "scriptPubKey": address['scriptPubKey'], "timestamp": "now", "pubkeys": [address2['pubkey']], "internal": True }] result = self.nodes[1].importmulti(request) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -5) assert_equal(result[0]['error']['message'], 'Consistency check failed') address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], False) assert_equal('timestamp' in address_assert, False) # Address + Private key + !watchonly + Wrong private key self.log.info("Should not import an address with a wrong private key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) address2 = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": address['address'] }, "timestamp": "now", "keys": [self.nodes[0].dumpprivkey(address2['address'])] }]) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -5) assert_equal(result[0]['error']['message'], 'Consistency check failed') address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], False) assert_equal('timestamp' in address_assert, False) # ScriptPubKey + Private key + internal + Wrong private key self.log.info( "Should not import a scriptPubKey with internal and with a wrong private key") address = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) address2 = self.nodes[0].validateaddress(self.nodes[0].getnewaddress()) result = self.nodes[1].importmulti([{ "scriptPubKey": address['scriptPubKey'], "timestamp": "now", "keys": [self.nodes[0].dumpprivkey(address2['address'])], "internal": True }]) assert_equal(result[0]['success'], False) assert_equal(result[0]['error']['code'], -5) assert_equal(result[0]['error']['message'], 'Consistency check failed') address_assert = self.nodes[1].validateaddress(address['address']) assert_equal(address_assert['iswatchonly'], False) assert_equal(address_assert['ismine'], False) 
assert_equal('timestamp' in address_assert, False) # Importing existing watch only address with new timestamp should replace saved timestamp. assert_greater_than(timestamp, watchonly_timestamp) self.log.info("Should replace previously saved watch only timestamp.") result = self.nodes[1].importmulti([{ "scriptPubKey": { "address": watchonly_address, }, "timestamp": "now", }]) assert_equal(result[0]['success'], True) address_assert = self.nodes[1].validateaddress(watchonly_address) assert_equal(address_assert['iswatchonly'], True) assert_equal(address_assert['ismine'], False) assert_equal(address_assert['timestamp'], timestamp) watchonly_timestamp = timestamp # restart nodes to check for proper serialization/deserialization of watch only address self.stop_nodes() - self.nodes = self.start_nodes(2, self.options.tmpdir) + self.start_nodes() address_assert = self.nodes[1].validateaddress(watchonly_address) assert_equal(address_assert['iswatchonly'], True) assert_equal(address_assert['ismine'], False) assert_equal(address_assert['timestamp'], watchonly_timestamp) # Bad or missing timestamps self.log.info("Should throw on invalid or missing timestamp values") assert_raises_message( JSONRPCException, 'Missing required timestamp field for key', self.nodes[1].importmulti, [{ "scriptPubKey": address['scriptPubKey'], }]) assert_raises_message( JSONRPCException, 'Expected number or "now" timestamp value for key. got type string', self.nodes[1].importmulti, [{ "scriptPubKey": address['scriptPubKey'], "timestamp": "", }]) if __name__ == '__main__': ImportMultiTest().main() diff --git a/test/functional/importprunedfunds.py b/test/functional/importprunedfunds.py index f2705f07f..ca944cc8e 100755 --- a/test/functional/importprunedfunds.py +++ b/test/functional/importprunedfunds.py @@ -1,123 +1,121 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
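The importmulti cases above all share one call-and-assert shape: submit a single request, then check `success` and, on failure, the error code and message. As a hedged sketch only, a helper along these lines could condense them; `check_importmulti` and its parameters are illustrative names, not part of the test framework:

from test_framework.util import assert_equal

def check_importmulti(node, request, success, error_code=None, error_message=None):
    # Submit one importmulti request and verify the reported outcome.
    result = node.importmulti([request])
    assert_equal(result[0]['success'], success)
    if not success:
        # Failed imports carry a JSON-RPC style error object.
        assert_equal(result[0]['error']['code'], error_code)
        assert_equal(result[0]['error']['message'], error_message)

For example, the hex-scriptPubKey failure above would become check_importmulti(self.nodes[1], request[0], False, -8, 'Internal must be set for hex scriptPubKey').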
from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class ImportPrunedFundsTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 def run_test(self): self.log.info("Mining blocks...") self.nodes[0].generate(101) self.sync_all() # address address1 = self.nodes[0].getnewaddress() # pubkey address2 = self.nodes[0].getnewaddress() # Using pubkey address2_pubkey = self.nodes[0].validateaddress(address2)['pubkey'] # privkey address3 = self.nodes[0].getnewaddress() # Using privkey address3_privkey = self.nodes[0].dumpprivkey(address3) # Check only one address address_info = self.nodes[0].validateaddress(address1) assert_equal(address_info['ismine'], True) self.sync_all() # Node 1 sync test assert_equal(self.nodes[1].getblockcount(), 101) # Address Test - before import address_info = self.nodes[1].validateaddress(address1) assert_equal(address_info['iswatchonly'], False) assert_equal(address_info['ismine'], False) address_info = self.nodes[1].validateaddress(address2) assert_equal(address_info['iswatchonly'], False) assert_equal(address_info['ismine'], False) address_info = self.nodes[1].validateaddress(address3) assert_equal(address_info['iswatchonly'], False) assert_equal(address_info['ismine'], False) # Send funds to self txnid1 = self.nodes[0].sendtoaddress(address1, 0.1) self.nodes[0].generate(1) rawtxn1 = self.nodes[0].gettransaction(txnid1)['hex'] proof1 = self.nodes[0].gettxoutproof([txnid1]) txnid2 = self.nodes[0].sendtoaddress(address2, 0.05) self.nodes[0].generate(1) rawtxn2 = self.nodes[0].gettransaction(txnid2)['hex'] proof2 = self.nodes[0].gettxoutproof([txnid2]) txnid3 = self.nodes[0].sendtoaddress(address3, 0.025) self.nodes[0].generate(1) rawtxn3 = self.nodes[0].gettransaction(txnid3)['hex'] proof3 = self.nodes[0].gettxoutproof([txnid3]) self.sync_all() # Import with no affiliated address assert_raises_jsonrpc( -5, "No addresses", self.nodes[1].importprunedfunds, rawtxn1, proof1) balance1 = self.nodes[1].getbalance("", 0, True) assert_equal(balance1, Decimal(0)) # Import with affiliated address with no rescan self.nodes[1].importaddress(address2, "add2", False) result2 = self.nodes[1].importprunedfunds(rawtxn2, proof2) balance2 = self.nodes[1].getbalance("add2", 0, True) assert_equal(balance2, Decimal('0.05')) # Import with private key with no rescan self.nodes[1].importprivkey(address3_privkey, "add3", False) result3 = self.nodes[1].importprunedfunds(rawtxn3, proof3) balance3 = self.nodes[1].getbalance("add3", 0, False) assert_equal(balance3, Decimal('0.025')) balance3 = self.nodes[1].getbalance("*", 0, True) assert_equal(balance3, Decimal('0.075')) # Addresses Test - after import address_info = self.nodes[1].validateaddress(address1) assert_equal(address_info['iswatchonly'], False) assert_equal(address_info['ismine'], False) address_info = self.nodes[1].validateaddress(address2) assert_equal(address_info['iswatchonly'], True) assert_equal(address_info['ismine'], False) address_info = self.nodes[1].validateaddress(address3) assert_equal(address_info['iswatchonly'], False) assert_equal(address_info['ismine'], True) # Remove transactions assert_raises_jsonrpc( -8, "Transaction does not exist in wallet.", self.nodes[1].removeprunedfunds, txnid1) balance1 = self.nodes[1].getbalance("*", 0, True) assert_equal(balance1, Decimal('0.075')) self.nodes[1].removeprunedfunds(txnid2) balance2 = self.nodes[1].getbalance("*", 0, True) 
assert_equal(balance2, Decimal('0.025')) self.nodes[1].removeprunedfunds(txnid3) balance3 = self.nodes[1].getbalance("*", 0, True) assert_equal(balance3, Decimal('0.0')) if __name__ == '__main__': ImportPrunedFundsTest().main() diff --git a/test/functional/invalidateblock.py b/test/functional/invalidateblock.py index 0e7ee2a5a..9ff6f49e2 100755 --- a/test/functional/invalidateblock.py +++ b/test/functional/invalidateblock.py @@ -1,75 +1,73 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test InvalidateBlock code # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class InvalidateTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 3 def setup_network(self): self.setup_nodes() def run_test(self): self.log.info( "Make sure we repopulate setBlockIndexCandidates after InvalidateBlock:") self.log.info("Mine 4 blocks on Node 0") self.nodes[0].generate(4) assert(self.nodes[0].getblockcount() == 4) besthash = self.nodes[0].getbestblockhash() self.log.info("Mine competing 6 blocks on Node 1") self.nodes[1].generate(6) assert(self.nodes[1].getblockcount() == 6) self.log.info("Connect nodes to force a reorg") connect_nodes_bi(self.nodes, 0, 1) sync_blocks(self.nodes[0:2]) assert(self.nodes[0].getblockcount() == 6) badhash = self.nodes[1].getblockhash(2) self.log.info( "Invalidate block 2 on node 0 and verify we reorg to node 0's original chain") self.nodes[0].invalidateblock(badhash) newheight = self.nodes[0].getblockcount() newhash = self.nodes[0].getbestblockhash() if (newheight != 4 or newhash != besthash): raise AssertionError( "Wrong tip for node0, hash %s, height %d" % (newhash, newheight)) self.log.info("\nMake sure we won't reorg to a lower work chain:") connect_nodes_bi(self.nodes, 1, 2) self.log.info("Sync node 2 to node 1 so both have 6 blocks") sync_blocks(self.nodes[1:3]) assert(self.nodes[2].getblockcount() == 6) self.log.info("Invalidate block 5 on node 1 so its tip is now at 4") self.nodes[1].invalidateblock(self.nodes[1].getblockhash(5)) assert(self.nodes[1].getblockcount() == 4) self.log.info("Invalidate block 3 on node 2, so its tip is now 2") self.nodes[2].invalidateblock(self.nodes[2].getblockhash(3)) assert(self.nodes[2].getblockcount() == 2) self.log.info("..and then mine a block") self.nodes[2].generate(1) self.log.info("Verify all nodes are at the right height") time.sleep(5) assert_equal(self.nodes[2].getblockcount(), 3) assert_equal(self.nodes[0].getblockcount(), 4) node1height = self.nodes[1].getblockcount() if node1height < 4: raise AssertionError( "Node 1 reorged to a lower height: %d" % node1height) if __name__ == '__main__': InvalidateTest().main() diff --git a/test/functional/invalidblockrequest.py b/test/functional/invalidblockrequest.py index fb4c43743..e71cc63b3 100755 --- a/test/functional/invalidblockrequest.py +++ b/test/functional/invalidblockrequest.py @@ -1,124 +1,124 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
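The invalidateblock checks above hinge on the RPC walking the tip back to the parent of the invalidated block. A minimal round-trip sketch, assuming a single node on one linear chain; reconsiderblock is the inverse RPC, and the helper name is illustrative:

def invalidate_and_restore(node, height):
    # Invalidate the block at `height`, confirm the tip walked back to
    # its parent, then reconsider it and confirm the old tip returns.
    target = node.getblockhash(height)
    tip_before = node.getbestblockhash()
    node.invalidateblock(target)
    assert node.getblockcount() == height - 1
    node.reconsiderblock(target)  # undoes invalidateblock
    assert node.getbestblockhash() == tip_before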
from test_framework.test_framework import ComparisonTestFramework from test_framework.util import * from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import copy import time ''' In this test we connect to one node over p2p, and test block requests: 1) Valid blocks should be requested and become chain tip. 2) Invalid block with duplicated transaction should be re-requested. 3) Invalid block with bad coinbase value should be rejected and not re-requested. ''' # Use the ComparisonTestFramework with 1 node: only use --testbinary. class InvalidBlockRequestTest(ComparisonTestFramework): ''' Can either run this test as 1 node with expected answers, or two and compare them. Change the "outcome" variable from each TestInstance object to only do the comparison. ''' - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 + self.setup_clean_chain = True def run_test(self): test = TestManager(self, self.options.tmpdir) test.add_all_connections(self.nodes) self.tip = None self.block_time = None NetworkThread().start() # Start up network handling in another thread test.run() def get_tests(self): if self.tip is None: self.tip = int("0x" + self.nodes[0].getbestblockhash(), 0) self.block_time = int(time.time()) + 1 ''' Create a new block with an anyone-can-spend coinbase ''' height = 1 block = create_block( self.tip, create_coinbase(height), self.block_time) self.block_time += 1 block.solve() # Save the coinbase for later self.block1 = block self.tip = block.sha256 height += 1 yield TestInstance([[block, True]]) ''' Now we need that block to mature so we can spend the coinbase. ''' test = TestInstance(sync_every_block=False) for i in range(100): block = create_block( self.tip, create_coinbase(height), self.block_time) block.solve() self.tip = block.sha256 self.block_time += 1 test.blocks_and_transactions.append([block, True]) height += 1 yield test ''' Now we use merkle-root malleability to generate an invalid block with same blockheader. Manufacture a block with 3 transactions (coinbase, spend of prior coinbase, spend of that spend). Duplicate the 3rd transaction to leave merkle root and blockheader unchanged but invalidate the block. ''' block2 = create_block( self.tip, create_coinbase(height), self.block_time) self.block_time += 1 # b'0x51' is OP_TRUE tx1 = create_transaction(self.block1.vtx[0], 0, b'\x51', 50 * COIN) tx2 = create_transaction(tx1, 0, b'\x51', 50 * COIN) block2.vtx.extend([tx1, tx2]) block2.hashMerkleRoot = block2.calc_merkle_root() block2.rehash() block2.solve() orig_hash = block2.sha256 block2_orig = copy.deepcopy(block2) # Mutate block 2 block2.vtx.append(tx2) assert_equal(block2.hashMerkleRoot, block2.calc_merkle_root()) assert_equal(orig_hash, block2.rehash()) assert(block2_orig.vtx != block2.vtx) self.tip = block2.sha256 yield TestInstance([[block2, RejectResult(16, b'bad-txns-duplicate')], [block2_orig, True]]) height += 1 ''' Make sure that a totally screwed up block is not valid. ''' block3 = create_block( self.tip, create_coinbase(height), self.block_time) self.block_time += 1 block3.vtx[0].vout[0].nValue = 100 * COIN # Too high! 
block3.vtx[0].sha256 = None block3.vtx[0].calc_sha256() block3.hashMerkleRoot = block3.calc_merkle_root() block3.rehash() block3.solve() yield TestInstance([[block3, RejectResult(16, b'bad-cb-amount')]]) if __name__ == '__main__': InvalidBlockRequestTest().main() diff --git a/test/functional/invalidtxrequest.py b/test/functional/invalidtxrequest.py index 2df322ee6..ff5b01c7a 100755 --- a/test/functional/invalidtxrequest.py +++ b/test/functional/invalidtxrequest.py @@ -1,82 +1,79 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import ComparisonTestFramework from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import time ''' In this test we connect to one node over p2p, and test tx requests. ''' # Use the ComparisonTestFramework with 1 node: only use --testbinary. class InvalidTxRequestTest(ComparisonTestFramework): - ''' - Can either run this test as 1 node with expected answers, or two and - compare them. Change the "outcome" variable from each TestInstance object - to only do the comparison. - ''' - - def __init__(self): - super().__init__() + ''' Can either run this test as 1 node with expected answers, or two and compare them. + Change the "outcome" variable from each TestInstance object to only do the comparison. ''' + + def set_test_params(self): self.num_nodes = 1 + self.setup_clean_chain = True def run_test(self): test = TestManager(self, self.options.tmpdir) test.add_all_connections(self.nodes) self.tip = None self.block_time = None NetworkThread().start() # Start up network handling in another thread test.run() def get_tests(self): if self.tip is None: self.tip = int("0x" + self.nodes[0].getbestblockhash(), 0) self.block_time = int(time.time()) + 1 ''' Create a new block with an anyone-can-spend coinbase ''' height = 1 block = create_block( self.tip, create_coinbase(height), self.block_time) self.block_time += 1 block.solve() # Save the coinbase for later self.block1 = block self.tip = block.sha256 height += 1 yield TestInstance([[block, True]]) ''' Now we need that block to mature so we can spend the coinbase. ''' test = TestInstance(sync_every_block=False) for i in range(100): block = create_block( self.tip, create_coinbase(height), self.block_time) block.solve() self.tip = block.sha256 self.block_time += 1 test.blocks_and_transactions.append([block, True]) height += 1 yield test # b'\x64' is OP_NOTIF # Transaction will be rejected with code 16 (REJECT_INVALID) tx1 = create_transaction( self.block1.vtx[0], 0, b'\x64', 50 * COIN - 12000) yield TestInstance([[tx1, RejectResult(16, b'mandatory-script-verify-flag-failed')]]) # TODO: test further transactions... if __name__ == '__main__': InvalidTxRequestTest().main() diff --git a/test/functional/keypool-topup.py b/test/functional/keypool-topup.py index a26542fa3..5544cdadf 100755 --- a/test/functional/keypool-topup.py +++ b/test/functional/keypool-topup.py @@ -1,82 +1,81 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test HD Wallet keypool restore function. Two nodes. Node1 is under test. Node0 is providing transactions and generating blocks. - Start node1, shutdown and backup wallet. 
- Generate 110 keys (enough to drain the keypool). Store key 90 (in the initial keypool) and key 110 (beyond the initial keypool). Send funds to key 90 and key 110. - Stop node1, clear the datadir, move wallet file back into the datadir and restart node1. - connect node1 to node0. Verify that they sync and node1 receives its funds.""" import shutil from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( assert_equal, connect_nodes_bi, sync_blocks, ) class KeypoolRestoreTest(BitcoinTestFramework): - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 self.extra_args = [['-usehd=0'], ['-usehd=1', '-keypool=100', '-keypoolmin=20']] def run_test(self): self.tmpdir = self.options.tmpdir self.nodes[0].generate(101) self.log.info("Make backup of wallet") self.stop_node(1) shutil.copyfile(self.tmpdir + "/node1/regtest/wallet.dat", self.tmpdir + "/wallet.bak") - self.nodes[1] = self.start_node(1, self.tmpdir, self.extra_args[1]) + self.start_node(1, self.extra_args[1]) connect_nodes_bi(self.nodes, 0, 1) self.log.info("Generate keys for wallet") for _ in range(90): addr_oldpool = self.nodes[1].getnewaddress() for _ in range(20): addr_extpool = self.nodes[1].getnewaddress() self.log.info("Send funds to wallet") self.nodes[0].sendtoaddress(addr_oldpool, 10) self.nodes[0].generate(1) self.nodes[0].sendtoaddress(addr_extpool, 5) self.nodes[0].generate(1) sync_blocks(self.nodes) self.log.info("Restart node with wallet backup") self.stop_node(1) shutil.copyfile(self.tmpdir + "/wallet.bak", self.tmpdir + "/node1/regtest/wallet.dat") self.log.info("Verify keypool is restored and balance is correct") - self.nodes[1] = self.start_node(1, self.tmpdir, self.extra_args[1]) + self.start_node(1, self.extra_args[1]) connect_nodes_bi(self.nodes, 0, 1) self.sync_all() assert_equal(self.nodes[1].getbalance(), 15) assert_equal(self.nodes[1].listtransactions() [0]['category'], "receive") # Check that we have marked all keys up to the used keypool key as used assert_equal(self.nodes[1].validateaddress( self.nodes[1].getnewaddress())['hdkeypath'], "m/0'/0'/110'") if __name__ == '__main__': KeypoolRestoreTest().main() diff --git a/test/functional/keypool.py b/test/functional/keypool.py index 682a98e92..1fb58e6cb 100755 --- a/test/functional/keypool.py +++ b/test/functional/keypool.py @@ -1,96 +1,93 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
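In keypool-topup.py above, the two getnewaddress loops are what push key usage past the initial keypool into the freshly derived range. As a sketch under the same assumptions (-keypool=100; the helper name is illustrative), the pattern is simply:

def drain_keypool(node, count):
    # Request `count` external addresses, returning the last one.
    # With -keypool=100, requesting 110 keys crosses from the pool
    # present in the wallet backup into newly derived keys.
    addr = None
    for _ in range(count):
        addr = node.getnewaddress()
    return addr

addr_oldpool and addr_extpool above correspond to drain_keypool(node, 90) followed by drain_keypool(node, 20).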
# Exercise the wallet keypool, and interaction with wallet encryption/locking from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class KeyPoolTest(BitcoinTestFramework): + def set_test_params(self): + self.num_nodes = 1 def run_test(self): nodes = self.nodes addr_before_encrypting = nodes[0].getnewaddress() addr_before_encrypting_data = nodes[ 0].validateaddress(addr_before_encrypting) wallet_info_old = nodes[0].getwalletinfo() assert(addr_before_encrypting_data[ 'hdmasterkeyid'] == wallet_info_old['hdmasterkeyid']) # Encrypt wallet and wait to terminate nodes[0].node_encrypt_wallet('test') # Restart node 0 - nodes[0] = self.start_node(0, self.options.tmpdir) + self.start_node(0) # Keep creating keys addr = nodes[0].getnewaddress() addr_data = nodes[0].validateaddress(addr) wallet_info = nodes[0].getwalletinfo() assert(addr_before_encrypting_data[ 'hdmasterkeyid'] != wallet_info['hdmasterkeyid']) assert(addr_data['hdmasterkeyid'] == wallet_info['hdmasterkeyid']) assert_raises_jsonrpc( -12, "Error: Keypool ran out, please call keypoolrefill first", nodes[0].getnewaddress) # put six (plus 2) new keys in the keypool (100% external-, +100% internal-keys, 1 in min) nodes[0].walletpassphrase('test', 12000) nodes[0].keypoolrefill(6) nodes[0].walletlock() wi = nodes[0].getwalletinfo() assert_equal(wi['keypoolsize_hd_internal'], 6) assert_equal(wi['keypoolsize'], 6) # drain the internal keys nodes[0].getrawchangeaddress() nodes[0].getrawchangeaddress() nodes[0].getrawchangeaddress() nodes[0].getrawchangeaddress() nodes[0].getrawchangeaddress() nodes[0].getrawchangeaddress() addr = set() # the next one should fail assert_raises_jsonrpc(-12, "Keypool ran out", nodes[0].getrawchangeaddress) # drain the external keys addr.add(nodes[0].getnewaddress()) addr.add(nodes[0].getnewaddress()) addr.add(nodes[0].getnewaddress()) addr.add(nodes[0].getnewaddress()) addr.add(nodes[0].getnewaddress()) addr.add(nodes[0].getnewaddress()) assert(len(addr) == 6) # the next one should fail assert_raises_jsonrpc( -12, "Error: Keypool ran out, please call keypoolrefill first", nodes[0].getnewaddress) # refill keypool with three new addresses nodes[0].walletpassphrase('test', 1) nodes[0].keypoolrefill(3) # test walletpassphrase timeout time.sleep(1.1) assert_equal(nodes[0].getwalletinfo()["unlocked_until"], 0) # drain them by mining nodes[0].generate(1) nodes[0].generate(1) nodes[0].generate(1) assert_raises_jsonrpc(-12, "Keypool ran out", nodes[0].generate, 1) nodes[0].walletpassphrase('test', 100) nodes[0].keypoolrefill(100) wi = nodes[0].getwalletinfo() assert_equal(wi['keypoolsize_hd_internal'], 100) assert_equal(wi['keypoolsize'], 100) - def __init__(self): - super().__init__() - self.setup_clean_chain = False - self.num_nodes = 1 - if __name__ == '__main__': KeyPoolTest().main() diff --git a/test/functional/listsinceblock.py b/test/functional/listsinceblock.py index de92b0353..9184ff58b 100755 --- a/test/functional/listsinceblock.py +++ b/test/functional/listsinceblock.py @@ -1,81 +1,79 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
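# (Re keypool.py above: a hedged convenience sketch of the
# unlock/refill/lock pattern the test repeats; "node" stands for any
# wallet RPC proxy such as nodes[0], and the timeout default is
# illustrative only.)
def refill_keypool(node, passphrase, size, timeout=60):
    # briefly unlock the encrypted wallet, top up the keypool, re-lock
    node.walletpassphrase(passphrase, timeout)
    node.keypoolrefill(size)
    node.walletlock()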
from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal class ListSinceBlockTest (BitcoinTestFramework): - - def __init__(self): - super().__init__() - self.setup_clean_chain = True + def set_test_params(self): self.num_nodes = 4 + self.setup_clean_chain = True def run_test(self): ''' `listsinceblock` did not behave correctly when handed a block that was no longer in the main chain: ab0 / \ aa1 [tx0] bb1 | | aa2 bb2 | | aa3 bb3 | bb4 Consider a client that has only seen block `aa3` above. It asks the node to `listsinceblock aa3`. But at some point prior the main chain switched to the bb chain. Previously: listsinceblock would find height=4 for block aa3 and compare this to height=5 for the tip of the chain (bb4). It would then return results restricted to bb3-bb4. Now: listsinceblock finds the fork at ab0 and returns results in the range bb1-bb4. This test only checks that [tx0] is present. ''' self.nodes[2].generate(101) self.sync_all() assert_equal(self.nodes[0].getbalance(), 0) assert_equal(self.nodes[1].getbalance(), 0) assert_equal(self.nodes[2].getbalance(), 50) assert_equal(self.nodes[3].getbalance(), 0) # Split network into two self.split_network() # send to nodes[0] from nodes[2] senttx = self.nodes[2].sendtoaddress(self.nodes[0].getnewaddress(), 1) # generate on both sides lastblockhash = self.nodes[1].generate(6)[5] self.nodes[2].generate(7) self.log.info('lastblockhash=%s' % (lastblockhash)) self.sync_all([self.nodes[:2], self.nodes[2:]]) self.join_network() # listsinceblock(lastblockhash) should now include tx, as seen from # nodes[0] lsbres = self.nodes[0].listsinceblock(lastblockhash) found = False for tx in lsbres['transactions']: if tx['txid'] == senttx: found = True break assert_equal(found, True) if __name__ == '__main__': ListSinceBlockTest().main() diff --git a/test/functional/listtransactions.py b/test/functional/listtransactions.py index fac2c1113..84da1f895 100755 --- a/test/functional/listtransactions.py +++ b/test/functional/listtransactions.py @@ -1,110 +1,103 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
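# (Re listsinceblock.py above: the linear scan for senttx can be
# expressed as a one-line helper with any(); a hedged refactoring
# sketch, not part of the test itself.)
def txid_in_listsinceblock(lsbres, txid):
    # True if any transaction returned by listsinceblock matches txid
    return any(tx['txid'] == txid for tx in lsbres['transactions'])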
# Exercise the listtransactions API from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.mininode import CTransaction, COIN from io import BytesIO def txFromHex(hexstring): tx = CTransaction() f = BytesIO(hex_str_to_bytes(hexstring)) tx.deserialize(f) return tx class ListTransactionsTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() - self.num_nodes = 4 - self.setup_clean_chain = False - - def setup_nodes(self): - # This test requires mocktime + def set_test_params(self): + self.num_nodes = 2 self.enable_mocktime() - self.nodes = self.start_nodes(self.num_nodes, self.options.tmpdir) def run_test(self): # Simple send, 0 to 1: txid = self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 0.1) self.sync_all() assert_array_result(self.nodes[0].listtransactions(), {"txid": txid}, {"category": "send", "account": "", "amount": Decimal("-0.1"), "confirmations": 0}) assert_array_result(self.nodes[1].listtransactions(), {"txid": txid}, {"category": "receive", "account": "", "amount": Decimal("0.1"), "confirmations": 0}) # mine a block, confirmations should change: self.nodes[0].generate(1) self.sync_all() assert_array_result(self.nodes[0].listtransactions(), {"txid": txid}, {"category": "send", "account": "", "amount": Decimal("-0.1"), "confirmations": 1}) assert_array_result(self.nodes[1].listtransactions(), {"txid": txid}, {"category": "receive", "account": "", "amount": Decimal("0.1"), "confirmations": 1}) # send-to-self: txid = self.nodes[0].sendtoaddress(self.nodes[0].getnewaddress(), 0.2) assert_array_result(self.nodes[0].listtransactions(), {"txid": txid, "category": "send"}, {"amount": Decimal("-0.2")}) assert_array_result(self.nodes[0].listtransactions(), {"txid": txid, "category": "receive"}, {"amount": Decimal("0.2")}) # sendmany from node1: twice to self, twice to node2: send_to = {self.nodes[0].getnewaddress(): 0.11, self.nodes[1].getnewaddress(): 0.22, self.nodes[0].getaccountaddress("from1"): 0.33, self.nodes[1].getaccountaddress("toself"): 0.44} txid = self.nodes[1].sendmany("", send_to) self.sync_all() assert_array_result(self.nodes[1].listtransactions(), {"category": "send", "amount": Decimal("-0.11")}, {"txid": txid}) assert_array_result(self.nodes[0].listtransactions(), {"category": "receive", "amount": Decimal("0.11")}, {"txid": txid}) assert_array_result(self.nodes[1].listtransactions(), {"category": "send", "amount": Decimal("-0.22")}, {"txid": txid}) assert_array_result(self.nodes[1].listtransactions(), {"category": "receive", "amount": Decimal("0.22")}, {"txid": txid}) assert_array_result(self.nodes[1].listtransactions(), {"category": "send", "amount": Decimal("-0.33")}, {"txid": txid}) assert_array_result(self.nodes[0].listtransactions(), {"category": "receive", "amount": Decimal("0.33")}, {"txid": txid, "account": "from1"}) assert_array_result(self.nodes[1].listtransactions(), {"category": "send", "amount": Decimal("-0.44")}, {"txid": txid, "account": ""}) assert_array_result(self.nodes[1].listtransactions(), {"category": "receive", "amount": Decimal("0.44")}, {"txid": txid, "account": "toself"}) multisig = self.nodes[1].createmultisig( 1, [self.nodes[1].getnewaddress()]) self.nodes[0].importaddress( multisig["redeemScript"], "watchonly", False, True) txid = self.nodes[1].sendtoaddress(multisig["address"], 0.1) self.nodes[1].generate(1) self.sync_all() assert( len(self.nodes[0].listtransactions("watchonly", 100, 0, False)) == 0) assert_array_result( 
self.nodes[0].listtransactions("watchonly", 100, 0, True), {"category": "receive", "amount": Decimal("0.1")}, {"txid": txid, "account": "watchonly"}) if __name__ == '__main__': ListTransactionsTest().main() diff --git a/test/functional/maxuploadtarget.py b/test/functional/maxuploadtarget.py index 953cc709d..3dc372fcf 100755 --- a/test/functional/maxuploadtarget.py +++ b/test/functional/maxuploadtarget.py @@ -1,191 +1,189 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. ''' Test behavior of -maxuploadtarget. * Verify that getdata requests for old blocks (>1week) are dropped if uploadtarget has been reached. * Verify that getdata requests for recent blocks are respected even if uploadtarget has been reached. * Verify that the upload counters are reset after 24 hours. ''' from collections import defaultdict import time from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.cdefs import LEGACY_MAX_BLOCK_SIZE class TestNode(NodeConnCB): def __init__(self): super().__init__() self.block_receive_map = defaultdict(int) def on_inv(self, conn, message): pass def on_block(self, conn, message): message.block.calc_sha256() self.block_receive_map[message.block.sha256] += 1 class MaxUploadTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 # Start a node with maxuploadtarget of 200 MB (/24h) self.extra_args = [["-maxuploadtarget=200"]] # Cache for utxos, as the listunspent may take a long time later in the # test self.utxo_cache = [] def run_test(self): # Before we connect anything, we first set the time on the node # to be in the past, otherwise things break because the CNode # time counters can't be reset backward after initialization old_time = int(time.time() - 2 * 60 * 60 * 24 * 7) self.nodes[0].setmocktime(old_time) # Generate some old blocks self.nodes[0].generate(130) # test_nodes[0] will only request old blocks # test_nodes[1] will only request new blocks # test_nodes[2] will test resetting the counters test_nodes = [] connections = [] for i in range(3): test_nodes.append(TestNode()) connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], test_nodes[i])) test_nodes[i].add_connection(connections[i]) # Start up network handling in another thread NetworkThread().start() [x.wait_for_verack() for x in test_nodes] # Test logic begins here # Now mine a big block mine_large_block(self.nodes[0], self.utxo_cache) # Store the hash; we'll request this later big_old_block = self.nodes[0].getbestblockhash() old_block_size = self.nodes[0].getblock(big_old_block, True)['size'] big_old_block = int(big_old_block, 16) # Advance to two days ago self.nodes[0].setmocktime(int(time.time()) - 2 * 60 * 60 * 24) # Mine one more block, so that the prior block looks old mine_large_block(self.nodes[0], self.utxo_cache) # We'll be requesting this new block too big_new_block = self.nodes[0].getbestblockhash() big_new_block = int(big_new_block, 16) # test_nodes[0] will test what happens if we just keep requesting the # same big old block too many times (expect: disconnect) getdata_request = msg_getdata() getdata_request.inv.append(CInv(2, big_old_block)) max_bytes_per_day = 200 * 1024 * 1024 daily_buffer = 144 * LEGACY_MAX_BLOCK_SIZE
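# (Hedged arithmetic note, assuming LEGACY_MAX_BLOCK_SIZE is the legacy
# one-megabyte limit from cdefs: max_bytes_per_day = 200 * 1024 * 1024
# = 209,715,200 bytes and daily_buffer = 144,000,000 bytes, so the
# max_bytes_available computed on the next line is roughly 65.7 MB --
# hence the "~70 tries" with ~1 MB blocks mentioned below.)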
max_bytes_available = max_bytes_per_day - daily_buffer success_count = max_bytes_available // old_block_size # 144MB will be reserved for relaying new blocks, so expect this to # succeed for ~70 tries. for i in range(success_count): test_nodes[0].send_message(getdata_request) test_nodes[0].sync_with_ping() assert_equal(test_nodes[0].block_receive_map[big_old_block], i + 1) assert_equal(len(self.nodes[0].getpeerinfo()), 3) # At most a couple more tries should succeed (depending on how long # the test has been running so far). for i in range(3): test_nodes[0].send_message(getdata_request) test_nodes[0].wait_for_disconnect() assert_equal(len(self.nodes[0].getpeerinfo()), 2) self.log.info( "Peer 0 disconnected after downloading old block too many times") # Requesting the current block on test_nodes[1] should succeed indefinitely, # even when over the max upload target. # We'll try 200 times getdata_request.inv = [CInv(2, big_new_block)] for i in range(200): test_nodes[1].send_message(getdata_request) test_nodes[1].sync_with_ping() assert_equal(test_nodes[1].block_receive_map[big_new_block], i + 1) self.log.info("Peer 1 able to repeatedly download new block") # But if test_nodes[1] tries for an old block, it gets disconnected # too. getdata_request.inv = [CInv(2, big_old_block)] test_nodes[1].send_message(getdata_request) test_nodes[1].wait_for_disconnect() assert_equal(len(self.nodes[0].getpeerinfo()), 1) self.log.info("Peer 1 disconnected after trying to download old block") self.log.info("Advancing system time on node to clear counters...") # If we advance the time by 24 hours, then the counters should reset, # and test_nodes[2] should be able to retrieve the old block. self.nodes[0].setmocktime(int(time.time())) test_nodes[2].sync_with_ping() test_nodes[2].send_message(getdata_request) test_nodes[2].sync_with_ping() assert_equal(test_nodes[2].block_receive_map[big_old_block], 1) self.log.info("Peer 2 able to download old block") [c.disconnect_node() for c in connections] # stop and start node 0 with 1MB maxuploadtarget, whitelist 127.0.0.1 self.log.info("Restarting nodes with -whitelist=127.0.0.1") self.stop_node(0) - self.nodes[0] = self.start_node(0, self.options.tmpdir, [ - "-whitelist=127.0.0.1", "-maxuploadtarget=1"]) + self.start_node(0, ["-whitelist=127.0.0.1", + "-maxuploadtarget=1", "-blockmaxsize=999000"]) # recreate/reconnect a test node test_nodes = [TestNode()] connections = [NodeConn('127.0.0.1', p2p_port( 0), self.nodes[0], test_nodes[0])] test_nodes[0].add_connection(connections[0]) NetworkThread().start() # Start up network handling in another thread test_nodes[0].wait_for_verack() # retrieve 20 blocks which should be enough to break the 1MB limit getdata_request.inv = [CInv(2, big_new_block)] for i in range(20): test_nodes[0].send_message(getdata_request) test_nodes[0].sync_with_ping() assert_equal(test_nodes[0].block_receive_map[big_new_block], i + 1) getdata_request.inv = [CInv(2, big_old_block)] test_nodes[0].send_and_ping(getdata_request) # node is still connected because of the whitelist assert_equal(len(self.nodes[0].getpeerinfo()), 1) self.log.info( "Peer still connected after trying to download old block (whitelisted)") if __name__ == '__main__': MaxUploadTest().main() diff --git a/test/functional/mempool-accept-txn.py b/test/functional/mempool-accept-txn.py index f39c0bb7b..7ceda3791 100755 --- a/test/functional/mempool-accept-txn.py +++ b/test/functional/mempool-accept-txn.py @@ -1,270 +1,269 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin developers # 
Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ This test checks acceptance of transactions by the mempool It is derived from the much more complex p2p-fullblocktest. """ from test_framework.test_framework import ComparisonTestFramework from test_framework.util import * from test_framework.comptool import TestManager, TestInstance from test_framework.blocktools import * import time from test_framework.key import CECKey from test_framework.script import * import struct from test_framework.cdefs import MAX_STANDARD_TX_SIGOPS # Error for too many sigops in one TX TXNS_TOO_MANY_SIGOPS_ERROR = b'bad-txns-too-many-sigops' RPC_TXNS_TOO_MANY_SIGOPS_ERROR = "64: " + \ TXNS_TOO_MANY_SIGOPS_ERROR.decode("utf-8") class PreviousSpendableOutput(): def __init__(self, tx=CTransaction(), n=-1): self.tx = tx self.n = n # the output we're spending class FullBlockTest(ComparisonTestFramework): # Can either run this test as 1 node with expected answers, or two and compare them. # Change the "outcome" variable from each TestInstance object to only do # the comparison. - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 + self.setup_clean_chain = True self.block_heights = {} self.coinbase_key = CECKey() self.coinbase_key.set_secretbytes(b"horsebattery") self.coinbase_pubkey = self.coinbase_key.get_pubkey() self.tip = None self.blocks = {} def setup_network(self): self.extra_args = [['-norelaypriority']] - self.nodes = self.start_nodes(self.num_nodes, self.options.tmpdir, - self.extra_args, - binary=[self.options.testbinary]) + self.add_nodes(self.num_nodes, self.extra_args) + self.start_nodes() def add_options(self, parser): super().add_options(parser) parser.add_option( "--runbarelyexpensive", dest="runbarelyexpensive", default=True) def run_test(self): self.test = TestManager(self, self.options.tmpdir) self.test.add_all_connections(self.nodes) # Start up network handling in another thread NetworkThread().start() self.test.run() def add_transactions_to_block(self, block, tx_list): [tx.rehash() for tx in tx_list] block.vtx.extend(tx_list) # this is a little handier to use than the version in blocktools.py def create_tx(self, spend_tx, n, value, script=CScript([OP_TRUE])): tx = create_transaction(spend_tx, n, b"", value, script) return tx # sign a transaction, using the key we know about # this signs input 0 in tx, which is assumed to be spending output n in # spend_tx def sign_tx(self, tx, spend_tx, n): scriptPubKey = bytearray(spend_tx.vout[n].scriptPubKey) if (scriptPubKey[0] == OP_TRUE): # an anyone-can-spend tx.vin[0].scriptSig = CScript() return sighash = SignatureHashForkId( spend_tx.vout[n].scriptPubKey, tx, 0, SIGHASH_ALL | SIGHASH_FORKID, spend_tx.vout[n].nValue) tx.vin[0].scriptSig = CScript( [self.coinbase_key.sign(sighash) + bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID]))]) def create_and_sign_transaction(self, spend_tx, n, value, script=CScript([OP_TRUE])): tx = self.create_tx(spend_tx, n, value, script) self.sign_tx(tx, spend_tx, n) tx.rehash() return tx def next_block(self, number, spend=None, additional_coinbase_value=0, script=CScript([OP_TRUE]), solve=True): if self.tip == None: base_block_hash = self.genesis_hash block_time = int(time.time()) + 1 else: base_block_hash = self.tip.sha256 block_time = self.tip.nTime + 1 # First create the coinbase height = self.block_heights[base_block_hash] + 1 coinbase = create_coinbase(height, self.coinbase_pubkey) 
coinbase.vout[0].nValue += additional_coinbase_value coinbase.rehash() if spend == None: block = create_block(base_block_hash, coinbase, block_time) else: # all but one satoshi to fees coinbase.vout[0].nValue += spend.tx.vout[ spend.n].nValue - 1 coinbase.rehash() block = create_block(base_block_hash, coinbase, block_time) # spend 1 satoshi tx = create_transaction(spend.tx, spend.n, b"", 1, script) self.sign_tx(tx, spend.tx, spend.n) self.add_transactions_to_block(block, [tx]) block.hashMerkleRoot = block.calc_merkle_root() if solve: block.solve() self.tip = block self.block_heights[block.sha256] = height assert number not in self.blocks self.blocks[number] = block return block def get_tests(self): self.genesis_hash = int(self.nodes[0].getbestblockhash(), 16) self.block_heights[self.genesis_hash] = 0 spendable_outputs = [] # save the current tip so it can be spent by a later block def save_spendable_output(): spendable_outputs.append(self.tip) # get an output that we previously marked as spendable def get_spendable_output(): return PreviousSpendableOutput(spendable_outputs.pop(0).vtx[0], 0) # returns a test case that asserts that the current tip was accepted def accepted(): return TestInstance([[self.tip, True]]) # returns a test case that asserts that the current tip was rejected def rejected(reject=None): if reject is None: return TestInstance([[self.tip, False]]) else: return TestInstance([[self.tip, reject]]) # move the tip back to a previous block def tip(number): self.tip = self.blocks[number] # adds transactions to the block and updates state def update_block(block_number, new_transactions): block = self.blocks[block_number] self.add_transactions_to_block(block, new_transactions) old_sha256 = block.sha256 block.hashMerkleRoot = block.calc_merkle_root() block.solve() # Update the internal state just like in next_block self.tip = block if block.sha256 != old_sha256: self.block_heights[ block.sha256] = self.block_heights[old_sha256] del self.block_heights[old_sha256] self.blocks[block_number] = block return block # shorthand for functions block = self.next_block create_tx = self.create_tx # shorthand for variables node = self.nodes[0] # Create a new block block(0) save_spendable_output() yield accepted() # Now we need that block to mature so we can spend the coinbase. 
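# (Hedged note: together with block(0) above, the 99 blocks yielded
# below put the first coinbase at the 100-block maturity depth by the
# time a later block tries to spend out[0].)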
test = TestInstance(sync_every_block=False) for i in range(99): block(5000 + i) test.blocks_and_transactions.append([self.tip, True]) save_spendable_output() yield test # Collect spendable outputs now to avoid cluttering the code later on out = [] for i in range(33): out.append(get_spendable_output()) # P2SH # Build the redeem script, hash it, use hash to create the p2sh script redeem_script = CScript([self.coinbase_pubkey] + [ OP_2DUP, OP_CHECKSIGVERIFY] * 5 + [OP_CHECKSIG]) redeem_script_hash = hash160(redeem_script) p2sh_script = CScript([OP_HASH160, redeem_script_hash, OP_EQUAL]) # Creates a new transaction using a p2sh transaction as input def spend_p2sh_tx(p2sh_tx_to_spend, output_script=CScript([OP_TRUE])): # Create the transaction spent_p2sh_tx = CTransaction() spent_p2sh_tx.vin.append( CTxIn(COutPoint(p2sh_tx_to_spend.sha256, 0), b'')) spent_p2sh_tx.vout.append(CTxOut(1, output_script)) # Sign the transaction using the redeem script sighash = SignatureHashForkId( redeem_script, spent_p2sh_tx, 0, SIGHASH_ALL | SIGHASH_FORKID, p2sh_tx_to_spend.vout[0].nValue) sig = self.coinbase_key.sign( sighash) + bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID])) spent_p2sh_tx.vin[0].scriptSig = CScript([sig, redeem_script]) spent_p2sh_tx.rehash() return spent_p2sh_tx # P2SH tests # Create a p2sh transaction p2sh_tx = self.create_and_sign_transaction( out[0].tx, out[0].n, 1, p2sh_script) # Add the transaction to the block block(1) update_block(1, [p2sh_tx]) yield accepted() # Sigops p2sh limit for the mempool test p2sh_sigops_limit_mempool = MAX_STANDARD_TX_SIGOPS - \ redeem_script.GetSigOpCount(True) # Too many sigops in one p2sh script too_many_p2sh_sigops_mempool = CScript( [OP_CHECKSIG] * (p2sh_sigops_limit_mempool + 1)) # A transaction with this output script can't get into the mempool try: node.sendrawtransaction( ToHex(spend_p2sh_tx(p2sh_tx, too_many_p2sh_sigops_mempool))) except JSONRPCException as exp: assert_equal(exp.error["message"], RPC_TXNS_TOO_MANY_SIGOPS_ERROR) else: assert(False) # The transaction is rejected, so the mempool should still be empty assert_equal(set(node.getrawmempool()), set()) # Max sigops in one p2sh txn max_p2sh_sigops_mempool = CScript( [OP_CHECKSIG] * (p2sh_sigops_limit_mempool)) # A transaction with this output script can get into the mempool max_p2sh_sigops_txn = spend_p2sh_tx(p2sh_tx, max_p2sh_sigops_mempool) max_p2sh_sigops_txn_id = node.sendrawtransaction( ToHex(max_p2sh_sigops_txn)) assert_equal(set(node.getrawmempool()), {max_p2sh_sigops_txn_id}) # Mine the transaction block(2, spend=out[1]) update_block(2, [max_p2sh_sigops_txn]) yield accepted() # The transaction has been mined, it's not in the mempool anymore assert_equal(set(node.getrawmempool()), set()) if __name__ == '__main__': FullBlockTest().main() diff --git a/test/functional/mempool_limit.py b/test/functional/mempool_limit.py index b8afde551..aaa2deabc 100755 --- a/test/functional/mempool_limit.py +++ b/test/functional/mempool_limit.py @@ -1,56 +1,54 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
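# (Re the P2SH section of mempool-accept-txn.py above: a minimal recap
# sketch using only helpers the test already pulls in via
# "from test_framework.script import *".)
def p2sh_script_for(redeem_script):
    # a standard P2SH scriptPubKey commits to hash160(redeem_script):
    # OP_HASH160 <20-byte script hash> OP_EQUAL
    return CScript([OP_HASH160, hash160(redeem_script), OP_EQUAL])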
# Test mempool limiting and eviction together with the wallet from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class MempoolLimitTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 self.extra_args = [["-maxmempool=5", "-spendzeroconfchange=0"]] def run_test(self): txouts = gen_return_txouts() relayfee = self.nodes[0].getnetworkinfo()['relayfee'] txids = [] utxos = create_confirmed_utxos(relayfee, self.nodes[0], 91) # create a mempool tx that will be evicted us0 = utxos.pop() inputs = [{"txid": us0["txid"], "vout": us0["vout"]}] outputs = {self.nodes[0].getnewaddress(): 0.0001} tx = self.nodes[0].createrawtransaction(inputs, outputs) # specifically fund this tx with low fee self.nodes[0].settxfee(relayfee) txF = self.nodes[0].fundrawtransaction(tx) # return to automatic fee selection self.nodes[0].settxfee(0) txFS = self.nodes[0].signrawtransaction( txF['hex'], None, None, "ALL|FORKID") txid = self.nodes[0].sendrawtransaction(txFS['hex']) relayfee = self.nodes[0].getnetworkinfo()['relayfee'] base_fee = relayfee * 100 for i in range(3): txids.append([]) txids[i] = create_lots_of_big_transactions( self.nodes[0], txouts, utxos[30 * i:30 * i + 30], 30, (i + 1) * base_fee) # by now, the tx should be evicted, check confirmation state assert(txid not in self.nodes[0].getrawmempool()) txdata = self.nodes[0].gettransaction(txid) assert(txdata['confirmations'] == 0) # confirmation should still be 0 if __name__ == '__main__': MempoolLimitTest().main() diff --git a/test/functional/mempool_packages.py b/test/functional/mempool_packages.py index 53826b9be..82606eb54 100755 --- a/test/functional/mempool_packages.py +++ b/test/functional/mempool_packages.py @@ -1,266 +1,264 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test descendant package tracking code.""" from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.mininode import COIN MAX_ANCESTORS = 25 MAX_DESCENDANTS = 25 class MempoolPackagesTest(BitcoinTestFramework): - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 - self.setup_clean_chain = False self.extra_args = [["-maxorphantx=1000"], ["-maxorphantx=1000", "-limitancestorcount=5"]] # Build a transaction that spends parent_txid:vout # Return amount sent def chain_transaction(self, node, parent_txid, vout, value, fee, num_outputs): send_value = satoshi_round((value - fee) / num_outputs) inputs = [{'txid': parent_txid, 'vout': vout}] outputs = {} for i in range(num_outputs): outputs[node.getnewaddress()] = send_value rawtx = node.createrawtransaction(inputs, outputs) signedtx = node.signrawtransaction(rawtx, None, None, "ALL|FORKID") txid = node.sendrawtransaction(signedtx['hex']) fulltx = node.getrawtransaction(txid, 1) # make sure we didn't generate a change output assert(len(fulltx['vout']) == num_outputs) return (txid, send_value) def run_test(self): ''' Mine some blocks and have them mature.
''' self.nodes[0].generate(101) utxo = self.nodes[0].listunspent(10) txid = utxo[0]['txid'] vout = utxo[0]['vout'] value = utxo[0]['amount'] fee = Decimal("0.0001") # MAX_ANCESTORS transactions off a confirmed tx should be fine chain = [] for i in range(MAX_ANCESTORS): (txid, sent_value) = self.chain_transaction( self.nodes[0], txid, 0, value, fee, 1) value = sent_value chain.append(txid) # Check mempool has MAX_ANCESTORS transactions in it, and descendant # count and fees should look correct mempool = self.nodes[0].getrawmempool(True) assert_equal(len(mempool), MAX_ANCESTORS) descendant_count = 1 descendant_fees = 0 descendant_size = 0 descendants = [] ancestors = list(chain) for x in reversed(chain): # Check that getmempoolentry is consistent with getrawmempool entry = self.nodes[0].getmempoolentry(x) assert_equal(entry, mempool[x]) # Check that the descendant calculations are correct assert_equal(mempool[x]['descendantcount'], descendant_count) descendant_fees += mempool[x]['fee'] assert_equal(mempool[x]['modifiedfee'], mempool[x]['fee']) assert_equal(mempool[x]['descendantfees'], descendant_fees * COIN) descendant_size += mempool[x]['size'] assert_equal(mempool[x]['descendantsize'], descendant_size) descendant_count += 1 # Check that getmempooldescendants is correct assert_equal(sorted(descendants), sorted( self.nodes[0].getmempooldescendants(x))) descendants.append(x) # Check that getmempoolancestors is correct ancestors.remove(x) assert_equal(sorted(ancestors), sorted( self.nodes[0].getmempoolancestors(x))) # Check that getmempoolancestors/getmempooldescendants correctly handle verbose=true v_ancestors = self.nodes[0].getmempoolancestors(chain[-1], True) assert_equal(len(v_ancestors), len(chain) - 1) for x in v_ancestors.keys(): assert_equal(mempool[x], v_ancestors[x]) assert(chain[-1] not in v_ancestors.keys()) v_descendants = self.nodes[0].getmempooldescendants(chain[0], True) assert_equal(len(v_descendants), len(chain) - 1) for x in v_descendants.keys(): assert_equal(mempool[x], v_descendants[x]) assert(chain[0] not in v_descendants.keys()) # Check that ancestor modified fees includes fee deltas from # prioritisetransaction self.nodes[0].prioritisetransaction(chain[0], 0, 1000) mempool = self.nodes[0].getrawmempool(True) ancestor_fees = 0 for x in chain: ancestor_fees += mempool[x]['fee'] assert_equal(mempool[x]['ancestorfees'], ancestor_fees * COIN + 1000) # Undo the prioritisetransaction for later tests self.nodes[0].prioritisetransaction(chain[0], 0, -1000) # Check that descendant modified fees includes fee deltas from # prioritisetransaction self.nodes[0].prioritisetransaction(chain[-1], 0, 1000) mempool = self.nodes[0].getrawmempool(True) descendant_fees = 0 for x in reversed(chain): descendant_fees += mempool[x]['fee'] assert_equal(mempool[x]['descendantfees'], descendant_fees * COIN + 1000) # Adding one more transaction on to the chain should fail. assert_raises_jsonrpc(-26, "too-long-mempool-chain", self.chain_transaction, self.nodes[0], txid, vout, value, fee, 1) # Check that prioritising a tx before it's added to the mempool works # First clear the mempool by mining a block. self.nodes[0].generate(1) sync_blocks(self.nodes) assert_equal(len(self.nodes[0].getrawmempool()), 0) # Prioritise a transaction that has been mined, then add it back to the # mempool by using invalidateblock. 
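# (Hedged note on the call below: prioritisetransaction takes the txid,
# a priority delta and a fee delta denominated in satoshis, so the
# +2000 here is what the later modifiedfee check converts to
# satoshi_round(0.00002) BTC.)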
self.nodes[0].prioritisetransaction(chain[-1], 0, 2000) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Keep node1's tip synced with node0 self.nodes[1].invalidateblock(self.nodes[1].getbestblockhash()) # Now check that the transaction is in the mempool, with the right modified fee mempool = self.nodes[0].getrawmempool(True) descendant_fees = 0 for x in reversed(chain): descendant_fees += mempool[x]['fee'] if (x == chain[-1]): assert_equal(mempool[x]['modifiedfee'], mempool[x]['fee'] + satoshi_round(0.00002)) assert_equal(mempool[x]['descendantfees'], descendant_fees * COIN + 2000) # TODO: check that node1's mempool is as expected # TODO: test ancestor size limits # Now test descendant chain limits txid = utxo[1]['txid'] value = utxo[1]['amount'] vout = utxo[1]['vout'] transaction_package = [] # First create one parent tx with 10 children (txid, sent_value) = self.chain_transaction( self.nodes[0], txid, vout, value, fee, 10) parent_transaction = txid for i in range(10): transaction_package.append( {'txid': txid, 'vout': i, 'amount': sent_value}) # Sign and send up to MAX_DESCENDANT transactions chained off the parent tx for i in range(MAX_DESCENDANTS - 1): utxo = transaction_package.pop(0) (txid, sent_value) = self.chain_transaction( self.nodes[0], utxo['txid'], utxo['vout'], utxo['amount'], fee, 10) for j in range(10): transaction_package.append( {'txid': txid, 'vout': j, 'amount': sent_value}) mempool = self.nodes[0].getrawmempool(True) assert_equal(mempool[parent_transaction] ['descendantcount'], MAX_DESCENDANTS) # Sending one more chained transaction will fail utxo = transaction_package.pop(0) assert_raises_jsonrpc(-26, "too-long-mempool-chain", self.chain_transaction, self.nodes[0], utxo['txid'], utxo['vout'], utxo['amount'], fee, 10) # TODO: check that node1's mempool is as expected # TODO: test descendant size limits # Test reorg handling # First, the basics: self.nodes[0].generate(1) sync_blocks(self.nodes) self.nodes[1].invalidateblock(self.nodes[0].getbestblockhash()) self.nodes[1].reconsiderblock(self.nodes[0].getbestblockhash()) # Now test the case where node1 has a transaction T in its mempool that # depends on transactions A and B which are in a mined block, and the # block containing A and B is disconnected, AND B is not accepted back # into node1's mempool because its ancestor count is too high. # Create 8 transactions, like so: # Tx0 -> Tx1 (vout0) # \--> Tx2 (vout1) -> Tx3 -> Tx4 -> Tx5 -> Tx6 -> Tx7 # # Mine them in the next block, then generate a new tx8 that spends # Tx1 and Tx7, and add to node1's mempool, then disconnect the # last block. 
# Create tx0 with 2 outputs utxo = self.nodes[0].listunspent() txid = utxo[0]['txid'] value = utxo[0]['amount'] vout = utxo[0]['vout'] send_value = satoshi_round((value - fee) / 2) inputs = [{'txid': txid, 'vout': vout}] outputs = {} for i in range(2): outputs[self.nodes[0].getnewaddress()] = send_value rawtx = self.nodes[0].createrawtransaction(inputs, outputs) signedtx = self.nodes[0].signrawtransaction( rawtx, None, None, "ALL|FORKID") txid = self.nodes[0].sendrawtransaction(signedtx['hex']) tx0_id = txid value = send_value # Create tx1 (tx1_id, tx1_value) = self.chain_transaction( self.nodes[0], tx0_id, 0, value, fee, 1) # Create tx2-7 vout = 1 txid = tx0_id for i in range(6): (txid, sent_value) = self.chain_transaction( self.nodes[0], txid, vout, value, fee, 1) vout = 0 value = sent_value # Mine these in a block self.nodes[0].generate(1) self.sync_all() # Now generate tx8, with a big fee inputs = [{'txid': tx1_id, 'vout': 0}, {'txid': txid, 'vout': 0}] outputs = {self.nodes[0].getnewaddress(): send_value + value - 4 * fee} rawtx = self.nodes[0].createrawtransaction(inputs, outputs) signedtx = self.nodes[0].signrawtransaction( rawtx, None, None, "ALL|FORKID") txid = self.nodes[0].sendrawtransaction(signedtx['hex']) sync_mempools(self.nodes) # Now try to disconnect the tip on each node... self.nodes[1].invalidateblock(self.nodes[1].getbestblockhash()) self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) sync_blocks(self.nodes) if __name__ == '__main__': MempoolPackagesTest().main() diff --git a/test/functional/mempool_persist.py b/test/functional/mempool_persist.py index e04cdf4d3..dc8f9a932 100755 --- a/test/functional/mempool_persist.py +++ b/test/functional/mempool_persist.py @@ -1,97 +1,89 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test mempool persistence. By default, bitcoind will dump mempool on shutdown and then reload it on startup. This can be overridden with the -persistmempool=0 command line option. Test is as follows: - start node0, node1 and node2. node1 has -persistmempool=0 - create 5 transactions on node2 to its own address. Note that these are not sent to node0 or node1 addresses because we don't want them to be saved in the wallet. - check that node0 and node1 have 5 transactions in their mempools - shutdown all nodes. - startup node0. Verify that it still has 5 transactions in its mempool. Shutdown node0. This tests that by default the mempool is persistent. - startup node1. Verify that its mempool is empty. Shutdown node1. This tests that with -persistmempool=0, the mempool is not dumped to disk when the node is shut down. - Restart node0 with -persistmempool=0. Verify that its mempool is empty. Shutdown node0. This tests that with -persistmempool=0, the mempool is not loaded from disk on start up. - Restart node0 with -persistmempool. Verify that it has 5 transactions in its mempool. This tests that -persistmempool=0 does not overwrite a previously valid mempool stored on disk. """ import time from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class MempoolPersistTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() - # We need 3 nodes for this test. Node1 does not have a persistent mempool. 
+ def set_test_params(self): self.num_nodes = 3 - self.setup_clean_chain = False self.extra_args = [[], ["-persistmempool=0"], []] def run_test(self): chain_height = self.nodes[0].getblockcount() assert_equal(chain_height, 200) self.log.debug("Mine a single block to get out of IBD") self.nodes[0].generate(1) self.sync_all() self.log.debug("Send 5 transactions from node2 (to its own address)") for i in range(5): self.nodes[2].sendtoaddress( self.nodes[2].getnewaddress(), Decimal("10")) self.sync_all() self.log.debug( "Verify that node0 and node1 have 5 transactions in their mempools") assert_equal(len(self.nodes[0].getrawmempool()), 5) assert_equal(len(self.nodes[1].getrawmempool()), 5) self.log.debug( "Stop-start node0 and node1. Verify that node0 has the transactions in its mempool and node1 does not.") self.stop_nodes() - self.nodes = [] - self.nodes.append(self.start_node(0, self.options.tmpdir)) - self.nodes.append(self.start_node(1, self.options.tmpdir)) + self.start_node(0) + self.start_node(1) # Give bitcoind a second to reload the mempool time.sleep(1) wait_until(lambda: len(self.nodes[0].getrawmempool()) == 5) assert_equal(len(self.nodes[1].getrawmempool()), 0) self.log.debug( "Stop-start node0 with -persistmempool=0. Verify that it doesn't load its mempool.dat file.") self.stop_nodes() - self.nodes = [] - self.nodes.append(self.start_node( - 0, self.options.tmpdir, ["-persistmempool=0"])) + self.start_node(0, extra_args=["-persistmempool=0"]) # Give bitcoind a second to reload the mempool time.sleep(1) assert_equal(len(self.nodes[0].getrawmempool()), 0) self.log.debug( "Stop-start node0. Verify that it has the transactions in its mempool.") self.stop_nodes() - self.nodes = [] - self.nodes.append(self.start_node(0, self.options.tmpdir)) + self.start_node(0) wait_until(lambda: len(self.nodes[0].getrawmempool()) == 5) if __name__ == '__main__': MempoolPersistTest().main() diff --git a/test/functional/mempool_reorg.py b/test/functional/mempool_reorg.py index 6dab03d8b..80d7488e2 100755 --- a/test/functional/mempool_reorg.py +++ b/test/functional/mempool_reorg.py @@ -1,115 +1,112 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test re-org scenarios with a mempool that contains transactions # that spend (directly or indirectly) coinbase transactions. # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * # Create one-input, one-output, no-fee transaction: class MempoolCoinbaseTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 - self.setup_clean_chain = False - self.extra_args = [["-checkmempool"]] * self.num_nodes + self.extra_args = [["-checkmempool"]] * 2 alert_filename = None # Set by setup_network def run_test(self): # Start with a 200 block chain assert_equal(self.nodes[0].getblockcount(), 200) # Mine four blocks. After this, nodes[0] blocks # 101, 102, and 103 are spend-able. new_blocks = self.nodes[1].generate(4) self.sync_all() node0_address = self.nodes[0].getnewaddress() node1_address = self.nodes[1].getnewaddress() # Three scenarios for re-orging coinbase spends in the memory pool: # 1. Direct coinbase spend : spend_101 # 2. Indirect (coinbase spend in chain, child in mempool) : spend_102 and spend_102_1 # 3. 
Indirect (coinbase and child both in chain) : spend_103 and spend_103_1 # Use invalidateblock to make all of the above coinbase spends invalid (immature coinbase), # and make sure the mempool code behaves correctly. b = [self.nodes[0].getblockhash(n) for n in range(101, 105)] coinbase_txids = [self.nodes[0].getblock(h)['tx'][0] for h in b] spend_101_raw = create_tx( self.nodes[0], coinbase_txids[1], node1_address, 49.99) spend_102_raw = create_tx( self.nodes[0], coinbase_txids[2], node0_address, 49.99) spend_103_raw = create_tx( self.nodes[0], coinbase_txids[3], node0_address, 49.99) # Create a transaction which is time-locked to two blocks in the future timelock_tx = self.nodes[0].createrawtransaction( [{"txid": coinbase_txids[0], "vout": 0}], {node0_address: 49.99}) # Set the time lock timelock_tx = timelock_tx.replace("ffffffff", "11111191", 1) timelock_tx = timelock_tx[:-8] + \ hex(self.nodes[0].getblockcount() + 2)[2:] + "000000" timelock_tx = self.nodes[0].signrawtransaction( timelock_tx, None, None, "ALL|FORKID")["hex"] # This will raise an exception because the timelock transaction is too immature to spend assert_raises_jsonrpc(-26, "bad-txns-nonfinal", self.nodes[0].sendrawtransaction, timelock_tx) # Broadcast and mine spend_102 and 103: spend_102_id = self.nodes[0].sendrawtransaction(spend_102_raw) spend_103_id = self.nodes[0].sendrawtransaction(spend_103_raw) self.nodes[0].generate(1) # Time-locked transaction is still too immature to spend assert_raises_jsonrpc(-26, 'bad-txns-nonfinal', self.nodes[0].sendrawtransaction, timelock_tx) # Create 102_1 and 103_1: spend_102_1_raw = create_tx( self.nodes[0], spend_102_id, node1_address, 49.98) spend_103_1_raw = create_tx( self.nodes[0], spend_103_id, node1_address, 49.98) # Broadcast and mine 103_1: spend_103_1_id = self.nodes[0].sendrawtransaction(spend_103_1_raw) last_block = self.nodes[0].generate(1) # Time-locked transaction can now be spent timelock_tx_id = self.nodes[0].sendrawtransaction(timelock_tx) # ... now put spend_101 and spend_102_1 in memory pools: spend_101_id = self.nodes[0].sendrawtransaction(spend_101_raw) spend_102_1_id = self.nodes[0].sendrawtransaction(spend_102_1_raw) self.sync_all() assert_equal(set(self.nodes[0].getrawmempool()), { spend_101_id, spend_102_1_id, timelock_tx_id}) for node in self.nodes: node.invalidateblock(last_block[0]) # Time-locked transaction is now too immature and has been removed from the mempool # spend_103_1 has been re-orged out of the chain and is back in the mempool assert_equal(set(self.nodes[0].getrawmempool()), { spend_101_id, spend_102_1_id, spend_103_1_id}) # Use invalidateblock to re-org back and make all those coinbase spends # immature/invalid: for node in self.nodes: node.invalidateblock(new_blocks[0]) self.sync_all() # mempool should be empty. assert_equal(set(self.nodes[0].getrawmempool()), set()) if __name__ == '__main__': MempoolCoinbaseTest().main() diff --git a/test/functional/mempool_resurrect_test.py b/test/functional/mempool_resurrect_test.py index 8a8952c2b..9cd392c0f 100755 --- a/test/functional/mempool_resurrect_test.py +++ b/test/functional/mempool_resurrect_test.py @@ -1,84 +1,79 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test resurrection of mined transactions when # the blockchain is re-organized.
# from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * # Create one-input, one-output, no-fee transaction: class MempoolCoinbaseTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 - self.setup_clean_chain = False - - # Just need one node for this test self.extra_args = [["-checkmempool"]] def run_test(self): node0_address = self.nodes[0].getnewaddress() # Spend block 1/2/3's coinbase transactions # Mine a block. # Create three more transactions, spending the spends # Mine another block. # ... make sure all the transactions are confirmed # Invalidate both blocks # ... make sure all the transactions are put back in the mempool # Mine a new block # ... make sure all the transactions are confirmed again. b = [self.nodes[0].getblockhash(n) for n in range(1, 4)] coinbase_txids = [self.nodes[0].getblock(h)['tx'][0] for h in b] spends1_raw = [create_tx(self.nodes[0], txid, node0_address, 49.99) for txid in coinbase_txids] spends1_id = [self.nodes[0].sendrawtransaction(tx) for tx in spends1_raw] blocks = [] blocks.extend(self.nodes[0].generate(1)) spends2_raw = [create_tx(self.nodes[0], txid, node0_address, 49.98) for txid in spends1_id] spends2_id = [self.nodes[0].sendrawtransaction(tx) for tx in spends2_raw] blocks.extend(self.nodes[0].generate(1)) # mempool should be empty, all txns confirmed assert_equal(set(self.nodes[0].getrawmempool()), set()) for txid in spends1_id + spends2_id: tx = self.nodes[0].gettransaction(txid) assert(tx["confirmations"] > 0) # Use invalidateblock to re-org back; all transactions should # end up unconfirmed and back in the mempool for node in self.nodes: node.invalidateblock(blocks[0]) # All txns should be back in the mempool, unconfirmed assert_equal( set(self.nodes[0].getrawmempool()), set(spends1_id + spends2_id)) for txid in spends1_id + spends2_id: tx = self.nodes[0].gettransaction(txid) assert(tx["confirmations"] == 0) # Generate another block, they should all get mined self.nodes[0].generate(1) # mempool should be empty, all txns confirmed assert_equal(set(self.nodes[0].getrawmempool()), set()) for txid in spends1_id + spends2_id: tx = self.nodes[0].gettransaction(txid) assert(tx["confirmations"] > 0) if __name__ == '__main__': MempoolCoinbaseTest().main() diff --git a/test/functional/mempool_spendcoinbase.py b/test/functional/mempool_spendcoinbase.py index c0bade093..740663bc1 100755 --- a/test/functional/mempool_spendcoinbase.py +++ b/test/functional/mempool_spendcoinbase.py @@ -1,64 +1,59 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test spending coinbase transactions. # The coinbase transaction in block N can appear in block # N+100... so is valid in the mempool when the best block # height is N+99. # This test makes sure coinbase spends that will be mature # in the next block are accepted into the memory pool, # but less mature coinbase spends are NOT.
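# (Hedged arithmetic note on the rule above: with the chain at height
# 200, the coinbase of block 101 satisfies 101 + 99 <= 200 and may
# enter the mempool, while block 102's coinbase is one block short and
# must be rejected.)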
# from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * # Create one-input, one-output, no-fee transaction: class MempoolSpendCoinbaseTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 - self.setup_clean_chain = False - - # Just need one node for this test self.extra_args = [["-checkmempool"]] def run_test(self): chain_height = self.nodes[0].getblockcount() assert_equal(chain_height, 200) node0_address = self.nodes[0].getnewaddress() # Coinbase at height chain_height-100+1 ok in mempool, should # get mined. Coinbase at height chain_height-100+2 # is too immature to spend. b = [self.nodes[0].getblockhash(n) for n in range(101, 103)] coinbase_txids = [self.nodes[0].getblock(h)['tx'][0] for h in b] spends_raw = [create_tx(self.nodes[0], txid, node0_address, 49.99) for txid in coinbase_txids] spend_101_id = self.nodes[0].sendrawtransaction(spends_raw[0]) # coinbase at height 102 should be too immature to spend assert_raises_jsonrpc(-26, "bad-txns-premature-spend-of-coinbase", self.nodes[0].sendrawtransaction, spends_raw[1]) # mempool should have just spend_101: assert_equal(self.nodes[0].getrawmempool(), [spend_101_id]) # mine a block, spend_101 should get confirmed self.nodes[0].generate(1) assert_equal(set(self.nodes[0].getrawmempool()), set()) # ... and now height 102 can be spent: spend_102_id = self.nodes[0].sendrawtransaction(spends_raw[1]) assert_equal(self.nodes[0].getrawmempool(), [spend_102_id]) if __name__ == '__main__': MempoolSpendCoinbaseTest().main() diff --git a/test/functional/merkle_blocks.py b/test/functional/merkle_blocks.py index 23327f46b..420f234d2 100755 --- a/test/functional/merkle_blocks.py +++ b/test/functional/merkle_blocks.py @@ -1,107 +1,105 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php.
# # Test merkleblock fetch/validation # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class MerkleBlockTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() - self.setup_clean_chain = True + def set_test_params(self): self.num_nodes = 4 + self.setup_clean_chain = True # Nodes 0/1 are "wallet" nodes, Nodes 2/3 are used for testing self.extra_args = [[], [], [], ["-txindex"]] def setup_network(self): self.setup_nodes() connect_nodes(self.nodes[0], 1) connect_nodes(self.nodes[0], 2) connect_nodes(self.nodes[0], 3) self.sync_all() def run_test(self): self.log.info("Mining blocks...") self.nodes[0].generate(105) self.sync_all() chain_height = self.nodes[1].getblockcount() assert_equal(chain_height, 105) assert_equal(self.nodes[1].getbalance(), 0) assert_equal(self.nodes[2].getbalance(), 0) node0utxos = self.nodes[0].listunspent(1) tx1 = self.nodes[0].createrawtransaction( [node0utxos.pop()], {self.nodes[1].getnewaddress(): 49.99}) txid1 = self.nodes[0].sendrawtransaction( self.nodes[0].signrawtransaction(tx1, None, None, "ALL|FORKID")["hex"]) tx2 = self.nodes[0].createrawtransaction( [node0utxos.pop()], {self.nodes[1].getnewaddress(): 49.99}) txid2 = self.nodes[0].sendrawtransaction( self.nodes[0].signrawtransaction(tx2, None, None, "ALL|FORKID")["hex"]) # This will raise an exception because the transaction is not yet in a block assert_raises_jsonrpc(-5, "Transaction not yet in block", self.nodes[0].gettxoutproof, [txid1]) self.nodes[0].generate(1) blockhash = self.nodes[0].getblockhash(chain_height + 1) self.sync_all() txlist = [] blocktxn = self.nodes[0].getblock(blockhash, True)["tx"] txlist.append(blocktxn[1]) txlist.append(blocktxn[2]) assert_equal(self.nodes[2].verifytxoutproof( self.nodes[2].gettxoutproof([txid1])), [txid1]) assert_equal(self.nodes[2].verifytxoutproof( self.nodes[2].gettxoutproof([txid1, txid2])), txlist) assert_equal(self.nodes[2].verifytxoutproof( self.nodes[2].gettxoutproof([txid1, txid2], blockhash)), txlist) txin_spent = self.nodes[1].listunspent(1).pop() tx3 = self.nodes[1].createrawtransaction( [txin_spent], {self.nodes[0].getnewaddress(): 49.98}) txid3 = self.nodes[0].sendrawtransaction( self.nodes[1].signrawtransaction(tx3, None, None, "ALL|FORKID")["hex"]) self.nodes[0].generate(1) self.sync_all() txid_spent = txin_spent["txid"] txid_unspent = txid1 if txin_spent["txid"] != txid1 else txid2 # We can't find the block from a fully-spent tx assert_raises_jsonrpc(-5, "Transaction not yet in block", self.nodes[2].gettxoutproof, [txid_spent]) # We can get the proof if we specify the block assert_equal(self.nodes[2].verifytxoutproof( self.nodes[2].gettxoutproof([txid_spent], blockhash)), [txid_spent]) # We can't get the proof if we specify a non-existent block assert_raises_jsonrpc(-5, "Block not found", self.nodes[2].gettxoutproof, [ txid_spent], "00000000000000000000000000000000") # We can get the proof if the transaction is unspent assert_equal(self.nodes[2].verifytxoutproof( self.nodes[2].gettxoutproof([txid_unspent])), [txid_unspent]) # We can get the proof if we provide a list of transactions and one of them is unspent. The ordering of the list should not matter. 
assert_equal(sorted(self.nodes[2].verifytxoutproof( self.nodes[2].gettxoutproof([txid1, txid2]))), sorted(txlist)) assert_equal(sorted(self.nodes[2].verifytxoutproof( self.nodes[2].gettxoutproof([txid2, txid1]))), sorted(txlist)) # We can always get a proof if we have a -txindex assert_equal(self.nodes[2].verifytxoutproof( self.nodes[3].gettxoutproof([txid_spent])), [txid_spent]) # We can't get a proof if we specify transactions from different blocks assert_raises_jsonrpc(-5, "Not all transactions found in specified or retrieved block", self.nodes[2].gettxoutproof, [txid1, txid3]) if __name__ == '__main__': MerkleBlockTest().main() diff --git a/test/functional/mining.py b/test/functional/mining.py index feaeedd09..eec8e4fe4 100755 --- a/test/functional/mining.py +++ b/test/functional/mining.py @@ -1,137 +1,132 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test mining RPCs - getblocktemplate proposal mode - submitblock""" from binascii import b2a_hex import copy from test_framework.blocktools import create_coinbase from test_framework.test_framework import BitcoinTestFramework from test_framework.mininode import CBlock from test_framework.util import * def b2x(b): return b2a_hex(b).decode('ascii') def assert_template(node, block, expect, rehash=True): if rehash: block.hashMerkleRoot = block.calc_merkle_root() rsp = node.getblocktemplate( {'data': b2x(block.serialize()), 'mode': 'proposal'}) assert_equal(rsp, expect) class MiningTest(BitcoinTestFramework): - ''' - Test block proposals with getblocktemplate. - ''' - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 self.setup_clean_chain = False def run_test(self): node = self.nodes[0] # Mine a block to leave initial block download node.generate(1) tmpl = node.getblocktemplate() self.log.info("getblocktemplate: Test capability advertised") assert 'proposal' in tmpl['capabilities'] assert 'coinbasetxn' not in tmpl coinbase_tx = create_coinbase(height=int(tmpl["height"]) + 1) # sequence numbers must not be max for nLockTime to have effect coinbase_tx.vin[0].nSequence = 2 ** 32 - 2 coinbase_tx.rehash() block = CBlock() block.nVersion = tmpl["version"] block.hashPrevBlock = int(tmpl["previousblockhash"], 16) block.nTime = tmpl["curtime"] block.nBits = int(tmpl["bits"], 16) block.nNonce = 0 block.vtx = [coinbase_tx] self.log.info("getblocktemplate: Test valid block") assert_template(node, block, None) self.log.info("submitblock: Test block decode failure") assert_raises_jsonrpc(-22, "Block decode failed", node.submitblock, b2x(block.serialize()[:-15])) self.log.info( "getblocktemplate: Test bad input hash for coinbase transaction") bad_block = copy.deepcopy(block) bad_block.vtx[0].vin[0].prevout.hash += 1 bad_block.vtx[0].rehash() assert_template(node, bad_block, 'bad-cb-missing') self.log.info("submitblock: Test invalid coinbase transaction") assert_raises_jsonrpc(-22, "Block does not start with a coinbase", node.submitblock, b2x(bad_block.serialize())) self.log.info("getblocktemplate: Test truncated final transaction") assert_raises_jsonrpc(-22, "Block decode failed", node.getblocktemplate, {'data': b2x(block.serialize()[:-1]), 'mode': 'proposal'}) self.log.info("getblocktemplate: Test duplicate transaction") bad_block = copy.deepcopy(block) bad_block.vtx.append(bad_block.vtx[0]) assert_template(node, bad_block, 'bad-txns-duplicate') 
self.log.info("getblocktemplate: Test invalid transaction") bad_block = copy.deepcopy(block) bad_tx = copy.deepcopy(bad_block.vtx[0]) bad_tx.vin[0].prevout.hash = 255 bad_tx.rehash() bad_block.vtx.append(bad_tx) assert_template(node, bad_block, 'bad-txns-inputs-missingorspent') self.log.info("getblocktemplate: Test nonfinal transaction") bad_block = copy.deepcopy(block) bad_block.vtx[0].nLockTime = 2 ** 32 - 1 bad_block.vtx[0].rehash() assert_template(node, bad_block, 'bad-txns-nonfinal') self.log.info("getblocktemplate: Test bad tx count") # The tx count is immediately after the block header TX_COUNT_OFFSET = 80 bad_block_sn = bytearray(block.serialize()) assert_equal(bad_block_sn[TX_COUNT_OFFSET], 1) bad_block_sn[TX_COUNT_OFFSET] += 1 assert_raises_jsonrpc(-22, "Block decode failed", node.getblocktemplate, {'data': b2x(bad_block_sn), 'mode': 'proposal'}) self.log.info("getblocktemplate: Test bad bits") bad_block = copy.deepcopy(block) bad_block.nBits = 469762303 # impossible in the real world assert_template(node, bad_block, 'bad-diffbits') self.log.info("getblocktemplate: Test bad merkle root") bad_block = copy.deepcopy(block) bad_block.hashMerkleRoot += 1 assert_template(node, bad_block, 'bad-txnmrklroot', False) self.log.info("getblocktemplate: Test bad timestamps") bad_block = copy.deepcopy(block) bad_block.nTime = 2 ** 31 - 1 assert_template(node, bad_block, 'time-too-new') bad_block.nTime = 0 assert_template(node, bad_block, 'time-too-old') self.log.info("getblocktemplate: Test not best block") bad_block = copy.deepcopy(block) bad_block.hashPrevBlock = 123 assert_template(node, bad_block, 'inconclusive-not-best-prevblk') if __name__ == '__main__': MiningTest().main() diff --git a/test/functional/multi_rpc.py b/test/functional/multi_rpc.py index 0c6d0a796..8ff382c21 100755 --- a/test/functional/multi_rpc.py +++ b/test/functional/multi_rpc.py @@ -1,162 +1,159 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
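# (Re mining.py above: a hedged sketch of why TX_COUNT_OFFSET is 80 --
# the block header serializes to exactly 80 bytes, so the transaction
# count varint follows immediately after it.)
# nVersion (4) + hashPrevBlock (32) + hashMerkleRoot (32)
# + nTime (4) + nBits (4) + nNonce (4)
BLOCK_HEADER_SIZE = 4 + 32 + 32 + 4 + 4 + 4
assert BLOCK_HEADER_SIZE == 80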
# # Test multiple rpc user config option rpcauth # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import str_to_b64str, assert_equal import os import http.client import urllib.parse class HTTPBasicsTest (BitcoinTestFramework): - - def __init__(self): - super().__init__() - self.setup_clean_chain = False + def set_test_params(self): self.num_nodes = 2 def setup_chain(self): super().setup_chain() # Append rpcauth to bitcoin.conf before initialization rpcauth = "rpcauth=rt:93648e835a54c573682c2eb19f882535$7681e9c5b74bdd85e78166031d2058e1069b3ed7ed967c93fc63abba06f31144" rpcauth2 = "rpcauth=rt2:f8607b1a88861fac29dfccf9b52ff9f$ff36a0c23c8c62b4846112e50fa888416e94c17bfd4c42f88fd8f55ec6a3137e" rpcuser = "rpcuser=rpcuser💻" rpcpassword = "rpcpassword=rpcpassword🔑" with open(os.path.join(self.options.tmpdir + "/node0", "bitcoin.conf"), 'a', encoding='utf8') as f: f.write(rpcauth + "\n") f.write(rpcauth2 + "\n") with open(os.path.join(self.options.tmpdir + "/node1", "bitcoin.conf"), 'a', encoding='utf8') as f: f.write(rpcuser + "\n") f.write(rpcpassword + "\n") def run_test(self): # # Check correctness of the rpcauth config option # # url = urllib.parse.urlparse(self.nodes[0].url) # Old authpair authpair = url.username + ':' + url.password # New authpair generated via share/rpcuser tool rpcauth = "rpcauth=rt:93648e835a54c573682c2eb19f882535$7681e9c5b74bdd85e78166031d2058e1069b3ed7ed967c93fc63abba06f31144" password = "cA773lm788buwYe4g4WT+05pKyNruVKjQ25x3n0DQcM=" # Second authpair with different username rpcauth2 = "rpcauth=rt2:f8607b1a88861fac29dfccf9b52ff9f$ff36a0c23c8c62b4846112e50fa888416e94c17bfd4c42f88fd8f55ec6a3137e" password2 = "8/F3uMDw4KSEbw96U3CA1C4X05dkHDN2BPFjTgZW4KI=" authpairnew = "rt:" + password headers = {"Authorization": "Basic " + str_to_b64str(authpair)} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) resp = conn.getresponse() assert_equal(resp.status, 200) conn.close() # Use new authpair to confirm both work headers = {"Authorization": "Basic " + str_to_b64str(authpairnew)} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) resp = conn.getresponse() assert_equal(resp.status, 200) conn.close() # Wrong login name with rt's password authpairnew = "rtwrong:" + password headers = {"Authorization": "Basic " + str_to_b64str(authpairnew)} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) resp = conn.getresponse() assert_equal(resp.status, 401) conn.close() # Wrong password for rt authpairnew = "rt:" + password + "wrong" headers = {"Authorization": "Basic " + str_to_b64str(authpairnew)} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) resp = conn.getresponse() assert_equal(resp.status, 401) conn.close() # Correct for rt2 authpairnew = "rt2:" + password2 headers = {"Authorization": "Basic " + str_to_b64str(authpairnew)} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) resp = conn.getresponse() assert_equal(resp.status, 200) conn.close() # Wrong password for rt2 authpairnew = "rt2:" + password2 + "wrong" headers = {"Authorization": "Basic " + str_to_b64str(authpairnew)} conn = 
http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) resp = conn.getresponse() assert_equal(resp.status, 401) conn.close() ############################################################### # Check correctness of the rpcuser/rpcpassword config options # ############################################################### url = urllib.parse.urlparse(self.nodes[1].url) # rpcuser and rpcpassword authpair rpcuserauthpair = "rpcuser💻:rpcpassword🔑" headers = {"Authorization": "Basic " + str_to_b64str(rpcuserauthpair)} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) resp = conn.getresponse() assert_equal(resp.status, 200) conn.close() # Wrong login name with rpcuser's password rpcuserauthpair = "rpcuserwrong:rpcpassword" headers = {"Authorization": "Basic " + str_to_b64str(rpcuserauthpair)} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) resp = conn.getresponse() assert_equal(resp.status, 401) conn.close() # Wrong password for rpcuser rpcuserauthpair = "rpcuser:rpcpasswordwrong" headers = {"Authorization": "Basic " + str_to_b64str(rpcuserauthpair)} conn = http.client.HTTPConnection(url.hostname, url.port) conn.connect() conn.request('POST', '/', '{"method": "getbestblockhash"}', headers) resp = conn.getresponse() assert_equal(resp.status, 401) conn.close() if __name__ == '__main__': HTTPBasicsTest().main() diff --git a/test/functional/multiwallet.py b/test/functional/multiwallet.py index 23160be74..aa4b4e7f1 100755 --- a/test/functional/multiwallet.py +++ b/test/functional/multiwallet.py @@ -1,98 +1,95 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test multiwallet. Verify that a bitcoind node can load multiple wallet files """ import os from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal, assert_raises_jsonrpc class MultiWalletTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 self.extra_args = [ ['-wallet=w1', '-wallet=w2', '-wallet=w3', '-wallet=w']] def run_test(self): assert_equal(set(self.nodes[0].listwallets()), {"w1", "w2", "w3", "w"}) self.stop_node(0) # should not initialize if there are duplicate wallets - self.assert_start_raises_init_error(0, self.options.tmpdir, [ - '-wallet=w1', '-wallet=w1'], 'Error loading wallet w1. Duplicate -wallet filename specified.') + self.assert_start_raises_init_error( + 0, ['-wallet=w1', '-wallet=w1'], 'Error loading wallet w1. Duplicate -wallet filename specified.') # should not initialize if wallet file is a directory os.mkdir(os.path.join(self.options.tmpdir, 'node0', 'regtest', 'w11')) - self.assert_start_raises_init_error(0, self.options.tmpdir, [ - '-wallet=w11'], 'Error loading wallet w11. -wallet filename must be a regular file.') + self.assert_start_raises_init_error( + 0, ['-wallet=w11'], 'Error loading wallet w11. 
-wallet filename must be a regular file.') # should not initialize if wallet file is a symlink wallet_dir = os.path.abspath(os.path.join( self.options.tmpdir, 'node0', 'regtest')) os.symlink(os.path.join(wallet_dir, 'w1'), os.path.join(wallet_dir, 'w12')) - self.assert_start_raises_init_error(0, self.options.tmpdir, [ - '-wallet=w12'], 'Error loading wallet w12. -wallet filename must be a regular file.') + self.assert_start_raises_init_error( + 0, ['-wallet=w12'], 'Error loading wallet w12. -wallet filename must be a regular file.') - self.nodes[0] = self.start_node( - 0, self.options.tmpdir, self.extra_args[0]) + self.start_node(0, self.extra_args[0]) w1 = self.nodes[0].get_wallet_rpc("w1") w2 = self.nodes[0].get_wallet_rpc("w2") w3 = self.nodes[0].get_wallet_rpc("w3") w4 = self.nodes[0].get_wallet_rpc("w") wallet_bad = self.nodes[0].get_wallet_rpc("bad") w1.generate(1) # accessing invalid wallet fails assert_raises_jsonrpc(-18, "Requested wallet does not exist or is not loaded", wallet_bad.getwalletinfo) # accessing wallet RPC without using wallet endpoint fails assert_raises_jsonrpc(-19, "Wallet file not specified (must request wallet RPC through /wallet/ uri-path).", self.nodes[0].getwalletinfo) # check w1 wallet balance w1_info = w1.getwalletinfo() assert_equal(w1_info['immature_balance'], 50) w1_name = w1_info['walletname'] assert_equal(w1_name, "w1") # check w2 wallet balance w2_info = w2.getwalletinfo() assert_equal(w2_info['immature_balance'], 0) w2_name = w2_info['walletname'] assert_equal(w2_name, "w2") w3_name = w3.getwalletinfo()['walletname'] assert_equal(w3_name, "w3") w4_name = w4.getwalletinfo()['walletname'] assert_equal(w4_name, "w") w1.generate(101) assert_equal(w1.getbalance(), 100) assert_equal(w2.getbalance(), 0) assert_equal(w3.getbalance(), 0) assert_equal(w4.getbalance(), 0) w1.sendtoaddress(w2.getnewaddress(), 1) w1.sendtoaddress(w3.getnewaddress(), 2) w1.sendtoaddress(w4.getnewaddress(), 3) w1.generate(1) assert_equal(w2.getbalance(), 1) assert_equal(w3.getbalance(), 2) assert_equal(w4.getbalance(), 3) if __name__ == '__main__': MultiWalletTest().main() diff --git a/test/functional/net.py b/test/functional/net.py index 15d19dcee..71bd60db0 100755 --- a/test/functional/net.py +++ b/test/functional/net.py @@ -1,97 +1,96 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test RPC calls related to net. Tests correspond to code in rpc/net.cpp. 
""" import time from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( assert_equal, assert_raises_jsonrpc, connect_nodes_bi, p2p_port, ) class NetTest(BitcoinTestFramework): - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 def run_test(self): self._test_connection_count() self._test_getnettotals() self._test_getnetworkinginfo() self._test_getaddednodeinfo() def _test_connection_count(self): # connect_nodes_bi connects each node to the other assert_equal(self.nodes[0].getconnectioncount(), 2) def _test_getnettotals(self): # check that getnettotals totalbytesrecv and totalbytessent # are consistent with getpeerinfo peer_info = self.nodes[0].getpeerinfo() assert_equal(len(peer_info), 2) net_totals = self.nodes[0].getnettotals() assert_equal(sum([peer['bytesrecv'] for peer in peer_info]), net_totals['totalbytesrecv']) assert_equal(sum([peer['bytessent'] for peer in peer_info]), net_totals['totalbytessent']) # test getnettotals and getpeerinfo by doing a ping # the bytes sent/received should change # note ping and pong are 32 bytes each self.nodes[0].ping() time.sleep(0.1) peer_info_after_ping = self.nodes[0].getpeerinfo() net_totals_after_ping = self.nodes[0].getnettotals() for before, after in zip(peer_info, peer_info_after_ping): assert_equal(before['bytesrecv_per_msg']['pong'] + 32, after['bytesrecv_per_msg']['pong']) assert_equal(before['bytessent_per_msg']['ping'] + 32, after['bytessent_per_msg']['ping']) assert_equal(net_totals['totalbytesrecv'] + 32 * 2, net_totals_after_ping['totalbytesrecv']) assert_equal(net_totals['totalbytessent'] + 32 * 2, net_totals_after_ping['totalbytessent']) def _test_getnetworkinginfo(self): assert_equal(self.nodes[0].getnetworkinfo()['networkactive'], True) assert_equal(self.nodes[0].getnetworkinfo()['connections'], 2) self.nodes[0].setnetworkactive(False) assert_equal(self.nodes[0].getnetworkinfo()['networkactive'], False) timeout = 3 while self.nodes[0].getnetworkinfo()['connections'] != 0: # Wait a bit for all sockets to close assert timeout > 0, 'not all connections closed in time' timeout -= 0.1 time.sleep(0.1) self.nodes[0].setnetworkactive(True) connect_nodes_bi(self.nodes, 0, 1) assert_equal(self.nodes[0].getnetworkinfo()['networkactive'], True) assert_equal(self.nodes[0].getnetworkinfo()['connections'], 2) def _test_getaddednodeinfo(self): assert_equal(self.nodes[0].getaddednodeinfo(), []) # add a node (node2) to node0 ip_port = "127.0.0.1:{}".format(p2p_port(2)) self.nodes[0].addnode(ip_port, 'add') # check that the node has indeed been added added_nodes = self.nodes[0].getaddednodeinfo(ip_port) assert_equal(len(added_nodes), 1) assert_equal(added_nodes[0]['addednode'], ip_port) # check that a non-existant node returns an error assert_raises_jsonrpc(-24, "Node has not been added", self.nodes[0].getaddednodeinfo, '1.1.1.1') if __name__ == '__main__': NetTest().main() diff --git a/test/functional/nulldummy.py b/test/functional/nulldummy.py index ff6f441b4..2c9bb5b6e 100755 --- a/test/functional/nulldummy.py +++ b/test/functional/nulldummy.py @@ -1,121 +1,120 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.mininode import CTransaction, NetworkThread from test_framework.blocktools import create_coinbase, create_block from test_framework.script import CScript from io import BytesIO import time NULLDUMMY_ERROR = "64: non-mandatory-script-verify-flag (Dummy CHECKMULTISIG argument must be zero)" def trueDummy(tx): scriptSig = CScript(tx.vin[0].scriptSig) newscript = [] for i in scriptSig: if (len(newscript) == 0): assert(len(i) == 0) newscript.append(b'\x51') else: newscript.append(i) tx.vin[0].scriptSig = CScript(newscript) tx.rehash() ''' This test is meant to exercise NULLDUMMY softfork. Connect to a single node. Generate 2 blocks (save the coinbases for later). Generate 427 more blocks. [Policy/Consensus] Check that NULLDUMMY compliant transactions are accepted in the 430th block. [Policy] Check that non-NULLDUMMY transactions are rejected before activation. [Consensus] Check that the new NULLDUMMY rules are not enforced on the 431st block. [Policy/Consensus] Check that the new NULLDUMMY rules are enforced on the 432nd block. ''' class NULLDUMMYTest(BitcoinTestFramework): - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True self.extra_args = [['-whitelist=127.0.0.1', '-walletprematurewitness']] def run_test(self): self.address = self.nodes[0].getnewaddress() self.ms_address = self.nodes[0].addmultisigaddress(1, [self.address]) NetworkThread().start() # Start up network handling in another thread self.coinbase_blocks = self.nodes[0].generate(2) # Block 2 coinbase_txid = [] for i in self.coinbase_blocks: coinbase_txid.append(self.nodes[0].getblock(i)['tx'][0]) self.nodes[0].generate(427) # Block 429 self.lastblockhash = self.nodes[0].getbestblockhash() self.tip = int("0x" + self.lastblockhash, 0) self.lastblockheight = 429 self.lastblocktime = int(time.time()) + 429 self.log.info( "Test 1: NULLDUMMY compliant base transactions should be accepted to mempool and mined before activation [430]") test1txs = [self.create_transaction( self.nodes[0], coinbase_txid[0], self.ms_address, 49)] txid1 = self.nodes[0].sendrawtransaction( bytes_to_hex_str(test1txs[0].serialize_with_witness()), True) test1txs.append(self.create_transaction( self.nodes[0], txid1, self.ms_address, 48)) txid2 = self.nodes[0].sendrawtransaction( bytes_to_hex_str(test1txs[1].serialize_with_witness()), True) self.block_submit(self.nodes[0], test1txs, False, True) self.log.info( "Test 2: Non-NULLDUMMY base multisig transaction should not be accepted to mempool before activation") test2tx = self.create_transaction( self.nodes[0], txid2, self.ms_address, 48) trueDummy(test2tx) assert_raises_jsonrpc(-26, NULLDUMMY_ERROR, self.nodes[0].sendrawtransaction, bytes_to_hex_str( test2tx.serialize_with_witness()), True) self.log.info( "Test 3: Non-NULLDUMMY base transactions should be accepted in a block before activation [431]") self.block_submit(self.nodes[0], [test2tx], False, True) def create_transaction(self, node, txid, to_address, amount): inputs = [{"txid": txid, "vout": 0}] outputs = {to_address: amount} rawtx = node.createrawtransaction(inputs, outputs) signresult = node.signrawtransaction(rawtx, None, None, "ALL|FORKID") tx = CTransaction() f = BytesIO(hex_str_to_bytes(signresult['hex'])) tx.deserialize(f) return tx def block_submit(self, node, txs, witness=False, accept=False): block = create_block(self.tip, create_coinbase( self.lastblockheight + 1), 
self.lastblocktime + 1) block.nVersion = 4 for tx in txs: tx.rehash() block.vtx.append(tx) block.hashMerkleRoot = block.calc_merkle_root() block.rehash() block.solve() node.submitblock(bytes_to_hex_str(block.serialize(True))) if (accept): assert_equal(node.getbestblockhash(), block.hash) self.tip = block.sha256 self.lastblockhash = block.hash self.lastblocktime += 1 self.lastblockheight += 1 else: assert_equal(node.getbestblockhash(), self.lastblockhash) if __name__ == '__main__': NULLDUMMYTest().main() diff --git a/test/functional/p2p-acceptblock.py b/test/functional/p2p-acceptblock.py index 2fd777e99..acd73229d 100755 --- a/test/functional/p2p-acceptblock.py +++ b/test/functional/p2p-acceptblock.py @@ -1,248 +1,247 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * import time from test_framework.blocktools import create_block, create_coinbase ''' AcceptBlockTest -- test processing of unrequested blocks. Since behavior differs when receiving unrequested blocks from whitelisted peers versus non-whitelisted peers, this tests the behavior of both (effectively two separate tests running in parallel). Setup: two nodes, node0 and node1, not connected to each other. Node0 does not whitelist localhost, but node1 does. They will each be on their own chain for this test. We have one NodeConn connection to each, test_node and white_node respectively. The test: 1. Generate one block on each node, to leave IBD. 2. Mine a new block on each tip, and deliver to each node from that node's peer. The tip should advance. 3. Mine a block that forks the previous block, and deliver to each node from corresponding peer. Node0 should not process this block (just accept the header), because it is unrequested and doesn't have more work than the tip. Node1 should process because this is coming from a whitelisted peer. 4. Send another block that builds on the forking block. Node0 should process this block but be stuck on the shorter chain, because it's missing an intermediate block. Node1 should reorg to this longer chain. 4b. Send 288 more blocks on the longer chain. Node0 should process all but the last block (too far ahead in height). Send all headers to Node1, and then send the last block in that chain. Node1 should accept the block because it's coming from a whitelisted peer. 5. Send a duplicate of the block in #3 to Node0. Node0 should not process the block because it is unrequested, and stay on the shorter chain. 6. Send Node0 an inv for the height 3 block produced in #4 above. Node0 should figure out that it is missing the height 2 block and send a getdata for it. 7. Send Node0 the missing block again. Node0 should process and the tip should advance. ''' class AcceptBlockTest(BitcoinTestFramework): def add_options(self, parser): parser.add_option("--testbinary", dest="testbinary", default=os.getenv("BITCOIND", "bitcoind"), help="bitcoind binary to test") - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 self.extra_args = [[], ["-whitelist=127.0.0.1"]] def setup_network(self): # Node0 will be used to test behavior of processing unrequested blocks # from peers which are not whitelisted, while Node1 will be used for # the whitelisted case.
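        # setup_nodes() starts the nodes without connecting them to each
        # other, matching the plan above: each node stays on its own chain
        # and hears about blocks only through our NodeConn peers.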
self.setup_nodes() def run_test(self): # Setup the p2p connections and start up the network thread. test_node = NodeConnCB() # connects to node0 (not whitelisted) white_node = NodeConnCB() # connects to node1 (whitelisted) connections = [] connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], test_node)) connections.append( NodeConn('127.0.0.1', p2p_port(1), self.nodes[1], white_node)) test_node.add_connection(connections[0]) white_node.add_connection(connections[1]) NetworkThread().start() # Start up network handling in another thread # Test logic begins here test_node.wait_for_verack() white_node.wait_for_verack() # 1. Have both nodes mine a block (leave IBD) [n.generate(1) for n in self.nodes] tips = [int("0x" + n.getbestblockhash(), 0) for n in self.nodes] # 2. Send one block that builds on each tip. # This should be accepted. blocks_h2 = [] # the height 2 blocks on each node's chain block_time = int(time.time()) + 1 for i in range(2): blocks_h2.append( create_block(tips[i], create_coinbase(2), block_time)) blocks_h2[i].solve() block_time += 1 test_node.send_message(msg_block(blocks_h2[0])) white_node.send_message(msg_block(blocks_h2[1])) for x in [test_node, white_node]: x.sync_with_ping() assert_equal(self.nodes[0].getblockcount(), 2) assert_equal(self.nodes[1].getblockcount(), 2) self.log.info("First height 2 block accepted by both nodes") # 3. Send another block that builds on the original tip. blocks_h2f = [] # Blocks at height 2 that fork off the main chain for i in range(2): blocks_h2f.append( create_block(tips[i], create_coinbase(2), blocks_h2[i].nTime + 1)) blocks_h2f[i].solve() test_node.send_message(msg_block(blocks_h2f[0])) white_node.send_message(msg_block(blocks_h2f[1])) for x in [test_node, white_node]: x.sync_with_ping() for x in self.nodes[0].getchaintips(): if x['hash'] == blocks_h2f[0].hash: assert_equal(x['status'], "headers-only") for x in self.nodes[1].getchaintips(): if x['hash'] == blocks_h2f[1].hash: assert_equal(x['status'], "valid-headers") self.log.info( "Second height 2 block accepted only from whitelisted peer") # 4. Now send another block that builds on the forking chain. blocks_h3 = [] for i in range(2): blocks_h3.append( create_block(blocks_h2f[i].sha256, create_coinbase(3), blocks_h2f[i].nTime + 1)) blocks_h3[i].solve() test_node.send_message(msg_block(blocks_h3[0])) white_node.send_message(msg_block(blocks_h3[1])) for x in [test_node, white_node]: x.sync_with_ping() # Since the earlier block was not processed by node0, the new block # can't be fully validated. for x in self.nodes[0].getchaintips(): if x['hash'] == blocks_h3[0].hash: assert_equal(x['status'], "headers-only") # But this block should be accepted by node0 since it has more work. self.nodes[0].getblock(blocks_h3[0].hash) self.log.info( "Unrequested more-work block accepted from non-whitelisted peer") # Node1 should have accepted and reorged. assert_equal(self.nodes[1].getblockcount(), 3) self.log.info( "Successfully reorged to length 3 chain from whitelisted peer") # 4b. Now mine 288 more blocks and deliver; all should be processed but # the last (height-too-high) on node0. Node1 should process the tip if # we give it the headers chain leading to the tip. 
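        # (Editorial note: per the plan above, node0 receives all 288 blocks
        # as unrequested msg_block deliveries, while node1 first receives the
        # headers chain minus the tip and then the tip block itself, which it
        # accepts because the peer is whitelisted.)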
tips = blocks_h3 headers_message = msg_headers() all_blocks = [] # node0's blocks for j in range(2): for i in range(288): next_block = create_block( tips[j].sha256, create_coinbase(i + 4), tips[j].nTime + 1) next_block.solve() if j == 0: test_node.send_message(msg_block(next_block)) all_blocks.append(next_block) else: headers_message.headers.append(CBlockHeader(next_block)) tips[j] = next_block time.sleep(2) # Blocks 1-287 should be accepted, block 288 should be ignored because # it's too far ahead for x in all_blocks[:-1]: self.nodes[0].getblock(x.hash) assert_raises_jsonrpc( -1, "Block not found on disk", self.nodes[0].getblock, all_blocks[-1].hash) headers_message.headers.pop() # Ensure the last block is unrequested white_node.send_message(headers_message) # Send headers leading to tip white_node.send_message(msg_block(tips[1])) # Now deliver the tip white_node.sync_with_ping() self.nodes[1].getblock(tips[1].hash) self.log.info( "Unrequested block far ahead of tip accepted from whitelisted peer") # 5. Test handling of unrequested block on the node that didn't process # Should still not be processed (even though it has a child that has more # work). test_node.send_message(msg_block(blocks_h2f[0])) # Here, if the sleep is too short, the test could falsely succeed (if the # node hasn't processed the block by the time the sleep returns, and then # the node processes it and incorrectly advances the tip). # But this would be caught later on, when we verify that an inv triggers # a getdata request for this block. test_node.sync_with_ping() assert_equal(self.nodes[0].getblockcount(), 2) self.log.info( "Unrequested block that would complete more-work chain was ignored") # 6. Try to get node to request the missing block. # Poke the node with an inv for block at height 3 and see if that # triggers a getdata on block 2 (it should if block 2 is missing). with mininode_lock: # Clear state so we can check the getdata request test_node.last_message.pop("getdata", None) test_node.send_message(msg_inv([CInv(2, blocks_h3[0].sha256)])) test_node.sync_with_ping() with mininode_lock: getdata = test_node.last_message["getdata"] # Check that the getdata includes the right block assert_equal(getdata.inv[0].hash, blocks_h2f[0].sha256) self.log.info("Inv at tip triggered getdata for unprocessed block") # 7. Send the missing block for the third time (now it is requested) test_node.send_message(msg_block(blocks_h2f[0])) test_node.sync_with_ping() assert_equal(self.nodes[0].getblockcount(), 290) self.log.info( "Successfully reorged to longer chain from non-whitelisted peer") [c.disconnect_node() for c in connections] if __name__ == '__main__': AcceptBlockTest().main() diff --git a/test/functional/p2p-compactblocks.py b/test/functional/p2p-compactblocks.py index 804d8116d..e8d7fc569 100755 --- a/test/functional/p2p-compactblocks.py +++ b/test/functional/p2p-compactblocks.py @@ -1,931 +1,929 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
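# Editorial sketch (not part of the original test): BIP 152 short IDs are
# computed with SipHash-2-4 using keys derived from the block header and a
# per-block nonce. With the framework helpers used throughout this file, the
# expected short ID for a transaction would be:
#
#   [k0, k1] = header_and_shortids.get_siphash_keys()
#   shortid = calculate_shortid(k0, k1, tx.sha256)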
from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from test_framework.blocktools import create_block, create_coinbase, add_witness_commitment from test_framework.script import CScript, OP_TRUE ''' CompactBlocksTest -- test compact blocks (BIP 152) Only testing Version 1 compact blocks (txids) ''' # TestNode: A peer we use to send messages to bitcoind, and store responses. class TestNode(NodeConnCB): def __init__(self): super().__init__() self.last_sendcmpct = [] self.block_announced = False # Store the hashes of blocks we've seen announced. # This is for synchronizing the p2p message traffic, # so we can eg wait until a particular block is announced. self.announced_blockhashes = set() def on_sendcmpct(self, conn, message): self.last_sendcmpct.append(message) def on_cmpctblock(self, conn, message): self.block_announced = True self.last_message["cmpctblock"].header_and_shortids.header.calc_sha256() self.announced_blockhashes.add( self.last_message["cmpctblock"].header_and_shortids.header.sha256) def on_headers(self, conn, message): self.block_announced = True for x in self.last_message["headers"].headers: x.calc_sha256() self.announced_blockhashes.add(x.sha256) def on_inv(self, conn, message): for x in self.last_message["inv"].inv: if x.type == 2: self.block_announced = True self.announced_blockhashes.add(x.hash) # Requires caller to hold mininode_lock def received_block_announcement(self): return self.block_announced def clear_block_announcement(self): with mininode_lock: self.block_announced = False self.last_message.pop("inv", None) self.last_message.pop("headers", None) self.last_message.pop("cmpctblock", None) def get_headers(self, locator, hashstop): msg = msg_getheaders() msg.locator.vHave = locator msg.hashstop = hashstop self.connection.send_message(msg) def send_header_for_blocks(self, new_blocks): headers_message = msg_headers() headers_message.headers = [CBlockHeader(b) for b in new_blocks] self.send_message(headers_message) def request_headers_and_sync(self, locator, hashstop=0): self.clear_block_announcement() self.get_headers(locator, hashstop) wait_until(self.received_block_announcement, timeout=30, lock=mininode_lock) self.clear_block_announcement() # Block until a block announcement for a particular block hash is # received. def wait_for_block_announcement(self, block_hash, timeout=30): def received_hash(): return (block_hash in self.announced_blockhashes) wait_until(received_hash, timeout=timeout, lock=mininode_lock) def send_await_disconnect(self, message, timeout=30): """Sends a message to the node and wait for disconnect. This is used when we want to send a message into the node that we expect will get us disconnected, eg an invalid block.""" self.send_message(message) wait_until(lambda: not self.connected, timeout=timeout, lock=mininode_lock) class CompactBlocksTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 self.extra_args = [[], ["-txindex"]] self.utxos = [] def build_block_on_tip(self, node): height = node.getblockcount() tip = node.getbestblockhash() mtp = node.getblockheader(tip)['mediantime'] block = create_block( int(tip, 16), create_coinbase(height + 1), mtp + 1) block.nVersion = 4 block.solve() return block # Create 10 more anyone-can-spend utxo's for testing. def make_utxos(self): # Doesn't matter which node we use, just use node0. 
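        # Bootstrap outline: mine one block whose coinbase we control, bury
        # it under 100 more blocks so the coinbase matures, then split its
        # value into 10 anyone-can-spend (OP_TRUE) outputs in a second block.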
block = self.build_block_on_tip(self.nodes[0]) self.test_node.send_and_ping(msg_block(block)) assert(int(self.nodes[0].getbestblockhash(), 16) == block.sha256) self.nodes[0].generate(100) total_value = block.vtx[0].vout[0].nValue out_value = total_value // 10 tx = CTransaction() tx.vin.append(CTxIn(COutPoint(block.vtx[0].sha256, 0), b'')) for i in range(10): tx.vout.append(CTxOut(out_value, CScript([OP_TRUE]))) tx.rehash() block2 = self.build_block_on_tip(self.nodes[0]) block2.vtx.append(tx) block2.hashMerkleRoot = block2.calc_merkle_root() block2.solve() self.test_node.send_and_ping(msg_block(block2)) assert_equal(int(self.nodes[0].getbestblockhash(), 16), block2.sha256) self.utxos.extend([[tx.sha256, i, out_value] for i in range(10)]) return # Test "sendcmpct" (between peers preferring the same version): # - No compact block announcements unless sendcmpct is sent. # - If sendcmpct is sent with version > preferred_version, the message is ignored. # - If sendcmpct is sent with boolean 0, then block announcements are not # made with compact blocks. # - If sendcmpct is then sent with boolean 1, then new block announcements # are made with compact blocks. # If old_node is passed in, request compact blocks with version=preferred-1 # and verify that it receives block announcements via compact block. def test_sendcmpct(self, node, test_node, preferred_version, old_node=None): # Make sure we get a SENDCMPCT message from our peer def received_sendcmpct(): return (len(test_node.last_sendcmpct) > 0) wait_until(received_sendcmpct, timeout=30, lock=mininode_lock) with mininode_lock: # Check that the first version received is the preferred one assert_equal( test_node.last_sendcmpct[0].version, preferred_version) # And that we receive versions down to 1. assert_equal(test_node.last_sendcmpct[-1].version, 1) test_node.last_sendcmpct = [] tip = int(node.getbestblockhash(), 16) def check_announcement_of_new_block(node, peer, predicate): peer.clear_block_announcement() block_hash = int(node.generate(1)[0], 16) peer.wait_for_block_announcement(block_hash, timeout=30) assert(peer.block_announced) with mininode_lock: assert predicate(peer), ( "block_hash={!r}, cmpctblock={!r}, inv={!r}".format( block_hash, peer.last_message.get("cmpctblock", None), peer.last_message.get("inv", None))) # We shouldn't get any block announcements via cmpctblock yet. check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message) # Try one more time, this time after requesting headers. test_node.request_headers_and_sync(locator=[tip]) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message and "inv" in p.last_message) # Test a few ways of using sendcmpct that should NOT # result in compact block announcements. # Before each test, sync the headers chain. test_node.request_headers_and_sync(locator=[tip]) # Now try a SENDCMPCT message with too-high version sendcmpct = msg_sendcmpct() sendcmpct.version = 999 # was: preferred_version+1 sendcmpct.announce = True test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message) # Headers sync before next test. test_node.request_headers_and_sync(locator=[tip]) # Now try a SENDCMPCT message with valid version, but announce=False sendcmpct.version = preferred_version sendcmpct.announce = False test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message) # Headers sync before next test. 
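            # (Editorial note: re-syncing our announced headers before each
            # case matters because the node presumably only fast-announces
            # via cmpctblock to peers it believes already have the parent
            # block.)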
test_node.request_headers_and_sync(locator=[tip]) # Finally, try a SENDCMPCT message with announce=True sendcmpct.version = preferred_version sendcmpct.announce = True test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" in p.last_message) # Try one more time (no headers sync should be needed!) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" in p.last_message) # Try one more time, after turning on sendheaders test_node.send_and_ping(msg_sendheaders()) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" in p.last_message) # Try one more time, after sending a version-1, announce=false message. sendcmpct.version = preferred_version - 1 sendcmpct.announce = False test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" in p.last_message) # Now turn off announcements sendcmpct.version = preferred_version sendcmpct.announce = False test_node.send_and_ping(sendcmpct) check_announcement_of_new_block( node, test_node, lambda p: "cmpctblock" not in p.last_message and "headers" in p.last_message) if old_node is not None: # Verify that a peer using an older protocol version can receive # announcements from this node. sendcmpct.version = 1 # preferred_version-1 sendcmpct.announce = True old_node.send_and_ping(sendcmpct) # Header sync old_node.request_headers_and_sync(locator=[tip]) check_announcement_of_new_block( node, old_node, lambda p: "cmpctblock" in p.last_message) # This test actually causes bitcoind to (reasonably!) disconnect us, so do # this last. def test_invalid_cmpctblock_message(self): self.nodes[0].generate(101) block = self.build_block_on_tip(self.nodes[0]) cmpct_block = P2PHeaderAndShortIDs() cmpct_block.header = CBlockHeader(block) cmpct_block.prefilled_txn_length = 1 # This index will be too high prefilled_txn = PrefilledTransaction(1, block.vtx[0]) cmpct_block.prefilled_txn = [prefilled_txn] self.test_node.send_await_disconnect(msg_cmpctblock(cmpct_block)) assert_equal( int(self.nodes[0].getbestblockhash(), 16), block.hashPrevBlock) # Compare the generated shortids to what we expect based on BIP 152, given # bitcoind's choice of nonce. def test_compactblock_construction(self, node, test_node): # Generate a bunch of transactions. node.generate(101) num_transactions = 25 address = node.getnewaddress() for i in range(num_transactions): txid = node.sendtoaddress(address, 0.1) hex_tx = node.gettransaction(txid)["hex"] tx = FromHex(CTransaction(), hex_tx) assert(tx.wit.is_null()) # Wait until we've seen the block announcement for the resulting tip tip = int(node.getbestblockhash(), 16) test_node.wait_for_block_announcement(tip) # Make sure we will receive a fast-announce compact block self.request_cb_announcements(test_node, node) # Now mine a block, and look at the resulting compact block. test_node.clear_block_announcement() block_hash = int(node.generate(1)[0], 16) # Store the raw block in our internal format. 
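        # (Editorial note: "%02x" only guarantees two hex digits, so the call
        # below relies on the block hash having no leading zero nibbles;
        # "%064x" would be the robust formatting for a 256-bit hash.)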
block = FromHex(CBlock(), node.getblock("%02x" % block_hash, False)) [tx.calc_sha256() for tx in block.vtx] block.rehash() # Wait until the block was announced (via compact blocks) wait_until(test_node.received_block_announcement, timeout=30, lock=mininode_lock) # Now fetch and check the compact block header_and_shortids = None with mininode_lock: assert("cmpctblock" in test_node.last_message) # Convert the on-the-wire representation to absolute indexes header_and_shortids = HeaderAndShortIDs( test_node.last_message["cmpctblock"].header_and_shortids) self.check_compactblock_construction_from_block( header_and_shortids, block_hash, block) # Now fetch the compact block using a normal non-announce getdata with mininode_lock: test_node.clear_block_announcement() inv = CInv(4, block_hash) # 4 == "CompactBlock" test_node.send_message(msg_getdata([inv])) wait_until(test_node.received_block_announcement, timeout=30, lock=mininode_lock) # Now fetch and check the compact block header_and_shortids = None with mininode_lock: assert("cmpctblock" in test_node.last_message) # Convert the on-the-wire representation to absolute indexes header_and_shortids = HeaderAndShortIDs( test_node.last_message["cmpctblock"].header_and_shortids) self.check_compactblock_construction_from_block( header_and_shortids, block_hash, block) def check_compactblock_construction_from_block(self, header_and_shortids, block_hash, block): # Check that we got the right block! header_and_shortids.header.calc_sha256() assert_equal(header_and_shortids.header.sha256, block_hash) # Make sure the prefilled_txn appears to have included the coinbase assert(len(header_and_shortids.prefilled_txn) >= 1) assert_equal(header_and_shortids.prefilled_txn[0].index, 0) # Check that all prefilled_txn entries match what's in the block. for entry in header_and_shortids.prefilled_txn: entry.tx.calc_sha256() # This checks the non-witness parts of the tx agree assert_equal(entry.tx.sha256, block.vtx[entry.index].sha256) # And this checks the witness wtxid = entry.tx.calc_sha256(True) # Shouldn't have received a witness assert(entry.tx.wit.is_null()) # Check that the cmpctblock message announced all the transactions. assert_equal(len(header_and_shortids.prefilled_txn) + len(header_and_shortids.shortids), len(block.vtx)) # And now check that all the shortids are as expected as well. # Determine the siphash keys to use. [k0, k1] = header_and_shortids.get_siphash_keys() index = 0 while index < len(block.vtx): if (len(header_and_shortids.prefilled_txn) > 0 and header_and_shortids.prefilled_txn[0].index == index): # Already checked prefilled transactions above header_and_shortids.prefilled_txn.pop(0) else: tx_hash = block.vtx[index].sha256 shortid = calculate_shortid(k0, k1, tx_hash) assert_equal(shortid, header_and_shortids.shortids[0]) header_and_shortids.shortids.pop(0) index += 1 # Test that bitcoind requests compact blocks when we announce new blocks # via header or inv, and that responding to getblocktxn causes the block # to be successfully reconstructed. 
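    # (For reference: getdata/inv type 4, asserted in the method below, is
    # the BIP 152 compact-block type, as also noted where
    # CInv(4, block_hash) is built above.)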
def test_compactblock_requests(self, node, test_node, version): # Try announcing a block with an inv or header, expect a compactblock # request for announce in ["inv", "header"]: block = self.build_block_on_tip(node) with mininode_lock: test_node.last_message.pop("getdata", None) if announce == "inv": test_node.send_message(msg_inv([CInv(2, block.sha256)])) wait_until(lambda: "getheaders" in test_node.last_message, timeout=30, lock=mininode_lock) test_node.send_header_for_blocks([block]) else: test_node.send_header_for_blocks([block]) wait_until(lambda: "getdata" in test_node.last_message, timeout=30, lock=mininode_lock) assert_equal(len(test_node.last_message["getdata"].inv), 1) assert_equal(test_node.last_message["getdata"].inv[0].type, 4) assert_equal( test_node.last_message["getdata"].inv[0].hash, block.sha256) # Send back a compactblock message that omits the coinbase comp_block = HeaderAndShortIDs() comp_block.header = CBlockHeader(block) comp_block.nonce = 0 [k0, k1] = comp_block.get_siphash_keys() coinbase_hash = block.vtx[0].sha256 if version == 2: coinbase_hash = block.vtx[0].calc_sha256(True) comp_block.shortids = [ calculate_shortid(k0, k1, coinbase_hash)] test_node.send_and_ping(msg_cmpctblock(comp_block.to_p2p())) assert_equal(int(node.getbestblockhash(), 16), block.hashPrevBlock) # Expect a getblocktxn message. with mininode_lock: assert("getblocktxn" in test_node.last_message) absolute_indexes = test_node.last_message["getblocktxn"].block_txn_request.to_absolute( ) assert_equal(absolute_indexes, [0]) # should be a coinbase request # Send the coinbase, and verify that the tip advances. if version == 2: msg = msg_witness_blocktxn() else: msg = msg_blocktxn() msg.block_transactions.blockhash = block.sha256 msg.block_transactions.transactions = [block.vtx[0]] test_node.send_and_ping(msg) assert_equal(int(node.getbestblockhash(), 16), block.sha256) # Create a chain of transactions from given utxo, and add to a new block. def build_block_with_transactions(self, node, utxo, num_transactions): block = self.build_block_on_tip(node) for i in range(num_transactions): tx = CTransaction() tx.vin.append(CTxIn(COutPoint(utxo[0], utxo[1]), b'')) tx.vout.append(CTxOut(utxo[2] - 1000, CScript([OP_TRUE]))) tx.rehash() utxo = [tx.sha256, 0, tx.vout[0].nValue] block.vtx.append(tx) block.hashMerkleRoot = block.calc_merkle_root() block.solve() return block # Test that we only receive getblocktxn requests for transactions that the # node needs, and that responding to them causes the block to be # reconstructed. def test_getblocktxn_requests(self, node, test_node, version): with_witness = (version == 2) def test_getblocktxn_response(compact_block, peer, expected_result): msg = msg_cmpctblock(compact_block.to_p2p()) peer.send_and_ping(msg) with mininode_lock: assert("getblocktxn" in peer.last_message) absolute_indexes = peer.last_message["getblocktxn"].block_txn_request.to_absolute( ) assert_equal(absolute_indexes, expected_result) def test_tip_after_message(node, peer, msg, tip): peer.send_and_ping(msg) assert_equal(int(node.getbestblockhash(), 16), tip) # First try announcing compactblocks that won't reconstruct, and verify # that we receive getblocktxn messages back. 
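        # (Editorial note: BIP 152 encodes getblocktxn indexes differentially
        # on the wire -- absolute [1, 2, 3, 4, 5] travels as [1, 0, 0, 0, 0]
        # -- hence the to_absolute()/from_absolute() conversions used here.)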
utxo = self.utxos.pop(0) block = self.build_block_with_transactions(node, utxo, 5) self.utxos.append( [block.vtx[-1].sha256, 0, block.vtx[-1].vout[0].nValue]) comp_block = HeaderAndShortIDs() comp_block.initialize_from_block(block, use_witness=with_witness) test_getblocktxn_response(comp_block, test_node, [1, 2, 3, 4, 5]) msg_bt = msg_blocktxn() if with_witness: msg_bt = msg_witness_blocktxn() # serialize with witnesses msg_bt.block_transactions = BlockTransactions( block.sha256, block.vtx[1:]) test_tip_after_message(node, test_node, msg_bt, block.sha256) utxo = self.utxos.pop(0) block = self.build_block_with_transactions(node, utxo, 5) self.utxos.append( [block.vtx[-1].sha256, 0, block.vtx[-1].vout[0].nValue]) # Now try interspersing the prefilled transactions comp_block.initialize_from_block( block, prefill_list=[0, 1, 5], use_witness=with_witness) test_getblocktxn_response(comp_block, test_node, [2, 3, 4]) msg_bt.block_transactions = BlockTransactions( block.sha256, block.vtx[2:5]) test_tip_after_message(node, test_node, msg_bt, block.sha256) # Now try giving one transaction ahead of time. utxo = self.utxos.pop(0) block = self.build_block_with_transactions(node, utxo, 5) self.utxos.append( [block.vtx[-1].sha256, 0, block.vtx[-1].vout[0].nValue]) test_node.send_and_ping(msg_tx(block.vtx[1])) assert(block.vtx[1].hash in node.getrawmempool()) # Prefill 4 out of the 6 transactions, and verify that only the one # that was not in the mempool is requested. comp_block.initialize_from_block( block, prefill_list=[0, 2, 3, 4], use_witness=with_witness) test_getblocktxn_response(comp_block, test_node, [5]) msg_bt.block_transactions = BlockTransactions( block.sha256, [block.vtx[5]]) test_tip_after_message(node, test_node, msg_bt, block.sha256) # Now provide all transactions to the node before the block is # announced and verify reconstruction happens immediately. utxo = self.utxos.pop(0) block = self.build_block_with_transactions(node, utxo, 10) self.utxos.append( [block.vtx[-1].sha256, 0, block.vtx[-1].vout[0].nValue]) for tx in block.vtx[1:]: test_node.send_message(msg_tx(tx)) test_node.sync_with_ping() # Make sure all transactions were accepted. mempool = node.getrawmempool() for tx in block.vtx[1:]: assert(tx.hash in mempool) # Clear out last request. with mininode_lock: test_node.last_message.pop("getblocktxn", None) # Send compact block comp_block.initialize_from_block( block, prefill_list=[0], use_witness=with_witness) test_tip_after_message( node, test_node, msg_cmpctblock(comp_block.to_p2p()), block.sha256) with mininode_lock: # Shouldn't have gotten a request for any transaction assert("getblocktxn" not in test_node.last_message) # Incorrectly responding to a getblocktxn shouldn't cause the block to be # permanently failed. def test_incorrect_blocktxn_response(self, node, test_node, version): if (len(self.utxos) == 0): self.make_utxos() utxo = self.utxos.pop(0) block = self.build_block_with_transactions(node, utxo, 10) self.utxos.append( [block.vtx[-1].sha256, 0, block.vtx[-1].vout[0].nValue]) # Relay the first 5 transactions from the block in advance for tx in block.vtx[1:6]: test_node.send_message(msg_tx(tx)) test_node.sync_with_ping() # Make sure all transactions were accepted. 
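        # With the coinbase prefilled (index 0) and transactions 1-5 already
        # relayed into the mempool, only indexes 6-10 should be missing,
        # which is exactly what the getblocktxn assertion below expects.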
mempool = node.getrawmempool() for tx in block.vtx[1:6]: assert(tx.hash in mempool) # Send compact block comp_block = HeaderAndShortIDs() comp_block.initialize_from_block( block, prefill_list=[0], use_witness=(version == 2)) test_node.send_and_ping(msg_cmpctblock(comp_block.to_p2p())) absolute_indexes = [] with mininode_lock: assert("getblocktxn" in test_node.last_message) absolute_indexes = test_node.last_message["getblocktxn"].block_txn_request.to_absolute( ) assert_equal(absolute_indexes, [6, 7, 8, 9, 10]) # Now give an incorrect response. # Note that it's possible for bitcoind to be smart enough to know we're # lying, since it could check to see if the shortid matches what we're # sending, and eg disconnect us for misbehavior. If that behavior # change were made, we could just modify this test by having a # different peer provide the block further down, so that we're still # verifying that the block isn't marked bad permanently. This is good # enough for now. msg = msg_blocktxn() if version == 2: msg = msg_witness_blocktxn() msg.block_transactions = BlockTransactions( block.sha256, [block.vtx[5]] + block.vtx[7:]) test_node.send_and_ping(msg) # Tip should not have updated assert_equal(int(node.getbestblockhash(), 16), block.hashPrevBlock) # We should receive a getdata request wait_until(lambda: "getdata" in test_node.last_message, timeout=10, lock=mininode_lock) assert_equal(len(test_node.last_message["getdata"].inv), 1) assert(test_node.last_message["getdata"].inv[0].type == 2 or test_node.last_message["getdata"].inv[0].type == 2 | MSG_WITNESS_FLAG) assert_equal( test_node.last_message["getdata"].inv[0].hash, block.sha256) # Deliver the block if version == 2: test_node.send_and_ping(msg_witness_block(block)) else: test_node.send_and_ping(msg_block(block)) assert_equal(int(node.getbestblockhash(), 16), block.sha256) def test_getblocktxn_handler(self, node, test_node, version): # bitcoind will not send blocktxn responses for blocks whose height is # more than 10 blocks deep. MAX_GETBLOCKTXN_DEPTH = 10 chain_height = node.getblockcount() current_height = chain_height while (current_height >= chain_height - MAX_GETBLOCKTXN_DEPTH): block_hash = node.getblockhash(current_height) block = FromHex(CBlock(), node.getblock(block_hash, False)) msg = msg_getblocktxn() msg.block_txn_request = BlockTransactionsRequest( int(block_hash, 16), []) num_to_request = random.randint(1, len(block.vtx)) msg.block_txn_request.from_absolute( sorted(random.sample(range(len(block.vtx)), num_to_request))) test_node.send_message(msg) wait_until(lambda: "blocktxn" in test_node.last_message, timeout=10, lock=mininode_lock) [tx.calc_sha256() for tx in block.vtx] with mininode_lock: assert_equal(test_node.last_message["blocktxn"].block_transactions.blockhash, int( block_hash, 16)) all_indices = msg.block_txn_request.to_absolute() for index in all_indices: tx = test_node.last_message["blocktxn"].block_transactions.transactions.pop( 0) tx.calc_sha256() assert_equal(tx.sha256, block.vtx[index].sha256) if version == 1: # Witnesses should have been stripped assert(tx.wit.is_null()) else: # Check that the witness matches assert_equal(tx.calc_sha256(True), block.vtx[index].calc_sha256(True)) test_node.last_message.pop("blocktxn", None) current_height -= 1 # Next request should send a full block response, as we're past the # allowed depth for a blocktxn response. 
block_hash = node.getblockhash(current_height) msg.block_txn_request = BlockTransactionsRequest( int(block_hash, 16), [0]) with mininode_lock: test_node.last_message.pop("block", None) test_node.last_message.pop("blocktxn", None) test_node.send_and_ping(msg) with mininode_lock: test_node.last_message["block"].block.calc_sha256() assert_equal( test_node.last_message["block"].block.sha256, int(block_hash, 16)) assert "blocktxn" not in test_node.last_message def test_compactblocks_not_at_tip(self, node, test_node): # Test that requesting old compactblocks doesn't work. MAX_CMPCTBLOCK_DEPTH = 5 new_blocks = [] for i in range(MAX_CMPCTBLOCK_DEPTH + 1): test_node.clear_block_announcement() new_blocks.append(node.generate(1)[0]) wait_until(test_node.received_block_announcement, timeout=30, lock=mininode_lock) test_node.clear_block_announcement() test_node.send_message(msg_getdata([CInv(4, int(new_blocks[0], 16))])) wait_until(lambda: "cmpctblock" in test_node.last_message, timeout=30, lock=mininode_lock) test_node.clear_block_announcement() node.generate(1) wait_until(test_node.received_block_announcement, timeout=30, lock=mininode_lock) test_node.clear_block_announcement() with mininode_lock: test_node.last_message.pop("block", None) test_node.send_message(msg_getdata([CInv(4, int(new_blocks[0], 16))])) wait_until(lambda: "block" in test_node.last_message, timeout=30, lock=mininode_lock) with mininode_lock: test_node.last_message["block"].block.calc_sha256() assert_equal( test_node.last_message["block"].block.sha256, int(new_blocks[0], 16)) # Generate an old compactblock, and verify that it's not accepted. cur_height = node.getblockcount() hashPrevBlock = int(node.getblockhash(cur_height - 5), 16) block = self.build_block_on_tip(node) block.hashPrevBlock = hashPrevBlock block.solve() comp_block = HeaderAndShortIDs() comp_block.initialize_from_block(block) test_node.send_and_ping(msg_cmpctblock(comp_block.to_p2p())) tips = node.getchaintips() found = False for x in tips: if x["hash"] == block.hash: assert_equal(x["status"], "headers-only") found = True break assert(found) # Requesting this block via getblocktxn should silently fail # (to avoid fingerprinting attacks). msg = msg_getblocktxn() msg.block_txn_request = BlockTransactionsRequest(block.sha256, [0]) with mininode_lock: test_node.last_message.pop("blocktxn", None) test_node.send_and_ping(msg) with mininode_lock: assert "blocktxn" not in test_node.last_message def test_end_to_end_block_relay(self, node, listeners): utxo = self.utxos.pop(0) block = self.build_block_with_transactions(node, utxo, 10) [l.clear_block_announcement() for l in listeners] node.submitblock(ToHex(block)) for l in listeners: wait_until(lambda: l.received_block_announcement(), timeout=30, lock=mininode_lock) with mininode_lock: for l in listeners: assert "cmpctblock" in l.last_message l.last_message["cmpctblock"].header_and_shortids.header.calc_sha256( ) assert_equal( l.last_message["cmpctblock"].header_and_shortids.header.sha256, block.sha256) # Test that we don't get disconnected if we relay a compact block with valid header, # but invalid transactions. def test_invalid_tx_in_compactblock(self, node, test_node): assert(len(self.utxos)) utxo = self.utxos[0] block = self.build_block_with_transactions(node, utxo, 5) del block.vtx[3] block.hashMerkleRoot = block.calc_merkle_root() block.solve() # Now send the compact block with all transactions prefilled, and # verify that we don't get disconnected. 
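        # (Editorial note on the rationale: a peer relaying a compact block
        # that reconstructs to an invalid block is not necessarily at fault,
        # e.g. a short-ID collision can corrupt reconstruction, so the node
        # should reject the block without disconnecting or banning.)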
comp_block = HeaderAndShortIDs() comp_block.initialize_from_block( block, prefill_list=[0, 1, 2, 3, 4], use_witness=False) msg = msg_cmpctblock(comp_block.to_p2p()) test_node.send_and_ping(msg) # Check that the tip didn't advance assert(int(node.getbestblockhash(), 16) != block.sha256) test_node.sync_with_ping() # Helper for enabling cb announcements # Send the sendcmpct request and sync headers def request_cb_announcements(self, peer, node, version=1): tip = node.getbestblockhash() peer.get_headers(locator=[int(tip, 16)], hashstop=0) msg = msg_sendcmpct() msg.version = version msg.announce = True peer.send_and_ping(msg) def test_compactblock_reconstruction_multiple_peers(self, node, stalling_peer, delivery_peer): assert(len(self.utxos)) def announce_cmpct_block(node, peer): utxo = self.utxos.pop(0) block = self.build_block_with_transactions(node, utxo, 5) cmpct_block = HeaderAndShortIDs() cmpct_block.initialize_from_block(block) msg = msg_cmpctblock(cmpct_block.to_p2p()) peer.send_and_ping(msg) with mininode_lock: assert "getblocktxn" in peer.last_message return block, cmpct_block block, cmpct_block = announce_cmpct_block(node, stalling_peer) for tx in block.vtx[1:]: delivery_peer.send_message(msg_tx(tx)) delivery_peer.sync_with_ping() mempool = node.getrawmempool() for tx in block.vtx[1:]: assert(tx.hash in mempool) delivery_peer.send_and_ping(msg_cmpctblock(cmpct_block.to_p2p())) assert_equal(int(node.getbestblockhash(), 16), block.sha256) self.utxos.append( [block.vtx[-1].sha256, 0, block.vtx[-1].vout[0].nValue]) # Now test that delivering an invalid compact block won't break relay block, cmpct_block = announce_cmpct_block(node, stalling_peer) for tx in block.vtx[1:]: delivery_peer.send_message(msg_tx(tx)) delivery_peer.sync_with_ping() cmpct_block.prefilled_txn[0].tx.wit.vtxinwit = [CTxInWitness()] cmpct_block.prefilled_txn[0].tx.wit.vtxinwit[0].scriptWitness.stack = [ser_uint256(0)] cmpct_block.use_witness = True delivery_peer.send_and_ping(msg_cmpctblock(cmpct_block.to_p2p())) assert(int(node.getbestblockhash(), 16) != block.sha256) msg = msg_blocktxn() msg.block_transactions.blockhash = block.sha256 msg.block_transactions.transactions = block.vtx[1:] stalling_peer.send_and_ping(msg) assert_equal(int(node.getbestblockhash(), 16), block.sha256) def run_test(self): # Setup the p2p connections and start up the network thread. self.test_node = TestNode() self.ex_softfork_node = TestNode() self.old_node = TestNode() # version 1 peer connections = [] connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], self.test_node)) connections.append(NodeConn('127.0.0.1', p2p_port(1), self.nodes[1], self.ex_softfork_node, services=NODE_NETWORK)) connections.append(NodeConn('127.0.0.1', p2p_port(1), self.nodes[1], self.old_node, services=NODE_NETWORK)) self.test_node.add_connection(connections[0]) self.ex_softfork_node.add_connection(connections[1]) self.old_node.add_connection(connections[2]) NetworkThread().start() # Start up network handling in another thread # Test logic begins here self.test_node.wait_for_verack() # We will need UTXOs to construct transactions in later tests. self.make_utxos() self.log.info("Running tests:") self.log.info("\tTesting SENDCMPCT p2p message... 
") self.test_sendcmpct(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_sendcmpct( self.nodes[1], self.ex_softfork_node, 1, old_node=self.old_node) sync_blocks(self.nodes) self.log.info("\tTesting compactblock construction...") self.test_compactblock_construction(self.nodes[0], self.test_node) sync_blocks(self.nodes) self.test_compactblock_construction( self.nodes[1], self.ex_softfork_node) sync_blocks(self.nodes) self.log.info("\tTesting compactblock requests... ") self.test_compactblock_requests(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_compactblock_requests( self.nodes[1], self.ex_softfork_node, 2) sync_blocks(self.nodes) self.log.info("\tTesting getblocktxn requests...") self.test_getblocktxn_requests(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_getblocktxn_requests(self.nodes[1], self.ex_softfork_node, 2) sync_blocks(self.nodes) self.log.info("\tTesting getblocktxn handler...") self.test_getblocktxn_handler(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_getblocktxn_handler(self.nodes[1], self.ex_softfork_node, 2) self.test_getblocktxn_handler(self.nodes[1], self.old_node, 1) sync_blocks(self.nodes) self.log.info( "\tTesting compactblock requests/announcements not at chain tip...") self.test_compactblocks_not_at_tip(self.nodes[0], self.test_node) sync_blocks(self.nodes) self.test_compactblocks_not_at_tip( self.nodes[1], self.ex_softfork_node) self.test_compactblocks_not_at_tip(self.nodes[1], self.old_node) sync_blocks(self.nodes) self.log.info("\tTesting handling of incorrect blocktxn responses...") self.test_incorrect_blocktxn_response(self.nodes[0], self.test_node, 1) sync_blocks(self.nodes) self.test_incorrect_blocktxn_response( self.nodes[1], self.ex_softfork_node, 2) sync_blocks(self.nodes) # End-to-end block relay tests self.log.info("\tTesting end-to-end block relay...") self.request_cb_announcements(self.test_node, self.nodes[0]) self.request_cb_announcements(self.old_node, self.nodes[1]) self.request_cb_announcements( self.ex_softfork_node, self.nodes[1], version=2) self.test_end_to_end_block_relay( self.nodes[0], [self.ex_softfork_node, self.test_node, self.old_node]) self.test_end_to_end_block_relay( self.nodes[1], [self.ex_softfork_node, self.test_node, self.old_node]) self.log.info("\tTesting handling of invalid compact blocks...") self.test_invalid_tx_in_compactblock(self.nodes[0], self.test_node) self.test_invalid_tx_in_compactblock( self.nodes[1], self.ex_softfork_node) self.test_invalid_tx_in_compactblock(self.nodes[1], self.old_node) self.log.info( "\tTesting reconstructing compact blocks from all peers...") self.test_compactblock_reconstruction_multiple_peers( self.nodes[1], self.ex_softfork_node, self.old_node) sync_blocks(self.nodes) self.log.info("\tTesting invalid index in cmpctblock message...") self.test_invalid_cmpctblock_message() if __name__ == '__main__': CompactBlocksTest().main() diff --git a/test/functional/p2p-feefilter.py b/test/functional/p2p-feefilter.py index 4c6d5fe6b..5b21634b4 100755 --- a/test/functional/p2p-feefilter.py +++ b/test/functional/p2p-feefilter.py @@ -1,112 +1,109 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
#
from test_framework.mininode import *
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *
import time

'''
FeeFilterTest -- test processing of feefilter messages
'''


def hashToHex(hash):
    return format(hash, '064x')


# Wait up to 60 secs to see if the testnode has received all the expected invs
def allInvsMatch(invsExpected, testnode):
    for x in range(60):
        with mininode_lock:
            if (sorted(invsExpected) == sorted(testnode.txinvs)):
                return True
        time.sleep(1)
    return False


class TestNode(NodeConnCB):
    def __init__(self):
        super().__init__()
        self.txinvs = []

    def on_inv(self, conn, message):
        for i in message.inv:
            if (i.type == 1):
                self.txinvs.append(hashToHex(i.hash))

    def clear_invs(self):
        with mininode_lock:
            self.txinvs = []


class FeeFilterTest(BitcoinTestFramework):
-
-    def __init__(self):
-        super().__init__()
+    def set_test_params(self):
        self.num_nodes = 2
-        self.setup_clean_chain = False

    def run_test(self):
        node1 = self.nodes[1]
        node0 = self.nodes[0]
        # Get out of IBD
        node1.generate(1)
        sync_blocks(self.nodes)

        # Setup the p2p connections and start up the network thread.
        test_node = TestNode()
        connection = NodeConn(
            '127.0.0.1', p2p_port(0), self.nodes[0], test_node)
        test_node.add_connection(connection)
        NetworkThread().start()
        test_node.wait_for_verack()

        # Test that invs are received for all txs at feerate of 20 sat/byte
        node1.settxfee(Decimal("0.00020000"))
        txids = [node1.sendtoaddress(node1.getnewaddress(), 1)
                 for x in range(3)]
        assert(allInvsMatch(txids, test_node))
        test_node.clear_invs()

        # Set a filter of 15 sat/byte
        test_node.send_and_ping(msg_feefilter(15000))

        # Test that txs are still being received (paying 20 sat/byte)
        txids = [node1.sendtoaddress(node1.getnewaddress(), 1)
                 for x in range(3)]
        assert(allInvsMatch(txids, test_node))
        test_node.clear_invs()

        # Change tx fee rate to 10 sat/byte and test they are no longer
        # received
        node1.settxfee(Decimal("0.00010000"))
        [node1.sendtoaddress(node1.getnewaddress(), 1) for x in range(3)]
        sync_mempools(self.nodes)  # must be sure node 0 has received all txs

        # Send one transaction from node0 that should be received, so that
        # we can sync the test on receipt (if node1's txs were relayed, they'd
        # be received by the time this node0 tx is received). This is
        # unfortunately reliant on the current relay behavior where we batch up
        # to 35 entries in an inv, which means that when this next transaction
        # is eligible for relay, the prior transactions from node1 are eligible
        # as well.
        node0.settxfee(Decimal("0.00020000"))
        txids = [node0.sendtoaddress(node0.getnewaddress(), 1)]
        assert(allInvsMatch(txids, test_node))
        test_node.clear_invs()

        # Remove fee filter and check that txs are received again
        test_node.send_and_ping(msg_feefilter(0))
        txids = [node1.sendtoaddress(node1.getnewaddress(), 1)
                 for x in range(3)]
        assert(allInvsMatch(txids, test_node))
        test_node.clear_invs()


if __name__ == '__main__':
    FeeFilterTest().main()

diff --git a/test/functional/p2p-fullblocktest.py b/test/functional/p2p-fullblocktest.py
index 85f078443..51cf76708 100755
--- a/test/functional/p2p-fullblocktest.py
+++ b/test/functional/p2p-fullblocktest.py
@@ -1,1317 +1,1314 @@
#!/usr/bin/env python3
# Copyright (c) 2015-2016 The Bitcoin Core developers
# Copyright (c) 2017 The Bitcoin developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
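# Illustrative sketch, not from the original test: the test below parks each
# block's mature coin with save_spendable_output() and pops the oldest with
# get_spendable_output() (framework helpers not shown in this excerpt). The
# FIFO pattern in miniature, with hypothetical names:
from collections import deque

_spendable_sketch = deque()

def _save_spendable_output_sketch(out):
    _spendable_sketch.append(out)       # park this block's coin

def _get_spendable_output_sketch():
    return _spendable_sketch.popleft()  # spend coins oldest-first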
from test_framework.test_framework import ComparisonTestFramework from test_framework.util import * from test_framework.comptool import TestManager, TestInstance, RejectResult from test_framework.blocktools import * import time from test_framework.key import CECKey from test_framework.script import * import struct from test_framework.cdefs import LEGACY_MAX_BLOCK_SIZE, MAX_BLOCK_SIGOPS_PER_MB class PreviousSpendableOutput(): def __init__(self, tx=CTransaction(), n=-1): self.tx = tx self.n = n # the output we're spending ''' This reimplements tests from the bitcoinj/FullBlockTestGenerator used by the pull-tester. We use the testing framework in which we expect a particular answer from each test. ''' # Use this class for tests that require behavior other than normal "mininode" behavior. # For now, it is used to serialize a bloated varint (b64). class CBrokenBlock(CBlock): def __init__(self, header=None): super(CBrokenBlock, self).__init__(header) def initialize(self, base_block): self.vtx = copy.deepcopy(base_block.vtx) self.hashMerkleRoot = self.calc_merkle_root() def serialize(self): r = b"" r += super(CBlock, self).serialize() r += struct.pack(" b1 (0) -> b2 (1) block(1, spend=out[0]) save_spendable_output() yield accepted() block(2, spend=out[1]) yield accepted() save_spendable_output() # so fork like this: # # genesis -> b1 (0) -> b2 (1) # \-> b3 (1) # # Nothing should happen at this point. We saw b2 first so it takes # priority. tip(1) b3 = block(3, spend=out[1]) txout_b3 = PreviousSpendableOutput(b3.vtx[1], 0) yield rejected() # Now we add another block to make the alternative chain longer. # # genesis -> b1 (0) -> b2 (1) # \-> b3 (1) -> b4 (2) block(4, spend=out[2]) yield accepted() # ... and back to the first chain. # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b3 (1) -> b4 (2) tip(2) block(5, spend=out[2]) save_spendable_output() yield rejected() block(6, spend=out[3]) yield accepted() # Try to create a fork that double-spends # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b7 (2) -> b8 (4) # \-> b3 (1) -> b4 (2) tip(5) block(7, spend=out[2]) yield rejected() block(8, spend=out[4]) yield rejected() # Try to create a block that has too much fee # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b9 (4) # \-> b3 (1) -> b4 (2) tip(6) block(9, spend=out[4], additional_coinbase_value=1) yield rejected(RejectResult(16, b'bad-cb-amount')) # Create a fork that ends in a block with too much fee (the one that causes the reorg) # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b10 (3) -> b11 (4) # \-> b3 (1) -> b4 (2) tip(5) block(10, spend=out[3]) yield rejected() block(11, spend=out[4], additional_coinbase_value=1) yield rejected(RejectResult(16, b'bad-cb-amount')) # Try again, but with a valid fork first # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b14 (5) # (b12 added last) # \-> b3 (1) -> b4 (2) tip(5) b12 = block(12, spend=out[3]) save_spendable_output() b13 = block(13, spend=out[4]) # Deliver the block header for b12, and the block b13. # b13 should be accepted but the tip won't advance until b12 is # delivered. yield TestInstance([[CBlockHeader(b12), None], [b13, False]]) save_spendable_output() # b14 is invalid, but the node won't know that until it tries to connect # Tip still can't advance because b12 is missing block(14, spend=out[5], additional_coinbase_value=1) yield rejected() yield TestInstance([[b12, True, b13.sha256]]) # New tip should be b13. 
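        # Illustrative sketch, not from the original test: the 'bad-cb-amount'
        # rejections above (b9, b11, b14) stem from the rule that a coinbase
        # may claim at most subsidy plus fees, so additional_coinbase_value=1
        # overshoots by a single satoshi. Regtest halves the 50 BTC subsidy
        # every 150 blocks:
        def _regtest_subsidy_sketch(height, coin=100000000, halving_interval=150):
            return (50 * coin) >> (height // halving_interval)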
# Add a block with MAX_BLOCK_SIGOPS_PER_MB and one with one more sigop # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b16 (6) # \-> b3 (1) -> b4 (2) # Test that a block with a lot of checksigs is okay lots_of_checksigs = CScript( [OP_CHECKSIG] * (MAX_BLOCK_SIGOPS_PER_MB - 1)) tip(13) block(15, spend=out[5], script=lots_of_checksigs) yield accepted() save_spendable_output() # Test that a block with too many checksigs is rejected too_many_checksigs = CScript([OP_CHECKSIG] * (MAX_BLOCK_SIGOPS_PER_MB)) block(16, spend=out[6], script=too_many_checksigs) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Attempt to spend a transaction created on a different fork # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b17 (b3.vtx[1]) # \-> b3 (1) -> b4 (2) tip(15) block(17, spend=txout_b3) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # Attempt to spend a transaction created on a different fork (on a fork this time) # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) # \-> b18 (b3.vtx[1]) -> b19 (6) # \-> b3 (1) -> b4 (2) tip(13) block(18, spend=txout_b3) yield rejected() block(19, spend=out[6]) yield rejected() # Attempt to spend a coinbase at depth too low # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b20 (7) # \-> b3 (1) -> b4 (2) tip(15) block(20, spend=out[7]) yield rejected(RejectResult(16, b'bad-txns-premature-spend-of-coinbase')) # Attempt to spend a coinbase at depth too low (on a fork this time) # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) # \-> b21 (6) -> b22 (5) # \-> b3 (1) -> b4 (2) tip(13) block(21, spend=out[6]) yield rejected() block(22, spend=out[5]) yield rejected() # Create a block on either side of LEGACY_MAX_BLOCK_SIZE and make sure its accepted/rejected # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b23 (6) # \-> b24 (6) -> b25 (7) # \-> b3 (1) -> b4 (2) tip(15) b23 = block(23, spend=out[6]) tx = CTransaction() script_length = LEGACY_MAX_BLOCK_SIZE - len(b23.serialize()) - 69 script_output = CScript([b'\x00' * script_length]) tx.vout.append(CTxOut(0, script_output)) tx.vin.append(CTxIn(COutPoint(b23.vtx[1].sha256, 0))) b23 = update_block(23, [tx]) # Make sure the math above worked out to produce a max-sized block assert_equal(len(b23.serialize()), LEGACY_MAX_BLOCK_SIZE) yield accepted() save_spendable_output() # Create blocks with a coinbase input script size out of range # genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3) # \-> b12 (3) -> b13 (4) -> b15 (5) -> b23 (6) -> b30 (7) # \-> ... (6) -> ... (7) # \-> b3 (1) -> b4 (2) tip(15) b26 = block(26, spend=out[6]) b26.vtx[0].vin[0].scriptSig = b'\x00' b26.vtx[0].rehash() # update_block causes the merkle root to get updated, even with no new # transactions, and updates the required state. b26 = update_block(26, []) yield rejected(RejectResult(16, b'bad-cb-length')) # Extend the b26 chain to make sure bitcoind isn't accepting b26 b27 = block(27, spend=out[7]) yield rejected(False) # Now try a too-large-coinbase script tip(15) b28 = block(28, spend=out[6]) b28.vtx[0].vin[0].scriptSig = b'\x00' * 101 b28.vtx[0].rehash() b28 = update_block(28, []) yield rejected(RejectResult(16, b'bad-cb-length')) # Extend the b28 chain to make sure bitcoind isn't accepting b28 b29 = block(29, spend=out[7]) yield rejected(False) # b30 has a max-sized coinbase scriptSig. 
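        # Illustrative sketch, not from the original test: the sigop limits
        # exercised by b15/b16 above and b31-b41 below use legacy counting,
        # where a bare OP_CHECKSIG(VERIFY) costs 1 sigop and an
        # OP_CHECKMULTISIG(VERIFY) a flat 20. Toy counter over raw opcode
        # values; the opcode sets are passed in to stay framework-independent:
        def _count_legacy_sigops_sketch(opcodes, checksig_ops, multisig_ops):
            return sum(1 if op in checksig_ops else 20 if op in multisig_ops else 0
                       for op in opcodes)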
tip(23) b30 = block(30) b30.vtx[0].vin[0].scriptSig = b'\x00' * 100 b30.vtx[0].rehash() b30 = update_block(30, []) yield accepted() save_spendable_output() # b31 - b35 - check sigops of OP_CHECKMULTISIG / OP_CHECKMULTISIGVERIFY / OP_CHECKSIGVERIFY # # genesis -> ... -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10) # \-> b36 (11) # \-> b34 (10) # \-> b32 (9) # # MULTISIG: each op code counts as 20 sigops. To create the edge case, # pack another 19 sigops at the end. lots_of_multisigs = CScript([OP_CHECKMULTISIG] * ( (MAX_BLOCK_SIGOPS_PER_MB - 1) // 20) + [OP_CHECKSIG] * 19) b31 = block(31, spend=out[8], script=lots_of_multisigs) assert_equal(get_legacy_sigopcount_block(b31), MAX_BLOCK_SIGOPS_PER_MB) yield accepted() save_spendable_output() # this goes over the limit because the coinbase has one sigop too_many_multisigs = CScript( [OP_CHECKMULTISIG] * (MAX_BLOCK_SIGOPS_PER_MB // 20)) b32 = block(32, spend=out[9], script=too_many_multisigs) assert_equal(get_legacy_sigopcount_block( b32), MAX_BLOCK_SIGOPS_PER_MB + 1) yield rejected(RejectResult(16, b'bad-blk-sigops')) # CHECKMULTISIGVERIFY tip(31) lots_of_multisigs = CScript([OP_CHECKMULTISIGVERIFY] * ( (MAX_BLOCK_SIGOPS_PER_MB - 1) // 20) + [OP_CHECKSIG] * 19) block(33, spend=out[9], script=lots_of_multisigs) yield accepted() save_spendable_output() too_many_multisigs = CScript( [OP_CHECKMULTISIGVERIFY] * (MAX_BLOCK_SIGOPS_PER_MB // 20)) block(34, spend=out[10], script=too_many_multisigs) yield rejected(RejectResult(16, b'bad-blk-sigops')) # CHECKSIGVERIFY tip(33) lots_of_checksigs = CScript( [OP_CHECKSIGVERIFY] * (MAX_BLOCK_SIGOPS_PER_MB - 1)) b35 = block(35, spend=out[10], script=lots_of_checksigs) yield accepted() save_spendable_output() too_many_checksigs = CScript( [OP_CHECKSIGVERIFY] * (MAX_BLOCK_SIGOPS_PER_MB)) block(36, spend=out[11], script=too_many_checksigs) yield rejected(RejectResult(16, b'bad-blk-sigops')) # Check spending of a transaction in a block which failed to connect # # b6 (3) # b12 (3) -> b13 (4) -> b15 (5) -> b23 (6) -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10) # \-> b37 (11) # \-> b38 (11/37) # # save 37's spendable output, but then double-spend out11 to invalidate # the block tip(35) b37 = block(37, spend=out[11]) txout_b37 = PreviousSpendableOutput(b37.vtx[1], 0) tx = create_and_sign_tx(out[11].tx, out[11].n, 0) b37 = update_block(37, [tx]) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # attempt to spend b37's first non-coinbase tx, at which point b37 was # still considered valid tip(35) block(38, spend=txout_b37) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # Check P2SH SigOp counting # # # 13 (4) -> b15 (5) -> b23 (6) -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b41 (12) # \-> b40 (12) # # b39 - create some P2SH outputs that will require 6 sigops to spend: # # redeem_script = COINBASE_PUBKEY, (OP_2DUP+OP_CHECKSIGVERIFY) * 5, OP_CHECKSIG # p2sh_script = OP_HASH160, ripemd160(sha256(script)), OP_EQUAL # tip(35) b39 = block(39) b39_outputs = 0 b39_sigops_per_output = 6 # Build the redeem script, hash it, use hash to create the p2sh script redeem_script = CScript([self.coinbase_pubkey] + [ OP_2DUP, OP_CHECKSIGVERIFY] * 5 + [OP_CHECKSIG]) redeem_script_hash = hash160(redeem_script) p2sh_script = CScript([OP_HASH160, redeem_script_hash, OP_EQUAL]) # Create a transaction that spends one satoshi to the p2sh_script, the rest to OP_TRUE # This must be signed because it is spending a coinbase spend = out[11] tx = create_tx(spend.tx, spend.n, 1, p2sh_script) 
tx.vout.append( CTxOut(spend.tx.vout[spend.n].nValue - 1, CScript([OP_TRUE]))) self.sign_tx(tx, spend.tx, spend.n) tx.rehash() b39 = update_block(39, [tx]) b39_outputs += 1 # Until block is full, add tx's with 1 satoshi to p2sh_script, the rest # to OP_TRUE tx_new = None tx_last = tx total_size = len(b39.serialize()) while(total_size < LEGACY_MAX_BLOCK_SIZE): tx_new = create_tx(tx_last, 1, 1, p2sh_script) tx_new.vout.append( CTxOut(tx_last.vout[1].nValue - 1, CScript([OP_TRUE]))) tx_new.rehash() total_size += len(tx_new.serialize()) if total_size >= LEGACY_MAX_BLOCK_SIZE: break b39.vtx.append(tx_new) # add tx to block tx_last = tx_new b39_outputs += 1 b39 = update_block(39, []) yield accepted() save_spendable_output() # Test sigops in P2SH redeem scripts # # b40 creates 3333 tx's spending the 6-sigop P2SH outputs from b39 for a total of 19998 sigops. # The first tx has one sigop and then at the end we add 2 more to put us just over the max. # # b41 does the same, less one, so it has the maximum sigops permitted. # tip(39) b40 = block(40, spend=out[12]) sigops = get_legacy_sigopcount_block(b40) numTxes = (MAX_BLOCK_SIGOPS_PER_MB - sigops) // b39_sigops_per_output assert_equal(numTxes <= b39_outputs, True) lastOutpoint = COutPoint(b40.vtx[1].sha256, 0) lastAmount = b40.vtx[1].vout[0].nValue new_txs = [] for i in range(1, numTxes + 1): tx = CTransaction() tx.vout.append(CTxOut(1, CScript([OP_TRUE]))) tx.vin.append(CTxIn(lastOutpoint, b'')) # second input is corresponding P2SH output from b39 tx.vin.append(CTxIn(COutPoint(b39.vtx[i].sha256, 0), b'')) # Note: must pass the redeem_script (not p2sh_script) to the # signature hash function sighash = SignatureHashForkId( redeem_script, tx, 1, SIGHASH_ALL | SIGHASH_FORKID, lastAmount) sig = self.coinbase_key.sign( sighash) + bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID])) scriptSig = CScript([sig, redeem_script]) tx.vin[1].scriptSig = scriptSig tx.rehash() new_txs.append(tx) lastOutpoint = COutPoint(tx.sha256, 0) lastAmount = tx.vout[0].nValue b40_sigops_to_fill = MAX_BLOCK_SIGOPS_PER_MB - \ (numTxes * b39_sigops_per_output + sigops) + 1 tx = CTransaction() tx.vin.append(CTxIn(lastOutpoint, b'')) tx.vout.append(CTxOut(1, CScript([OP_CHECKSIG] * b40_sigops_to_fill))) tx.rehash() new_txs.append(tx) update_block(40, new_txs) yield rejected(RejectResult(16, b'bad-blk-sigops')) # same as b40, but one less sigop tip(39) b41 = block(41, spend=None) update_block(41, b40.vtx[1:-1]) b41_sigops_to_fill = b40_sigops_to_fill - 1 tx = CTransaction() tx.vin.append(CTxIn(lastOutpoint, b'')) tx.vout.append(CTxOut(1, CScript([OP_CHECKSIG] * b41_sigops_to_fill))) tx.rehash() update_block(41, [tx]) yield accepted() # Fork off of b39 to create a constant base again # # b23 (6) -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) # \-> b41 (12) # tip(39) block(42, spend=out[12]) yield rejected() save_spendable_output() block(43, spend=out[13]) yield accepted() save_spendable_output() # Test a number of really invalid scenarios # # -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) -> b44 (14) # \-> ??? (15) # The next few blocks are going to be created "by hand" since they'll do funky things, such as having # the first transaction be non-coinbase, etc. The purpose of b44 is to # make sure this works. 
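        # Illustrative sketch, not from the original test: the hand-built
        # blocks below hardcode nBits = 0x207fffff, the regtest proof-of-work
        # limit. Compact encoding is an exponent byte plus a 23-bit mantissa
        # (valid for the large exponents used here):
        def _compact_to_target_sketch(bits):
            exponent = bits >> 24
            mantissa = bits & 0x007fffff
            return mantissa << (8 * (exponent - 3))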
height = self.block_heights[self.tip.sha256] + 1 coinbase = create_coinbase(height, self.coinbase_pubkey) b44 = CBlock() b44.nTime = self.tip.nTime + 1 b44.hashPrevBlock = self.tip.sha256 b44.nBits = 0x207fffff b44.vtx.append(coinbase) b44.hashMerkleRoot = b44.calc_merkle_root() b44.solve() self.tip = b44 self.block_heights[b44.sha256] = height self.blocks[44] = b44 yield accepted() # A block with a non-coinbase as the first tx non_coinbase = create_tx(out[15].tx, out[15].n, 1) b45 = CBlock() b45.nTime = self.tip.nTime + 1 b45.hashPrevBlock = self.tip.sha256 b45.nBits = 0x207fffff b45.vtx.append(non_coinbase) b45.hashMerkleRoot = b45.calc_merkle_root() b45.calc_sha256() b45.solve() self.block_heights[b45.sha256] = self.block_heights[ self.tip.sha256] + 1 self.tip = b45 self.blocks[45] = b45 yield rejected(RejectResult(16, b'bad-cb-missing')) # A block with no txns tip(44) b46 = CBlock() b46.nTime = b44.nTime + 1 b46.hashPrevBlock = b44.sha256 b46.nBits = 0x207fffff b46.vtx = [] b46.hashMerkleRoot = 0 b46.solve() self.block_heights[b46.sha256] = self.block_heights[b44.sha256] + 1 self.tip = b46 assert 46 not in self.blocks self.blocks[46] = b46 s = ser_uint256(b46.hashMerkleRoot) yield rejected(RejectResult(16, b'bad-cb-missing')) # A block with invalid work tip(44) b47 = block(47, solve=False) target = uint256_from_compact(b47.nBits) while b47.sha256 < target: # changed > to < b47.nNonce += 1 b47.rehash() yield rejected(RejectResult(16, b'high-hash')) # A block with timestamp > 2 hrs in the future tip(44) b48 = block(48, solve=False) b48.nTime = int(time.time()) + 60 * 60 * 3 b48.solve() yield rejected(RejectResult(16, b'time-too-new')) # A block with an invalid merkle hash tip(44) b49 = block(49) b49.hashMerkleRoot += 1 b49.solve() yield rejected(RejectResult(16, b'bad-txnmrklroot')) # A block with an incorrect POW limit tip(44) b50 = block(50) b50.nBits = b50.nBits - 1 b50.solve() yield rejected(RejectResult(16, b'bad-diffbits')) # A block with two coinbase txns tip(44) b51 = block(51) cb2 = create_coinbase(51, self.coinbase_pubkey) b51 = update_block(51, [cb2]) yield rejected(RejectResult(16, b'bad-tx-coinbase')) # A block w/ duplicate txns # Note: txns have to be in the right position in the merkle tree to # trigger this error tip(44) b52 = block(52, spend=out[15]) tx = create_tx(b52.vtx[1], 0, 1) b52 = update_block(52, [tx, tx]) yield rejected(RejectResult(16, b'bad-txns-duplicate')) # Test block timestamps # -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) # \-> b54 (15) # tip(43) block(53, spend=out[14]) yield rejected() # rejected since b44 is at same height save_spendable_output() # invalid timestamp (b35 is 5 blocks back, so its time is # MedianTimePast) b54 = block(54, spend=out[15]) b54.nTime = b35.nTime - 1 b54.solve() yield rejected(RejectResult(16, b'time-too-old')) # valid timestamp tip(53) b55 = block(55, spend=out[15]) b55.nTime = b35.nTime update_block(55, []) yield accepted() save_spendable_output() # Test CVE-2012-2459 # # -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57p2 (16) # \-> b57 (16) # \-> b56p2 (16) # \-> b56 (16) # # Merkle tree malleability (CVE-2012-2459): repeating sequences of transactions in a block without # affecting the merkle root of a block, while still invalidating it. # See: src/consensus/merkle.h # # b57 has three txns: coinbase, tx, tx1. The merkle root computation will duplicate tx. # Result: OK # # b56 copies b57 but duplicates tx1 and does not recalculate the block hash. 
So it has a valid merkle # root but duplicate transactions. # Result: Fails # # b57p2 has six transactions in its merkle tree: # - coinbase, tx, tx1, tx2, tx3, tx4 # Merkle root calculation will duplicate as necessary. # Result: OK. # # b56p2 copies b57p2 but adds both tx3 and tx4. The purpose of the test is to make sure the code catches # duplicate txns that are not next to one another with the "bad-txns-duplicate" error (which indicates # that the error was caught early, avoiding a DOS vulnerability.) # b57 - a good block with 2 txs, don't submit until end tip(55) b57 = block(57) tx = create_and_sign_tx(out[16].tx, out[16].n, 1) tx1 = create_tx(tx, 0, 1) b57 = update_block(57, [tx, tx1]) # b56 - copy b57, add a duplicate tx tip(55) b56 = copy.deepcopy(b57) self.blocks[56] = b56 assert_equal(len(b56.vtx), 3) b56 = update_block(56, [tx1]) assert_equal(b56.hash, b57.hash) yield rejected(RejectResult(16, b'bad-txns-duplicate')) # b57p2 - a good block with 6 tx'es, don't submit until end tip(55) b57p2 = block("57p2") tx = create_and_sign_tx(out[16].tx, out[16].n, 1) tx1 = create_tx(tx, 0, 1) tx2 = create_tx(tx1, 0, 1) tx3 = create_tx(tx2, 0, 1) tx4 = create_tx(tx3, 0, 1) b57p2 = update_block("57p2", [tx, tx1, tx2, tx3, tx4]) # b56p2 - copy b57p2, duplicate two non-consecutive tx's tip(55) b56p2 = copy.deepcopy(b57p2) self.blocks["b56p2"] = b56p2 assert_equal(b56p2.hash, b57p2.hash) assert_equal(len(b56p2.vtx), 6) b56p2 = update_block("b56p2", [tx3, tx4]) yield rejected(RejectResult(16, b'bad-txns-duplicate')) tip("57p2") yield accepted() tip(57) yield rejected() # rejected because 57p2 seen first save_spendable_output() # Test a few invalid tx types # # -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) # \-> ??? (17) # # tx with prevout.n out of range tip(57) b58 = block(58, spend=out[17]) tx = CTransaction() assert(len(out[17].tx.vout) < 42) tx.vin.append( CTxIn(COutPoint(out[17].tx.sha256, 42), CScript([OP_TRUE]), 0xffffffff)) tx.vout.append(CTxOut(0, b"")) tx.calc_sha256() b58 = update_block(58, [tx]) yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent')) # tx with output value > input value out of range tip(57) b59 = block(59) tx = create_and_sign_tx(out[17].tx, out[17].n, 51 * COIN) b59 = update_block(59, [tx]) yield rejected(RejectResult(16, b'bad-txns-in-belowout')) # reset to good chain tip(57) b60 = block(60, spend=out[17]) yield accepted() save_spendable_output() # Test BIP30 # # -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) # \-> b61 (18) # # Blocks are not allowed to contain a transaction whose id matches that of an earlier, # not-fully-spent transaction in the same chain. To test, make identical coinbases; # the second one should be rejected. 
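        # Illustrative sketch, not from the original test: why the b56/b57 and
        # b56p2/b57p2 pairs above collide. Bitcoin's merkle computation
        # duplicates the last hash of any odd-length row, so repeating the
        # final tx(s) of a block reproduces the honest root (CVE-2012-2459):
        import hashlib

        def _dsha256_sketch(b):
            return hashlib.sha256(hashlib.sha256(b).digest()).digest()

        def _merkle_root_sketch(hashes):
            hashes = list(hashes)
            while len(hashes) > 1:
                if len(hashes) % 2:
                    hashes.append(hashes[-1])  # odd row: duplicate last entry
                hashes = [_dsha256_sketch(hashes[i] + hashes[i + 1])
                          for i in range(0, len(hashes), 2)]
            return hashes[0]
        # _merkle_root_sketch([a, b, c]) == _merkle_root_sketch([a, b, c, c])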
# tip(60) b61 = block(61, spend=out[18]) b61.vtx[0].vin[0].scriptSig = b60.vtx[ 0].vin[0].scriptSig # equalize the coinbases b61.vtx[0].rehash() b61 = update_block(61, []) assert_equal(b60.vtx[0].serialize(), b61.vtx[0].serialize()) yield rejected(RejectResult(16, b'bad-txns-BIP30')) # Test tx.isFinal is properly rejected (not an exhaustive tx.isFinal test, that should be in data-driven transaction tests) # # -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) # \-> b62 (18) # tip(60) b62 = block(62) tx = CTransaction() tx.nLockTime = 0xffffffff # this locktime is non-final assert(out[18].n < len(out[18].tx.vout)) tx.vin.append( CTxIn(COutPoint(out[18].tx.sha256, out[18].n))) # don't set nSequence tx.vout.append(CTxOut(0, CScript([OP_TRUE]))) assert(tx.vin[0].nSequence < 0xffffffff) tx.calc_sha256() b62 = update_block(62, [tx]) yield rejected(RejectResult(16, b'bad-txns-nonfinal')) # Test a non-final coinbase is also rejected # # -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) # \-> b63 (-) # tip(60) b63 = block(63) b63.vtx[0].nLockTime = 0xffffffff b63.vtx[0].vin[0].nSequence = 0xDEADBEEF b63.vtx[0].rehash() b63 = update_block(63, []) yield rejected(RejectResult(16, b'bad-txns-nonfinal')) # This checks that a block with a bloated VARINT between the block_header and the array of tx such that # the block is > LEGACY_MAX_BLOCK_SIZE with the bloated varint, but <= LEGACY_MAX_BLOCK_SIZE without the bloated varint, # does not cause a subsequent, identical block with canonical encoding to be rejected. The test does not # care whether the bloated block is accepted or rejected; it only cares that the second block is accepted. # # What matters is that the receiving node should not reject the bloated block, and then reject the canonical # block on the basis that it's the same as an already-rejected block (which would be a consensus failure.) 
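        # Illustrative sketch, not from the original test: the "bloated
        # varint" is the CompactSize prefix of the block's tx vector.
        # CBrokenBlock emits the 9-byte 0xff form even for tiny counts; the
        # value is unchanged, only the encoding is non-canonical (8 bytes
        # longer, matching the +8 size assertion below):
        import struct

        def _compactsize_sketch(n, bloated=False):
            if bloated or n > 0xffffffff:
                return struct.pack("<BQ", 255, n)   # widest form
            if n < 253:
                return struct.pack("<B", n)
            if n <= 0xffff:
                return struct.pack("<BH", 253, n)
            return struct.pack("<BI", 254, n)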
        #
        # -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18)
        #                                                                                       \
        #                                                                                        b64a (18)
        # b64a is a bloated block (non-canonical varint)
        # b64 is a good block (same as b64a but w/ canonical varint)
        #
        tip(60)
        regular_block = block("64a", spend=out[18])

        # make it a "broken_block," with non-canonical serialization
        b64a = CBrokenBlock(regular_block)
        b64a.initialize(regular_block)
        self.blocks["64a"] = b64a
        self.tip = b64a
        tx = CTransaction()

        # use canonical serialization to calculate size
        script_length = LEGACY_MAX_BLOCK_SIZE - \
            len(b64a.normal_serialize()) - 69
        script_output = CScript([b'\x00' * script_length])
        tx.vout.append(CTxOut(0, script_output))
        tx.vin.append(CTxIn(COutPoint(b64a.vtx[1].sha256, 0)))
        b64a = update_block("64a", [tx])
        assert_equal(len(b64a.serialize()), LEGACY_MAX_BLOCK_SIZE + 8)
        yield TestInstance([[self.tip, None]])

        # comptool workaround: to make sure b64 is delivered, manually erase
        # b64a from blockstore
        self.test.block_store.erase(b64a.sha256)

        tip(60)
        b64 = CBlock(b64a)
        b64.vtx = copy.deepcopy(b64a.vtx)
        assert_equal(b64.hash, b64a.hash)
        assert_equal(len(b64.serialize()), LEGACY_MAX_BLOCK_SIZE)
        self.blocks[64] = b64
        update_block(64, [])
        yield accepted()
        save_spendable_output()

        # Spend an output created in the block itself
        #
        # -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19)
        #
        tip(64)
        b65 = block(65)
        tx1 = create_and_sign_tx(
            out[19].tx, out[19].n, out[19].tx.vout[0].nValue)
        tx2 = create_and_sign_tx(tx1, 0, 0)
        update_block(65, [tx1, tx2])
        yield accepted()
        save_spendable_output()

        # Attempt to spend an output created later in the same block
        #
        # -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19)
        #                                                                        \-> b66 (20)
        tip(65)
        b66 = block(66)
        tx1 = create_and_sign_tx(
            out[20].tx, out[20].n, out[20].tx.vout[0].nValue)
        tx2 = create_and_sign_tx(tx1, 0, 1)
        update_block(66, [tx2, tx1])
        yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent'))

        # Attempt to double-spend a transaction created in a block
        #
        # -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19)
        #                                                                        \-> b67 (20)
        #
        tip(65)
        b67 = block(67)
        tx1 = create_and_sign_tx(
            out[20].tx, out[20].n, out[20].tx.vout[0].nValue)
        tx2 = create_and_sign_tx(tx1, 0, 1)
        tx3 = create_and_sign_tx(tx1, 0, 2)
        update_block(67, [tx1, tx2, tx3])
        yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent'))

        # More tests of block subsidy
        #
        # -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19) -> b69 (20)
        #                                                                                    \-> b68 (20)
        #
        # b68 - coinbase with an extra 10 satoshis,
        #       creates a tx that has 9 satoshis from out[20] go to fees
        #       this fails because the coinbase is trying to claim 1 satoshi too much in fees
        #
        # b69 - coinbase with extra 10 satoshis, and a tx that gives a 10 satoshi fee
        #       this succeeds
        #
        tip(65)
        b68 = block(68, additional_coinbase_value=10)
        tx = create_and_sign_tx(
            out[20].tx, out[20].n, out[20].tx.vout[0].nValue - 9)
        update_block(68, [tx])
        yield rejected(RejectResult(16, b'bad-cb-amount'))

        tip(65)
        b69 = block(69, additional_coinbase_value=10)
        tx = create_and_sign_tx(
            out[20].tx, out[20].n, out[20].tx.vout[0].nValue - 10)
        update_block(69, [tx])
        yield accepted()
        save_spendable_output()

        # Test spending the outpoint of a non-existent transaction
        #
        # -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19) -> b69 (20)
        #                                                                        \-> b70 (21)
        #
        tip(69)
        block(70, spend=out[21])
        bogus_tx = CTransaction()
        bogus_tx.sha256 = uint256_from_str(
            b"23c70ed7c0506e9178fc1a987f40a33946d4ad4c962b5ae3a52546da53af0c5c")
        tx = CTransaction()
        tx.vin.append(CTxIn(COutPoint(bogus_tx.sha256, 0), b"", 0xffffffff))
        tx.vout.append(CTxOut(1, b""))
        update_block(70, [tx])
        yield rejected(RejectResult(16, b'bad-txns-inputs-missingorspent'))

        # Test accepting an invalid block which has the same hash as a valid one (via merkle tree tricks)
        #
        # -> b53 (14) -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19) -> b69 (20) -> b72 (21)
        #                                                                                    \-> b71 (21)
        #
        # b72 is a good block.
        # b71 is a copy of 72, but re-adds one of its transactions. However,
        # it has the same hash as b72.
        tip(69)
        b72 = block(72)
        tx1 = create_and_sign_tx(out[21].tx, out[21].n, 2)
        tx2 = create_and_sign_tx(tx1, 0, 1)
        b72 = update_block(72, [tx1, tx2])  # now tip is 72
        b71 = copy.deepcopy(b72)
        b71.vtx.append(tx2)  # add duplicate tx2
        self.block_heights[b71.sha256] = self.block_heights[
            b69.sha256] + 1  # b71 builds off b69
        self.blocks[71] = b71

        assert_equal(len(b71.vtx), 4)
        assert_equal(len(b72.vtx), 3)
        assert_equal(b72.sha256, b71.sha256)

        tip(71)
        yield rejected(RejectResult(16, b'bad-txns-duplicate'))
        tip(72)
        yield accepted()
        save_spendable_output()

        # Test some invalid scripts and MAX_BLOCK_SIGOPS_PER_MB
        #
        # -> b55 (15) -> b57 (16) -> b60 (17) -> b64 (18) -> b65 (19) -> b69 (20) -> b72 (21)
        #                                                                        \-> b** (22)
        #
        # b73 - tx with excessive sigops that are placed after an excessively large script element.
        #       The purpose of the test is to make sure those sigops are counted.
        #
        #       script is a bytearray of size 20,526
        #
        #       bytearray[0-19,998]     : OP_CHECKSIG
        #       bytearray[19,999]       : OP_PUSHDATA4
        #       bytearray[20,000-20,003]: 521  (max_script_element_size+1, in little-endian format)
        #       bytearray[20,004-20,525]: unread data (script_element)
        #       bytearray[20,526]       : OP_CHECKSIG (this puts us over the limit)
        #
        tip(72)
        b73 = block(73)
        size = MAX_BLOCK_SIGOPS_PER_MB - 1 + \
            MAX_SCRIPT_ELEMENT_SIZE + 1 + 5 + 1
        a = bytearray([OP_CHECKSIG] * size)
        a[MAX_BLOCK_SIGOPS_PER_MB - 1] = int("4e", 16)  # OP_PUSHDATA4

        element_size = MAX_SCRIPT_ELEMENT_SIZE + 1
        a[MAX_BLOCK_SIGOPS_PER_MB] = element_size % 256
        a[MAX_BLOCK_SIGOPS_PER_MB + 1] = element_size // 256
        a[MAX_BLOCK_SIGOPS_PER_MB + 2] = 0
        a[MAX_BLOCK_SIGOPS_PER_MB + 3] = 0

        tx = create_and_sign_tx(out[22].tx, 0, 1, CScript(a))
        b73 = update_block(73, [tx])
        assert_equal(
            get_legacy_sigopcount_block(b73), MAX_BLOCK_SIGOPS_PER_MB + 1)
        yield rejected(RejectResult(16, b'bad-blk-sigops'))

        # b74/75 - if we push an invalid script element, all previous sigops are counted,
        # but sigops after the element are not counted.
        #
        # The invalid script element is that the push_data indicates that
        # there will be a large amount of data (0xffffff bytes), but we only
        # provide a much smaller number. These bytes are CHECKSIGS so they would
        # cause b75 to fail for excessive sigops, if those bytes were counted.
# # b74 fails because we put MAX_BLOCK_SIGOPS_PER_MB+1 before the element # b75 succeeds because we put MAX_BLOCK_SIGOPS_PER_MB before the element # # tip(72) b74 = block(74) size = MAX_BLOCK_SIGOPS_PER_MB - 1 + \ MAX_SCRIPT_ELEMENT_SIZE + 42 # total = 20,561 a = bytearray([OP_CHECKSIG] * size) a[MAX_BLOCK_SIGOPS_PER_MB] = 0x4e a[MAX_BLOCK_SIGOPS_PER_MB + 1] = 0xfe a[MAX_BLOCK_SIGOPS_PER_MB + 2] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 3] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 4] = 0xff tx = create_and_sign_tx(out[22].tx, 0, 1, CScript(a)) b74 = update_block(74, [tx]) yield rejected(RejectResult(16, b'bad-blk-sigops')) tip(72) b75 = block(75) size = MAX_BLOCK_SIGOPS_PER_MB - 1 + MAX_SCRIPT_ELEMENT_SIZE + 42 a = bytearray([OP_CHECKSIG] * size) a[MAX_BLOCK_SIGOPS_PER_MB - 1] = 0x4e a[MAX_BLOCK_SIGOPS_PER_MB] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 1] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 2] = 0xff a[MAX_BLOCK_SIGOPS_PER_MB + 3] = 0xff tx = create_and_sign_tx(out[22].tx, 0, 1, CScript(a)) b75 = update_block(75, [tx]) yield accepted() save_spendable_output() # Check that if we push an element filled with CHECKSIGs, they are not # counted tip(75) b76 = block(76) size = MAX_BLOCK_SIGOPS_PER_MB - 1 + MAX_SCRIPT_ELEMENT_SIZE + 1 + 5 a = bytearray([OP_CHECKSIG] * size) a[MAX_BLOCK_SIGOPS_PER_MB - 1] = 0x4e # PUSHDATA4, but leave the following bytes as just checksigs tx = create_and_sign_tx(out[23].tx, 0, 1, CScript(a)) b76 = update_block(76, [tx]) yield accepted() save_spendable_output() # Test transaction resurrection # # -> b77 (24) -> b78 (25) -> b79 (26) # \-> b80 (25) -> b81 (26) -> b82 (27) # # b78 creates a tx, which is spent in b79. After b82, both should be in mempool # # The tx'es must be unsigned and pass the node's mempool policy. It is unsigned for the # rather obscure reason that the Python signature code does not distinguish between # Low-S and High-S values (whereas the bitcoin code has custom code which does so); # as a result of which, the odds are 50% that the python code will use the right # value and the transaction will be accepted into the mempool. Until we modify the # test framework to support low-S signing, we are out of luck. # # To get around this issue, we construct transactions which are not signed and which # spend to OP_TRUE. If the standard-ness rules change, this test would need to be # updated. (Perhaps to spend to a P2SH OP_TRUE script) # tip(76) block(77) tx77 = create_and_sign_tx(out[24].tx, out[24].n, 10 * COIN) update_block(77, [tx77]) yield accepted() save_spendable_output() block(78) tx78 = create_tx(tx77, 0, 9 * COIN) update_block(78, [tx78]) yield accepted() block(79) tx79 = create_tx(tx78, 0, 8 * COIN) update_block(79, [tx79]) yield accepted() # mempool should be empty assert_equal(len(self.nodes[0].getrawmempool()), 0) tip(77) block(80, spend=out[25]) yield rejected() save_spendable_output() block(81, spend=out[26]) yield rejected() # other chain is same length save_spendable_output() block(82, spend=out[27]) yield accepted() # now this chain is longer, triggers re-org save_spendable_output() # now check that tx78 and tx79 have been put back into the peer's # mempool mempool = self.nodes[0].getrawmempool() assert_equal(len(mempool), 2) assert(tx78.hash in mempool) assert(tx79.hash in mempool) # Test invalid opcodes in dead execution paths. 
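        # Illustrative sketch, not from the original test: context for the
        # dead-path case that follows. Opcodes in a branch that never executes
        # are not validated, so an OP_INVALIDOPCODE behind a skipped OP_IF is
        # acceptable (the permanently disabled opcodes are, to our
        # understanding, the exception: they fail even when unexecuted). Toy
        # branch selection:
        def _executed_branch_sketch(condition):
            # the scriptSig below pushes OP_FALSE, so only the OP_ELSE arm runs
            return "if-arm (skipped, unvalidated)" if condition else "else-arm (OP_TRUE)"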
# # -> b81 (26) -> b82 (27) -> b83 (28) # b83 = block(83) op_codes = [OP_IF, OP_INVALIDOPCODE, OP_ELSE, OP_TRUE, OP_ENDIF] script = CScript(op_codes) tx1 = create_and_sign_tx( out[28].tx, out[28].n, out[28].tx.vout[0].nValue, script) tx2 = create_and_sign_tx(tx1, 0, 0, CScript([OP_TRUE])) tx2.vin[0].scriptSig = CScript([OP_FALSE]) tx2.rehash() update_block(83, [tx1, tx2]) yield accepted() save_spendable_output() # Reorg on/off blocks that have OP_RETURN in them (and try to spend them) # # -> b81 (26) -> b82 (27) -> b83 (28) -> b84 (29) -> b87 (30) -> b88 (31) # \-> b85 (29) -> b86 (30) \-> b89a (32) # # b84 = block(84) tx1 = create_tx(out[29].tx, out[29].n, 0, CScript([OP_RETURN])) tx1.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx1.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx1.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx1.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx1.calc_sha256() self.sign_tx(tx1, out[29].tx, out[29].n) tx1.rehash() tx2 = create_tx(tx1, 1, 0, CScript([OP_RETURN])) tx2.vout.append(CTxOut(0, CScript([OP_RETURN]))) tx3 = create_tx(tx1, 2, 0, CScript([OP_RETURN])) tx3.vout.append(CTxOut(0, CScript([OP_TRUE]))) tx4 = create_tx(tx1, 3, 0, CScript([OP_TRUE])) tx4.vout.append(CTxOut(0, CScript([OP_RETURN]))) tx5 = create_tx(tx1, 4, 0, CScript([OP_RETURN])) update_block(84, [tx1, tx2, tx3, tx4, tx5]) yield accepted() save_spendable_output() tip(83) block(85, spend=out[29]) yield rejected() block(86, spend=out[30]) yield accepted() tip(84) block(87, spend=out[30]) yield rejected() save_spendable_output() block(88, spend=out[31]) yield accepted() save_spendable_output() # trying to spend the OP_RETURN output is rejected block("89a", spend=out[32]) tx = create_tx(tx1, 0, 0, CScript([OP_TRUE])) update_block("89a", [tx]) yield rejected() # Test re-org of a week's worth of blocks (1088 blocks) # This test takes a minute or two and can be accomplished in memory # if self.options.runbarelyexpensive: tip(88) LARGE_REORG_SIZE = 1088 test1 = TestInstance(sync_every_block=False) spend = out[32] for i in range(89, LARGE_REORG_SIZE + 89): b = block(i, spend) tx = CTransaction() script_length = LEGACY_MAX_BLOCK_SIZE - len(b.serialize()) - 69 script_output = CScript([b'\x00' * script_length]) tx.vout.append(CTxOut(0, script_output)) tx.vin.append(CTxIn(COutPoint(b.vtx[1].sha256, 0))) b = update_block(i, [tx]) assert_equal(len(b.serialize()), LEGACY_MAX_BLOCK_SIZE) test1.blocks_and_transactions.append([self.tip, True]) save_spendable_output() spend = get_spendable_output() yield test1 chain1_tip = i # now create alt chain of same length tip(88) test2 = TestInstance(sync_every_block=False) for i in range(89, LARGE_REORG_SIZE + 89): block("alt" + str(i)) test2.blocks_and_transactions.append([self.tip, False]) yield test2 # extend alt chain to trigger re-org block("alt" + str(chain1_tip + 1)) yield accepted() # ... and re-org back to the first chain tip(chain1_tip) block(chain1_tip + 1) yield rejected() block(chain1_tip + 2) yield accepted() chain1_tip += 2 if __name__ == '__main__': FullBlockTest().main() diff --git a/test/functional/p2p-leaktests.py b/test/functional/p2p-leaktests.py index 7d660237f..52fb6c144 100755 --- a/test/functional/p2p-leaktests.py +++ b/test/functional/p2p-leaktests.py @@ -1,178 +1,176 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
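# Illustrative sketch, not from the original test: CLazyNode below routes
# every p2p callback into the same "unexpected message" recorder. The
# deny-by-default pattern in miniature, via dynamic attribute lookup:
class _DenyByDefaultSketch:
    def __init__(self):
        self.unexpected = []

    def __getattr__(self, name):
        if name.startswith("on_"):
            return lambda *args, **kwargs: self.unexpected.append(name)
        raise AttributeError(name)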
from test_framework.mininode import *
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *

'''
Test for message sending before handshake completion

A node should never send anything other than VERSION/VERACK/REJECT until it's
received a VERACK.

This test connects to a node and sends it a few messages, trying to entice it
into sending us something it shouldn't.
'''

banscore = 10


class CLazyNode(NodeConnCB):
    def __init__(self):
        super().__init__()
        self.unexpected_msg = False
        self.ever_connected = False

    def bad_message(self, message):
        self.unexpected_msg = True
        self.log.info("should not have received message: %s" %
                      message.command)

    def on_open(self, conn):
        self.connected = True
        self.ever_connected = True

    def on_version(self, conn, message): self.bad_message(message)

    def on_verack(self, conn, message): self.bad_message(message)

    def on_reject(self, conn, message): self.bad_message(message)

    def on_inv(self, conn, message): self.bad_message(message)

    def on_addr(self, conn, message): self.bad_message(message)

    def on_alert(self, conn, message): self.bad_message(message)

    def on_getdata(self, conn, message): self.bad_message(message)

    def on_getblocks(self, conn, message): self.bad_message(message)

    def on_tx(self, conn, message): self.bad_message(message)

    def on_block(self, conn, message): self.bad_message(message)

    def on_getaddr(self, conn, message): self.bad_message(message)

    def on_headers(self, conn, message): self.bad_message(message)

    def on_getheaders(self, conn, message): self.bad_message(message)

    def on_ping(self, conn, message): self.bad_message(message)

    def on_mempool(self, conn, message): self.bad_message(message)

    def on_pong(self, conn, message): self.bad_message(message)

    def on_feefilter(self, conn, message): self.bad_message(message)

    def on_sendheaders(self, conn, message): self.bad_message(message)

    def on_sendcmpct(self, conn, message): self.bad_message(message)

    def on_cmpctblock(self, conn, message): self.bad_message(message)

    def on_getblocktxn(self, conn, message): self.bad_message(message)

    def on_blocktxn(self, conn, message): self.bad_message(message)


# Node that never sends a version. We'll use this to send a bunch of messages
# anyway, and eventually get disconnected.
class CNodeNoVersionBan(CLazyNode):
    # send a bunch of veracks without sending a message. This should get us disconnected.
    # NOTE: implementation-specific check here. Remove if bitcoind ban
    # behavior changes
    def on_open(self, conn):
        super().on_open(conn)
        for i in range(banscore):
            self.send_message(msg_verack())

    def on_reject(self, conn, message): pass


# Node that never sends a version. This one just sits idle and hopes to receive
# any message (it shouldn't!)
class CNodeNoVersionIdle(CLazyNode):
    def __init__(self):
        super().__init__()


# Node that sends a version but not a verack.
class CNodeNoVerackIdle(CLazyNode):
    def __init__(self):
        self.version_received = False
        super().__init__()

    def on_reject(self, conn, message): pass

    def on_verack(self, conn, message): pass

    # When version is received, don't reply with a verack. Instead, see if the
    # node will give us a message that it shouldn't. This is not an exhaustive
    # list!
def on_version(self, conn, message): self.version_received = True conn.send_message(msg_ping()) conn.send_message(msg_getaddr()) class P2PLeakTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 1 self.extra_args = [['-banscore=' + str(banscore)]] def run_test(self): no_version_bannode = CNodeNoVersionBan() no_version_idlenode = CNodeNoVersionIdle() no_verack_idlenode = CNodeNoVerackIdle() connections = [] connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], no_version_bannode, send_version=False)) connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], no_version_idlenode, send_version=False)) connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], no_verack_idlenode)) no_version_bannode.add_connection(connections[0]) no_version_idlenode.add_connection(connections[1]) no_verack_idlenode.add_connection(connections[2]) NetworkThread().start() # Start up network handling in another thread wait_until(lambda: no_version_bannode.ever_connected, timeout=10, lock=mininode_lock) wait_until(lambda: no_version_idlenode.ever_connected, timeout=10, lock=mininode_lock) wait_until(lambda: no_verack_idlenode.version_received, timeout=10, lock=mininode_lock) # Mine a block and make sure that it's not sent to the connected nodes self.nodes[0].generate(1) # Give the node enough time to possibly leak out a message time.sleep(5) # This node should have been banned assert not no_version_bannode.connected [conn.disconnect_node() for conn in connections] # Make sure no unexpected messages came in assert(no_version_bannode.unexpected_msg == False) assert(no_version_idlenode.unexpected_msg == False) assert(no_verack_idlenode.unexpected_msg == False) if __name__ == '__main__': P2PLeakTest().main() diff --git a/test/functional/p2p-mempool.py b/test/functional/p2p-mempool.py index b3ea371dc..48b02aa46 100755 --- a/test/functional/p2p-mempool.py +++ b/test/functional/p2p-mempool.py @@ -1,36 +1,34 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class P2PMempoolTests(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 self.extra_args = [["-peerbloomfilters=0"]] def run_test(self): # connect a mininode aTestNode = NodeConnCB() node = NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], aTestNode) aTestNode.add_connection(node) NetworkThread().start() aTestNode.wait_for_verack() # request mempool aTestNode.send_message(msg_mempool()) aTestNode.wait_for_disconnect() # mininode must be disconnected at this point assert_equal(len(self.nodes[0].getpeerinfo()), 0) if __name__ == '__main__': P2PMempoolTests().main() diff --git a/test/functional/p2p-timeouts.py b/test/functional/p2p-timeouts.py index 118088487..8750fd374 100755 --- a/test/functional/p2p-timeouts.py +++ b/test/functional/p2p-timeouts.py @@ -1,92 +1,90 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
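# Illustrative sketch, not from the original test: the waits below total
# 1 + 30 + 31 = 62 seconds, deliberately straddling the 60-second
# version/verack handshake timeout being exercised. As a predicate:
import time

def _handshake_timed_out_sketch(connect_time, timeout=60, now=None):
    now = time.time() if now is None else now
    return now - connect_time > timeout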
""" TimeoutsTest -- test various net timeouts (only in extended tests) - Create three bitcoind nodes: no_verack_node - we never send a verack in response to their version no_version_node - we never send a version (only a ping) no_send_node - we never send any P2P message. - Start all three nodes - Wait 1 second - Assert that we're connected - Send a ping to no_verack_node and no_version_node - Wait 30 seconds - Assert that we're still connected - Send a ping to no_verack_node and no_version_node - Wait 31 seconds - Assert that we're no longer connected (timeout to receive version/verack is 60 seconds) """ from time import sleep from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class TestNode(NodeConnCB): def on_version(self, conn, message): # Don't send a verack in response pass class TimeoutsTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 def run_test(self): # Setup the p2p connections and start up the network thread. self.no_verack_node = TestNode() # never send verack self.no_version_node = TestNode() # never send version (just ping) self.no_send_node = TestNode() # never send anything connections = [] connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], self.no_verack_node)) connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], self.no_version_node, send_version=False)) connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], self.no_send_node, send_version=False)) self.no_verack_node.add_connection(connections[0]) self.no_version_node.add_connection(connections[1]) self.no_send_node.add_connection(connections[2]) NetworkThread().start() # Start up network handling in another thread sleep(1) assert(self.no_verack_node.connected) assert(self.no_version_node.connected) assert(self.no_send_node.connected) ping_msg = msg_ping() connections[0].send_message(ping_msg) connections[1].send_message(ping_msg) sleep(30) assert "version" in self.no_verack_node.last_message assert(self.no_verack_node.connected) assert(self.no_version_node.connected) assert(self.no_send_node.connected) connections[0].send_message(ping_msg) connections[1].send_message(ping_msg) sleep(31) assert(not self.no_verack_node.connected) assert(not self.no_version_node.connected) assert(not self.no_send_node.connected) if __name__ == '__main__': TimeoutsTest().main() diff --git a/test/functional/p2p-versionbits-warning.py b/test/functional/p2p-versionbits-warning.py index fffbd82b7..55633d441 100755 --- a/test/functional/p2p-versionbits-warning.py +++ b/test/functional/p2p-versionbits-warning.py @@ -1,146 +1,142 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.mininode import * from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * import re import time from test_framework.blocktools import create_block, create_coinbase ''' Test version bits' warning system. Generate chains with block versions that appear to be signalling unknown soft-forks, and test that warning alerts are generated. 
''' VB_PERIOD = 144 # versionbits period length for regtest VB_THRESHOLD = 108 # versionbits activation threshold for regtest VB_TOP_BITS = 0x20000000 VB_UNKNOWN_BIT = 27 # Choose a bit unassigned to any deployment WARN_UNKNOWN_RULES_MINED = "Unknown block versions being mined! It's possible unknown rules are in effect" WARN_UNKNOWN_RULES_ACTIVE = "unknown new rules activated (versionbit {})".format( VB_UNKNOWN_BIT) VB_PATTERN = re.compile("^Warning.*versionbit") class TestNode(NodeConnCB): def on_inv(self, conn, message): pass class VersionBitsWarningTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 def setup_network(self): self.alert_filename = os.path.join(self.options.tmpdir, "alert.txt") # Open and close to create zero-length file with open(self.alert_filename, 'w', encoding='utf8') as _: pass self.extra_args = [ ["-alertnotify=echo %s >> \"" + self.alert_filename + "\""]] self.setup_nodes() # Send numblocks blocks via peer with nVersionToUse set. def send_blocks_with_version(self, peer, numblocks, nVersionToUse): tip = self.nodes[0].getbestblockhash() height = self.nodes[0].getblockcount() block_time = self.nodes[0].getblockheader(tip)["time"] + 1 tip = int(tip, 16) for _ in range(numblocks): block = create_block(tip, create_coinbase(height + 1), block_time) block.nVersion = nVersionToUse block.solve() peer.send_message(msg_block(block)) block_time += 1 height += 1 tip = block.sha256 peer.sync_with_ping() def test_versionbits_in_alert_file(self): with open(self.alert_filename, 'r', encoding='utf8') as f: alert_text = f.read() assert(VB_PATTERN.match(alert_text)) def run_test(self): # Setup the p2p connection and start up the network thread. test_node = TestNode() connections = [] connections.append( NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], test_node)) test_node.add_connection(connections[0]) NetworkThread().start() # Start up network handling in another thread # Test logic begins here test_node.wait_for_verack() # 1. Have the node mine one period worth of blocks self.nodes[0].generate(VB_PERIOD) # 2. Now build one period of blocks on the tip, with < VB_THRESHOLD # blocks signaling some unknown bit. nVersion = VB_TOP_BITS | (1 << VB_UNKNOWN_BIT) self.send_blocks_with_version(test_node, VB_THRESHOLD - 1, nVersion) # Fill rest of period with regular version blocks self.nodes[0].generate(VB_PERIOD - VB_THRESHOLD + 1) # Check that we're not getting any versionbit-related errors in # get*info() assert(not VB_PATTERN.match(self.nodes[0].getinfo()["errors"])) assert(not VB_PATTERN.match(self.nodes[0].getmininginfo()["errors"])) assert(not VB_PATTERN.match( self.nodes[0].getnetworkinfo()["warnings"])) # 3. Now build one period of blocks with >= VB_THRESHOLD blocks signaling # some unknown bit self.send_blocks_with_version(test_node, VB_THRESHOLD, nVersion) self.nodes[0].generate(VB_PERIOD - VB_THRESHOLD) # Might not get a versionbits-related alert yet, as we should # have gotten a different alert due to more than 51/100 blocks # being of unexpected version. # Check that get*info() shows some kind of error. assert(WARN_UNKNOWN_RULES_MINED in self.nodes[0].getinfo()["errors"]) assert(WARN_UNKNOWN_RULES_MINED in self.nodes[ 0].getmininginfo()["errors"]) assert(WARN_UNKNOWN_RULES_MINED in self.nodes[ 0].getnetworkinfo()["warnings"]) # Mine a period worth of expected blocks so the generic block-version warning # is cleared, and restart the node. 
This should move the versionbit state # to ACTIVE. self.nodes[0].generate(VB_PERIOD) self.stop_nodes() # Empty out the alert file with open(self.alert_filename, 'w', encoding='utf8') as _: pass - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, self.extra_args) + self.start_nodes() # Connecting one block should be enough to generate an error. self.nodes[0].generate(1) assert(WARN_UNKNOWN_RULES_ACTIVE in self.nodes[0].getinfo()["errors"]) assert( WARN_UNKNOWN_RULES_ACTIVE in self.nodes[0].getmininginfo()["errors"]) assert(WARN_UNKNOWN_RULES_ACTIVE in self.nodes[0].getnetworkinfo()[ "warnings"]) self.stop_nodes() self.test_versionbits_in_alert_file() # Test framework expects the node to still be running... - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, self.extra_args) + self.start_nodes() if __name__ == '__main__': VersionBitsWarningTest().main() diff --git a/test/functional/preciousblock.py b/test/functional/preciousblock.py index 1455eb641..8264267d1 100755 --- a/test/functional/preciousblock.py +++ b/test/functional/preciousblock.py @@ -1,128 +1,126 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test PreciousBlock code # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( assert_equal, connect_nodes_bi, sync_chain, sync_blocks, ) def unidirectional_node_sync_via_rpc(node_src, node_dest): blocks_to_copy = [] blockhash = node_src.getbestblockhash() while True: try: assert(len(node_dest.getblock(blockhash, False)) > 0) break except: blocks_to_copy.append(blockhash) blockhash = node_src.getblockheader( blockhash, True)['previousblockhash'] blocks_to_copy.reverse() for blockhash in blocks_to_copy: blockdata = node_src.getblock(blockhash, False) assert(node_dest.submitblock(blockdata) in (None, 'inconclusive')) def node_sync_via_rpc(nodes): for node_src in nodes: for node_dest in nodes: if node_src is node_dest: continue unidirectional_node_sync_via_rpc(node_src, node_dest) class PreciousTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 3 def setup_network(self): self.setup_nodes() def run_test(self): self.log.info( "Ensure submitblock can in principle reorg to a competing chain") self.nodes[0].generate(1) assert_equal(self.nodes[0].getblockcount(), 1) (hashY, hashZ) = self.nodes[1].generate(2) assert_equal(self.nodes[1].getblockcount(), 2) node_sync_via_rpc(self.nodes[0:3]) assert_equal(self.nodes[0].getbestblockhash(), hashZ) self.log.info("Mine blocks A-B-C on Node 0") (hashA, hashB, hashC) = self.nodes[0].generate(3) assert_equal(self.nodes[0].getblockcount(), 5) self.log.info("Mine competing blocks E-F-G on Node 1") (hashE, hashF, hashG) = self.nodes[1].generate(3) assert_equal(self.nodes[1].getblockcount(), 5) assert(hashC != hashG) self.log.info("Connect nodes and check no reorg occurs") # Submit competing blocks via RPC so any reorg should occur before we # proceed (no way to wait on inaction for p2p sync) node_sync_via_rpc(self.nodes[0:2]) connect_nodes_bi(self.nodes, 0, 1) assert_equal(self.nodes[0].getbestblockhash(), hashC) assert_equal(self.nodes[1].getbestblockhash(), hashG) self.log.info("Make Node0 prefer block G") self.nodes[0].preciousblock(hashG) assert_equal(self.nodes[0].getbestblockhash(), hashG) 
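        # Illustrative sketch, not from the original test: preciousblock
        # re-marks a block as if it were the earliest-received among
        # equal-work tips, which is the tie-break these steps keep flipping.
        # Toy model over hypothetical (work, arrival_order) pairs:
        _tips_sketch = [(5, 1), (5, 2)]  # equal work; earlier arrival wins
        assert max(_tips_sketch, key=lambda t: (t[0], -t[1])) == (5, 1)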
self.log.info("Make Node0 prefer block C again") self.nodes[0].preciousblock(hashC) assert_equal(self.nodes[0].getbestblockhash(), hashC) self.log.info("Make Node1 prefer block C") self.nodes[1].preciousblock(hashC) sync_chain(self.nodes[0:2]) # wait because node 1 may not have downloaded hashC assert_equal(self.nodes[1].getbestblockhash(), hashC) self.log.info("Make Node1 prefer block G again") self.nodes[1].preciousblock(hashG) assert_equal(self.nodes[1].getbestblockhash(), hashG) self.log.info("Make Node0 prefer block G again") self.nodes[0].preciousblock(hashG) assert_equal(self.nodes[0].getbestblockhash(), hashG) self.log.info("Make Node1 prefer block C again") self.nodes[1].preciousblock(hashC) assert_equal(self.nodes[1].getbestblockhash(), hashC) self.log.info( "Mine another block (E-F-G-)H on Node 0 and reorg Node 1") self.nodes[0].generate(1) assert_equal(self.nodes[0].getblockcount(), 6) sync_blocks(self.nodes[0:2]) hashH = self.nodes[0].getbestblockhash() assert_equal(self.nodes[1].getbestblockhash(), hashH) self.log.info("Node1 should not be able to prefer block C anymore") self.nodes[1].preciousblock(hashC) assert_equal(self.nodes[1].getbestblockhash(), hashH) self.log.info("Mine competing blocks I-J-K-L on Node 2") self.nodes[2].generate(4) assert_equal(self.nodes[2].getblockcount(), 6) hashL = self.nodes[2].getbestblockhash() self.log.info("Connect nodes and check no reorg occurs") node_sync_via_rpc(self.nodes[1:3]) connect_nodes_bi(self.nodes, 1, 2) connect_nodes_bi(self.nodes, 0, 2) assert_equal(self.nodes[0].getbestblockhash(), hashH) assert_equal(self.nodes[1].getbestblockhash(), hashH) assert_equal(self.nodes[2].getbestblockhash(), hashL) self.log.info("Make Node1 prefer block L") self.nodes[1].preciousblock(hashL) assert_equal(self.nodes[1].getbestblockhash(), hashL) self.log.info("Make Node2 prefer block H") self.nodes[2].preciousblock(hashH) assert_equal(self.nodes[2].getbestblockhash(), hashH) if __name__ == '__main__': PreciousTest().main() diff --git a/test/functional/prioritise_transaction.py b/test/functional/prioritise_transaction.py index 33777f6f0..246530552 100755 --- a/test/functional/prioritise_transaction.py +++ b/test/functional/prioritise_transaction.py @@ -1,151 +1,149 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
# # Test PrioritiseTransaction code # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * # FIXME: review how this test needs to be adapted w.r.t _LEGACY_MAX_BLOCK_SIZE from test_framework.mininode import COIN from test_framework.cdefs import LEGACY_MAX_BLOCK_SIZE class PrioritiseTransactionTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 self.extra_args = [["-printpriority=1"]] def run_test(self): self.txouts = gen_return_txouts() self.relayfee = self.nodes[0].getnetworkinfo()['relayfee'] utxo_count = 90 utxos = create_confirmed_utxos( self.relayfee, self.nodes[0], utxo_count) # our transactions are smaller than 100kb base_fee = self.relayfee * 100 txids = [] # Create 3 batches of transactions at 3 different fee rate levels range_size = utxo_count // 3 for i in range(3): txids.append([]) start_range = i * range_size end_range = start_range + range_size txids[i] = create_lots_of_big_transactions(self.nodes[0], self.txouts, utxos[ start_range:end_range], end_range - start_range, (i + 1) * base_fee) # Make sure that the size of each group of transactions exceeds # LEGACY_MAX_BLOCK_SIZE -- otherwise the test needs to be revised to create # more transactions. mempool = self.nodes[0].getrawmempool(True) sizes = [0, 0, 0] for i in range(3): for j in txids[i]: assert(j in mempool) sizes[i] += mempool[j]['size'] # Fail => raise utxo_count assert(sizes[i] > LEGACY_MAX_BLOCK_SIZE) # add a fee delta to something in the cheapest bucket and make sure it gets mined # also check that a different entry in the cheapest bucket is NOT mined (lower # the priority to ensure its not mined due to priority) self.nodes[0].prioritisetransaction( txids[0][0], 0, int(3 * base_fee * COIN)) self.nodes[0].prioritisetransaction(txids[0][1], -1e15, 0) self.nodes[0].generate(1) mempool = self.nodes[0].getrawmempool() self.log.info("Assert that prioritised transaction was mined") assert(txids[0][0] not in mempool) assert(txids[0][1] in mempool) high_fee_tx = None for x in txids[2]: if x not in mempool: high_fee_tx = x # Something high-fee should have been mined! assert(high_fee_tx != None) # Add a prioritisation before a tx is in the mempool (de-prioritising a # high-fee transaction so that it's now low fee). self.nodes[0].prioritisetransaction( high_fee_tx, -1e15, -int(2 * base_fee * COIN)) # Add everything back to mempool self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash()) # Check to make sure our high fee rate tx is back in the mempool mempool = self.nodes[0].getrawmempool() assert(high_fee_tx in mempool) # Now verify the modified-high feerate transaction isn't mined before # the other high fee transactions. Keep mining until our mempool has # decreased by all the high fee size that we calculated above. while (self.nodes[0].getmempoolinfo()['bytes'] > sizes[0] + sizes[1]): self.nodes[0].generate(1) # High fee transaction should not have been mined, but other high fee rate # transactions should have been. mempool = self.nodes[0].getrawmempool() self.log.info( "Assert that de-prioritised transaction is still in mempool") assert(high_fee_tx in mempool) for x in txids[2]: if (x != high_fee_tx): assert(x not in mempool) # Create a free, low priority transaction. Should be rejected. 
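Before the free-transaction case the comment above introduces, a brief note on units: the deltas passed to `prioritisetransaction` above are a priority delta followed by a fee delta in satoshis, which is why BTC-denominated values are scaled by `COIN` first. A hedged sketch, where `node`, `txid`, and the delta value are placeholders:

```python
from test_framework.mininode import COIN  # satoshis per BTC (100,000,000)

fee_delta_btc = 0.0003  # hypothetical bump, expressed in BTC
# Call shape used in this test: prioritisetransaction(txid, priority_delta,
# fee_delta), with the fee delta given in satoshis.
node.prioritisetransaction(txid, 0, int(fee_delta_btc * COIN))
```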
utxo_list = self.nodes[0].listunspent() assert(len(utxo_list) > 0) utxo = utxo_list[0] inputs = [] outputs = {} inputs.append({"txid": utxo["txid"], "vout": utxo["vout"]}) outputs[self.nodes[0].getnewaddress()] = utxo["amount"] - self.relayfee raw_tx = self.nodes[0].createrawtransaction(inputs, outputs) tx_hex = self.nodes[0].signrawtransaction( raw_tx, None, None, "ALL|FORKID")["hex"] txid = self.nodes[0].sendrawtransaction(tx_hex) # A tx that spends an in-mempool tx has 0 priority, so we can use it to # test the effect of using prioritise transaction for mempool # acceptance inputs = [] inputs.append({"txid": txid, "vout": 0}) outputs = {} outputs[self.nodes[0].getnewaddress()] = utxo["amount"] - self.relayfee raw_tx2 = self.nodes[0].createrawtransaction(inputs, outputs) tx2_hex = self.nodes[0].signrawtransaction( raw_tx2, None, None, "ALL|FORKID")["hex"] tx2_id = self.nodes[0].decoderawtransaction(tx2_hex)["txid"] # This will raise an exception due to min relay fee not being met assert_raises_jsonrpc(-26, "66: insufficient priority", self.nodes[0].sendrawtransaction, tx2_hex) assert(tx2_id not in self.nodes[0].getrawmempool()) # This is a less than 1000-byte transaction, so just set the fee # to be the minimum for a 1000 byte transaction and check that it is # accepted. self.nodes[0].prioritisetransaction( tx2_id, 0, int(self.relayfee * COIN)) self.log.info( "Assert that prioritised free transaction is accepted to mempool") assert_equal(self.nodes[0].sendrawtransaction(tx2_hex), tx2_id) assert(tx2_id in self.nodes[0].getrawmempool()) if __name__ == '__main__': PrioritiseTransactionTest().main() diff --git a/test/functional/proxy_test.py b/test/functional/proxy_test.py index 1a0f09613..7b2c49814 100755 --- a/test/functional/proxy_test.py +++ b/test/functional/proxy_test.py @@ -1,220 +1,217 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. ''' Test plan: - Start bitcoind's with different proxy configurations - Use addnode to initiate connections - Verify that proxies are connected to, and the right connection command is given - Proxy configurations to test on bitcoind side: - `-proxy` (proxy everything) - `-onion` (proxy just onions) - `-proxyrandomize` Circuit randomization - Proxy configurations to test on proxy side, - support no authentication (other proxy) - support no authentication + user/pass authentication (Tor) - proxy on IPv6 - Create various proxies (as threads) - Create bitcoinds that connect to them - Manipulate the bitcoinds using addnode (onetry) an observe effects addnode connect to IPv4 addnode connect to IPv6 addnode connect to onion addnode connect to generic DNS name ''' import socket import os from test_framework.socks5 import Socks5Configuration, Socks5Command, Socks5Server, AddressType from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( PORT_MIN, PORT_RANGE, assert_equal, ) from test_framework.netutil import test_ipv6_local RANGE_BEGIN = PORT_MIN + 2 * PORT_RANGE # Start after p2p and rpc ports class ProxyTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 4 - self.setup_clean_chain = False def setup_nodes(self): self.have_ipv6 = test_ipv6_local() # Create two proxies on different ports # ... 
one unauthenticated self.conf1 = Socks5Configuration() self.conf1.addr = ('127.0.0.1', RANGE_BEGIN + (os.getpid() % 1000)) self.conf1.unauth = True self.conf1.auth = False # ... one supporting authenticated and unauthenticated (Tor) self.conf2 = Socks5Configuration() self.conf2.addr = ( '127.0.0.1', RANGE_BEGIN + 1000 + (os.getpid() % 1000)) self.conf2.unauth = True self.conf2.auth = True if self.have_ipv6: # ... one on IPv6 with similar configuration self.conf3 = Socks5Configuration() self.conf3.af = socket.AF_INET6 self.conf3.addr = ( '::1', RANGE_BEGIN + 2000 + (os.getpid() % 1000)) self.conf3.unauth = True self.conf3.auth = True else: self.log.info("Warning: testing without local IPv6 support") self.serv1 = Socks5Server(self.conf1) self.serv1.start() self.serv2 = Socks5Server(self.conf2) self.serv2.start() if self.have_ipv6: self.serv3 = Socks5Server(self.conf3) self.serv3.start() # Note: proxies are not used to connect to local nodes # this is because the proxy to use is based on CService.GetNetwork(), # which return NET_UNROUTABLE for localhost args = [ ['-listen', '-proxy=%s:%i' % (self.conf1.addr), '-proxyrandomize=1'], ['-listen', '-proxy=%s:%i' % (self.conf1.addr), '-onion=%s:%i' % (self.conf2.addr), '-proxyrandomize=0'], ['-listen', '-proxy=%s:%i' % (self.conf2.addr), '-proxyrandomize=1'], [] ] if self.have_ipv6: args[3] = ['-listen', '-proxy=[%s]:%i' % (self.conf3.addr), '-proxyrandomize=0', '-noonion'] - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, extra_args=args) + self.add_nodes(self.num_nodes, extra_args=args) + self.start_nodes() def node_test(self, node, proxies, auth, test_onion=True): rv = [] # Test: outgoing IPv4 connection through node node.addnode("15.61.23.23:1234", "onetry") cmd = proxies[0].queue.get() assert(isinstance(cmd, Socks5Command)) # Note: bitcoind's SOCKS5 implementation only sends atyp DOMAINNAME, # even if connecting directly to IPv4/IPv6 assert_equal(cmd.atyp, AddressType.DOMAINNAME) assert_equal(cmd.addr, b"15.61.23.23") assert_equal(cmd.port, 1234) if not auth: assert_equal(cmd.username, None) assert_equal(cmd.password, None) rv.append(cmd) if self.have_ipv6: # Test: outgoing IPv6 connection through node node.addnode( "[1233:3432:2434:2343:3234:2345:6546:4534]:5443", "onetry") cmd = proxies[1].queue.get() assert(isinstance(cmd, Socks5Command)) # Note: bitcoind's SOCKS5 implementation only sends atyp # DOMAINNAME, even if connecting directly to IPv4/IPv6 assert_equal(cmd.atyp, AddressType.DOMAINNAME) assert_equal(cmd.addr, b"1233:3432:2434:2343:3234:2345:6546:4534") assert_equal(cmd.port, 5443) if not auth: assert_equal(cmd.username, None) assert_equal(cmd.password, None) rv.append(cmd) if test_onion: # Test: outgoing onion connection through node node.addnode("bitcoinostk4e4re.onion:8333", "onetry") cmd = proxies[2].queue.get() assert(isinstance(cmd, Socks5Command)) assert_equal(cmd.atyp, AddressType.DOMAINNAME) assert_equal(cmd.addr, b"bitcoinostk4e4re.onion") assert_equal(cmd.port, 8333) if not auth: assert_equal(cmd.username, None) assert_equal(cmd.password, None) rv.append(cmd) # Test: outgoing DNS name connection through node node.addnode("node.noumenon:8333", "onetry") cmd = proxies[3].queue.get() assert(isinstance(cmd, Socks5Command)) assert_equal(cmd.atyp, AddressType.DOMAINNAME) assert_equal(cmd.addr, b"node.noumenon") assert_equal(cmd.port, 8333) if not auth: assert_equal(cmd.username, None) assert_equal(cmd.password, None) rv.append(cmd) return rv def run_test(self): # basic -proxy self.node_test( self.nodes[0], 
[self.serv1, self.serv1, self.serv1, self.serv1], False) # -proxy plus -onion self.node_test( self.nodes[1], [self.serv1, self.serv1, self.serv2, self.serv1], False) # -proxy plus -onion, -proxyrandomize rv = self.node_test( self.nodes[2], [self.serv2, self.serv2, self.serv2, self.serv2], True) # Check that credentials as used for -proxyrandomize connections are # unique credentials = set((x.username, x.password) for x in rv) assert_equal(len(credentials), len(rv)) if self.have_ipv6: # proxy on IPv6 localhost self.node_test( self.nodes[3], [self.serv3, self.serv3, self.serv3, self.serv3], False, False) def networks_dict(d): r = {} for x in d['networks']: r[x['name']] = x return r # test RPC getnetworkinfo n0 = networks_dict(self.nodes[0].getnetworkinfo()) for net in ['ipv4', 'ipv6', 'onion']: assert_equal(n0[net]['proxy'], '%s:%i' % (self.conf1.addr)) assert_equal(n0[net]['proxy_randomize_credentials'], True) assert_equal(n0['onion']['reachable'], True) n1 = networks_dict(self.nodes[1].getnetworkinfo()) for net in ['ipv4', 'ipv6']: assert_equal(n1[net]['proxy'], '%s:%i' % (self.conf1.addr)) assert_equal(n1[net]['proxy_randomize_credentials'], False) assert_equal(n1['onion']['proxy'], '%s:%i' % (self.conf2.addr)) assert_equal(n1['onion']['proxy_randomize_credentials'], False) assert_equal(n1['onion']['reachable'], True) n2 = networks_dict(self.nodes[2].getnetworkinfo()) for net in ['ipv4', 'ipv6', 'onion']: assert_equal(n2[net]['proxy'], '%s:%i' % (self.conf2.addr)) assert_equal(n2[net]['proxy_randomize_credentials'], True) assert_equal(n2['onion']['reachable'], True) if self.have_ipv6: n3 = networks_dict(self.nodes[3].getnetworkinfo()) for net in ['ipv4', 'ipv6']: assert_equal(n3[net]['proxy'], '[%s]:%i' % (self.conf3.addr)) assert_equal(n3[net]['proxy_randomize_credentials'], False) assert_equal(n3['onion']['reachable'], False) if __name__ == '__main__': ProxyTest().main() diff --git a/test/functional/pruning.py b/test/functional/pruning.py index 117705ec7..780dce423 100755 --- a/test/functional/pruning.py +++ b/test/functional/pruning.py @@ -1,503 +1,503 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test pruning code # ******** # WARNING: # This test uses 4GB of disk space. # This test takes 30 mins or more (up to 2 hours) # ******** from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * import time import os MIN_BLOCKS_TO_KEEP = 288 # Rescans start at the earliest block up to 2 hours before a key timestamp, so # the manual prune RPC avoids pruning blocks in the same window to be # compatible with pruning based on key creation time. TIMESTAMP_WINDOW = 2 * 60 * 60 def calc_usage(blockdir): return sum(os.path.getsize(blockdir + f) for f in os.listdir(blockdir) if os.path.isfile(blockdir + f)) / (1024. * 1024.) class PruneTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 6 # Create nodes 0 and 1 to mine. # Create node 2 to test pruning. 
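One detail worth flagging in the proxy checks above: each `conf.addr` is a `(host, port)` tuple, so old-style `%` formatting expands it directly into a `host:port` string:

```python
# The pattern '-proxy=%s:%i' % (conf.addr) works because conf.addr is a
# two-element tuple; % formatting consumes host and port in order.
addr = ('127.0.0.1', 19050)  # hypothetical proxy address
assert '%s:%i' % addr == '127.0.0.1:19050'
```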
self.full_node_default_args = ["-maxreceivebuffer=20000", "-blockmaxsize=999000", "-checkblocks=5", "-limitdescendantcount=100", "-limitdescendantsize=5000", "-limitancestorcount=100", "-limitancestorsize=5000"] # Create nodes 3 and 4 to test manual pruning (they will be re-started with manual pruning later) # Create nodes 5 to test wallet in prune mode, but do not connect self.extra_args = [self.full_node_default_args, self.full_node_default_args, ["-maxreceivebuffer=20000", "-prune=550"], ["-maxreceivebuffer=20000", "-blockmaxsize=999000"], ["-maxreceivebuffer=20000", "-blockmaxsize=999000"], ["-prune=550"]] def setup_network(self): self.setup_nodes() self.prunedir = self.options.tmpdir + "/node2/regtest/blocks/" connect_nodes(self.nodes[0], 1) connect_nodes(self.nodes[1], 2) connect_nodes(self.nodes[2], 0) connect_nodes(self.nodes[0], 3) connect_nodes(self.nodes[0], 4) sync_blocks(self.nodes[0:5]) + def setup_nodes(self): + self.add_nodes(self.num_nodes, self.extra_args, timewait=900) + self.start_nodes() + def create_big_chain(self): # Start by creating some coinbases we can spend later self.nodes[1].generate(200) sync_blocks(self.nodes[0:2]) self.nodes[0].generate(150) # Then mine enough full blocks to create more than 550MiB of data for i in range(645): mine_large_block(self.nodes[0], self.utxo_cache_0) sync_blocks(self.nodes[0:5]) def test_height_min(self): if not os.path.isfile(self.prunedir + "blk00000.dat"): raise AssertionError("blk00000.dat is missing, pruning too early") self.log.info("Success") self.log.info("Though we're already using more than 550MiB, current usage: %d" % calc_usage(self.prunedir)) self.log.info( "Mining 25 more blocks should cause the first block file to be pruned") # Pruning doesn't run until we're allocating another chunk, 20 full # blocks past the height cutoff will ensure this for i in range(25): mine_large_block(self.nodes[0], self.utxo_cache_0) waitstart = time.time() while os.path.isfile(self.prunedir + "blk00000.dat"): time.sleep(0.1) if time.time() - waitstart > 30: raise AssertionError( "blk00000.dat not pruned when it should be") self.log.info("Success") usage = calc_usage(self.prunedir) self.log.info("Usage should be below target: %d" % usage) if (usage > 550): raise AssertionError("Pruning target not being met") def create_chain_with_staleblocks(self): # Create stale blocks in manageable sized chunks self.log.info( "Mine 24 (stale) blocks on Node 1, followed by 25 (main chain) block reorg from Node 0, for 12 rounds") for j in range(12): # Disconnect node 0 so it can mine a longer reorg chain without knowing about node 1's soon-to-be-stale chain # Node 2 stays connected, so it hears about the stale blocks and then reorg's when node0 reconnects # Stopping node 0 also clears its mempool, so it doesn't have # node1's transactions to accidentally mine self.stop_node(0) - self.nodes[0] = self.start_node( - 0, self.options.tmpdir, self.full_node_default_args, timewait=900) + self.start_node(0, extra_args=self.full_node_default_args) # Mine 24 blocks in node 1 for i in range(24): if j == 0: mine_large_block(self.nodes[1], self.utxo_cache_1) else: # Add node1's wallet transactions back to the mempool, to # avoid the mined blocks from being too small. 
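The hunk above shows two idioms of the reworked framework API: overriding `setup_nodes()` to pass a custom `timewait` through `add_nodes()`, and restarting a single node with one-off `extra_args`. A minimal sketch of both, assuming the methods sit in a `BitcoinTestFramework` subclass body (the helper name is hypothetical):

```python
def setup_nodes(self):
    # Slow pruning nodes get a larger RPC timeout via add_nodes().
    self.add_nodes(self.num_nodes, self.extra_args, timewait=900)
    self.start_nodes()

def restart_node_with_flags(self, index, flags):
    # Restatement of the stop_node()/start_node() idiom above:
    # start_node() reuses the node's datadir, so only extra_args change.
    self.stop_node(index)
    self.start_node(index, extra_args=flags)
```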
self.nodes[1].resendwallettransactions() # tx's already in mempool from previous disconnects self.nodes[1].generate(1) # Reorg back with 25 block chain from node 0 for i in range(25): mine_large_block(self.nodes[0], self.utxo_cache_0) # Create connections in the order so both nodes can see the reorg # at the same time connect_nodes(self.nodes[1], 0) connect_nodes(self.nodes[2], 0) sync_blocks(self.nodes[0:3]) self.log.info("Usage can be over target because of high stale rate: %d" % calc_usage(self.prunedir)) def reorg_test(self): # Node 1 will mine a 300 block chain starting 287 blocks back from Node # 0 and Node 2's tip. This will cause Node 2 to do a reorg requiring # 288 blocks of undo data to the reorg_test chain. Reboot node 1 to # clear its mempool (hopefully make the invalidate faster). Lower the # block max size so we don't keep mining all our big mempool # transactions (from disconnected blocks) self.stop_node(1) - self.nodes[1] = self.start_node(1, self.options.tmpdir, [ - "-maxreceivebuffer=20000", "-blockmaxsize=5000", "-checkblocks=5", "-disablesafemode"], timewait=900) + self.start_node(1, extra_args=[ + "-maxreceivebuffer=20000", "-blockmaxsize=5000", "-checkblocks=5", "-disablesafemode"]) height = self.nodes[1].getblockcount() self.log.info("Current block height: %d" % height) invalidheight = height - 287 badhash = self.nodes[1].getblockhash(invalidheight) self.log.info("Invalidating block %s at height %d" % (badhash, invalidheight)) self.nodes[1].invalidateblock(badhash) # We've now switched to our previously mined-24 block fork on node 1, but thats not what we want. # So invalidate that fork as well, until we're on the same chain as # node 0/2 (but at an ancestor 288 blocks ago) mainchainhash = self.nodes[0].getblockhash(invalidheight - 1) curhash = self.nodes[1].getblockhash(invalidheight - 1) while curhash != mainchainhash: self.nodes[1].invalidateblock(curhash) curhash = self.nodes[1].getblockhash(invalidheight - 1) assert(self.nodes[1].getblockcount() == invalidheight - 1) self.log.info("New best height: %d" % self.nodes[1].getblockcount()) # Reboot node1 to clear those giant tx's from mempool self.stop_node(1) - self.nodes[1] = self.start_node(1, self.options.tmpdir, [ - "-maxreceivebuffer=20000", "-blockmaxsize=5000", "-checkblocks=5", "-disablesafemode"], timewait=900) + self.start_node(1, extra_args=[ + "-maxreceivebuffer=20000", "-blockmaxsize=5000", "-checkblocks=5", "-disablesafemode"]) self.log.info("Generating new longer chain of 300 more blocks") self.nodes[1].generate(300) self.log.info("Reconnect nodes") connect_nodes(self.nodes[0], 1) connect_nodes(self.nodes[2], 1) sync_blocks(self.nodes[0:3], timeout=120) self.log.info("Verify height on node 2: %d" % self.nodes[2].getblockcount()) self.log.info("Usage possibly still high bc of stale blocks in block files: %d" % calc_usage(self.prunedir)) self.log.info( "Mine 220 more blocks so we have requisite history (some blocks will be big and cause pruning of previous chain)") # Get node0's wallet transactions back in its mempool, to avoid the # mined blocks from being too small. self.nodes[0].resendwallettransactions() for i in range(22): # This can be slow, so do this in multiple RPC calls to avoid # RPC timeouts. 
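The invalidation loop in `reorg_test()` above is worth restating on its own; a hypothetical helper capturing it, where `node` walks back until it agrees with `ref_node` at the given height:

```python
def invalidate_until_on_chain(node, ref_node, height):
    """Invalidate node's block at `height` until it matches ref_node's
    block there, i.e. until node is back on ref_node's chain at that
    point (a restatement of the loop in reorg_test above)."""
    target = ref_node.getblockhash(height)
    cur = node.getblockhash(height)
    while cur != target:
        node.invalidateblock(cur)
        cur = node.getblockhash(height)
```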
# node 0 has many large tx's in its mempool from the disconnects self.nodes[0].generate(10) sync_blocks(self.nodes[0:3], timeout=300) usage = calc_usage(self.prunedir) self.log.info("Usage should be below target: %d" % usage) if (usage > 550): raise AssertionError("Pruning target not being met") return invalidheight, badhash def reorg_back(self): # Verify that a block on the old main chain fork has been pruned away assert_raises_jsonrpc( -1, "Block not available (pruned data)", self.nodes[2].getblock, self.forkhash) self.log.info("Will need to redownload block %d" % self.forkheight) # Verify that we have enough history to reorg back to the fork point. # Although this is more than 288 blocks, because this chain was written # more recently and only its other 299 small and 220 large block are in # the block files after it, its expected to still be retained. self.nodes[2].getblock(self.nodes[2].getblockhash(self.forkheight)) first_reorg_height = self.nodes[2].getblockcount() curchainhash = self.nodes[2].getblockhash(self.mainchainheight) self.nodes[2].invalidateblock(curchainhash) goalbestheight = self.mainchainheight goalbesthash = self.mainchainhash2 # As of 0.10 the current block download logic is not able to reorg to # the original chain created in create_chain_with_stale_blocks because # it doesn't know of any peer thats on that chain from which to # redownload its missing blocks. Invalidate the reorg_test chain in # node 0 as well, it can successfully switch to the original chain # because it has all the block data. However it must mine enough blocks # to have a more work chain than the reorg_test chain in order to # trigger node 2's block download logic. At this point node 2 is within # 288 blocks of the fork point so it will preserve its ability to # reorg. if self.nodes[2].getblockcount() < self.mainchainheight: blocks_to_mine = first_reorg_height + 1 - self.mainchainheight self.log.info( "Rewind node 0 to prev main chain to mine longer chain to trigger redownload. 
Blocks needed: %d" % blocks_to_mine) self.nodes[0].invalidateblock(curchainhash) assert(self.nodes[0].getblockcount() == self.mainchainheight) assert(self.nodes[0].getbestblockhash() == self.mainchainhash2) goalbesthash = self.nodes[0].generate(blocks_to_mine)[-1] goalbestheight = first_reorg_height + 1 self.log.info( "Verify node 2 reorged back to the main chain, some blocks of which it had to redownload") waitstart = time.time() while self.nodes[2].getblockcount() < goalbestheight: time.sleep(0.1) if time.time() - waitstart > 900: raise AssertionError("Node 2 didn't reorg to proper height") assert(self.nodes[2].getbestblockhash() == goalbesthash) # Verify we can now have the data for a block previously pruned assert(self.nodes[2].getblock( self.forkhash)["height"] == self.forkheight) def manual_test(self, node_number, use_timestamp): # at this point, node has 995 blocks and has not yet run in prune mode - node = self.nodes[node_number] = self.start_node( - node_number, self.options.tmpdir, timewait=900) + self.start_node(node_number) + node = self.nodes[node_number] assert_equal(node.getblockcount(), 995) - assert_raises_jsonrpc( - -1, "not in prune mode", node.pruneblockchain, 500) - self.stop_node(node_number) + assert_raises_jsonrpc(-1, "not in prune mode", + node.pruneblockchain, 500) # now re-start in manual pruning mode - node = self.nodes[node_number] = self.start_node( - node_number, self.options.tmpdir, ["-prune=1"], timewait=900) + self.stop_node(node_number) + self.start_node(node_number, extra_args=["-prune=1"]) + node = self.nodes[node_number] assert_equal(node.getblockcount(), 995) def height(index): if use_timestamp: return node.getblockheader(node.getblockhash(index))["time"] + TIMESTAMP_WINDOW else: return index def prune(index, expected_ret=None): ret = node.pruneblockchain(height(index)) # Check the return value. When use_timestamp is True, just check # that the return value is less than or equal to the expected # value, because when more than one block is generated per second, # a timestamp will not be granular enough to uniquely identify an # individual block. 
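The `height()` helper above leans on the `TIMESTAMP_WINDOW` constant defined at the top of the file: per its accompanying comment, rescans may begin up to two hours before a key's timestamp, and `pruneblockchain` interprets sufficiently large arguments as UNIX timestamps rather than heights. A hedged restatement (`prune_target_by_time` is a hypothetical name):

```python
TIMESTAMP_WINDOW = 2 * 60 * 60  # matches the constant defined above

def prune_target_by_time(node, index):
    # Shift the block's time forward by the rescan window so that pruning
    # "by time" keeps at least the same blocks as pruning "by height".
    block_time = node.getblockheader(node.getblockhash(index))["time"]
    return block_time + TIMESTAMP_WINDOW
```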
if expected_ret is None: expected_ret = index if use_timestamp: assert_greater_than(ret, 0) assert_greater_than(expected_ret + 1, ret) else: assert_equal(ret, expected_ret) def has_block(index): return os.path.isfile(self.options.tmpdir + "/node{}/regtest/blocks/blk{:05}.dat".format(node_number, index)) # should not prune because chain tip of node 3 (995) < PruneAfterHeight # (1000) assert_raises_jsonrpc( -1, "Blockchain is too short for pruning", node.pruneblockchain, height(500)) # mine 6 blocks so we are at height 1001 (i.e., above PruneAfterHeight) node.generate(6) assert_equal(node.getblockchaininfo()["blocks"], 1001) # negative heights should raise an exception assert_raises_jsonrpc(-8, "Negative", node.pruneblockchain, -10) # height=100 too low to prune first block file so this is a no-op prune(100) if not has_block(0): raise AssertionError( "blk00000.dat is missing when should still be there") # Does nothing node.pruneblockchain(height(0)) if not has_block(0): raise AssertionError( "blk00000.dat is missing when should still be there") # height=500 should prune first file prune(500) if has_block(0): raise AssertionError( "blk00000.dat is still there, should be pruned by now") if not has_block(1): raise AssertionError( "blk00001.dat is missing when should still be there") # height=650 should prune second file prune(650) if has_block(1): raise AssertionError( "blk00001.dat is still there, should be pruned by now") # height=1000 should not prune anything more, because tip-288 is in # blk00002.dat. prune(1000, 1001 - MIN_BLOCKS_TO_KEEP) if not has_block(2): raise AssertionError( "blk00002.dat is still there, should be pruned by now") # advance the tip so blk00002.dat and blk00003.dat can be pruned (the # last 288 blocks should now be in blk00004.dat) node.generate(288) prune(1000) if has_block(2): raise AssertionError( "blk00002.dat is still there, should be pruned by now") if has_block(3): raise AssertionError( "blk00003.dat is still there, should be pruned by now") # stop node, start back up with auto-prune at 550MB, make sure still # runs self.stop_node(node_number) - self.nodes[node_number] = self.start_node( - node_number, self.options.tmpdir, ["-prune=550"], timewait=900) + self.start_node(node_number, extra_args=["-prune=550"]) self.log.info("Success") def wallet_test(self): # check that the pruning node's wallet is still in good shape self.log.info("Stop and start pruning node to trigger wallet rescan") self.stop_node(2) - self.nodes[2] = self.start_node(2, self.options.tmpdir, ["-prune=550"]) + self.start_node(2, extra_args=["-prune=550"]) self.log.info("Success") # check that wallet loads loads successfully when restarting a pruned node after IBD. # this was reported to fail in #7494. self.log.info("Syncing node 5 to test wallet") connect_nodes(self.nodes[0], 5) nds = [self.nodes[0], self.nodes[5]] sync_blocks(nds, wait=5, timeout=300) self.stop_node(5) # stop and start to trigger rescan - self.nodes[5] = self.start_node(5, self.options.tmpdir, ["-prune=550"]) + self.start_node(5, extra_args=["-prune=550"]) self.log.info("Success") def run_test(self): self.log.info( "Warning! 
This test requires 4GB of disk space and takes over 30 mins (up to 2 hours)") self.log.info("Mining a big blockchain of 995 blocks") # Determine default relay fee self.relayfee = self.nodes[0].getnetworkinfo()["relayfee"] # Cache for utxos, as the listunspent may take a long time later in the # test self.utxo_cache_0 = [] self.utxo_cache_1 = [] self.create_big_chain() # Chain diagram key: # * blocks on main chain # +,&,$,@ blocks on other forks # X invalidated block # N1 Node 1 # # Start by mining a simple chain that all nodes have # N0=N1=N2 **...*(995) # stop manual-pruning node with 995 blocks self.stop_node(3) self.stop_node(4) self.log.info( "Check that we haven't started pruning yet because we're below PruneAfterHeight") self.test_height_min() # Extend this chain past the PruneAfterHeight # N0=N1=N2 **...*(1020) self.log.info( "Check that we'll exceed disk space target if we have a very high stale block rate") self.create_chain_with_staleblocks() # Disconnect N0 # And mine a 24 block chain on N1 and a separate 25 block chain on N0 # N1=N2 **...*+...+(1044) # N0 **...**...**(1045) # # reconnect nodes causing reorg on N1 and N2 # N1=N2 **...*(1020) *...**(1045) # \ # +...+(1044) # # repeat this process until you have 12 stale forks hanging off the # main chain on N1 and N2 # N0 *************************...***************************(1320) # # N1=N2 **...*(1020) *...**(1045) *.. ..**(1295) *...**(1320) # \ \ \ # +...+(1044) &.. $...$(1319) # Save some current chain state for later use self.mainchainheight = self.nodes[2].getblockcount() # 1320 self.mainchainhash2 = self.nodes[2].getblockhash(self.mainchainheight) self.log.info("Check that we can survive a 288 block reorg still") (self.forkheight, self.forkhash) = self.reorg_test() # (1033, ) # Now create a 288 block reorg by mining a longer chain on N1 # First disconnect N1 # Then invalidate 1033 on main chain and 1032 on fork so height is 1032 on main chain # N1 **...*(1020) **...**(1032)X.. # \ # ++...+(1031)X.. # # Now mine 300 more blocks on N1 # N1 **...*(1020) **...**(1032) @@...@(1332) # \ \ # \ X... # \ \ # ++...+(1031)X.. .. # # Reconnect nodes and mine 220 more blocks on N1 # N1 **...*(1020) **...**(1032) @@...@@@(1552) # \ \ # \ X... # \ \ # ++...+(1031)X.. .. # # N2 **...*(1020) **...**(1032) @@...@@@(1552) # \ \ # \ *...**(1320) # \ \ # ++...++(1044) .. # # N0 ********************(1032) @@...@@@(1552) # \ # *...**(1320) self.log.info( "Test that we can rerequest a block we previously pruned if needed for a reorg") self.reorg_back() # Verify that N2 still has block 1033 on current chain (@), but not on main chain (*) # Invalidate 1033 on current chain (@) on N2 and we should be able to reorg to # original main chain (*), but will require redownload of some blocks # In order to have a peer we think we can download from, must also perform this invalidation # on N0 and mine a new longest chain to trigger. # Final result: # N0 ********************(1032) **...****(1553) # \ # X@...@@@(1552) # # N2 **...*(1020) **...**(1032) **...****(1553) # \ \ # \ X@...@@@(1552) # \ # +.. 
# # N1 doesn't change because 1033 on main chain (*) is invalid self.log.info("Test manual pruning with block indices") self.manual_test(3, use_timestamp=False) self.log.info("Test manual pruning with timestamps") self.manual_test(4, use_timestamp=True) self.log.info("Test wallet re-scan") self.wallet_test() self.log.info("Done") if __name__ == '__main__': PruneTest().main() diff --git a/test/functional/rawtransactions.py b/test/functional/rawtransactions.py index e21a5d4d5..3aaaf0932 100755 --- a/test/functional/rawtransactions.py +++ b/test/functional/rawtransactions.py @@ -1,217 +1,215 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """rawtranscation RPCs QA test. # Tests the following RPCs: # - createrawtransaction # - signrawtransaction # - sendrawtransaction # - decoderawtransaction # - getrawtransaction """ from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * # Create one-input, one-output, no-fee transaction: class RawTransactionsTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 3 def setup_network(self, split=False): super().setup_network() connect_nodes_bi(self.nodes, 0, 2) def run_test(self): # prepare some coins for multiple *rawtransaction commands self.nodes[2].generate(1) self.sync_all() self.nodes[0].generate(101) self.sync_all() self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 1.5) self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 1.0) self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 5.0) self.sync_all() self.nodes[0].generate(5) self.sync_all() # # sendrawtransaction with missing input # # inputs = [ {'txid': "1d1d4e24ed99057e84c3f80fd8fbec79ed9e1acee37da269356ecea000000000", 'vout': 1}] # won't exists outputs = {self.nodes[0].getnewaddress(): 4.998} rawtx = self.nodes[2].createrawtransaction(inputs, outputs) rawtx = self.nodes[2].signrawtransaction( rawtx, None, None, "ALL|FORKID") # This will raise an exception since there are missing inputs assert_raises_jsonrpc( -25, "Missing inputs", self.nodes[2].sendrawtransaction, rawtx['hex']) # # RAW TX MULTISIG TESTS # # # 2of2 test addr1 = self.nodes[2].getnewaddress() addr2 = self.nodes[2].getnewaddress() addr1Obj = self.nodes[2].validateaddress(addr1) addr2Obj = self.nodes[2].validateaddress(addr2) mSigObj = self.nodes[2].addmultisigaddress( 2, [addr1Obj['pubkey'], addr2Obj['pubkey']]) mSigObjValid = self.nodes[2].validateaddress(mSigObj) # use balance deltas instead of absolute values bal = self.nodes[2].getbalance() # send 1.2 BTC to msig adr txId = self.nodes[0].sendtoaddress(mSigObj, 1.2) self.sync_all() self.nodes[0].generate(1) self.sync_all() # node2 has both keys of the 2of2 ms addr., tx should affect the # balance assert_equal(self.nodes[2].getbalance(), bal + Decimal('1.20000000')) # 2of3 test from different nodes bal = self.nodes[2].getbalance() addr1 = self.nodes[1].getnewaddress() addr2 = self.nodes[2].getnewaddress() addr3 = self.nodes[2].getnewaddress() addr1Obj = self.nodes[1].validateaddress(addr1) addr2Obj = self.nodes[2].validateaddress(addr2) addr3Obj = self.nodes[2].validateaddress(addr3) mSigObj = self.nodes[2].addmultisigaddress( 2, [addr1Obj['pubkey'], addr2Obj['pubkey'], addr3Obj['pubkey']]) mSigObjValid = self.nodes[2].validateaddress(mSigObj) txId = 
self.nodes[0].sendtoaddress(mSigObj, 2.2) decTx = self.nodes[0].gettransaction(txId) rawTx = self.nodes[0].decoderawtransaction(decTx['hex']) sPK = rawTx['vout'][0]['scriptPubKey']['hex'] self.sync_all() self.nodes[0].generate(1) self.sync_all() # THIS IS A INCOMPLETE FEATURE # NODE2 HAS TWO OF THREE KEY AND THE FUNDS SHOULD BE SPENDABLE AND # COUNT AT BALANCE CALCULATION # for now, assume the funds of a 2of3 multisig tx are not marked as # spendable assert_equal(self.nodes[2].getbalance(), bal) txDetails = self.nodes[0].gettransaction(txId, True) rawTx = self.nodes[0].decoderawtransaction(txDetails['hex']) vout = False for outpoint in rawTx['vout']: if outpoint['value'] == Decimal('2.20000000'): vout = outpoint break bal = self.nodes[0].getbalance() inputs = [{ "txid": txId, "vout": vout['n'], "scriptPubKey": vout['scriptPubKey']['hex'], "amount": vout['value'], }] outputs = {self.nodes[0].getnewaddress(): 2.19} rawTx = self.nodes[2].createrawtransaction(inputs, outputs) rawTxPartialSigned = self.nodes[ 1].signrawtransaction(rawTx, inputs, None, "ALL|FORKID") # node1 only has one key, can't comp. sign the tx assert_equal(rawTxPartialSigned['complete'], False) rawTxSigned = self.nodes[2].signrawtransaction( rawTx, inputs, None, "ALL|FORKID") # node2 can sign the tx compl., own two of three keys assert_equal(rawTxSigned['complete'], True) self.nodes[2].sendrawtransaction(rawTxSigned['hex']) rawTx = self.nodes[0].decoderawtransaction(rawTxSigned['hex']) self.sync_all() self.nodes[0].generate(1) self.sync_all() assert_equal(self.nodes[0].getbalance(), bal + Decimal( '50.00000000') + Decimal('2.19000000')) # block reward + tx # getrawtransaction tests # 1. valid parameters - only supply txid txHash = rawTx["hash"] assert_equal( self.nodes[0].getrawtransaction(txHash), rawTxSigned['hex']) # 2. valid parameters - supply txid and 0 for non-verbose assert_equal( self.nodes[0].getrawtransaction(txHash, 0), rawTxSigned['hex']) # 3. valid parameters - supply txid and False for non-verbose assert_equal(self.nodes[0].getrawtransaction( txHash, False), rawTxSigned['hex']) # 4. valid parameters - supply txid and 1 for verbose. # We only check the "hex" field of the output so we don't need to # update this test every time the output format changes. assert_equal(self.nodes[0].getrawtransaction( txHash, 1)["hex"], rawTxSigned['hex']) # 5. valid parameters - supply txid and True for non-verbose assert_equal(self.nodes[0].getrawtransaction( txHash, True)["hex"], rawTxSigned['hex']) # 6. invalid parameters - supply txid and string "Flase" assert_raises_jsonrpc( -3, "Invalid type", self.nodes[0].getrawtransaction, txHash, "False") # 7. invalid parameters - supply txid and empty array assert_raises_jsonrpc( -3, "Invalid type", self.nodes[0].getrawtransaction, txHash, []) # 8. invalid parameters - supply txid and empty dict assert_raises_jsonrpc( -3, "Invalid type", self.nodes[0].getrawtransaction, txHash, {}) inputs = [ {'txid': "1d1d4e24ed99057e84c3f80fd8fbec79ed9e1acee37da269356ecea000000000", 'vout': 1, 'sequence': 1000}] outputs = {self.nodes[0].getnewaddress(): 1} rawtx = self.nodes[0].createrawtransaction(inputs, outputs) decrawtx = self.nodes[0].decoderawtransaction(rawtx) assert_equal(decrawtx['vin'][0]['sequence'], 1000) # 9. 
invalid parameters - sequence number out of range inputs = [ {'txid': "1d1d4e24ed99057e84c3f80fd8fbec79ed9e1acee37da269356ecea000000000", 'vout': 1, 'sequence': -1}] outputs = {self.nodes[0].getnewaddress(): 1} assert_raises_jsonrpc( -8, 'Invalid parameter, sequence number is out of range', self.nodes[0].createrawtransaction, inputs, outputs) # 10. invalid parameters - sequence number out of range inputs = [ {'txid': "1d1d4e24ed99057e84c3f80fd8fbec79ed9e1acee37da269356ecea000000000", 'vout': 1, 'sequence': 4294967296}] outputs = {self.nodes[0].getnewaddress(): 1} assert_raises_jsonrpc( -8, 'Invalid parameter, sequence number is out of range', self.nodes[0].createrawtransaction, inputs, outputs) inputs = [ {'txid': "1d1d4e24ed99057e84c3f80fd8fbec79ed9e1acee37da269356ecea000000000", 'vout': 1, 'sequence': 4294967294}] outputs = {self.nodes[0].getnewaddress(): 1} rawtx = self.nodes[0].createrawtransaction(inputs, outputs) decrawtx = self.nodes[0].decoderawtransaction(rawtx) assert_equal(decrawtx['vin'][0]['sequence'], 4294967294) if __name__ == '__main__': RawTransactionsTest().main() diff --git a/test/functional/receivedby.py b/test/functional/receivedby.py index f5000ea24..4d6aa0d6d 100755 --- a/test/functional/receivedby.py +++ b/test/functional/receivedby.py @@ -1,165 +1,158 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # Exercise the listreceivedbyaddress API from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * def get_sub_array_from_array(object_array, to_match): ''' Finds and returns a sub array from an array of arrays. to_match should be a unique idetifier of a sub array ''' for item in object_array: all_match = True for key, value in to_match.items(): if item[key] != value: all_match = False if not all_match: continue return item return [] class ReceivedByTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() - self.num_nodes = 4 - self.setup_clean_chain = False - - def setup_nodes(self): - # This test requires mocktime + def set_test_params(self): + self.num_nodes = 2 self.enable_mocktime() - self.nodes = self.start_nodes(self.num_nodes, self.options.tmpdir) def run_test(self): ''' listreceivedbyaddress Test ''' # Send from node 0 to 1 addr = self.nodes[1].getnewaddress() txid = self.nodes[0].sendtoaddress(addr, 0.1) self.sync_all() # Check not listed in listreceivedbyaddress because has 0 confirmations assert_array_result(self.nodes[1].listreceivedbyaddress(), {"address": addr}, {}, True) # Bury Tx under 10 block so it will be returned by # listreceivedbyaddress self.nodes[1].generate(10) self.sync_all() assert_array_result(self.nodes[1].listreceivedbyaddress(), {"address": addr}, {"address": addr, "account": "", "amount": Decimal("0.1"), "confirmations": 10, "txids": [txid, ]}) # With min confidence < 10 assert_array_result(self.nodes[1].listreceivedbyaddress(5), {"address": addr}, {"address": addr, "account": "", "amount": Decimal("0.1"), "confirmations": 10, "txids": [txid, ]}) # With min confidence > 10, should not find Tx assert_array_result( self.nodes[1].listreceivedbyaddress(11), {"address": addr}, {}, True) # Empty Tx addr = self.nodes[1].getnewaddress() assert_array_result(self.nodes[1].listreceivedbyaddress(0, True), {"address": addr}, {"address": addr, "account": "", "amount": 0, "confirmations": 0, "txids": []}) ''' getreceivedbyaddress 
Test ''' # Send from node 0 to 1 addr = self.nodes[1].getnewaddress() txid = self.nodes[0].sendtoaddress(addr, 0.1) self.sync_all() # Check balance is 0 because of 0 confirmations balance = self.nodes[1].getreceivedbyaddress(addr) if balance != Decimal("0.0"): raise AssertionError( "Wrong balance returned by getreceivedbyaddress, %0.2f" % (balance)) # Check balance is 0.1 balance = self.nodes[1].getreceivedbyaddress(addr, 0) if balance != Decimal("0.1"): raise AssertionError( "Wrong balance returned by getreceivedbyaddress, %0.2f" % (balance)) # Bury Tx under 10 block so it will be returned by the default # getreceivedbyaddress self.nodes[1].generate(10) self.sync_all() balance = self.nodes[1].getreceivedbyaddress(addr) if balance != Decimal("0.1"): raise AssertionError( "Wrong balance returned by getreceivedbyaddress, %0.2f" % (balance)) ''' listreceivedbyaccount + getreceivedbyaccount Test ''' # set pre-state addrArr = self.nodes[1].getnewaddress() account = self.nodes[1].getaccount(addrArr) received_by_account_json = get_sub_array_from_array( self.nodes[1].listreceivedbyaccount(), {"account": account}) if len(received_by_account_json) == 0: raise AssertionError("No accounts found in node") balance_by_account = self.nodes[1].getreceivedbyaccount(account) txid = self.nodes[0].sendtoaddress(addr, 0.1) self.sync_all() # listreceivedbyaccount should return received_by_account_json because # of 0 confirmations assert_array_result(self.nodes[1].listreceivedbyaccount(), {"account": account}, received_by_account_json) # getreceivedbyaddress should return same balance because of 0 # confirmations balance = self.nodes[1].getreceivedbyaccount(account) if balance != balance_by_account: raise AssertionError( "Wrong balance returned by getreceivedbyaccount, %0.2f" % (balance)) self.nodes[1].generate(10) self.sync_all() # listreceivedbyaccount should return updated account balance assert_array_result(self.nodes[1].listreceivedbyaccount(), {"account": account}, {"account": received_by_account_json["account"], "amount": (received_by_account_json["amount"] + Decimal("0.1"))}) # getreceivedbyaddress should return updates balance balance = self.nodes[1].getreceivedbyaccount(account) if balance != balance_by_account + Decimal("0.1"): raise AssertionError( "Wrong balance returned by getreceivedbyaccount, %0.2f" % (balance)) # Create a new account named "mynewaccount" that has a 0 balance self.nodes[1].getaccountaddress("mynewaccount") received_by_account_json = get_sub_array_from_array( self.nodes[1].listreceivedbyaccount(0, True), {"account": "mynewaccount"}) if len(received_by_account_json) == 0: raise AssertionError("No accounts found in node") # Test includeempty of listreceivedbyaccount if received_by_account_json["amount"] != Decimal("0.0"): raise AssertionError("Wrong balance returned by listreceivedbyaccount, %0.2f" % (received_by_account_json["amount"])) # Test getreceivedbyaccount for 0 amount accounts balance = self.nodes[1].getreceivedbyaccount("mynewaccount") if balance != Decimal("0.0"): raise AssertionError( "Wrong balance returned by getreceivedbyaccount, %0.2f" % (balance)) if __name__ == '__main__': ReceivedByTest().main() diff --git a/test/functional/reindex.py b/test/functional/reindex.py index 1718de90e..3bc4bfc74 100755 --- a/test/functional/reindex.py +++ b/test/functional/reindex.py @@ -1,42 +1,40 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or 
http://www.opensource.org/licenses/mit-license.php. # # Test -reindex and -reindex-chainstate with CheckBlockIndex # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal import time class ReindexTest(BitcoinTestFramework): - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 def reindex(self, justchainstate=False): self.nodes[0].generate(3) blockcount = self.nodes[0].getblockcount() self.stop_nodes() extra_args = [ ["-reindex-chainstate" if justchainstate else "-reindex", "-checkblockindex=1"]] - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, extra_args) + self.start_nodes(extra_args) while self.nodes[0].getblockcount() < blockcount: time.sleep(0.1) assert_equal(self.nodes[0].getblockcount(), blockcount) self.log.info("Success") def run_test(self): self.reindex(False) self.reindex(True) self.reindex(False) self.reindex(True) if __name__ == '__main__': ReindexTest().main() diff --git a/test/functional/resendwallettransactions.py b/test/functional/resendwallettransactions.py index 6cef6ac9d..8dc4b515a 100755 --- a/test/functional/resendwallettransactions.py +++ b/test/functional/resendwallettransactions.py @@ -1,34 +1,32 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test resendwallettransactions RPC.""" from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal, assert_raises_jsonrpc class ResendWalletTransactionsTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() - self.extra_args = [['--walletbroadcast=false']] + def set_test_params(self): self.num_nodes = 1 + self.extra_args = [['--walletbroadcast=false']] def run_test(self): # Should raise RPC_WALLET_ERROR (-4) if walletbroadcast is disabled. assert_raises_jsonrpc(-4, "Error: Wallet transaction broadcasting is disabled with -walletbroadcast", self.nodes[0].resendwallettransactions) # Should return an empty array if there aren't unconfirmed wallet transactions. self.stop_node(0) - self.nodes[0] = self.start_node(0, self.options.tmpdir) + self.start_node(0, extra_args=[]) assert_equal(self.nodes[0].resendwallettransactions(), []) # Should return an array with the unconfirmed wallet transaction. txid = self.nodes[0].sendtoaddress(self.nodes[0].getnewaddress(), 1) assert_equal(self.nodes[0].resendwallettransactions(), [txid]) if __name__ == '__main__': ResendWalletTransactionsTest().main() diff --git a/test/functional/rest.py b/test/functional/rest.py index 089d7d5ad..8eb6acb70 100755 --- a/test/functional/rest.py +++ b/test/functional/rest.py @@ -1,393 +1,392 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test REST interface # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from struct import * from io import BytesIO from codecs import encode import http.client import urllib.parse def deser_uint256(f): r = 0 for i in range(8): t = unpack(b" this won't connect. 
with mininode_lock: test_node.last_message.pop("getheaders", None) test_node.send_header_for_blocks([blocks[1]]) test_node.wait_for_getheaders(timeout=1) test_node.send_header_for_blocks(blocks) test_node.wait_for_getdata([x.sha256 for x in blocks]) [test_node.send_message(msg_block(x)) for x in blocks] test_node.sync_with_ping() assert_equal( int(self.nodes[0].getbestblockhash(), 16), blocks[1].sha256) blocks = [] # Now we test that if we repeatedly don't send connecting headers, we # don't go into an infinite loop trying to get them to connect. MAX_UNCONNECTING_HEADERS = 10 for j in range(MAX_UNCONNECTING_HEADERS + 1): blocks.append( create_block(tip, create_coinbase(height), block_time)) blocks[-1].solve() tip = blocks[-1].sha256 block_time += 1 height += 1 for i in range(1, MAX_UNCONNECTING_HEADERS): # Send a header that doesn't connect, check that we get a # getheaders. with mininode_lock: test_node.last_message.pop("getheaders", None) test_node.send_header_for_blocks([blocks[i]]) test_node.wait_for_getheaders(timeout=1) # Next header will connect, should re-set our count: test_node.send_header_for_blocks([blocks[0]]) # Remove the first two entries (blocks[1] would connect): blocks = blocks[2:] # Now try to see how many unconnecting headers we can send # before we get disconnected. Should be 5*MAX_UNCONNECTING_HEADERS for i in range(5 * MAX_UNCONNECTING_HEADERS - 1): # Send a header that doesn't connect, check that we get a # getheaders. with mininode_lock: test_node.last_message.pop("getheaders", None) test_node.send_header_for_blocks([blocks[i % len(blocks)]]) test_node.wait_for_getheaders(timeout=1) # Eventually this stops working. test_node.send_header_for_blocks([blocks[-1]]) # Should get disconnected test_node.wait_for_disconnect() self.log.info("Part 5: success!") # Finally, check that the inv node never received a getdata request, # throughout the test assert "getdata" not in inv_node.last_message if __name__ == '__main__': SendHeadersTest().main() diff --git a/test/functional/signmessages.py b/test/functional/signmessages.py index a5494e733..ad3e84919 100755 --- a/test/functional/signmessages.py +++ b/test/functional/signmessages.py @@ -1,39 +1,35 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
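Stepping back to the sendheaders hunk above: Part 5 repeatedly uses one probe idiom, cleared and re-armed under `mininode_lock`. A sketch of that idiom in isolation, assuming `test_node` is a `NodeConnCB`-style peer and `header` a non-connecting block header, as in the test:

```python
from test_framework.mininode import mininode_lock

# Clear any previously recorded getheaders so the wait below observes a
# fresh one, then send a header that doesn't connect and expect the node
# to respond with a getheaders request.
with mininode_lock:
    test_node.last_message.pop("getheaders", None)
test_node.send_header_for_blocks([header])
test_node.wait_for_getheaders(timeout=1)
```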
from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class SignMessagesTest(BitcoinTestFramework): - - """Tests RPC commands for signing and verifying messages.""" - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 def run_test(self): message = 'This is just a test message' # Test the signing with a privkey privKey = 'cUeKHd5orzT3mz8P9pxyREHfsWtVfgsfDjiZZBcjUBAaGk1BTj7N' address = 'mpLQjfK79b7CCV4VMJWEWAj5Mpx8Up5zxB' signature = self.nodes[0].signmessagewithprivkey(privKey, message) # Verify the message assert(self.nodes[0].verifymessage(address, signature, message)) # Test the signing with an address with wallet address = self.nodes[0].getnewaddress() signature = self.nodes[0].signmessage(address, message) # Verify the message assert(self.nodes[0].verifymessage(address, signature, message)) if __name__ == '__main__': SignMessagesTest().main() diff --git a/test/functional/signrawtransactions.py b/test/functional/signrawtransactions.py index 9823b6a24..18040a587 100755 --- a/test/functional/signrawtransactions.py +++ b/test/functional/signrawtransactions.py @@ -1,150 +1,146 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class SignRawTransactionsTest(BitcoinTestFramework): - - """Tests transaction signing via RPC command "signrawtransaction".""" - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 def successful_signing_test(self): """Creates and signs a valid raw transaction with one input. 
Expected results: 1) The transaction has a complete set of signatures 2) No script verification error occurred""" privKeys = ['cUeKHd5orzT3mz8P9pxyREHfsWtVfgsfDjiZZBcjUBAaGk1BTj7N', 'cVKpPfVKSJxKqVpE9awvXNWuLHCa5j5tiE7K6zbUSptFpTEtiFrA'] inputs = [ # Valid pay-to-pubkey scripts {'txid': '9b907ef1e3c26fc71fe4a4b3580bc75264112f95050014157059c736f0202e71', 'vout': 0, 'amount': 3.14159, 'scriptPubKey': '76a91460baa0f494b38ce3c940dea67f3804dc52d1fb9488ac'}, {'txid': '83a4f6a6b73660e13ee6cb3c6063fa3759c50c9b7521d0536022961898f4fb02', 'vout': 0, 'amount': '123.456', 'scriptPubKey': '76a914669b857c03a5ed269d5d85a1ffac9ed5d663072788ac'}, ] outputs = {'mpLQjfK79b7CCV4VMJWEWAj5Mpx8Up5zxB': 0.1} rawTx = self.nodes[0].createrawtransaction(inputs, outputs) rawTxSigned = self.nodes[0].signrawtransaction(rawTx, inputs, privKeys) # 1) The transaction has a complete set of signatures assert 'complete' in rawTxSigned assert_equal(rawTxSigned['complete'], True) # 2) No script verification error occurred assert 'errors' not in rawTxSigned # Check that signrawtransaction doesn't blow up on garbage merge # attempts dummyTxInconsistent = self.nodes[ 0].createrawtransaction([inputs[0]], outputs) rawTxUnsigned = self.nodes[0].signrawtransaction( rawTx + dummyTxInconsistent, inputs) assert 'complete' in rawTxUnsigned assert_equal(rawTxUnsigned['complete'], False) # Check that signrawtransaction properly merges unsigned and signed # txn, even with garbage in the middle rawTxSigned2 = self.nodes[0].signrawtransaction( rawTxUnsigned["hex"] + dummyTxInconsistent + rawTxSigned["hex"], inputs) assert 'complete' in rawTxSigned2 assert_equal(rawTxSigned2['complete'], True) assert 'errors' not in rawTxSigned2 def script_verification_error_test(self): """Creates and signs a raw transaction with valid (vin 0), invalid (vin 1) and one missing (vin 2) input script. 
Expected results: 3) The transaction has no complete set of signatures 4) Two script verification errors occurred 5) Script verification errors have certain properties ("txid", "vout", "scriptSig", "sequence", "error") 6) The verification errors refer to the invalid (vin 1) and missing input (vin 2)""" privKeys = ['cUeKHd5orzT3mz8P9pxyREHfsWtVfgsfDjiZZBcjUBAaGk1BTj7N'] inputs = [ # Valid pay-to-pubkey script {'txid': '9b907ef1e3c26fc71fe4a4b3580bc75264112f95050014157059c736f0202e71', 'vout': 0, 'amount': 0}, # Invalid script {'txid': '5b8673686910442c644b1f4993d8f7753c7c8fcb5c87ee40d56eaeef25204547', 'vout': 7, 'amount': '1.1'}, # Missing scriptPubKey {'txid': '9b907ef1e3c26fc71fe4a4b3580bc75264112f95050014157059c736f0202e71', 'vout': 1, 'amount': 2.0}, ] scripts = [ # Valid pay-to-pubkey script {'txid': '9b907ef1e3c26fc71fe4a4b3580bc75264112f95050014157059c736f0202e71', 'vout': 0, 'amount': 0, 'scriptPubKey': '76a91460baa0f494b38ce3c940dea67f3804dc52d1fb9488ac'}, # Invalid script {'txid': '5b8673686910442c644b1f4993d8f7753c7c8fcb5c87ee40d56eaeef25204547', 'vout': 7, 'amount': '1.1', 'scriptPubKey': 'badbadbadbad'} ] outputs = {'mpLQjfK79b7CCV4VMJWEWAj5Mpx8Up5zxB': 0.1} rawTx = self.nodes[0].createrawtransaction(inputs, outputs) # Make sure decoderawtransaction is at least marginally sane decodedRawTx = self.nodes[0].decoderawtransaction(rawTx) for i, inp in enumerate(inputs): assert_equal(decodedRawTx["vin"][i]["txid"], inp["txid"]) assert_equal(decodedRawTx["vin"][i]["vout"], inp["vout"]) # Make sure decoderawtransaction throws if there is extra data assert_raises(JSONRPCException, self.nodes[ 0].decoderawtransaction, rawTx + "00") rawTxSigned = self.nodes[0].signrawtransaction( rawTx, scripts, privKeys) # 3) The transaction has no complete set of signatures assert 'complete' in rawTxSigned assert_equal(rawTxSigned['complete'], False) # 4) Two script verification errors occurred assert 'errors' in rawTxSigned assert_equal(len(rawTxSigned['errors']), 2) # 5) Script verification errors have certain properties assert 'txid' in rawTxSigned['errors'][0] assert 'vout' in rawTxSigned['errors'][0] assert 'scriptSig' in rawTxSigned['errors'][0] assert 'sequence' in rawTxSigned['errors'][0] assert 'error' in rawTxSigned['errors'][0] # 6) The verification errors refer to the invalid (vin 1) and missing # input (vin 2) assert_equal(rawTxSigned['errors'][0]['txid'], inputs[1]['txid']) assert_equal(rawTxSigned['errors'][0]['vout'], inputs[1]['vout']) assert_equal(rawTxSigned['errors'][1]['txid'], inputs[2]['txid']) assert_equal(rawTxSigned['errors'][1]['vout'], inputs[2]['vout']) def run_test(self): self.successful_signing_test() self.script_verification_error_test() if __name__ == '__main__': SignRawTransactionsTest().main() diff --git a/test/functional/smartfees.py b/test/functional/smartfees.py index fd0a9c540..5ab3cf788 100755 --- a/test/functional/smartfees.py +++ b/test/functional/smartfees.py @@ -1,293 +1,285 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test fee estimation code # from collections import OrderedDict from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * # Construct 2 trivial P2SH's and the ScriptSigs that spend them # So we can create many many transactions without needing to spend # time signing. 
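Before the puzzle-script definitions the comment above introduces, a note on the random fees drawn in `small_txpuzzle_randfee()` further below: the constant `1.1892` is approximately `2 ** 0.25`, so an exponent drawn uniformly from `0..28` yields multipliers spanning roughly `2**0` through `2**7`, i.e. the "1-128 * fee_increment" range its comment describes. A quick check of that arithmetic:

```python
import random

fee_increment = 0.00001  # hypothetical increment
# 1.1892 ~= 2 ** 0.25, so 1.1892 ** 28 ~= 2 ** 7 = 128.
rand_fee = fee_increment * (1.1892 ** random.randint(0, 28))
assert fee_increment <= rand_fee <= 128.01 * fee_increment
```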
P2SH_1 = "2MySexEGVzZpRgNQ1JdjdP5bRETznm3roQ2" # P2SH of "OP_1 OP_DROP" P2SH_2 = "2NBdpwq8Aoo1EEKEXPNrKvr5xQr3M9UfcZA" # P2SH of "OP_2 OP_DROP" # Associated ScriptSig's to spend satisfy P2SH_1 and P2SH_2 # 4 bytes of OP_TRUE and push 2-byte redeem script of "OP_1 OP_DROP" or # "OP_2 OP_DROP" SCRIPT_SIG = ["0451025175", "0451025275"] global log def small_txpuzzle_randfee(from_node, conflist, unconflist, amount, min_fee, fee_increment): ''' Create and send a transaction with a random fee. The transaction pays to a trivial P2SH script, and assumes that its inputs are of the same form. The function takes a list of confirmed outputs and unconfirmed outputs and attempts to use the confirmed list first for its inputs. It adds the newly created outputs to the unconfirmed list. Returns (raw transaction, fee) ''' # It's best to exponentially distribute our random fees # because the buckets are exponentially spaced. # Exponentially distributed from 1-128 * fee_increment rand_fee = float(fee_increment) * (1.1892**random.randint(0, 28)) # Total fee ranges from min_fee to min_fee + 127*fee_increment fee = min_fee - fee_increment + satoshi_round(rand_fee) inputs = [] total_in = Decimal("0.00000000") while total_in <= (amount + fee) and len(conflist) > 0: t = conflist.pop(0) total_in += t["amount"] inputs.append({"txid": t["txid"], "vout": t["vout"]}) if total_in <= amount + fee: while total_in <= (amount + fee) and len(unconflist) > 0: t = unconflist.pop(0) total_in += t["amount"] inputs.append({"txid": t["txid"], "vout": t["vout"]}) if total_in <= amount + fee: raise RuntimeError( "Insufficient funds: need %d, have %d" % (amount + fee, total_in)) outputs = {} outputs = OrderedDict([(P2SH_1, total_in - amount - fee), (P2SH_2, amount)]) rawtx = from_node.createrawtransaction(inputs, outputs) # createrawtransaction constructs a transaction that is ready to be signed. # These transactions don't need to be signed, but we still have to insert the ScriptSig # that will satisfy the ScriptPubKey. completetx = rawtx[0:10] inputnum = 0 for inp in inputs: completetx += rawtx[10 + 82 * inputnum:82 + 82 * inputnum] completetx += SCRIPT_SIG[inp["vout"]] completetx += rawtx[84 + 82 * inputnum:92 + 82 * inputnum] inputnum += 1 completetx += rawtx[10 + 82 * inputnum:] txid = from_node.sendrawtransaction(completetx, True) unconflist.append( {"txid": txid, "vout": 0, "amount": total_in - amount - fee}) unconflist.append({"txid": txid, "vout": 1, "amount": amount}) return (completetx, fee) def split_inputs(from_node, txins, txouts, initial_split=False): ''' We need to generate a lot of very small inputs so we can generate a ton of transactions and they will have low priority. This function takes an input from txins, and creates and sends a transaction which splits the value into 2 outputs which are appended to txouts. 
''' prevtxout = txins.pop() inputs = [] inputs.append({"txid": prevtxout["txid"], "vout": prevtxout["vout"]}) half_change = satoshi_round(prevtxout["amount"] / 2) rem_change = prevtxout["amount"] - half_change - Decimal("0.00001000") outputs = OrderedDict([(P2SH_1, half_change), (P2SH_2, rem_change)]) rawtx = from_node.createrawtransaction(inputs, outputs) # If this is the initial split, we actually need to sign the transaction. # Otherwise we just need to insert the proper ScriptSig if (initial_split): completetx = from_node.signrawtransaction( rawtx, None, None, "ALL|FORKID")["hex"] else: completetx = rawtx[0:82] + SCRIPT_SIG[prevtxout["vout"]] + rawtx[84:] txid = from_node.sendrawtransaction(completetx, True) txouts.append({"txid": txid, "vout": 0, "amount": half_change}) txouts.append({"txid": txid, "vout": 1, "amount": rem_change}) def check_estimates(node, fees_seen, max_invalid, print_estimates=True): ''' This function calls estimatefee and verifies that the estimates meet certain invariants. ''' all_estimates = [node.estimatefee(i) for i in range(1, 26)] if print_estimates: log.info([str(all_estimates[e - 1]) for e in [1, 2, 3, 6, 15, 25]]) delta = 1.0e-6 # account for rounding error last_e = max(fees_seen) for e in [x for x in all_estimates if x >= 0]: # Estimates should be within the bounds of what transaction fees # actually were: if float(e) + delta < min(fees_seen) or float(e) - delta > max(fees_seen): raise AssertionError("Estimated fee (%f) out of range (%f,%f)" % (float(e), min(fees_seen), max(fees_seen))) # Estimates should be monotonically decreasing if float(e) - delta > last_e: raise AssertionError("Estimated fee (%f) larger than last fee (%f) for lower number of confirms" % (float(e), float(last_e))) last_e = e valid_estimate = False invalid_estimates = 0 for i, e in enumerate(all_estimates): # estimate is for i+1 if e >= 0: valid_estimate = True # estimatesmartfee should return the same result assert_equal(node.estimatesmartfee(i + 1)["feerate"], e) else: invalid_estimates += 1 # estimatesmartfee should still be valid approx_estimate = node.estimatesmartfee(i + 1)["feerate"] answer_found = node.estimatesmartfee(i + 1)["blocks"] assert(approx_estimate > 0) assert(answer_found > i + 1) # Once we're at a high enough confirmation count that we can give an estimate, # we should have estimates for all higher confirmation counts if valid_estimate: raise AssertionError( "Invalid estimate appears at higher confirm count than valid estimate") # Check on the expected number of different confirmation counts # that we might not have valid estimates for if invalid_estimates > max_invalid: raise AssertionError( "More than (%d) invalid estimates" % (max_invalid)) return all_estimates class EstimateFeeTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 3 - self.setup_clean_chain = False def setup_network(self): - ''' + """ We'll set up the network to have 3 nodes that all mine with different parameters. But first we need to use one node to create a lot of small low priority outputs which we will use to generate our transactions.
- ''' - self.nodes = [] + """ + self.add_nodes(3, extra_args=[["-maxorphantx=1000", "-whitelist=127.0.0.1"], + ["-blockmaxsize=17000", "-maxorphantx=1000"], + ["-blockmaxsize=8000", "-maxorphantx=1000"]]) # Use node0 to mine blocks for input splitting - self.nodes.append(self.start_node(0, self.options.tmpdir, ["-maxorphantx=1000", - "-whitelist=127.0.0.1"])) - - self.log.info("This test is time consuming, please be patient") - self.log.info( - "Splitting inputs to small size so we can generate low priority tx's") - self.txouts = [] - self.txouts2 = [] - # Split a coinbase into two transaction puzzle outputs - split_inputs(self.nodes[0], self.nodes[ - 0].listunspent(0), self.txouts, True) - - # Mine - while (len(self.nodes[0].getrawmempool()) > 0): - self.nodes[0].generate(1) - - # Repeatedly split those 2 outputs, doubling twice for each rep - # Use txouts to monitor the available utxo, since these won't be - # tracked in wallet - reps = 0 - while (reps < 5): - # Double txouts to txouts2 - while (len(self.txouts) > 0): - split_inputs(self.nodes[0], self.txouts, self.txouts2) - while (len(self.nodes[0].getrawmempool()) > 0): - self.nodes[0].generate(1) - # Double txouts2 to txouts - while (len(self.txouts2) > 0): - split_inputs(self.nodes[0], self.txouts2, self.txouts) - while (len(self.nodes[0].getrawmempool()) > 0): - self.nodes[0].generate(1) - reps += 1 - self.log.info("Finished splitting") - - # Now we can connect the other nodes, didn't want to connect them earlier - # so the estimates would not be affected by the splitting transactions - # Node1 mines small blocks but that are bigger than the expected transaction rate, - # and allows free transactions. + # Node1 mines blocks that are small, but still bigger than the expected transaction rate. # NOTE: the CreateNewBlock code starts counting block size at 1,000 bytes, # (17k is room enough for 110 or so transactions) - self.nodes.append(self.start_node(1, self.options.tmpdir, - ["-blockmaxsize=17000", "-maxorphantx=1000"])) - connect_nodes(self.nodes[1], 0) - # Node2 is a stingy miner, that # produces too small blocks (room for only 55 or so transactions) node2args = ["-blockprioritypercentage=0", - "-blockmaxsize=8000", "-maxorphantx=1000"] - - self.nodes.append(self.start_node(2, self.options.tmpdir, node2args)) - connect_nodes(self.nodes[0], 2) - connect_nodes(self.nodes[2], 1) - - self.sync_all() def transact_and_mine(self, numblocks, mining_node): min_fee = Decimal("0.00001") # We will now mine numblocks blocks generating on average 100 transactions between each block # We shuffle our confirmed txout set before each set of transactions # small_txpuzzle_randfee will use the transactions that have inputs already in the chain when possible # resorting to tx's that depend on the mempool when those run out for i in range(numblocks): random.shuffle(self.confutxo) for j in range(random.randrange(100 - 50, 100 + 50)): from_index = random.randint(1, 2) (txhex, fee) = small_txpuzzle_randfee(self.nodes[ from_index], self.confutxo, self.memutxo, Decimal("0.005"), min_fee, min_fee) # len(txhex) // 2 is the transaction size in bytes (two hex characters per byte) tx_kbytes = (len(txhex) // 2) / 1000.0 self.fees_per_kb.append(float(fee) / tx_kbytes) sync_mempools(self.nodes[0:3], wait=.1) mined = mining_node.getblock( mining_node.generate(1)[0], True)["tx"] sync_blocks(self.nodes[0:3], wait=.1) # update which txouts are confirmed newmem = [] for utx in self.memutxo: if utx["txid"] in mined: self.confutxo.append(utx) else: newmem.append(utx) self.memutxo = newmem def run_test(self): + self.log.info("This test is time consuming,
please be patient") + self.log.info("Splitting inputs so we can generate tx's") + # Make log handler available to helper functions global log log = self.log + # Start node0 + self.start_node(0) + self.txouts = [] + self.txouts2 = [] + # Split a coinbase into two transaction puzzle outputs + split_inputs(self.nodes[0], self.nodes[0].listunspent( + 0), self.txouts, True) + + # Mine + while (len(self.nodes[0].getrawmempool()) > 0): + self.nodes[0].generate(1) + + # Repeatedly split those 2 outputs, doubling twice for each rep + # Use txouts to monitor the available utxo, since these won't be tracked in wallet + reps = 0 + while (reps < 5): + # Double txouts to txouts2 + while (len(self.txouts) > 0): + split_inputs(self.nodes[0], self.txouts, self.txouts2) + while (len(self.nodes[0].getrawmempool()) > 0): + self.nodes[0].generate(1) + # Double txouts2 to txouts + while (len(self.txouts2) > 0): + split_inputs(self.nodes[0], self.txouts2, self.txouts) + while (len(self.nodes[0].getrawmempool()) > 0): + self.nodes[0].generate(1) + reps += 1 + self.log.info("Finished splitting") + + # Now we can connect the other nodes, didn't want to connect them earlier + # so the estimates would not be affected by the splitting transactions + self.start_node(1) + self.start_node(2) + connect_nodes(self.nodes[1], 0) + connect_nodes(self.nodes[0], 2) + connect_nodes(self.nodes[2], 1) + + self.sync_all() + self.fees_per_kb = [] self.memutxo = [] self.confutxo = self.txouts # Start with the set of confirmed txouts after splitting self.log.info("Will output estimates for 1/2/3/6/15/25 blocks") for i in range(2): self.log.info( "Creating transactions and mining them with a block size that can't keep up") # Create transactions and mine 10 small blocks with node 2, but # create txs faster than we can mine self.transact_and_mine(10, self.nodes[2]) check_estimates(self.nodes[1], self.fees_per_kb, 14) self.log.info( "Creating transactions and mining them at a block size that is just big enough") # Generate transactions while mining 10 more blocks, this time with node1 # which mines blocks with capacity just above the rate that # transactions are being created self.transact_and_mine(10, self.nodes[1]) check_estimates(self.nodes[1], self.fees_per_kb, 2) # Finish by mining a normal-sized block: while len(self.nodes[1].getrawmempool()) > 0: self.nodes[1].generate(1) sync_blocks(self.nodes[0:3], wait=.1) self.log.info("Final estimates after emptying mempools") check_estimates(self.nodes[1], self.fees_per_kb, 2) if __name__ == '__main__': EstimateFeeTest().main() diff --git a/test/functional/test_framework/test_framework.py b/test/functional/test_framework/test_framework.py index 740ba0006..d5200edee 100755 --- a/test/functional/test_framework/test_framework.py +++ b/test/functional/test_framework/test_framework.py @@ -1,493 +1,503 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from collections import deque from enum import Enum import logging import optparse import os import shutil import sys import tempfile import time import traceback from .authproxy import JSONRPCException from . 
import coverage from .test_node import TestNode from .util import ( MAX_NODES, PortSeed, assert_equal, check_json_precision, connect_nodes_bi, disconnect_nodes, initialize_datadir, log_filename, p2p_port, set_node_times, sync_blocks, sync_mempools, ) class TestStatus(Enum): PASSED = 1 FAILED = 2 SKIPPED = 3 TEST_EXIT_PASSED = 0 TEST_EXIT_FAILED = 1 TEST_EXIT_SKIPPED = 77 BITCOIND_PROC_WAIT_TIMEOUT = 60 class BitcoinTestFramework(): """Base class for a bitcoin test script. - Individual bitcoin test scripts should subclass this class and override the following methods: + Individual bitcoin test scripts should subclass this class and override the set_test_params() and run_test() methods. + + Individual tests can also override the following methods to customize the test setup: - - __init__() - add_options() - setup_chain() - setup_network() - - run_test() + - setup_nodes() - The main() method should not be overridden. + The __init__() and main() methods should not be overridden. This class also contains various public and private helper methods.""" - # Methods to override in subclass test scripts. def __init__(self): - self.num_nodes = 4 + """Sets test framework defaults. Do not override this method. Instead, override the set_test_params() method""" self.setup_clean_chain = False self.nodes = [] self.mocktime = 0 + self.set_test_params() - def add_options(self, parser): - pass - - def setup_chain(self): - self.log.info("Initializing test directory " + self.options.tmpdir) - if self.setup_clean_chain: - self._initialize_chain_clean(self.options.tmpdir, self.num_nodes) - else: - self._initialize_chain(self.options.tmpdir, - self.num_nodes, self.options.cachedir) - - def setup_network(self): - ''' - Sets up network including starting up nodes. - ''' - self.setup_nodes() - - # Connect the nodes as a "chain". This allows us - # to split the network between nodes 1 and 2 to get - # two halves that can work on competing chains. - - for i in range(self.num_nodes - 1): - connect_nodes_bi(self.nodes, i, i + 1) - self.sync_all() - - def setup_nodes(self): - extra_args = None - if hasattr(self, "extra_args"): - extra_args = self.extra_args - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, extra_args) - - def run_test(self): - raise NotImplementedError + assert hasattr( + self, "num_nodes"), "Test must set self.num_nodes in set_test_params()" - # Main function. This should not be overridden by the subclass test scripts. def main(self): + """Main function. This should not be overridden by the subclass test scripts.""" + parser = optparse.OptionParser(usage="%prog [options]") parser.add_option("--nocleanup", dest="nocleanup", default=False, action="store_true", help="Leave bitcoinds and test.* datadir on exit or error") parser.add_option("--noshutdown", dest="noshutdown", default=False, action="store_true", help="Don't stop bitcoinds after the test execution") parser.add_option("--srcdir", dest="srcdir", default=os.path.normpath(os.path.dirname(os.path.realpath(__file__)) + "/../../../src"), help="Source directory containing bitcoind/bitcoin-cli (default: %default)") parser.add_option("--cachedir", dest="cachedir", default=os.path.normpath(os.path.dirname(os.path.realpath(__file__)) + "/../../cache"), help="Directory for caching pregenerated datadirs") parser.add_option("--tmpdir", dest="tmpdir", help="Root directory for datadirs") parser.add_option("-l", "--loglevel", dest="loglevel", default="INFO", help="log events at this level and higher to the console. 
Can be set to DEBUG, INFO, WARNING, ERROR or CRITICAL. Passing --loglevel DEBUG will output all logs to console. Note that logs at all levels are always written to the test_framework.log file in the temporary test directory.") parser.add_option( "--tracerpc", dest="trace_rpc", default=False, action="store_true", help="Print out all RPC calls as they are made") parser.add_option( "--portseed", dest="port_seed", default=os.getpid(), type='int', help="The seed to use for assigning port numbers (default: current process id)") parser.add_option("--coveragedir", dest="coveragedir", help="Write tested RPC commands into this directory") parser.add_option("--configfile", dest="configfile", help="Location of the test framework config file") self.add_options(parser) (self.options, self.args) = parser.parse_args() PortSeed.n = self.options.port_seed os.environ['PATH'] = self.options.srcdir + ":" + \ self.options.srcdir + "/qt:" + os.environ['PATH'] check_json_precision() self.options.cachedir = os.path.abspath(self.options.cachedir) # Set up temp directory and start logging if self.options.tmpdir: self.options.tmpdir = os.path.abspath(self.options.tmpdir) os.makedirs(self.options.tmpdir, exist_ok=False) else: self.options.tmpdir = tempfile.mkdtemp(prefix="test") self._start_logging() success = TestStatus.FAILED try: self.setup_chain() self.setup_network() self.run_test() success = TestStatus.PASSED except JSONRPCException as e: self.log.exception("JSONRPC error") except SkipTest as e: self.log.warning("Test Skipped: %s" % e.message) success = TestStatus.SKIPPED except AssertionError as e: self.log.exception("Assertion failed") except KeyError as e: self.log.exception("Key error") except Exception as e: self.log.exception("Unexpected exception caught during testing") except KeyboardInterrupt as e: self.log.warning("Exiting after keyboard interrupt") if not self.options.noshutdown: self.log.info("Stopping nodes") if self.nodes: self.stop_nodes() else: self.log.info( "Note: bitcoinds were not stopped and may still be running") if not self.options.nocleanup and not self.options.noshutdown and success != TestStatus.FAILED: self.log.info("Cleaning up") shutil.rmtree(self.options.tmpdir) else: self.log.warning("Not cleaning up dir %s" % self.options.tmpdir) if os.getenv("PYTHON_DEBUG", ""): # Dump the end of the debug logs, to aid in debugging rare # travis failures. import glob filenames = [self.options.tmpdir + "/test_framework.log"] filenames += glob.glob(self.options.tmpdir + "/node*/regtest/debug.log") MAX_LINES_TO_PRINT = 1000 for fn in filenames: try: with open(fn, 'r') as f: print("From", fn, ":") print("".join(deque(f, MAX_LINES_TO_PRINT))) except OSError: print("Opening file %s failed." % fn) traceback.print_exc() if success == TestStatus.PASSED: self.log.info("Tests successful") sys.exit(TEST_EXIT_PASSED) elif success == TestStatus.SKIPPED: self.log.info("Test skipped") sys.exit(TEST_EXIT_SKIPPED) else: self.log.error( "Test failed. Test logging available at %s/test_framework.log", self.options.tmpdir) logging.shutdown() sys.exit(TEST_EXIT_FAILED) - # Public helper methods. These can be accessed by the subclass test scripts. + # Methods to override in subclass test scripts. 
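# For illustration only (a minimal sketch, not part of this file), a test
# written against this base class now overrides set_test_params() and
# run_test(), and nothing else:
#
#     class ExampleTest(BitcoinTestFramework):
#         def set_test_params(self):
#             self.num_nodes = 1
#             self.setup_clean_chain = True
#
#         def run_test(self):
#             # a clean chain starts at the genesis block
#             assert self.nodes[0].getblockcount() == 0
#
#     if __name__ == '__main__':
#         ExampleTest().main()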
+ def set_test_params(self): + """Tests must override this method to change default values for number of nodes, topology, etc.""" + raise NotImplementedError - def start_node(self, i, dirname, extra_args=None, rpchost=None, timewait=None, binary=None, stderr=None): - """Start a bitcoind and return RPC connection to it""" + def add_options(self, parser): + """Override this method to add command-line options to the test""" + pass - if extra_args is None: - extra_args = [] - if binary is None: - binary = os.getenv("BITCOIND", "bitcoind") - node = TestNode(i, dirname, extra_args, rpchost, timewait, binary, - stderr, self.mocktime, coverage_dir=self.options.coveragedir) - node.start() - node.wait_for_rpc_connection() + def setup_chain(self): + """Override this method to customize blockchain setup""" + self.log.info("Initializing test directory " + self.options.tmpdir) + if self.setup_clean_chain: + self._initialize_chain_clean() + else: + self._initialize_chain() - if self.options.coveragedir is not None: - coverage.write_all_rpc_commands(self.options.coveragedir, node.rpc) + def setup_network(self): + """Override this method to customize test network topology""" + self.setup_nodes() + + # Connect the nodes as a "chain". This allows us + # to split the network between nodes 1 and 2 to get + # two halves that can work on competing chains. + for i in range(self.num_nodes - 1): + connect_nodes_bi(self.nodes, i, i + 1) + self.sync_all() + + def setup_nodes(self): + """Override this method to customize test node setup""" + extra_args = None + if hasattr(self, "extra_args"): + extra_args = self.extra_args + self.add_nodes(self.num_nodes, extra_args) + self.start_nodes() - return node + def run_test(self): + """Tests must override this method to define test logic""" + raise NotImplementedError - def start_nodes(self, num_nodes, dirname, extra_args=None, rpchost=None, timewait=None, binary=None): - """Start multiple bitcoinds, return RPC connections to them""" + # Public helper methods. These can be accessed by the subclass test scripts.
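# As an illustrative sketch (mirroring what smartfees.py does elsewhere in
# this change), a test that needs staggered startup can drive these helpers
# directly instead of relying on the default setup_network():
#
#     def setup_network(self):
#         self.add_nodes(3)
#         self.start_node(0)          # prepare state on node0 alone first
#
#     def run_test(self):
#         # ... generate blocks / split inputs on node0 ...
#         self.start_node(1)
#         self.start_node(2)
#         connect_nodes_bi(self.nodes, 0, 1)
#         connect_nodes_bi(self.nodes, 1, 2)
#         self.sync_all()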
+ + def add_nodes(self, num_nodes, extra_args=None, rpchost=None, timewait=None, binary=None): + """Instantiate TestNode objects""" if extra_args is None: extra_args = [[]] * num_nodes if binary is None: binary = [None] * num_nodes assert_equal(len(extra_args), num_nodes) assert_equal(len(binary), num_nodes) - nodes = [] + for i in range(num_nodes): + self.nodes.append(TestNode(i, self.options.tmpdir, extra_args[i], rpchost, timewait=timewait, + binary=binary[i], stderr=None, mocktime=self.mocktime, coverage_dir=self.options.coveragedir)) + + def start_node(self, i, extra_args=None, stderr=None): + """Start a bitcoind""" + + node = self.nodes[i] + + node.start(extra_args, stderr) + node.wait_for_rpc_connection() + + if self.options.coveragedir is not None: + coverage.write_all_rpc_commands(self.options.coveragedir, node.rpc) + + def start_nodes(self, extra_args=None): + """Start multiple bitcoinds""" + + if extra_args is None: + extra_args = [None] * self.num_nodes + assert_equal(len(extra_args), self.num_nodes) try: - for i in range(num_nodes): - nodes.append(TestNode(i, dirname, extra_args[i], rpchost, timewait=timewait, binary=binary[i], - stderr=None, mocktime=self.mocktime, coverage_dir=self.options.coveragedir)) - nodes[i].start() - for node in nodes: + for i, node in enumerate(self.nodes): + node.start(extra_args[i]) + for node in self.nodes: node.wait_for_rpc_connection() except: # If one node failed to start, stop the others self.stop_nodes() raise if self.options.coveragedir is not None: - for node in nodes: + for node in self.nodes: coverage.write_all_rpc_commands( self.options.coveragedir, node.rpc) - return nodes - def stop_node(self, i): """Stop a bitcoind test node""" self.nodes[i].stop_node() while not self.nodes[i].is_node_stopped(): time.sleep(0.1) def stop_nodes(self): """Stop multiple bitcoind test nodes""" for node in self.nodes: # Issue RPC to stop nodes node.stop_node() for node in self.nodes: # Wait for nodes to stop while not node.is_node_stopped(): time.sleep(0.1) - def assert_start_raises_init_error(self, i, dirname, extra_args=None, expected_msg=None): + def assert_start_raises_init_error(self, i, extra_args=None, expected_msg=None): with tempfile.SpooledTemporaryFile(max_size=2**16) as log_stderr: try: - self.start_node(i, dirname, extra_args, stderr=log_stderr) + self.start_node(i, extra_args, stderr=log_stderr) self.stop_node(i) except Exception as e: assert 'bitcoind exited' in str(e) # node must have shutdown self.nodes[i].running = False self.nodes[i].process = None if expected_msg is not None: log_stderr.seek(0) stderr = log_stderr.read().decode('utf-8') if expected_msg not in stderr: raise AssertionError( "Expected error \"" + expected_msg + "\" not found in:\n" + stderr) else: if expected_msg is None: assert_msg = "bitcoind should have exited with an error" else: assert_msg = "bitcoind should have exited with expected error " + expected_msg raise AssertionError(assert_msg) def wait_for_node_exit(self, i, timeout): self.nodes[i].process.wait(timeout) def split_network(self): """ Split the network of four nodes into nodes 0/1 and 2/3. """ disconnect_nodes(self.nodes[1], 2) disconnect_nodes(self.nodes[2], 1) self.sync_all([self.nodes[:2], self.nodes[2:]]) def join_network(self): """ Join the (previously split) network halves together. 
""" connect_nodes_bi(self.nodes, 1, 2) self.sync_all() def sync_all(self, node_groups=None): if not node_groups: node_groups = [self.nodes] for group in node_groups: sync_blocks(group) sync_mempools(group) def enable_mocktime(self): """Enable mocktime for the script. mocktime may be needed for scripts that use the cached version of the blockchain. If the cached version of the blockchain is used without mocktime then the mempools will not sync due to IBD. For backwared compatibility of the python scripts with previous versions of the cache, this helper function sets mocktime to Jan 1, 2014 + (201 * 10 * 60)""" self.mocktime = 1388534400 + (201 * 10 * 60) def disable_mocktime(self): self.mocktime = 0 # Private helper methods. These should not be accessed by the subclass test scripts. def _start_logging(self): # Add logger and logging handlers self.log = logging.getLogger('TestFramework') self.log.setLevel(logging.DEBUG) # Create file handler to log all messages fh = logging.FileHandler(self.options.tmpdir + '/test_framework.log') fh.setLevel(logging.DEBUG) # Create console handler to log messages to stderr. By default this # logs only error messages, but can be configured with --loglevel. ch = logging.StreamHandler(sys.stdout) # User can provide log level as a number or string (eg DEBUG). loglevel # was caught as a string, so try to convert it to an int ll = int( self.options.loglevel) if self.options.loglevel.isdigit() else self.options.loglevel.upper() ch.setLevel(ll) # Format logs the same as bitcoind's debug.log with microprecision (so log files can be concatenated and sorted) formatter = logging.Formatter( fmt='%(asctime)s.%(msecs)03d000 %(name)s (%(levelname)s): %(message)s', datefmt='%Y-%m-%d %H:%M:%S') formatter.converter = time.gmtime fh.setFormatter(formatter) ch.setFormatter(formatter) # add the handlers to the logger self.log.addHandler(fh) self.log.addHandler(ch) if self.options.trace_rpc: rpc_logger = logging.getLogger("BitcoinRPC") rpc_logger.setLevel(logging.DEBUG) rpc_handler = logging.StreamHandler(sys.stdout) rpc_handler.setLevel(logging.DEBUG) rpc_logger.addHandler(rpc_handler) - def _initialize_chain(self, test_dir, num_nodes, cachedir): + def _initialize_chain(self): """Initialize a pre-mined blockchain for use by the test. 
Create a cache of a 200-block-long chain (with wallet) for MAX_NODES Afterward, create num_nodes copies from the cache.""" - assert num_nodes <= MAX_NODES + assert self.num_nodes <= MAX_NODES create_cache = False for i in range(MAX_NODES): - if not os.path.isdir(os.path.join(cachedir, 'node' + str(i))): + if not os.path.isdir(os.path.join(self.options.cachedir, 'node' + str(i))): create_cache = True break if create_cache: self.log.debug("Creating data directories from cached datadir") # find and delete old cache directories if any exist for i in range(MAX_NODES): - if os.path.isdir(os.path.join(cachedir, "node" + str(i))): - shutil.rmtree(os.path.join(cachedir, "node" + str(i))) + if os.path.isdir(os.path.join(self.options.cachedir, "node" + str(i))): + shutil.rmtree(os.path.join( + self.options.cachedir, "node" + str(i))) # Create cache directories, run bitcoinds: for i in range(MAX_NODES): - datadir = initialize_datadir(cachedir, i) + datadir = initialize_datadir(self.options.cachedir, i) args = [os.getenv("BITCOIND", "bitcoind"), "-server", "-keypool=1", "-datadir=" + datadir, "-discover=0"] if i > 0: args.append("-connect=127.0.0.1:" + str(p2p_port(0))) - self.nodes.append(TestNode(i, cachedir, extra_args=[], rpchost=None, timewait=None, - binary=None, stderr=None, mocktime=self.mocktime, coverage_dir=None)) + self.nodes.append(TestNode(i, self.options.cachedir, extra_args=[ + ], rpchost=None, timewait=None, binary=None, stderr=None, mocktime=self.mocktime, coverage_dir=None)) self.nodes[i].args = args - self.nodes[i].start() + self.start_node(i) # Wait for RPC connections to be ready for node in self.nodes: node.wait_for_rpc_connection() # Create a 200-block-long chain; each of the 4 first nodes # gets 25 mature blocks and 25 immature. # Note: To preserve compatibility with older versions of # initialize_chain, only 4 nodes will generate coins. # # blocks are created with timestamps 10 minutes apart # starting from 2010 minutes in the past self.enable_mocktime() block_time = self.mocktime - (201 * 10 * 60) for i in range(2): for peer in range(4): for j in range(25): set_node_times(self.nodes, block_time) self.nodes[peer].generate(1) block_time += 10 * 60 # Must sync before next peer starts generating blocks sync_blocks(self.nodes) # Shut them down, and clean up cache directories: self.stop_nodes() self.nodes = [] self.disable_mocktime() for i in range(MAX_NODES): - os.remove(log_filename(cachedir, i, "debug.log")) - os.remove(log_filename(cachedir, i, "db.log")) - os.remove(log_filename(cachedir, i, "peers.dat")) - os.remove(log_filename(cachedir, i, "fee_estimates.dat")) - - for i in range(num_nodes): - from_dir = os.path.join(cachedir, "node" + str(i)) - to_dir = os.path.join(test_dir, "node" + str(i)) + os.remove(log_filename(self.options.cachedir, i, "debug.log")) + os.remove(log_filename(self.options.cachedir, i, "db.log")) + os.remove(log_filename(self.options.cachedir, i, "peers.dat")) + os.remove(log_filename( + self.options.cachedir, i, "fee_estimates.dat")) + + for i in range(self.num_nodes): + from_dir = os.path.join(self.options.cachedir, "node" + str(i)) + to_dir = os.path.join(self.options.tmpdir, "node" + str(i)) shutil.copytree(from_dir, to_dir) # Overwrite port/rpcport in bitcoin.conf - initialize_datadir(test_dir, i) + initialize_datadir(self.options.tmpdir, i) - def _initialize_chain_clean(self, test_dir, num_nodes): + def _initialize_chain_clean(self): """Initialize empty blockchain for use by the test. Create an empty blockchain and num_nodes wallets. 
Useful if a test case wants complete control over initialization.""" - for i in range(num_nodes): - initialize_datadir(test_dir, i) + for i in range(self.num_nodes): + initialize_datadir(self.options.tmpdir, i) class ComparisonTestFramework(BitcoinTestFramework): """Test framework for doing p2p comparison testing Sets up some bitcoind binaries: - 1 binary: test binary - 2 binaries: 1 test binary, 1 ref binary - n>2 binaries: 1 test binary, n-1 ref binaries""" - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 self.setup_clean_chain = True def add_options(self, parser): parser.add_option("--testbinary", dest="testbinary", default=os.getenv("BITCOIND", "bitcoind"), help="bitcoind binary to test") parser.add_option("--refbinary", dest="refbinary", default=os.getenv("BITCOIND", "bitcoind"), help="bitcoind binary to use for reference nodes (if any)") def setup_network(self): extra_args = [['-whitelist=127.0.0.1']] * self.num_nodes if hasattr(self, "extra_args"): extra_args = self.extra_args - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, extra_args, - binary=[self.options.testbinary] + - [self.options.refbinary] * (self.num_nodes - 1)) + self.add_nodes(self.num_nodes, extra_args, + binary=[self.options.testbinary] + + [self.options.refbinary] * (self.num_nodes - 1)) + self.start_nodes() class SkipTest(Exception): """This exception is raised to skip a test""" def __init__(self, message): self.message = message diff --git a/test/functional/test_framework/test_node.py b/test/functional/test_framework/test_node.py index 110d2f9d2..6cf14fc46 100755 --- a/test/functional/test_framework/test_node.py +++ b/test/functional/test_framework/test_node.py @@ -1,192 +1,195 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Class for bitcoind node under test""" import decimal import errno import http.client import json import logging import os import subprocess import time from .util import ( assert_equal, get_rpc_proxy, rpc_url, ) from .authproxy import JSONRPCException class TestNode(): """A class for representing a bitcoind node under test. This class contains: - state about the node (whether it's running, etc) - a Python subprocess.Popen object representing the running process - an RPC connection to the node To make things easier for the test writer, a bit of magic is happening under the covers. Any unrecognised messages will be dispatched to the RPC connection.""" def __init__(self, i, dirname, extra_args, rpchost, timewait, binary, stderr, mocktime, coverage_dir): self.index = i self.datadir = os.path.join(dirname, "node" + str(i)) self.rpchost = rpchost if timewait: self.rpc_timeout = timewait else: # Wait for up to 60 seconds for the RPC server to respond self.rpc_timeout = 60 if binary is None: self.binary = os.getenv("BITCOIND", "bitcoind") else: self.binary = binary self.stderr = stderr self.coverage_dir = coverage_dir # Most callers will just need to add extra args to the standard list below. For those callers that need more flexibility, they can just set the args property directly.
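# (For example, _initialize_chain() in test_framework.py replaces node.args
# wholesale so that the cache nodes run with a custom command line.)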
self.extra_args = extra_args self.args = [self.binary, "-datadir=" + self.datadir, "-server", "-keypool=1", "-discover=0", "-rest", "-logtimemicros", "-debug", "-debugexclude=libevent", "-debugexclude=leveldb", "-mocktime=" + str(mocktime), "-uacomment=testnode%d" % i] self.cli = TestNodeCLI( os.getenv("BITCOINCLI", "bitcoin-cli"), self.datadir) self.running = False self.process = None self.rpc_connected = False self.rpc = None self.url = None self.log = logging.getLogger('TestFramework.node%d' % i) def __getattr__(self, *args, **kwargs): """Dispatches any unrecognised messages to the RPC connection.""" assert self.rpc_connected and self.rpc is not None, "Error: no RPC connection" return self.rpc.__getattr__(*args, **kwargs) - def start(self): + def start(self, extra_args=None, stderr=None): """Start the node.""" - self.process = subprocess.Popen( - self.args + self.extra_args, stderr=self.stderr) + if extra_args is None: + extra_args = self.extra_args + if stderr is None: + stderr = self.stderr + self.process = subprocess.Popen(self.args + extra_args, stderr=stderr) self.running = True self.log.debug("bitcoind started, waiting for RPC to come up") def wait_for_rpc_connection(self): """Sets up an RPC connection to the bitcoind process. Raises an AssertionError if unable to connect.""" # Poll at a rate of four times per second poll_per_s = 4 for _ in range(poll_per_s * self.rpc_timeout): assert self.process.poll( ) is None, "bitcoind exited with status %i during initialization" % self.process.returncode try: - self.rpc = get_rpc_proxy(rpc_url( - self.datadir, self.index, self.rpchost), self.index, coveragedir=self.coverage_dir) + self.rpc = get_rpc_proxy(rpc_url(self.datadir, self.index, self.rpchost), + self.index, timeout=self.rpc_timeout, coveragedir=self.coverage_dir) self.rpc.getblockcount() # If the call to getblockcount() succeeds then the RPC connection is up self.rpc_connected = True self.url = self.rpc.url self.log.debug("RPC successfully started") return except IOError as e: if e.errno != errno.ECONNREFUSED: # Port not yet open? raise # unknown IO error except JSONRPCException as e: # Initialization phase if e.error['code'] != -28: # RPC in warmup? raise # unknown JSON RPC exception except ValueError as e: # cookie file not found and no rpcuser or rpcpassword; bitcoind is still starting if "No RPC credentials" not in str(e): raise time.sleep(1.0 / poll_per_s) raise AssertionError("Unable to connect to bitcoind") def get_wallet_rpc(self, wallet_name): assert self.rpc_connected assert self.rpc wallet_path = "wallet/%s" % wallet_name return self.rpc / wallet_path def stop_node(self): """Stop the node.""" if not self.running: return self.log.debug("Stopping node") try: self.stop() except http.client.CannotSendRequest: self.log.exception("Unable to stop node.") def is_node_stopped(self): """Checks whether the node has stopped. Returns True if the node has stopped. False otherwise. This method is responsible for freeing resources (self.process).""" if not self.running: return True return_code = self.process.poll() if return_code is not None: # process has stopped. Assert that it didn't return an error code. assert_equal(return_code, 0) self.running = False self.process = None self.log.debug("Node stopped") return True return False def node_encrypt_wallet(self, passphrase): """Encrypts the wallet.
This causes bitcoind to shut down, so this method takes care of cleaning up resources.""" self.encryptwallet(passphrase) while not self.is_node_stopped(): time.sleep(0.1) self.rpc = None self.rpc_connected = False class TestNodeCLI(): """Interface to bitcoin-cli for an individual node""" def __init__(self, binary, datadir): self.args = [] self.binary = binary self.datadir = datadir self.input = None def __call__(self, *args, input=None): # TestNodeCLI is callable with bitcoin-cli command-line args self.args = [str(arg) for arg in args] self.input = input return self def __getattr__(self, command): def dispatcher(*args, **kwargs): return self.send_cli(command, *args, **kwargs) return dispatcher def send_cli(self, command, *args, **kwargs): """Run bitcoin-cli command. Deserializes returned string as Python object.""" pos_args = [str(arg) for arg in args] named_args = [str(key) + "=" + str(value) for (key, value) in kwargs.items()] assert not ( pos_args and named_args), "Cannot use positional arguments and named arguments in the same bitcoin-cli call" p_args = [self.binary, "-datadir=" + self.datadir] + self.args if named_args: p_args += ["-named"] p_args += [command] + pos_args + named_args process = subprocess.Popen(p_args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) cli_stdout, cli_stderr = process.communicate(input=self.input) returncode = process.poll() if returncode: # Ignore cli_stdout, raise with cli_stderr raise subprocess.CalledProcessError( returncode, self.binary, output=cli_stderr) return json.loads(cli_stdout, parse_float=decimal.Decimal) diff --git a/test/functional/txn_clone.py b/test/functional/txn_clone.py index 8e2071701..914e63872 100755 --- a/test/functional/txn_clone.py +++ b/test/functional/txn_clone.py @@ -1,173 +1,170 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. # # Test proper accounting with an equivalent malleability clone # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class TxnMallTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 4 - self.setup_clean_chain = False def add_options(self, parser): parser.add_option( "--mineblock", dest="mine_block", default=False, action="store_true", help="Test double-spend of 1-confirmed transaction") def setup_network(self): # Start with split network: super(TxnMallTest, self).setup_network() disconnect_nodes(self.nodes[1], 2) disconnect_nodes(self.nodes[2], 1) def run_test(self): # All nodes should start with 1,250 BTC: starting_balance = 1250 for i in range(4): assert_equal(self.nodes[i].getbalance(), starting_balance) # bug workaround: coins generated are assigned to the first getnewaddress call!
self.nodes[i].getnewaddress("") # Assign coins to foo and bar accounts: self.nodes[0].settxfee(.001) node0_address_foo = self.nodes[0].getnewaddress("foo") fund_foo_txid = self.nodes[0].sendfrom("", node0_address_foo, 1219) fund_foo_tx = self.nodes[0].gettransaction(fund_foo_txid) node0_address_bar = self.nodes[0].getnewaddress("bar") fund_bar_txid = self.nodes[0].sendfrom("", node0_address_bar, 29) fund_bar_tx = self.nodes[0].gettransaction(fund_bar_txid) assert_equal(self.nodes[0].getbalance(""), starting_balance - 1219 - 29 + fund_foo_tx["fee"] + fund_bar_tx["fee"]) # Coins are sent to node1_address node1_address = self.nodes[1].getnewaddress("from0") # Send tx1, and another transaction tx2 that won't be cloned txid1 = self.nodes[0].sendfrom("foo", node1_address, 40, 0) txid2 = self.nodes[0].sendfrom("bar", node1_address, 20, 0) # Construct a clone of tx1, to be malleated rawtx1 = self.nodes[0].getrawtransaction(txid1, 1) clone_inputs = [ {"txid": rawtx1["vin"][0]["txid"], "vout":rawtx1["vin"][0]["vout"]}] clone_outputs = {rawtx1[ "vout"][0]["scriptPubKey"]["addresses"][0]: rawtx1["vout"][0]["value"], rawtx1["vout"][1]["scriptPubKey"]["addresses"][0]: rawtx1["vout"][1]["value"]} clone_locktime = rawtx1["locktime"] clone_raw = self.nodes[0].createrawtransaction( clone_inputs, clone_outputs, clone_locktime) # createrawtransaction randomizes the order of its outputs, so swap them if necessary. # Output 0 starts after the version (4 bytes), input count (1), outpoint (36), scriptSig length stub (1), sequence (4) and output count (1): hex offset 2 * (4 + 1 + 36 + 1 + 4 + 1). # 40 BTC is 4,000,000,000 satoshis (0xEE6B2800), serialized little-endian as 00286bee00000000 pos0 = 2 * (4 + 1 + 36 + 1 + 4 + 1) hex40 = "00286bee00000000" # each output is an 8-byte value (16 hex chars), a script-length byte (2 hex chars) and the script itself output_len = 16 + 2 + 2 * \ int("0x" + clone_raw[pos0 + 16: pos0 + 16 + 2], 0) if (rawtx1["vout"][0]["value"] == 40 and clone_raw[pos0: pos0 + 16] != hex40 or rawtx1["vout"][0]["value"] != 40 and clone_raw[pos0: pos0 + 16] == hex40): output0 = clone_raw[pos0: pos0 + output_len] output1 = clone_raw[pos0 + output_len: pos0 + 2 * output_len] clone_raw = clone_raw[:pos0] + output1 + \ output0 + clone_raw[pos0 + 2 * output_len:] # Use a different signature hash type to sign. This creates an equivalent but malleated clone.
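# (Why a different sighash type malleates the clone: the sighash-type byte is
# serialized inside the signature itself, so signing with ANYONECANPAY yields
# a different scriptSig and therefore a different txid, while spending the
# same inputs to the same outputs.)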
# Don't send the clone anywhere yet tx1_clone = self.nodes[0].signrawtransaction( clone_raw, None, None, "ALL|FORKID|ANYONECANPAY") assert_equal(tx1_clone["complete"], True) # Have node0 mine a block, if requested: if (self.options.mine_block): self.nodes[0].generate(1) sync_blocks(self.nodes[0:2]) tx1 = self.nodes[0].gettransaction(txid1) tx2 = self.nodes[0].gettransaction(txid2) # Node0's balance should be starting balance, plus 50BTC for another # matured block, minus tx1 and tx2 amounts, and minus transaction fees: expected = starting_balance + fund_foo_tx["fee"] + fund_bar_tx["fee"] if self.options.mine_block: expected += 50 expected += tx1["amount"] + tx1["fee"] expected += tx2["amount"] + tx2["fee"] assert_equal(self.nodes[0].getbalance(), expected) # foo and bar accounts should be debited: assert_equal(self.nodes[0].getbalance( "foo", 0), 1219 + tx1["amount"] + tx1["fee"]) assert_equal(self.nodes[0].getbalance( "bar", 0), 29 + tx2["amount"] + tx2["fee"]) if self.options.mine_block: assert_equal(tx1["confirmations"], 1) assert_equal(tx2["confirmations"], 1) # Node1's "from0" balance should be both transaction amounts: assert_equal(self.nodes[1].getbalance( "from0"), -(tx1["amount"] + tx2["amount"])) else: assert_equal(tx1["confirmations"], 0) assert_equal(tx2["confirmations"], 0) # Send clone and its parent to miner self.nodes[2].sendrawtransaction(fund_foo_tx["hex"]) txid1_clone = self.nodes[2].sendrawtransaction(tx1_clone["hex"]) # ... mine a block... self.nodes[2].generate(1) # Reconnect the split network, and sync chain: connect_nodes(self.nodes[1], 2) self.nodes[2].sendrawtransaction(fund_bar_tx["hex"]) self.nodes[2].sendrawtransaction(tx2["hex"]) self.nodes[2].generate(1) # Mine another block to make sure we sync sync_blocks(self.nodes) # Re-fetch transaction info: tx1 = self.nodes[0].gettransaction(txid1) tx1_clone = self.nodes[0].gettransaction(txid1_clone) tx2 = self.nodes[0].gettransaction(txid2) # Verify expected confirmations assert_equal(tx1["confirmations"], -2) assert_equal(tx1_clone["confirmations"], 2) assert_equal(tx2["confirmations"], 1) # Check node0's total balance; should be same as before the clone, + 100 BTC for 2 matured, # less possible orphaned matured subsidy expected += 100 if (self.options.mine_block): expected -= 50 assert_equal(self.nodes[0].getbalance(), expected) assert_equal(self.nodes[0].getbalance("*", 0), expected) # Check node0's individual account balances. # "foo" should have been debited by the equivalent clone of tx1 assert_equal(self.nodes[0].getbalance( "foo"), 1219 + tx1["amount"] + tx1["fee"]) # "bar" should have been debited by (possibly unconfirmed) tx2 assert_equal(self.nodes[0].getbalance( "bar", 0), 29 + tx2["amount"] + tx2["fee"]) # "" should have starting balance, less funding txes, plus subsidies assert_equal(self.nodes[0].getbalance("", 0), starting_balance - 1219 + fund_foo_tx["fee"] - 29 + fund_bar_tx["fee"] + 100) # Node1's "from0" account balance assert_equal(self.nodes[1].getbalance( "from0", 0), -(tx1["amount"] + tx2["amount"])) if __name__ == '__main__': TxnMallTest().main() diff --git a/test/functional/txn_doublespend.py b/test/functional/txn_doublespend.py index efe82a356..4a78ce83f 100755 --- a/test/functional/txn_doublespend.py +++ b/test/functional/txn_doublespend.py @@ -1,154 +1,151 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
# # Test proper accounting with a double-spend conflict # from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * class TxnMallTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 4 - self.setup_clean_chain = False def add_options(self, parser): parser.add_option( "--mineblock", dest="mine_block", default=False, action="store_true", help="Test double-spend of 1-confirmed transaction") def setup_network(self): # Start with split network: super().setup_network() disconnect_nodes(self.nodes[1], 2) disconnect_nodes(self.nodes[2], 1) def run_test(self): # All nodes should start with 1,250 BTC: starting_balance = 1250 for i in range(4): assert_equal(self.nodes[i].getbalance(), starting_balance) # bug workaround: coins generated are assigned to the first getnewaddress call! self.nodes[i].getnewaddress("") # Assign coins to foo and bar accounts: node0_address_foo = self.nodes[0].getnewaddress("foo") fund_foo_txid = self.nodes[0].sendfrom("", node0_address_foo, 1219) fund_foo_tx = self.nodes[0].gettransaction(fund_foo_txid) node0_address_bar = self.nodes[0].getnewaddress("bar") fund_bar_txid = self.nodes[0].sendfrom("", node0_address_bar, 29) fund_bar_tx = self.nodes[0].gettransaction(fund_bar_txid) assert_equal(self.nodes[0].getbalance(""), starting_balance - 1219 - 29 + fund_foo_tx["fee"] + fund_bar_tx["fee"]) # Coins are sent to node1_address node1_address = self.nodes[1].getnewaddress("from0") # First: use raw transaction API to send 1240 BTC to node1_address, # but don't broadcast: doublespend_fee = Decimal('-.02') rawtx_input_0 = {} rawtx_input_0["txid"] = fund_foo_txid rawtx_input_0["vout"] = find_output(self.nodes[0], fund_foo_txid, 1219) rawtx_input_1 = {} rawtx_input_1["txid"] = fund_bar_txid rawtx_input_1["vout"] = find_output(self.nodes[0], fund_bar_txid, 29) inputs = [rawtx_input_0, rawtx_input_1] change_address = self.nodes[0].getnewaddress() outputs = {} outputs[node1_address] = 1240 outputs[change_address] = 1248 - 1240 + doublespend_fee rawtx = self.nodes[0].createrawtransaction(inputs, outputs) doublespend = self.nodes[0].signrawtransaction( rawtx, None, None, "ALL|FORKID") assert_equal(doublespend["complete"], True) # Create two spends using one 50 BTC coin each txid1 = self.nodes[0].sendfrom("foo", node1_address, 40, 0) txid2 = self.nodes[0].sendfrom("bar", node1_address, 20, 0) # Have node0 mine a block: if (self.options.mine_block): self.nodes[0].generate(1) sync_blocks(self.nodes[0:2]) tx1 = self.nodes[0].gettransaction(txid1) tx2 = self.nodes[0].gettransaction(txid2) # Node0's balance should be starting balance, plus 50BTC for another # matured block, minus 40, minus 20, and minus transaction fees: expected = starting_balance + fund_foo_tx["fee"] + fund_bar_tx["fee"] if self.options.mine_block: expected += 50 expected += tx1["amount"] + tx1["fee"] expected += tx2["amount"] + tx2["fee"] assert_equal(self.nodes[0].getbalance(), expected) # foo and bar accounts should be debited: assert_equal(self.nodes[0].getbalance( "foo", 0), 1219 + tx1["amount"] + tx1["fee"]) assert_equal(self.nodes[0].getbalance( "bar", 0), 29 + tx2["amount"] + tx2["fee"]) if self.options.mine_block: assert_equal(tx1["confirmations"], 1) assert_equal(tx2["confirmations"], 1) # Node1's "from0" balance should be both transaction amounts: assert_equal(self.nodes[1].getbalance( "from0"), -(tx1["amount"] + tx2["amount"])) else: assert_equal(tx1["confirmations"], 0) assert_equal(tx2["confirmations"], 0) # Now give
doublespend and its parents to miner: self.nodes[2].sendrawtransaction(fund_foo_tx["hex"]) self.nodes[2].sendrawtransaction(fund_bar_tx["hex"]) doublespend_txid = self.nodes[2].sendrawtransaction(doublespend["hex"]) # ... mine a block... self.nodes[2].generate(1) # Reconnect the split network, and sync chain: connect_nodes(self.nodes[1], 2) # Mine another block to make sure we sync self.nodes[2].generate(1) sync_blocks(self.nodes) assert_equal(self.nodes[0].gettransaction( doublespend_txid)["confirmations"], 2) # Re-fetch transaction info: tx1 = self.nodes[0].gettransaction(txid1) tx2 = self.nodes[0].gettransaction(txid2) # Both transactions should be conflicted assert_equal(tx1["confirmations"], -2) assert_equal(tx2["confirmations"], -2) # Node0's total balance should be starting balance, plus 100BTC for # two more matured blocks, minus 1240 for the double-spend, plus fees (which are # negative): expected = starting_balance + 100 - 1240 + \ fund_foo_tx["fee"] + fund_bar_tx["fee"] + doublespend_fee assert_equal(self.nodes[0].getbalance(), expected) assert_equal(self.nodes[0].getbalance("*"), expected) # Final "" balance is starting_balance - amount moved to accounts - doublespend + subsidies + # fees (which are negative) assert_equal(self.nodes[0].getbalance("foo"), 1219) assert_equal(self.nodes[0].getbalance("bar"), 29) assert_equal(self.nodes[0].getbalance(""), starting_balance - 1219 - 29 - 1240 + 100 + fund_foo_tx["fee"] + fund_bar_tx["fee"] + doublespend_fee) # Node1's "from0" account balance should be just the doublespend: assert_equal(self.nodes[1].getbalance("from0"), 1240) if __name__ == '__main__': TxnMallTest().main() diff --git a/test/functional/uptime.py b/test/functional/uptime.py index b20d6f5cb..78236b239 100755 --- a/test/functional/uptime.py +++ b/test/functional/uptime.py @@ -1,32 +1,30 @@ #!/usr/bin/env python3 # Copyright (c) 2017 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test the RPC call related to the uptime command. Test corresponds to code in rpc/server.cpp. """ import time from test_framework.test_framework import BitcoinTestFramework class UptimeTest(BitcoinTestFramework): - def __init__(self): - super().__init__() - + def set_test_params(self): self.num_nodes = 1 self.setup_clean_chain = True def run_test(self): self._test_uptime() def _test_uptime(self): wait_time = 10 self.nodes[0].setmocktime(int(time.time() + wait_time)) assert(self.nodes[0].uptime() >= wait_time) if __name__ == '__main__': UptimeTest().main() diff --git a/test/functional/wallet-accounts.py b/test/functional/wallet-accounts.py index a4a817bda..bee40e241 100755 --- a/test/functional/wallet-accounts.py +++ b/test/functional/wallet-accounts.py @@ -1,86 +1,84 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal class WalletAccountsTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 self.extra_args = [[]] def run_test(self): node = self.nodes[0] # Check that there's no UTXO on any of the nodes assert_equal(len(node.listunspent()), 0) node.generate(101) assert_equal(node.getbalance(), 50) accounts = ["a", "b", "c", "d", "e"] amount_to_send = 1.0 account_addresses = dict() for account in accounts: address = node.getaccountaddress(account) account_addresses[account] = address node.getnewaddress(account) assert_equal(node.getaccount(address), account) assert(address in node.getaddressesbyaccount(account)) node.sendfrom("", address, amount_to_send) node.generate(1) for i in range(len(accounts)): from_account = accounts[i] to_account = accounts[(i + 1) % len(accounts)] to_address = account_addresses[to_account] node.sendfrom(from_account, to_address, amount_to_send) node.generate(1) for account in accounts: address = node.getaccountaddress(account) assert(address != account_addresses[account]) assert_equal(node.getreceivedbyaccount(account), 2) node.move(account, "", node.getbalance(account)) node.generate(101) expected_account_balances = {"": 5200} for account in accounts: expected_account_balances[account] = 0 assert_equal(node.listaccounts(), expected_account_balances) assert_equal(node.getbalance(""), 5200) for account in accounts: address = node.getaccountaddress("") node.setaccount(address, account) assert(address in node.getaddressesbyaccount(account)) assert(address not in node.getaddressesbyaccount("")) for account in accounts: addresses = [] for x in range(10): addresses.append(node.getnewaddress()) multisig_address = node.addmultisigaddress(5, addresses, account) node.sendfrom("", multisig_address, 50) node.generate(101) for account in accounts: assert_equal(node.getbalance(account), 50) if __name__ == '__main__': WalletAccountsTest().main() diff --git a/test/functional/wallet-dump.py b/test/functional/wallet-dump.py index 5a49cfc94..ac2c1a017 100755 --- a/test/functional/wallet-dump.py +++ b/test/functional/wallet-dump.py @@ -1,115 +1,111 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal def read_dump(file_name, addrs, hd_master_addr_old): """ Read the given dump, count the addrs that match, count change and reserve. 
Also check that the old hd_master is inactive """ with open(file_name, encoding='utf8') as inputfile: found_addr = 0 found_addr_chg = 0 found_addr_rsv = 0 hd_master_addr_ret = None for line in inputfile: # only read non-comment lines if line[0] != "#" and len(line) > 10: # split out some data; a key line looks roughly like: # "<privkey> <timestamp> <keytype> # addr=<address> [hdkeypath=<keypath>]" key_label, comment = line.split("#") # key = key_label.split(" ")[0] keytype = key_label.split(" ")[2] if len(comment) > 1: addr_keypath = comment.split(" addr=")[1] addr = addr_keypath.split(" ")[0] keypath = None if keytype == "inactivehdmaster=1": # ensure the old master is still available assert(hd_master_addr_old == addr) elif keytype == "hdmaster=1": # ensure we have generated a new hd master key assert(hd_master_addr_old != addr) hd_master_addr_ret = addr else: keypath = addr_keypath.rstrip().split("hdkeypath=")[1] # count key types for addrObj in addrs: if addrObj['address'] == addr and addrObj['hdkeypath'] == keypath and keytype == "label=": found_addr += 1 break elif keytype == "change=1": found_addr_chg += 1 break elif keytype == "reserve=1": found_addr_rsv += 1 break return found_addr, found_addr_chg, found_addr_rsv, hd_master_addr_ret class WalletDumpTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() - self.setup_clean_chain = False + def set_test_params(self): self.num_nodes = 1 self.extra_args = [["-keypool=90"]] def setup_network(self, split=False): # Use 1 minute timeout because the initial getnewaddress RPC can take # longer than the default 30 seconds due to an expensive # CWallet::TopUpKeyPool call, and the encryptwallet RPC made later in # the test often takes even longer. - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, self.extra_args, timewait=60) + self.add_nodes(self.num_nodes, self.extra_args, timewait=60) + self.start_nodes() def run_test(self): tmpdir = self.options.tmpdir # generate 20 addresses to compare against the dump test_addr_count = 20 addrs = [] for i in range(0, test_addr_count): addr = self.nodes[0].getnewaddress() vaddr = self.nodes[0].validateaddress( addr) # required to get hd keypath addrs.append(vaddr) # Should be a no-op: self.nodes[0].keypoolrefill() # dump unencrypted wallet self.nodes[0].dumpwallet(tmpdir + "/node0/wallet.unencrypted.dump") found_addr, found_addr_chg, found_addr_rsv, hd_master_addr_unenc = \ read_dump(tmpdir + "/node0/wallet.unencrypted.dump", addrs, None) assert_equal(found_addr, test_addr_count) # all keys must be in the dump assert_equal(found_addr_chg, 50) # 50 blocks were mined # 90 keys plus 100% internal keys assert_equal(found_addr_rsv, 90 * 2) # encrypt wallet, restart, unlock and dump self.nodes[0].node_encrypt_wallet('test') - self.nodes[0] = self.start_node( - 0, self.options.tmpdir, self.extra_args[0]) + self.start_node(0) self.nodes[0].walletpassphrase('test', 10) # Should be a no-op: self.nodes[0].keypoolrefill() self.nodes[0].dumpwallet(tmpdir + "/node0/wallet.encrypted.dump") found_addr, found_addr_chg, found_addr_rsv, hd_master_addr_enc = \ read_dump( tmpdir + "/node0/wallet.encrypted.dump", addrs, hd_master_addr_unenc) assert_equal(found_addr, test_addr_count) # old reserve keys are marked as change now assert_equal(found_addr_chg, 90 * 2 + 50) assert_equal(found_addr_rsv, 90 * 2) if __name__ == '__main__': WalletDumpTest().main() diff --git a/test/functional/wallet-encryption.py b/test/functional/wallet-encryption.py index b900a9c1d..0960c923b 100755 --- a/test/functional/wallet-encryption.py +++ b/test/functional/wallet-encryption.py @@ -1,70 +1,68 @@ #!/usr/bin/env
python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test Wallet encryption""" import time from test_framework.test_framework import BitcoinTestFramework, BITCOIND_PROC_WAIT_TIMEOUT from test_framework.util import ( assert_equal, assert_raises_jsonrpc, ) class WalletEncryptionTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 1 def run_test(self): passphrase = "WalletPassphrase" passphrase2 = "SecondWalletPassphrase" # Make sure the wallet isn't encrypted first address = self.nodes[0].getnewaddress() privkey = self.nodes[0].dumpprivkey(address) assert_equal(privkey[:1], "c") assert_equal(len(privkey), 52) # Encrypt the wallet self.nodes[0].node_encrypt_wallet(passphrase) - self.nodes[0] = self.start_node(0, self.options.tmpdir) + self.start_node(0) # Test that the wallet is encrypted assert_raises_jsonrpc(-13, "Please enter the wallet passphrase with walletpassphrase first", self.nodes[0].dumpprivkey, address) # Check that walletpassphrase works self.nodes[0].walletpassphrase(passphrase, 2) assert_equal(privkey, self.nodes[0].dumpprivkey(address)) # Check that the timeout is right time.sleep(2) assert_raises_jsonrpc(-13, "Please enter the wallet passphrase with walletpassphrase first", self.nodes[0].dumpprivkey, address) # Test wrong passphrase assert_raises_jsonrpc(-14, "wallet passphrase entered was incorrect", self.nodes[0].walletpassphrase, passphrase + "wrong", 10) # Test walletlock self.nodes[0].walletpassphrase(passphrase, 84600) assert_equal(privkey, self.nodes[0].dumpprivkey(address)) self.nodes[0].walletlock() assert_raises_jsonrpc(-13, "Please enter the wallet passphrase with walletpassphrase first", self.nodes[0].dumpprivkey, address) # Test passphrase changes self.nodes[0].walletpassphrasechange(passphrase, passphrase2) assert_raises_jsonrpc(-14, "wallet passphrase entered was incorrect", self.nodes[0].walletpassphrase, passphrase, 10) self.nodes[0].walletpassphrase(passphrase2, 10) assert_equal(privkey, self.nodes[0].dumpprivkey(address)) if __name__ == '__main__': WalletEncryptionTest().main() diff --git a/test/functional/wallet-hd.py b/test/functional/wallet-hd.py index 6031adade..0f978c55f 100755 --- a/test/functional/wallet-hd.py +++ b/test/functional/wallet-hd.py @@ -1,117 +1,112 @@ #!/usr/bin/env python3 # Copyright (c) 2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. 
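# [Editor's note] The keypath assertions in this test follow an
# m/0'/<chain>'/<index>' layout: chain 0 for addresses handed out by
# getnewaddress, chain 1 for internal change from getrawchangeaddress. A
# minimal runnable sketch of that string format, for reference only;
# hd_keypath is a hypothetical helper, not part of the test framework:
def hd_keypath(chain, index):
    # chain 0 = external addresses, chain 1 = internal change
    return "m/0'/%d'/%d'" % (chain, index)

assert hd_keypath(0, 0) == "m/0'/0'/0'"  # first external key, asserted below
assert hd_keypath(1, 1) == "m/0'/1'/1'"  # second internal (change) key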
from test_framework.test_framework import BitcoinTestFramework from test_framework.util import ( assert_equal, connect_nodes_bi, ) import shutil class WalletHDTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 self.extra_args = [['-usehd=0'], ['-usehd=1', '-keypool=0']] def run_test(self): tmpdir = self.options.tmpdir # Make sure we can't switch off usehd after wallet creation self.stop_node(1) self.assert_start_raises_init_error( - 1, self.options.tmpdir, ['-usehd=0'], 'already existing HD wallet') - self.nodes[1] = self.start_node( - 1, self.options.tmpdir, self.extra_args[1]) + 1, ['-usehd=0'], 'already existing HD wallet') + self.start_node(1) connect_nodes_bi(self.nodes, 0, 1) # Make sure we use hd, keep masterkeyid masterkeyid = self.nodes[1].getwalletinfo()['hdmasterkeyid'] assert_equal(len(masterkeyid), 40) # create an internal key change_addr = self.nodes[1].getrawchangeaddress() change_addrV = self.nodes[1].validateaddress(change_addr) # first internal child key assert_equal(change_addrV["hdkeypath"], "m/0'/1'/0'") # Import a non-HD private key in the HD wallet non_hd_add = self.nodes[0].getnewaddress() self.nodes[1].importprivkey(self.nodes[0].dumpprivkey(non_hd_add)) # This should be enough to keep the master key and the non-HD key self.nodes[1].backupwallet(tmpdir + "/hd.bak") # self.nodes[1].dumpwallet(tmpdir + "/hd.dump") # Derive some HD addresses and remember the last # Also send funds to each address self.nodes[0].generate(101) hd_add = None num_hd_adds = 300 for i in range(num_hd_adds): hd_add = self.nodes[1].getnewaddress() hd_info = self.nodes[1].validateaddress(hd_add) assert_equal(hd_info["hdkeypath"], "m/0'/0'/" + str(i) + "'") assert_equal(hd_info["hdmasterkeyid"], masterkeyid) self.nodes[0].sendtoaddress(hd_add, 1) self.nodes[0].generate(1) self.nodes[0].sendtoaddress(non_hd_add, 1) self.nodes[0].generate(1) # create an internal key (again) change_addr = self.nodes[1].getrawchangeaddress() change_addrV = self.nodes[1].validateaddress(change_addr) # second internal child key assert_equal(change_addrV["hdkeypath"], "m/0'/1'/1'") self.sync_all() assert_equal(self.nodes[1].getbalance(), num_hd_adds + 1) self.log.info("Restore backup ...") self.stop_node(1) # we need to delete the complete regtest directory # otherwise node1 would auto-recover all funds and flag the keypool keys as used shutil.rmtree(tmpdir + "/node1/regtest/blocks") shutil.rmtree(tmpdir + "/node1/regtest/chainstate") shutil.copyfile(tmpdir + "/hd.bak", tmpdir + "/node1/regtest/wallet.dat") - self.nodes[1] = self.start_node( - 1, self.options.tmpdir, self.extra_args[1]) + self.start_node(1) # Assert that derivation is deterministic hd_add_2 = None for _ in range(num_hd_adds): hd_add_2 = self.nodes[1].getnewaddress() hd_info_2 = self.nodes[1].validateaddress(hd_add_2) assert_equal(hd_info_2["hdkeypath"], "m/0'/0'/" + str(_) + "'") assert_equal(hd_info_2["hdmasterkeyid"], masterkeyid) assert_equal(hd_add, hd_add_2) connect_nodes_bi(self.nodes, 0, 1) self.sync_all() # Needs rescan self.stop_node(1) - self.nodes[1] = self.start_node( - 1, self.options.tmpdir, self.extra_args[1] + ['-rescan']) + self.start_node(1, extra_args=self.extra_args[1] + ['-rescan']) assert_equal(self.nodes[1].getbalance(), num_hd_adds + 1) # send a tx and make sure it's using the internal chain for the change output txid = self.nodes[1].sendtoaddress(self.nodes[0].getnewaddress(), 1) outs = self.nodes[1].decoderawtransaction(
self.nodes[1].gettransaction(txid)['hex'])['vout'] keypath = "" for out in outs: if out['value'] != 1: keypath = self.nodes[1].validateaddress( out['scriptPubKey']['addresses'][0])['hdkeypath'] assert_equal(keypath[0:7], "m/0'/1'") if __name__ == '__main__': WalletHDTest().main() diff --git a/test/functional/wallet.py b/test/functional/wallet.py index 8a14fff8e..9d2f5cc05 100755 --- a/test/functional/wallet.py +++ b/test/functional/wallet.py @@ -1,435 +1,436 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * -class WalletTest (BitcoinTestFramework): - - def check_fee_amount(self, curr_balance, balance_with_fee, fee_per_byte, tx_size): - """Return curr_balance after asserting the fee was in range""" - fee = balance_with_fee - curr_balance - assert_fee_amount(fee, tx_size, fee_per_byte * 1000) - return curr_balance - - def __init__(self): - super().__init__() - self.setup_clean_chain = True +class WalletTest(BitcoinTestFramework): + def set_test_params(self): self.num_nodes = 4 - self.extra_args = [['-usehd={:d}'.format(i % 2 == 0)] - for i in range(4)] + self.setup_clean_chain = True + self.extra_args = [ + ['-usehd={:d}'.format(i % 2 == 0)] for i in range(4)] def setup_network(self): - self.nodes = self.start_nodes( - 3, self.options.tmpdir, self.extra_args[:3]) + self.add_nodes(4, self.extra_args) + self.start_node(0) + self.start_node(1) + self.start_node(2) connect_nodes_bi(self.nodes, 0, 1) connect_nodes_bi(self.nodes, 1, 2) connect_nodes_bi(self.nodes, 0, 2) - self.sync_all() + self.sync_all([self.nodes[0:3]]) - def run_test(self): + def check_fee_amount(self, curr_balance, balance_with_fee, fee_per_byte, tx_size): + """Return curr_balance after asserting the fee was in range""" + fee = balance_with_fee - curr_balance + assert_fee_amount(fee, tx_size, fee_per_byte * 1000) + return curr_balance + def run_test(self): # Check that there's no UTXO on any of the nodes assert_equal(len(self.nodes[0].listunspent()), 0) assert_equal(len(self.nodes[1].listunspent()), 0) assert_equal(len(self.nodes[2].listunspent()), 0) self.log.info("Mining blocks...") self.nodes[0].generate(1) walletinfo = self.nodes[0].getwalletinfo() assert_equal(walletinfo['immature_balance'], 50) assert_equal(walletinfo['balance'], 0) - self.sync_all() + self.sync_all([self.nodes[0:3]]) self.nodes[1].generate(101) - self.sync_all() + self.sync_all([self.nodes[0:3]]) assert_equal(self.nodes[0].getbalance(), 50) assert_equal(self.nodes[1].getbalance(), 50) assert_equal(self.nodes[2].getbalance(), 0) # Check that only first and second nodes have UTXOs assert_equal(len(self.nodes[0].listunspent()), 1) assert_equal(len(self.nodes[1].listunspent()), 1) assert_equal(len(self.nodes[2].listunspent()), 0) # Send 21 BTC from 0 to 2 using sendtoaddress call. self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 11) self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 10) walletinfo = self.nodes[0].getwalletinfo() assert_equal(walletinfo['immature_balance'], 0) # Have node0 mine a block, thus it will collect its own fee.
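# [Editor's note] A sketch of the balance arithmetic this comment sets up,
# assuming the 50 BTC regtest subsidy; the concrete fee value is a stand-in,
# only the algebra matters:
from decimal import Decimal
subsidy = Decimal(50)
sent = Decimal(21)
fees = Decimal("0.0001")  # hypothetical total fee of the two sends above
# node0 paid the fees, then mines the block containing its own transactions
# and earns them back, so the fees cancel once the reward matures:
assert (subsidy - sent - fees) + (subsidy + fees) == 2 * subsidy - sent  # == 79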
self.nodes[0].generate(1) - self.sync_all() + self.sync_all([self.nodes[0:3]]) # Exercise locking of unspent outputs unspent_0 = self.nodes[2].listunspent()[0] unspent_0 = {"txid": unspent_0["txid"], "vout": unspent_0["vout"]} self.nodes[2].lockunspent(False, [unspent_0]) assert_raises_jsonrpc(-4, "Insufficient funds", self.nodes[2].sendtoaddress, self.nodes[2].getnewaddress(), 20) assert_equal([unspent_0], self.nodes[2].listlockunspent()) self.nodes[2].lockunspent(True, [unspent_0]) assert_equal(len(self.nodes[2].listlockunspent()), 0) # Have node1 generate 100 blocks (so node0 can recover the fee) self.nodes[1].generate(100) - self.sync_all() + self.sync_all([self.nodes[0:3]]) # node0 should end up with 100 btc in block rewards plus fees, but # minus the 21 plus fees sent to node2 assert_equal(self.nodes[0].getbalance(), 100 - 21) assert_equal(self.nodes[2].getbalance(), 21) # Node0 should have two unspent outputs. # Create a couple of transactions to send them to node2, submit them through # node1, and make sure both node0 and node2 pick them up properly: node0utxos = self.nodes[0].listunspent(1) assert_equal(len(node0utxos), 2) # create both transactions txns_to_send = [] for utxo in node0utxos: inputs = [] outputs = {} inputs.append({"txid": utxo["txid"], "vout": utxo["vout"]}) outputs[self.nodes[2].getnewaddress("from1")] = utxo["amount"] - 3 raw_tx = self.nodes[0].createrawtransaction(inputs, outputs) txns_to_send.append( self.nodes[0].signrawtransaction(raw_tx, None, None, "ALL|FORKID")) # Have node 1 (miner) send the transactions self.nodes[1].sendrawtransaction(txns_to_send[0]["hex"], True) self.nodes[1].sendrawtransaction(txns_to_send[1]["hex"], True) # Have node1 mine a block to confirm transactions: self.nodes[1].generate(1) - self.sync_all() + self.sync_all([self.nodes[0:3]]) assert_equal(self.nodes[0].getbalance(), 0) assert_equal(self.nodes[2].getbalance(), 94) assert_equal(self.nodes[2].getbalance("from1"), 94 - 21) # Send 10 BTC normal address = self.nodes[0].getnewaddress("test") fee_per_byte = Decimal('0.001') / 1000 self.nodes[2].settxfee(fee_per_byte * 1000) txid = self.nodes[2].sendtoaddress(address, 10, "", "", False) self.nodes[2].generate(1) - self.sync_all() + self.sync_all([self.nodes[0:3]]) node_2_bal = self.check_fee_amount(self.nodes[2].getbalance(), Decimal( '84'), fee_per_byte, count_bytes(self.nodes[2].getrawtransaction(txid))) assert_equal(self.nodes[0].getbalance(), Decimal('10')) # Send 10 BTC with subtract fee from amount txid = self.nodes[2].sendtoaddress(address, 10, "", "", True) self.nodes[2].generate(1) - self.sync_all() + self.sync_all([self.nodes[0:3]]) node_2_bal -= Decimal('10') assert_equal(self.nodes[2].getbalance(), node_2_bal) node_0_bal = self.check_fee_amount(self.nodes[0].getbalance(), Decimal( '20'), fee_per_byte, count_bytes(self.nodes[2].getrawtransaction(txid))) # Sendmany 10 BTC txid = self.nodes[2].sendmany('from1', {address: 10}, 0, "", []) self.nodes[2].generate(1) - self.sync_all() + self.sync_all([self.nodes[0:3]]) node_0_bal += Decimal('10') node_2_bal = self.check_fee_amount(self.nodes[2].getbalance(), node_2_bal - Decimal( '10'), fee_per_byte, count_bytes(self.nodes[2].getrawtransaction(txid))) assert_equal(self.nodes[0].getbalance(), node_0_bal) # Sendmany 10 BTC with subtract fee from amount txid = self.nodes[2].sendmany('from1', {address: 10}, 0, "", [address]) self.nodes[2].generate(1) - self.sync_all() + self.sync_all([self.nodes[0:3]]) node_2_bal -= Decimal('10') assert_equal(self.nodes[2].getbalance(), node_2_bal) 
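# [Editor's note] check_fee_amount (defined above) derives the fee from the
# balance delta and delegates the range check to assert_fee_amount. A
# self-contained sketch of the assumed semantics, with illustrative numbers;
# fee_in_range is a hypothetical stand-in, not the framework function:
from decimal import Decimal

def fee_in_range(fee, tx_size, fee_per_byte, slack_bytes=2):
    # the fee must cover tx_size bytes at the configured rate, without
    # overshooting by more than a few bytes' worth of estimation error
    return tx_size * fee_per_byte <= fee <= (tx_size + slack_bytes) * fee_per_byte

assert fee_in_range(Decimal("0.00025"), 250, Decimal("0.000001"))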
node_0_bal = self.check_fee_amount(self.nodes[0].getbalance(), node_0_bal + Decimal( '10'), fee_per_byte, count_bytes(self.nodes[2].getrawtransaction(txid))) # Test ResendWalletTransactions: # Create a couple of transactions, then start up a fourth # node (nodes[3]) and ask nodes[0] to rebroadcast. # EXPECT: nodes[3] should have those transactions in its mempool. txid1 = self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 1) txid2 = self.nodes[1].sendtoaddress(self.nodes[0].getnewaddress(), 1) - sync_mempools(self.nodes) + sync_mempools(self.nodes[0:2]) - self.nodes.append(self.start_node( - 3, self.options.tmpdir, self.extra_args[3])) + self.start_node(3) connect_nodes_bi(self.nodes, 0, 3) sync_blocks(self.nodes) relayed = self.nodes[0].resendwallettransactions() assert_equal(set(relayed), {txid1, txid2}) sync_mempools(self.nodes) assert(txid1 in self.nodes[3].getrawmempool()) # Exercise balance rpcs assert_equal(self.nodes[0].getwalletinfo()["unconfirmed_balance"], 1) assert_equal(self.nodes[0].getunconfirmedbalance(), 1) # check if we can list zero value tx as available coins # 1. create rawtx # 2. hex-changed one output to 0.0 # 3. sign and send # 4. check if recipient (node0) can list the zero value tx usp = self.nodes[1].listunspent() inputs = [{"txid": usp[0]['txid'], "vout":usp[0]['vout']}] outputs = { self.nodes[1].getnewaddress(): 49.998, self.nodes[0].getnewaddress(): 11.11} rawTx = self.nodes[1].createrawtransaction(inputs, outputs).replace( "c0833842", "00000000") # replace 11.11 with 0.0 (int32) decRawTx = self.nodes[1].decoderawtransaction(rawTx) signedRawTx = self.nodes[ 1].signrawtransaction(rawTx, None, None, "ALL|FORKID") decRawTx = self.nodes[1].decoderawtransaction(signedRawTx['hex']) zeroValueTxid = decRawTx['txid'] sendResp = self.nodes[1].sendrawtransaction(signedRawTx['hex']) self.sync_all() self.nodes[1].generate(1) # mine a block self.sync_all() unspentTxs = self.nodes[ 0].listunspent() # zero value tx must be in the listunspent output found = False for uTx in unspentTxs: if uTx['txid'] == zeroValueTxid: found = True assert_equal(uTx['amount'], Decimal('0')) assert(found) # do some -walletbroadcast tests self.stop_nodes() - self.nodes = self.start_nodes(3, self.options.tmpdir, [ - ["-walletbroadcast=0"], ["-walletbroadcast=0"], ["-walletbroadcast=0"]]) + self.start_node(0, ["-walletbroadcast=0"]) + self.start_node(1, ["-walletbroadcast=0"]) + self.start_node(2, ["-walletbroadcast=0"]) connect_nodes_bi(self.nodes, 0, 1) connect_nodes_bi(self.nodes, 1, 2) connect_nodes_bi(self.nodes, 0, 2) - self.sync_all() + self.sync_all([self.nodes[0:3]]) txIdNotBroadcasted = self.nodes[0].sendtoaddress( self.nodes[2].getnewaddress(), 2) txObjNotBroadcasted = self.nodes[0].gettransaction(txIdNotBroadcasted) - # mine a block, tx should not be in there - self.nodes[1].generate(1) - self.sync_all() + self.nodes[1].generate(1) # mine a block, tx should not be in there + self.sync_all([self.nodes[0:3]]) # should not be changed because tx was not broadcast assert_equal(self.nodes[2].getbalance(), node_2_bal) # now broadcast from another node, mine a block, sync, and check the # balance self.nodes[1].sendrawtransaction(txObjNotBroadcasted['hex']) self.nodes[1].generate(1) - self.sync_all() + self.sync_all([self.nodes[0:3]]) node_2_bal += 2 txObjNotBroadcasted = self.nodes[0].gettransaction(txIdNotBroadcasted) assert_equal(self.nodes[2].getbalance(), node_2_bal) # create another tx txIdNotBroadcasted = self.nodes[0].sendtoaddress( self.nodes[2].getnewaddress(), 2) # restart the
nodes with -walletbroadcast=1 self.stop_nodes() - self.nodes = self.start_nodes(3, self.options.tmpdir) + self.start_node(0) + self.start_node(1) + self.start_node(2) connect_nodes_bi(self.nodes, 0, 1) connect_nodes_bi(self.nodes, 1, 2) connect_nodes_bi(self.nodes, 0, 2) - sync_blocks(self.nodes) + sync_blocks(self.nodes[0:3]) self.nodes[0].generate(1) - sync_blocks(self.nodes) + sync_blocks(self.nodes[0:3]) node_2_bal += 2 # tx should be added to balance because after restarting the nodes tx # should be broadcast assert_equal(self.nodes[2].getbalance(), node_2_bal) # send a tx with value in a string (PR#6380 +) txId = self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), "2") txObj = self.nodes[0].gettransaction(txId) assert_equal(txObj['amount'], Decimal('-2')) txId = self.nodes[0].sendtoaddress( self.nodes[2].getnewaddress(), "0.0001") txObj = self.nodes[0].gettransaction(txId) assert_equal(txObj['amount'], Decimal('-0.0001')) # check if JSON parser can handle scientific notation in strings txId = self.nodes[0].sendtoaddress( self.nodes[2].getnewaddress(), "1e-4") txObj = self.nodes[0].gettransaction(txId) assert_equal(txObj['amount'], Decimal('-0.0001')) # This will raise an exception because the amount type is wrong assert_raises_jsonrpc(-3, "Invalid amount", self.nodes[0].sendtoaddress, self.nodes[2].getnewaddress(), "1f-4") # This will raise an exception since generate does not accept a string assert_raises_jsonrpc(-1, "not an integer", self.nodes[0].generate, "2") # Import address and private key to check correct behavior of spendable unspents # 1. Send some coins to generate new UTXO address_to_import = self.nodes[2].getnewaddress() txid = self.nodes[0].sendtoaddress(address_to_import, 1) self.nodes[0].generate(1) - self.sync_all() + self.sync_all([self.nodes[0:3]]) # 2. Import address from node2 to node1 self.nodes[1].importaddress(address_to_import) # 3. Validate that the imported address is watch-only on node1 assert(self.nodes[1].validateaddress(address_to_import)["iswatchonly"]) # 4. Check that the unspents after import are not spendable assert_array_result(self.nodes[1].listunspent(), {"address": address_to_import}, {"spendable": False}) # 5. Import private key of the previously imported address on node1 priv_key = self.nodes[2].dumpprivkey(address_to_import) self.nodes[1].importprivkey(priv_key) # 6. Check that the unspents are now spendable on node1 assert_array_result(self.nodes[1].listunspent(), {"address": address_to_import}, {"spendable": True}) # Mine a block from node0 to an address from node1 cbAddr = self.nodes[1].getnewaddress() blkHash = self.nodes[0].generatetoaddress(1, cbAddr)[0] cbTxId = self.nodes[0].getblock(blkHash)['tx'][0] - self.sync_all() + self.sync_all([self.nodes[0:3]]) # Check that the txid and balance are found by node1 self.nodes[1].gettransaction(cbTxId) # check if wallet or blockchain maintenance changes the balance - self.sync_all() + self.sync_all([self.nodes[0:3]]) blocks = self.nodes[0].generate(2) - self.sync_all() + self.sync_all([self.nodes[0:3]]) balance_nodes = [self.nodes[i].getbalance() for i in range(3)] block_count = self.nodes[0].getblockcount() # Check modes: # - True: unicode escaped as \u....
# - False: unicode directly as UTF-8 for mode in [True, False]: self.nodes[0].ensure_ascii = mode # unicode check: Basic Multilingual Plane, Supplementary Plane # respectively for s in [u'рыба', u'𝅘𝅥𝅯']: addr = self.nodes[0].getaccountaddress(s) label = self.nodes[0].getaccount(addr) assert_equal(label, s) assert(s in self.nodes[0].listaccounts().keys()) self.nodes[0].ensure_ascii = True # restore to default # maintenance tests maintenance = [ '-rescan', '-reindex', '-zapwallettxes=1', '-zapwallettxes=2', # disabled until issue is fixed: https://github.com/bitcoin/bitcoin/issues/7463 # '-salvagewallet', ] chainlimit = 6 for m in maintenance: self.log.info("check " + m) self.stop_nodes() # set lower ancestor limit for later - self.nodes = self.start_nodes(3, self.options.tmpdir, [ - [m, "-limitancestorcount=" + str(chainlimit)]] * 3) + self.start_node(0, [m, "-limitancestorcount=" + str(chainlimit)]) + self.start_node(1, [m, "-limitancestorcount=" + str(chainlimit)]) + self.start_node(2, [m, "-limitancestorcount=" + str(chainlimit)]) while m == '-reindex' and [block_count] * 3 != [self.nodes[i].getblockcount() for i in range(3)]: # reindex will leave rpc warm up "early"; wait for it to finish time.sleep(0.1) assert_equal( balance_nodes, [self.nodes[i].getbalance() for i in range(3)]) # Exercise listsinceblock with the last two blocks coinbase_tx_1 = self.nodes[0].listsinceblock(blocks[0]) assert_equal(coinbase_tx_1["lastblock"], blocks[1]) assert_equal(len(coinbase_tx_1["transactions"]), 1) assert_equal(coinbase_tx_1["transactions"][0]["blockhash"], blocks[1]) assert_equal( len(self.nodes[0].listsinceblock(blocks[1])["transactions"]), 0) # ==Check that wallet prefers to use coins that don't exceed mempool limits # Get all non-zero utxos together chain_addrs = [ self.nodes[0].getnewaddress(), self.nodes[0].getnewaddress()] singletxid = self.nodes[0].sendtoaddress( chain_addrs[0], self.nodes[0].getbalance(), "", "", True) self.nodes[0].generate(1) node0_balance = self.nodes[0].getbalance() # Split into two chains rawtx = self.nodes[0].createrawtransaction([{"txid": singletxid, "vout": 0}], { chain_addrs[0]: node0_balance / 2 - Decimal('0.01'), chain_addrs[1]: node0_balance / 2 - Decimal('0.01')}) signedtx = self.nodes[0].signrawtransaction( rawtx, None, None, "ALL|FORKID") singletxid = self.nodes[0].sendrawtransaction(signedtx["hex"]) self.nodes[0].generate(1) # Make a long chain of unconfirmed payments without hitting mempool limit # Each tx we make leaves only one output of change on a chain 1 longer # Since the amount to send is always much less than the outputs, we only ever need one output # So we should be able to generate exactly chainlimit txs for each # original output sending_addr = self.nodes[1].getnewaddress() txid_list = [] for i in range(chainlimit * 2): txid_list.append( self.nodes[0].sendtoaddress(sending_addr, Decimal('0.0001'))) assert_equal(self.nodes[0].getmempoolinfo()['size'], chainlimit * 2) assert_equal(len(txid_list), chainlimit * 2) # Without walletrejectlongchains, we will still generate a txid # The tx will be stored in the wallet but not accepted to the mempool extra_txid = self.nodes[0].sendtoaddress( sending_addr, Decimal('0.0001')) assert(extra_txid not in self.nodes[0].getrawmempool()) assert(extra_txid in [tx["txid"] for tx in self.nodes[0].listtransactions()]) self.nodes[0].abandontransaction(extra_txid) total_txs = len(self.nodes[0].listtransactions("*", 99999)) # Try with walletrejectlongchains # Double chain limit but require combining inputs, so we pass
SelectCoinsMinConf self.stop_node(0) - self.nodes[0] = self.start_node(0, self.options.tmpdir, [ - "-walletrejectlongchains", "-limitancestorcount=" + str(2 * chainlimit)]) + self.start_node(0, extra_args=[ + "-walletrejectlongchains", "-limitancestorcount=" + str(2 * chainlimit)]) # wait for loadmempool timeout = 10 while (timeout > 0 and len(self.nodes[0].getrawmempool()) < chainlimit * 2): time.sleep(0.5) timeout -= 0.5 assert_equal(len(self.nodes[0].getrawmempool()), chainlimit * 2) node0_balance = self.nodes[0].getbalance() # With walletrejectlongchains we will not create the tx and store it in our wallet. assert_raises_jsonrpc(-4, "Transaction has too long of a mempool chain", self.nodes[0].sendtoaddress, sending_addr, node0_balance - Decimal('0.01')) # Verify nothing new in wallet assert_equal( total_txs, len(self.nodes[0].listtransactions("*", 99999))) if __name__ == '__main__': WalletTest().main() diff --git a/test/functional/walletbackup.py b/test/functional/walletbackup.py index 4b2c850d6..3f073f823 100755 --- a/test/functional/walletbackup.py +++ b/test/functional/walletbackup.py @@ -1,216 +1,214 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """ Exercise the wallet backup code. Ported from walletbackup.sh. Test case is: 4 nodes. 1 2 and 3 send transactions between each other, fourth node is a miner. 1 2 3 each mine a block to start, then Miner creates 100 blocks so 1 2 3 each have 50 mature coins to spend. Then 5 iterations of 1/2/3 sending coins amongst themselves to get transactions in the wallets, and the miner mining one block. Wallets are backed up using dumpwallet/backupwallet. Then 5 more iterations of transactions and mining a block. Miner then generates 101 more blocks, so any transaction fees paid mature. Sanity check: Sum(1,2,3,4 balances) == 114*50 1/2/3 are shut down, and their wallets erased. Then restore using wallet.dat backup. And confirm 1/2/3/4 balances are same as before. Shutdown again, restore using importwallet, and confirm again balances are correct. """ from test_framework.test_framework import BitcoinTestFramework from test_framework.util import * from random import randint import shutil class WalletBackupTest(BitcoinTestFramework): - - def __init__(self): - super().__init__() - self.setup_clean_chain = True + def set_test_params(self): self.num_nodes = 4 + self.setup_clean_chain = True # nodes 1, 2, 3 are spenders, let's give them a keypool=100 self.extra_args = [ ["-keypool=100"], ["-keypool=100"], ["-keypool=100"], []] # This mirrors how the network was set up in the bash test def setup_network(self): self.setup_nodes() connect_nodes(self.nodes[0], 3) connect_nodes(self.nodes[1], 3) connect_nodes(self.nodes[2], 3) connect_nodes(self.nodes[2], 0) self.sync_all() def one_send(self, from_node, to_address): if (randint(1, 2) == 1): amount = Decimal(randint(1, 10)) / Decimal(10) self.nodes[from_node].sendtoaddress(to_address, amount) def do_one_round(self): a0 = self.nodes[0].getnewaddress() a1 = self.nodes[1].getnewaddress() a2 = self.nodes[2].getnewaddress() self.one_send(0, a1) self.one_send(0, a2) self.one_send(1, a0) self.one_send(1, a2) self.one_send(2, a0) self.one_send(2, a1) # Have the miner (node3) mine a block. # Must sync mempools before mining. sync_mempools(self.nodes) self.nodes[3].generate(1) sync_blocks(self.nodes) # As above, this mirrors the original bash test.
def start_three(self): - self.nodes[0] = self.start_node(0, self.options.tmpdir) - self.nodes[1] = self.start_node(1, self.options.tmpdir) - self.nodes[2] = self.start_node(2, self.options.tmpdir) + self.start_node(0) + self.start_node(1) + self.start_node(2) connect_nodes(self.nodes[0], 3) connect_nodes(self.nodes[1], 3) connect_nodes(self.nodes[2], 3) connect_nodes(self.nodes[2], 0) def stop_three(self): self.stop_node(0) self.stop_node(1) self.stop_node(2) def erase_three(self): os.remove(self.options.tmpdir + "/node0/regtest/wallet.dat") os.remove(self.options.tmpdir + "/node1/regtest/wallet.dat") os.remove(self.options.tmpdir + "/node2/regtest/wallet.dat") def run_test(self): self.log.info("Generating initial blockchain") self.nodes[0].generate(1) sync_blocks(self.nodes) self.nodes[1].generate(1) sync_blocks(self.nodes) self.nodes[2].generate(1) sync_blocks(self.nodes) self.nodes[3].generate(100) sync_blocks(self.nodes) assert_equal(self.nodes[0].getbalance(), 50) assert_equal(self.nodes[1].getbalance(), 50) assert_equal(self.nodes[2].getbalance(), 50) assert_equal(self.nodes[3].getbalance(), 0) self.log.info("Creating transactions") # Five rounds of sending each other transactions. for i in range(5): self.do_one_round() self.log.info("Backing up") tmpdir = self.options.tmpdir self.nodes[0].backupwallet(tmpdir + "/node0/wallet.bak") self.nodes[0].dumpwallet(tmpdir + "/node0/wallet.dump") self.nodes[1].backupwallet(tmpdir + "/node1/wallet.bak") self.nodes[1].dumpwallet(tmpdir + "/node1/wallet.dump") self.nodes[2].backupwallet(tmpdir + "/node2/wallet.bak") self.nodes[2].dumpwallet(tmpdir + "/node2/wallet.dump") self.log.info("More transactions") for i in range(5): self.do_one_round() # Generate 101 more blocks, so any fees paid mature self.nodes[3].generate(101) self.sync_all() balance0 = self.nodes[0].getbalance() balance1 = self.nodes[1].getbalance() balance2 = self.nodes[2].getbalance() balance3 = self.nodes[3].getbalance() total = balance0 + balance1 + balance2 + balance3 # At this point, there are 214 blocks (103 for setup, then 10 rounds, then 101.) # 114 are mature, so the sum of all wallets should be 114 * 50 = 5700. 
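# [Editor's note] The 114 * 50 figure restated as a runnable sketch; the
# block counts are taken from this file's docstring and run_test above:
setup_blocks = 1 + 1 + 1 + 100  # one block each from nodes 0-2, then 100 from the miner
round_blocks = 5 + 5            # one block per do_one_round() call
maturity_blocks = 101           # mined at the end so transaction fees mature
total_blocks = setup_blocks + round_blocks + maturity_blocks  # 214
mature_coinbases = total_blocks - 100  # coinbases need 100 confirmations -> 114
assert mature_coinbases * 50 == 5700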
assert_equal(total, 5700) # # Test restoring spender wallets from backups # self.log.info("Restoring using wallet.dat") self.stop_three() self.erase_three() # Start node2 with no chain shutil.rmtree(self.options.tmpdir + "/node2/regtest/blocks") shutil.rmtree(self.options.tmpdir + "/node2/regtest/chainstate") # Restore wallets from backup shutil.copyfile( tmpdir + "/node0/wallet.bak", tmpdir + "/node0/regtest/wallet.dat") shutil.copyfile( tmpdir + "/node1/wallet.bak", tmpdir + "/node1/regtest/wallet.dat") shutil.copyfile( tmpdir + "/node2/wallet.bak", tmpdir + "/node2/regtest/wallet.dat") self.log.info("Re-starting nodes") self.start_three() sync_blocks(self.nodes) assert_equal(self.nodes[0].getbalance(), balance0) assert_equal(self.nodes[1].getbalance(), balance1) assert_equal(self.nodes[2].getbalance(), balance2) self.log.info("Restoring using dumped wallet") self.stop_three() self.erase_three() # start node2 with no chain shutil.rmtree(self.options.tmpdir + "/node2/regtest/blocks") shutil.rmtree(self.options.tmpdir + "/node2/regtest/chainstate") self.start_three() assert_equal(self.nodes[0].getbalance(), 0) assert_equal(self.nodes[1].getbalance(), 0) assert_equal(self.nodes[2].getbalance(), 0) self.nodes[0].importwallet(tmpdir + "/node0/wallet.dump") self.nodes[1].importwallet(tmpdir + "/node1/wallet.dump") self.nodes[2].importwallet(tmpdir + "/node2/wallet.dump") sync_blocks(self.nodes) assert_equal(self.nodes[0].getbalance(), balance0) assert_equal(self.nodes[1].getbalance(), balance1) assert_equal(self.nodes[2].getbalance(), balance2) # Backup to source wallet file must fail sourcePaths = [ tmpdir + "/node0/regtest/wallet.dat", tmpdir + "/node0/./regtest/wallet.dat", tmpdir + "/node0/regtest/", tmpdir + "/node0/regtest"] for sourcePath in sourcePaths: assert_raises_jsonrpc(-4, "backup failed", self.nodes[0].backupwallet, sourcePath) if __name__ == '__main__': WalletBackupTest().main() diff --git a/test/functional/zapwallettxes.py b/test/functional/zapwallettxes.py index b5dfbecc8..677b3860d 100755 --- a/test/functional/zapwallettxes.py +++ b/test/functional/zapwallettxes.py @@ -1,81 +1,77 @@ #!/usr/bin/env python3 # Copyright (c) 2014-2016 The Bitcoin Core developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test the zapwallettxes functionality. - start two bitcoind nodes - create two transactions on node 0 - one is confirmed and one is unconfirmed. - restart node 0 and verify that both the confirmed and the unconfirmed transactions are still available. - restart node 0 with zapwallettxes and persistmempool, and verify that both the confirmed and the unconfirmed transactions are still available. - restart node 0 with just zapwallettxes and verify that the confirmed transactions are still available, but that the unconfirmed transaction has been zapped.
""" from test_framework.test_framework import BitcoinTestFramework from test_framework.util import (assert_equal, assert_raises_jsonrpc, ) class ZapWalletTXesTest (BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.setup_clean_chain = True self.num_nodes = 2 def run_test(self): self.log.info("Mining blocks...") self.nodes[0].generate(1) self.sync_all() self.nodes[1].generate(100) self.sync_all() # This transaction will be confirmed txid1 = self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 10) self.nodes[0].generate(1) self.sync_all() # This transaction will not be confirmed txid2 = self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 20) # Confirmed and unconfirmed transactions are now in the wallet. assert_equal(self.nodes[0].gettransaction(txid1)['txid'], txid1) assert_equal(self.nodes[0].gettransaction(txid2)['txid'], txid2) # Stop-start node0. Both confirmed and unconfirmed transactions remain in the wallet. self.stop_node(0) - self.nodes[0] = self.start_node(0, self.options.tmpdir) + self.start_node(0) assert_equal(self.nodes[0].gettransaction(txid1)['txid'], txid1) assert_equal(self.nodes[0].gettransaction(txid2)['txid'], txid2) # Stop node0 and restart with zapwallettxes and persistmempool. The unconfirmed # transaction is zapped from the wallet, but is re-added when the mempool is reloaded. self.stop_node(0) - self.nodes[0] = self.start_node(0, self.options.tmpdir, [ - "-persistmempool=1", "-zapwallettxes=2"]) + self.start_node(0, ["-persistmempool=1", "-zapwallettxes=2"]) assert_equal(self.nodes[0].gettransaction(txid1)['txid'], txid1) assert_equal(self.nodes[0].gettransaction(txid2)['txid'], txid2) # Stop node0 and restart with zapwallettxes, but not persistmempool. # The unconfirmed transaction is zapped and is no longer in the wallet. self.stop_node(0) - self.nodes[0] = self.start_node( - 0, self.options.tmpdir, ["-zapwallettxes=2"]) + self.start_node(0, ["-zapwallettxes=2"]) # tx1 is still be available because it was confirmed assert_equal(self.nodes[0].gettransaction(txid1)['txid'], txid1) # This will raise an exception because the unconfirmed transaction has been zapped assert_raises_jsonrpc(-5, 'Invalid or non-wallet transaction id', self.nodes[0].gettransaction, txid2) if __name__ == '__main__': ZapWalletTXesTest().main() diff --git a/test/functional/zmq_test.py b/test/functional/zmq_test.py index 93ddce542..e9a420cfd 100755 --- a/test/functional/zmq_test.py +++ b/test/functional/zmq_test.py @@ -1,130 +1,128 @@ #!/usr/bin/env python3 # Copyright (c) 2015-2017 The Bitcoin Core developers # Copyright (c) 2017 The Bitcoin developers # Distributed under the MIT software license, see the accompanying # file COPYING or http://www.opensource.org/licenses/mit-license.php. """Test the ZMQ API.""" import configparser import os import struct from test_framework.test_framework import BitcoinTestFramework, SkipTest from test_framework.util import ( assert_equal, bytes_to_hex_str, ) class ZMQTest (BitcoinTestFramework): - - def __init__(self): - super().__init__() + def set_test_params(self): self.num_nodes = 2 def setup_nodes(self): # Try to import python3-zmq. Skip this test if the import fails. 
try: import zmq except ImportError: raise SkipTest("python3-zmq module not available.") # Check that bitcoin has been built with ZMQ enabled config = configparser.ConfigParser() if not self.options.configfile: self.options.configfile = os.path.dirname( __file__) + "/../config.ini" config.read_file(open(self.options.configfile)) if not config["components"].getboolean("ENABLE_ZMQ"): raise SkipTest("bitcoind has not been built with zmq enabled.") self.zmqContext = zmq.Context() self.zmqSubSocket = self.zmqContext.socket(zmq.SUB) self.zmqSubSocket.set(zmq.RCVTIMEO, 60000) self.zmqSubSocket.setsockopt(zmq.SUBSCRIBE, b"hashblock") self.zmqSubSocket.setsockopt(zmq.SUBSCRIBE, b"hashtx") ip_address = "tcp://127.0.0.1:28332" self.zmqSubSocket.connect(ip_address) - extra_args = [['-zmqpubhashtx=%s' % - ip_address, '-zmqpubhashblock=%s' % ip_address], []] - self.nodes = self.start_nodes( - self.num_nodes, self.options.tmpdir, extra_args) + self.extra_args = [['-zmqpubhashtx=%s' % + ip_address, '-zmqpubhashblock=%s' % ip_address], []] + self.add_nodes(self.num_nodes, self.extra_args) + self.start_nodes() def run_test(self): try: self._zmq_test() finally: # Destroy the zmq context self.log.debug("Destroying zmq context") self.zmqContext.destroy(linger=None) def _zmq_test(self): genhashes = self.nodes[0].generate(1) self.sync_all() self.log.info("Wait for tx") msg = self.zmqSubSocket.recv_multipart() topic = msg[0] assert_equal(topic, b"hashtx") body = msg[1] msgSequence = struct.unpack('