0420f99f429ce2382057e101859067f40de47be0 Create net_peer_connection unit tests (Jon Atack)
4b834f649921aceb44d3e0b5a2ffd7847903f9f7 Allow unit tests to access additional CConnman members (Jon Atack)
34b9ef443bc2655a85c8802edc5d5d48d792a286 net/rpc: Makes CConnman::GetAddedNodeInfo able to return only non-connected address on request (Sergi Delgado Segura)
94e8882d820969ddc83f24f4cbe1515a886da4ea rpc: Prevents adding the same ip more than once when formatted differently (Sergi Delgado Segura)
2574b7e177ef045e64f1dd48cb000640ff5103d3 net/rpc: Check all resolved addresses in ConnectNode rather than just one (Sergi Delgado Segura)

Pull request description:

## Rationale

Currently, `addnode` has a couple of corner cases that allow it to either connect to the same peer more than once, hence wasting outbound connection slots, or add redundant information to `m_added_nodes`, hence making Bitcoin iterate through useless data on a regular basis.

### Connecting to the same node more than once

In general, connecting to the same node more than once is something we should try to prevent. Currently, this is possible via `addnode` in two different ways:

1. Calling `addnode` more than once in a short time period, using two equivalent but distinct addresses
2. Calling `addnode add` using an IP, and `addnode onetry` after with an address that resolves to the same IP

For the former, the issue boils down to `CConnman::ThreadOpenAddedConnections` calling `CConnman::GetAddedNodeInfo` once, and iterating over the result to open connections (`CConnman::OpenNetworkConnection`) in the same loop for all addresses. `CConnman::ConnectNode` only checks a single address, at random, when resolving from a hostname, and uses it to check whether we are already connected to it. An example to test this would be calling:

```
bitcoin-cli addnode "127.0.0.1:port" add
bitcoin-cli addnode "localhost:port" add
```

and checking how it sometimes allows us to perform both connections, and sometimes fails.

The latter boils down to the same issue, but takes advantage of `onetry` bypassing the `CConnman::ThreadOpenAddedConnections` logic and calling `CConnman::OpenNetworkConnection` straightaway. A way to test this would be:

```
bitcoin-cli addnode "127.0.0.1:port" add
bitcoin-cli addnode "localhost:port" onetry
```

### Adding the same peer with two different, yet equivalent, addresses

The current implementation of `addnode` is pretty naive when checking what data is added to `m_added_nodes`. Given that the collection stores strings, the checks at `CConnman::AddNode()` basically check whether the exact provided string is already in the collection. If so, the data is rejected; otherwise, it is accepted. However, IPs can be formatted in several ways that bypass those checks. Two examples would be `127.0.0.1` being equal to `127.1`, and `[::1]` being equal to `[0:0:0:0:0:0:0:1]`. Adding any pair of these will be allowed by the RPC command, and both will be reported as connected by `getaddednodeinfo`, given they map to the same `CService`. This is less severe than the previous issue, since even though both nodes are reported as connected by `getaddednodeinfo`, there is only a single connection to them (as properly reported by `getpeerinfo`). However, this adds redundant data to `m_added_nodes`, which is undesirable.

### Parametrize `CConnman::GetAddedNodeInfo`

Finally, this PR also parametrizes `CConnman::GetAddedNodeInfo` so it returns either all added nodes' info, or only info about the nodes we are **not** connected to.
This method is used both for RPC, in `getaddednodeinfo`, where we report all data to the user (so the former applies), and to check which nodes we are not connected to, in `CConnman::ThreadOpenAddedConnections`, where we currently return more data than needed and then actively filter using `CService.fConnected()`.

ACKs for top commit:
  jonatack: re-ACK 0420f99f429ce2382057e101859067f40de47be0
  kashifs: tACK 0420f99f42
  sr-gi: tACK 0420f99f42
  mzumsande: Tested ACK 0420f99f429ce2382057e101859067f40de47be0

Tree-SHA512: a3a10e748c12d98d439dfb193c75bc8d9486717cda5f41560f5c0ace1baef523d001d5e7eabac9fa466a9159a30bb925cc1327c2d6c4efb89dcaf54e176d1752
This directory contains integration tests that test bitcoind and its utilities in their entirety. It does not contain unit tests, which can be found in /src/test, /src/wallet/test, etc.
This directory contains the following sets of tests:
- fuzz: a runner to execute all fuzz targets from /src/test/fuzz.
- functional: tests that exercise the functionality of bitcoind and bitcoin-qt by interacting with them through the RPC and P2P interfaces.
- util: tests for the utilities (bitcoin-util, bitcoin-tx, ...).
- lint: various static analysis checks.
The util tests are run as part of the `make check` target. The fuzz tests, functional tests and lint scripts can be run as explained in the sections below.
Running tests locally
Before tests can be run locally, Bitcoin Core must be built. See the building instructions for help.
Fuzz tests
See /doc/fuzzing.md
Functional tests
Dependencies and prerequisites
The ZMQ functional test requires a Python ZMQ library. To install it:
- on Unix, run
sudo apt-get install python3-zmq
- on macOS, run
pip3 install pyzmq
On Windows the PYTHONUTF8
environment variable must be set to 1:
set PYTHONUTF8=1
Running the tests
Individual tests can be run by directly calling the test script, e.g.:
test/functional/feature_rbf.py
or can be run through the test_runner harness, e.g.:
test/functional/test_runner.py feature_rbf.py
You can run any combination (incl. duplicates) of tests by calling:
test/functional/test_runner.py <testname1> <testname2> <testname3> ...
Wildcard test names can be passed, if the paths are coherent and the test runner
is called from a bash
shell or similar that does the globbing. For example,
to run all the wallet tests:
test/functional/test_runner.py test/functional/wallet*
functional/test_runner.py functional/wallet* (called from the test/ directory)
test_runner.py wallet* (called from the test/functional/ directory)
but not
test/functional/test_runner.py wallet*
Combinations of wildcards can be passed:
test/functional/test_runner.py ./test/functional/tool* test/functional/mempool*
test_runner.py tool* mempool*
Run the regression test suite with:
test/functional/test_runner.py
Run all possible tests with
test/functional/test_runner.py --extended
In order to run backwards compatibility tests, first run:
test/get_previous_releases.py -b
to download the necessary previous release binaries.
By default, up to 4 tests will be run in parallel by test_runner. To specify
how many jobs to run, append --jobs=n
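For example, to run up to 32 tests in parallel:
test/functional/test_runner.py --jobs=32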
The individual tests and the test_runner harness have many command-line
options. Run test/functional/test_runner.py -h
to see them all.
Speed up test runs with a RAM disk
If you have available RAM on your system you can create a RAM disk to use as the cache
and tmp
directories for the functional tests in order to speed them up.
The speed-up varies by system (according to your RAM speed and other variables), but a 2-3x speed-up is not uncommon.
Linux
To create a 4 GiB RAM disk at /mnt/tmp/
:
sudo mkdir -p /mnt/tmp
sudo mount -t tmpfs -o size=4g tmpfs /mnt/tmp/
Configure the size of the RAM disk using the size=
option.
The size of the RAM disk needed is relative to the number of concurrent jobs the test suite runs.
For example running the test suite with --jobs=100
might need a 4 GiB RAM disk, but running with --jobs=32
will only need a 2.5 GiB RAM disk.
To use, run the test suite specifying the RAM disk as the cachedir
and tmpdir
:
test/functional/test_runner.py --cachedir=/mnt/tmp/cache --tmpdir=/mnt/tmp
Once finished with the tests, simply unmount the disk to free the RAM:
sudo umount /mnt/tmp
macOS
To create a 4 GiB RAM disk named "ramdisk" at /Volumes/ramdisk/
:
diskutil erasevolume HFS+ ramdisk $(hdiutil attach -nomount ram://8388608)
Configure the RAM disk size, expressed as the number of blocks, at the end of the command
(4096 MiB * 2048 blocks/MiB = 8388608 blocks
for 4 GiB). To run the tests using the RAM disk:
test/functional/test_runner.py --cachedir=/Volumes/ramdisk/cache --tmpdir=/Volumes/ramdisk/tmp
To unmount:
umount /Volumes/ramdisk
Troubleshooting and debugging test failures
Resource contention
The P2P and RPC ports used by the bitcoind nodes-under-test are chosen to make conflicts with other processes unlikely. However, if there is another bitcoind process running on the system (perhaps from a previous test which hasn't successfully killed all its bitcoind nodes), then there may be a port conflict which will cause the test to fail. It is recommended that you run the tests on a system where no other bitcoind processes are running.
On Linux, the test framework will warn if there is another bitcoind process running when the tests are started.
If there are zombie bitcoind processes after test failure, you can kill them by running the following commands. Note that these commands will kill all bitcoind processes running on the system, so should not be used if any non-test bitcoind processes are being run.
killall bitcoind
or
pkill -9 bitcoind
Data directory cache
A pre-mined blockchain with 200 blocks is generated the first time a functional test is run and is stored in test/cache. This speeds up test startup times since new blockchains don't need to be generated for each test. However, the cache may get into a bad state, in which case tests will fail. If this happens, remove the cache directory (and make sure bitcoind processes are stopped as above):
rm -rf test/cache
killall bitcoind
Test logging
The tests contain logging at five different levels (DEBUG, INFO, WARNING, ERROR and CRITICAL). From within your functional tests you can log to these different levels using the logger included in the test_framework, e.g. self.log.debug(object); see the sketch after the list below. By default:
- when run through the test_runner harness, all logs are written to test_framework.log and no logs are output to the console.
- when run directly, all logs are written to test_framework.log and INFO level and above are output to the console.
- when run by our CI (Continuous Integration), no logs are output to the console. However, if a test fails, the test_framework.log and bitcoind debug.log files will all be dumped to the console to help troubleshooting.
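As referenced above, a minimal sketch of logging at each level from inside a test's run_test() method (self.log is a standard Python logging.Logger provided by the framework):

```python
# Inside a functional test's run_test() method.
self.log.debug("verbose detail, shown on the console only with -l DEBUG")
self.log.info("normal progress message")
self.log.warning("unexpected but non-fatal condition")
self.log.error("serious problem; the test will likely fail")
self.log.critical("unrecoverable problem")
```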
These log files can be located under the test data directory (which is always printed in the first line of test output):
<test data directory>/test_framework.log
<test data directory>/node<node number>/regtest/debug.log
The node number identifies the relevant test node, starting from node0, which corresponds to its position in the nodes list of the specific test, e.g. self.nodes[0].
To change the level of logs output to the console, use the -l
command line
argument.
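For example, to output DEBUG-level logs to the console when running a test directly:
test/functional/feature_rbf.py -l DEBUG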
test_framework.log and bitcoind debug.log files can be combined into a single aggregate log by running the combine_logs.py script. The output can be plain text, colorized text or html. For example:
test/functional/combine_logs.py -c <test data directory> | less -r
will pipe the colorized logs from the test into less.
Use --tracerpc
to trace out all the RPC calls and responses to the console. For
some tests (e.g. any that use submitblock
to submit a full block over RPC),
this can result in a lot of screen output.
By default, the test data directory will be deleted after a successful run.
Use --nocleanup
to leave the test data directory intact. The test data
directory is never deleted after a failed test.
Attaching a debugger
A Python debugger can be attached to tests at any point. Just add the line:
import pdb; pdb.set_trace()
anywhere in the test. You will then be able to inspect variables, as well as call methods that interact with the bitcoind nodes-under-test.
If further introspection of the bitcoind instances themselves becomes
necessary, this can be accomplished by first setting a pdb breakpoint
at an appropriate location, running the test to that point, then using
gdb
(or lldb
on macOS) to attach to the process and debug.
For instance, to attach to self.nodes[1] during a run you can get the pid of the node within pdb:
(pdb) self.nodes[1].process.pid
Alternatively, you can find the pid by inspecting the temp folder for the specific test you are running. The path to that folder is printed at the beginning of every test run:
2017-06-27 14:13:56.686000 TestFramework (INFO): Initializing test directory /tmp/user/1000/testo9vsdjo3
Use the path to find the pid file in the temp folder:
cat /tmp/user/1000/testo9vsdjo3/node1/regtest/bitcoind.pid
Then you can use the pid to start gdb
:
gdb /home/example/bitcoind <pid>
Note: the gdb attach step may require ptrace_scope to be modified, or sudo to precede the gdb command.
See this link for considerations: https://www.kernel.org/doc/Documentation/security/Yama.txt
Often while debugging RPC calls in functional tests, the test might time out before the
process can return a response. Use --timeout-factor 0
to disable all RPC timeouts for that particular
functional test, e.g. test/functional/wallet_hd.py --timeout-factor 0.
Profiling
An easy way to profile node performance during functional tests is provided
for Linux platforms using perf
.
Perf will sample the running node and will generate profile data in the node's
datadir. The profile data can then be presented using perf report
or a graphical
tool like hotspot.
To generate a profile during test suite runs, use the --perf
flag.
To render the output as text, run
perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less
For ways to generate more granular profiles, see the README in test/functional.
Util tests
Util tests can be run locally by running test/util/test_runner.py
.
Use the -v
option for verbose output.
Lint tests
Dependencies
Lint test | Dependency
---|---
lint-python.py | flake8
lint-python.py | lief
lint-python.py | mypy
lint-python.py | pyzmq
lint-python-dead-code.py | vulture
lint-shell.py | ShellCheck
lint-spelling.py | codespell
The versions in use, along with installation instructions, are available in the CI setup.
Please be aware that on Linux distributions all dependencies are usually available as packages, but could be outdated.
Running the tests
Individual tests can be run by directly calling the test script, e.g.:
test/lint/lint-files.py
You can run all the shell-based lint tests by running:
test/lint/all-lint.py
Writing functional tests
You are encouraged to write functional tests for new or existing features. Further information about the functional test framework and individual tests is found in test/functional.
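As a starting point, here is a minimal, illustrative skeleton; the class name is hypothetical, self.generate is assumed to be the framework's block-generation helper, and the repository's test/functional/example_test.py is the canonical, fully commented example.

```python
#!/usr/bin/env python3
"""Minimal, illustrative functional test skeleton."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal


class ExampleSkeletonTest(BitcoinTestFramework):
    def set_test_params(self):
        # One node, starting from a fresh chain rather than the cached one.
        self.num_nodes = 1
        self.setup_clean_chain = True

    def run_test(self):
        self.log.info("Mine a block and check the chain height")
        self.generate(self.nodes[0], 1)
        assert_equal(self.nodes[0].getblockcount(), 1)


if __name__ == "__main__":
    ExampleSkeletonTest().main()
```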