Compare commits


6 Commits

Author SHA1 Message Date
merge-script
e9e6825b8c Merge bitcoin/bitcoin#32046: [29.x] bump to v29.0rc1
47e2fa86dc [doc] release notes link for 29.0 (glozow)
21f423939e [examples] generate example bitcoin.conf (glozow)
86a3ce6209 [doc] update man pages for 29.0rc1 (glozow)
95c21b1fdd [build] bump version to 29.0rc1 (glozow)
153bd443ec [build] bump CLIENT_VERSION_MAJOR to 29 (glozow)

Pull request description:

  - "backport" #32041
  - bump version to v29.0rc1
  - generate manpages
  - add example bitcoin.conf
  - add release-notes.md pointing to wiki

ACKs for top commit:
  achow101:
    ACK 47e2fa86dc
  davidgumberg:
    ACK 47e2fa86dc
  hebasto:
    ACK 47e2fa86dc.

Tree-SHA512: 4e4eec31ab12990d933b6313950e779b7b58fc349f294f59d2504a8db3c28d5dea64b79e588e2c0fe62836db306fb4c3fb3fcd7bd1f51350e880370cec3437d6
2025-03-13 11:49:25 +08:00
glozow
47e2fa86dc [doc] release notes link for 29.0 2025-03-12 15:09:22 -04:00
glozow
21f423939e [examples] generate example bitcoin.conf 2025-03-12 15:09:22 -04:00
glozow
86a3ce6209 [doc] update man pages for 29.0rc1 2025-03-12 15:02:24 -04:00
glozow
95c21b1fdd [build] bump version to 29.0rc1 2025-03-12 13:48:01 -04:00
glozow
153bd443ec [build] bump CLIENT_VERSION_MAJOR to 29
Github-Pull: #32041
Rebased-From: a3f0e9a
2025-03-12 13:47:38 -04:00
1470 changed files with 106737 additions and 76357 deletions

.cirrus.yml (new file, 214 lines)

@@ -0,0 +1,214 @@
env: # Global defaults
CIRRUS_CLONE_DEPTH: 1
CIRRUS_LOG_TIMESTAMP: true
MAKEJOBS: "-j10"
TEST_RUNNER_PORT_MIN: "14000" # Must be larger than 12321, which is used for the http cache. See https://cirrus-ci.org/guide/writing-tasks/#http-cache
CI_FAILFAST_TEST_LEAVE_DANGLING: "1" # Cirrus CI does not care about dangling processes and setting this variable avoids killing the CI script itself on error
# A self-hosted machine(s) can be used via Cirrus CI. It can be configured with
# multiple users to run tasks in parallel. No sudo permission is required.
#
# https://cirrus-ci.org/guide/persistent-workers/
#
# Generally, a persistent worker must run Ubuntu 23.04+ or Debian 12+.
#
# The following specific types should exist, with the following requirements:
# - small: For an x86_64 machine, with at least 2 vCPUs and 8 GB of memory.
# - medium: For an x86_64 machine, with at least 4 vCPUs and 16 GB of memory.
# - arm64: For an aarch64 machine, with at least 2 vCPUs and 8 GB of memory.
#
# CI jobs for the latter configuration can be run on x86_64 hardware
# by installing qemu-user-static, which works out of the box with
# podman or docker. Background: https://stackoverflow.com/a/72890225/313633
#
# The above machine types are matched to each task by their label. Refer to the
# Cirrus CI docs for more details.
#
# When a contributor maintains a fork of the repo, any pull request they make
# to their own fork, or to the main repository, will trigger two CI runs:
# one for the branch push and one for the pull request.
# This can be avoided by setting SKIP_BRANCH_PUSH=true as a custom env variable
# in Cirrus repository settings, accessible from
# https://cirrus-ci.com/github/my-organization/my-repository
#
# On machines that are persisted between CI jobs, RESTART_CI_DOCKER_BEFORE_RUN=1
# ensures that previous containers and artifacts are cleared before each run.
# This requires installing Podman instead of Docker.
#
# Furthermore:
# - podman-docker-4.1+ is required due to the bugfix in 4.1
# (https://github.com/bitcoin/bitcoin/pull/21652#issuecomment-1657098200)
# - The ./ci/ dependencies (with cirrus-cli) should be installed. One-liner example
# for a single user setup with sudo permission:
#
# ```
# apt update && apt install git screen python3 bash podman-docker uidmap slirp4netns curl -y && curl -L -o cirrus "https://github.com/cirruslabs/cirrus-cli/releases/latest/download/cirrus-linux-$(dpkg --print-architecture)" && mv cirrus /usr/local/bin/cirrus && chmod +x /usr/local/bin/cirrus
# ```
#
# - There are no strict requirements on the hardware. Having fewer CPU threads
# than recommended merely causes the CI script to run slower.
# To avoid rare and intermittent OOM due to short memory usage spikes,
# it is recommended to add (and persist) swap:
#
# ```
# fallocate -l 16G /swapfile_ci && chmod 600 /swapfile_ci && mkswap /swapfile_ci && swapon /swapfile_ci && ( echo '/swapfile_ci none swap sw 0 0' | tee -a /etc/fstab )
# ```
#
# - To register the persistent worker, open a `screen` session and run:
#
# ```
# RESTART_CI_DOCKER_BEFORE_RUN=1 screen cirrus worker run --labels type=todo_fill_in_type --token todo_fill_in_token
# ```
# https://cirrus-ci.org/guide/tips-and-tricks/#sharing-configuration-between-tasks
filter_template: &FILTER_TEMPLATE
# Allow forks to specify SKIP_BRANCH_PUSH=true and skip CI runs when a branch is pushed,
# but still run CI when a PR is created.
# https://cirrus-ci.org/guide/writing-tasks/#conditional-task-execution
skip: $SKIP_BRANCH_PUSH == "true" && $CIRRUS_PR == ""
stateful: false # https://cirrus-ci.org/guide/writing-tasks/#stateful-tasks
base_template: &BASE_TEMPLATE
<< : *FILTER_TEMPLATE
merge_base_script:
# Require git (used in fingerprint_script).
- git --version || ( apt-get update && apt-get install -y git )
- if [ "$CIRRUS_PR" = "" ]; then exit 0; fi
- git fetch --depth=1 $CIRRUS_REPO_CLONE_URL "pull/${CIRRUS_PR}/merge"
- git checkout FETCH_HEAD # Use merged changes to detect silent merge conflicts
# Also, the merge commit is used to lint COMMIT_RANGE="HEAD~..HEAD"
main_template: &MAIN_TEMPLATE
timeout_in: 120m # https://cirrus-ci.org/faq/#instance-timed-out
ci_script:
- ./ci/test_run_all.sh
global_task_template: &GLOBAL_TASK_TEMPLATE
<< : *BASE_TEMPLATE
<< : *MAIN_TEMPLATE
compute_credits_template: &CREDITS_TEMPLATE
# https://cirrus-ci.org/pricing/#compute-credits
# Only use credits for pull requests to the main repo
use_compute_credits: $CIRRUS_REPO_FULL_NAME == 'bitcoin/bitcoin' && $CIRRUS_PR != ""
task:
name: 'lint'
<< : *BASE_TEMPLATE
container:
image: debian:bookworm
cpu: 1
memory: 1G
# For faster CI feedback, immediately schedule the linters
<< : *CREDITS_TEMPLATE
test_runner_cache:
folder: "/lint_test_runner"
fingerprint_script: echo $CIRRUS_TASK_NAME $(git rev-parse HEAD:test/lint/test_runner)
python_cache:
folder: "/python_build"
fingerprint_script: cat .python-version /etc/os-release
unshallow_script:
- git fetch --unshallow --no-tags
lint_script:
- ./ci/lint_run_all.sh
task:
name: 'tidy'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: medium
env:
FILE_ENV: "./ci/test/00_setup_env_native_tidy.sh"
task:
name: 'ARM, unit tests, no functional tests'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: arm64 # Use arm64 worker to sidestep qemu and avoid a slow CI: https://github.com/bitcoin/bitcoin/pull/28087#issuecomment-1649399453
env:
FILE_ENV: "./ci/test/00_setup_env_arm.sh"
task:
name: 'Win64-cross'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: small
env:
FILE_ENV: "./ci/test/00_setup_env_win64.sh"
task:
name: 'CentOS, depends, gui'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: small
env:
FILE_ENV: "./ci/test/00_setup_env_native_centos.sh"
task:
name: 'previous releases, depends DEBUG'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: small
env:
FILE_ENV: "./ci/test/00_setup_env_native_previous_releases.sh"
task:
name: 'TSan, depends, gui'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: medium
env:
FILE_ENV: "./ci/test/00_setup_env_native_tsan.sh"
task:
name: 'MSan, depends'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: small
timeout_in: 300m # Use longer timeout for the *rare* case where a full build (llvm + msan + depends + ...) needs to be done.
env:
FILE_ENV: "./ci/test/00_setup_env_native_msan.sh"
task:
name: 'fuzzer,address,undefined,integer, no depends'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: medium
timeout_in: 240m # larger timeout, due to the high CPU demand
env:
FILE_ENV: "./ci/test/00_setup_env_native_fuzz.sh"
task:
name: 'multiprocess, i686, DEBUG'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: medium
env:
FILE_ENV: "./ci/test/00_setup_env_i686_multiprocess.sh"
task:
name: 'no wallet, libbitcoinkernel'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: small
env:
FILE_ENV: "./ci/test/00_setup_env_native_nowallet_libbitcoinkernel.sh"
task:
name: 'macOS-cross, gui, no tests'
<< : *GLOBAL_TASK_TEMPLATE
persistent_worker:
labels:
type: small
env:
FILE_ENV: "./ci/test/00_setup_env_mac_cross.sh"


@@ -28,7 +28,7 @@ body:
id: useful-skills
attributes:
label: Useful Skills
description: For example, “`std::thread`”, “Qt6 GUI and async GUI design” or “basic understanding of Bitcoin mining and the Bitcoin Core RPC interface”.
description: For example, “`std::thread`”, “Qt5 GUI and async GUI design” or “basic understanding of Bitcoin mining and the Bitcoin Core RPC interface”.
value: |
* Compiling Bitcoin Core from source
* Running the C++ unit tests and the Python functional tests


@@ -1,68 +0,0 @@
name: 'Configure Docker'
description: 'Set up Docker build driver and configure build cache args'
inputs:
cache-provider:
description: 'gha or cirrus cache provider'
required: true
runs:
using: 'composite'
steps:
- name: Check inputs
shell: bash
run: |
# We expect only gha or cirrus as inputs to cache-provider
case "${{ inputs.cache-provider }}" in
gha|cirrus)
;;
*)
echo "::warning title=Unknown input to configure docker action::Provided value was ${{ inputs.cache-provider }}"
;;
esac
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
# Use host network to allow access to cirrus gha cache running on the host
driver-opts: |
network=host
# This is required to allow buildkit to access the actions cache
- name: Expose actions cache variables
uses: actions/github-script@v6
with:
script: |
Object.keys(process.env).forEach(function (key) {
if (key.startsWith('ACTIONS_')) {
core.info(`Exporting ${key}`);
core.exportVariable(key, process.env[key]);
}
});
- name: Construct docker build cache args
shell: bash
run: |
# Configure docker build cache backend
#
# On forks the gha cache will work but will use Github's cache backend.
# Docker will check for variables $ACTIONS_CACHE_URL, $ACTIONS_RESULTS_URL and $ACTIONS_RUNTIME_TOKEN
# which are set automatically when running on GitHub infra: https://docs.docker.com/build/cache/backends/gha/#synopsis
# Use cirrus cache host
if [[ ${{ inputs.cache-provider }} == 'cirrus' ]]; then
url_args="url=${CIRRUS_CACHE_HOST},url_v2=${CIRRUS_CACHE_HOST}"
else
url_args=""
fi
# Always optimistically --cachefrom in case a cache blob exists
args=(--cache-from "type=gha${url_args:+,${url_args}},scope=${CONTAINER_NAME}")
# If this is a push to the default branch, also add --cacheto to save the cache
if [[ ${{ github.event_name }} == "push" && ${{ github.ref_name }} == ${{ github.event.repository.default_branch }} ]]; then
args+=(--cache-to "type=gha${url_args:+,${url_args}},mode=max,ignore-error=true,scope=${CONTAINER_NAME}")
fi
# Always `--load` into docker images (needed when using the `docker-container` build driver).
args+=(--load)
echo "DOCKER_BUILD_CACHE_ARG=${args[*]}" >> $GITHUB_ENV


@@ -1,27 +0,0 @@
name: 'Configure environment'
description: 'Configure CI, cache and container name environment variables'
runs:
using: 'composite'
steps:
- name: Set CI and cache directories
shell: bash
run: |
echo "BASE_ROOT_DIR=${{ runner.temp }}" >> "$GITHUB_ENV"
echo "BASE_BUILD_DIR=${{ runner.temp }}/build" >> "$GITHUB_ENV"
echo "CCACHE_DIR=${{ runner.temp }}/ccache_dir" >> $GITHUB_ENV
echo "DEPENDS_DIR=${{ runner.temp }}/depends" >> "$GITHUB_ENV"
echo "BASE_CACHE=${{ runner.temp }}/depends/built" >> $GITHUB_ENV
echo "SOURCES_PATH=${{ runner.temp }}/depends/sources" >> $GITHUB_ENV
echo "PREVIOUS_RELEASES_DIR=${{ runner.temp }}/previous_releases" >> $GITHUB_ENV
- name: Set cache hashes
shell: bash
run: |
echo "DEPENDS_HASH=$(git ls-tree HEAD depends "$FILE_ENV" | sha256sum | cut -d' ' -f1)" >> $GITHUB_ENV
echo "PREVIOUS_RELEASES_HASH=$(git ls-tree HEAD test/get_previous_releases.py | sha256sum | cut -d' ' -f1)" >> $GITHUB_ENV
- name: Get container name
shell: bash
run: |
source $FILE_ENV
echo "CONTAINER_NAME=$CONTAINER_NAME" >> "$GITHUB_ENV"


@@ -1,47 +0,0 @@
name: 'Restore Caches'
description: 'Restore ccache, depends sources, and built depends caches'
runs:
using: 'composite'
steps:
- name: Restore Ccache cache
id: ccache-cache
uses: cirruslabs/cache/restore@v4
with:
path: ${{ env.CCACHE_DIR }}
key: ccache-${{ env.CONTAINER_NAME }}-${{ github.run_id }}
restore-keys: |
ccache-${{ env.CONTAINER_NAME }}-
- name: Restore depends sources cache
id: depends-sources
uses: cirruslabs/cache/restore@v4
with:
path: ${{ env.SOURCES_PATH }}
key: depends-sources-${{ env.CONTAINER_NAME }}-${{ env.DEPENDS_HASH }}
restore-keys: |
depends-sources-${{ env.CONTAINER_NAME }}-
- name: Restore built depends cache
id: depends-built
uses: cirruslabs/cache/restore@v4
with:
path: ${{ env.BASE_CACHE }}
key: depends-built-${{ env.CONTAINER_NAME }}-${{ env.DEPENDS_HASH }}
restore-keys: |
depends-built-${{ env.CONTAINER_NAME }}-
- name: Restore previous releases cache
id: previous-releases
uses: cirruslabs/cache/restore@v4
with:
path: ${{ env.PREVIOUS_RELEASES_DIR }}
key: previous-releases-${{ env.CONTAINER_NAME }}-${{ env.PREVIOUS_RELEASES_HASH }}
restore-keys: |
previous-releases-${{ env.CONTAINER_NAME }}-
- name: export cache hits
shell: bash
run: |
echo "depends-sources-cache-hit=${{ steps.depends-sources.outputs.cache-hit }}" >> $GITHUB_ENV
echo "depends-built-cache-hit=${{ steps.depends-built.outputs.cache-hit }}" >> $GITHUB_ENV
echo "previous-releases-cache-hit=${{ steps.previous-releases.outputs.cache-hit }}" >> $GITHUB_ENV


@@ -1,39 +0,0 @@
name: 'Save Caches'
description: 'Save ccache, depends sources, and built depends caches'
runs:
using: 'composite'
steps:
- name: debug cache hit inputs
shell: bash
run: |
echo "depends sources direct cache hit to primary key: ${{ env.depends-sources-cache-hit }}"
echo "depends built direct cache hit to primary key: ${{ env.depends-built-cache-hit }}"
echo "previous releases direct cache hit to primary key: ${{ env.previous-releases-cache-hit }}"
- name: Save Ccache cache
uses: cirruslabs/cache/save@v4
if: ${{ (github.event_name == 'push') && (github.ref_name == github.event.repository.default_branch) }}
with:
path: ${{ env.CCACHE_DIR }}
key: ccache-${{ env.CONTAINER_NAME }}-${{ github.run_id }}
- name: Save depends sources cache
uses: cirruslabs/cache/save@v4
if: ${{ (github.event_name == 'push') && (github.ref_name == github.event.repository.default_branch) && (env.depends-sources-cache-hit != 'true') }}
with:
path: ${{ env.SOURCES_PATH }}
key: depends-sources-${{ env.CONTAINER_NAME }}-${{ env.DEPENDS_HASH }}
- name: Save built depends cache
uses: cirruslabs/cache/save@v4
if: ${{ (github.event_name == 'push') && (github.ref_name == github.event.repository.default_branch) && (env.depends-built-cache-hit != 'true' )}}
with:
path: ${{ env.BASE_CACHE }}
key: depends-built-${{ env.CONTAINER_NAME }}-${{ env.DEPENDS_HASH }}
- name: Save previous releases cache
uses: cirruslabs/cache/save@v4
if: ${{ (github.event_name == 'push') && (github.ref_name == github.event.repository.default_branch) && (env.previous-releases-cache-hit != 'true' )}}
with:
path: ${{ env.PREVIOUS_RELEASES_DIR }}
key: previous-releases-${{ env.CONTAINER_NAME }}-${{ env.PREVIOUS_RELEASES_HASH }}


@@ -1,67 +0,0 @@
#!/usr/bin/env python3
# Copyright (c) The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://opensource.org/license/mit/.

import subprocess
import sys
import shlex


def run(cmd, **kwargs):
    print("+ " + shlex.join(cmd), flush=True)
    try:
        return subprocess.run(cmd, check=True, **kwargs)
    except Exception as e:
        sys.exit(e)


def main():
    print("Running tests on commit ...")
    run(["git", "log", "-1"])
    num_procs = int(run(["nproc"], stdout=subprocess.PIPE).stdout)
    run([
        "cmake",
        "-B",
        "build",
        "-Werror=dev",
        # Use clang++, because it is a bit faster and uses less memory than g++
        "-DCMAKE_C_COMPILER=clang",
        "-DCMAKE_CXX_COMPILER=clang++",
        # Use mold, because it is faster than the default linker
        "-DCMAKE_EXE_LINKER_FLAGS=-fuse-ld=mold",
        # Use Debug build type for more debug checks, but enable optimizations
        "-DAPPEND_CXXFLAGS='-O3 -g2'",
        "-DAPPEND_CFLAGS='-O3 -g2'",
        "-DCMAKE_BUILD_TYPE=Debug",
        "-DWERROR=ON",
        "-DWITH_ZMQ=ON",
        "-DBUILD_GUI=ON",
        "-DBUILD_BENCH=ON",
        "-DBUILD_FUZZ_BINARY=ON",
        "-DWITH_USDT=ON",
        "-DCMAKE_CXX_FLAGS=-Wno-error=unused-member-function",
    ])
    run(["cmake", "--build", "build", "-j", str(num_procs)])
    run([
        "ctest",
        "--output-on-failure",
        "--stop-on-failure",
        "--test-dir",
        "build",
        "-j",
        str(num_procs),
    ])
    run([
        sys.executable,
        "./build/test/functional/test_runner.py",
        "-j",
        str(num_procs * 2),
        "--combinedlogslen=99999999",
    ])


if __name__ == "__main__":
    main()
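This script is not invoked once per run; the `test-each-commit` workflow further down drives it through `git rebase --exec`, merging each commit with the PR base before testing it. A condensed sketch of that invocation (`GITHUB_BASE_REF` and `TEST_BASE` are supplied by the workflow environment):

```
# For every commit after TEST_BASE: merge in the PR base branch, run the per-commit
# test script, then hard-reset so the next commit starts from a clean tree.
git rebase --exec "git merge --no-commit origin/${GITHUB_BASE_REF} && python3 ./.github/ci-test-each-commit-exec.py && git reset --hard" "${TEST_BASE}"
```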


@@ -1,6 +1,6 @@
# Copyright (c) 2023-present The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://opensource.org/license/mit.
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
name: CI
on:
@@ -19,32 +19,9 @@ concurrency:
env:
CI_FAILFAST_TEST_LEAVE_DANGLING: 1 # GHA does not care about dangling processes and setting this variable avoids killing the CI script itself on error
CIRRUS_CACHE_HOST: http://127.0.0.1:12321/ # When using Cirrus Runners this host can be used by the docker `gha` build cache type.
REPO_USE_CIRRUS_RUNNERS: 'bitcoin/bitcoin' # Use cirrus runners and cache for this repo, instead of falling back to the slow GHA runners
defaults:
run:
# Enforce fail-fast behavior for all platforms.
# See: https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference
shell: bash
MAKEJOBS: '-j10'
jobs:
runners:
name: 'determine runners'
runs-on: ubuntu-latest
outputs:
provider: ${{ steps.runners.outputs.provider }}
steps:
- id: runners
run: |
if [[ "${REPO_USE_CIRRUS_RUNNERS}" == "${{ github.repository }}" ]]; then
echo "provider=cirrus" >> "$GITHUB_OUTPUT"
echo "::notice title=Runner Selection::Using Cirrus Runners"
else
echo "provider=gha" >> "$GITHUB_OUTPUT"
echo "::notice title=Runner Selection::Using GitHub-hosted runners"
fi
test-each-commit:
name: 'test each commit'
runs-on: ubuntu-24.04
@@ -55,7 +32,7 @@ jobs:
steps:
- name: Determine fetch depth
run: echo "FETCH_DEPTH=$((${{ github.event.pull_request.commits }} + 2))" >> "$GITHUB_ENV"
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: ${{ env.FETCH_DEPTH }}
@@ -88,18 +65,14 @@ jobs:
EXCLUDE_MERGE_BASE_ANCESTORS=^${MERGE_BASE}^@
fi
echo "TEST_BASE=$(git rev-list -n$((${{ env.MAX_COUNT }} + 1)) --reverse HEAD $EXCLUDE_MERGE_BASE_ANCESTORS | head -1)" >> "$GITHUB_ENV"
- run: |
git fetch origin "${GITHUB_BASE_REF}"
git config user.email "ci@example.com"
git config user.name "CI"
- run: |
sudo apt-get update
sudo apt-get install clang mold ccache build-essential cmake ninja-build pkgconf python3-zmq libevent-dev libboost-dev libsqlite3-dev systemtap-sdt-dev libzmq3-dev qt6-base-dev qt6-tools-dev qt6-l10n-tools libqrencode-dev capnproto libcapnp-dev -y
sudo pip3 install --break-system-packages pycapnp
sudo apt-get install clang ccache build-essential cmake pkgconf python3-zmq libevent-dev libboost-dev libsqlite3-dev libdb++-dev systemtap-sdt-dev libzmq3-dev qtbase5-dev qttools5-dev qttools5-dev-tools qtwayland5 libqrencode-dev -y
- name: Compile and run tests
run: |
# Run tests on commits after the last merge commit and before the PR head commit
git rebase --exec "git merge --no-commit origin/${GITHUB_BASE_REF} && python3 ./.github/ci-test-each-commit-exec.py && git reset --hard" ${{ env.TEST_BASE }}
# Use clang++, because it is a bit faster and uses less memory than g++
git rebase --exec "echo Running test-one-commit on \$( git log -1 ) && CC=clang CXX=clang++ cmake -B build -DWERROR=ON -DWITH_ZMQ=ON -DBUILD_GUI=ON -DBUILD_BENCH=ON -DBUILD_FUZZ_BINARY=ON -DWITH_BDB=ON -DWITH_USDT=ON -DCMAKE_CXX_FLAGS='-Wno-error=unused-member-function' && cmake --build build -j $(nproc) && ctest --output-on-failure --stop-on-failure --test-dir build -j $(nproc) && ./build/test/functional/test_runner.py -j $(( $(nproc) * 2 )) --combinedlogslen=99999999" ${{ env.TEST_BASE }}
macos-native-arm64:
name: ${{ matrix.job-name }}
@@ -133,12 +106,8 @@ jobs:
BASE_ROOT_DIR: ${{ github.workspace }}
steps:
- &CHECKOUT
name: Checkout
uses: actions/checkout@v5
with:
# Ensure the latest merged pull request state is used, even on re-runs.
ref: &CHECKOUT_REF_TMPL ${{ github.event_name == 'pull_request' && github.ref || '' }}
- name: Checkout
uses: actions/checkout@v4
- name: Clang version
run: |
@@ -155,12 +124,7 @@ jobs:
run: |
# A workaround for "The `brew link` step did not complete successfully" error.
brew install --quiet python@3 || brew link --overwrite python@3
brew install --quiet coreutils ninja pkgconf gnu-getopt ccache boost libevent zeromq qt@6 qrencode capnp
- name: Install Python packages
run: |
git clone -b v2.1.0 https://github.com/capnproto/pycapnp
pip3 install ./pycapnp -C force-bundled-libcapnp=True --break-system-packages
brew install --quiet coreutils ninja pkgconf gnu-getopt ccache boost libevent zeromq qt@5 qrencode
- name: Set Ccache directory
run: echo "CCACHE_DIR=${RUNNER_TEMP}/ccache_dir" >> "$GITHUB_ENV"
@@ -186,8 +150,10 @@ jobs:
# https://github.com/actions/cache/blob/main/tips-and-workarounds.md#update-a-cache
key: ${{ github.job }}-${{ matrix.job-type }}-ccache-${{ github.run_id }}
windows-native-dll:
win64-native:
name: ${{ matrix.job-name }}
# Use latest image, but hardcode version to avoid silent upgrades (and breaks).
# See: https://github.com/actions/runner-images#available-images.
runs-on: windows-2022
if: ${{ vars.SKIP_BRANCH_PUSH != 'true' || github.event_name == 'pull_request' }}
@@ -202,14 +168,15 @@ jobs:
job-type: [standard, fuzz]
include:
- job-type: standard
generate-options: '-DBUILD_GUI=ON -DWITH_ZMQ=ON -DBUILD_BENCH=ON -DWERROR=ON'
job-name: 'Windows native, VS 2022'
generate-options: '-DBUILD_GUI=ON -DWITH_BDB=ON -DWITH_ZMQ=ON -DBUILD_BENCH=ON -DWERROR=ON'
job-name: 'Win64 native, VS 2022'
- job-type: fuzz
generate-options: '-DVCPKG_MANIFEST_NO_DEFAULT_FEATURES=ON -DVCPKG_MANIFEST_FEATURES="wallet" -DBUILD_GUI=OFF -DBUILD_FOR_FUZZING=ON -DWERROR=ON'
job-name: 'Windows native, fuzz, VS 2022'
generate-options: '-DVCPKG_MANIFEST_NO_DEFAULT_FEATURES=ON -DVCPKG_MANIFEST_FEATURES="sqlite" -DBUILD_GUI=OFF -DBUILD_FOR_FUZZING=ON -DWERROR=ON'
job-name: 'Win64 native fuzz, VS 2022'
steps:
- *CHECKOUT
- name: Checkout
uses: actions/checkout@v4
- name: Configure Developer Command Prompt for Microsoft Visual C++
# Using microsoft/setup-msbuild is not enough.
@@ -218,7 +185,6 @@ jobs:
arch: x64
- name: Get tool information
shell: pwsh
run: |
cmake -version | Tee-Object -FilePath "cmake_version"
Write-Output "---"
@@ -226,13 +192,12 @@ jobs:
$env:VCToolsVersion | Tee-Object -FilePath "toolset_version"
py -3 --version
Write-Host "PowerShell version $($PSVersionTable.PSVersion.ToString())"
bash --version
- name: Using vcpkg with MSBuild
run: |
echo "set(VCPKG_BUILD_TYPE release)" >> "${VCPKG_INSTALLATION_ROOT}/triplets/x64-windows.cmake"
# Workaround for libevent, which requires CMake 3.1 but is incompatible with CMake >= 4.0.
sed -i '1s/^/set(ENV{CMAKE_POLICY_VERSION_MINIMUM} 3.5)\n/' "${VCPKG_INSTALLATION_ROOT}/scripts/ports.cmake"
Set-Location "$env:VCPKG_INSTALLATION_ROOT"
Add-Content -Path "triplets\x64-windows.cmake" -Value "set(VCPKG_BUILD_TYPE release)"
Add-Content -Path "triplets\x64-windows-static.cmake" -Value "set(VCPKG_BUILD_TYPE release)"
- name: vcpkg tools cache
uses: actions/cache@v4
@@ -249,7 +214,7 @@ jobs:
- name: Generate build system
run: |
cmake -B build -Werror=dev --preset vs2022 -DCMAKE_TOOLCHAIN_FILE="${VCPKG_INSTALLATION_ROOT}/scripts/buildsystems/vcpkg.cmake" ${{ matrix.generate-options }}
cmake -B build --preset vs2022-static -DCMAKE_TOOLCHAIN_FILE="$env:VCPKG_INSTALLATION_ROOT\scripts\buildsystems\vcpkg.cmake" ${{ matrix.generate-options }}
- name: Save vcpkg binary cache
uses: actions/cache/save@v4
@@ -261,44 +226,32 @@ jobs:
- name: Build
working-directory: build
run: |
cmake --build . -j $NUMBER_OF_PROCESSORS --config Release
- name: Get bitcoind manifest
if: matrix.job-type == 'standard'
working-directory: build
run: |
mt.exe -nologo -inputresource:bin/Release/bitcoind.exe -out:bitcoind.manifest
cat bitcoind.manifest
echo
mt.exe -nologo -inputresource:bin/Release/bitcoind.exe -validate_manifest
cmake --build . -j $env:NUMBER_OF_PROCESSORS --config Release
- name: Run test suite
if: matrix.job-type == 'standard'
working-directory: build
env:
QT_PLUGIN_PATH: '${{ github.workspace }}\build\vcpkg_installed\x64-windows\Qt6\plugins'
run: |
ctest --output-on-failure --stop-on-failure -j $NUMBER_OF_PROCESSORS -C Release
ctest --output-on-failure --stop-on-failure -j $env:NUMBER_OF_PROCESSORS -C Release
- name: Run functional tests
if: matrix.job-type == 'standard'
working-directory: build
env:
BITCOIN_BIN: '${{ github.workspace }}\build\bin\Release\bitcoin.exe'
BITCOIND: '${{ github.workspace }}\build\bin\Release\bitcoind.exe'
BITCOINCLI: '${{ github.workspace }}\build\bin\Release\bitcoin-cli.exe'
BITCOINTX: '${{ github.workspace }}\build\bin\Release\bitcoin-tx.exe'
BITCOINUTIL: '${{ github.workspace }}\build\bin\Release\bitcoin-util.exe'
BITCOINWALLET: '${{ github.workspace }}\build\bin\Release\bitcoin-wallet.exe'
TEST_RUNNER_EXTRA: ${{ github.event_name != 'pull_request' && '--extended' || '' }}
run: py -3 test/functional/test_runner.py --jobs $NUMBER_OF_PROCESSORS --ci --quiet --tmpdirprefix="${RUNNER_TEMP}" --combinedlogslen=99999999 --timeout-factor=${TEST_RUNNER_TIMEOUT_FACTOR} ${TEST_RUNNER_EXTRA}
shell: cmd
run: py -3 test\functional\test_runner.py --jobs %NUMBER_OF_PROCESSORS% --ci --quiet --tmpdirprefix=%RUNNER_TEMP% --combinedlogslen=99999999 --timeout-factor=%TEST_RUNNER_TIMEOUT_FACTOR% %TEST_RUNNER_EXTRA%
- name: Clone corpora
if: matrix.job-type == 'fuzz'
run: |
git clone --depth=1 https://github.com/bitcoin-core/qa-assets "${RUNNER_TEMP}/qa-assets"
cd "${RUNNER_TEMP}/qa-assets"
echo "Using qa-assets repo from commit ..."
git clone --depth=1 https://github.com/bitcoin-core/qa-assets "$env:RUNNER_TEMP\qa-assets"
Set-Location "$env:RUNNER_TEMP\qa-assets"
Write-Host "Using qa-assets repo from commit ..."
git log -1
- name: Run fuzz tests
@@ -306,259 +259,48 @@ jobs:
working-directory: build
env:
BITCOINFUZZ: '${{ github.workspace }}\build\bin\Release\fuzz.exe'
shell: cmd
run: |
py -3 test/fuzz/test_runner.py --par $NUMBER_OF_PROCESSORS --loglevel DEBUG "${RUNNER_TEMP}/qa-assets/fuzz_corpora"
py -3 test\fuzz\test_runner.py --par %NUMBER_OF_PROCESSORS% --loglevel DEBUG %RUNNER_TEMP%\qa-assets\fuzz_corpora
windows-cross:
name: 'Linux->Windows cross, no tests'
needs: runners
runs-on: ${{ needs.runners.outputs.provider == 'cirrus' && 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-sm' || 'ubuntu-24.04' }}
asan-lsan-ubsan-integer-no-depends-usdt:
name: 'ASan + LSan + UBSan + integer, no depends, USDT'
runs-on: ubuntu-24.04 # has to match container in ci/test/00_setup_env_native_asan.sh for tracing tools
if: ${{ vars.SKIP_BRANCH_PUSH != 'true' || github.event_name == 'pull_request' }}
timeout-minutes: 120
env:
FILE_ENV: './ci/test/00_setup_env_win64.sh'
FILE_ENV: "./ci/test/00_setup_env_native_asan.sh"
DANGER_CI_ON_HOST_FOLDERS: 1
steps:
- *CHECKOUT
- name: Checkout
uses: actions/checkout@v4
- name: Configure environment
uses: ./.github/actions/configure-environment
- name: Set CI directories
run: |
echo "CCACHE_DIR=${{ runner.temp }}/ccache_dir" >> "$GITHUB_ENV"
echo "BASE_ROOT_DIR=${{ runner.temp }}" >> "$GITHUB_ENV"
echo "BASE_BUILD_DIR=${{ runner.temp }}/build-asan" >> "$GITHUB_ENV"
- name: Restore caches
id: restore-cache
uses: ./.github/actions/restore-caches
- name: Configure Docker
uses: ./.github/actions/configure-docker
- name: Restore Ccache cache
id: ccache-cache
uses: actions/cache/restore@v4
with:
cache-provider: ${{ needs.runners.outputs.provider }}
- name: CI script
run: ./ci/test_run_all.sh
- name: Save caches
uses: ./.github/actions/save-caches
- name: Upload built executables
uses: actions/upload-artifact@v4
with:
name: x86_64-w64-mingw32-executables-${{ github.run_id }}
path: |
${{ env.BASE_BUILD_DIR }}/bin/*.exe
${{ env.BASE_BUILD_DIR }}/src/secp256k1/bin/*.exe
${{ env.BASE_BUILD_DIR }}/src/univalue/*.exe
${{ env.BASE_BUILD_DIR }}/test/config.ini
windows-native-test:
name: 'Windows, test cross-built'
runs-on: windows-2022
needs: windows-cross
env:
PYTHONUTF8: 1
TEST_RUNNER_TIMEOUT_FACTOR: 40
steps:
- *CHECKOUT
- name: Download built executables
uses: actions/download-artifact@v4
with:
name: x86_64-w64-mingw32-executables-${{ github.run_id }}
- name: Run bitcoind.exe
run: ./bin/bitcoind.exe -version
- name: Find mt.exe tool
shell: pwsh
run: |
$sdk_dir = (Get-ItemProperty 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows Kits\Installed Roots' -Name KitsRoot10).KitsRoot10
$sdk_latest = (Get-ChildItem "$sdk_dir\bin" -Directory | Where-Object { $_.Name -match '^\d+\.\d+\.\d+\.\d+$' } | Sort-Object Name -Descending | Select-Object -First 1).Name
"MT_EXE=${sdk_dir}bin\${sdk_latest}\x64\mt.exe" >> $env:GITHUB_ENV
- name: Get bitcoind manifest
shell: pwsh
run: |
& $env:MT_EXE -nologo -inputresource:bin\bitcoind.exe -out:bitcoind.manifest
Get-Content bitcoind.manifest
& $env:MT_EXE -nologo -inputresource:bin\bitcoind.exe -validate_manifest
- name: Run unit tests
# Can't use ctest here like other jobs as we don't have a CMake build tree.
run: |
./bin/test_bitcoin.exe -l test_suite # Intentionally run sequentially here, to catch test case failures caused by dirty global state from prior test cases.
./src/secp256k1/bin/exhaustive_tests.exe
./src/secp256k1/bin/noverify_tests.exe
./src/secp256k1/bin/tests.exe
./src/univalue/object.exe
./src/univalue/unitester.exe
- name: Run benchmarks
run: ./bin/bench_bitcoin.exe -sanity-check
- name: Adjust paths in test/config.ini
shell: pwsh
run: |
(Get-Content "test/config.ini") -replace '(?<=^SRCDIR=).*', '${{ github.workspace }}' -replace '(?<=^BUILDDIR=).*', '${{ github.workspace }}' -replace '(?<=^RPCAUTH=).*', '${{ github.workspace }}/share/rpcauth/rpcauth.py' | Set-Content "test/config.ini"
Get-Content "test/config.ini"
- name: Set previous release directory
run: |
echo "PREVIOUS_RELEASES_DIR=${{ runner.temp }}/previous_releases" >> "$GITHUB_ENV"
- name: Get previous releases
working-directory: test
run: ./get_previous_releases.py --target-dir $PREVIOUS_RELEASES_DIR
- name: Run functional tests
env:
# TODO: Fix the excluded test and re-enable it.
# feature_unsupported_utxo_db.py fails on windows because of emojis in the test data directory
EXCLUDE: '--exclude wallet_multiwallet.py,feature_unsupported_utxo_db.py'
TEST_RUNNER_EXTRA: ${{ github.event_name != 'pull_request' && '--extended' || '' }}
run: py -3 test/functional/test_runner.py --jobs $NUMBER_OF_PROCESSORS --ci --quiet --tmpdirprefix="$RUNNER_TEMP" --combinedlogslen=99999999 --timeout-factor=$TEST_RUNNER_TIMEOUT_FACTOR $EXCLUDE $TEST_RUNNER_EXTRA
ci-matrix:
name: ${{ matrix.name }}
needs: runners
runs-on: ${{ needs.runners.outputs.provider == 'cirrus' && matrix.cirrus-runner || matrix.fallback-runner }}
if: ${{ vars.SKIP_BRANCH_PUSH != 'true' || github.event_name == 'pull_request' }}
timeout-minutes: ${{ matrix.timeout-minutes }}
env:
DANGER_CI_ON_HOST_FOLDERS: 1
FILE_ENV: ${{ matrix.file-env }}
strategy:
fail-fast: false
matrix:
include:
- name: '32 bit ARM, unit tests, no functional tests'
cirrus-runner: 'ubuntu-24.04-arm' # Cirrus' Arm runners are Apple (with virtual Linux aarch64), which doesn't support 32-bit mode
fallback-runner: 'ubuntu-24.04-arm'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_arm.sh'
provider: 'gha'
- name: 'ASan + LSan + UBSan + integer, no depends, USDT'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-md' # has to match container in ci/test/00_setup_env_native_asan.sh for tracing tools
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_native_asan.sh'
- name: 'macOS-cross, gui, no tests'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-sm'
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_mac_cross.sh'
- name: 'No wallet, libbitcoinkernel'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-sm'
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_native_nowallet_libbitcoinkernel.sh'
- name: 'no IPC, i686, DEBUG'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-md'
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_i686_no_ipc.sh'
- name: 'fuzzer,address,undefined,integer, no depends'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-lg'
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 240
file-env: './ci/test/00_setup_env_native_fuzz.sh'
- name: 'previous releases, depends DEBUG'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-md'
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_native_previous_releases.sh'
- name: 'CentOS, depends, gui'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-lg'
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_native_centos.sh'
- name: 'tidy'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-md'
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_native_tidy.sh'
- name: 'TSan, depends, no gui'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-md'
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_native_tsan.sh'
- name: 'MSan, depends'
cirrus-runner: 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-lg'
fallback-runner: 'ubuntu-24.04'
timeout-minutes: 120
file-env: './ci/test/00_setup_env_native_msan.sh'
steps:
- *CHECKOUT
- name: Configure environment
uses: ./.github/actions/configure-environment
- name: Restore caches
id: restore-cache
uses: ./.github/actions/restore-caches
- name: Configure Docker
uses: ./.github/actions/configure-docker
with:
cache-provider: ${{ matrix.provider || needs.runners.outputs.provider }}
path: ${{ env.CCACHE_DIR }}
key: ${{ github.job }}-ccache-${{ github.run_id }}
restore-keys: ${{ github.job }}-ccache-
- name: Enable bpfcc script
if: ${{ env.CONTAINER_NAME == 'ci_native_asan' }}
# In the image build step, no external environment variables are available,
# so any settings will need to be written to the settings env file:
run: sed -i "s|\${INSTALL_BCC_TRACING_TOOLS}|true|g" ./ci/test/00_setup_env_native_asan.sh
- name: Set mmap_rnd_bits
if: ${{ env.CONTAINER_NAME == 'ci_native_tsan' || env.CONTAINER_NAME == 'ci_native_msan' }}
# Prevents crashes due to high ASLR entropy
run: sudo sysctl -w vm.mmap_rnd_bits=28
- name: CI script
run: ./ci/test_run_all.sh
- name: Save caches
uses: ./.github/actions/save-caches
lint:
name: 'lint'
needs: runners
runs-on: ${{ needs.runners.outputs.provider == 'cirrus' && 'ghcr.io/cirruslabs/ubuntu-runner-amd64:24.04-xs' || 'ubuntu-24.04' }}
if: ${{ vars.SKIP_BRANCH_PUSH != 'true' || github.event_name == 'pull_request' }}
timeout-minutes: 20
env:
CONTAINER_NAME: "bitcoin-linter"
steps:
- name: Checkout
uses: actions/checkout@v5
- name: Save Ccache cache
uses: actions/cache/save@v4
if: github.event_name != 'pull_request' && steps.ccache-cache.outputs.cache-hit != 'true'
with:
ref: *CHECKOUT_REF_TMPL
fetch-depth: 0
- name: Configure Docker
uses: ./.github/actions/configure-docker
with:
cache-provider: ${{ needs.runners.outputs.provider }}
- name: CI script
run: |
set -o xtrace
docker buildx build -t "$CONTAINER_NAME" $DOCKER_BUILD_CACHE_ARG --file "./ci/lint_imagefile" .
CIRRUS_PR_FLAG=""
if [ "${{ github.event_name }}" = "pull_request" ]; then
CIRRUS_PR_FLAG="-e CIRRUS_PR=1"
fi
docker run --rm $CIRRUS_PR_FLAG -v "$(pwd)":/bitcoin "$CONTAINER_NAME"
path: ${{ env.CCACHE_DIR }}
# https://github.com/actions/cache/blob/main/tips-and-workarounds.md#update-a-cache
key: ${{ github.job }}-ccache-${{ github.run_id }}

.gitignore (9 changed lines)

@@ -1,8 +1,3 @@
# Patterns that are specific to a text editor, IDE, operating system, or user
# environment are not added here. They should be added to your local gitignore
# file instead:
# https://docs.github.com/en/get-started/git-basics/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer
# Build subdirectories.
/*build*
!/build-aux
@@ -20,8 +15,8 @@
# Previous releases
/releases
# cargo default target dir
target/
#build tests
test/lint/test_runner/target/
/guix-build-*


@@ -1,7 +1,7 @@
[main]
host = https://www.transifex.com
[o:bitcoin:p:bitcoin:r:qt-translation-030x]
[o:bitcoin:p:bitcoin:r:qt-translation-029x]
file_filter = src/qt/locale/bitcoin_<lang>.xlf
source_file = src/qt/locale/bitcoin_en.xlf
source_lang = en


@@ -19,20 +19,16 @@ if(POLICY CMP0171)
cmake_policy(SET CMP0171 NEW)
endif()
# When adjusting CMake flag variables, we must not override those explicitly
# set by the user. These are a subset of the CACHE_VARIABLES property.
get_directory_property(precious_variables CACHE_VARIABLES)
#=============================
# Project / Package metadata
#=============================
set(CLIENT_NAME "Bitcoin Core")
set(CLIENT_VERSION_MAJOR 30)
set(CLIENT_VERSION_MINOR 2)
set(CLIENT_VERSION_MAJOR 29)
set(CLIENT_VERSION_MINOR 0)
set(CLIENT_VERSION_BUILD 0)
set(CLIENT_VERSION_RC 0)
set(CLIENT_VERSION_RC 1)
set(CLIENT_VERSION_IS_RELEASE "true")
set(COPYRIGHT_YEAR "2026")
set(COPYRIGHT_YEAR "2025")
# During the enabling of the CXX and CXXOBJ languages, we modify
# CMake's compiler/linker invocation strings by appending the content
@@ -94,12 +90,11 @@ endif()
#=============================
include(CMakeDependentOption)
# When adding a new option, end the <help_text> with a full stop for consistency.
option(BUILD_BITCOIN_BIN "Build bitcoin executable." ON)
option(BUILD_DAEMON "Build bitcoind executable." ON)
option(BUILD_GUI "Build bitcoin-qt executable." OFF)
option(BUILD_CLI "Build bitcoin-cli executable." ON)
option(BUILD_TESTS "Build test_bitcoin and other unit test executables." ON)
option(BUILD_TESTS "Build test_bitcoin executable." ON)
option(BUILD_TX "Build bitcoin-tx executable." ${BUILD_TESTS})
option(BUILD_UTIL "Build bitcoin-util executable." ${BUILD_TESTS})
@@ -107,16 +102,35 @@ option(BUILD_UTIL_CHAINSTATE "Build experimental bitcoin-chainstate executable."
option(BUILD_KERNEL_LIB "Build experimental bitcoinkernel library." ${BUILD_UTIL_CHAINSTATE})
option(ENABLE_WALLET "Enable wallet." ON)
if(ENABLE_WALLET)
option(WITH_SQLITE "Enable SQLite wallet support." ${ENABLE_WALLET})
if(WITH_SQLITE)
if(VCPKG_TARGET_TRIPLET)
# Use of the `unofficial::` namespace is a vcpkg package manager convention.
find_package(unofficial-sqlite3 CONFIG REQUIRED)
else()
find_package(SQLite3 3.7.17 REQUIRED)
endif()
set(USE_SQLITE ON)
endif()
option(WITH_BDB "Enable Berkeley DB (BDB) wallet support." OFF)
cmake_dependent_option(WARN_INCOMPATIBLE_BDB "Warn when using a Berkeley DB (BDB) version other than 4.8." ON "WITH_BDB" OFF)
if(WITH_BDB)
find_package(BerkeleyDB 4.8 MODULE REQUIRED)
set(USE_BDB ON)
if(NOT BerkeleyDB_VERSION VERSION_EQUAL 4.8)
message(WARNING "Found Berkeley DB (BDB) other than 4.8.\n"
"BDB (legacy) wallets opened by this build will not be portable!"
)
if(WARN_INCOMPATIBLE_BDB)
message(WARNING "If this is intended, pass \"-DWARN_INCOMPATIBLE_BDB=OFF\".\n"
"Passing \"-DWITH_BDB=OFF\" will suppress this warning."
)
endif()
endif()
endif()
cmake_dependent_option(BUILD_WALLET_TOOL "Build bitcoin-wallet tool." ${BUILD_TESTS} "ENABLE_WALLET" OFF)
option(ENABLE_HARDENING "Attempt to harden the resulting executables." ON)
option(REDUCE_EXPORTS "Attempt to reduce exported symbols in the resulting executables." OFF)
option(WERROR "Treat compiler warnings as errors." OFF)
option(WITH_CCACHE "Attempt to use ccache for compiling." ON)
@@ -131,7 +145,7 @@ if(WITH_USDT)
find_package(USDT MODULE REQUIRED)
endif()
option(ENABLE_EXTERNAL_SIGNER "Enable external signer support." ON)
cmake_dependent_option(ENABLE_EXTERNAL_SIGNER "Enable external signer support." ON "NOT WIN32" OFF)
cmake_dependent_option(WITH_QRENCODE "Enable QR code support." ON "BUILD_GUI" OFF)
if(WITH_QRENCODE)
@@ -139,11 +153,10 @@ if(WITH_QRENCODE)
set(USE_QRCODE TRUE)
endif()
cmake_dependent_option(WITH_DBUS "Enable DBus support." ON "NOT CMAKE_SYSTEM_NAME MATCHES \"(Windows|Darwin)\" AND BUILD_GUI" OFF)
cmake_dependent_option(WITH_DBUS "Enable DBus support." ON "CMAKE_SYSTEM_NAME STREQUAL \"Linux\" AND BUILD_GUI" OFF)
cmake_dependent_option(ENABLE_IPC "Build multiprocess bitcoin-node and bitcoin-gui executables in addition to monolithic bitcoind and bitcoin-qt executables." ON "NOT WIN32" OFF)
cmake_dependent_option(WITH_EXTERNAL_LIBMULTIPROCESS "Build with external libmultiprocess library instead of with local git subtree when ENABLE_IPC is enabled. This is not normally recommended, but can be useful for developing libmultiprocess itself." OFF "ENABLE_IPC" OFF)
if(ENABLE_IPC AND WITH_EXTERNAL_LIBMULTIPROCESS)
option(WITH_MULTIPROCESS "Build multiprocess bitcoin-node and bitcoin-gui executables in addition to monolithic bitcoind and bitcoin-qt executables. Requires libmultiprocess library. Experimental." OFF)
if(WITH_MULTIPROCESS)
find_package(Libmultiprocess REQUIRED COMPONENTS Lib)
find_package(LibmultiprocessNative REQUIRED COMPONENTS Bin
NAMES Libmultiprocess
@@ -163,7 +176,7 @@ if(BUILD_GUI)
if(BUILD_GUI_TESTS)
list(APPEND qt_components Test)
endif()
find_package(Qt 6.2 MODULE REQUIRED
find_package(Qt 5.11.3 MODULE REQUIRED
COMPONENTS ${qt_components}
)
unset(qt_components)
@@ -203,7 +216,6 @@ target_link_libraries(core_interface INTERFACE
if(BUILD_FOR_FUZZING)
message(WARNING "BUILD_FOR_FUZZING=ON will disable all other targets and force BUILD_FUZZ_BINARY=ON.")
set(BUILD_BITCOIN_BIN OFF)
set(BUILD_DAEMON OFF)
set(BUILD_CLI OFF)
set(BUILD_TX OFF)
@@ -217,7 +229,6 @@ if(BUILD_FOR_FUZZING)
set(BUILD_TESTS OFF)
set(BUILD_GUI_TESTS OFF)
set(BUILD_BENCH OFF)
set(ENABLE_IPC OFF)
set(BUILD_FUZZ_BINARY ON)
target_compile_definitions(core_interface INTERFACE
@@ -279,10 +290,6 @@ if(WIN32)
/Zc:__cplusplus
/sdl
)
target_link_options(core_interface INTERFACE
# We embed our own manifests.
/MANIFEST:NO
)
# Improve parallelism in MSBuild.
# See: https://devblogs.microsoft.com/cppblog/improved-parallelism-in-msbuild/.
list(APPEND CMAKE_VS_GLOBALS "UseMultiToolTask=true")
@@ -341,28 +348,13 @@ target_link_libraries(core_interface INTERFACE
Threads::Threads
)
# Define sanitize_interface with -fsanitize flags intended to apply to all
# libraries and executables.
add_library(sanitize_interface INTERFACE)
target_link_libraries(core_interface INTERFACE sanitize_interface)
if(SANITIZERS)
# Transform list of sanitizers into -fsanitize flags, replacing "fuzzer" with
# "fuzzer-no-link" in sanitize_interface flags, and moving "fuzzer" to
# fuzzer_interface flags. If -DSANITIZERS=fuzzer is specified, the fuzz test
# binary should be built with -fsanitize=fuzzer (so it can use libFuzzer's
# main function), but libraries should be built with -fsanitize=fuzzer-no-link
# (so they can be linked into other executables that have their own main
# functions).
string(REGEX REPLACE "(^|,)fuzzer($|,)" "\\1fuzzer-no-link\\2" sanitize_opts "${SANITIZERS}")
set(fuzz_flag "")
if(NOT sanitize_opts STREQUAL SANITIZERS)
set(fuzz_flag "-fsanitize=fuzzer")
endif()
# First check if the compiler accepts flags. If an incompatible pair like
# -fsanitize=address,thread is used here, this check will fail. This will also
# fail if a bad argument is passed, e.g. -fsanitize=undfeined
try_append_cxx_flags("-fsanitize=${sanitize_opts}" TARGET sanitize_interface
try_append_cxx_flags("-fsanitize=${SANITIZERS}" TARGET sanitize_interface
RESULT_VAR cxx_supports_sanitizers
SKIP_LINK
)
@@ -375,15 +367,15 @@ if(SANITIZERS)
# flag. This is a separate check so we can give a better error message when
# the sanitize flags are supported by the compiler but the actual sanitizer
# libs are missing.
try_append_linker_flag("-fsanitize=${sanitize_opts}" VAR SANITIZER_LDFLAGS
try_append_linker_flag("-fsanitize=${SANITIZERS}" VAR SANITIZER_LDFLAGS
SOURCE "
#include <cstdint>
#include <cstddef>
extern \"C\" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) { return 0; }
__attribute__((weak)) // allow for libFuzzer linking
int main() { return 0; }
"
RESULT_VAR linker_supports_sanitizers
NO_CACHE_IF_FAILED
)
if(NOT linker_supports_sanitizers)
message(FATAL_ERROR "Linker did not accept requested flags, you are missing required libraries.")
@@ -391,10 +383,8 @@ if(SANITIZERS)
endif()
target_link_options(sanitize_interface INTERFACE ${SANITIZER_LDFLAGS})
# Define fuzzer_interface with flags intended to apply to the fuzz test binary,
# and perform a test compilation to determine correct value of
# FUZZ_BINARY_LINKS_WITHOUT_MAIN_FUNCTION.
if(BUILD_FUZZ_BINARY)
target_link_libraries(core_interface INTERFACE ${FUZZ_LIBS})
include(CheckSourceCompilesWithFlags)
check_cxx_source_compiles_with_flags("
#include <cstdint>
@@ -402,12 +392,9 @@ if(BUILD_FUZZ_BINARY)
extern \"C\" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) { return 0; }
// No main() function.
" FUZZ_BINARY_LINKS_WITHOUT_MAIN_FUNCTION
LDFLAGS ${SANITIZER_LDFLAGS} ${fuzz_flag}
LDFLAGS ${SANITIZER_LDFLAGS}
LINK_LIBRARIES ${FUZZ_LIBS}
)
add_library(fuzzer_interface INTERFACE)
target_link_options(fuzzer_interface INTERFACE ${fuzz_flag})
target_link_libraries(fuzzer_interface INTERFACE ${FUZZ_LIBS})
endif()
include(AddBoostIfNeeded)
@@ -444,7 +431,6 @@ else()
try_append_cxx_flags("-Wvla" TARGET warn_interface SKIP_LINK)
try_append_cxx_flags("-Wshadow-field" TARGET warn_interface SKIP_LINK)
try_append_cxx_flags("-Wthread-safety" TARGET warn_interface SKIP_LINK)
try_append_cxx_flags("-Wthread-safety-pointer" TARGET warn_interface SKIP_LINK)
try_append_cxx_flags("-Wloop-analysis" TARGET warn_interface SKIP_LINK)
try_append_cxx_flags("-Wredundant-decls" TARGET warn_interface SKIP_LINK)
try_append_cxx_flags("-Wunused-member-function" TARGET warn_interface SKIP_LINK)
@@ -497,70 +483,62 @@ try_append_cxx_flags("-fmacro-prefix-map=A=B" TARGET core_interface SKIP_LINK
# -fstack-reuse=none for all gcc builds. (Only gcc understands this flag).
try_append_cxx_flags("-fstack-reuse=none" TARGET core_interface)
if(MSVC)
try_append_linker_flag("/DYNAMICBASE" TARGET core_interface)
try_append_linker_flag("/HIGHENTROPYVA" TARGET core_interface)
try_append_linker_flag("/NXCOMPAT" TARGET core_interface)
else()
if(ENABLE_HARDENING)
add_library(hardening_interface INTERFACE)
target_link_libraries(core_interface INTERFACE hardening_interface)
if(MSVC)
try_append_linker_flag("/DYNAMICBASE" TARGET hardening_interface)
try_append_linker_flag("/HIGHENTROPYVA" TARGET hardening_interface)
try_append_linker_flag("/NXCOMPAT" TARGET hardening_interface)
else()
# _FORTIFY_SOURCE requires that there is some level of optimization,
# otherwise it does nothing and just creates a compiler warning.
try_append_cxx_flags("-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3"
RESULT_VAR cxx_supports_fortify_source
SOURCE "int main() {
# if !defined __OPTIMIZE__ || __OPTIMIZE__ <= 0
#error
#endif
}"
)
if(cxx_supports_fortify_source)
target_compile_options(core_interface INTERFACE
-U_FORTIFY_SOURCE
-D_FORTIFY_SOURCE=3
# _FORTIFY_SOURCE requires that there is some level of optimization,
# otherwise it does nothing and just creates a compiler warning.
try_append_cxx_flags("-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3"
RESULT_VAR cxx_supports_fortify_source
SOURCE "int main() {
# if !defined __OPTIMIZE__ || __OPTIMIZE__ <= 0
#error
#endif
}"
)
endif()
unset(cxx_supports_fortify_source)
try_append_cxx_flags("-Wstack-protector" TARGET core_interface SKIP_LINK)
try_append_cxx_flags("-fstack-protector-all" TARGET core_interface)
try_append_cxx_flags("-fcf-protection=full" TARGET core_interface)
if(MINGW)
# stack-clash-protection is a no-op for Windows.
# See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90458 for more details.
else()
try_append_cxx_flags("-fstack-clash-protection" TARGET core_interface)
endif()
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "aarch64" OR CMAKE_SYSTEM_PROCESSOR STREQUAL "arm64")
if(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
try_append_cxx_flags("-mbranch-protection=bti" TARGET core_interface SKIP_LINK)
else()
try_append_cxx_flags("-mbranch-protection=standard" TARGET core_interface SKIP_LINK)
if(cxx_supports_fortify_source)
target_compile_options(hardening_interface INTERFACE
-U_FORTIFY_SOURCE
-D_FORTIFY_SOURCE=3
)
endif()
endif()
unset(cxx_supports_fortify_source)
try_append_linker_flag("-Wl,--enable-reloc-section" TARGET core_interface)
try_append_linker_flag("-Wl,--dynamicbase" TARGET core_interface)
try_append_linker_flag("-Wl,--nxcompat" TARGET core_interface)
try_append_linker_flag("-Wl,--high-entropy-va" TARGET core_interface)
try_append_linker_flag("-Wl,-z,relro" TARGET core_interface)
try_append_linker_flag("-Wl,-z,now" TARGET core_interface)
# TODO: This can be dropped once Bitcoin Core no longer supports
# NetBSD 10.0 or if upstream fix is backported.
# NetBSD's dynamic linker ld.elf_so < 11.0 supports exactly 2
# `PT_LOAD` segments and binaries linked with `-z separate-code`
# have 4 `PT_LOAD` segments.
# Relevant discussions:
# - https://github.com/bitcoin/bitcoin/pull/28724#issuecomment-2589347934
# - https://mail-index.netbsd.org/tech-userlevel/2023/01/05/msg013666.html
if(CMAKE_SYSTEM_NAME STREQUAL "NetBSD" AND CMAKE_SYSTEM_VERSION VERSION_LESS 11.0)
try_append_linker_flag("-Wl,-z,noseparate-code" TARGET core_interface)
else()
try_append_linker_flag("-Wl,-z,separate-code" TARGET core_interface)
endif()
if(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
try_append_linker_flag("-Wl,-fixup_chains" TARGET core_interface)
try_append_cxx_flags("-Wstack-protector" TARGET hardening_interface SKIP_LINK)
try_append_cxx_flags("-fstack-protector-all" TARGET hardening_interface)
try_append_cxx_flags("-fcf-protection=full" TARGET hardening_interface)
if(MINGW)
# stack-clash-protection is a no-op for Windows.
# See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90458 for more details.
else()
try_append_cxx_flags("-fstack-clash-protection" TARGET hardening_interface)
endif()
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "aarch64" OR CMAKE_SYSTEM_PROCESSOR STREQUAL "arm64")
if(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
try_append_cxx_flags("-mbranch-protection=bti" TARGET hardening_interface SKIP_LINK)
else()
try_append_cxx_flags("-mbranch-protection=standard" TARGET hardening_interface SKIP_LINK)
endif()
endif()
try_append_linker_flag("-Wl,--enable-reloc-section" TARGET hardening_interface)
try_append_linker_flag("-Wl,--dynamicbase" TARGET hardening_interface)
try_append_linker_flag("-Wl,--nxcompat" TARGET hardening_interface)
try_append_linker_flag("-Wl,--high-entropy-va" TARGET hardening_interface)
try_append_linker_flag("-Wl,-z,relro" TARGET hardening_interface)
try_append_linker_flag("-Wl,-z,now" TARGET hardening_interface)
try_append_linker_flag("-Wl,-z,separate-code" TARGET hardening_interface)
if(CMAKE_SYSTEM_NAME STREQUAL "Darwin")
try_append_linker_flag("-Wl,-fixup_chains" TARGET hardening_interface)
endif()
endif()
endif()
@@ -591,9 +569,11 @@ set(Python3_FIND_FRAMEWORK LAST CACHE STRING "")
set(Python3_FIND_UNVERSIONED_NAMES FIRST CACHE STRING "")
mark_as_advanced(Python3_FIND_FRAMEWORK Python3_FIND_UNVERSIONED_NAMES)
find_package(Python3 3.10 COMPONENTS Interpreter)
if(NOT TARGET Python3::Interpreter)
if(Python3_EXECUTABLE)
set(PYTHON_COMMAND ${Python3_EXECUTABLE})
else()
list(APPEND configure_warnings
"Minimum required Python not found."
"Minimum required Python not found. Utils and rpcauth tests are disabled."
)
endif()
@@ -637,6 +617,8 @@ add_subdirectory(doc)
add_subdirectory(src)
include(cmake/tests.cmake)
include(Maintenance)
setup_split_debug_script()
add_maintenance_targets()
@@ -647,16 +629,15 @@ message("\n")
message("Configure summary")
message("=================")
message("Executables:")
message(" bitcoin ............................. ${BUILD_BITCOIN_BIN}")
message(" bitcoind ............................ ${BUILD_DAEMON}")
if(BUILD_DAEMON AND ENABLE_IPC)
if(BUILD_DAEMON AND WITH_MULTIPROCESS)
set(bitcoin_daemon_status ON)
else()
set(bitcoin_daemon_status OFF)
endif()
message(" bitcoin-node (multiprocess) ......... ${bitcoin_daemon_status}")
message(" bitcoin-qt (GUI) .................... ${BUILD_GUI}")
if(BUILD_GUI AND ENABLE_IPC)
if(BUILD_GUI AND WITH_MULTIPROCESS)
set(bitcoin_gui_status ON)
else()
set(bitcoin_gui_status OFF)
@@ -670,21 +651,15 @@ message(" bitcoin-chainstate (experimental) ... ${BUILD_UTIL_CHAINSTATE}")
message(" libbitcoinkernel (experimental) ..... ${BUILD_KERNEL_LIB}")
message("Optional features:")
message(" wallet support ...................... ${ENABLE_WALLET}")
if(ENABLE_WALLET)
message(" - descriptor wallets (SQLite) ...... ${WITH_SQLITE}")
message(" - legacy wallets (Berkeley DB) ..... ${WITH_BDB}")
endif()
message(" external signer ..................... ${ENABLE_EXTERNAL_SIGNER}")
message(" ZeroMQ .............................. ${WITH_ZMQ}")
if(ENABLE_IPC)
if (WITH_EXTERNAL_LIBMULTIPROCESS)
set(ipc_status "ON (with external libmultiprocess)")
else()
set(ipc_status ON)
endif()
else()
set(ipc_status OFF)
endif()
message(" IPC ................................. ${ipc_status}")
message(" USDT tracing ........................ ${WITH_USDT}")
message(" QR code (GUI) ....................... ${WITH_QRENCODE}")
message(" DBus (GUI) .......................... ${WITH_DBUS}")
message(" DBus (GUI, Linux only) .............. ${WITH_DBUS}")
message("Tests:")
message(" test_bitcoin ........................ ${BUILD_TESTS}")
message(" test_bitcoin-qt ..................... ${BUILD_GUI_TESTS}")
@@ -700,6 +675,7 @@ message("Cross compiling ....................... ${cross_status}")
message("C++ compiler .......................... ${CMAKE_CXX_COMPILER_ID} ${CMAKE_CXX_COMPILER_VERSION}, ${CMAKE_CXX_COMPILER}")
include(FlagsSummary)
flags_summary()
message("Attempt to harden executables ......... ${ENABLE_HARDENING}")
message("Treat compiler warnings as errors ..... ${WERROR}")
message("Use ccache for compiling .............. ${WITH_CCACHE}")
message("\n")


@@ -1,5 +1,6 @@
{
"version": 3,
"cmakeMinimumRequired": {"major": 3, "minor": 21, "patch": 0},
"configurePresets": [
{
"name": "vs2022",
@@ -61,7 +62,6 @@
"name": "dev-mode",
"displayName": "Developer mode, with all features/dependencies enabled",
"binaryDir": "${sourceDir}/build_dev_mode",
"errors": {"dev": true},
"cacheVariables": {
"BUILD_BENCH": "ON",
"BUILD_CLI": "ON",
@@ -77,9 +77,13 @@
"BUILD_UTIL_CHAINSTATE": "ON",
"BUILD_WALLET_TOOL": "ON",
"ENABLE_EXTERNAL_SIGNER": "ON",
"ENABLE_HARDENING": "ON",
"ENABLE_WALLET": "ON",
"ENABLE_IPC": "ON",
"WARN_INCOMPATIBLE_BDB": "OFF",
"WITH_BDB": "ON",
"WITH_MULTIPROCESS": "ON",
"WITH_QRENCODE": "ON",
"WITH_SQLITE": "ON",
"WITH_USDT": "ON",
"WITH_ZMQ": "ON"
}


@@ -80,7 +80,7 @@ facilitates social contribution, easy testing and peer review.
To contribute a patch, the workflow is as follows:
1. Fork repository ([only for the first time](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo))
1. Fork repository ([only for the first time](https://docs.github.com/en/get-started/quickstart/fork-a-repo))
1. Create topic branch
1. Commit patches
@@ -115,14 +115,13 @@ fixes or code moves with actual code changes.
Make sure each individual commit is hygienic: that it builds successfully on its
own without warnings, errors, regressions, or test failures.
This means tests must be updated in the same commit that changes the behavior.
Commit messages should be verbose by default consisting of a short subject line
(50 chars max), a blank line and detailed explanatory text as separate
paragraph(s), unless the title alone is self-explanatory (like "Correct typo
in init.cpp") in which case a single title line is sufficient. Commit messages should be
helpful to people reading your code in the future, so explain the reasoning for
your decisions. Further explanation [here](https://cbea.ms/git-commit/).
your decisions. Further explanation [here](https://chris.beams.io/posts/git-commit/).
If a particular commit references another issue, please add the reference. For
example: `refs #1234` or `fixes #4321`. Using the `fixes` or `closes` keywords
@@ -183,7 +182,7 @@ for more information on helping with translations.
### Work in Progress Changes and Requests for Comments
If a pull request is not to be considered for merging (yet), please
prefix the title with [WIP] or use [Tasks Lists](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#task-lists)
prefix the title with [WIP] or use [Tasks Lists](https://docs.github.com/en/github/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#task-lists)
in the body of the pull request to indicate tasks are pending.
### Address Feedback
@@ -402,7 +401,7 @@ about:
- It may be because your code is too complex for all but a few people, and those people
may not have realized your pull request even exists. A great way to find people who
are qualified and care about the code you are touching is the
[Git Blame feature](https://docs.github.com/en/repositories/working-with-files/using-files/viewing-and-understanding-files). Simply
[Git Blame feature](https://docs.github.com/en/github/managing-files-in-a-repository/managing-files-on-github/tracking-changes-in-a-file). Simply
look up who last modified the code you are changing and see if you can find
them and give them a nudge. Don't be incessant about the nudging, though.
- Finally, if all else fails, ask on IRC or elsewhere for someone to give your pull request

View File

@@ -1,7 +1,7 @@
The MIT License (MIT)
Copyright (c) 2009-2026 The Bitcoin Core developers
Copyright (c) 2009-2026 Bitcoin Developers
Copyright (c) 2009-2025 The Bitcoin Core developers
Copyright (c) 2009-2025 Bitcoin Developers
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1 +1 @@
See [doc/build-\*.md](/doc)
See [doc/build-\*.md](/doc)

View File

@@ -19,7 +19,7 @@ License
-------
Bitcoin Core is released under the terms of the MIT license. See [COPYING](COPYING) for more
information or see https://opensource.org/license/MIT.
information or see https://opensource.org/licenses/MIT.
Development Process
-------------------
@@ -56,8 +56,8 @@ in Python.
These tests can be run (if the [test dependencies](/test) are installed) with: `build/test/functional/test_runner.py`
(assuming `build` is your build directory).
The CI (Continuous Integration) systems make sure that every pull request is tested on Windows, Linux, and macOS.
The CI must pass on all commits before merge to avoid unrelated CI failures on new pull requests.
The CI (Continuous Integration) systems make sure that every pull request is built for Windows, Linux, and macOS,
and that unit/sanity tests are run automatically.
### Manual Quality Assurance (QA) Testing
@@ -70,7 +70,7 @@ Translations
------------
Changes to translations as well as new translations can be submitted to
[Bitcoin Core's Transifex page](https://explore.transifex.com/bitcoin/bitcoin/).
[Bitcoin Core's Transifex page](https://www.transifex.com/bitcoin/bitcoin/).
Translations are periodically pulled from Transifex and merged into the git repository. See the
[translation process](doc/translation_process.md) for details on how this works.

View File

@@ -1,8 +1,8 @@
# CI Scripts
## CI Scripts
This directory contains scripts for each build step in each build stage.
## Running a Stage Locally
### Running a Stage Locally
Be aware that the tests will be built and run in-place, so please run at your own risk.
If the repository is not a fresh git clone, you might have to clean files from previous builds or test runs first.
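One way to do that (a destructive operation using plain git, which removes all untracked and ignored files, including previous build output) is:

```
git clean -xdff
```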
@@ -27,7 +27,7 @@ with a specific configuration,
env -i HOME="$HOME" PATH="$PATH" USER="$USER" bash -c 'FILE_ENV="./ci/test/00_setup_env_arm.sh" ./ci/test_run_all.sh'
```
## Configurations
### Configurations
The test files (`FILE_ENV`) are constructed to test a wide range of
configurations, rather than a single pass/fail. This helps to catch build
@@ -49,32 +49,8 @@ env -i HOME="$HOME" PATH="$PATH" USER="$USER" bash -c 'MAKEJOBS="-j1" FILE_ENV="
The files starting with `0n` (`n` greater than 0) are the scripts that are run
in order.
## Cache
### Cache
In order to avoid rebuilding all dependencies for each build, the binaries are
cached and reused when possible. Changes in the dependency-generator will
trigger cache-invalidation and rebuilds as necessary.
## Configuring a repository for CI
### Primary repository
To configure the primary repository, follow these steps:
1. Register with [Cirrus Runners](https://cirrus-runners.app/) and purchase runners.
2. Install the Cirrus Runners GitHub app against the GitHub organization.
3. Enable organisation-level runners to be used in public repositories:
1. `Org settings -> Actions -> Runner Groups -> Default -> Allow public repos`
4. Permit the following actions to run:
1. cirruslabs/cache/restore@\*
1. cirruslabs/cache/save@\*
1. docker/setup-buildx-action@\*
1. actions/github-script@\*
### Forked repositories
When used in a fork, the CI will run on GitHub's free hosted runners by default.
In this case, due to GitHub's 10GB-per-repo cache size limitation, caches will frequently be evicted and missed, but the workflows will still run (slowly).
It is also possible to use your own Cirrus Runners in your own fork with an appropriate patch to the `REPO_USE_CIRRUS_RUNNERS` variable in ../.github/workflows/ci.yml.
Note that Cirrus Runners only work at the organisation level; therefore, to use your own Cirrus Runners, *the fork must be within your own organisation*.

View File

@@ -6,19 +6,16 @@
export LC_ALL=C
set -o errexit -o pipefail -o xtrace
export CI_RETRY_EXE="/ci_retry --"
pushd "/"
${CI_RETRY_EXE} apt-get update
# Lint dependencies:
# - cargo (used to run the lint tests)
# - curl/xz-utils (to install shellcheck)
# - git (used in many lint scripts)
# - gpg (used by verify-commits)
${CI_RETRY_EXE} apt-get install -y cargo curl xz-utils git gpg
${CI_RETRY_EXE} apt-get install -y curl xz-utils git gpg
PYTHON_PATH="/python_build"
if [ ! -d "${PYTHON_PATH}/bin" ]; then
@@ -38,20 +35,31 @@ export PATH="${PYTHON_PATH}/bin:${PATH}"
command -v python3
python3 --version
export LINT_RUNNER_PATH="/lint_test_runner"
if [ ! -d "${LINT_RUNNER_PATH}" ]; then
${CI_RETRY_EXE} apt-get install -y cargo
(
cd "/test/lint/test_runner" || exit 1
cargo build
mkdir -p "${LINT_RUNNER_PATH}"
mv target/debug/test_runner "${LINT_RUNNER_PATH}"
)
fi
${CI_RETRY_EXE} pip3 install \
codespell==2.4.1 \
lief==0.16.6 \
codespell==2.2.6 \
lief==0.13.2 \
mypy==1.4.1 \
pyzmq==25.1.0 \
ruff==0.5.5 \
vulture==2.6
SHELLCHECK_VERSION=v0.11.0
SHELLCHECK_VERSION=v0.8.0
curl -sL "https://github.com/koalaman/shellcheck/releases/download/${SHELLCHECK_VERSION}/shellcheck-${SHELLCHECK_VERSION}.linux.x86_64.tar.xz" | \
tar --xz -xf - --directory /tmp/
mv "/tmp/shellcheck-${SHELLCHECK_VERSION}/shellcheck" /usr/bin/
MLC_VERSION=v1
MLC_VERSION=v0.19.0
MLC_BIN=mlc-x86_64-linux
curl -sL "https://github.com/becheran/mlc/releases/download/${MLC_VERSION}/${MLC_BIN}" -o "/usr/bin/mlc"
chmod +x /usr/bin/mlc

View File

@@ -16,7 +16,7 @@ if [ -n "$CIRRUS_PR" ]; then
fi
fi
RUST_BACKTRACE=1 cargo run --manifest-path "./test/lint/test_runner/Cargo.toml"
RUST_BACKTRACE=1 "${LINT_RUNNER_PATH}/test_runner"
if [ "$CIRRUS_REPO_FULL_NAME" = "bitcoin/bitcoin" ] && [ "$CIRRUS_PR" = "" ] ; then
# Sanity check only the last few commits to get notified of missing sigs,

View File

@@ -11,6 +11,7 @@ export LC_ALL=C
git config --global --add safe.directory /bitcoin
export PATH="/python_build/bin:${PATH}"
export LINT_RUNNER_PATH="/lint_test_runner"
if [ -z "$1" ]; then
bash -ic "./ci/lint/06_script.sh"

View File

@@ -4,7 +4,7 @@
# See test/lint/README.md for usage.
FROM mirror.gcr.io/ubuntu:24.04
FROM mirror.gcr.io/debian:bookworm
ENV DEBIAN_FRONTEND=noninteractive
ENV LC_ALL=C.UTF-8
@@ -12,7 +12,8 @@ ENV LC_ALL=C.UTF-8
COPY ./ci/retry/retry /ci_retry
COPY ./.python-version /.python-version
COPY ./ci/lint/container-entrypoint.sh /entrypoint.sh
COPY ./ci/lint/01_install.sh /install.sh
COPY ./ci/lint/04_install.sh /install.sh
COPY ./test/lint/test_runner /test/lint/test_runner
RUN /install.sh && \
echo 'alias lint="./ci/lint/06_script.sh"' >> ~/.bashrc && \

ci/lint_run_all.sh Executable file
View File

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
#
# Copyright (c) 2019-present The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
export LC_ALL=C.UTF-8
# Only used in .cirrus.yml. Refer to test/lint/README.md on how to run locally.
cp "./ci/retry/retry" "/ci_retry"
cp "./.python-version" "/.python-version"
mkdir --parents "/test/lint"
cp --recursive "./test/lint/test_runner" "/test/lint/"
set -o errexit; source ./ci/lint/04_install.sh
set -o errexit
./ci/lint/06_script.sh

View File

@@ -35,7 +35,7 @@ fi
echo "Fallback to default values in env (if not yet set)"
# The number of parallel jobs to pass down to make and test_runner.py
export MAKEJOBS=${MAKEJOBS:--j$(if command -v nproc > /dev/null 2>&1; then nproc; else sysctl -n hw.logicalcpu; fi)}
export MAKEJOBS=${MAKEJOBS:--j4}
# Whether to prefer BusyBox over GNU utilities
export USE_BUSY_BOX=${USE_BUSY_BOX:-false}
@@ -64,7 +64,7 @@ export BASE_OUTDIR=${BASE_OUTDIR:-$BASE_SCRATCH_DIR/out}
# The folder for previous release binaries.
# This folder exists only on the ci guest, and on the ci host as a volume.
export PREVIOUS_RELEASES_DIR=${PREVIOUS_RELEASES_DIR:-$BASE_ROOT_DIR/prev_releases}
export CI_BASE_PACKAGES=${CI_BASE_PACKAGES:-build-essential pkgconf curl ca-certificates ccache python3 rsync git procps bison e2fsprogs cmake ninja-build}
export CI_BASE_PACKAGES=${CI_BASE_PACKAGES:-build-essential pkgconf curl ca-certificates ccache python3 rsync git procps bison e2fsprogs cmake}
export GOAL=${GOAL:-install}
export DIR_QA_ASSETS=${DIR_QA_ASSETS:-${BASE_SCRATCH_DIR}/qa-assets}
export CI_RETRY_EXE=${CI_RETRY_EXE:-"retry --"}

View File

@@ -10,13 +10,12 @@ export HOST=arm-linux-gnueabihf
export DPKG_ADD_ARCH="armhf"
export PACKAGES="python3-zmq g++-arm-linux-gnueabihf busybox libc6:armhf libstdc++6:armhf libfontconfig1:armhf libxcb1:armhf"
export CONTAINER_NAME=ci_arm_linux
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04" # Check that https://packages.ubuntu.com/noble/g++-arm-linux-gnueabihf (version 13.x, similar to guix) can cross-compile
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:noble" # Check that https://packages.ubuntu.com/noble/g++-arm-linux-gnueabihf (version 13.3, similar to guix) can cross-compile
export CI_IMAGE_PLATFORM="linux/arm64"
export USE_BUSY_BOX=true
export RUN_UNIT_TESTS=true
export RUN_FUNCTIONAL_TESTS=false
export GOAL="install"
export CI_LIMIT_STACK_SIZE=1
# -Wno-psabi is to disable ABI warnings: "note: parameter passing for argument of type ... changed in GCC 7.1"
# This could be removed once the ABI change warning does not show up by default
export BITCOIN_CONFIG="-DREDUCE_EXPORTS=ON -DCMAKE_CXX_FLAGS='-Wno-psabi -Wno-error=maybe-uninitialized'"

View File

@@ -7,17 +7,17 @@
export LC_ALL=C.UTF-8
export HOST=i686-pc-linux-gnu
export CONTAINER_NAME=ci_i686_no_multiprocess
export CONTAINER_NAME=ci_i686_multiprocess
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04"
export CI_IMAGE_PLATFORM="linux/amd64"
export PACKAGES="llvm clang g++-multilib"
export DEP_OPTS="DEBUG=1 NO_IPC=1"
export DEP_OPTS="DEBUG=1 MULTIPROCESS=1"
export GOAL="install"
export CI_LIMIT_STACK_SIZE=1
export TEST_RUNNER_EXTRA="--v2transport --usecli"
export TEST_RUNNER_EXTRA="--v2transport"
export BITCOIN_CONFIG="\
-DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_C_COMPILER='clang;-m32' \
-DCMAKE_CXX_COMPILER='clang++;-m32' \
-DAPPEND_CPPFLAGS='-DBOOST_MULTI_INDEX_ENABLE_SAFE_MODE' \
"
export BITCOIND=bitcoin-node # Used in functional tests

View File

@@ -8,12 +8,10 @@ export LC_ALL=C.UTF-8
# Homebrew's python@3.12 is marked as externally managed (PEP 668).
# Therefore, `--break-system-packages` is needed.
export CONTAINER_NAME="ci_mac_native" # macos does not use a container, but the env var is needed for logging
export PIP_PACKAGES="--break-system-packages zmq"
export GOAL="install deploy"
export GOAL="install"
export CMAKE_GENERATOR="Ninja"
export BITCOIN_CONFIG="-DBUILD_GUI=ON -DWITH_ZMQ=ON -DREDUCE_EXPORTS=ON -DCMAKE_EXE_LINKER_FLAGS='-Wl,-stack_size -Wl,0x80000'"
export BITCOIN_CONFIG="-DBUILD_GUI=ON -DWITH_ZMQ=ON -DREDUCE_EXPORTS=ON"
export CI_OS_NAME="macos"
export NO_DEPENDS=1
export OSX_SDK=""
export BITCOIN_CMD="bitcoin -m" # Used in functional tests

View File

@@ -6,9 +6,8 @@
export LC_ALL=C.UTF-8
export CONTAINER_NAME="ci_mac_native_fuzz" # macos does not use a container, but the env var is needed for logging
export CMAKE_GENERATOR="Ninja"
export BITCOIN_CONFIG="-DBUILD_FOR_FUZZING=ON -DCMAKE_EXE_LINKER_FLAGS='-Wl,-stack_size -Wl,0x80000'"
export BITCOIN_CONFIG="-DBUILD_FOR_FUZZING=ON"
export CI_OS_NAME="macos"
export NO_DEPENDS=1
export OSX_SDK=""

View File

@@ -19,19 +19,17 @@ else
fi
export CONTAINER_NAME=ci_native_asan
export APT_LLVM_V="21"
export PACKAGES="systemtap-sdt-dev clang-${APT_LLVM_V} llvm-${APT_LLVM_V} libclang-rt-${APT_LLVM_V}-dev python3-zmq qt6-base-dev qt6-tools-dev qt6-l10n-tools libevent-dev libboost-dev libzmq3-dev libqrencode-dev libsqlite3-dev ${BPFCC_PACKAGE} libcapnp-dev capnproto python3-pip"
export PIP_PACKAGES="--break-system-packages pycapnp"
export APT_LLVM_V="20"
export PACKAGES="systemtap-sdt-dev clang-${APT_LLVM_V} llvm-${APT_LLVM_V} libclang-rt-${APT_LLVM_V}-dev python3-zmq qtbase5-dev qttools5-dev qttools5-dev-tools libevent-dev libboost-dev libdb5.3++-dev libzmq3-dev libqrencode-dev libsqlite3-dev ${BPFCC_PACKAGE}"
export NO_DEPENDS=1
export GOAL="install"
export CI_LIMIT_STACK_SIZE=1
export BITCOIN_CONFIG="\
-DWITH_USDT=ON -DWITH_ZMQ=ON -DBUILD_GUI=ON \
-DWITH_USDT=ON -DWITH_ZMQ=ON -DWITH_BDB=ON -DWARN_INCOMPATIBLE_BDB=OFF -DBUILD_GUI=ON \
-DSANITIZERS=address,float-divide-by-zero,integer,undefined \
-DCMAKE_C_COMPILER=clang \
-DCMAKE_CXX_COMPILER=clang++ \
-DCMAKE_C_COMPILER=clang-${APT_LLVM_V} \
-DCMAKE_CXX_COMPILER=clang++-${APT_LLVM_V} \
-DCMAKE_C_FLAGS='-ftrivial-auto-var-init=pattern' \
-DCMAKE_CXX_FLAGS='-ftrivial-auto-var-init=pattern' \
-DCMAKE_CXX_FLAGS='-ftrivial-auto-var-init=pattern -Wno-error=deprecated-declarations' \
-DAPPEND_CXXFLAGS='-std=c++23' \
-DAPPEND_CPPFLAGS='-DARENA_DEBUG -DDEBUG_LOCKORDER' \
"

View File

@@ -8,14 +8,8 @@ export LC_ALL=C.UTF-8
export CONTAINER_NAME=ci_native_centos
export CI_IMAGE_NAME_TAG="quay.io/centos/centos:stream10"
export CI_BASE_PACKAGES="gcc-c++ glibc-devel libstdc++-devel ccache make ninja-build git python3 python3-pip which patch xz procps-ng rsync coreutils bison e2fsprogs cmake dash"
export PIP_PACKAGES="pyzmq pycapnp"
export DEP_OPTS="DEBUG=1"
export CI_BASE_PACKAGES="gcc-c++ glibc-devel libstdc++-devel ccache make git python3 python3-pip which patch xz procps-ng ksh rsync coreutils bison e2fsprogs cmake"
export PIP_PACKAGES="pyzmq"
export DEP_OPTS="DEBUG=1" # Temporarily enable a DEBUG=1 build to check for GCC-bug-117966 regressions. This can be removed once the minimum GCC version is bumped to 12 in the previous releases task, see https://github.com/bitcoin/bitcoin/issues/31436#issuecomment-2530717875
export GOAL="install"
export BITCOIN_CONFIG="\
-DWITH_ZMQ=ON \
-DBUILD_GUI=ON \
-DREDUCE_EXPORTS=ON \
-DCMAKE_BUILD_TYPE=Debug \
"
export BITCOIN_CMD="bitcoin -m" # Used in functional tests
export BITCOIN_CONFIG="-DWITH_ZMQ=ON -DBUILD_GUI=ON -DREDUCE_EXPORTS=ON -DCMAKE_BUILD_TYPE=Debug"

View File

@@ -8,8 +8,8 @@ export LC_ALL=C.UTF-8
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04"
export CONTAINER_NAME=ci_native_fuzz
export APT_LLVM_V="21"
export PACKAGES="clang-${APT_LLVM_V} llvm-${APT_LLVM_V} libclang-rt-${APT_LLVM_V}-dev libevent-dev libboost-dev libsqlite3-dev libcapnp-dev capnproto"
export APT_LLVM_V="20"
export PACKAGES="clang-${APT_LLVM_V} llvm-${APT_LLVM_V} libclang-rt-${APT_LLVM_V}-dev libevent-dev libboost-dev libsqlite3-dev"
export NO_DEPENDS=1
export RUN_UNIT_TESTS=false
export RUN_FUNCTIONAL_TESTS=false
@@ -19,8 +19,9 @@ export CI_CONTAINER_CAP="--cap-add SYS_PTRACE" # If run with (ASan + LSan), the
export BITCOIN_CONFIG="\
-DBUILD_FOR_FUZZING=ON \
-DSANITIZERS=fuzzer,address,undefined,float-divide-by-zero,integer \
-DCMAKE_C_COMPILER=clang \
-DCMAKE_CXX_COMPILER=clang++ \
-DCMAKE_C_COMPILER=clang-${APT_LLVM_V} \
-DCMAKE_CXX_COMPILER=clang++-${APT_LLVM_V} \
-DCMAKE_C_FLAGS='-ftrivial-auto-var-init=pattern' \
-DCMAKE_CXX_FLAGS='-ftrivial-auto-var-init=pattern' \
"
export LLVM_SYMBOLIZER_PATH="/usr/bin/llvm-symbolizer-${APT_LLVM_V}"

View File

@@ -7,16 +7,15 @@
export LC_ALL=C.UTF-8
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04"
export APT_LLVM_V="21"
LIBCXX_DIR="/cxx_build/"
LIBCXX_DIR="/msan/cxx_build/"
export MSAN_FLAGS="-fsanitize=memory -fsanitize-memory-track-origins=2 -fno-omit-frame-pointer -g -O1 -fno-optimize-sibling-calls"
# -lstdc++ to resolve link issues due to upstream packaging
LIBCXX_FLAGS="-nostdinc++ -nostdlib++ -isystem ${LIBCXX_DIR}include/c++/v1 -L${LIBCXX_DIR}lib -Wl,-rpath,${LIBCXX_DIR}lib -lc++ -lc++abi -lpthread -Wno-unused-command-line-argument -lstdc++"
LIBCXX_FLAGS="-nostdinc++ -nostdlib++ -isystem ${LIBCXX_DIR}include/c++/v1 -L${LIBCXX_DIR}lib -Wl,-rpath,${LIBCXX_DIR}lib -lc++ -lc++abi -lpthread -Wno-unused-command-line-argument"
export MSAN_AND_LIBCXX_FLAGS="${MSAN_FLAGS} ${LIBCXX_FLAGS}"
export CONTAINER_NAME="ci_native_fuzz_msan"
export PACKAGES="clang-${APT_LLVM_V} llvm-${APT_LLVM_V} llvm-${APT_LLVM_V}-dev libclang-${APT_LLVM_V}-dev libclang-rt-${APT_LLVM_V}-dev"
export DEP_OPTS="DEBUG=1 NO_QT=1 CC=clang CXX=clang++ CFLAGS='${MSAN_FLAGS}' CXXFLAGS='${MSAN_AND_LIBCXX_FLAGS}'"
export PACKAGES="ninja-build"
# BDB generates false-positives and will be removed in future
export DEP_OPTS="DEBUG=1 NO_BDB=1 NO_QT=1 CC=clang CXX=clang++ CFLAGS='${MSAN_FLAGS}' CXXFLAGS='${MSAN_AND_LIBCXX_FLAGS}'"
export GOAL="all"
# Setting CMAKE_{C,CXX}_FLAGS_DEBUG flags to an empty string ensures that the flags set in MSAN_FLAGS remain unaltered.
# _FORTIFY_SOURCE is not compatible with MSAN.
@@ -28,7 +27,7 @@ export BITCOIN_CONFIG="\
-DSANITIZERS=fuzzer,memory \
-DAPPEND_CPPFLAGS='-DBOOST_MULTI_INDEX_ENABLE_SAFE_MODE -U_FORTIFY_SOURCE' \
"
export USE_INSTRUMENTED_LIBCPP="MemoryWithOrigins"
export USE_MEMORY_SANITIZER="true"
export RUN_UNIT_TESTS="false"
export RUN_FUNCTIONAL_TESTS="false"
export RUN_FUZZ_TESTS=true

View File

@@ -8,7 +8,7 @@ export LC_ALL=C.UTF-8
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04"
export CONTAINER_NAME=ci_native_fuzz_valgrind
export PACKAGES="libevent-dev libboost-dev libsqlite3-dev valgrind libcapnp-dev capnproto"
export PACKAGES="clang-16 llvm-16 libclang-rt-16-dev libevent-dev libboost-dev libsqlite3-dev valgrind"
export NO_DEPENDS=1
export RUN_UNIT_TESTS=false
export RUN_FUNCTIONAL_TESTS=false
@@ -17,5 +17,8 @@ export FUZZ_TESTS_CONFIG="--valgrind"
export GOAL="all"
export BITCOIN_CONFIG="\
-DBUILD_FOR_FUZZING=ON \
-DCMAKE_CXX_FLAGS='-Wno-error=array-bounds' \
-DSANITIZERS=fuzzer \
-DCMAKE_C_COMPILER=clang-16 \
-DCMAKE_CXX_COMPILER=clang++-16 \
"
export LLVM_SYMBOLIZER_PATH="/usr/bin/llvm-symbolizer-16"

View File

@@ -7,18 +7,16 @@
export LC_ALL=C.UTF-8
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04"
export APT_LLVM_V="21"
LIBCXX_DIR="/cxx_build/"
LIBCXX_DIR="/msan/cxx_build/"
export MSAN_FLAGS="-fsanitize=memory -fsanitize-memory-track-origins=2 -fno-omit-frame-pointer -g -O1 -fno-optimize-sibling-calls"
LIBCXX_FLAGS="-nostdinc++ -nostdlib++ -isystem ${LIBCXX_DIR}include/c++/v1 -L${LIBCXX_DIR}lib -Wl,-rpath,${LIBCXX_DIR}lib -lc++ -lc++abi -lpthread -Wno-unused-command-line-argument"
export MSAN_AND_LIBCXX_FLAGS="${MSAN_FLAGS} ${LIBCXX_FLAGS}"
export CONTAINER_NAME="ci_native_msan"
export PACKAGES="clang-${APT_LLVM_V} llvm-${APT_LLVM_V} llvm-${APT_LLVM_V}-dev libclang-${APT_LLVM_V}-dev libclang-rt-${APT_LLVM_V}-dev python3-pip"
export PIP_PACKAGES="--break-system-packages pycapnp"
export DEP_OPTS="DEBUG=1 NO_QT=1 CC=clang CXX=clang++ CFLAGS='${MSAN_FLAGS}' CXXFLAGS='${MSAN_AND_LIBCXX_FLAGS}'"
export PACKAGES="ninja-build"
# BDB generates false-positives and will be removed in future
export DEP_OPTS="DEBUG=1 NO_BDB=1 NO_QT=1 CC=clang CXX=clang++ CFLAGS='${MSAN_FLAGS}' CXXFLAGS='${MSAN_AND_LIBCXX_FLAGS}'"
export GOAL="install"
export CI_LIMIT_STACK_SIZE=1
# Setting CMAKE_{C,CXX}_FLAGS_DEBUG flags to an empty string ensures that the flags set in MSAN_FLAGS remain unaltered.
# _FORTIFY_SOURCE is not compatible with MSAN.
export BITCOIN_CONFIG="\
@@ -28,4 +26,4 @@ export BITCOIN_CONFIG="\
-DSANITIZERS=memory \
-DAPPEND_CPPFLAGS='-U_FORTIFY_SOURCE' \
"
export USE_INSTRUMENTED_LIBCPP="MemoryWithOrigins"
export USE_MEMORY_SANITIZER="true"

View File

@@ -9,8 +9,7 @@ export LC_ALL=C.UTF-8
export CONTAINER_NAME=ci_native_nowallet_libbitcoinkernel
export CI_IMAGE_NAME_TAG="mirror.gcr.io/debian:bookworm"
# Use minimum supported python3.10 (or best-effort 3.11) and clang-16, see doc/dependencies.md
export PACKAGES="python3-zmq python3-pip clang-16 llvm-16 libc++abi-16-dev libc++-16-dev"
export PIP_PACKAGES="--break-system-packages pycapnp"
export PACKAGES="python3-zmq clang-16 llvm-16 libc++abi-16-dev libc++-16-dev"
export DEP_OPTS="NO_WALLET=1 CC=clang-16 CXX='clang++-16 -stdlib=libc++'"
export GOAL="install"
export BITCOIN_CONFIG="-DREDUCE_EXPORTS=ON -DBUILD_UTIL_CHAINSTATE=ON -DBUILD_KERNEL_LIB=ON -DBUILD_SHARED_LIBS=ON"

View File

@@ -10,17 +10,18 @@ export CONTAINER_NAME=ci_native_previous_releases
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:22.04"
# Use minimum supported python3.10 and gcc-11, see doc/dependencies.md
export PACKAGES="gcc-11 g++-11 python3-zmq"
export DEP_OPTS="CC=gcc-11 CXX=g++-11"
export DEP_OPTS="DEBUG=1 CC=gcc-11 CXX=g++-11"
export TEST_RUNNER_EXTRA="--previous-releases --coverage --extended --exclude feature_dbcrash" # Run extended tests so that coverage does not fail, but exclude the very slow dbcrash
export RUN_UNIT_TESTS_SEQUENTIAL="true"
export RUN_UNIT_TESTS="false"
export GOAL="install"
export CI_LIMIT_STACK_SIZE=1
export DOWNLOAD_PREVIOUS_RELEASES="true"
export BITCOIN_CONFIG="\
-DWITH_ZMQ=ON -DBUILD_GUI=ON -DREDUCE_EXPORTS=ON \
-DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_C_FLAGS='-funsigned-char' \
-DCMAKE_C_FLAGS_DEBUG='-g2 -O2' \
-DCMAKE_C_FLAGS_DEBUG='-g0 -O2' \
-DCMAKE_CXX_FLAGS='-funsigned-char' \
-DCMAKE_CXX_FLAGS_DEBUG='-g2 -O2' \
-DCMAKE_CXX_FLAGS_DEBUG='-g0 -O2' \
-DAPPEND_CPPFLAGS='-DBOOST_MULTI_INDEX_ENABLE_SAFE_MODE' \
"

View File

@@ -8,9 +8,9 @@ export LC_ALL=C.UTF-8
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04"
export CONTAINER_NAME=ci_native_tidy
export TIDY_LLVM_V="20"
export TIDY_LLVM_V="19"
export APT_LLVM_V="${TIDY_LLVM_V}"
export PACKAGES="clang-${TIDY_LLVM_V} libclang-${TIDY_LLVM_V}-dev llvm-${TIDY_LLVM_V}-dev libomp-${TIDY_LLVM_V}-dev clang-tidy-${TIDY_LLVM_V} jq libevent-dev libboost-dev libzmq3-dev systemtap-sdt-dev qt6-base-dev qt6-tools-dev qt6-l10n-tools libqrencode-dev libsqlite3-dev libcapnp-dev capnproto"
export PACKAGES="clang-${TIDY_LLVM_V} libclang-${TIDY_LLVM_V}-dev llvm-${TIDY_LLVM_V}-dev libomp-${TIDY_LLVM_V}-dev clang-tidy-${TIDY_LLVM_V} jq libevent-dev libboost-dev libzmq3-dev systemtap-sdt-dev qtbase5-dev qttools5-dev qttools5-dev-tools libqrencode-dev libsqlite3-dev libdb++-dev"
export NO_DEPENDS=1
export RUN_UNIT_TESTS=false
export RUN_FUNCTIONAL_TESTS=false
@@ -19,7 +19,8 @@ export RUN_CHECK_DEPS=true
export RUN_TIDY=true
export GOAL="install"
export BITCOIN_CONFIG="\
-DWITH_ZMQ=ON -DBUILD_GUI=ON -DBUILD_BENCH=ON -DWITH_USDT=ON \
-DWITH_ZMQ=ON -DBUILD_GUI=ON -DBUILD_BENCH=ON -DWITH_USDT=ON -DWITH_BDB=ON -DWARN_INCOMPATIBLE_BDB=OFF \
-DENABLE_HARDENING=OFF \
-DCMAKE_C_COMPILER=clang-${TIDY_LLVM_V} \
-DCMAKE_CXX_COMPILER=clang++-${TIDY_LLVM_V} \
-DCMAKE_C_FLAGS_RELWITHDEBINFO='-O0 -g0' \

View File

@@ -8,14 +8,9 @@ export LC_ALL=C.UTF-8
export CONTAINER_NAME=ci_native_tsan
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04"
export APT_LLVM_V="21"
LIBCXX_DIR="/cxx_build/"
LIBCXX_FLAGS="-fsanitize=thread -nostdinc++ -nostdlib++ -isystem ${LIBCXX_DIR}include/c++/v1 -L${LIBCXX_DIR}lib -Wl,-rpath,${LIBCXX_DIR}lib -lc++ -lc++abi -lpthread -Wno-unused-command-line-argument"
export PACKAGES="clang-${APT_LLVM_V} llvm-${APT_LLVM_V} llvm-${APT_LLVM_V}-dev libclang-${APT_LLVM_V}-dev libclang-rt-${APT_LLVM_V}-dev python3-zmq python3-pip"
export PIP_PACKAGES="--break-system-packages pycapnp"
export DEP_OPTS="CC=clang CXX=clang++ CXXFLAGS='${LIBCXX_FLAGS}' NO_QT=1"
export APT_LLVM_V="20"
export PACKAGES="clang-${APT_LLVM_V} llvm-${APT_LLVM_V} libclang-rt-${APT_LLVM_V}-dev libc++abi-${APT_LLVM_V}-dev libc++-${APT_LLVM_V}-dev python3-zmq"
export DEP_OPTS="CC=clang-${APT_LLVM_V} CXX='clang++-${APT_LLVM_V} -stdlib=libc++'"
export GOAL="install"
export CI_LIMIT_STACK_SIZE=1
export BITCOIN_CONFIG="-DWITH_ZMQ=ON -DSANITIZERS=thread \
-DAPPEND_CPPFLAGS='-DARENA_DEBUG -DDEBUG_LOCKCONTENTION -D_LIBCPP_REMOVE_TRANSITIVE_INCLUDES'"
export USE_INSTRUMENTED_LIBCPP="Thread"
-DAPPEND_CPPFLAGS='-DARENA_DEBUG -DDEBUG_LOCKORDER -DDEBUG_LOCKCONTENTION -D_LIBCPP_REMOVE_TRANSITIVE_INCLUDES'"

View File

@@ -8,14 +8,14 @@ export LC_ALL=C.UTF-8
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04"
export CONTAINER_NAME=ci_native_valgrind
export PACKAGES="valgrind python3-zmq libevent-dev libboost-dev libzmq3-dev libsqlite3-dev libcapnp-dev capnproto python3-pip"
export PIP_PACKAGES="--break-system-packages pycapnp"
export PACKAGES="valgrind clang-16 llvm-16 libclang-rt-16-dev python3-zmq libevent-dev libboost-dev libdb5.3++-dev libzmq3-dev libsqlite3-dev"
export USE_VALGRIND=1
export NO_DEPENDS=1
# bind tests excluded for now, see https://github.com/bitcoin/bitcoin/issues/17765#issuecomment-602068547
export TEST_RUNNER_EXTRA="--exclude rpc_bind,feature_bind_extra"
export TEST_RUNNER_EXTRA="--exclude feature_init,rpc_bind,feature_bind_extra" # feature_init excluded for now, see https://github.com/bitcoin/bitcoin/issues/30011 ; bind tests excluded for now, see https://github.com/bitcoin/bitcoin/issues/17765#issuecomment-602068547
export GOAL="install"
# TODO enable GUI
export BITCOIN_CONFIG="\
-DWITH_ZMQ=ON -DBUILD_GUI=OFF \
-DWITH_ZMQ=ON -DWITH_BDB=ON -DWARN_INCOMPATIBLE_BDB=OFF -DBUILD_GUI=OFF \
-DCMAKE_C_COMPILER=clang-16 \
-DCMAKE_CXX_COMPILER=clang++-16 \
"

View File

@@ -7,11 +7,15 @@
export LC_ALL=C.UTF-8
export CONTAINER_NAME=ci_win64
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:24.04" # Check that https://packages.ubuntu.com/noble/g++-mingw-w64-x86-64-posix (version 13.x, similar to guix) can cross-compile
export CI_IMAGE_NAME_TAG="mirror.gcr.io/ubuntu:noble" # Check that g++-mingw-w64-x86-64-posix (version 13.2, similar to guix) can cross-compile
export CI_IMAGE_PLATFORM="linux/amd64"
export HOST=x86_64-w64-mingw32
export PACKAGES="g++-mingw-w64-x86-64-posix nsis"
export RUN_UNIT_TESTS=false
export DPKG_ADD_ARCH="i386"
export PACKAGES="nsis g++-mingw-w64-x86-64-posix wine-binfmt wine64 wine32 file"
# Install wine, but do not run unit tests, as they surface frequent
# false-positives.
export RUN_UNIT_TESTS=${RUN_UNIT_TESTS:-false}
export RUN_FUNCTIONAL_TESTS=false
export GOAL="deploy"
export BITCOIN_CONFIG="-DREDUCE_EXPORTS=ON -DBUILD_GUI_TESTS=OFF \
-DCMAKE_CXX_FLAGS='-Wno-error=maybe-uninitialized'"
-DCMAKE_CXX_FLAGS='-Wno-error=maybe-uninitialized -Wno-error=array-bounds'"

View File

@@ -6,11 +6,11 @@
export LC_ALL=C.UTF-8
set -o errexit -o pipefail -o xtrace
set -ex
CFG_DONE="${BASE_ROOT_DIR}/ci.base-install-done" # Use a global setting to remember whether this script ran to avoid running it twice
CFG_DONE="ci.base-install-done" # Use a global git setting to remember whether this script ran to avoid running it twice
if [ "$( cat "${CFG_DONE}" || true )" == "done" ]; then
if [ "$(git config --global ${CFG_DONE})" == "true" ]; then
echo "Skip base install"
exit 0
fi
@@ -34,8 +34,7 @@ fi
if [[ $CI_IMAGE_NAME_TAG == *centos* ]]; then
bash -c "dnf -y install epel-release"
# The ninja-build package is available in the CRB repository.
bash -c "dnf -y --allowerasing --enablerepo crb install $CI_BASE_PACKAGES $PACKAGES"
bash -c "dnf -y --allowerasing install $CI_BASE_PACKAGES $PACKAGES"
elif [ "$CI_OS_NAME" != "macos" ]; then
if [[ -n "${APPEND_APT_SOURCES_LIST}" ]]; then
echo "${APPEND_APT_SOURCES_LIST}" >> /etc/apt/sources.list
@@ -44,24 +43,32 @@ elif [ "$CI_OS_NAME" != "macos" ]; then
${CI_RETRY_EXE} bash -c "apt-get install --no-install-recommends --no-upgrade -y $PACKAGES $CI_BASE_PACKAGES"
fi
if [ -n "${APT_LLVM_V}" ]; then
update-alternatives --install /usr/bin/clang++ clang++ "/usr/bin/clang++-${APT_LLVM_V}" 100
update-alternatives --install /usr/bin/clang clang "/usr/bin/clang-${APT_LLVM_V}" 100
update-alternatives --install /usr/bin/llvm-symbolizer llvm-symbolizer "/usr/bin/llvm-symbolizer-${APT_LLVM_V}" 100
fi
if [ -n "$PIP_PACKAGES" ]; then
# shellcheck disable=SC2086
${CI_RETRY_EXE} pip3 install --user $PIP_PACKAGES
fi
if [[ -n "${USE_INSTRUMENTED_LIBCPP}" ]]; then
${CI_RETRY_EXE} git clone --depth=1 https://github.com/llvm/llvm-project -b "llvmorg-21.1.1" /llvm-project
if [[ ${USE_MEMORY_SANITIZER} == "true" ]]; then
${CI_RETRY_EXE} git clone --depth=1 https://github.com/llvm/llvm-project -b "llvmorg-20.1.0" /msan/llvm-project
cmake -G Ninja -B /cxx_build/ \
cmake -G Ninja -B /msan/clang_build/ \
-DLLVM_ENABLE_PROJECTS="clang" \
-DCMAKE_BUILD_TYPE=Release \
-DLLVM_TARGETS_TO_BUILD=Native \
-DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi;libunwind" \
-S /msan/llvm-project/llvm
ninja -C /msan/clang_build/ "$MAKEJOBS"
ninja -C /msan/clang_build/ install-runtimes
update-alternatives --install /usr/bin/clang++ clang++ /msan/clang_build/bin/clang++ 100
update-alternatives --install /usr/bin/clang clang /msan/clang_build/bin/clang 100
update-alternatives --install /usr/bin/llvm-symbolizer llvm-symbolizer /msan/clang_build/bin/llvm-symbolizer 100
cmake -G Ninja -B /msan/cxx_build/ \
-DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi;libunwind" \
-DCMAKE_BUILD_TYPE=Release \
-DLLVM_USE_SANITIZER="${USE_INSTRUMENTED_LIBCPP}" \
-DLLVM_USE_SANITIZER=MemoryWithOrigins \
-DCMAKE_C_COMPILER=clang \
-DCMAKE_CXX_COMPILER=clang++ \
-DLLVM_TARGETS_TO_BUILD=Native \
@@ -69,13 +76,13 @@ if [[ -n "${USE_INSTRUMENTED_LIBCPP}" ]]; then
-DLIBCXXABI_USE_LLVM_UNWINDER=OFF \
-DLIBCXX_ABI_DEFINES="_LIBCPP_ABI_BOUNDED_ITERATORS;_LIBCPP_ABI_BOUNDED_ITERATORS_IN_STD_ARRAY;_LIBCPP_ABI_BOUNDED_ITERATORS_IN_STRING;_LIBCPP_ABI_BOUNDED_ITERATORS_IN_VECTOR;_LIBCPP_ABI_BOUNDED_UNIQUE_PTR" \
-DLIBCXX_HARDENING_MODE=debug \
-S /llvm-project/runtimes
-S /msan/llvm-project/runtimes
ninja -C /cxx_build/ "$MAKEJOBS"
ninja -C /msan/cxx_build/ "$MAKEJOBS"
# Clear no longer needed source folder
du -sh /llvm-project
rm -rf /llvm-project
du -sh /msan/llvm-project
rm -rf /msan/llvm-project
fi
if [[ "${RUN_TIDY}" == "true" ]]; then
@@ -89,7 +96,7 @@ mkdir -p "${DEPENDS_DIR}/SDKs" "${DEPENDS_DIR}/sdk-sources"
OSX_SDK_BASENAME="Xcode-${XCODE_VERSION}-${XCODE_BUILD_ID}-extracted-SDK-with-libcxx-headers"
if [ -n "$XCODE_VERSION" ] && [ ! -d "${DEPENDS_DIR}/SDKs/${OSX_SDK_BASENAME}" ]; then
OSX_SDK_FILENAME="${OSX_SDK_BASENAME}.tar"
OSX_SDK_FILENAME="${OSX_SDK_BASENAME}.tar.gz"
OSX_SDK_PATH="${DEPENDS_DIR}/sdk-sources/${OSX_SDK_FILENAME}"
if [ ! -f "$OSX_SDK_PATH" ]; then
${CI_RETRY_EXE} curl --location --fail "${SDK_URL}/${OSX_SDK_FILENAME}" -o "$OSX_SDK_PATH"
@@ -97,4 +104,4 @@ if [ -n "$XCODE_VERSION" ] && [ ! -d "${DEPENDS_DIR}/SDKs/${OSX_SDK_BASENAME}" ]
tar -C "${DEPENDS_DIR}/SDKs" -xf "$OSX_SDK_PATH"
fi
echo -n "done" > "${CFG_DONE}"
git config --global ${CFG_DONE} "true"

View File

@@ -1,52 +0,0 @@
#!/usr/bin/env python3
# Copyright (c) The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://opensource.org/license/mit/.
import os
import shlex
import subprocess
import sys
def run(cmd, **kwargs):
print("+ " + shlex.join(cmd), flush=True)
try:
return subprocess.run(cmd, check=True, **kwargs)
except Exception as e:
sys.exit(e)
def main():
print("Export only allowed settings:")
settings = run(
["bash", "-c", "grep export ./ci/test/00_setup_env*.sh"],
stdout=subprocess.PIPE,
text=True,
encoding="utf8",
).stdout.splitlines()
settings = set(l.split("=")[0].split("export ")[1] for l in settings)
# Add "hidden" settings, which are never exported, manually. Otherwise,
# they will not be passed on.
settings.update([
"BASE_BUILD_DIR",
"CI_FAILFAST_TEST_LEAVE_DANGLING",
])
# Append $USER to /tmp/env to support multi-user systems and $CONTAINER_NAME
# to support starting multiple runs simultaneously by the same user.
env_file = "/tmp/env-{u}-{c}".format(
u=os.getenv("USER"),
c=os.getenv("CONTAINER_NAME"),
)
with open(env_file, "w", encoding="utf8") as file:
for k, v in os.environ.items():
if k in settings:
file.write(f"{k}={v}\n")
run(["cat", env_file])
run(["./ci/test/02_run_container.sh"]) # run the remainder
if __name__ == "__main__":
main()

View File

@@ -10,6 +10,10 @@ export CI_IMAGE_LABEL="bitcoin-ci-test"
set -o errexit -o pipefail -o xtrace
if [ -z "$DANGER_RUN_CI_ON_HOST" ]; then
# Export all env vars to avoid missing some.
# Though, exclude those with newlines to avoid parsing problems.
python3 -c 'import os; [print(f"{key}={value}") for key, value in os.environ.items() if "\n" not in value and "HOME" != key and "PATH" != key and "USER" != key]' | tee "/tmp/env-$USER-$CONTAINER_NAME"
# Env vars during the build can not be changed. For example, a modified
# $MAKEJOBS is ignored in the build process. Use --cpuset-cpus as an
# approximation to respect $MAKEJOBS somewhat, if cpuset is available.
@@ -19,14 +23,34 @@ if [ -z "$DANGER_RUN_CI_ON_HOST" ]; then
fi
echo "Creating $CI_IMAGE_NAME_TAG container to run in"
# Use buildx unconditionally
# Using buildx is required to properly load the correct driver, for use with registry caching. Neither build, nor BUILDKIT=1 currently do this properly
DOCKER_BUILD_CACHE_ARG=""
DOCKER_BUILD_CACHE_TEMPDIR=""
DOCKER_BUILD_CACHE_OLD_DIR=""
DOCKER_BUILD_CACHE_NEW_DIR=""
# If set, use an `docker build` cache directory on the CI host
# to cache docker image layers for the CI container image.
# This cache can be multiple GB in size. Prefixed with DANGER
# as setting it removes (old cache) files from the host.
if [ "$DANGER_DOCKER_BUILD_CACHE_HOST_DIR" ]; then
# Directory where the current cache for this run could be. If not existing
# or empty, "docker build" will warn, but treat it as cache-miss and continue.
DOCKER_BUILD_CACHE_OLD_DIR="${DANGER_DOCKER_BUILD_CACHE_HOST_DIR}/${CONTAINER_NAME}"
# Temporary directory for a newly created cache. We can't write the new
# cache into OLD_DIR directly, as old cache layers would not be removed.
# The NEW_DIR contents are moved to OLD_DIR after OLD_DIR has been cleared.
# This happens after `docker build`. If a task fails or is aborted, the
# DOCKER_BUILD_CACHE_TEMPDIR might be retained on the host. If the host isn't
# ephemeral, it has to take care of cleaning up old TEMPDIRs.
DOCKER_BUILD_CACHE_TEMPDIR="$(mktemp --directory ci-docker-build-cache-XXXXXXXXXX)"
DOCKER_BUILD_CACHE_NEW_DIR="${DOCKER_BUILD_CACHE_TEMPDIR}/${CONTAINER_NAME}"
DOCKER_BUILD_CACHE_ARG="--cache-from type=local,src=${DOCKER_BUILD_CACHE_OLD_DIR} --cache-to type=local,dest=${DOCKER_BUILD_CACHE_NEW_DIR},mode=max"
fi
# shellcheck disable=SC2086
docker buildx build \
DOCKER_BUILDKIT=1 docker build \
--file "${BASE_READ_ONLY_DIR}/ci/test_imagefile" \
--build-arg "CI_IMAGE_NAME_TAG=${CI_IMAGE_NAME_TAG}" \
--build-arg "FILE_ENV=${FILE_ENV}" \
--build-arg "BASE_ROOT_DIR=${BASE_ROOT_DIR}" \
$MAYBE_CPUSET \
--platform="${CI_IMAGE_PLATFORM}" \
--label="${CI_IMAGE_LABEL}" \
@@ -34,6 +58,15 @@ if [ -z "$DANGER_RUN_CI_ON_HOST" ]; then
$DOCKER_BUILD_CACHE_ARG \
"${BASE_READ_ONLY_DIR}"
if [ "$DANGER_DOCKER_BUILD_CACHE_HOST_DIR" ]; then
if [ -e "${DOCKER_BUILD_CACHE_NEW_DIR}/index.json" ]; then
echo "Removing the existing docker build cache in ${DOCKER_BUILD_CACHE_OLD_DIR}"
rm -rf "${DOCKER_BUILD_CACHE_OLD_DIR}"
echo "Moving the contents of ${DOCKER_BUILD_CACHE_NEW_DIR} to ${DOCKER_BUILD_CACHE_OLD_DIR}"
mv "${DOCKER_BUILD_CACHE_NEW_DIR}" "${DOCKER_BUILD_CACHE_OLD_DIR}"
fi
fi
docker volume create "${CONTAINER_NAME}_ccache" || true
docker volume create "${CONTAINER_NAME}_depends" || true
docker volume create "${CONTAINER_NAME}_depends_sources" || true
@@ -61,11 +94,16 @@ if [ -z "$DANGER_RUN_CI_ON_HOST" ]; then
fi
if [ "$DANGER_CI_ON_HOST_CCACHE_FOLDER" ]; then
# Temporary exclusion for https://github.com/bitcoin/bitcoin/issues/31108
# to allow CI configs and envs generated in the past to work for a bit longer.
# Can be removed in March 2025.
if [ "${CCACHE_DIR}" != "/tmp/ccache_dir" ]; then
if [ ! -d "${CCACHE_DIR}" ]; then
echo "Error: Directory '${CCACHE_DIR}' must be created in advance."
exit 1
fi
CI_CCACHE_MOUNT="type=bind,src=${CCACHE_DIR},dst=${CCACHE_DIR}"
fi # End temporary exclusion
fi
docker network create --ipv6 --subnet 1111:1111::/112 ci-ip6net || true
@@ -85,6 +123,8 @@ if [ -z "$DANGER_RUN_CI_ON_HOST" ]; then
# When detecting podman-docker, `--external` should be added.
docker image prune --force --filter "label=$CI_IMAGE_LABEL"
# Append $USER to /tmp/env to support multi-user systems and $CONTAINER_NAME
# to support starting multiple runs simultaneously by the same user.
# shellcheck disable=SC2086
CI_CONTAINER_ID=$(docker run --cap-add LINUX_IMMUTABLE $CI_CONTAINER_CAP --rm --interactive --detach --tty \
--mount "type=bind,src=$BASE_READ_ONLY_DIR,dst=$BASE_READ_ONLY_DIR,readonly" \
@@ -118,9 +158,13 @@ CI_EXEC () {
export -f CI_EXEC
# Normalize all folders to BASE_ROOT_DIR
CI_EXEC rsync --recursive --perms --stats --human-readable "${BASE_READ_ONLY_DIR}/" "${BASE_ROOT_DIR}" || echo "Nothing to copy from ${BASE_READ_ONLY_DIR}/"
CI_EXEC rsync --archive --stats --human-readable "${BASE_READ_ONLY_DIR}/" "${BASE_ROOT_DIR}" || echo "Nothing to copy from ${BASE_READ_ONLY_DIR}/"
CI_EXEC "${BASE_ROOT_DIR}/ci/test/01_base_install.sh"
# Fixes permission issues when there is a container UID/GID mismatch with the owner
# of the git source code directory.
CI_EXEC git config --global --add safe.directory \"*\"
CI_EXEC mkdir -p "${BINS_SCRATCH_DIR}"
CI_EXEC "${BASE_ROOT_DIR}/ci/test/03_test_script.sh"

View File

@@ -24,14 +24,6 @@ fi
echo "Free disk space:"
df -h
# We force an install of linux-headers again here via $PACKAGES to fix any
# kernel mismatch between a cached docker image and the underlying host.
# This can happen occasionally on hosted runners if the runner image is updated.
if [[ "$CONTAINER_NAME" == "ci_native_asan" ]]; then
$CI_RETRY_EXE apt-get update
${CI_RETRY_EXE} bash -c "apt-get install --no-install-recommends --no-upgrade -y $PACKAGES"
fi
# What host to compile for. See also ./depends/README.md
# Tests that need cross-compilation export the appropriate HOST.
# Tests that run natively guess the host
@@ -74,7 +66,7 @@ if [ "$RUN_FUZZ_TESTS" = "true" ]; then
echo "Using qa-assets repo from commit ..."
git log -1
)
elif [ "$RUN_UNIT_TESTS" = "true" ]; then
elif [ "$RUN_UNIT_TESTS" = "true" ] || [ "$RUN_UNIT_TESTS_SEQUENTIAL" = "true" ]; then
export DIR_UNIT_TEST_DATA=${DIR_QA_ASSETS}/unit_test_data/
if [ ! -d "$DIR_UNIT_TEST_DATA" ]; then
mkdir -p "$DIR_UNIT_TEST_DATA"
@@ -100,14 +92,14 @@ fi
if [ -z "$NO_DEPENDS" ]; then
if [[ $CI_IMAGE_NAME_TAG == *centos* ]]; then
SHELL_OPTS="CONFIG_SHELL=/bin/dash"
SHELL_OPTS="CONFIG_SHELL=/bin/ksh" # Temporarily use ksh instead of dash, until https://bugzilla.redhat.com/show_bug.cgi?id=2335416 is fixed.
else
SHELL_OPTS="CONFIG_SHELL="
fi
bash -c "$SHELL_OPTS make $MAKEJOBS -C depends HOST=$HOST $DEP_OPTS LOG=1"
fi
if [ "$DOWNLOAD_PREVIOUS_RELEASES" = "true" ]; then
test/get_previous_releases.py --target-dir "$PREVIOUS_RELEASES_DIR"
test/get_previous_releases.py -b -t "$PREVIOUS_RELEASES_DIR"
fi
BITCOIN_CONFIG_ALL="-DBUILD_BENCH=ON -DBUILD_FUZZ_BINARY=ON"
@@ -123,40 +115,25 @@ PRINT_CCACHE_STATISTICS="ccache --version | head -n 1 && ccache --show-stats"
# Folder where the build is done.
BASE_BUILD_DIR=${BASE_BUILD_DIR:-$BASE_SCRATCH_DIR/build-$HOST}
mkdir -p "${BASE_BUILD_DIR}"
cd "${BASE_BUILD_DIR}"
BITCOIN_CONFIG_ALL="$BITCOIN_CONFIG_ALL -DCMAKE_INSTALL_PREFIX=$BASE_OUTDIR -Werror=dev"
BITCOIN_CONFIG_ALL="$BITCOIN_CONFIG_ALL -DENABLE_EXTERNAL_SIGNER=ON -DCMAKE_INSTALL_PREFIX=$BASE_OUTDIR"
if [[ "${RUN_TIDY}" == "true" ]]; then
BITCOIN_CONFIG_ALL="$BITCOIN_CONFIG_ALL -DCMAKE_EXPORT_COMPILE_COMMANDS=ON"
fi
bash -c "cmake -S $BASE_ROOT_DIR -B ${BASE_BUILD_DIR} $BITCOIN_CONFIG_ALL $BITCOIN_CONFIG" || (
cd "${BASE_BUILD_DIR}"
# shellcheck disable=SC2046
cat $(cmake -P "${BASE_ROOT_DIR}/ci/test/GetCMakeLogFiles.cmake")
false
)
bash -c "cmake -S $BASE_ROOT_DIR $BITCOIN_CONFIG_ALL $BITCOIN_CONFIG || ( (cat $(cmake -P "${BASE_ROOT_DIR}/ci/test/GetCMakeLogFiles.cmake")) && false)"
# shellcheck disable=SC2086
cmake --build "${BASE_BUILD_DIR}" "$MAKEJOBS" --target all $GOAL || (
echo "Build failure. Verbose build follows."
# shellcheck disable=SC2086
cmake --build "${BASE_BUILD_DIR}" -j1 --target all $GOAL --verbose
false
)
bash -c "cmake --build . $MAKEJOBS --target all $GOAL" || ( echo "Build failure. Verbose build follows." && cmake --build . --target all "$GOAL" --verbose ; false )
bash -c "${PRINT_CCACHE_STATISTICS}"
if [ "$CI" = "true" ]; then
hit_rate=$(ccache -s | grep "Hits:" | head -1 | sed 's/.*(\(.*\)%).*/\1/')
if [ "${hit_rate%.*}" -lt 75 ]; then
echo "::notice title=low ccache hitrate::Ccache hit-rate in $CONTAINER_NAME was $hit_rate%"
fi
fi
du -sh "${DEPENDS_DIR}"/*/
du -sh "${PREVIOUS_RELEASES_DIR}"
if [ -n "${CI_LIMIT_STACK_SIZE}" ]; then
ulimit -s 512
if [[ $HOST = *-mingw32 ]]; then
"${BASE_ROOT_DIR}/ci/test/wrap-wine.sh"
fi
if [ -n "$USE_VALGRIND" ]; then
@@ -164,32 +141,21 @@ if [ -n "$USE_VALGRIND" ]; then
fi
if [ "$RUN_CHECK_DEPS" = "true" ]; then
"${BASE_ROOT_DIR}/contrib/devtools/check-deps.sh" "${BASE_BUILD_DIR}"
"${BASE_ROOT_DIR}/contrib/devtools/check-deps.sh" .
fi
if [ "$RUN_UNIT_TESTS" = "true" ]; then
DIR_UNIT_TEST_DATA="${DIR_UNIT_TEST_DATA}" \
LD_LIBRARY_PATH="${DEPENDS_DIR}/${HOST}/lib" \
CTEST_OUTPUT_ON_FAILURE=ON \
ctest --test-dir "${BASE_BUILD_DIR}" \
--stop-on-failure \
"${MAKEJOBS}" \
--timeout $(( TEST_RUNNER_TIMEOUT_FACTOR * 60 ))
DIR_UNIT_TEST_DATA="${DIR_UNIT_TEST_DATA}" LD_LIBRARY_PATH="${DEPENDS_DIR}/${HOST}/lib" CTEST_OUTPUT_ON_FAILURE=ON ctest --stop-on-failure "${MAKEJOBS}" --timeout $(( TEST_RUNNER_TIMEOUT_FACTOR * 60 ))
fi
if [ "$RUN_UNIT_TESTS_SEQUENTIAL" = "true" ]; then
DIR_UNIT_TEST_DATA="${DIR_UNIT_TEST_DATA}" LD_LIBRARY_PATH="${DEPENDS_DIR}/${HOST}/lib" "${BASE_OUTDIR}"/bin/test_bitcoin --catch_system_errors=no -l test_suite
fi
if [ "$RUN_FUNCTIONAL_TESTS" = "true" ]; then
# parses TEST_RUNNER_EXTRA as an array which allows for multiple arguments such as TEST_RUNNER_EXTRA='--exclude "rpc_bind.py --ipv6"'
eval "TEST_RUNNER_EXTRA=($TEST_RUNNER_EXTRA)"
LD_LIBRARY_PATH="${DEPENDS_DIR}/${HOST}/lib" \
"${BASE_BUILD_DIR}/test/functional/test_runner.py" \
--ci "${MAKEJOBS}" \
--tmpdirprefix "${BASE_SCRATCH_DIR}/test_runner/" \
--ansi \
--combinedlogslen=99999999 \
--timeout-factor="${TEST_RUNNER_TIMEOUT_FACTOR}" \
"${TEST_RUNNER_EXTRA[@]}" \
--quiet \
--failfast
LD_LIBRARY_PATH="${DEPENDS_DIR}/${HOST}/lib" test/functional/test_runner.py --ci "${MAKEJOBS}" --tmpdirprefix "${BASE_SCRATCH_DIR}"/test_runner/ --ansi --combinedlogslen=99999999 --timeout-factor="${TEST_RUNNER_TIMEOUT_FACTOR}" "${TEST_RUNNER_EXTRA[@]}" --quiet --failfast
fi
if [ "${RUN_TIDY}" = "true" ]; then
@@ -223,11 +189,5 @@ fi
if [ "$RUN_FUZZ_TESTS" = "true" ]; then
# shellcheck disable=SC2086
LD_LIBRARY_PATH="${DEPENDS_DIR}/${HOST}/lib" \
"${BASE_BUILD_DIR}/test/fuzz/test_runner.py" \
${FUZZ_TESTS_CONFIG} \
"${MAKEJOBS}" \
-l DEBUG \
"${DIR_FUZZ_IN}" \
--empty_min_time=60
LD_LIBRARY_PATH="${DEPENDS_DIR}/${HOST}/lib" test/fuzz/test_runner.py ${FUZZ_TESTS_CONFIG} "${MAKEJOBS}" -l DEBUG "${DIR_FUZZ_IN}" --empty_min_time=60
fi

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env bash
#
# Copyright (c) 2018-present The Bitcoin Core developers
# Copyright (c) 2018-2021 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
@@ -12,7 +12,7 @@ for b_name in "${BASE_OUTDIR}/bin"/*; do
echo "Wrap $b ..."
mv "$b" "${b}_orig"
echo '#!/usr/bin/env bash' > "$b"
echo "exec valgrind --gen-suppressions=all --quiet --error-exitcode=1 --suppressions=${BASE_ROOT_DIR}/contrib/valgrind.supp \"${b}_orig\" \"\$@\"" >> "$b"
echo "valgrind --gen-suppressions=all --quiet --error-exitcode=1 --suppressions=${BASE_ROOT_DIR}/contrib/valgrind.supp \"${b}_orig\" \"\$@\"" >> "$b"
chmod +x "$b"
done
done

ci/test/wrap-wine.sh Executable file
View File

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
#
# Copyright (c) 2020-2022 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
export LC_ALL=C.UTF-8
for b_name in {"${BASE_OUTDIR}/bin"/*,src/univalue/{test_json,unitester,object}}.exe; do
# shellcheck disable=SC2044
for b in $(find "${BASE_ROOT_DIR}" -executable -type f -name "$(basename "$b_name")"); do
if (file "$b" | grep "Windows"); then
echo "Wrap $b ..."
mv "$b" "${b}_orig"
echo '#!/usr/bin/env bash' > "$b"
echo "( wine \"${b}_orig\" \"\$@\" ) || ( sleep 1 && wine \"${b}_orig\" \"\$@\" )" >> "$b"
chmod +x "$b"
fi
done
done

View File

@@ -4,16 +4,12 @@
# See ci/README.md for usage.
# We never want scratch, but default arg silences a Warning
ARG CI_IMAGE_NAME_TAG=scratch
ARG CI_IMAGE_NAME_TAG
FROM ${CI_IMAGE_NAME_TAG}
ARG FILE_ENV
ENV FILE_ENV=${FILE_ENV}
ARG BASE_ROOT_DIR
ENV BASE_ROOT_DIR=${BASE_ROOT_DIR}
COPY ./ci/retry/retry /usr/bin/retry
COPY ./ci/test/00_setup_env.sh ./${FILE_ENV} ./ci/test/01_base_install.sh /ci_container_base/ci/test/

View File

@@ -8,4 +8,4 @@ export LC_ALL=C.UTF-8
set -o errexit; source ./ci/test/00_setup_env.sh
set -o errexit
"./ci/test/02_run_container.py"
"./ci/test/02_run_container.sh"

View File

@@ -29,9 +29,18 @@
/* Copyright year */
#define COPYRIGHT_YEAR @COPYRIGHT_YEAR@
/* Define this symbol to build code that uses ARMv8 SHA-NI intrinsics */
#cmakedefine ENABLE_ARM_SHANI 1
/* Define this symbol to build code that uses AVX2 intrinsics */
#cmakedefine ENABLE_AVX2 1
/* Define if external signer support is enabled */
#cmakedefine ENABLE_EXTERNAL_SIGNER 1
/* Define this symbol to build code that uses SSE4.1 intrinsics */
#cmakedefine ENABLE_SSE41 1
/* Define to 1 to enable tracepoints for Userspace, Statically Defined Tracing
*/
#cmakedefine ENABLE_TRACING 1
@@ -39,12 +48,20 @@
/* Define to 1 to enable wallet functions. */
#cmakedefine ENABLE_WALLET 1
/* Define this symbol to build code that uses x86 SHA-NI intrinsics */
#cmakedefine ENABLE_X86_SHANI 1
/* Define to 1 if you have the declaration of `fork', and to 0 if you don't.
*/
#cmakedefine01 HAVE_DECL_FORK
/* Define to 1 if '*ifaddrs' are available. */
#cmakedefine HAVE_IFADDRS 1
/* Define to 1 if you have the declaration of `freeifaddrs', and to 0 if you
don't. */
#cmakedefine01 HAVE_DECL_FREEIFADDRS
/* Define to 1 if you have the declaration of `getifaddrs', and to 0 if you
don't. */
#cmakedefine01 HAVE_DECL_GETIFADDRS
/* Define to 1 if you have the declaration of `pipe2', and to 0 if you don't.
*/
@@ -91,6 +108,18 @@
/* Define to 1 if std::system or ::wsystem is available. */
#cmakedefine HAVE_SYSTEM 1
/* Define to 1 if you have the <sys/prctl.h> header file. */
#cmakedefine HAVE_SYS_PRCTL_H 1
/* Define to 1 if you have the <sys/resources.h> header file. */
#cmakedefine HAVE_SYS_RESOURCES_H 1
/* Define to 1 if you have the <sys/vmmeter.h> header file. */
#cmakedefine HAVE_SYS_VMMETER_H 1
/* Define to 1 if you have the <vm/vm_param.h> header file. */
#cmakedefine HAVE_VM_VM_PARAM_H 1
/* Define to the address where bug reports for this package should be sent. */
#define CLIENT_BUGREPORT "@CLIENT_BUGREPORT@"
@@ -106,10 +135,16 @@
/* Define to 1 if strerror_r returns char *. */
#cmakedefine STRERROR_R_CHAR_P 1
/* Define if BDB support should be compiled in */
#cmakedefine USE_BDB 1
/* Define if dbus support should be compiled in */
#cmakedefine USE_DBUS 1
/* Define if QR support should be compiled in */
#cmakedefine USE_QRCODE 1
/* Define if sqlite support should be compiled in */
#cmakedefine USE_SQLITE 1
#endif //BITCOIN_CONFIG_H

View File

@@ -4,6 +4,13 @@
include(CheckCXXSourceCompiles)
include(CheckCXXSymbolExists)
include(CheckIncludeFileCXX)
# The following HAVE_{HEADER}_H variables go to the bitcoin-build-config.h header.
check_include_file_cxx(sys/prctl.h HAVE_SYS_PRCTL_H)
check_include_file_cxx(sys/resources.h HAVE_SYS_RESOURCES_H)
check_include_file_cxx(sys/vmmeter.h HAVE_SYS_VMMETER_H)
check_include_file_cxx(vm/vm_param.h HAVE_VM_VM_PARAM_H)
check_cxx_symbol_exists(O_CLOEXEC "fcntl.h" HAVE_O_CLOEXEC)
check_cxx_symbol_exists(fdatasync "unistd.h" HAVE_FDATASYNC)
@@ -11,7 +18,9 @@ check_cxx_symbol_exists(fork "unistd.h" HAVE_DECL_FORK)
check_cxx_symbol_exists(pipe2 "unistd.h" HAVE_DECL_PIPE2)
check_cxx_symbol_exists(setsid "unistd.h" HAVE_DECL_SETSID)
if(NOT WIN32)
check_include_file_cxx(sys/types.h HAVE_SYS_TYPES_H)
check_include_file_cxx(ifaddrs.h HAVE_IFADDRS_H)
if(HAVE_SYS_TYPES_H AND HAVE_IFADDRS_H)
include(TestAppendRequiredLibraries)
test_append_socket_library(core_interface)
endif()
@@ -19,8 +28,6 @@ endif()
include(TestAppendRequiredLibraries)
test_append_atomic_library(core_interface)
# Even though ::system is part of the standard library, we still check
# for it, to support building targets that don't have it, such as iOS.
check_cxx_symbol_exists(std::system "cstdlib" HAVE_STD_SYSTEM)
check_cxx_symbol_exists(::_wsystem "stdlib.h" HAVE__WSYSTEM)
if(HAVE_STD_SYSTEM OR HAVE__WSYSTEM)
@@ -55,6 +62,13 @@ check_cxx_source_compiles("
# Check for posix_fallocate().
check_cxx_source_compiles("
// same as in src/util/fs_helpers.cpp
#ifdef __linux__
#ifdef _POSIX_C_SOURCE
#undef _POSIX_C_SOURCE
#endif
#define _POSIX_C_SOURCE 200112L
#endif // __linux__
#include <fcntl.h>
int main()
@@ -163,6 +177,7 @@ if(NOT MSVC)
" HAVE_SSE41
CXXFLAGS ${SSE41_CXXFLAGS}
)
set(ENABLE_SSE41 ${HAVE_SSE41})
# Check for AVX2 intrinsics.
set(AVX2_CXXFLAGS -mavx -mavx2)
@@ -177,6 +192,7 @@ if(NOT MSVC)
" HAVE_AVX2
CXXFLAGS ${AVX2_CXXFLAGS}
)
set(ENABLE_AVX2 ${HAVE_AVX2})
# Check for x86 SHA-NI intrinsics.
set(X86_SHANI_CXXFLAGS -msse4 -msha)
@@ -193,6 +209,7 @@ if(NOT MSVC)
" HAVE_X86_SHANI
CXXFLAGS ${X86_SHANI_CXXFLAGS}
)
set(ENABLE_X86_SHANI ${HAVE_X86_SHANI})
# Check for ARMv8 SHA-NI intrinsics.
set(ARM_SHANI_CXXFLAGS -march=armv8-a+crypto)
@@ -210,4 +227,5 @@ if(NOT MSVC)
" HAVE_ARM_SHANI
CXXFLAGS ${ARM_SHANI_CXXFLAGS}
)
set(ENABLE_ARM_SHANI ${HAVE_ARM_SHANI})
endif()

View File

@@ -90,6 +90,9 @@ else()
try_append_cxx_flags("-Wconditional-uninitialized" TARGET nowarn_leveldb_interface SKIP_LINK
IF_CHECK_PASSED "-Wno-conditional-uninitialized"
)
try_append_cxx_flags("-Wsuggest-override" TARGET nowarn_leveldb_interface SKIP_LINK
IF_CHECK_PASSED "-Wno-suggest-override"
)
endif()
target_link_libraries(leveldb PRIVATE

View File

@@ -1,38 +0,0 @@
# Copyright (c) 2025 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://opensource.org/license/mit/.
function(add_libmultiprocess subdir)
# Set BUILD_TESTING to match BUILD_TESTS. BUILD_TESTING is a standard cmake
# option that controls whether enable_testing() is called, but in the bitcoin
# build a BUILD_TESTS option is used instead.
set(BUILD_TESTING "${BUILD_TESTS}")
add_subdirectory(${subdir} EXCLUDE_FROM_ALL)
# Apply core_interface compile options to libmultiprocess runtime library.
target_link_libraries(multiprocess PUBLIC $<BUILD_INTERFACE:core_interface>)
target_link_libraries(mputil PUBLIC $<BUILD_INTERFACE:core_interface>)
target_link_libraries(mpgen PUBLIC $<BUILD_INTERFACE:core_interface>)
# Mark capnproto options as advanced to hide them by default in the cmake UI
mark_as_advanced(CapnProto_DIR)
mark_as_advanced(CapnProto_capnpc_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_capnp_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_capnp-json_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_capnp-rpc_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_capnp-websocket_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_kj-async_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_kj-gzip_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_kj-http_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_kj_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_kj-test_IMPORTED_LOCATION)
mark_as_advanced(CapnProto_kj-tls_IMPORTED_LOCATION)
if(BUILD_TESTS)
# Add tests to "all" target so ctest can run them
set_target_properties(mptests PROPERTIES EXCLUDE_FROM_ALL OFF)
endif()
# Exclude examples from compilation database, because the examples are not
# built by default, and they contain generated c++ code. Without this
# exclusion, tools like clang-tidy and IWYU that make use of the compilation
# database would complain that the generated c++ source files do not exist. An
# alternate fix could build "mpexamples" by default like "mptests" above.
set_target_properties(mpcalculator mpprinter mpexample PROPERTIES EXPORT_COMPILE_COMMANDS OFF)
endfunction()

View File

@@ -17,29 +17,17 @@ function(add_boost_if_needed)
directory and other added INTERFACE properties.
]=]
if(CMAKE_HOST_APPLE)
find_program(HOMEBREW_EXECUTABLE brew)
if(HOMEBREW_EXECUTABLE)
execute_process(
COMMAND ${HOMEBREW_EXECUTABLE} --prefix boost
OUTPUT_VARIABLE Boost_ROOT
ERROR_QUIET
OUTPUT_STRIP_TRAILING_WHITESPACE
)
endif()
endif()
find_package(Boost 1.74.0 REQUIRED CONFIG)
mark_as_advanced(Boost_INCLUDE_DIR boost_headers_DIR)
# Workaround for a bug in NetBSD pkgsrc.
# See: https://github.com/NetBSD/pkgsrc/issues/167.
if(CMAKE_SYSTEM_NAME STREQUAL "NetBSD")
get_filename_component(_boost_include_dir "${boost_headers_DIR}/../../../include/" ABSOLUTE)
set_target_properties(Boost::headers PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES ${_boost_include_dir}
)
unset(_boost_include_dir)
# We cannot rely on find_package(Boost ...) to work properly without
# Boost_NO_BOOST_CMAKE set until we require a more recent Boost because
# upstream did not ship proper CMake files until 1.82.0.
# Until then, we rely on CMake's FindBoost module.
# See: https://cmake.org/cmake/help/latest/policy/CMP0167.html
if(POLICY CMP0167)
cmake_policy(SET CMP0167 OLD)
endif()
set(Boost_NO_BOOST_CMAKE ON)
find_package(Boost 1.73.0 REQUIRED)
mark_as_advanced(Boost_INCLUDE_DIR)
set_target_properties(Boost::headers PROPERTIES IMPORTED_GLOBAL TRUE)
target_compile_definitions(Boost::headers INTERFACE
# We don't use multi_index serialization.
@@ -57,24 +45,34 @@ function(add_boost_if_needed)
# older than 1.80.
# See: https://github.com/boostorg/config/pull/430.
set(CMAKE_REQUIRED_DEFINITIONS -DBOOST_NO_CXX98_FUNCTION_BASE)
get_target_property(CMAKE_REQUIRED_INCLUDES Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
set(CMAKE_REQUIRED_INCLUDES ${Boost_INCLUDE_DIR})
include(CMakePushCheckState)
cmake_push_check_state()
include(TryAppendCXXFlags)
set(CMAKE_REQUIRED_FLAGS ${working_compiler_werror_flag})
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)
include(CheckCXXSourceCompiles)
check_cxx_source_compiles("
#include <boost/config.hpp>
" NO_DIAGNOSTICS_BOOST_NO_CXX98_FUNCTION_BASE
)
cmake_pop_check_state()
if(NO_DIAGNOSTICS_BOOST_NO_CXX98_FUNCTION_BASE)
target_compile_definitions(Boost::headers INTERFACE
BOOST_NO_CXX98_FUNCTION_BASE
)
else()
set(CMAKE_REQUIRED_DEFINITIONS)
endif()
# Some package managers, such as vcpkg, vendor Boost.Test separately
# from the rest of the headers, so we have to check for it individually.
if(BUILD_TESTS AND DEFINED VCPKG_TARGET_TRIPLET)
find_package(boost_included_unit_test_framework ${Boost_VERSION} EXACT REQUIRED CONFIG)
list(APPEND CMAKE_REQUIRED_DEFINITIONS -DBOOST_TEST_NO_MAIN)
include(CheckIncludeFileCXX)
check_include_file_cxx(boost/test/included/unit_test.hpp HAVE_BOOST_INCLUDED_UNIT_TEST_H)
if(NOT HAVE_BOOST_INCLUDED_UNIT_TEST_H)
message(FATAL_ERROR "Building test_bitcoin executable requested but boost/test/included/unit_test.hpp header not available.")
endif()
endif()
endfunction()

View File

@@ -4,21 +4,11 @@
include_guard(GLOBAL)
function(add_windows_resources target rc_file)
macro(add_windows_resources target rc_file)
if(WIN32)
target_sources(${target} PRIVATE ${rc_file})
endif()
endfunction()
# Add a fusion manifest to Windows executables.
# See: https://learn.microsoft.com/en-us/windows/win32/sbscs/application-manifests
function(add_windows_application_manifest target)
if(WIN32)
configure_file(${PROJECT_SOURCE_DIR}/cmake/windows-app.manifest.in ${target}.manifest USE_SOURCE_PERMISSIONS)
file(CONFIGURE
OUTPUT ${target}-manifest.rc
CONTENT "1 /* CREATEPROCESS_MANIFEST_RESOURCE_ID */ 24 /* RT_MANIFEST */ \"${target}.manifest\""
set_property(SOURCE ${rc_file}
APPEND PROPERTY COMPILE_DEFINITIONS WINDRES_PREPROC
)
add_windows_resources(${target} ${CMAKE_CURRENT_BINARY_DIR}/${target}-manifest.rc)
endif()
endfunction()
endmacro()


@@ -0,0 +1,133 @@
# Copyright (c) 2023-present The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://opensource.org/license/mit/.
#[=======================================================================[
FindBerkeleyDB
--------------
Finds the Berkeley DB headers and library.
Imported Targets
^^^^^^^^^^^^^^^^
This module provides imported target ``BerkeleyDB::BerkeleyDB``, if
Berkeley DB has been found.
Result Variables
^^^^^^^^^^^^^^^^
This module defines the following variables:
``BerkeleyDB_FOUND``
"True" if Berkeley DB found.
``BerkeleyDB_VERSION``
The MAJOR.MINOR version of Berkeley DB found.
#]=======================================================================]
set(_BerkeleyDB_homebrew_prefix)
if(CMAKE_HOST_APPLE)
find_program(HOMEBREW_EXECUTABLE brew)
if(HOMEBREW_EXECUTABLE)
# The Homebrew package manager installs the berkeley-db* packages as
# "keg-only", which means they are not symlinked into the default prefix.
# To find such a package, the find_path() and find_library() commands
# need additional path hints that are computed by Homebrew itself.
execute_process(
COMMAND ${HOMEBREW_EXECUTABLE} --prefix berkeley-db@4
OUTPUT_VARIABLE _BerkeleyDB_homebrew_prefix
ERROR_QUIET
OUTPUT_STRIP_TRAILING_WHITESPACE
)
endif()
endif()
find_path(BerkeleyDB_INCLUDE_DIR
NAMES db_cxx.h
HINTS ${_BerkeleyDB_homebrew_prefix}/include
PATH_SUFFIXES 4.8 48 db4.8 4 db4 5.3 db5.3 5 db5
)
mark_as_advanced(BerkeleyDB_INCLUDE_DIR)
unset(_BerkeleyDB_homebrew_prefix)
if(NOT BerkeleyDB_LIBRARY)
if(VCPKG_TARGET_TRIPLET)
# The vcpkg package manager installs the berkeleydb package with the same name
# of release and debug libraries. Therefore, the default search paths set by
# vcpkg's toolchain file cannot be used to search libraries as the debug one
# will always be found.
set(CMAKE_FIND_USE_CMAKE_PATH FALSE)
endif()
get_filename_component(_BerkeleyDB_lib_hint "${BerkeleyDB_INCLUDE_DIR}" DIRECTORY)
find_library(BerkeleyDB_LIBRARY_RELEASE
NAMES db_cxx-4.8 db4_cxx db48 db_cxx-5.3 db_cxx-5 db_cxx libdb48
NAMES_PER_DIR
HINTS ${_BerkeleyDB_lib_hint}
PATH_SUFFIXES lib
)
mark_as_advanced(BerkeleyDB_LIBRARY_RELEASE)
find_library(BerkeleyDB_LIBRARY_DEBUG
NAMES db_cxx-4.8 db4_cxx db48 db_cxx-5.3 db_cxx-5 db_cxx libdb48
NAMES_PER_DIR
HINTS ${_BerkeleyDB_lib_hint}
PATH_SUFFIXES debug/lib
)
mark_as_advanced(BerkeleyDB_LIBRARY_DEBUG)
unset(_BerkeleyDB_lib_hint)
unset(CMAKE_FIND_USE_CMAKE_PATH)
include(SelectLibraryConfigurations)
select_library_configurations(BerkeleyDB)
# The select_library_configurations() command sets BerkeleyDB_FOUND, but we
# want the one from the find_package_handle_standard_args() command below.
unset(BerkeleyDB_FOUND)
endif()
if(BerkeleyDB_INCLUDE_DIR)
file(STRINGS "${BerkeleyDB_INCLUDE_DIR}/db.h" _BerkeleyDB_version_strings REGEX "^#define[\t ]+DB_VERSION_(MAJOR|MINOR|PATCH)[ \t]+[0-9]+.*")
string(REGEX REPLACE ".*#define[\t ]+DB_VERSION_MAJOR[ \t]+([0-9]+).*" "\\1" _BerkeleyDB_version_major "${_BerkeleyDB_version_strings}")
string(REGEX REPLACE ".*#define[\t ]+DB_VERSION_MINOR[ \t]+([0-9]+).*" "\\1" _BerkeleyDB_version_minor "${_BerkeleyDB_version_strings}")
string(REGEX REPLACE ".*#define[\t ]+DB_VERSION_PATCH[ \t]+([0-9]+).*" "\\1" _BerkeleyDB_version_patch "${_BerkeleyDB_version_strings}")
unset(_BerkeleyDB_version_strings)
# The MAJOR.MINOR.PATCH version will be logged in the following find_package_handle_standard_args() command.
set(_BerkeleyDB_full_version ${_BerkeleyDB_version_major}.${_BerkeleyDB_version_minor}.${_BerkeleyDB_version_patch})
set(BerkeleyDB_VERSION ${_BerkeleyDB_version_major}.${_BerkeleyDB_version_minor})
unset(_BerkeleyDB_version_major)
unset(_BerkeleyDB_version_minor)
unset(_BerkeleyDB_version_patch)
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(BerkeleyDB
REQUIRED_VARS BerkeleyDB_LIBRARY BerkeleyDB_INCLUDE_DIR
VERSION_VAR _BerkeleyDB_full_version
)
unset(_BerkeleyDB_full_version)
if(BerkeleyDB_FOUND AND NOT TARGET BerkeleyDB::BerkeleyDB)
add_library(BerkeleyDB::BerkeleyDB UNKNOWN IMPORTED)
set_target_properties(BerkeleyDB::BerkeleyDB PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${BerkeleyDB_INCLUDE_DIR}"
)
if(BerkeleyDB_LIBRARY_RELEASE)
set_property(TARGET BerkeleyDB::BerkeleyDB APPEND PROPERTY
IMPORTED_CONFIGURATIONS RELEASE
)
set_target_properties(BerkeleyDB::BerkeleyDB PROPERTIES
IMPORTED_LOCATION_RELEASE "${BerkeleyDB_LIBRARY_RELEASE}"
)
endif()
if(BerkeleyDB_LIBRARY_DEBUG)
set_property(TARGET BerkeleyDB::BerkeleyDB APPEND PROPERTY
IMPORTED_CONFIGURATIONS DEBUG)
set_target_properties(BerkeleyDB::BerkeleyDB PROPERTIES
IMPORTED_LOCATION_DEBUG "${BerkeleyDB_LIBRARY_DEBUG}"
)
endif()
endif()
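If auto-detection fails, the cache variables this module consumes can be seeded manually at configure time. A rough sketch, assuming a Homebrew install; the library file name is a placeholder and may differ on your system:

```
cmake -B build \
  -DBerkeleyDB_INCLUDE_DIR="$(brew --prefix berkeley-db@4)/include" \
  -DBerkeleyDB_LIBRARY_RELEASE="$(brew --prefix berkeley-db@4)/lib/libdb_cxx-4.8.dylib"
```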


@@ -21,16 +21,16 @@ endif()
find_path(QRencode_INCLUDE_DIR
NAMES qrencode.h
HINTS ${PC_QRencode_INCLUDE_DIRS}
PATHS ${PC_QRencode_INCLUDE_DIRS}
)
find_library(QRencode_LIBRARY_RELEASE
NAMES qrencode
HINTS ${PC_QRencode_LIBRARY_DIRS}
PATHS ${PC_QRencode_LIBRARY_DIRS}
)
find_library(QRencode_LIBRARY_DEBUG
NAMES qrencoded qrencode
HINTS ${PC_QRencode_LIBRARY_DIRS}
PATHS ${PC_QRencode_LIBRARY_DIRS}
)
include(SelectLibraryConfigurations)
select_library_configurations(QRencode)


@@ -27,6 +27,19 @@ if(CMAKE_HOST_APPLE)
endif()
endif()
# Save CMAKE_FIND_ROOT_PATH_MODE_LIBRARY state.
unset(_qt_find_root_path_mode_library_saved)
if(DEFINED CMAKE_FIND_ROOT_PATH_MODE_LIBRARY)
set(_qt_find_root_path_mode_library_saved ${CMAKE_FIND_ROOT_PATH_MODE_LIBRARY})
endif()
# The Qt config files internally use find_library() calls for all
# dependencies to ensure their availability. In turn, the find_library()
# inspects the well-known locations on the file system; therefore, it must
# be able to find platform-specific system libraries, for example:
# /usr/x86_64-w64-mingw32/lib/libm.a or /usr/arm-linux-gnueabihf/lib/libm.a.
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY BOTH)
find_package(Qt${Qt_FIND_VERSION_MAJOR} ${Qt_FIND_VERSION}
COMPONENTS ${Qt_FIND_COMPONENTS}
HINTS ${_qt_homebrew_prefix}
@@ -34,6 +47,14 @@ find_package(Qt${Qt_FIND_VERSION_MAJOR} ${Qt_FIND_VERSION}
)
unset(_qt_homebrew_prefix)
# Restore CMAKE_FIND_ROOT_PATH_MODE_LIBRARY state.
if(DEFINED _qt_find_root_path_mode_library_saved)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ${_qt_find_root_path_mode_library_saved})
unset(_qt_find_root_path_mode_library_saved)
else()
unset(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY)
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Qt
REQUIRED_VARS Qt${Qt_FIND_VERSION_MAJOR}_DIR


@@ -36,10 +36,6 @@ if(USDT_INCLUDE_DIR)
include(CheckCXXSourceCompiles)
set(CMAKE_REQUIRED_INCLUDES ${USDT_INCLUDE_DIR})
check_cxx_source_compiles("
#if defined(__arm__)
# define STAP_SDT_ARG_CONSTRAINT g
#endif
// Setting SDT_USE_VARIADIC lets systemtap (sys/sdt.h) know that we want to use
// the optional variadic macros to define tracepoints.
#define SDT_USE_VARIADIC 1


@@ -7,7 +7,6 @@ function(generate_setup_nsi)
set(abs_top_builddir ${PROJECT_BINARY_DIR})
set(CLIENT_URL ${PROJECT_HOMEPAGE_URL})
set(CLIENT_TARNAME "bitcoin")
set(BITCOIN_WRAPPER_NAME "bitcoin")
set(BITCOIN_GUI_NAME "bitcoin-qt")
set(BITCOIN_DAEMON_NAME "bitcoind")
set(BITCOIN_CLI_NAME "bitcoin-cli")


@@ -7,19 +7,14 @@ include(GNUInstallDirs)
function(install_binary_component component)
cmake_parse_arguments(PARSE_ARGV 1
IC # prefix
"HAS_MANPAGE;INTERNAL" # options
"" # one_value_keywords
"" # multi_value_keywords
IC # prefix
"HAS_MANPAGE" # options
"" # one_value_keywords
"" # multi_value_keywords
)
set(target_name ${component})
if(IC_INTERNAL)
set(runtime_dest ${CMAKE_INSTALL_LIBEXECDIR})
else()
set(runtime_dest ${CMAKE_INSTALL_BINDIR})
endif()
install(TARGETS ${target_name}
RUNTIME DESTINATION ${runtime_dest}
RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}
COMPONENT ${component}
)
if(INSTALL_MAN AND IC_HAS_MANPAGE)


@@ -19,11 +19,11 @@ function(setup_split_debug_script)
endfunction()
function(add_maintenance_targets)
if(NOT TARGET Python3::Interpreter)
if(NOT PYTHON_COMMAND)
return()
endif()
foreach(target IN ITEMS bitcoin bitcoind bitcoin-node bitcoin-qt bitcoin-gui bitcoin-cli bitcoin-tx bitcoin-util bitcoin-wallet test_bitcoin bench_bitcoin)
foreach(target IN ITEMS bitcoind bitcoin-qt bitcoin-cli bitcoin-tx bitcoin-util bitcoin-wallet test_bitcoin bench_bitcoin)
if(TARGET ${target})
list(APPEND executables $<TARGET_FILE:${target}>)
endif()
@@ -31,27 +31,19 @@ function(add_maintenance_targets)
add_custom_target(check-symbols
COMMAND ${CMAKE_COMMAND} -E echo "Running symbol and dynamic library checks..."
COMMAND Python3::Interpreter ${PROJECT_SOURCE_DIR}/contrib/guix/symbol-check.py ${executables}
COMMAND ${PYTHON_COMMAND} ${PROJECT_SOURCE_DIR}/contrib/devtools/symbol-check.py ${executables}
VERBATIM
)
add_custom_target(check-security
COMMAND ${CMAKE_COMMAND} -E echo "Checking binary security..."
COMMAND Python3::Interpreter ${PROJECT_SOURCE_DIR}/contrib/guix/security-check.py ${executables}
COMMAND ${PYTHON_COMMAND} ${PROJECT_SOURCE_DIR}/contrib/devtools/security-check.py ${executables}
VERBATIM
)
endfunction()
function(add_windows_deploy_target)
if(MINGW AND TARGET bitcoin AND TARGET bitcoin-qt AND TARGET bitcoind AND TARGET bitcoin-cli AND TARGET bitcoin-tx AND TARGET bitcoin-wallet AND TARGET bitcoin-util AND TARGET test_bitcoin)
find_program(MAKENSIS_EXECUTABLE makensis)
if(NOT MAKENSIS_EXECUTABLE)
add_custom_target(deploy
COMMAND ${CMAKE_COMMAND} -E echo "Error: NSIS not found"
)
return()
endif()
if(MINGW AND TARGET bitcoin-qt AND TARGET bitcoind AND TARGET bitcoin-cli AND TARGET bitcoin-tx AND TARGET bitcoin-wallet AND TARGET bitcoin-util AND TARGET test_bitcoin)
# TODO: Consider replacing this code with the CPack NSIS Generator.
# See https://cmake.org/cmake/help/latest/cpack_gen/nsis.html
include(GenerateSetupNsi)
@@ -59,7 +51,6 @@ function(add_windows_deploy_target)
add_custom_command(
OUTPUT ${PROJECT_BINARY_DIR}/bitcoin-win64-setup.exe
COMMAND ${CMAKE_COMMAND} -E make_directory ${PROJECT_BINARY_DIR}/release
COMMAND ${CMAKE_STRIP} $<TARGET_FILE:bitcoin> -o ${PROJECT_BINARY_DIR}/release/$<TARGET_FILE_NAME:bitcoin>
COMMAND ${CMAKE_STRIP} $<TARGET_FILE:bitcoin-qt> -o ${PROJECT_BINARY_DIR}/release/$<TARGET_FILE_NAME:bitcoin-qt>
COMMAND ${CMAKE_STRIP} $<TARGET_FILE:bitcoind> -o ${PROJECT_BINARY_DIR}/release/$<TARGET_FILE_NAME:bitcoind>
COMMAND ${CMAKE_STRIP} $<TARGET_FILE:bitcoin-cli> -o ${PROJECT_BINARY_DIR}/release/$<TARGET_FILE_NAME:bitcoin-cli>
@@ -67,7 +58,7 @@ function(add_windows_deploy_target)
COMMAND ${CMAKE_STRIP} $<TARGET_FILE:bitcoin-wallet> -o ${PROJECT_BINARY_DIR}/release/$<TARGET_FILE_NAME:bitcoin-wallet>
COMMAND ${CMAKE_STRIP} $<TARGET_FILE:bitcoin-util> -o ${PROJECT_BINARY_DIR}/release/$<TARGET_FILE_NAME:bitcoin-util>
COMMAND ${CMAKE_STRIP} $<TARGET_FILE:test_bitcoin> -o ${PROJECT_BINARY_DIR}/release/$<TARGET_FILE_NAME:test_bitcoin>
COMMAND ${MAKENSIS_EXECUTABLE} -V2 ${PROJECT_BINARY_DIR}/bitcoin-win64-setup.nsi
COMMAND makensis -V2 ${PROJECT_BINARY_DIR}/bitcoin-win64-setup.nsi
VERBATIM
)
add_custom_target(deploy DEPENDS ${PROJECT_BINARY_DIR}/bitcoin-win64-setup.exe)
@@ -92,7 +83,6 @@ function(add_macos_deploy_target)
COMMAND ${CMAKE_COMMAND} --install ${PROJECT_BINARY_DIR} --config $<CONFIG> --component bitcoin-qt --prefix ${macos_app}/Contents/MacOS --strip
COMMAND ${CMAKE_COMMAND} -E rename ${macos_app}/Contents/MacOS/bin/$<TARGET_FILE_NAME:bitcoin-qt> ${macos_app}/Contents/MacOS/Bitcoin-Qt
COMMAND ${CMAKE_COMMAND} -E rm -rf ${macos_app}/Contents/MacOS/bin
COMMAND ${CMAKE_COMMAND} -E rm -rf ${macos_app}/Contents/MacOS/share
VERBATIM
)
@@ -100,7 +90,7 @@ function(add_macos_deploy_target)
if(CMAKE_HOST_APPLE)
add_custom_command(
OUTPUT ${PROJECT_BINARY_DIR}/${osx_volname}.zip
COMMAND Python3::Interpreter ${PROJECT_SOURCE_DIR}/contrib/macdeploy/macdeployqtplus ${macos_app} ${osx_volname} -translations-dir=${QT_TRANSLATIONS_DIR} -zip
COMMAND ${PYTHON_COMMAND} ${PROJECT_SOURCE_DIR}/contrib/macdeploy/macdeployqtplus ${macos_app} ${osx_volname} -translations-dir=${QT_TRANSLATIONS_DIR} -zip
DEPENDS ${PROJECT_BINARY_DIR}/${macos_app}/Contents/MacOS/Bitcoin-Qt
VERBATIM
)
@@ -113,7 +103,7 @@ function(add_macos_deploy_target)
else()
add_custom_command(
OUTPUT ${PROJECT_BINARY_DIR}/dist/${macos_app}/Contents/MacOS/Bitcoin-Qt
COMMAND ${CMAKE_COMMAND} -E env OBJDUMP=${CMAKE_OBJDUMP} $<TARGET_FILE:Python3::Interpreter> ${PROJECT_SOURCE_DIR}/contrib/macdeploy/macdeployqtplus ${macos_app} ${osx_volname} -translations-dir=${QT_TRANSLATIONS_DIR}
COMMAND OBJDUMP=${CMAKE_OBJDUMP} ${PYTHON_COMMAND} ${PROJECT_SOURCE_DIR}/contrib/macdeploy/macdeployqtplus ${macos_app} ${osx_volname} -translations-dir=${QT_TRANSLATIONS_DIR}
DEPENDS ${PROJECT_BINARY_DIR}/${macos_app}/Contents/MacOS/Bitcoin-Qt
VERBATIM
)
@@ -121,22 +111,16 @@ function(add_macos_deploy_target)
DEPENDS ${PROJECT_BINARY_DIR}/dist/${macos_app}/Contents/MacOS/Bitcoin-Qt
)
find_program(ZIP_EXECUTABLE zip)
if(NOT ZIP_EXECUTABLE)
add_custom_target(deploy
COMMAND ${CMAKE_COMMAND} -E echo "Error: ZIP not found"
)
else()
add_custom_command(
OUTPUT ${PROJECT_BINARY_DIR}/dist/${osx_volname}.zip
WORKING_DIRECTORY dist
COMMAND ${PROJECT_SOURCE_DIR}/cmake/script/macos_zip.sh ${ZIP_EXECUTABLE} ${osx_volname}.zip
VERBATIM
)
add_custom_target(deploy
DEPENDS ${PROJECT_BINARY_DIR}/dist/${osx_volname}.zip
)
endif()
find_program(ZIP_COMMAND zip REQUIRED)
add_custom_command(
OUTPUT ${PROJECT_BINARY_DIR}/dist/${osx_volname}.zip
WORKING_DIRECTORY dist
COMMAND ${PROJECT_SOURCE_DIR}/cmake/script/macos_zip.sh ${ZIP_COMMAND} ${osx_volname}.zip
VERBATIM
)
add_custom_target(deploy
DEPENDS ${PROJECT_BINARY_DIR}/dist/${osx_volname}.zip
)
endif()
add_dependencies(deploydir bitcoin-qt)
add_dependencies(deploy deploydir)


@@ -105,13 +105,14 @@ function(remove_cxx_flag_from_all_configs flag)
endfunction()
function(replace_cxx_flag_in_config config old_flag new_flag)
string(TOUPPER "CMAKE_CXX_FLAGS_${config}" var_name)
if("${var_name}" IN_LIST precious_variables)
return()
endif()
string(REGEX REPLACE "(^| )${old_flag}( |$)" "\\1${new_flag}\\2" ${var_name} "${${var_name}}")
set(${var_name} "${${var_name}}" PARENT_SCOPE)
set_property(CACHE ${var_name} PROPERTY VALUE "${${var_name}}")
string(TOUPPER "${config}" config_uppercase)
string(REGEX REPLACE "(^| )${old_flag}( |$)" "\\1${new_flag}\\2" new_flags "${CMAKE_CXX_FLAGS_${config_uppercase}}")
set(CMAKE_CXX_FLAGS_${config_uppercase} "${new_flags}" PARENT_SCOPE)
set(CMAKE_CXX_FLAGS_${config_uppercase} "${new_flags}"
CACHE STRING
"Flags used by the CXX compiler during ${config_uppercase} builds."
FORCE
)
endfunction()
set_default_config(RelWithDebInfo)


@@ -38,7 +38,8 @@ function(test_append_socket_library target)
message(FATAL_ERROR "Cannot figure out how to use getifaddrs/freeifaddrs.")
endif()
endif()
set(HAVE_IFADDRS TRUE PARENT_SCOPE)
set(HAVE_DECL_GETIFADDRS TRUE PARENT_SCOPE)
set(HAVE_DECL_FREEIFADDRS TRUE PARENT_SCOPE)
endfunction()
# Clang, when building for 32-bit,


@@ -20,7 +20,7 @@ In configuration output, this function prints a string by the following pattern:
function(try_append_linker_flag flag)
cmake_parse_arguments(PARSE_ARGV 1
TALF # prefix
"NO_CACHE_IF_FAILED" # options
"" # options
"TARGET;VAR;SOURCE;RESULT_VAR" # one_value_keywords
"IF_CHECK_PASSED" # multi_value_keywords
)
@@ -68,10 +68,6 @@ function(try_append_linker_flag flag)
if(DEFINED TALF_RESULT_VAR)
set(${TALF_RESULT_VAR} "${${result}}" PARENT_SCOPE)
endif()
if(NOT ${result} AND TALF_NO_CACHE_IF_FAILED)
unset(${result} CACHE)
endif()
endfunction()
if(MSVC)


@@ -1,52 +0,0 @@
# Copyright (c) 2023-present The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://opensource.org/license/mit/.
enable_language(C)
function(add_secp256k1 subdir)
message("")
message("Configuring secp256k1 subtree...")
set(BUILD_SHARED_LIBS OFF)
set(CMAKE_EXPORT_COMPILE_COMMANDS OFF)
set(SECP256K1_ENABLE_MODULE_ECDH OFF CACHE BOOL "" FORCE)
set(SECP256K1_ENABLE_MODULE_RECOVERY ON CACHE BOOL "" FORCE)
set(SECP256K1_ENABLE_MODULE_MUSIG ON CACHE BOOL "" FORCE)
set(SECP256K1_BUILD_BENCHMARK OFF CACHE BOOL "" FORCE)
set(SECP256K1_BUILD_TESTS ${BUILD_TESTS} CACHE BOOL "" FORCE)
set(SECP256K1_BUILD_EXHAUSTIVE_TESTS ${BUILD_TESTS} CACHE BOOL "" FORCE)
if(NOT BUILD_TESTS)
# Always skip the ctime tests, if we are building no other tests.
# Otherwise, they are built if Valgrind is available. See SECP256K1_VALGRIND.
set(SECP256K1_BUILD_CTIME_TESTS ${BUILD_TESTS} CACHE BOOL "" FORCE)
endif()
set(SECP256K1_BUILD_EXAMPLES OFF CACHE BOOL "" FORCE)
include(GetTargetInterface)
# -fsanitize and related flags apply to both C++ and C,
# so we can pass them down to libsecp256k1 as CFLAGS and LDFLAGS.
get_target_interface(SECP256K1_APPEND_CFLAGS "" sanitize_interface COMPILE_OPTIONS)
string(STRIP "${SECP256K1_APPEND_CFLAGS} ${APPEND_CPPFLAGS}" SECP256K1_APPEND_CFLAGS)
string(STRIP "${SECP256K1_APPEND_CFLAGS} ${APPEND_CFLAGS}" SECP256K1_APPEND_CFLAGS)
set(SECP256K1_APPEND_CFLAGS ${SECP256K1_APPEND_CFLAGS} CACHE STRING "" FORCE)
get_target_interface(SECP256K1_APPEND_LDFLAGS "" sanitize_interface LINK_OPTIONS)
string(STRIP "${SECP256K1_APPEND_LDFLAGS} ${APPEND_LDFLAGS}" SECP256K1_APPEND_LDFLAGS)
set(SECP256K1_APPEND_LDFLAGS ${SECP256K1_APPEND_LDFLAGS} CACHE STRING "" FORCE)
# We want to build libsecp256k1 with the most tested RelWithDebInfo configuration.
foreach(config IN LISTS CMAKE_BUILD_TYPE CMAKE_CONFIGURATION_TYPES)
if(config STREQUAL "")
continue()
endif()
string(TOUPPER "${config}" config)
set(CMAKE_C_FLAGS_${config} "${CMAKE_C_FLAGS_RELWITHDEBINFO}")
endforeach()
# If the CFLAGS environment variable is defined during building depends
# and configuring this build system, its content might be duplicated.
if(DEFINED ENV{CFLAGS})
deduplicate_flags(CMAKE_C_FLAGS)
endif()
add_subdirectory(${subdir})
set_target_properties(secp256k1 PROPERTIES
EXCLUDE_FROM_ALL TRUE
)
endfunction()

15
cmake/tests.cmake Normal file

@@ -0,0 +1,15 @@
# Copyright (c) 2023-present The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://opensource.org/license/mit/.
if(TARGET bitcoin-util AND TARGET bitcoin-tx AND PYTHON_COMMAND)
add_test(NAME util_test_runner
COMMAND ${CMAKE_COMMAND} -E env BITCOINUTIL=$<TARGET_FILE:bitcoin-util> BITCOINTX=$<TARGET_FILE:bitcoin-tx> ${PYTHON_COMMAND} ${PROJECT_BINARY_DIR}/test/util/test_runner.py
)
endif()
if(PYTHON_COMMAND)
add_test(NAME util_rpcauth_test
COMMAND ${PYTHON_COMMAND} ${PROJECT_BINARY_DIR}/test/util/rpcauth-test.py
)
endif()
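Assuming the project was configured into a directory named build, the tests registered above can then be run through CTest, for example:

```
ctest --test-dir build -R 'util_test_runner|util_rpcauth_test' --output-on-failure
```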


@@ -1,15 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<assemblyIdentity
type="win32"
name="org.bitcoincore.${target}"
version="${CLIENT_VERSION_MAJOR}.${CLIENT_VERSION_MINOR}.${CLIENT_VERSION_BUILD}.0"
/>
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
<security>
<requestedPrivileges>
<requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel>
</requestedPrivileges>
</security>
</trustInfo>
</assembly>


@@ -9,75 +9,4 @@ Example usage:
python3 asmap-tool.py encode /path/to/input.file /path/to/output.file
python3 asmap-tool.py decode /path/to/input.file /path/to/output.file
python3 asmap-tool.py diff /path/to/first.file /path/to/second.file
python3 asmap-tool.py diff_addrs /path/to/first.file /path/to/second.file addrs.file
```
These commands may take a few minutes to run with `python3`,
depending on the amount of data involved and your machine specs.
Consider using `pypy3` for a faster run time.
### Encoding and Decoding
ASmap files are somewhat large in text form, and need to be encoded
to binary before being used with Bitcoin Core.
The `encode` command takes an ASmap and an output file.
The `--fill`/`-f` flag further reduces the size of the output file
by assuming an AS assignment for an unmapped network if an adjacent network is assigned.
This procedure is lossy, in the sense that it loses information
about which ranges were unassigned.
However, if the input ASmap is incomplete,
this procedure will also reassign ranges that should have an AS assignment,
resulting in an ASmap that may diverge from reality significantly.
Finally, another consequence is that the resulting encoded file
will no longer be meaningful for diffs.
Therefore only use `--fill` if
you want to optimise space, you have a reasonably complete ASmap,
and do not intend to diff the file at a later time.
The `decode` command takes an encoded ASmap and an output file.
As with `encode`, the `--fill`/`-f` flag reduces the output file size
by reassigning subnets. Conversely, the `--non-overlapping`/`-n` flag
increases output size by outputting strictly non-overlapping network ranges.
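For instance, the flags described above combine with the usual invocation like so (file paths are placeholders):

```
python3 asmap-tool.py encode --fill /path/to/input.file /path/to/encoded.file
python3 asmap-tool.py decode -n /path/to/encoded.file /path/to/decoded.file
```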
### Comparing ASmaps
AS control of IP networks changes frequently, therefore it can be useful to get
the changes between two ASmaps via the `diff` and `diff_addrs` commands.
`diff` takes two ASmap files, and returns a file detailing the changes
in the state of a network's AS assignment between the first and the second file.
This command may take a few minutes to run, depending on your machine.
The example below shows the three possible output states:
- reassigned to a new AS (`AS26496 # was AS20738`),
- present in the first but not the second (`# 220.157.65.0/24 was AS9723`),
- or present in the second but not the first (`# was unassigned`).
```
217.199.160.0/19 AS26496 # was AS20738
# 220.157.65.0/24 was AS9723
216.151.172.0/23 AS400080 # was unassigned
2001:470:49::/48 AS20205 # was AS6939
# 2001:678:bd0::/48 was AS207631
2001:67c:308::/48 AS26496 # was unassigned
```
`diff` accepts a `--ignore-unassigned`/`-i` flag
which ignores networks present in the second but not the first.
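For example, to report only reassignments and removals while skipping ranges that were previously unassigned:

```
python3 asmap-tool.py diff --ignore-unassigned /path/to/first.file /path/to/second.file
```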
`diff_addrs` is intended to provide changes between two ASmaps and
a node's known peers.
The command takes two ASmap files, and a file of IP addresses as output by
the `bitcoin-cli getnodeaddresses` command.
It returns the changes between the two ASmaps for the peer IPs provided in
the `getnodeaddresses` output.
The resulting file is in the same format as the `diff` command shown above.
You can output address data to a file:
```
bitcoin-cli getnodeaddresses 0 > addrs.json
```
and pass in the address file as the third argument:
```
python3 asmap-tool.py diff_addrs path/to/first.file path/to/second.file addrs.json
```


@@ -39,7 +39,7 @@ _bitcoin_cli() {
if ((cword > 4)); then
case ${words[cword-4]} in
listtransactions|setban)
importaddress|listtransactions|setban)
COMPREPLY=( $( compgen -W "true false" -- "$cur" ) )
return 0
;;
@@ -52,7 +52,10 @@ _bitcoin_cli() {
if ((cword > 3)); then
case ${words[cword-3]} in
getbalance|gettxout|listreceivedbyaddress|listsinceblock)
addmultisigaddress)
return 0
;;
getbalance|gettxout|importaddress|importpubkey|importprivkey|listreceivedbyaddress|listsinceblock)
COMPREPLY=( $( compgen -W "true false" -- "$cur" ) )
return 0
;;
@@ -77,7 +80,7 @@ _bitcoin_cli() {
fi
case "$prev" in
backupwallet)
backupwallet|dumpwallet|importwallet)
_filedir
return 0
;;


@@ -5,7 +5,7 @@ Upstream-Contact: Satoshi Nakamoto <satoshin@gmx.com>
Source: https://github.com/bitcoin/bitcoin
Files: *
Copyright: 2009-2026, Bitcoin Core Developers
Copyright: 2009-2025, Bitcoin Core Developers
License: Expat
Comment: The Bitcoin Core Developers encompasses all contributors to the
project, listed in the release notes or the git log.
@@ -121,6 +121,17 @@ Comment:
You should have received a copy of the GNU General Public License along
with this program. If not, see <http://www.gnu.org/licenses/>.
License: GPL-3+
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU General Public License, Version 3 or any
later version published by the Free Software Foundation.
Comment:
On Debian systems the GNU General Public License (GPL) version 3 is
located in '/usr/share/common-licenses/GPL-3'.
.
You should have received a copy of the GNU General Public License along
with this program. If not, see <http://www.gnu.org/licenses/>.
License: public-domain
This work is in the public domain.


@@ -8,42 +8,18 @@ deterministic-fuzz-coverage
A tool to check for non-determinism in fuzz coverage. To get the help, run:
```
cargo run --manifest-path ./contrib/devtools/deterministic-fuzz-coverage/Cargo.toml -- --help
RUST_BACKTRACE=1 cargo run --manifest-path ./contrib/devtools/deterministic-fuzz-coverage/Cargo.toml -- --help
```
To execute the tool, compilation has to be done with the build options:
To execute the tool, compilation has to be done with the build options
`-DCMAKE_C_COMPILER='clang' -DCMAKE_CXX_COMPILER='clang++'
-DBUILD_FOR_FUZZING=ON -DCMAKE_CXX_FLAGS='-fPIC -fprofile-instr-generate
-fcoverage-mapping'`. Both llvm-profdata and llvm-cov must be installed. Also,
the qa-assets repository must have been cloned. Finally, a fuzz target has to
be picked before running the tool:
```
-DCMAKE_C_COMPILER='clang' -DCMAKE_CXX_COMPILER='clang++' -DBUILD_FOR_FUZZING=ON -DCMAKE_CXX_FLAGS='-fprofile-instr-generate -fcoverage-mapping'
```
Both llvm-profdata and llvm-cov must be installed. Also, the qa-assets
repository must have been cloned. Finally, a fuzz target has to be picked
before running the tool:
```
cargo run --manifest-path ./contrib/devtools/deterministic-fuzz-coverage/Cargo.toml -- $PWD/build_dir $PWD/qa-assets/fuzz_corpora fuzz_target_name
```
deterministic-unittest-coverage
===========================
A tool to check for non-determinism in unit-test coverage. To get the help, run:
```
cargo run --manifest-path ./contrib/devtools/deterministic-unittest-coverage/Cargo.toml -- --help
```
To execute the tool, compilation has to be done with the build options:
```
-DCMAKE_C_COMPILER='clang' -DCMAKE_CXX_COMPILER='clang++' -DCMAKE_CXX_FLAGS='-fprofile-instr-generate -fcoverage-mapping'
```
Both llvm-profdata and llvm-cov must be installed.
```
cargo run --manifest-path ./contrib/devtools/deterministic-unittest-coverage/Cargo.toml -- $PWD/build_dir <boost unittest filter>
RUST_BACKTRACE=1 cargo run --manifest-path ./contrib/devtools/deterministic-fuzz-coverage/Cargo.toml -- $PWD/build_dir $PWD/qa-assets/corpora-dir fuzz_target_name
```
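Putting the steps above together, an end-to-end run might look as follows; the build directory name is only a placeholder:

```
cmake -B build_fuzz -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
  -DBUILD_FOR_FUZZING=ON -DCMAKE_CXX_FLAGS='-fprofile-instr-generate -fcoverage-mapping'
cmake --build build_fuzz
cargo run --manifest-path ./contrib/devtools/deterministic-fuzz-coverage/Cargo.toml -- \
  "$PWD/build_fuzz" "$PWD/qa-assets/fuzz_corpora" fuzz_target_name
```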
clang-format-diff.py
@@ -159,6 +135,35 @@ For example:
BUILDDIR=$PWD/my-build-dir contrib/devtools/gen-bitcoin-conf.sh
```
security-check.py
=================
Perform basic security checks on a series of executables.
symbol-check.py
===============
A script to check that release executables only contain
certain symbols and are only linked against allowed libraries.
For Linux this means checking for allowed gcc, glibc and libstdc++ version symbols.
This makes sure they are still compatible with the minimum supported distribution versions.
For macOS and Windows we check that the executables are only linked against libraries we allow.
Example usage:
find ../path/to/executables -type f -executable | xargs python3 contrib/devtools/symbol-check.py
If no errors occur the return value will be 0 and the output will be empty.
If there are any errors the return value will be 1 and output like this will be printed:
.../64/test_bitcoin: symbol memcpy from unsupported version GLIBC_2.14
.../64/test_bitcoin: symbol __fdelt_chk from unsupported version GLIBC_2.15
.../64/test_bitcoin: symbol std::out_of_range::~out_of_range() from unsupported version GLIBCXX_3.4.15
.../64/test_bitcoin: symbol _ZNSt8__detail15_List_nod from unsupported version GLIBCXX_3.4.15
circular-dependencies.py
========================


@@ -41,6 +41,9 @@ ALLOWED_DEPENDENCIES+=(
# Declare list of known errors that should be suppressed.
declare -A SUPPRESS
# init.cpp file currently calls Berkeley DB sanity check function on startup, so
# there is an undocumented dependency of the node library on the wallet library.
SUPPRESS["init.cpp.o bdb.cpp.o _ZN6wallet27BerkeleyDatabaseSanityCheckEv"]=1
# init/common.cpp file calls InitError and InitWarning from interface_ui which
# is currently part of the node library. interface_ui should just be part of the
# common library instead, and is moved in


@@ -22,6 +22,7 @@ EXCLUDE = [
'src/test/fuzz/FuzzedDataProvider.h',
'src/tinyformat.h',
'src/bench/nanobench.h',
'test/functional/test_framework/bignum.py',
# python init:
'*__init__.py',
]
@@ -93,6 +94,7 @@ EXPECTED_HOLDER_NAMES = [
r"Satoshi Nakamoto",
r"The Bitcoin Core developers",
r"BitPay Inc\.",
r"University of Illinois at Urbana-Champaign\.",
r"Pieter Wuille",
r"Wladimir J\. van der Laan",
r"Jeff Garzik",


@@ -1,7 +0,0 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 4
[[package]]
name = "deterministic-fuzz-coverage"
version = "0.1.0"


@@ -2,102 +2,89 @@
// Distributed under the MIT software license, see the accompanying
// file COPYING or https://opensource.org/license/mit/.
use std::collections::VecDeque;
use std::env;
use std::fs::{read_dir, DirEntry, File};
use std::path::{Path, PathBuf};
use std::process::{Command, ExitCode};
use std::fs::{read_dir, File};
use std::path::Path;
use std::process::{exit, Command, Stdio};
use std::str;
use std::thread;
/// A type for a complete and readable error message.
type AppError = String;
type AppResult = Result<(), AppError>;
const LLVM_PROFDATA: &str = "llvm-profdata";
const LLVM_COV: &str = "llvm-cov";
const GIT: &str = "git";
const DIFF: &str = "diff";
const DEFAULT_PAR: usize = 1;
fn exit_help(err: &str) -> AppError {
format!(
r#"
Error: {err}
Usage: program ./build_dir ./qa-assets/fuzz_corpora fuzz_target_name [parallelism={DEFAULT_PAR}]
Refer to the devtools/README.md for more details."#
)
fn exit_help(err: &str) -> ! {
eprintln!("Error: {}", err);
eprintln!();
eprintln!("Usage: program ./build_dir ./qa-assets-corpora-dir fuzz_target");
eprintln!();
eprintln!("Refer to the devtools/README.md for more details.");
exit(1)
}
fn sanity_check(corpora_dir: &Path, fuzz_exe: &Path) -> AppResult {
for tool in [LLVM_PROFDATA, LLVM_COV, GIT] {
let output = Command::new(tool).arg("--help").output();
fn sanity_check(corpora_dir: &Path, fuzz_exe: &Path) {
for tool in [LLVM_PROFDATA, LLVM_COV, DIFF] {
let output = Command::new(tool).arg("--version").output();
match output {
Ok(output) if output.status.success() => {}
_ => Err(exit_help(&format!("The tool {} is not installed", tool)))?,
_ => {
exit_help(&format!("The tool {} is not installed", tool));
}
}
}
if !corpora_dir.is_dir() {
Err(exit_help(&format!(
exit_help(&format!(
"Fuzz corpora path ({}) must be a directory",
corpora_dir.display()
)))?;
));
}
if !fuzz_exe.exists() {
Err(exit_help(&format!(
exit_help(&format!(
"Fuzz executable ({}) not found",
fuzz_exe.display()
)))?;
));
}
Ok(())
}
fn app() -> AppResult {
fn main() {
// Parse args
let args = env::args().collect::<Vec<_>>();
let build_dir = args.get(1).ok_or(exit_help("Must set build dir"))?;
let build_dir = args
.get(1)
.unwrap_or_else(|| exit_help("Must set build dir"));
if build_dir == "--help" {
Err(exit_help("--help requested"))?;
exit_help("--help requested")
}
let corpora_dir = args.get(2).ok_or(exit_help("Must set fuzz corpora dir"))?;
let corpora_dir = args
.get(2)
.unwrap_or_else(|| exit_help("Must set fuzz corpora dir"));
let fuzz_target = args
.get(3)
// Require fuzz target for now. In the future it could be optional and the tool could
// iterate over all compiled fuzz targets
.ok_or(exit_help("Must set fuzz target"))?;
let par = match args.get(4) {
Some(s) => s
.parse::<usize>()
.map_err(|e| exit_help(&format!("Could not parse parallelism as usize ({s}): {e}")))?,
None => DEFAULT_PAR,
}
.max(1);
if args.get(5).is_some() {
Err(exit_help("Too many args"))?;
.unwrap_or_else(|| exit_help("Must set fuzz target"));
if args.get(4).is_some() {
exit_help("Too many args")
}
let build_dir = Path::new(build_dir);
let corpora_dir = Path::new(corpora_dir);
let fuzz_exe = build_dir.join("bin/fuzz");
let fuzz_exe = build_dir.join("src/test/fuzz/fuzz");
sanity_check(corpora_dir, &fuzz_exe)?;
sanity_check(corpora_dir, &fuzz_exe);
deterministic_coverage(build_dir, corpora_dir, &fuzz_exe, fuzz_target, par)
deterministic_coverage(build_dir, corpora_dir, &fuzz_exe, fuzz_target);
}
fn using_libfuzzer(fuzz_exe: &Path) -> Result<bool, AppError> {
fn using_libfuzzer(fuzz_exe: &Path) -> bool {
println!("Check if using libFuzzer ...");
let stderr = Command::new(fuzz_exe)
.arg("-help=1") // Will be interpreted as option (libfuzzer) or as input file
.env("FUZZ", "addition_overflow") // Any valid target
.output()
.map_err(|e| format!("fuzz failed with {e}"))?
.expect("fuzz failed")
.stderr;
let help_output = str::from_utf8(&stderr)
.map_err(|e| format!("The libFuzzer -help=1 output must be valid text ({e})"))?;
Ok(help_output.contains("libFuzzer"))
let help_output = str::from_utf8(&stderr).expect("The -help=1 output must be valid text");
help_output.contains("libFuzzer")
}
fn deterministic_coverage(
@@ -105,177 +92,115 @@ fn deterministic_coverage(
corpora_dir: &Path,
fuzz_exe: &Path,
fuzz_target: &str,
par: usize,
) -> AppResult {
let using_libfuzzer = using_libfuzzer(fuzz_exe)?;
if using_libfuzzer {
println!("Warning: The fuzz executable was compiled with libFuzzer as sanitizer.");
println!("This tool may be tripped by libFuzzer misbehavior.");
println!("It is recommended to compile without libFuzzer.");
}
) {
let using_libfuzzer = using_libfuzzer(fuzz_exe);
let profraw_file = build_dir.join("fuzz_det_cov.profraw");
let profdata_file = build_dir.join("fuzz_det_cov.profdata");
let corpus_dir = corpora_dir.join(fuzz_target);
let mut entries = read_dir(&corpus_dir)
.map_err(|err| {
.unwrap_or_else(|err| {
exit_help(&format!(
"The fuzz target's input directory must exist! ({}; {})",
corpus_dir.display(),
err
))
})?
})
.map(|entry| entry.expect("IO error"))
.collect::<Vec<_>>();
entries.sort_by_key(|entry| entry.file_name());
let run_single = |run_id: char, entry: &Path, thread_id: usize| -> Result<PathBuf, AppError> {
let cov_txt_path = build_dir.join(format!("fuzz_det_cov.show.t{thread_id}.{run_id}.txt"));
let profraw_file = build_dir.join(format!("fuzz_det_cov.t{thread_id}.{run_id}.profraw"));
let profdata_file = build_dir.join(format!("fuzz_det_cov.t{thread_id}.{run_id}.profdata"));
{
let output = {
let run_single = |run_id: u8, entry: &Path| {
let cov_txt_path = build_dir.join(format!("fuzz_det_cov.show.{run_id}.txt"));
assert!({
{
let mut cmd = Command::new(fuzz_exe);
if using_libfuzzer {
cmd.args(["-runs=1", "-shuffle=1", "-prefer_small=0"]);
cmd.arg("-runs=1");
}
cmd
}
.env("LLVM_PROFILE_FILE", &profraw_file)
.env("FUZZ", fuzz_target)
.arg(entry)
.output()
.map_err(|e| format!("fuzz failed: {e}"))?;
if !output.status.success() {
Err(format!(
"fuzz failed!\nstdout:\n{}\nstderr:\n{}\n",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr)
))?;
}
}
if !Command::new(LLVM_PROFDATA)
.status()
.expect("fuzz failed")
.success()
});
assert!(Command::new(LLVM_PROFDATA)
.arg("merge")
.arg("--sparse")
.arg(&profraw_file)
.arg("-o")
.arg(&profdata_file)
.status()
.map_err(|e| format!("{LLVM_PROFDATA} merge failed with {e}"))?
.success()
{
Err(format!("{LLVM_PROFDATA} merge failed. This can be a sign of compiling without code coverage support."))?;
}
let cov_file = File::create(&cov_txt_path)
.map_err(|e| format!("Failed to create coverage txt file ({e})"))?;
if !Command::new(LLVM_COV)
.expect("merge failed")
.success());
let cov_file = File::create(&cov_txt_path).expect("Failed to create coverage txt file");
let passed = Command::new(LLVM_COV)
.args([
"show",
"--show-line-counts-or-regions",
"--show-branches=count",
"--show-expansions",
"--show-instantiation-summary",
"-Xdemangler=llvm-cxxfilt",
&format!("--instr-profile={}", profdata_file.display()),
])
.arg(fuzz_exe)
.stdout(cov_file)
.stdout(Stdio::from(cov_file))
.spawn()
.map_err(|e| format!("{LLVM_COV} show failed with {e}"))?
.expect("Failed to execute llvm-cov")
.wait()
.map_err(|e| format!("{LLVM_COV} show failed with {e}"))?
.success()
{
Err(format!("{LLVM_COV} show failed"))?;
};
Ok(cov_txt_path)
.expect("Failed to execute llvm-cov")
.success();
if !passed {
panic!("Failed to execute llvm-profdata")
}
cov_txt_path
};
let check_diff = |a: &Path, b: &Path, err: &str| -> AppResult {
let same = Command::new(GIT)
.args(["--no-pager", "diff", "--no-index"])
let check_diff = |a: &Path, b: &Path, err: &str| {
let same = Command::new(DIFF)
.arg("--unified")
.arg(a)
.arg(b)
.status()
.map_err(|e| format!("{GIT} diff failed with {e}"))?
.expect("Failed to execute diff command")
.success();
if !same {
Err(format!(
r#"
The coverage was not deterministic between runs.
{err}"#
))?;
eprintln!();
eprintln!("The coverage was not determinstic between runs.");
eprintln!("{}", err);
eprintln!("Exiting.");
exit(1);
}
Ok(())
};
// First, check that each fuzz input is deterministic running by itself in a process.
// First, check that each fuzz input is determinisic running by itself in a process.
//
// This can catch issues and isolate where a single fuzz input triggers non-determinism, but
// all other fuzz inputs are deterministic.
//
// Also, This can catch issues where several fuzz inputs are non-deterministic, but the sum of
// their overall coverage trace remains the same across runs and thus remains undetected.
println!(
"Check each fuzz input individually ... ({} inputs with parallelism {par})",
entries.len()
);
let check_individual = |entry: &DirEntry, thread_id: usize| -> AppResult {
for entry in entries {
let entry = entry.path();
if !entry.is_file() {
Err(format!("{} should be a file", entry.display()))?;
}
let cov_txt_base = run_single('a', &entry, thread_id)?;
let cov_txt_repeat = run_single('b', &entry, thread_id)?;
assert!(entry.is_file());
let cov_txt_base = run_single(0, &entry);
let cov_txt_repeat = run_single(1, &entry);
check_diff(
&cov_txt_base,
&cov_txt_repeat,
&format!("The fuzz target input was {}.", entry.display()),
)?;
Ok(())
};
thread::scope(|s| -> AppResult {
let mut handles = VecDeque::with_capacity(par);
let mut res = Ok(());
for (i, entry) in entries.iter().enumerate() {
println!("[{}/{}]", i + 1, entries.len());
handles.push_back(s.spawn(move || check_individual(entry, i % par)));
while handles.len() >= par || i == (entries.len() - 1) || res.is_err() {
if let Some(th) = handles.pop_front() {
let thread_result = match th.join() {
Err(_e) => Err("A scoped thread panicked".to_string()),
Ok(r) => r,
};
if thread_result.is_err() {
res = thread_result;
}
} else {
return res;
}
}
}
res
})?;
);
}
// Finally, check that running over all fuzz inputs in one process is deterministic as well.
// This can catch issues where mutable global state is leaked from one fuzz input execution to
// the next.
println!("Check all fuzz inputs in one go ...");
{
if !corpus_dir.is_dir() {
Err(format!("{} should be a folder", corpus_dir.display()))?;
}
let cov_txt_base = run_single('a', &corpus_dir, 0)?;
let cov_txt_repeat = run_single('b', &corpus_dir, 0)?;
assert!(corpus_dir.is_dir());
let cov_txt_base = run_single(0, &corpus_dir);
let cov_txt_repeat = run_single(1, &corpus_dir);
check_diff(
&cov_txt_base,
&cov_txt_repeat,
&format!("All fuzz inputs in {} were used.", corpus_dir.display()),
)?;
}
println!("✨ Coverage test passed for {fuzz_target}. ✨");
Ok(())
}
fn main() -> ExitCode {
match app() {
Ok(()) => ExitCode::SUCCESS,
Err(err) => {
eprintln!("⚠️\n{}", err);
ExitCode::FAILURE
}
);
}
println!("Coverage test passed for {fuzz_target}.");
}


@@ -1,7 +0,0 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 4
[[package]]
name = "deterministic-unittest-coverage"
version = "0.1.0"


@@ -1,6 +0,0 @@
[package]
name = "deterministic-unittest-coverage"
version = "0.1.0"
edition = "2021"
[dependencies]


@@ -1,149 +0,0 @@
// Copyright (c) The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or https://opensource.org/license/mit/.
use std::env;
use std::fs::File;
use std::path::{Path, PathBuf};
use std::process::{Command, ExitCode};
use std::str;
/// A type for a complete and readable error message.
type AppError = String;
type AppResult = Result<(), AppError>;
const LLVM_PROFDATA: &str = "llvm-profdata";
const LLVM_COV: &str = "llvm-cov";
const GIT: &str = "git";
fn exit_help(err: &str) -> AppError {
format!(
r#"
Error: {err}
Usage: program ./build_dir boost_unittest_filter
Refer to the devtools/README.md for more details."#
)
}
fn sanity_check(test_exe: &Path) -> AppResult {
for tool in [LLVM_PROFDATA, LLVM_COV, GIT] {
let output = Command::new(tool).arg("--help").output();
match output {
Ok(output) if output.status.success() => {}
_ => Err(exit_help(&format!("The tool {} is not installed", tool)))?,
}
}
if !test_exe.exists() {
Err(exit_help(&format!(
"Test executable ({}) not found",
test_exe.display()
)))?;
}
Ok(())
}
fn app() -> AppResult {
// Parse args
let args = env::args().collect::<Vec<_>>();
let build_dir = args.get(1).ok_or(exit_help("Must set build dir"))?;
if build_dir == "--help" {
Err(exit_help("--help requested"))?;
}
let filter = args
.get(2)
// Require filter for now. In the future it could be optional and the tool could provide a
// default filter.
.ok_or(exit_help("Must set boost test filter"))?;
if args.get(3).is_some() {
Err(exit_help("Too many args"))?;
}
let build_dir = Path::new(build_dir);
let test_exe = build_dir.join("bin/test_bitcoin");
sanity_check(&test_exe)?;
deterministic_coverage(build_dir, &test_exe, filter)
}
fn deterministic_coverage(build_dir: &Path, test_exe: &Path, filter: &str) -> AppResult {
let profraw_file = build_dir.join("test_det_cov.profraw");
let profdata_file = build_dir.join("test_det_cov.profdata");
let run_single = |run_id: char| -> Result<PathBuf, AppError> {
println!("Run with id {run_id}");
let cov_txt_path = build_dir.join(format!("test_det_cov.show.{run_id}.txt"));
if !Command::new(test_exe)
.env("LLVM_PROFILE_FILE", &profraw_file)
.env("BOOST_TEST_RUN_FILTERS", filter)
.env("RANDOM_CTX_SEED", "21")
.status()
.map_err(|e| format!("test failed with {e}"))?
.success()
{
Err("test failed".to_string())?;
}
if !Command::new(LLVM_PROFDATA)
.arg("merge")
.arg("--sparse")
.arg(&profraw_file)
.arg("-o")
.arg(&profdata_file)
.status()
.map_err(|e| format!("{LLVM_PROFDATA} merge failed with {e}"))?
.success()
{
Err(format!("{LLVM_PROFDATA} merge failed. This can be a sign of compiling without code coverage support."))?;
}
let cov_file = File::create(&cov_txt_path)
.map_err(|e| format!("Failed to create coverage txt file ({e})"))?;
if !Command::new(LLVM_COV)
.args([
"show",
"--show-line-counts-or-regions",
"--show-branches=count",
"--show-expansions",
"--show-instantiation-summary",
"-Xdemangler=llvm-cxxfilt",
&format!("--instr-profile={}", profdata_file.display()),
])
.arg(test_exe)
.stdout(cov_file)
.status()
.map_err(|e| format!("{LLVM_COV} show failed with {e}"))?
.success()
{
Err(format!("{LLVM_COV} show failed"))?;
}
Ok(cov_txt_path)
};
let check_diff = |a: &Path, b: &Path| -> AppResult {
let same = Command::new(GIT)
.args(["--no-pager", "diff", "--no-index"])
.arg(a)
.arg(b)
.status()
.map_err(|e| format!("{GIT} diff failed with {e}"))?
.success();
if !same {
Err("The coverage was not deterministic between runs.".to_string())?;
}
Ok(())
};
let r0 = run_single('a')?;
let r1 = run_single('b')?;
check_diff(&r0, &r1)?;
println!("✨ The coverage was deterministic across two runs. ✨");
Ok(())
}
fn main() -> ExitCode {
match app() {
Ok(()) => ExitCode::SUCCESS,
Err(err) => {
eprintln!("⚠️\n{}", err);
ExitCode::FAILURE
}
}
}


@@ -50,8 +50,7 @@ EOF
# adding newlines is a bit funky to ensure portability for BSD
# see here for more details: https://stackoverflow.com/a/24575385
${BITCOIND} --help \
| sed '1,/Options:/d' \
| sed -E '/^[[:space:]]{2}-help/,/^[[:space:]]*$/d' \
| sed '1,/Print this help message and exit/d' \
| sed -E 's/^[[:space:]]{2}\-/#/' \
| sed -E 's/^[[:space:]]{7}/# /' \
| sed -E '/[=[:space:]]/!s/#.*$/&=1/' \


@@ -3,14 +3,12 @@
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
import os
import re
import subprocess
import sys
import tempfile
import argparse
BINARIES = [
'bin/bitcoin',
'bin/bitcoind',
'bin/bitcoin-cli',
'bin/bitcoin-tx',
@@ -60,11 +58,10 @@ for relpath in BINARIES:
print(f'{abspath} not found or not an executable', file=sys.stderr)
sys.exit(1)
# take first line (which must contain version)
output = r.stdout.splitlines()[0]
# find the version e.g. v30.99.0-ce771726f3e7
search = re.search(r"v[0-9]\S+", output)
assert search
verstr = search.group(0)
verstr = r.stdout.splitlines()[0]
# last word of line is the actual version e.g. v22.99.0-5c6b3d5b3508
verstr = verstr.split()[-1]
assert verstr.startswith('v')
# remaining lines are copyright
copyright = r.stdout.split('\n')[1:]
assert copyright[0].startswith('Copyright (C)')


@@ -12,13 +12,13 @@ import random
# Parameters:
# Aim for still working fine at some point in the future. [datetime]
TIME = datetime(2028, 4, 2)
TIME = datetime(2027, 10, 6)
# Expected block interval. [timedelta]
BLOCK_INTERVAL = timedelta(seconds=600)
# The number of headers corresponding to the minchainwork parameter. [headers]
MINCHAINWORK_HEADERS = 912683
MINCHAINWORK_HEADERS = 886157
# Combined processing bandwidth from all attackers to one victim. [bit/s]
# 6 Gbit/s is approximately the speed at which a single thread of a Ryzen 5950X CPU thread can hash


@@ -6,10 +6,6 @@
Perform basic security checks on a series of executables.
Exit status will be 0 if successful, and the program will be silent.
Otherwise the exit status will be 1 and it will log which executables failed which checks.
Example usage:
find ../path/to/guix/binaries -type f -executable | xargs python3 contrib/guix/security-check.py
'''
import re
import sys
@@ -30,13 +26,13 @@ def check_ELF_RELRO(binary) -> bool:
# However, the dynamic linker need to write to this area so these are RW.
# Glibc itself takes care of mprotecting this area R after relocations are finished.
# See also https://marc.info/?l=binutils&m=1498883354122353
if segment.type == lief.ELF.Segment.TYPE.GNU_RELRO:
if segment.type == lief.ELF.SEGMENT_TYPES.GNU_RELRO:
have_gnu_relro = True
have_bindnow = False
try:
flags = binary.get(lief.ELF.DynamicEntry.TAG.FLAGS)
if flags.has(lief.ELF.DynamicEntryFlags.FLAG.BIND_NOW):
flags = binary.get(lief.ELF.DYNAMIC_TAGS.FLAGS)
if flags.value & lief.ELF.DYNAMIC_FLAGS.BIND_NOW:
have_bindnow = True
except Exception:
have_bindnow = False
@@ -55,9 +51,9 @@ def check_ELF_SEPARATE_CODE(binary):
based on their permissions. This checks for missing -Wl,-z,separate-code
and potentially other problems.
'''
R = lief.ELF.Segment.FLAGS.R
W = lief.ELF.Segment.FLAGS.W
E = lief.ELF.Segment.FLAGS.X
R = lief.ELF.SEGMENT_FLAGS.R
W = lief.ELF.SEGMENT_FLAGS.W
E = lief.ELF.SEGMENT_FLAGS.X
EXPECTED_FLAGS = {
# Read + execute
'.init': R | E,
@@ -99,7 +95,7 @@ def check_ELF_SEPARATE_CODE(binary):
# and for each section, remember the flags of the associated program header.
flags_per_section = {}
for segment in binary.segments:
if segment.type == lief.ELF.Segment.TYPE.LOAD:
if segment.type == lief.ELF.SEGMENT_TYPES.LOAD:
for section in segment.sections:
flags_per_section[section.name] = segment.flags
# Spot-check ELF LOAD program header flags per section
@@ -123,8 +119,8 @@ def check_ELF_CONTROL_FLOW(binary) -> bool:
def check_ELF_FORTIFY(binary) -> bool:
# bitcoin wrapper does not currently contain any fortified functions
if '--monolithic' in binary.strings:
# bitcoin-util does not currently contain any fortified functions
if 'Bitcoin Core bitcoin-util utility version ' in binary.strings:
return True
chk_funcs = set()
@@ -134,20 +130,21 @@ def check_ELF_FORTIFY(binary) -> bool:
if match:
chk_funcs.add(match.group(0))
# ignore stack-protector
# ignore stack-protector and bdb
chk_funcs.discard('__stack_chk')
chk_funcs.discard('__db_chk')
return len(chk_funcs) >= 1
def check_PE_DYNAMIC_BASE(binary) -> bool:
'''PIE: DllCharacteristics bit 0x40 signifies dynamicbase (ASLR)'''
return lief.PE.OptionalHeader.DLL_CHARACTERISTICS.DYNAMIC_BASE in binary.optional_header.dll_characteristics_lists
return lief.PE.DLL_CHARACTERISTICS.DYNAMIC_BASE in binary.optional_header.dll_characteristics_lists
# Must support high-entropy 64-bit address space layout randomization
# in addition to DYNAMIC_BASE to have secure ASLR.
def check_PE_HIGH_ENTROPY_VA(binary) -> bool:
'''PIE: DllCharacteristics bit 0x20 signifies high-entropy ASLR'''
return lief.PE.OptionalHeader.DLL_CHARACTERISTICS.HIGH_ENTROPY_VA in binary.optional_header.dll_characteristics_lists
return lief.PE.DLL_CHARACTERISTICS.HIGH_ENTROPY_VA in binary.optional_header.dll_characteristics_lists
def check_PE_RELOC_SECTION(binary) -> bool:
'''Check for a reloc section. This is required for functional ASLR.'''
@@ -178,7 +175,7 @@ def check_MACHO_NOUNDEFS(binary) -> bool:
'''
Check for no undefined references.
'''
return binary.header.has(lief.MachO.Header.FLAGS.NOUNDEFS)
return binary.header.has(lief.MachO.HEADER_FLAGS.NOUNDEFS)
def check_MACHO_FIXUP_CHAINS(binary) -> bool:
'''
@@ -203,13 +200,7 @@ def check_NX(binary) -> bool:
'''
Check for no stack execution
'''
# binary.has_nx checks are only for the stack, but MachO binaries might
# have executable heaps.
if binary.format == lief.Binary.FORMATS.MACHO:
return binary.concrete.has_nx_stack and binary.concrete.has_nx_heap
else:
return binary.has_nx
return binary.has_nx
def check_MACHO_CONTROL_FLOW(binary) -> bool:
'''
@@ -232,7 +223,6 @@ def check_MACHO_BRANCH_PROTECTION(binary) -> bool:
return False
BASE_ELF = [
('FORTIFY', check_ELF_FORTIFY),
('PIE', check_PIE),
('NX', check_NX),
('RELRO', check_ELF_RELRO),
@@ -257,21 +247,21 @@ BASE_MACHO = [
]
CHECKS = {
lief.Binary.FORMATS.ELF: {
lief.Header.ARCHITECTURES.X86_64: BASE_ELF + [('CONTROL_FLOW', check_ELF_CONTROL_FLOW)],
lief.Header.ARCHITECTURES.ARM: BASE_ELF,
lief.Header.ARCHITECTURES.ARM64: BASE_ELF,
lief.Header.ARCHITECTURES.PPC64: BASE_ELF,
lief.Header.ARCHITECTURES.RISCV: BASE_ELF,
lief.EXE_FORMATS.ELF: {
lief.ARCHITECTURES.X86: BASE_ELF + [('CONTROL_FLOW', check_ELF_CONTROL_FLOW), ('FORTIFY', check_ELF_FORTIFY)],
lief.ARCHITECTURES.ARM: BASE_ELF + [('FORTIFY', check_ELF_FORTIFY)],
lief.ARCHITECTURES.ARM64: BASE_ELF + [('FORTIFY', check_ELF_FORTIFY)],
lief.ARCHITECTURES.PPC: BASE_ELF + [('FORTIFY', check_ELF_FORTIFY)],
lief.ARCHITECTURES.RISCV: BASE_ELF, # Skip FORTIFY. See https://github.com/lief-project/LIEF/issues/1082.
},
lief.Binary.FORMATS.PE: {
lief.Header.ARCHITECTURES.X86_64: BASE_PE,
lief.EXE_FORMATS.PE: {
lief.ARCHITECTURES.X86: BASE_PE,
},
lief.Binary.FORMATS.MACHO: {
lief.Header.ARCHITECTURES.X86_64: BASE_MACHO + [('PIE', check_PIE),
lief.EXE_FORMATS.MACHO: {
lief.ARCHITECTURES.X86: BASE_MACHO + [('PIE', check_PIE),
('NX', check_NX),
('CONTROL_FLOW', check_MACHO_CONTROL_FLOW)],
lief.Header.ARCHITECTURES.ARM64: BASE_MACHO + [('BRANCH_PROTECTION', check_MACHO_BRANCH_PROTECTION)],
lief.ARCHITECTURES.ARM64: BASE_MACHO + [('BRANCH_PROTECTION', check_MACHO_BRANCH_PROTECTION)],
}
}
@@ -279,9 +269,9 @@ if __name__ == '__main__':
retval: int = 0
for filename in sys.argv[1:]:
binary = lief.parse(filename)
etype = binary.format
arch = binary.abstract.header.architecture
binary.concrete
failed: list[str] = []
for (name, func) in CHECKS[etype][arch]:


@@ -8,7 +8,7 @@ and are only linked against allowed libraries.
Example usage:
find ../path/to/guix/binaries -type f -executable | xargs python3 contrib/guix/symbol-check.py
find ../path/to/binaries -type f -executable | xargs python3 contrib/devtools/symbol-check.py
'''
import sys
@@ -32,9 +32,9 @@ import lief
# See https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html for more info.
MAX_VERSIONS = {
'GCC': (7,0,0),
'GCC': (4,3,0),
'GLIBC': {
lief.ELF.ARCH.X86_64: (2,31),
lief.ELF.ARCH.x86_64: (2,31),
lief.ELF.ARCH.ARM: (2,31),
lief.ELF.ARCH.AARCH64:(2,31),
lief.ELF.ARCH.PPC64: (2,31),
@@ -46,46 +46,47 @@ MAX_VERSIONS = {
# Ignore symbols that are exported as part of every executable
IGNORE_EXPORTS = {
'stdin', 'stdout', 'stderr',
'environ', '_environ', '__environ', '_fini', '_init', 'stdin',
'stdout', 'stderr',
}
# Expected linker-loader names can be found here:
# https://sourceware.org/glibc/wiki/ABIList?action=recall&rev=16
ELF_INTERPRETER_NAMES: dict[lief.ELF.ARCH, dict[lief.Header.ENDIANNESS, str]] = {
lief.ELF.ARCH.X86_64: {
lief.Header.ENDIANNESS.LITTLE: "/lib64/ld-linux-x86-64.so.2",
ELF_INTERPRETER_NAMES: dict[lief.ELF.ARCH, dict[lief.ENDIANNESS, str]] = {
lief.ELF.ARCH.x86_64: {
lief.ENDIANNESS.LITTLE: "/lib64/ld-linux-x86-64.so.2",
},
lief.ELF.ARCH.ARM: {
lief.Header.ENDIANNESS.LITTLE: "/lib/ld-linux-armhf.so.3",
lief.ENDIANNESS.LITTLE: "/lib/ld-linux-armhf.so.3",
},
lief.ELF.ARCH.AARCH64: {
lief.Header.ENDIANNESS.LITTLE: "/lib/ld-linux-aarch64.so.1",
lief.ENDIANNESS.LITTLE: "/lib/ld-linux-aarch64.so.1",
},
lief.ELF.ARCH.PPC64: {
lief.Header.ENDIANNESS.BIG: "/lib64/ld64.so.1",
lief.Header.ENDIANNESS.LITTLE: "/lib64/ld64.so.2",
lief.ENDIANNESS.BIG: "/lib64/ld64.so.1",
lief.ENDIANNESS.LITTLE: "/lib64/ld64.so.2",
},
lief.ELF.ARCH.RISCV: {
lief.Header.ENDIANNESS.LITTLE: "/lib/ld-linux-riscv64-lp64d.so.1",
lief.ENDIANNESS.LITTLE: "/lib/ld-linux-riscv64-lp64d.so.1",
},
}
ELF_ABIS: dict[lief.ELF.ARCH, dict[lief.Header.ENDIANNESS, list[int]]] = {
lief.ELF.ARCH.X86_64: {
lief.Header.ENDIANNESS.LITTLE: [3,2,0],
ELF_ABIS: dict[lief.ELF.ARCH, dict[lief.ENDIANNESS, list[int]]] = {
lief.ELF.ARCH.x86_64: {
lief.ENDIANNESS.LITTLE: [3,2,0],
},
lief.ELF.ARCH.ARM: {
lief.Header.ENDIANNESS.LITTLE: [3,2,0],
lief.ENDIANNESS.LITTLE: [3,2,0],
},
lief.ELF.ARCH.AARCH64: {
lief.Header.ENDIANNESS.LITTLE: [3,7,0],
lief.ENDIANNESS.LITTLE: [3,7,0],
},
lief.ELF.ARCH.PPC64: {
lief.Header.ENDIANNESS.LITTLE: [3,10,0],
lief.Header.ENDIANNESS.BIG: [3,2,0],
lief.ENDIANNESS.LITTLE: [3,10,0],
lief.ENDIANNESS.BIG: [3,2,0],
},
lief.ELF.ARCH.RISCV: {
lief.Header.ENDIANNESS.LITTLE: [4,15,0],
lief.ENDIANNESS.LITTLE: [4,15,0],
},
}
@@ -121,6 +122,7 @@ ELF_ALLOWED_LIBRARIES = {
'libxcb-shape.so.0',
'libxcb-sync.so.1',
'libxcb-xfixes.so.0',
'libxcb-xinerama.so.0',
'libxcb-xkb.so.1',
}
@@ -144,31 +146,19 @@ MACHO_ALLOWED_LIBRARIES = {
'IOSurface', # cross process image/drawing buffers
'libobjc.A.dylib', # Objective-C runtime library
'Metal', # 3D graphics
'QuartzCore', # animation
'Security', # access control and authentication
'UniformTypeIdentifiers', # collection of types that map to MIME and file types
'QuartzCore', # animation
}
PE_ALLOWED_LIBRARIES = {
'ADVAPI32.dll', # legacy security & registry
'bcrypt.dll', # newer security and identity API
'ADVAPI32.dll', # security & registry
'IPHLPAPI.DLL', # IP helper API
'KERNEL32.dll', # win32 base APIs
'msvcrt.dll', # C standard library for MSVC
'SHELL32.dll', # shell API
'WS2_32.dll', # sockets
# bitcoin-qt only
'api-ms-win-core-synch-l1-2-0.dll', # Synchronization Primitives API
'api-ms-win-core-winrt-l1-1-0.dll', # Windows Runtime API
'api-ms-win-core-winrt-string-l1-1-0.dll', # WinRT String API
'AUTHZ.dll', # Windows Authorization Framework
'comdlg32.dll', # Common Dialog Box Library
'd3d11.dll', # Direct3D 11 API
'd3d12.dll', # Direct3D 12 API
'd3d9.dll', # Direct3D 9 API
'dwmapi.dll', # desktop window manager
'DWrite.dll', # DirectX Typography Services
'dxgi.dll', # DirectX Graphics Infrastructure
'GDI32.dll', # graphics device interface
'IMM32.dll', # input method editor
'NETAPI32.dll', # network management
@@ -181,8 +171,6 @@ PE_ALLOWED_LIBRARIES = {
'VERSION.dll', # version checking
'WINMM.dll', # WinMM audio API
'WTSAPI32.dll', # Remote Desktop
'SETUPAPI.dll', # Windows Setup API
'SHCORE.dll', # Stream Handler Core
}
def check_version(max_versions, version, arch) -> bool:
@@ -220,13 +208,13 @@ def check_exported_symbols(binary) -> bool:
name = symbol.name
if binary.header.machine_type == lief.ELF.ARCH.RISCV or name in IGNORE_EXPORTS:
continue
print(f'{filename}: export of symbol {name} not allowed!')
print(f'{binary.name}: export of symbol {name} not allowed!')
ok = False
return ok
def check_RUNPATH(binary) -> bool:
assert binary.get(lief.ELF.DynamicEntry.TAG.RUNPATH) is None
assert binary.get(lief.ELF.DynamicEntry.TAG.RPATH) is None
assert binary.get(lief.ELF.DYNAMIC_TAGS.RUNPATH) is None
assert binary.get(lief.ELF.DYNAMIC_TAGS.RPATH) is None
return True
def check_ELF_libraries(binary) -> bool:
@@ -276,14 +264,6 @@ def check_PE_subsystem_version(binary) -> bool:
return True
return False
def check_PE_application_manifest(binary) -> bool:
if not binary.has_resources:
# No resources at all.
return False
rm = binary.resources_manager
return rm.has_manifest
def check_ELF_interpreter(binary) -> bool:
expected_interpreter = ELF_INTERPRETER_NAMES[binary.header.machine_type][binary.abstract.header.endianness]
@@ -291,12 +271,12 @@ def check_ELF_interpreter(binary) -> bool:
def check_ELF_ABI(binary) -> bool:
expected_abi = ELF_ABIS[binary.header.machine_type][binary.abstract.header.endianness]
note = binary.concrete.get(lief.ELF.Note.TYPE.GNU_ABI_TAG)
assert note.abi == lief.ELF.NoteAbi.ABI.LINUX
return note.version == expected_abi
note = binary.concrete.get(lief.ELF.NOTE_TYPES.ABI_TAG)
assert note.details.abi == lief.ELF.NOTE_ABIS.LINUX
return note.details.version == expected_abi
CHECKS = {
lief.Binary.FORMATS.ELF: [
lief.EXE_FORMATS.ELF: [
('IMPORTED_SYMBOLS', check_imported_symbols),
('EXPORTED_SYMBOLS', check_exported_symbols),
('LIBRARY_DEPENDENCIES', check_ELF_libraries),
@@ -304,16 +284,15 @@ lief.Binary.FORMATS.ELF: [
('ABI', check_ELF_ABI),
('RUNPATH', check_RUNPATH),
],
lief.Binary.FORMATS.MACHO: [
lief.EXE_FORMATS.MACHO: [
('DYNAMIC_LIBRARIES', check_MACHO_libraries),
('MIN_OS', check_MACHO_min_os),
('SDK', check_MACHO_sdk),
('LLD', check_MACHO_lld),
],
lief.Binary.FORMATS.PE: [
lief.EXE_FORMATS.PE: [
('DYNAMIC_LIBRARIES', check_PE_libraries),
('SUBSYSTEM_VERSION', check_PE_subsystem_version),
('APPLICATION_MANIFEST', check_PE_application_manifest),
]
}
@@ -321,7 +300,6 @@ if __name__ == '__main__':
retval: int = 0
for filename in sys.argv[1:]:
binary = lief.parse(filename)
etype = binary.format
failed: list[str] = []
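For orientation: the CHECKS table above maps each executable format (ELF, Mach-O, PE) to its list of per-binary check functions, and the `__main__` loop runs them over every file named on the command line, returning non-zero if anything fails. A minimal invocation sketch, assuming this is `contrib/devtools/symbol-check.py` and that the binaries sit under `build/bin/` (both paths are assumptions, not shown in this diff):

```sh
# Illustrative only; adjust paths to your build layout.
python3 contrib/devtools/symbol-check.py build/bin/bitcoind build/bin/bitcoin-cli
echo "symbol-check exit status: $?"   # non-zero if any check above failed
```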

View File

@@ -0,0 +1,151 @@
#!/usr/bin/env bash
#
# Copyright (c) 2019-2020 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
# Test for deterministic coverage across unit test runs.
export LC_ALL=C
# Use GCOV_EXECUTABLE="gcov" if compiling with gcc.
# Use GCOV_EXECUTABLE="llvm-cov gcov" if compiling with clang.
GCOV_EXECUTABLE="gcov"
# Disable tests known to cause non-deterministic behaviour and document the source or point of non-determinism.
NON_DETERMINISTIC_TESTS=(
"blockfilter_index_tests/blockfilter_index_initial_sync" # src/checkqueue.h: In CCheckQueue::Loop(): while (queue.empty()) { ... }
"coinselector_tests/knapsack_solver_test" # coinselector_tests.cpp: if (equal_sets(setCoinsRet, setCoinsRet2))
"fs_tests/fsbridge_fstream" # deterministic test failure?
"miner_tests/CreateNewBlock_validity" # validation.cpp: if (signals.CallbacksPending() > 10)
"scheduler_tests/manythreads" # scheduler.cpp: CScheduler::serviceQueue()
"scheduler_tests/singlethreadedscheduler_ordered" # scheduler.cpp: CScheduler::serviceQueue()
"txvalidationcache_tests/checkinputs_test" # validation.cpp: if (signals.CallbacksPending() > 10)
"txvalidationcache_tests/tx_mempool_block_doublespend" # validation.cpp: if (signals.CallbacksPending() > 10)
"txindex_tests/txindex_initial_sync" # validation.cpp: if (signals.CallbacksPending() > 10)
"txvalidation_tests/tx_mempool_reject_coinbase" # validation.cpp: if (signals.CallbacksPending() > 10)
"validation_block_tests/processnewblock_signals_ordering" # validation.cpp: if (signals.CallbacksPending() > 10)
"wallet_tests/coin_mark_dirty_immature_credit" # validation.cpp: if (signals.CallbacksPending() > 10)
"wallet_tests/dummy_input_size_test" # validation.cpp: if (signals.CallbacksPending() > 10)
"wallet_tests/importmulti_rescan" # validation.cpp: if (signals.CallbacksPending() > 10)
"wallet_tests/importwallet_rescan" # validation.cpp: if (signals.CallbacksPending() > 10)
"wallet_tests/ListCoins" # validation.cpp: if (signals.CallbacksPending() > 10)
"wallet_tests/scan_for_wallet_transactions" # validation.cpp: if (signals.CallbacksPending() > 10)
"wallet_tests/wallet_disableprivkeys" # validation.cpp: if (signals.CallbacksPending() > 10)
)
TEST_BITCOIN_BINARY="src/test/test_bitcoin"
print_usage() {
echo "Usage: $0 [custom test filter (default: all but known non-deterministic tests)] [number of test runs (default: 2)]"
}
N_TEST_RUNS=2
BOOST_TEST_RUN_FILTERS=""
if [[ $# != 0 ]]; then
if [[ $1 == "--help" ]]; then
print_usage
exit
fi
PARSED_ARGUMENTS=0
if [[ $1 =~ [a-z] ]]; then
BOOST_TEST_RUN_FILTERS=$1
PARSED_ARGUMENTS=$((PARSED_ARGUMENTS + 1))
shift
fi
if [[ $1 =~ ^[0-9]+$ ]]; then
N_TEST_RUNS=$1
PARSED_ARGUMENTS=$((PARSED_ARGUMENTS + 1))
shift
fi
if [[ ${PARSED_ARGUMENTS} == 0 || $# -gt 2 || ${N_TEST_RUNS} -lt 2 ]]; then
print_usage
exit
fi
fi
if [[ ${BOOST_TEST_RUN_FILTERS} == "" ]]; then
BOOST_TEST_RUN_FILTERS="$(IFS=":"; echo "!${NON_DETERMINISTIC_TESTS[*]}" | sed 's/:/:!/g')"
else
echo "Using Boost test filter: ${BOOST_TEST_RUN_FILTERS}"
echo
fi
if ! command -v gcov > /dev/null; then
echo "Error: gcov not installed. Exiting."
exit 1
fi
if ! command -v gcovr > /dev/null; then
echo "Error: gcovr not installed. Exiting."
exit 1
fi
if [[ ! -e ${TEST_BITCOIN_BINARY} ]]; then
echo "Error: Executable ${TEST_BITCOIN_BINARY} not found. Run \"cmake -B build -DCMAKE_BUILD_TYPE=Coverage\" and compile."
exit 1
fi
get_file_suffix_count() {
find src/ -type f -name "*.$1" | wc -l
}
if [[ $(get_file_suffix_count gcno) == 0 ]]; then
echo "Error: Could not find any *.gcno files. The *.gcno files are generated by the compiler. Run \"cmake -B build -DCMAKE_BUILD_TYPE=Coverage\" and re-compile."
exit 1
fi
get_covr_filename() {
echo "gcovr.run-$1.txt"
}
TEST_RUN_ID=0
while [[ ${TEST_RUN_ID} -lt ${N_TEST_RUNS} ]]; do
TEST_RUN_ID=$((TEST_RUN_ID + 1))
echo "[$(date +"%Y-%m-%d %H:%M:%S")] Measuring coverage, run #${TEST_RUN_ID} of ${N_TEST_RUNS}"
find src/ -type f -name "*.gcda" -exec rm {} \;
if [[ $(get_file_suffix_count gcda) != 0 ]]; then
echo "Error: Stale *.gcda files found. Exiting."
exit 1
fi
TEST_OUTPUT_TEMPFILE=$(mktemp)
if ! BOOST_TEST_RUN_FILTERS="${BOOST_TEST_RUN_FILTERS}" ${TEST_BITCOIN_BINARY} > "${TEST_OUTPUT_TEMPFILE}" 2>&1; then
cat "${TEST_OUTPUT_TEMPFILE}"
rm "${TEST_OUTPUT_TEMPFILE}"
exit 1
fi
rm "${TEST_OUTPUT_TEMPFILE}"
if [[ $(get_file_suffix_count gcda) == 0 ]]; then
echo "Error: Running the test suite did not create any *.gcda files. The gcda files are generated when the instrumented test programs are executed. Run \"cmake -B build -DCMAKE_BUILD_TYPE=Coverage\" and re-compile."
exit 1
fi
GCOVR_TEMPFILE=$(mktemp)
if ! gcovr --gcov-executable "${GCOV_EXECUTABLE}" -r src/ > "${GCOVR_TEMPFILE}"; then
echo "Error: gcovr failed. Output written to ${GCOVR_TEMPFILE}. Exiting."
exit 1
fi
GCOVR_FILENAME=$(get_covr_filename ${TEST_RUN_ID})
mv "${GCOVR_TEMPFILE}" "${GCOVR_FILENAME}"
if grep -E "^TOTAL *0 *0 " "${GCOVR_FILENAME}"; then
echo "Error: Spurious gcovr output. Make sure the correct GCOV_EXECUTABLE variable is set in $0 (\"gcov\" for gcc, \"llvm-cov gcov\" for clang)."
exit 1
fi
if [[ ${TEST_RUN_ID} != 1 ]]; then
COVERAGE_DIFF=$(diff -u "$(get_covr_filename 1)" "${GCOVR_FILENAME}")
if [[ ${COVERAGE_DIFF} != "" ]]; then
echo
echo "The line coverage is non-deterministic between runs. Exiting."
echo
echo "The test suite must be deterministic in the sense that the set of lines executed at least"
echo "once must be identical between runs. This is a necessary condition for meaningful"
echo "coverage measuring."
echo
echo "${COVERAGE_DIFF}"
exit 1
fi
rm "${GCOVR_FILENAME}"
fi
done
echo
echo "Coverage test passed: Deterministic coverage across ${N_TEST_RUNS} runs."
exit
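For reference, a usage sketch that matches the `print_usage` text above; the suite name, run count, and script path below are examples only, and the tree must have been configured for coverage as the error messages describe:

```sh
# Default run: all tests except the known non-deterministic ones, two passes
./test_deterministic_coverage.sh

# Restrict to a single Boost test suite and compare line coverage across 3 runs
./test_deterministic_coverage.sh "util_tests" 3
```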

View File

@@ -18,10 +18,10 @@ Otherwise, you may choose from one of the following options to install Guix:
- Works on nearly all Linux distributions
- Installs any release
- Binary installation only, requires high level of trust
3. Using fanquake's **container image** [↗︎ external instructions][install-fanquake-container]
3. Using fanquake's **Docker image** [↗︎ external instructions][install-fanquake-docker]
- Maintained by fanquake
- Easy (automatically performs *some* setup)
- Works wherever container images work (Docker/Podman)
- Works wherever Docker images work
- Installs any release
- Binary installation only, requires high level of trust
4. Using a **distribution-maintained package** [⤓ skip to section][install-distro-pkg]
@@ -57,7 +57,7 @@ Regardless of which installation option you chose, the changes to
`/etc/profile.d` will not take effect until the next shell or desktop session,
so you should log out and log back in.
## Option 3: Using fanquake's container image
## Option 3: Using fanquake's Docker image
Please refer to fanquake's instructions
[here](https://github.com/fanquake/core-review/tree/master/guix).
@@ -319,7 +319,7 @@ Source: https://logs.guix.gnu.org/guix/2020-11-12.log#232527
Start by cloning Guix:
```
git clone https://codeberg.org/guix/guix.git
git clone https://git.savannah.gnu.org/git/guix.git
cd guix
```
@@ -415,7 +415,7 @@ make it "what Guix intended", then read the next few subsections.
This section definitely does not apply to you if you installed Guix using:
1. The shell installer script
2. fanquake's container image
2. fanquake's Docker image
3. Debian's `guix` package
#### Background
@@ -607,7 +607,7 @@ checklist.
```
Generation 38 Feb 22 2021 16:39:31 (current)
guix f350df4
repository URL: https://codeberg.org/guix/guix.git
repository URL: https://git.savannah.gnu.org/git/guix.git
branch: version-1.2.0
commit: f350df405fbcd5b9e27e6b6aa500da7f101f41e7
```
@@ -760,13 +760,13 @@ Please see the following links for more details:
- An upstream coreutils bug has been filed: [debbugs#47940](https://debbugs.gnu.org/cgi/bugreport.cgi?bug=47940)
- A Guix bug detailing the underlying problem has been filed: [guix-issues#47935](https://issues.guix.gnu.org/47935), [guix-issues#49985](https://issues.guix.gnu.org/49985#5)
- A commit to skip this test is included since Guix 1.4.0:
[codeberg/guix@6ba1058](https://codeberg.org/guix/guix/commit/6ba1058df0c4ce5611c2367531ae5c3cdc729ab4)
- A commit to skip this test in Guix has been merged into the core-updates branch:
[savannah/guix@6ba1058](https://git.savannah.gnu.org/cgit/guix.git/commit/?id=6ba1058df0c4ce5611c2367531ae5c3cdc729ab4)
[install-script]: #options-1-and-2-using-the-official-shell-installer-script-or-binary-tarball
[install-bin-tarball]: #options-1-and-2-using-the-official-shell-installer-script-or-binary-tarball
[install-fanquake-container]: #option-3-using-fanquakes-container-image
[install-fanquake-docker]: #option-3-using-fanquakes-docker-image
[install-distro-pkg]: #option-4-using-a-distribution-maintained-package
[install-source]: #option-5-building-from-source

View File

@@ -37,7 +37,7 @@ You can then either point to the SDK using the `SDK_PATH` environment variable:
```sh
# Extract the SDK tarball to /path/to/parent/dir/of/extracted/SDK/Xcode-<foo>-<bar>-extracted-SDK-with-libcxx-headers
tar -C /path/to/parent/dir/of/extracted/SDK -xaf /path/to/Xcode-<foo>-<bar>-extracted-SDK-with-libcxx-headers.tar
tar -C /path/to/parent/dir/of/extracted/SDK -xaf /path/to/Xcode-<foo>-<bar>-extracted-SDK-with-libcxx-headers.tar.gz
# Indicate where to locate the SDK tarball
export SDK_PATH=/path/to/parent/dir/of/extracted/SDK
@@ -365,6 +365,12 @@ Where `<PREFIX>` is likely:
- `/usr/local` if you installed Guix from source and didn't supply any
prefix-modifying flags to Guix's `./configure`
For dongcarl's substitute server at https://guix.carldong.io, run as root:
```sh
wget -qO- 'https://guix.carldong.io/signing-key.pub' | guix archive --authorize
```
#### Removing authorized keys
To remove previously authorized keys, simply edit `/etc/guix/acl` and remove the
@@ -376,28 +382,28 @@ Once its key is authorized, the official Guix build farm at
https://ci.guix.gnu.org is automatically used unless the `--no-substitutes` flag
is supplied. This default list of substitute servers is overridable both on a
`guix-daemon` level and when you invoke `guix` commands. See examples below for
the various ways of adding a substitute server after having [authorized
its signing key](#step-1-authorize-the-signing-keys).
the various ways of adding dongcarl's substitute server after having [authorized
his signing key](#step-1-authorize-the-signing-keys).
Change the **default list** of substitute servers by starting `guix-daemon` with
the `--substitute-urls` option (you will likely need to edit your init script):
```sh
guix-daemon <cmd> --substitute-urls='https://bordeaux.guix.gnu.org https://ci.guix.gnu.org'
guix-daemon <cmd> --substitute-urls='https://guix.carldong.io https://ci.guix.gnu.org'
```
Override the default list of substitute servers by passing the
`--substitute-urls` option for invocations of `guix` commands:
```sh
guix <cmd> --substitute-urls='https://bordeaux.guix.gnu.org https://ci.guix.gnu.org'
guix <cmd> --substitute-urls='https://guix.carldong.io https://ci.guix.gnu.org'
```
For scripts under `./contrib/guix`, set the `SUBSTITUTE_URLS` environment
variable:
```sh
export SUBSTITUTE_URLS='https://bordeaux.guix.gnu.org https://ci.guix.gnu.org'
export SUBSTITUTE_URLS='https://guix.carldong.io https://ci.guix.gnu.org'
```
## Option 2: Disabling substitutes on an ad-hoc basis

View File

@@ -69,12 +69,6 @@ fi
mkdir -p "$VERSION_BASE"
################
# SOURCE_DATE_EPOCH should not unintentionally be set
################
check_source_date_epoch
################
# Build directories should not exist
################

View File

@@ -67,12 +67,6 @@ EOF
exit 1
fi
################
# SOURCE_DATE_EPOCH should not unintentionally be set
################
check_source_date_epoch
################
# The codesignature git worktree should not be dirty
################

View File

@@ -69,12 +69,11 @@ unset CPLUS_INCLUDE_PATH
unset OBJC_INCLUDE_PATH
unset OBJCPLUS_INCLUDE_PATH
# Set native toolchain
build_CC="${NATIVE_GCC}/bin/gcc -isystem ${NATIVE_GCC}/include"
build_CXX="${NATIVE_GCC}/bin/g++ -isystem ${NATIVE_GCC}/include/c++ -isystem ${NATIVE_GCC}/include"
export C_INCLUDE_PATH="${NATIVE_GCC}/include"
export CPLUS_INCLUDE_PATH="${NATIVE_GCC}/include/c++:${NATIVE_GCC}/include"
case "$HOST" in
*darwin*) export LIBRARY_PATH="${NATIVE_GCC}/lib" ;; # Required for native packages
*darwin*) export LIBRARY_PATH="${NATIVE_GCC}/lib" ;; # Required for qt/qmake
*mingw*) export LIBRARY_PATH="${NATIVE_GCC}/lib" ;;
*)
NATIVE_GCC_STATIC="$(store_path gcc-toolchain static)"
@@ -137,7 +136,8 @@ export GUIX_LD_WRAPPER_DISABLE_RPATH=yes
# Make /usr/bin if it doesn't exist
[ -e /usr/bin ] || mkdir -p /usr/bin
# Symlink env to a conventional path
# Symlink file and env to a conventional path
[ -e /usr/bin/file ] || ln -s --no-dereference "$(command -v file)" /usr/bin/file
[ -e /usr/bin/env ] || ln -s --no-dereference "$(command -v env)" /usr/bin/env
# Determine the correct value for -Wl,--dynamic-linker for the current $HOST
@@ -171,8 +171,6 @@ make -C depends --jobs="$JOBS" HOST="$HOST" \
${SOURCES_PATH+SOURCES_PATH="$SOURCES_PATH"} \
${BASE_CACHE+BASE_CACHE="$BASE_CACHE"} \
${SDK_PATH+SDK_PATH="$SDK_PATH"} \
${build_CC+build_CC="$build_CC"} \
${build_CXX+build_CXX="$build_CXX"} \
x86_64_linux_CC=x86_64-linux-gnu-gcc \
x86_64_linux_CXX=x86_64-linux-gnu-g++ \
x86_64_linux_AR=x86_64-linux-gnu-gcc-ar \
@@ -183,6 +181,8 @@ make -C depends --jobs="$JOBS" HOST="$HOST" \
case "$HOST" in
*darwin*)
# Unset now that Qt is built
unset C_INCLUDE_PATH
unset CPLUS_INCLUDE_PATH
unset LIBRARY_PATH
;;
esac
@@ -242,7 +242,6 @@ mkdir -p "$DISTSRC"
cmake -S . -B build \
--toolchain "${BASEPREFIX}/${HOST}/toolchain.cmake" \
-DWITH_CCACHE=OFF \
-Werror=dev \
${CONFIGFLAGS}
# Build Bitcoin Core
@@ -290,7 +289,7 @@ mkdir -p "$DISTSRC"
*)
# Split binaries from their debug symbols
{
find "${DISTNAME}/bin" "${DISTNAME}/libexec" -type f -executable -print0
find "${DISTNAME}/bin" -type f -executable -print0
} | xargs -0 -P"$JOBS" -I{} "${DISTSRC}/build/split-debug.sh" {} {} {}.dbg
;;
esac

View File

@@ -110,7 +110,7 @@ mkdir -p "$DISTSRC"
# Apply detached codesignatures (in-place)
signapple apply dist/Bitcoin-Qt.app codesignatures/osx/"${HOST}"/dist/Bitcoin-Qt.app
find "${DISTNAME}" \( -wholename "*/bin/*" -o -wholename "*/libexec/*" \) -type f | while read -r bin
find "${DISTNAME}" -wholename "*/bin/*" -type f | while read -r bin
do
signapple apply "${bin}" "codesignatures/osx/${HOST}/${bin}.${ARCH}sign"
done

View File

@@ -21,26 +21,6 @@ check_tools() {
done
}
################
# SOURCE_DATE_EPOCH should not unintentionally be set
################
check_source_date_epoch() {
if [ -n "$SOURCE_DATE_EPOCH" ] && [ -z "$FORCE_SOURCE_DATE_EPOCH" ]; then
cat << EOF
ERR: Environment variable SOURCE_DATE_EPOCH is set which may break reproducibility.
Aborting...
Hint: You may want to:
1. Unset this variable: \`unset SOURCE_DATE_EPOCH\` before rebuilding
2. Set the 'FORCE_SOURCE_DATE_EPOCH' environment variable if you insist on
using your own epoch
EOF
exit 1
fi
}
check_tools cat env readlink dirname basename git
################
@@ -70,7 +50,7 @@ fi
# across time.
time-machine() {
# shellcheck disable=SC2086
guix time-machine --url=https://codeberg.org/guix/guix.git \
guix time-machine --url=https://git.savannah.gnu.org/git/guix.git \
--commit=53396a22afc04536ddf75d8f82ad2eafa5082725 \
--cores="$JOBS" \
--keep-failed \

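For illustration, the same pin can be exercised directly with `guix time-machine`; use whichever repository URL applies on your side of this change, and note that `-- describe` is only an example command:

```sh
# Illustrative: enter the Guix revision pinned by the time-machine() wrapper above
guix time-machine --url=https://codeberg.org/guix/guix.git \
                  --commit=53396a22afc04536ddf75d8f82ad2eafa5082725 \
                  -- describe
```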
View File

@@ -2,24 +2,21 @@
((gnu packages bash) #:select (bash-minimal))
(gnu packages bison)
((gnu packages certs) #:select (nss-certs))
((gnu packages check) #:select (libfaketime))
((gnu packages cmake) #:select (cmake-minimal))
(gnu packages commencement)
(gnu packages compression)
(gnu packages cross-base)
(gnu packages file)
(gnu packages gawk)
(gnu packages gcc)
((gnu packages installers) #:select (nsis-x86_64))
((gnu packages linux) #:select (linux-libre-headers-6.1))
(gnu packages llvm)
(gnu packages mingw)
(gnu packages ninja)
(gnu packages pkg-config)
((gnu packages python) #:select (python-minimal))
((gnu packages python-build) #:select (python-poetry-core))
((gnu packages python-build) #:select (python-tomli python-poetry-core))
((gnu packages python-crypto) #:select (python-asn1crypto))
((gnu packages python-science) #:select (python-scikit-build-core))
((gnu packages python-xyz) #:select (python-pydantic-2 python-pydantic-core))
((gnu packages tls) #:select (openssl))
((gnu packages version-control) #:select (git-minimal))
(guix build-system cmake)
@@ -161,35 +158,37 @@ chain for " target " development."))
(define-public python-lief
(package
(name "python-lief")
(version "0.16.6")
(version "0.13.2")
(source (origin
(method git-fetch)
(uri (git-reference
(url "https://github.com/lief-project/LIEF")
(commit version)))
(file-name (git-file-name name version))
(modules '((guix build utils)))
(snippet
'(begin
;; Configure build for Python bindings.
(substitute* "api/python/config-default.toml"
(("(ninja = )true" all m)
(string-append m "false"))
(("(parallel-jobs = )0" all m)
(string-append m (number->string (parallel-job-count)))))))
(sha256
(base32
"1pq9nagrnkl1x943bqnpiyxmkd9vk99znfxiwqp6vf012b50bz2a"))
(patches (search-our-patches "lief-scikit-0-9.patch"))))
(build-system pyproject-build-system)
(native-inputs (list cmake-minimal
ninja
python-scikit-build-core
python-pydantic-core
python-pydantic-2))
"0y48x358ppig5xp97ahcphfipx7cg9chldj2q5zrmn610fmi4zll"))))
(build-system python-build-system)
(native-inputs (list cmake-minimal python-tomli))
(arguments
(list
#:tests? #f ;needs network
#:phases #~(modify-phases %standard-phases
(add-before 'build 'set-pythonpath
(add-before 'build 'change-directory
(lambda _
(setenv "PYTHONPATH"
(string-append (string-append (getcwd) "/api/python/backend")
":" (or (getenv "PYTHONPATH") "")))))
(add-after 'set-pythonpath 'change-directory
(chdir "api/python")))
(replace 'build
(lambda _
(chdir "api/python"))))))
(invoke "python" "setup.py" "build"))))))
(home-page "https://github.com/lief-project/LIEF")
(synopsis "Library to instrument executable formats")
(description
@@ -210,17 +209,7 @@ and abstract ELF, PE and MachO formats.")
(base32
"1j47vwq4caxfv0xw68kw5yh00qcpbd56d7rq6c483ma3y7s96yyz"))))
(build-system cmake-build-system)
(arguments
(list
#:phases
#~(modify-phases %standard-phases
(replace 'check
(lambda* (#:key tests? #:allow-other-keys)
(if tests?
(invoke "faketime" "-f" "@2025-01-01 00:00:00" ;; Tests fail after 2025.
"ctest" "--output-on-failure" "--no-tests=error")
(format #t "test suite not run~%")))))))
(inputs (list libfaketime openssl))
(inputs (list openssl))
(home-page "https://github.com/mtrojnar/osslsigncode")
(synopsis "Authenticode signing and timestamping tool")
(description "osslsigncode is a small tool that implements part of the
@@ -541,6 +530,7 @@ inspecting signatures in Mach-O binaries.")
which
coreutils-minimal
;; File(system) inspection
file
grep
diffutils
findutils
@@ -557,7 +547,6 @@ inspecting signatures in Mach-O binaries.")
gcc-toolchain-13
cmake-minimal
gnu-make
ninja
;; Scripting
python-minimal ;; (3.10)
;; Git

View File

@@ -1,21 +0,0 @@
Partially revert f23ced2f4ffc170d0a6f40ff4a1bee575e3447cf
Restore compat with python-scikit-build-core 0.9.x
Can be dropped when using python-scikit-build-core >= 0.10.x
--- a/api/python/backend/setup.py
+++ b/api/python/backend/setup.py
@@ -101,12 +101,12 @@ def _get_hooked_config(is_editable: bool) -> Optional[dict[str, Union[str, List[
config_settings = {
"logging.level": "DEBUG",
"build-dir": config.build_dir,
- "build.targets": config.build.targets,
"install.strip": config.strip,
"backport.find-python": "0",
"wheel.py-api": config.build.py_api,
"cmake.source-dir": SRC_DIR.as_posix(),
"cmake.build-type": config.build.build_type,
+ "cmake.targets": config.build.targets,
"cmake.args": [
*config.cmake_generator,
*config.get_cmake_args(is_editable),

View File

@@ -25,7 +25,8 @@ ExecStart=/usr/bin/bitcoind -pid=/run/bitcoind/bitcoind.pid \
-shutdownnotify='systemd-notify --stopping'
# Make sure the config directory is readable by the service user
ExecStartPre=!/bin/chgrp bitcoin /etc/bitcoin
PermissionsStartOnly=true
ExecStartPre=/bin/chgrp bitcoin /etc/bitcoin
# Process management
####################
@@ -43,6 +44,7 @@ TimeoutStopSec=600
# Run as bitcoin:bitcoin
User=bitcoin
Group=bitcoin
# /run/bitcoind
RuntimeDirectory=bitcoind
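A quick, hypothetical post-edit check (none of these commands are part of the unit file; they only confirm what the `ExecStartPre` line above intends):

```sh
# Illustrative only
sudo systemctl daemon-reload
sudo systemctl restart bitcoind
ls -ld /etc/bitcoin   # expect group 'bitcoin'
```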

View File

@@ -44,15 +44,15 @@ xip -x Xcode_15.xip
### Step 2: Generating the SDK tarball from `Xcode.app`
To generate the SDK, run the script [`gen-sdk.py`](./gen-sdk.py) with the
To generate the SDK, run the script [`gen-sdk`](./gen-sdk) with the
path to `Xcode.app` (extracted in the previous stage) as the first argument.
```bash
./contrib/macdeploy/gen-sdk.py '/path/to/Xcode.app'
./contrib/macdeploy/gen-sdk '/path/to/Xcode.app'
```
The generated archive should be: `Xcode-15.0-15A240d-extracted-SDK-with-libcxx-headers.tar`.
The `sha256sum` should be `95b00dc41fa090747dc0a7907a5031a2fcb2d7f95c9584ba6bccdb99b6e3d498`.
The generated archive should be: `Xcode-15.0-15A240d-extracted-SDK-with-libcxx-headers.tar.gz`.
The `sha256sum` should be `c0c2e7bb92c1fee0c4e9f3a485e4530786732d6c6dd9e9f418c282aa6892f55d`.
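A minimal way to verify the digest, assuming the `.tar` variant named above (substitute the `.tar.gz` name and digest if that is the variant your instructions produce):

```sh
# Check the generated SDK archive against the documented digest
echo "95b00dc41fa090747dc0a7907a5031a2fcb2d7f95c9584ba6bccdb99b6e3d498  Xcode-15.0-15A240d-extracted-SDK-with-libcxx-headers.tar" | sha256sum --check
```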
## Deterministic macOS App Notes

View File

@@ -44,7 +44,7 @@ ${SIGNAPPLE} apply "${UNSIGNED_BUNDLE}" "${OUTROOT}/${BUNDLE_ROOT}/${BUNDLE_NAME
${SIGNAPPLE} notarize --detach "${OUTROOT}/${BUNDLE_ROOT}" --passphrase "${api_key_pass}" "$2" "$3" "${UNSIGNED_BUNDLE}"
# Sign each binary
find . -maxdepth 3 \( -wholename "*/bin/*" -o -wholename "*/libexec/*" \) -type f -exec realpath --relative-to=. {} \; | while read -r bin
find . -maxdepth 3 -wholename "*/bin/*" -type f -exec realpath --relative-to=. {} \; | while read -r bin
do
bin_dir=$(dirname "${bin}")
bin_name=$(basename "${bin}")

View File

@@ -2,7 +2,9 @@
import argparse
import plistlib
import pathlib
import sys
import tarfile
import gzip
import os
import contextlib
@@ -20,12 +22,12 @@ def run():
parser = argparse.ArgumentParser(
description=__doc__, formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('xcode_app', metavar='XCODEAPP', type=pathlib.Path)
parser.add_argument("-o", metavar='OUTSDKTAR', dest='out_sdkt', type=pathlib.Path, required=False)
parser.add_argument('xcode_app', metavar='XCODEAPP', nargs=1)
parser.add_argument("-o", metavar='OUTSDKTGZ', nargs=1, dest='out_sdktgz', required=False)
args = parser.parse_args()
xcode_app = args.xcode_app.resolve()
xcode_app = pathlib.Path(args.xcode_app[0]).resolve()
assert xcode_app.is_dir(), "The supplied Xcode.app path '{}' either does not exist or is not a directory".format(xcode_app)
xcode_app_plist = xcode_app.joinpath("Contents/version.plist")
@@ -45,10 +47,14 @@ def run():
out_name = "Xcode-{xcode_version}-{xcode_build_id}-extracted-SDK-with-libcxx-headers".format(xcode_version=xcode_version, xcode_build_id=xcode_build_id)
out_sdkt_path = args.out_sdkt or pathlib.Path("./{}.tar".format(out_name))
if args.out_sdktgz:
out_sdktgz_path = pathlib.Path(args.out_sdktgz_path)
else:
# Construct our own out_sdktgz if not specified on the command line
out_sdktgz_path = pathlib.Path("./{}.tar.gz".format(out_name))
def tarfp_add_with_base_change(tarfp, dir_to_add, alt_base_dir):
"""Add all files in dir_to_add to tarfp, but prepend alt_base_dir to the files'
"""Add all files in dir_to_add to tarfp, but prepent alt_base_dir to the files'
names
e.g. if the only file under /root/bazdir is /root/bazdir/qux, invoking:
@@ -62,8 +68,6 @@ def run():
"""
def change_tarinfo_base(tarinfo):
if tarinfo.name and tarinfo.name.endswith((".swiftmodule", ".modulemap")):
return None
if tarinfo.name and tarinfo.name.startswith("./"):
tarinfo.name = str(pathlib.Path(alt_base_dir, tarinfo.name))
if tarinfo.linkname and tarinfo.linkname.startswith("./"):
@@ -77,17 +81,16 @@ def run():
return tarinfo
with cd(dir_to_add):
# recursion already adds entries in sorted order
tarfp.add("./usr/include", recursive=True, filter=change_tarinfo_base)
tarfp.add("./usr/lib", recursive=True, filter=change_tarinfo_base)
tarfp.add("./System/Library/Frameworks", recursive=True, filter=change_tarinfo_base)
tarfp.add(".", recursive=True, filter=change_tarinfo_base)
print("Creating output .tar file...")
with out_sdkt_path.open("wb") as fp:
with tarfile.open(mode="w", fileobj=fp, format=tarfile.PAX_FORMAT) as tarfp:
print("Adding MacOSX SDK {} files...".format(sdk_version))
tarfp_add_with_base_change(tarfp, sdk_dir, out_name)
print("Done! Find the resulting tarball at:")
print(out_sdkt_path.resolve())
print("Creating output .tar.gz file...")
with out_sdktgz_path.open("wb") as fp:
with gzip.GzipFile(fileobj=fp, mode='wb', compresslevel=9, mtime=0) as gzf:
with tarfile.open(mode="w", fileobj=gzf, format=tarfile.GNU_FORMAT) as tarfp:
print("Adding MacOSX SDK {} files...".format(sdk_version))
tarfp_add_with_base_change(tarfp, sdk_dir, out_name)
print("Done! Find the resulting gzipped tarball at:")
print(out_sdktgz_path.resolve())
if __name__ == '__main__':
run()

View File

@@ -157,19 +157,20 @@ class DeploymentInfo(object):
self.qtPath = None
self.pluginPath = None
self.deployedFrameworks = []
def detectQtPath(self, frameworkDirectory: str):
parentDir = os.path.dirname(frameworkDirectory)
if os.path.exists(os.path.join(parentDir, "share", "qt", "translations")):
if os.path.exists(os.path.join(parentDir, "translations")):
# Classic layout, e.g. "/usr/local/Trolltech/Qt-4.x.x"
self.qtPath = parentDir
else:
self.qtPath = os.getenv("QTDIR", None)
if self.qtPath is not None:
pluginPath = os.path.join(self.qtPath, "share", "qt", "plugins")
pluginPath = os.path.join(self.qtPath, "plugins")
if os.path.exists(pluginPath):
self.pluginPath = pluginPath
def usesFramework(self, name: str) -> bool:
for framework in self.deployedFrameworks:
if framework.endswith(".framework"):
@@ -180,7 +181,7 @@ class DeploymentInfo(object):
return True
return False
def getFrameworks(binaryPath: str, verbose: int, rpath: str = '') -> list[FrameworkInfo]:
def getFrameworks(binaryPath: str, verbose: int) -> list[FrameworkInfo]:
objdump = os.getenv("OBJDUMP", "objdump")
if verbose:
print(f"Inspecting with {objdump}: {binaryPath}")
@@ -194,19 +195,17 @@ def getFrameworks(binaryPath: str, verbose: int, rpath: str = '') -> list[Framew
lines.pop(0) # First line is the inspected binary
if ".framework" in binaryPath or binaryPath.endswith(".dylib"):
lines.pop(0) # Frameworks and dylibs list themselves as a dependency.
libraries = []
for line in lines:
line = line.replace("@loader_path", os.path.dirname(binaryPath))
if rpath:
line = line.replace("@rpath", rpath)
info = FrameworkInfo.fromLibraryLine(line.strip())
if info is not None:
if verbose:
print("Found framework:")
print(info)
libraries.append(info)
return libraries
def runInstallNameTool(action: str, *args):
@@ -319,7 +318,7 @@ def deployFrameworks(frameworks: list[FrameworkInfo], bundlePath: str, binaryPat
# install_name_tool it a new id.
changeIdentification(framework.deployedInstallName, deployedBinaryPath, verbose)
# Check for framework dependencies
dependencies = getFrameworks(deployedBinaryPath, verbose, rpath=framework.frameworkDirectory)
dependencies = getFrameworks(deployedBinaryPath, verbose)
for dependency in dependencies:
changeInstallName(dependency.installName, dependency.deployedInstallName, deployedBinaryPath, verbose)
@@ -466,18 +465,18 @@ if config.translations_dir:
sys.stderr.write(f"Error: Could not find translation dir \"{config.translations_dir[0]}\"\n")
sys.exit(1)
print("+ Adding Qt translations +")
print("+ Adding Qt translations +")
translations = Path(config.translations_dir[0])
translations = Path(config.translations_dir[0])
regex = re.compile('qt_[a-z]*(.qm|_[A-Z]*.qm)')
regex = re.compile('qt_[a-z]*(.qm|_[A-Z]*.qm)')
lang_files = [x for x in translations.iterdir() if regex.match(x.name)]
lang_files = [x for x in translations.iterdir() if regex.match(x.name)]
for file in lang_files:
if verbose:
print(file.as_posix(), "->", os.path.join(applicationBundle.resourcesPath, file.name))
shutil.copy2(file.as_posix(), os.path.join(applicationBundle.resourcesPath, file.name))
for file in lang_files:
if verbose:
print(file.as_posix(), "->", os.path.join(applicationBundle.resourcesPath, file.name))
shutil.copy2(file.as_posix(), os.path.join(applicationBundle.resourcesPath, file.name))
# ------------------------------------------------

View File

@@ -10,21 +10,22 @@ to addrman with).
Update `MIN_BLOCKS` in `makeseeds.py` and the `-m`/`--minblocks` arguments below, as needed.
The seeds compiled into the release are created from sipa's and achow101's
The seeds compiled into the release are created from sipa's, achow101's and luke-jr's
DNS seed, virtu's crawler, and asmap community AS map data. Run the following commands
from the `/contrib/seeds` directory:
```
curl https://bitcoin.sipa.be/seeds.txt.gz | gzip -dc > seeds_main.txt
curl https://21.ninja/seeds.txt.gz | gzip -dc >> seeds_main.txt
curl https://luke.dashjr.org/programs/bitcoin/files/charts/seeds.txt >> seeds_main.txt
curl https://mainnet.achownodes.xyz/seeds.txt.gz | gzip -dc >> seeds_main.txt
curl https://signet.achownodes.xyz/seeds.txt.gz | gzip -dc > seeds_signet.txt
curl https://testnet.achownodes.xyz/seeds.txt.gz | gzip -dc > seeds_test.txt
curl https://testnet4.achownodes.xyz/seeds.txt.gz | gzip -dc > seeds_testnet4.txt
curl https://raw.githubusercontent.com/asmap/asmap-data/main/latest_asmap.dat > asmap-filled.dat
python3 makeseeds.py -a asmap-filled.dat -s seeds_main.txt > nodes_main.txt
python3 makeseeds.py -a asmap-filled.dat -s seeds_signet.txt -m 266000 > nodes_signet.txt
python3 makeseeds.py -a asmap-filled.dat -s seeds_test.txt -m 4650000 > nodes_test.txt
python3 makeseeds.py -a asmap-filled.dat -s seeds_testnet4.txt -m 100000 > nodes_testnet4.txt
python3 makeseeds.py -a asmap-filled.dat -s seeds_signet.txt -m 237800 > nodes_signet.txt
python3 makeseeds.py -a asmap-filled.dat -s seeds_test.txt > nodes_test.txt
python3 makeseeds.py -a asmap-filled.dat -s seeds_testnet4.txt -m 72600 > nodes_testnet4.txt
python3 generate-seeds.py . > ../../src/chainparamsseeds.h
```
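Optionally, a quick sanity check of the generated files before committing them; this is illustrative only and not part of the documented procedure:

```sh
wc -l nodes_main.txt nodes_test.txt nodes_signet.txt nodes_testnet4.txt
head -n 5 nodes_main.txt
```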

View File

@@ -26,7 +26,7 @@ MAX_SEEDS_PER_ASN = {
'ipv6': 10,
}
MIN_BLOCKS = 910000
MIN_BLOCKS = 868000
PATTERN_IPV4 = re.compile(r"^(([0-2]?\d{1,2})\.([0-2]?\d{1,2})\.([0-2]?\d{1,2})\.([0-2]?\d{1,2})):(\d{1,5})$")
PATTERN_IPV6 = re.compile(r"^\[([\da-f:]+)]:(\d{1,5})$", re.IGNORECASE)
@@ -48,8 +48,7 @@ PATTERN_AGENT = re.compile(
r"|25\.(0|1|2|99)\.0"
r"|26\.(0|1|2|99)\.0"
r"|27\.(0|1|2|99)\.0"
r"|28\.(0|1|2|99)\.0"
r"|29\.(0|99)\.0"
r"|28\.(0|1|99)\.0"
r")")
def parseline(line: str) -> Union[dict, None]:

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff