f700609e8a doc: Release notes for mining IPC interface bump (Ryan Ofsky)
9453c15361 ipc mining: break compatibility with existing clients (version bump) (Sjors Provoost)
70de5cc2d2 ipc mining: pass missing context to BlockTemplate methods (incompatible schema change) (Sjors Provoost)
2278f017af ipc mining: remove deprecated methods (incompatible schema change) (Ryan Ofsky)
c6638fa7c5 ipc mining: provide default option values (incompatible schema change) (Ryan Ofsky)
a4603ac774 ipc mining: declare constants for default field values (Ryan Ofsky)
ff995b50cf ipc test: add workaround to block_reserved_weight exception test (Ryan Ofsky)
b970cdf20f test framework: expand expected_stderr, expected_ret_code options (Ryan Ofsky)
df53a3e5ec rpc refactor: stop using deprecated getCoinbaseCommitment method (Ryan Ofsky)
Pull request description:
This PR increments the field number of the `Init.makeMining` method and makes the old `makeMining` method return an error, so IPC mining clients not using the latest schema file will get an error and not be able to access the Mining interface.
Normally, there shouldn't be a need to break compatibility this way, but the mining interface has evolved a lot since it was first introduced, and old clients using the original methods are less stable and performant than newer clients. So now is a good time to introduce a cutoff, drop deprecated methods, and stop supporting old clients that can't function as well.
Bumping the field number is also an opportunity to make other improvements that would be awkward to implement compatibly:
- Making Cap'n Proto default parameter and field values match default values of corresponding C++ methods and structs.
- Adding missing Context parameters to Mining.createNewBlock and checkBlock methods so these methods will be executed on separate execution threads and not block the Cap'n Proto event loop thread.
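For illustration, such a per-call Context parameter in Cap'n Proto is just an extra struct parameter on the method. A hypothetical fragment (names and ordinals are illustrative, not the actual `mining.capnp` schema):

```capnp
# Illustrative only: a context parameter lets the proxy layer run the call on a
# dedicated execution thread instead of blocking the event loop thread.
interface Mining {
    checkBlock @0 (context :Proxy.Context, block :Data) -> (result :Bool);
}
```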
More details about these changes are in the commit messages.
ACKs for top commit:
Sjors:
ACK f700609e8a
enirox001:
ACK f700609e8a
ismaelsadeeq:
ACK f700609e8a
sedited:
ACK f700609e8a
Tree-SHA512: 0901886af00214c138643b33cec21647de5671dfff2021afe06d78dfd970664a844cde9a1e28f685bb27edccaf6e0c3f2d1e6bb4164bde6b84f42955946e366d
0b96b9c600 Minimize mempool lock, sync txo spender index only when and if needed (sstone)
3d82ec5bdd Add a "tx output spender" index (sstone)
Pull request description:
This PR adds a new "tx output spender" index, which allows users to query which tx spent a given outpoint with the `gettxspendingprevout` RPC call that was added by https://github.com/bitcoin/bitcoin/pull/24408.
Such an index would be extremely useful for Lightning, and probably for most layer-2 protocols that rely on chains of unpublished transactions.
UPDATE: this PR is ready for review and issues have been addressed:
- using a watch-only wallet instead would not work if there is a significant number of outpoints to watch (see https://github.com/bitcoin/bitcoin/pull/24539#issuecomment-1276595646)
- this PR does not require `-txindex` anymore
We use a composite key with 2 parts (suggested by romanz): hash(spent outpoint) and tx position, with an empty value. Average composite key size is 15 bytes.
The spending tx can optionally be returned by `gettxspendingprevout` (even if `-txindex` is not set).
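As a rough sketch of the composite-key scheme described above (the key layout, hash truncation, and sizes here are illustrative, not the index's exact on-disk format):

```python
import hashlib
import struct

def spender_key(outpoint_txid: bytes, vout: int, tx_pos: int) -> bytes:
    """Illustrative composite key: truncated hash of the spent outpoint,
    followed by the spending tx position; the db value stays empty."""
    outpoint = outpoint_txid + struct.pack("<I", vout)
    prefix = hashlib.sha256(outpoint).digest()[:8]   # truncated outpoint hash
    return prefix + struct.pack("<I", tx_pos)        # position of spending tx

def find_spenders(db: dict, outpoint_txid: bytes, vout: int) -> list[int]:
    """Prefix scan: every key starting with hash(outpoint) is a candidate;
    the caller must still load each tx and verify it spends the outpoint."""
    outpoint = outpoint_txid + struct.pack("<I", vout)
    prefix = hashlib.sha256(outpoint).digest()[:8]
    return [struct.unpack("<I", k[len(prefix):])[0]
            for k in db if k.startswith(prefix)]
```

Because the hash is truncated, a prefix match is only a candidate; loading and checking the tx at the stored position resolves any hash collisions.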
ACKs for top commit:
hodlinator:
re-ACK 0b96b9c600
sedited:
Re-ACK 0b96b9c600
fjahr:
ACK 0b96b9c600
w0xlt:
reACK 0b96b9c600
Tree-SHA512: 95c2c313ef4086e7d5bf1cf1a3c7b91cfe2bb1a0dcb4c9d3aa8a6e5bfde66aaca48d85a1f1251a780523c3e4356ec8a97fe6f5c7145bc6ccb6f820b26716ae01
cae6d895f8 fuzz: add target for CoinsViewOverlay (Andrew Toth)
86eda88c8e fuzz: move backend mutating block to end of coins_view (Andrew Toth)
89824fb27b fuzz: pass coins_view_cache to TestCoinsView in coins_view (Andrew Toth)
73e99a5966 coins: don't mutate main cache when connecting block (Andrew Toth)
67c0d1798e coins: introduce CoinsViewOverlay (Andrew Toth)
69b01af0eb coins: add PeekCoin() (Andrew Toth)
Pull request description:
This is a slightly modified version of the first few commits of #31132, which can be merged as an independent change. It has a small benefit on its own, but will help in moving the parent PR forward.
When accessing coins via the `CCoinsViewCache`, methods like `GetCoin` can call `FetchCoin`, which actually mutates `cacheCoins` internally to cache entries as they are pulled from the backing db. This is generally a performance improvement for single-threaded access patterns, but it precludes us from accessing entries in a `CCoinsViewCache` from multiple threads without a lock.
Another aspect is that when we use the resettable `CCoinsViewCache` view backed by the main cache for use in `ConnectBlock()`, we will insert entries into the main cache even if the block is determined to be invalid. This is not the biggest concern, since an invalid block requires proof-of-work. But, an attacker could craft multiple invalid blocks to fill the main cache. This would make us `Flush` the cache more often than necessary. Obviously this would be very expensive to do on mainnet.
Introduce `CoinsViewOverlay`, a `CCoinsViewCache` subclass that reads coins without mutating the underlying cache via `FetchCoin()`.
Add `PeekCoin()` to look up a Coin through a stack of `CCoinsViewCache` layers without populating parent caches. This prevents the main cache from caching inputs pulled from disk for a block that has not yet been fully validated. Once `Flush()` is called on the view, these inputs will be added as spent to `coinsCache` in the main cache via `BatchWrite()`.
This is the foundation for async input fetching, where worker threads must not mutate shared state.
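A toy sketch of the mutating-get versus non-mutating-peek distinction (a Python stand-in, not the actual `CCoinsViewCache` API):

```python
class CacheLayer:
    """Toy two-level coin cache: get_coin caches backend misses locally,
    peek_coin walks the layer stack without mutating any cache."""
    def __init__(self, backend=None):
        self.coins = {}
        self.backend = backend

    def get_coin(self, outpoint):
        # Mutating lookup: misses fetched from the backend are cached here.
        if outpoint not in self.coins and self.backend is not None:
            coin = self.backend.get_coin(outpoint)
            if coin is not None:
                self.coins[outpoint] = coin
        return self.coins.get(outpoint)

    def peek_coin(self, outpoint):
        # Non-mutating lookup: read through the stack, cache nothing.
        if outpoint in self.coins:
            return self.coins[outpoint]
        return self.backend.peek_coin(outpoint) if self.backend else None
```

Because `peek_coin` leaves every layer untouched, concurrent readers need no lock, which is what makes the async input fetching mentioned above possible.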
ACKs for top commit:
l0rinc:
ACK cae6d895f8
sipa:
reACK cae6d895f8
sedited:
Re-ACK cae6d895f8
willcl-ark:
ACK cae6d895f8
vasild:
Cursory ACK cae6d895f8
ryanofsky:
Code review ACK cae6d895f8. PR is basically back to the form I had acked the first time, implementing `PeekCoin()` by calling `GetCoin()`. This is not ideal because `PeekCoin()` is not supposed to modify caches and `GetCoin()` does that, but it at least avoids problems of the subsequent approach tried where `GetCoin()` calls `PeekCoin` and would result in bugs when subclasses implement `GetCoin` forgetting to override `PeekCoin`. Hopefully #34124 can clean all of this by making relevant methods pure virtual.
Tree-SHA512: a81a98e60ca9e47454933ad879840cc226cb3b841bc36a4b746c34b350e07c546cdb5ddc55ec1ff66cf65d1ec503d22201d3dc12d4e82a8f4d386ccc52ba6441
afea2af139 net: reduce log level for PCP/NAT-PMP NOT_AUTHORIZED failures (ANAVHEOBA)
Pull request description:
Cherry-picks (and tweaks) a commit from #34117 which the author (ANAVHEOBA)
did not follow up on when changes were requested.
The tweak here is to log once at `LogWarning`, so that users have a chance
to spot misconfiguration.
----
Users running on home networks with routers that don't support PCP (Port
Control Protocol) or NAT-PMP port mapping receive frequent warning-level
log messages every few minutes:
"pcp: Mapping failed with result NOT_AUTHORIZED (code 2)"
This is expected behavior for many consumer routers that have PCP
disabled by default, not an actionable error.
Add explicit constants for the NOT_AUTHORIZED result code (value 2)
for both NAT-PMP and PCP protocols. Log the first NOT_AUTHORIZED
failure at warning level for visibility, then downgrade subsequent
occurrences to LogDebug to avoid log noise. Other failure types
continue to warn unconditionally.
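The warn-once-then-debug behavior can be sketched as follows (illustrative Python, not the actual net code):

```python
import logging

class RateLimitedWarning:
    """Illustrative log-once pattern: the first occurrence warns, repeats
    are downgraded to debug to avoid log noise."""
    def __init__(self, logger):
        self.logger = logger
        self.warned = False

    def log(self, msg):
        if not self.warned:
            self.logger.warning(msg)  # visible once, so misconfiguration is noticed
            self.warned = True
        else:
            self.logger.debug(msg)    # subsequent expected failures stay quiet
```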
Fixes #34114
ACKs for top commit:
achow101:
ACK afea2af139
sedited:
ACK afea2af139
Tree-SHA512: 43dad9f3cca0ef9b82446045a3ccd90418cd5595c9965e938d9d539bbba863dde6b4a358adbee56f8d97d6efbf947eb5ddbbaf426faefcf3b1e36e4c8edb0d94
fa90d44a22 test: Fix intermittent issues in feature_assumevalid.py (MarcoFalke)
Pull request description:
The test has many issues:
* The name `send_blocks_until_disconnected` seems to imply waiting until a disconnect happens. However, in reality it blindly sends all blocks and returns early, either when done or when a disconnect happens. This causes test failures when Python runs faster than Bitcoin Core, for example when using sanitizers.
* The `assert_debug_log` scopes are bloated, which makes finding the above issue harder. This is because a test failure will basically print the 1000+ line debug log excerpt twice, with the second instance stripped of useful Python test logs. Also, the checks are less precise if they happen over a larger scope/snippet.
Fix all issues by:
* Removing `send_blocks_until_disconnected`, sending just the blocks that are needed to trigger the disconnect, and then explicitly waiting for the disconnect.
* Reducing the scopes of the debug log checks. This can be reviewed with the git options `--color-moved=dimmed-zebra --color-moved-ws=ignore-all-space`.
ACKs for top commit:
l0rinc:
ACK fa90d44a22
achow101:
ACK fa90d44a22
Tree-SHA512: 8a18fa0009fb40a47317976e20b230625182874391babe0da3c3dfd55d2cc8e2da7008ce8f38691d01cd37ae02cac7c9e88b8193c2ed66d3621ee3f792649f71
2a1d0db799 doc: Mention private broadcast RPCs in release notes (Andrew Toth)
c3378be10b test: Cover abortprivatebroadcast in p2p_private_broadcast (Andrew Toth)
557260ca14 rpc: Add abortprivatebroadcast (Andrew Toth)
15dff452eb test: Cover getprivatebroadcastinfo in p2p_private_broadcast (Andrew Toth)
996f20c18a rpc: Add getprivatebroadcastinfo (Andrew Toth)
5e64982541 net: Add PrivateBroadcast::GetBroadcastInfo (Andrew Toth)
573bb542be net: Store recipient node address in private broadcast (Andrew Toth)
Pull request description:
Follow up from #29415
Sending a transaction via private broadcast does not give the user any way to track the status of the transaction before it gets returned by another peer (the default logs were removed as well in #34267), nor is there any way to abort a transaction once it has been added to the private broadcast queue.
This adds two new RPCs:
- `getprivatebroadcastinfo` returns information about the transactions in the private broadcast queue, including all the peers' addresses we have chosen and timestamps.
- `abortprivatebroadcast` stops broadcasting a transaction in the private broadcast queue.
ACKs for top commit:
nervana21:
tACK 2a1d0db799
achow101:
ACK 2a1d0db799
l0rinc:
ACK 2a1d0db799
danielabrozzoni:
tACK 2a1d0db799
sedited:
ACK 2a1d0db799
Tree-SHA512: cc8682d0be68a57b42bea6e3d091da2b80995d9e6d3b98644cb120a05c2b48a97c2e211173289b758c4f4e23f1d1a1f9be528a9b8c6644f71d1dd0ae5f673326
afb1bc120e validation: Use dirty entry count in flush warnings and disk space checks (Pieter Wuille)
b413491a1c coins: Keep track of number of dirty entries in `CCoinsViewCache` (Pieter Wuille)
7e52b1b945 fuzz: call `EmplaceCoinInternalDANGER` as well in `SimulationTest` (Lőrinc)
Pull request description:
### Problem
Now that non-wiping flushes are possible (#28280, #28233), the cache may be mostly clean at flush time.
But the flush warning, disk-space check, and benchmark logging still used total cache size, so a node with a 10 GiB cache that only needs to write a small fraction of dirty entries could still trigger a scary warning via the disk-space checks.
The previous `DynamicMemoryUsage` metric was also fundamentally wrong for estimating disk writes, even before non-wiping flushes. In-memory coin size differs from on-disk write size due to LevelDB overhead, log doubling, and compaction.
The warning also only fired in `FlushStateToDisk`, so `AssumeUTXO` snapshot loads never warned at all.
### Fix
This PR tracks the actual number of dirty entries via `m_dirty_count` in `CCoinsViewCache`, maintained alongside the existing dirty-flag linked list, with `SanityCheck` cross-validating both counts.
The warning and benchmark log move from `FlushStateToDisk` down to `CCoinsViewDB::BatchWrite`, where the actual I/O happens. This is the single place all flush paths converge (regular flushes, syncs, and snapshot loads), so the warning now fires correctly for `AssumeUTXO` too.
The threshold changes from 1 GiB of memory to 10 million dirty entries, which is roughly equivalent but avoids the in-memory vs on-disk size confusion.
The disk-space safety check now uses `GetDirtyCount()` with the existing conservative 48-byte-per-entry estimate, preventing unnecessary shutdowns when the cache is large but mostly clean.
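The accounting can be sketched as follows; the class and method names are hypothetical, but the constants mirror the numbers above:

```python
class CoinsCacheStats:
    """Illustrative dirty-entry accounting: warn on dirty-entry count rather
    than total in-memory cache size, and estimate disk writes from it."""
    WARN_DIRTY_ENTRIES = 10_000_000  # roughly equivalent to the old 1 GiB threshold
    BYTES_PER_DIRTY_ENTRY = 48       # conservative per-entry on-disk estimate

    def __init__(self):
        self.dirty_count = 0

    def mark_dirty(self, was_dirty: bool):
        # Only a clean->dirty transition increments the counter.
        if not was_dirty:
            self.dirty_count += 1

    def estimated_write_bytes(self) -> int:
        return self.dirty_count * self.BYTES_PER_DIRTY_ENTRY

    def should_warn(self) -> bool:
        return self.dirty_count > self.WARN_DIRTY_ENTRIES
```

A large but mostly clean cache then yields a small `estimated_write_bytes()`, so neither the warning nor the disk-space shutdown triggers spuriously.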
---
Note: the first commit adds fuzz coverage for `EmplaceCoinInternalDANGER` in `SimulationTest` to exercise the accounting paths before modifying them.
Note: this is a revival of #31703 with all outstanding review feedback addressed.
ACKs for top commit:
Eunovo:
Concept ACK afb1bc120e
andrewtoth:
re-ACK afb1bc120e
sipa:
Code review ACK afb1bc120e
sedited:
ACK afb1bc120e
Tree-SHA512: 4133c6669fd20836ae2fb62ed804cdf6ebaa61076927b54fc412e42455a2f0d4cadfab0844064f9c32431eacb1f5e47b78de8e5cde1b26ba7239a7becf92f369
726b3663cc http: properly respond to HTTP request during shutdown (furszy)
59d24bd5dd threadpool: make Submit return Expected instead of throwing (furszy)
Pull request description:
Fixes #34573.
As mentioned in https://github.com/bitcoin/bitcoin/issues/34573#issuecomment-3891596958, the ThreadPool PR (#33689) revealed an existing issue.
Before that PR, we were returning an incorrect error ("Request rejected because http work queue depth exceeded") during shutdown for unhandled requests, since we were not differentiating between "queue depth exceeded" and "server interrupted" errors. Now, with the ThreadPool inclusion, we return the proper error but we don't handle it properly.
This PR fixes exactly that, handling the missing error and properly returning it to the user.
The race can be reproduced as follows:
1) The server receives an http request.
2) Processing of the request is delayed, and shutdown is triggered in the meantime.
3) During shutdown, the libevent callback is unregistered and the threadpool interrupted.
4) The delayed request (step 2) resumes and tries to submit a task to the now-interrupted server.
Reproduction test can be found https://github.com/bitcoin/bitcoin/pull/34577#issuecomment-3902672521.
Also, to prevent this kind of issue from happening again, this PR changes task submission
to return the error as part of the function's return value using `util::Expected` instead of
throwing the exception. Unlike exceptions, which require extra try-catch blocks and can be
ignored, returning `Expected` forces callers to explicitly handle failures, and attributes
like `[[nodiscard]]` allow us to catch unhandled ones at compile time.
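A minimal Python analogue of the `util::Expected` pattern (names are illustrative):

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar("T")
E = TypeVar("E")

@dataclass
class Expected(Generic[T, E]):
    """Toy analogue of util::Expected: holds either a value or an error,
    so the caller must check which one it got before using it."""
    value: Optional[T] = None
    error: Optional[E] = None

    def __bool__(self):
        return self.error is None

def submit(pool_interrupted: bool, task: str) -> "Expected[str, str]":
    # Instead of throwing when the pool is interrupted, report the failure
    # through the return value so callers handle it explicitly.
    if pool_interrupted:
        return Expected(error="server interrupted")
    return Expected(value=f"queued {task}")
```

In the HTTP-server case above, a falsy result from `submit` is what would map to an immediate error response to the client instead of a silently dropped request.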
ACKs for top commit:
achow101:
ACK 726b3663cc
sedited:
ACK 726b3663cc
pinheadmz:
re-ACK 726b3663cc
andrewtoth:
ACK 726b3663cc
hodlinator:
re-ACK 726b3663cc
Tree-SHA512: ef026e299adde1148c9fc575e7d937e957bf0ddedfc1cf081941b568736417c2eefcd8bc8c8aea795d7347040ed05da4371bddcdbda7d385e04bf4dc8d875780
We sync txospenderindex after we've checked the mempool for spending transactions, and only if the search is not limited to the mempool and no
spending transactions have been found for some of the provided outpoints.
This should minimize the chance of having a block containing a spending transaction that is no longer in the mempool but has not been indexed yet.
8966352df3 doc: add release notes (ismaelsadeeq)
704a09fe71 test: ensure fee estimator provide fee rate estimate < 1 s/vb (ismaelsadeeq)
243e48cf49 fees: delete unused dummy field (ismaelsadeeq)
fc4fbda42a fees: bump fees file version (ismaelsadeeq)
b54dedcc85 fees: reduce `MIN_BUCKET_FEERATE` to 100 (ismaelsadeeq)
Pull request description:
This is a simple PR that updates the block policy estimator’s `MIN_BUCKET_FEERATE` constant to be 100, which is identical to the policy `DEFAULT_MIN_RELAY_TX_FEE`.
This change enables the block policy fee rate estimator to return sub-1 sat/vB fee rate estimates.
The change is correct because the estimator creates buckets of fee rates from
`MIN_BUCKET_FEERATE`,
`MIN_BUCKET_FEERATE` + `FEE_SPACING`,
`MIN_BUCKET_FEERATE` + `2 * FEE_SPACING`,
… up to `MAX_BUCKET_FEERATE`.
This means it will record sub-1 sat/vB fee rates in the buckets and may return them as a fee rate estimate when that bucket is the lowest one with sufficient transactions for a given target.
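A sketch of the bucket construction exactly as enumerated above (the constants and spacing scheme here are illustrative placeholders, not the estimator's actual values):

```python
def fee_buckets(min_feerate: float, max_feerate: float, spacing: float) -> list[float]:
    """Enumerate bucket boundaries from min_feerate up to max_feerate,
    stepping by spacing, as described in the text above."""
    buckets = []
    boundary = min_feerate
    while boundary <= max_feerate:
        buckets.append(boundary)
        boundary += spacing
    return buckets
```

With the minimum bucket lowered to 100 (sat/kvB), the lowest boundary corresponds to 0.1 sat/vB, which is how sub-1 sat/vB estimates become representable.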
---
While touching this part of the fee estimator code, this PR also gets rid of the dummy value persisted in the fees file.
ACKs for top commit:
achow101:
ACK 8966352df3
musaHaruna:
ACK [8966352](8966352df3)
polespinasa:
ACK 8966352df3
Tree-SHA512: 9424e0820fcbe124adf5e5d9101a8a2983ba802887101875b2d1856d700c8b563d7a6f6ca3e3db29cb939f786719603aadbf5480d59d55d0951ed1c0caa49868
e0463b4e8c rpc: add coinbase_tx field to getblock (Sjors Provoost)
Pull request description:
This adds a `coinbase_tx` field to the `getblock` RPC result, starting at verbosity level 1. It contains only fields guaranteed to be small, i.e. not the outputs.
Initial motivation for this was to more efficiently scan for BIP54 compliance. Without this change, it requires verbosity level 2 to get the coinbase, which makes such a scan very slow. See https://github.com/bitcoin-inquisition/bitcoin/pull/99#issuecomment-3852370506.
Adding these fields should be useful in general though and hardly makes the verbosity 1 result longer.
```
bitcoin rpc help getblock
getblock "blockhash" ( verbosity )
If verbosity is 0, returns a string that is serialized, hex-encoded data for block 'hash'.
If verbosity is 1, returns an Object with information about block <hash>.
If verbosity is 2, returns an Object with information about block <hash> and information about each transaction.
...
Result (for verbosity = 1):
{ (json object)
"hash" : "hex", (string) the block hash (same as provided)
"confirmations" : n, (numeric) The number of confirmations, or -1 if the block is not on the main chain
"size" : n, (numeric) The block size
"strippedsize" : n, (numeric) The block size excluding witness data
"weight" : n, (numeric) The block weight as defined in BIP 141
"coinbase_tx" : { (json object) Coinbase transaction metadata
"version" : n, (numeric) The coinbase transaction version
"locktime" : n, (numeric) The coinbase transaction's locktime (nLockTime)
"sequence" : n, (numeric) The coinbase input's sequence number (nSequence)
"coinbase" : "hex", (string) The coinbase input's script
"witness" : "hex" (string, optional) The coinbase input's first (and only) witness stack element, if present
},
"height" : n, (numeric) The block height or index
"version" : n, (numeric) The block version
...
```
```
bitcoin rpc getblock 000000000000000000013c986f9aebe800a78454c835ccd07ecae2650bfad3f6 1
```
```json
{
"hash": "000000000000000000013c986f9aebe800a78454c835ccd07ecae2650bfad3f6",
"confirmations": 2,
"height": 935113,
"version": 561807360,
"...": "...",
"weight": 3993624,
"coinbase_tx": {
"version": 2,
"locktime": 0,
"sequence": 4294967295,
"coinbase": "03c9440e04307c84692f466f756e6472792055534120506f6f6c202364726f70676f6c642ffabe6d6d9a8624235259d3680c972b0dd42fa3fe1c45c5e5ae5a96fe10c182bda17080e70100000000000000184b17d3f138020000000000",
"witness": "0000000000000000000000000000000000000000000000000000000000000000"
},
"tx": [
"70eb053340c7978c5aa1b34d75e1ba9f9d1879c09896317f306f30c243536b62",
"5bcf8ed2900cb70721e808b8977898e47f2c9001fcee83c3ccd29e51c7775dcd",
"3f1991771aef846d7bb379d2931cccc04e8421a630ec9f52d22449d028d2e7f4",
"..."
]
}
```
ACKs for top commit:
sedited:
Re-ACK e0463b4e8c
darosior:
re-utACK e0463b4e8c
Tree-SHA512: 1b3e7111e6a0edffde8619c49b3e9bca833c8e90e416defc66811bd56dd00d45b69a84c8fd9715508f4e6515f77ac4fb5c59868ab997ae111017c78c05b74ba3
1b36bf0c5d subprocess: Fix `-Wunused-private-field` for `Child` class on Windows (Hennadii Stepanov)
9f2b338bc0 subprocess: Fix `-Wunused-private-field` for `Popen` class on Windows (Hennadii Stepanov)
Pull request description:
This PR is a prerequisite for https://github.com/bitcoin/bitcoin/pull/31507.
It resolves `-Wunused-private-field` warnings triggered in `src/util/subprocess.h` when compiling with clang-cl on Windows:
```
D:\a\bitcoin\bitcoin\src\util/subprocess.h(759,10): warning : private field 'parent_' is not used [-Wunused-private-field] [D:\a\bitcoin\bitcoin\build\src\util\bitcoin_util.vcxproj]
D:\a\bitcoin\bitcoin\src\util/subprocess.h(760,7): warning : private field 'err_wr_pipe_' is not used [-Wunused-private-field] [D:\a\bitcoin\bitcoin\build\src\util\bitcoin_util.vcxproj]
D:\a\bitcoin\bitcoin\src\util/subprocess.h(1038,7): warning : private field 'child_pid_' is not used [-Wunused-private-field] [D:\a\bitcoin\bitcoin\build\src\util\bitcoin_util.vcxproj]
```
Only the second commit has been [submitted](https://github.com/arun11299/cpp-subprocess/pull/127) upstream. The first commit is specific to this repository and not applicable upstream.
ACKs for top commit:
maflcko:
review ACK 1b36bf0c5d👋
purpleKarrot:
ACK 1b36bf0c5d
sedited:
ACK 1b36bf0c5d
Tree-SHA512: 1bc0544d769264fa74d2f39150595ee6339af4bca7b7051ecaecbe234c17b643b715e00cfb9302a16ffc4856957f4fa47c216aebf03fec0cd95c387f51bd29a6
fa5672dcaf refactor: [gui] Use SettingTo<int64_t> over deprecated SettingToInt (MarcoFalke)
fac3ecaf69 rpc: Properly parse -rpcworkqueue/-rpcthreads (MarcoFalke)
faee36f63b util: Add SettingTo<Int>() and GetArg<Int>() (MarcoFalke)
Pull request description:
The integral arg parsing has many issues:
* There is no way to parse an unsigned integral type at all
* There is no way to parse an integral type of less width than int64_t
* As a result, calling code is littered with confusing C-style casts just to let the code compile, usually without range checks or proper range handling.
For example, when someone (maybe for testing) wants to set the rpc work queue to the maximum possible number, there is no easy way to do so without reading the source code and manually crafting the exact integer value. Using the "9999 hack" will silently set it to `-1` (!)
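One way to get the "After" behavior shown below is a range-aware parse that saturates at the target type's bounds instead of wrapping (a hedged sketch; the PR's `GetArg<Int>` implementation details may differ):

```python
INT32_MAX = 2**31 - 1

def parse_clamped_int(value: str, lo: int, hi: int) -> int:
    """Illustrative range-aware parse: out-of-range input is clamped to the
    target type's bounds instead of silently wrapping to -1."""
    n = int(value)  # raises on non-numeric input rather than guessing
    return max(lo, min(hi, n))
```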
To test:
`/bld-cmake/bin/bitcoin-qt -datadir=/tmp -regtest -rpcworkqueue=99999999999999999999999999 -printtoconsole=1 -server=1 -debug=http | grep 'set work queue of depth'`
Before:
```
[http] set work queue of depth -1
```
After:
```
[http] set work queue of depth 2147483647
```
ACKs for top commit:
stickies-v:
ACK fa5672dcaf
pinheadmz:
ACK fa5672dcaf
sedited:
ACK fa5672dcaf
Tree-SHA512: e5060453a0aa1c4e27080e928b0ae2d1015fe487246e4059866eef415f301bc7712ce306d95076ce5b66a5e57c620715b33998192c0ff06b0384085a0390c714
This adds a "coinbase_tx" field to the getblock RPC result, starting
at verbosity level 1. It contains only fields guaranteed to be small,
i.e. not the outputs.
a099655f2e scripted-diff: Update `DeriveType` enum values to mention ranged derivations (rkrux)
Pull request description:
While reviewing the MuSig2 descriptors PR #31244, I realized that the enum
`DeriveType` here logically refers to the derive type for ranged descriptors.
This became evident to me while going through the implementations of `IsRange`
& `IsHardened` functions of `BIP32PubkeyProvider`, and the `ParsePubkeyInner`
function. Initially I got confused by reading `IsRange` translating to
`!= DeriveType::NO`, but later realised it specifically referred to the presence
of ranged derivations. I propose explicitly mentioning "ranged" in the values
of the `DeriveType` enum would make it easier to parse the descriptors code.
This enum is used in one file only - `script/descriptors.cpp`. That's why I
explicitly passed it as the argument in the `sed` command in the script.
ACKs for top commit:
hodlinator:
re-ACK a099655f2e
pablomartin4btc:
ACK a099655f2e
PeterWrighten:
ACK a099655
Tree-SHA512: 03f11e5a37edd4f92b7113c13cdeabb11c62cc5d836874f9a4eee107362d64a1745e6a65079033dc260a58d8693bccc9dce9c18e9433a05258e8a6b34242514c
Adds an outpoint -> txid index, which can be used to find which transactions spent a given output.
We use a composite key with 2 parts (suggested by @romanz): hash(spent outpoint) and tx position, with an empty value.
To find the spending tx for a given outpoint, we do a prefix search (the prefix being the hash of the provided outpoint), and for all keys that match this prefix
we load the tx at the position specified in the key and return it, along with the block hash, if it does spend the provided outpoint.
To handle reorgs we just erase the keys computed from the removed block.
This index is extremely useful for Lightning and more generally for layer-2 protocols that rely on chains of unpublished transactions.
If enabled, this index will be used by `gettxspendingprevout` when it does not find a spending transaction in the mempool.
62e378584e guix: don't export TZ twice (fanquake)
badcf1c68d guix: fix typo in guix-codesign (fanquake)
Pull request description:
Remove a double export of `TZ` and fix a typo.
ACKs for top commit:
janb84:
ACK 62e378584e
hebasto:
ACK 62e378584e, I have reviewed the code and it looks OK.
Tree-SHA512: 6e0b66d20db2b0535bea7d29a28782d0ddd423eaace5f2aa708a227985d465e33def10e9f8ac5ea5482fe640ea0df4ca9df8e9f417bf5d040867564fc60aebd0
24699fec84 doc: Add initial asmap data documentation (Fabian Jahr)
bab085d282 ci: Use without embedded asmap build option in one ci job (Fabian Jahr)
e53934422a doc: Expand documentation on asmap feature and tooling (Fabian Jahr)
6244212a55 init, net: Implement usage of binary-embedded asmap data (Fabian Jahr)
6202b50fb9 build: Generate ip_asn.dat.h during build process (Fabian Jahr)
634cd60dc8 build: Add embedded asmap data (Fabian Jahr)
Pull request description:
This is the final PR in a series that implements the necessary changes for embedding asmap data into the binary. This last part adds the initial asmap data, implements the build changes and adds further documentation.
Currently an asmap file needs to be acquired by the user from some location, or the user needs to generate one themselves. Then they need to move the file to the right place in the datadir or pass the path to the file as `-asmap=PATH` in order to use the asmap feature. The change here allows builds to embed asmap data into the bitcoind binary, which makes it possible to use the feature without any handling of the asmap file by the user. If the user starts bitcoind with `-asmap`, the embedded data will be used for bucketing of nodes.
The data lives in the repository at `src/node/data/ip_asn.dat` and can be replaced with a new version at any time. The idea is that the data should be updated with every release. By default the data at that location is embedded into the binary but there is also a build option to prevent this (`-DWITH_EMBEDDED_ASMAP=OFF`). In this case the original behavior of the `-asmap` option is maintained.
ACKs for top commit:
achow101:
ACK 24699fec84
sipa:
ACK 24699fec84
hodlinator:
ACK 24699fec84
Tree-SHA512: c2e33dbeea387efdfd3d415432bf8fa64de80f272c1207015ea53b85bb77f5c29f1dae5644513a23c844a98fb0a4bb257bf765f38b15bfc4c41984f0315b4c6a
24f93c9af7 release note (Pol Espinasa)
331a5279d2 wallet, rpc:remove settxfee and paytxfee (Pol Espinasa)
Pull request description:
**Summary**
This PR removes the settxfee RPC and paytxfee setting (Bitcoin Core 31.0).
These two features were deprecated in https://github.com/bitcoin/bitcoin/pull/31278.
ACKs for top commit:
achow101:
ACK 24f93c9af7
w0xlt:
reACK 24f93c9af7
Tree-SHA512: e090f1a72ba2cbeba7c982dd51dfdcf6baf0a164827337cf56fd85f733e143b8d6116b8cd53c59c812cacef193dfa0b101a830fc455e32bf225e8505e7b2a554
Makes sure we respond to the client when the HTTP request attempts to submit a task to
the thread pool during server shutdown.
Roughly what happens:
1) The server receives an HTTP request and starts calling http_request_cb().
2) Meanwhile on another thread, shutdown is triggered which calls InterruptHTTPServer()
and unregisters libevent http_request_cb() callback and interrupts the thread pool.
3) The request (step 1) resumes and tries to submit a task to the now-interrupted server.
This fix detects failed submissions immediately, and the server responds with
HTTP_SERVICE_UNAVAILABLE.
6df4a045f7 qt: Replace three dots with ellipsis (Hennadii Stepanov)
Pull request description:
From the translator's [comment](https://app.transifex.com/bitcoin/bitcoin/translate/#pl/qt-translation-031x/611215465/) on Transifex:
> Source text has 3 dots, instead of 3-dot character. Most commonly used form is the 3-dot character: …
You could consider updating it.
Top commit has no ACKs.
Tree-SHA512: 73ca2d574798b682abca352346fd3664293f14b66c16b01d1a10128e840072fae883b65db14b86e8a4dffdd849563f4c9412d99baeb9735c919d3629ad9799f7
fa4cb96bde test: Set assert_debug_log timeout to 0 (MarcoFalke)
Pull request description:
The `assert_debug_log` helper is meant to be a context manager. However, it has a default timeout of 2 seconds, and can act as a silent polling/wait/sync function. This is confusing and brittle, and leads to intermittent bugs such as https://github.com/bitcoin/bitcoin/pull/34571.
Fix all issues by setting the default timeout to zero. Then adjust all call sites to either use the helper correctly like a context manager, or explicitly specify a timeout, to document that it is a polling sync function.
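A minimal sketch of such a helper (hypothetical names, not the test framework's exact code; with `timeout=0` it behaves as a pure context manager, while a nonzero timeout turns it into an explicit polling sync):

```python
import contextlib
import time

@contextlib.contextmanager
def assert_in_log(log_lines: list, expected: str, timeout: float = 0):
    """Assert that `expected` appears in lines added to log_lines inside the
    with-block; timeout=0 checks once on exit, nonzero timeout polls."""
    start = len(log_lines)
    yield
    deadline = time.time() + timeout
    while True:
        if any(expected in line for line in log_lines[start:]):
            return
        if time.time() >= deadline:
            raise AssertionError(f"{expected!r} not found in log")
        time.sleep(0.01)
```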
ACKs for top commit:
hodlinator:
re-ACK fa4cb96bde
brunoerg:
ACK fa4cb96bde
Tree-SHA512: 111a45c84327c3afea98fd9f8de13dd7d11969b2309471b4ad21694b290a3734f6e1eda75a41b39613b0995c5b7075298085cea2ca06aa07d4385eb8d72fe95b
fb3e1bf9c9 test: check LoadBlockIndex correctly recomputes invalidity flags (stratospher)
29740c06ac validation: remove BLOCK_FAILED_MASK (stratospher)
b5b2956bda validation: reset BLOCK_FAILED_CHILD to BLOCK_FAILED_VALID when loading from disk (stratospher)
37bc207852 validation: stop using BLOCK_FAILED_CHILD (stratospher)
120c631e16 refactor: use clearer variables in InvalidateBlock() (stratospher)
18f11695c7 validation: don't update BLOCK_FAILED_VALID to BLOCK_FAILED_CHILD in InvalidateBlock (stratospher)
Pull request description:
Fixes https://github.com/bitcoin/bitcoin/issues/32173
Even though we have a distinction between `BLOCK_FAILED_VALID` and `BLOCK_FAILED_CHILD` in the codebase,
we don't use it for anything. Whenever we check for `BlockStatus`, we use `BLOCK_FAILED_MASK`, which encompasses both of them.
Since there is no functional difference between `BLOCK_FAILED_VALID` and `BLOCK_FAILED_CHILD`, and it adds
code complexity to correctly categorise them (ex: https://github.com/bitcoin/bitcoin/pull/31405#discussion_r1914366243, https://github.com/bitcoin/bitcoin/pull/16856#issuecomment-565506585), we could just remove it.
Looking for conceptual feedback on whether it's better to improve handling of `BLOCK_FAILED_CHILD` in the codebase or remove `BLOCK_FAILED_CHILD`.
Of less relevance, but it would also fix a `reconsiderblock` crash that could happen in the situation mentioned in https://github.com/bitcoin/bitcoin/issues/32173#issuecomment-2767030982
Similar attempt in the past in https://github.com/bitcoin/bitcoin/pull/16856#issuecomment-568073859
ACKs for top commit:
stickies-v:
re-ACK fb3e1bf9c9
alexanderwiederin:
ACK fb3e1bf9c9
mzumsande:
re-ACK fb3e1bf9c9
Tree-SHA512: e97b739885c40a8c021966438e9767cc02bc183056236d6a8c64f6819347ae70c0fbcd71cc2528917560d9f4fd56aed45faf1b6c75d98de7b08b621693a97fbc
fa48d42163 test: Stricter unit test (MarcoFalke)
fa626bd143 util: Remove brittle and confusing sp::Popen(std::string) (MarcoFalke)
Pull request description:
The subprocess Popen call that accepts a full `std::string` has many issues:
* It promotes brittle and broken code, where spaces are not properly quoted. Example: https://github.com/bitcoin/bitcoin/pull/33929#discussion_r2590523065
* The internally used `util::split` function incorrectly splits on every space, instead of doing quote-aware splitting like `shlex.split`.
* It is redundant and not needed, because a vector interface already exists.
Fix all issues by removing it and just using the vector interface.
This pull request should not change any behavior: Note that the command taken from `gArgs.GetArg("-signer", "")` is still passed through the `sp::util::split` helper, just like before. Fixing that is left for a follow-up, so that this change here is basically just a refactor.
This also fixes a unit test bug as a side-effect: Fixes https://github.com/bitcoin/bitcoin/issues/32574.
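For illustration of why the string interface is brittle, compare a naive space split with Python's quote-aware `shlex.split` (the same class of problem as the `sp::util::split` issue above; the command line is a made-up example):

```python
import shlex

def naive_split(cmd: str) -> list[str]:
    """What splitting purely on whitespace does: quoting is ignored."""
    return cmd.split()

cmd = 'signer --keyfile "my keys.dat"'
naive = naive_split(cmd)   # ['signer', '--keyfile', '"my', 'keys.dat"'] -- broken
proper = shlex.split(cmd)  # ['signer', '--keyfile', 'my keys.dat']
```

Passing an argv vector directly sidesteps the splitting problem entirely, which is why removing the string overload is the robust fix.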
ACKs for top commit:
janb84:
cr ACK fa48d42163
fjahr:
Code review ACK fa48d42163
hebasto:
re-ACK fa48d42163.
Tree-SHA512: 3d29226977c9392502f9361e2bd42b471ad03761bbf6a94ef6e545cbe4492ad5858da1ac9cc64b2791aacb9b6e6f3c3f63dbcc3a2bf45f6a13b5bc33eddf8c2b
c2fcf25069 clusterlin: inline GetReachable into Deactivate (optimization) (Pieter Wuille)
d90f98ab4a clusterlin: inline UpdateChunk into (De)Activate (optimization) (Pieter Wuille)
b684f954bb clusterlin: unidirectional MakeTopological initially (optimization) (Pieter Wuille)
1daa600c1c clusterlin: track suboptimal chunks (optimization) (Pieter Wuille)
63b06d5523 clusterlin: keep track of active children (optimization) (Pieter Wuille)
ae16485aa9 clusterlin: special-case self-merges (optimization) (Pieter Wuille)
3221f1a074 clusterlin: make MergeSequence take SetIdx (simplification) (Pieter Wuille)
7194de3f7c clusterlin: precompute reachable sets (optimization) (Pieter Wuille)
6f898dbb8b clusterlin: simplify PickMergeCandidate (optimization) (Pieter Wuille)
dcf458ffb9 clusterlin: split up OptimizeStep (refactor) (Pieter Wuille)
cbd684a471 clusterlin: abstract out functions from MergeStep (refactor) (Pieter Wuille)
b75574a653 clusterlin: improve TxData::dep_top_idx type (optimization) (Pieter Wuille)
73cbd15d45 clusterlin: get rid of DepData (optimization) (Pieter Wuille)
7c6f63a8a9 clusterlin: pool SetInfos (preparation) (Pieter Wuille)
20e2f3e96d scripted-diff: rename _rep -> _idx in SFL (Pieter Wuille)
268fcb6a53 clusterlin: add more Assumes and sanity checks (tests) (Pieter Wuille)
d69c9f56ea clusterlin: count chunk deps without loop (optimization) (Pieter Wuille)
f66fa69ce0 clusterlin: split tx/chunk dep counting (preparation) (Pieter Wuille)
900e459778 clusterlin: avoid depgraph argument in SanityCheck (cleanup) (Pieter Wuille)
666b37970f clusterlin: fix type to count dependencies (Pieter Wuille)
Pull request description:
Follow-up to #32545, part of #30289.
This contains many of the optimizations that were originally part of #32545 itself.
Here is a list of commits and the geometric average of the benchmark timings. Note that these aren't worst cases, but because of the nature of the optimizations here, I do expect them to apply roughly equally to all kinds of clusters. In other words, the relative improvement shown by these numbers should be representative:
| commit title | ns per optimal linearization |
|:--|--:|
| clusterlin: split tx/chunk dep counting (preparation) | 24,760.30 |
| clusterlin: count chunk deps without loop (optimization) | 24,677.64 |
| scripted-diff: rename _rep -> _idx in SFL | 24,640.08 |
| clusterlin: get rid of DepData, reuse sets (optimization) | 24,389.01 |
| clusterlin: improve TxData::dep_top_idx type (optimization) | 22,578.58 |
| clusterlin: abstract out functions from MergeStep (refactor) | 22,577.15 |
| clusterlin: split up OptimizeStep (refactor) | 22,981.11 |
| clusterlin: simplify PickMergeCandidate (optimization) | 22,018.63 |
| clusterlin: precompute reachable sets (optimization) | 21,194.91 |
| clusterlin: make MergeSequence take SetIdx (simplification) | 21,135.60 |
| clusterlin: special-case self-merges (optimization) | 20,588.13 |
| clusterlin: keep track of active children (optimization) | 13,911.22 |
| clusterlin: track suboptimal chunks (optimization) | 13,629.42 |
| clusterlin: unidirectional MakeTopological initially (optimization) | 12,796.57 |
| clusterlin: inline UpdateChunk into (De)Activate (optimization) | 12,706.35 |
| clusterlin: inline GetReachable into Deactivate (optimization) | 12,525.66 |
And to show that they apply to all clusters roughly similarly:
<img width="1239" height="640" alt="output(1)" src="https://github.com/user-attachments/assets/edd73937-3f87-4582-b2b9-eaed7e6ff352" />
ACKs for top commit:
instagibbs:
reACK c2fcf25069
ajtowns:
reACK c2fcf25069
Tree-SHA512: e8920f7952a6681b4c1d70c864bd0e9784127ae4fd7c740be3e24a473f72e83544d2293066ed9b3e685b755febd6bbbc6c7da0c9b6ef3699b05eaa8d5bc073a0
Unlike exceptions, which can be silently ignored because handling
them requires extra try-catch blocks, returning expected errors
forces callers to always handle submission failures.
Not throwing an exception also fixes an unclean shutdown bug
#34573 since we no longer throw when attempting to Submit()
from the libevent callback http_request_cb().
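The difference in caller obligations can be sketched in Python (names like `submit_expected` are illustrative, not Core's actual API):

```python
# Exception style: a caller that omits try/except simply lets the
# failure propagate; nothing forces it to consider the error path.
def submit_throwing(block: bytes) -> str:
    if not block:
        raise ValueError("empty block")
    return "accepted"

# Expected/result style: failure is part of the return value, so the
# caller must unpack and inspect it before using the result at all.
def submit_expected(block: bytes):
    if not block:
        return (None, "empty block")
    return ("accepted", None)

value, err = submit_expected(b"")
assert value is None and err == "empty block"
value, err = submit_expected(b"\x01")
assert value == "accepted" and err is None
```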
Add a test for block index transitioning from legacy
BLOCK_FAILED_CHILD to BLOCK_FAILED_VALID behavior.
In the scenario where a valid block has a BLOCK_FAILED_CHILD
parent and a BLOCK_FAILED_VALID grandparent, ensure that all
three blocks are correctly marked as BLOCK_FAILED_VALID
after reloading the block index.
- there may be existing block indexes stored on disk with
BLOCK_FAILED_CHILD
- since BLOCK_FAILED_CHILD no longer exists, clean up block index
entries that use it and reset them to BLOCK_FAILED_VALID.
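A minimal sketch of this load-time cleanup, assuming BlockStatus-style bit flags (the bit positions mirror Bitcoin Core's `BLOCK_FAILED_VALID`/`BLOCK_FAILED_CHILD` flags, but the helper itself is illustrative):

```python
BLOCK_FAILED_VALID = 1 << 5  # block itself failed validation
BLOCK_FAILED_CHILD = 1 << 6  # legacy: descends from a failed block

def normalize_failed_status(status: int) -> int:
    """Rewrite a legacy BLOCK_FAILED_CHILD entry as BLOCK_FAILED_VALID."""
    if status & BLOCK_FAILED_CHILD:
        status = (status & ~BLOCK_FAILED_CHILD) | BLOCK_FAILED_VALID
    return status

# On-disk entries written by older software are remapped on load;
# entries without the legacy flag pass through unchanged.
assert normalize_failed_status(BLOCK_FAILED_CHILD) == BLOCK_FAILED_VALID
assert normalize_failed_status(BLOCK_FAILED_VALID) == BLOCK_FAILED_VALID
```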
Even though we have a distinction between BLOCK_FAILED_VALID
and BLOCK_FAILED_CHILD in the codebase, we don't use it for
anything. Since there's no functional difference between them,
and categorising them correctly adds unnecessary code complexity,
just mark blocks as BLOCK_FAILED_VALID instead.
Improve upon the variable name for `invalid_walk_tip` to make the
InvalidateBlock logic easier to read. Block tip before disconnection
is now tracked directly via `disconnected_tip`, and `new_tip`
is the tip after the disconnect.
Co-authored-by: stickies-v <stickies-v@protonmail.com>
Avoid two full iterations over all of a chunk's transactions to
recompute the reachable sets, by inlining them into the
dependency-updating loops.
Note that there is no need to do the same for Activate, because the
reachable sets after merging can be computed directly from the input
chunks' reachable sets. Deactivate needs to recompute them, however.
The two calls to UpdateChunk, in each of Activate and Deactivate, are
subtly different: the top one needs to update the chunk_idx of iterated
transactions, while the bottom one leaves it unchanged. To exploit this
difference, inline the four function calls, getting rid of UpdateChunk.
This is also a preparation for a future improvement that inlines the
recomputation of reachable sets in the same loop in Deactivate.
It suffices to initially only attempt one direction of merges in
MakeTopological(), and only try both directions on chunks that are the
result of other merges.
This means we can iterate over all active dependencies in a
cluster/chunk in O(ntx) time rather than O(ndeps) (*), as the number of
active dependencies in a set of transactions of size ntx is at most ntx-1.
(*) Asymptotically, this is not actually true, as for large transaction
counts, iterating over a BitSet still scales with ntx. In practice
however, where BitSets are represented by a constant number of integers,
it holds.
After a split, if the top part has a dependency on the bottom part, the
first MergeSequence will always perform this merge and then stop. This
is referred to as a self-merge.
We can special case these by detecting self-merges early, and avoiding
the overhead of a full MergeSequence which involves two
PickMergeCandidate calls (a successful and an unsuccessful one).
Future changes will rely on knowing the chunk indexes of the two created
chunks after a split. It is natural to return this information from
Deactivate, which also simplifies MergeSequence.
Instead of computing the set of reachable transactions inside
PickMergeCandidate, make the information precomputed, and updated in
Activate (by merging the two chunks' reachable sets) and Deactivate (by
recomputing).
This is a small performance gain by itself, but also a preparation for
future optimizations that rely on quickly testing whether dependencies
between chunks exist.
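A toy sketch of this maintenance scheme, with transactions as plain ids and dependencies as a child-to-parents map (all names are illustrative, not Core's):

```python
def reachable_from(txs, parents):
    """Recompute the set of transactions reachable via parent links
    from any transaction in `txs` (what Deactivate must do)."""
    out, stack = set(txs), list(txs)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in out:
                out.add(p)
                stack.append(p)
    return out

parents = {"c": {"b"}, "b": {"a"}}
reach_a = reachable_from({"a"}, parents)        # {'a'}
reach_bc = reachable_from({"b", "c"}, parents)  # {'a', 'b', 'c'}

# Activate (merging two chunks) needs no rescan: the merged chunk's
# reachable set is simply the union of the inputs' reachable sets.
assert reach_a | reach_bc == reachable_from({"a", "b", "c"}, parents)
```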
The current process consists of iterating over the transactions of the
chunk one by one, and then for each figuring out which of its
parents/children are in unprocessed chunks.
Simplify this (and speed it up slightly) by splitting this process into
two phases: first determine the union of all parents/children, and then
find which chunks those belong to.
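The two-phase scheme can be sketched as follows (illustrative names: `parents` maps a transaction to its parents, `chunk_of` maps a transaction to its chunk index):

```python
def parent_chunks(chunk_txs, parents, chunk_of):
    # Phase 1: compute the union of all parents of the chunk's txs.
    all_parents = set()
    for tx in chunk_txs:
        all_parents |= parents.get(tx, set())
    # Phase 2: map that single union onto chunks, instead of doing a
    # per-transaction chunk lookup inside the first loop.
    own = {chunk_of[t] for t in chunk_txs}
    return {chunk_of[p] for p in all_parents} - own

parents = {"c": {"a", "b"}, "d": {"b"}}
chunk_of = {"a": 0, "b": 1, "c": 2, "d": 2}
# Chunk 2 ({"c", "d"}) depends on chunks 0 and 1 via its parents.
assert parent_chunks({"c", "d"}, parents, chunk_of) == {0, 1}
```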
The combined size of TxData::dep_top_idx can be 16 KiB with 64
transactions and SetIdx = uint32_t. Use a smaller type where possible to
reduce memory footprint and improve cache locality of m_tx_data.
Also switch from an std::vector to an std::array, reducing allocation
overhead and indirections.
With the earlier change to pool SetInfo objects, there is little need
for DepData anymore. Use parent/child TxIdxs to refer to dependencies,
and find their top set by having a child TxIdx-indexed vector in each
TxData, rather than a list of dependencies. This makes code for
iterating over dependencies more natural and simpler.
This significantly changes the data structures used in SFL, based on the
observation that the DepData::top_setinfo fields are quite wasteful:
there is one per dependency (up to n^2/4), but we only ever need one per
active dependency (of which there are at most n-1). In total, the number of
chunks plus the number of active dependencies is always exactly equal to
the number of transactions, so it makes sense to have a shared pool of
SetInfos, which are used for both chunks and top sets.
To that effect, introduce a separate m_set_info variable, which stores a
SetInfo per transaction. Some of these are used for chunk sets, and some
for active dependencies' top sets. Every activation transforms the
parent's chunk into the top set for the new dependency. Every
deactivation transforms the top set into the new parent chunk.
With indexes into m_set_data (SetIdx) becoming bounded by the number of
transactions, we can use a SetType to represent sets of SetIdxs.
Specifically, an m_chunk_idxs is added which contains all SetIdx
referring to chunks. This leads to a much more natural way of iterating
over chunks.
Also use this opportunity to normalize many variable names.
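The sizing argument rests on a forest-style invariant: each activation merges two chunks into one while creating one active dependency, and each deactivation reverses this, so (chunks + active dependencies) always equals the number of transactions. A minimal sketch of the bookkeeping:

```python
n_tx = 5
chunks, active_deps = n_tx, 0  # initially every tx is its own chunk

def activate():
    global chunks, active_deps
    chunks -= 1        # two chunks merge into one...
    active_deps += 1   # ...via one newly active dependency

def deactivate():
    global chunks, active_deps
    chunks += 1        # one chunk splits into two...
    active_deps -= 1   # ...as one dependency becomes inactive

for _ in range(3):
    activate()
deactivate()
# Invariant justifying a shared pool of exactly n_tx SetInfo slots:
assert chunks + active_deps == n_tx
```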