Merge bitcoin/bitcoin#33464: p2p: Use network-dependent timers for inbound inv scheduling

0f7d4ee4e8 p2p: Use different inbound inv timer per network (Martin Zumsande)
94db966a3b net: use generic network key for addrcache (Martin Zumsande)

Pull request description:

  Currently, `NextInvToInbounds` schedules each round of `inv`s at the same time for all inbound peers. This is done because, with a separate timer per peer (as is used for outbound peers), an attacker could open multiple inbound connections to learn when a transaction arrived at the node (#13298).
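
  For intuition, here is a minimal standalone sketch of such a shared, exponentially distributed timer (the class, names and RNG are illustrative, not the actual `PeerManagerImpl` code):

```cpp
// Minimal sketch (not the actual PeerManagerImpl code): one timer shared by all
// inbound peers, advanced by an exponentially distributed delay, so every peer
// sees the same inv schedule and extra connections reveal no extra timing.
#include <chrono>
#include <random>

using namespace std::chrono_literals;

class SharedInvTimer
{
    std::chrono::microseconds m_next{0us};
    std::mt19937_64 m_rng{std::random_device{}()};

public:
    // Only bumps the shared deadline once it has passed, regardless of how
    // many inbound peers ask for their next send time.
    std::chrono::microseconds Next(std::chrono::microseconds now, std::chrono::seconds average_interval)
    {
        if (m_next < now) {
            std::exponential_distribution<double> exp_delay{1.0 / std::chrono::duration<double>(average_interval).count()};
            m_next = now + std::chrono::duration_cast<std::chrono::microseconds>(
                               std::chrono::duration<double>{exp_delay(m_rng)});
        }
        return m_next;
    }
};
```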

  However, having a single timer for inbound peers across all networks is also an obvious fingerprinting vector: connecting to a suspected pair of privacy-network and clearnet addresses and observing the `inv` pattern makes it trivial to confirm or refute that they belong to the same node.

  This PR changes that so that a separate timer is used for each network.
  It takes the existing key derivation used for `getaddr` caching and generalizes it: the key is saved in a new `CNode` field, `m_network_key`, which is used for both `getaddr` caching and `inv` scheduling and can also be used for any future anti-fingerprinting measures.
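
  As an illustration only, one way such a per-connection key could be derived is a salted hash of the connection's network, so peers on the same network share a key while keys differ across networks and across restarts. The helper below is hypothetical and not the exact upstream derivation:

```cpp
// Hypothetical sketch of deriving a per-network key; the real field is
// CNode::m_network_key, but this derivation is an illustrative assumption,
// not the exact upstream implementation.
#include <cstdint>
#include <functional>
#include <random>

// Hypothetical network enum; Bitcoin Core's real Network enum differs.
enum class Network : uint8_t { IPV4, IPV6, ONION, I2P, CJDNS };

// Per-startup salt so keys are not predictable or linkable across restarts.
static const uint64_t g_network_key_salt{std::random_device{}()};

// Peers on the same network share a key; peers on different networks get
// different keys (and thus different inv timers / addr caches).
uint64_t ComputeNetworkKey(Network net)
{
    return std::hash<uint64_t>{}(g_network_key_salt ^ (static_cast<uint64_t>(net) + 0x9e3779b97f4a7c15ULL));
}
```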

ACKs for top commit:
  sipa:
    utACK 0f7d4ee4e8
  stratospher:
    reACK 0f7d4ee.
  naiyoma:
    Tested ACK 0f7d4ee4e8
  danielabrozzoni:
    reACK 0f7d4ee4e8

Tree-SHA512: e197c3005b2522051db432948874320b74c23e01e66988ee1ee11917dac0923f58c1252fa47da24e68b08d7a355d8e5e0a3ccdfa6e4324cb901f21dfa880cd9c

```diff
@@ -807,7 +807,7 @@ private:
     uint32_t GetFetchFlags(const Peer& peer) const;
-    std::atomic<std::chrono::microseconds> m_next_inv_to_inbounds{0us};
+    std::map<uint64_t, std::chrono::microseconds> m_next_inv_to_inbounds_per_network_key GUARDED_BY(g_msgproc_mutex);
     /** Number of nodes with fSyncStarted. */
     int nSyncStarted GUARDED_BY(cs_main) = 0;
@@ -837,12 +837,14 @@ private:
     /**
      * For sending `inv`s to inbound peers, we use a single (exponentially
-     * distributed) timer for all peers. If we used a separate timer for each
+     * distributed) timer for all peers with the same network key. If we used a separate timer for each
      * peer, a spy node could make multiple inbound connections to us to
-     * accurately determine when we received the transaction (and potentially
-     * determine the transaction's origin). */
+     * accurately determine when we received a transaction (and potentially
+     * determine the transaction's origin). Each network key has its own timer
+     * to make fingerprinting harder. */
     std::chrono::microseconds NextInvToInbounds(std::chrono::microseconds now,
-                                                std::chrono::seconds average_interval) EXCLUSIVE_LOCKS_REQUIRED(g_msgproc_mutex);
+                                                std::chrono::seconds average_interval,
+                                                uint64_t network_key) EXCLUSIVE_LOCKS_REQUIRED(g_msgproc_mutex);
     // All of the following cache a recent block, and are protected by m_most_recent_block_mutex
@@ -1143,15 +1145,15 @@ static bool CanServeWitnesses(const Peer& peer)
 }
 std::chrono::microseconds PeerManagerImpl::NextInvToInbounds(std::chrono::microseconds now,
-                                                             std::chrono::seconds average_interval)
+                                                             std::chrono::seconds average_interval,
+                                                             uint64_t network_key)
 {
-    if (m_next_inv_to_inbounds.load() < now) {
-        // If this function were called from multiple threads simultaneously
-        // it would possible that both update the next send variable, and return a different result to their caller.
-        // This is not possible in practice as only the net processing thread invokes this function.
-        m_next_inv_to_inbounds = now + m_rng.rand_exp_duration(average_interval);
+    auto [it, inserted] = m_next_inv_to_inbounds_per_network_key.try_emplace(network_key, 0us);
+    auto& timer{it->second};
+    if (timer < now) {
+        timer = now + m_rng.rand_exp_duration(average_interval);
     }
-    return m_next_inv_to_inbounds;
+    return timer;
 }
 bool PeerManagerImpl::IsBlockRequested(const uint256& hash)
@@ -5715,7 +5717,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
             if (tx_relay->m_next_inv_send_time < current_time) {
                 fSendTrickle = true;
                 if (pto->IsInboundConn()) {
-                    tx_relay->m_next_inv_send_time = NextInvToInbounds(current_time, INBOUND_INVENTORY_BROADCAST_INTERVAL);
+                    tx_relay->m_next_inv_send_time = NextInvToInbounds(current_time, INBOUND_INVENTORY_BROADCAST_INTERVAL, pto->m_network_key);
                 } else {
                     tx_relay->m_next_inv_send_time = current_time + m_rng.rand_exp_duration(OUTBOUND_INVENTORY_BROADCAST_INTERVAL);
                 }
```
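
For reference, a self-contained sketch of the per-network-key timer pattern introduced in the hunks above (the free function, globals, and RNG are illustrative simplifications, not upstream code):

```cpp
// Simplified illustration of the new logic: one timer per network key, lazily
// created with try_emplace, each advanced independently with its own
// exponentially distributed delay.
#include <chrono>
#include <cstdint>
#include <map>
#include <random>

using namespace std::chrono_literals;

std::map<uint64_t, std::chrono::microseconds> g_next_inv_per_key;
std::mt19937_64 g_rng{std::random_device{}()};

std::chrono::microseconds NextInvForKey(std::chrono::microseconds now,
                                        std::chrono::seconds average_interval,
                                        uint64_t network_key)
{
    // Lazily create a zero timer the first time a key is seen,
    // otherwise reuse the existing entry for that network key.
    auto [it, inserted] = g_next_inv_per_key.try_emplace(network_key, 0us);
    auto& timer{it->second};
    if (timer < now) {
        // Advance this key's timer independently with an exponential delay.
        std::exponential_distribution<double> exp_delay{1.0 / std::chrono::duration<double>(average_interval).count()};
        timer = now + std::chrono::duration_cast<std::chrono::microseconds>(
                          std::chrono::duration<double>{exp_delay(g_rng)});
    }
    return timer;
}
```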