To make sure we don't cause force closes because of commitment signature
mismatches, we add a test case that verifies the retransmitted HTLC
matches the original HTLC.
We update the lightning channel state machine in a few key areas. If the
noop TLV is set in the update_add_htlc custom records, we change the
entry type to noop. When settling an HTLC whose type is noop, we credit
the satoshi amount back to the sender.
We add a new update type to the payment descriptor to describe this new
type of HTLC. This type of HTLC will only be set if explicitly
signalled by external software.
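As a rough Go sketch of this flow (the TLV key, the noop entry-type
constant, and the helper names below are hypothetical stand-ins, not
lnd's actual internals):

```go
// Hypothetical sketch only: noopTLVType, noOpAddEntry, and
// paymentDescriptor are illustrative, not lnd's exact types.
const noopTLVType uint64 = 65537 // assumed custom-records key

type entryType uint8

const (
	addEntry entryType = iota
	settleEntry
	noOpAddEntry // new update type carried by noop HTLCs
)

type paymentDescriptor struct {
	EntryType entryType
	Amount    int64 // amount in millisatoshi
}

// markNoOp flips an add entry to a noop add when the noop TLV is
// present in the update_add_htlc custom records.
func markNoOp(customRecords map[uint64][]byte, desc *paymentDescriptor) {
	if _, ok := customRecords[noopTLVType]; ok {
		desc.EntryType = noOpAddEntry
	}
}

// settleEntryAmount credits the HTLC amount on settle: back to the
// sender's balance for noop adds, to the receiver's otherwise.
func settleEntryAmount(desc *paymentDescriptor, sender, receiver *int64) {
	if desc.EntryType == noOpAddEntry {
		*sender += desc.Amount
		return
	}
	*receiver += desc.Amount
}
```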
We check the context of the payment lifecycle at the beginning of
the `resumePayment` loop. This ensures we always have the latest
state of the payment before deciding on the next step in the
function `decideNextStep`.
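A simplified sketch of that loop, with illustrative stand-ins for the
control tower and lifecycle types (not lnd's exact API):

```go
// Illustrative stand-ins only; the real lifecycle, control tower, and
// decision logic in lnd differ in detail.
type paymentState struct {
	terminated bool
}

type controlTower interface {
	// FetchPayment returns the latest persisted state of a payment.
	FetchPayment(id [32]byte) (*paymentState, error)
}

type paymentLifecycle struct {
	control    controlTower
	identifier [32]byte
}

// decideNextStep inspects the freshly fetched state to choose whether
// the loop should continue or exit.
func (p *paymentLifecycle) decideNextStep(ps *paymentState) bool {
	return !ps.terminated
}

func (p *paymentLifecycle) resumePayment() error {
	for {
		// Reload the payment at the top of every iteration so the
		// decision below always sees the latest state.
		payment, err := p.control.FetchPayment(p.identifier)
		if err != nil {
			return err
		}

		if !p.decideNextStep(payment) {
			return nil
		}

		// ... launch or collect HTLC attempts here ...
	}
}
```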
This demonstrates that the "imported" account behaves differently for a
wallet that's watch-only vs. normal. The testTaprootImportTapscriptFullKeyFundPsbt
test case succeeds in a normal run but fails in a remote-signing setup.
For some reason, an imported tapscript address ends up in the "default"
account when remote-signing but in the "imported" account for a normal
wallet.
In this commit, we add a new atomic bool to permit only a single gossip
backlog goroutine per peer. If we get a new request that needs a backlog
while we're still processing another, then we'll drop that request.
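A minimal sketch of the guard, assuming an `atomic.Bool` on the
per-peer state (type and method names are illustrative):

```go
import "sync/atomic"

// backlogSender is an illustrative stand-in for the per-peer syncer
// state that owns the new guard.
type backlogSender struct {
	sendingBacklog atomic.Bool
}

// maybeSendBacklog starts at most one backlog goroutine at a time for
// this peer; requests that arrive while one is in flight are dropped.
func (b *backlogSender) maybeSendBacklog(send func()) {
	if !b.sendingBacklog.CompareAndSwap(false, true) {
		// A backlog is already being sent, drop the new request.
		return
	}

	go func() {
		defer b.sendingBacklog.Store(false)
		send()
	}()
}
```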
In this commit, we add detailed documentation to help node operators
understand and configure the gossip rate limiting system effectively.
The new guide addresses a critical knowledge gap that has led to
misconfigured nodes experiencing synchronization failures.
The documentation covers the token bucket algorithm used for rate
limiting, providing clear formulas and examples for calculating
appropriate values based on node size and network position. We include
specific recommendations ranging from 50 KB/s for small nodes to
1 MB/s for large routing nodes, with detailed calculations showing
how these values are derived.
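As a rough illustration of the token bucket model and the kind of
back-of-the-envelope calculation the guide walks through, using
golang.org/x/time/rate (the burst size and backlog figure below are
assumptions for the example, not lnd defaults):

```go
import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	const (
		rateBytes  = 50 * 1024  // 50 KB/s, the small-node recommendation
		burstBytes = 200 * 1024 // assumed bucket size, i.e. allowed burst
	)

	limiter := rate.NewLimiter(rate.Limit(rateBytes), burstBytes)

	// Sending a 400-byte channel_update consumes 400 tokens; once
	// the bucket is drained, further sends wait at rateBytes per second.
	_ = limiter.WaitN(context.Background(), 400)

	// Example calculation: replaying an assumed 100 MB gossip backlog
	// at 50 KB/s takes roughly 2048 s (~34 minutes), which is why the
	// guide warns against setting the rate too low.
	const backlogBytes = 100 * 1024 * 1024
	fmt.Println(time.Duration(backlogBytes/rateBytes) * time.Second)
}
```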
The guide explains the relationship between rate limiting and other
configuration options like num-restricted-slots and the new
filter-concurrency setting. We provide troubleshooting steps for
common issues like slow initial sync and peer disconnections, along
with debug commands and log patterns to identify problems.
Configuration examples are provided for conservative, balanced, and
performance-oriented setups, giving operators concrete starting points
they can adapt to their specific needs. The documentation emphasizes
the importance of not setting rate limits too low, warning that values
below 50 KB/s can cause synchronization to fail entirely.
In this commit, we complete the integration of the configurable gossip
filter concurrency by wiring the new FilterConcurrency configuration
through all layers of the application.
The changes connect the gossip.filter-concurrency configuration option
from the command-line interface through the server initialization code
to the gossiper and sync manager. This ensures that operators can
actually use the new configuration option to tune their node's
concurrent gossip filter processing capacity based on their specific
requirements and available resources.
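A simplified sketch of the plumbing; apart from `FilterConcurrency` and
the `gossip.filter-concurrency` option named above, the struct and
function names are approximations:

```go
// Approximate plumbing sketch, not lnd's exact configuration structs.
type GossipConfig struct {
	// Parsed from the gossip.filter-concurrency CLI/config option.
	FilterConcurrency int `long:"filter-concurrency"`
}

type SyncManagerCfg struct {
	FilterConcurrency int
}

// During server initialization, the CLI value is handed down to the
// gossiper's sync manager configuration.
func newSyncManagerCfg(cfg *GossipConfig) *SyncManagerCfg {
	return &SyncManagerCfg{
		FilterConcurrency: cfg.FilterConcurrency,
	}
}
```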
In this commit, we complete the integration of the asynchronous
timestamp range queue by modifying ProcessRemoteAnnouncement to use
the new queuing mechanism instead of calling ApplyGossipFilter
synchronously.
This change ensures that when a peer sends a GossipTimestampRange
message, it is queued for asynchronous processing rather than
blocking the gossiper's main message processing loop. The modification
prevents the peer's readHandler from blocking on potentially slow
gossip filter operations, maintaining connection stability during
periods of high synchronization activity.
If the queue is full when attempting to enqueue a message, we log
a warning but return success to prevent peer disconnection. This
design choice prioritizes connection stability over guaranteed
delivery of every gossip filter request, which is acceptable since
peers can always resend timestamp range messages if needed.
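A minimal sketch of that non-blocking enqueue, assuming a
`timestampRangeQueue` channel on the syncer (the method name and logger
are illustrative):

```go
// Sketch only: a full queue logs a warning but still reports success
// so the caller never disconnects the peer.
func (g *gossipSyncer) queueTimestampRange(
	msg *lnwire.GossipTimestampRange) error {

	select {
	case g.timestampRangeQueue <- msg:
		return nil

	default:
		log.Warnf("Timestamp range queue full, dropping gossip " +
			"filter request")

		// Returning nil keeps the peer connected; it can always
		// resend the timestamp range message later.
		return nil
	}
}
```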
In this commit, we introduce an asynchronous processing queue for
GossipTimestampRange messages in the GossipSyncer. This change addresses
a critical issue where the gossiper could block indefinitely when
processing timestamp range messages during periods of high load.
Previously, when a peer sent a GossipTimestampRange message, the
gossiper would synchronously call ApplyGossipFilter, which could block
on semaphore acquisition, database queries, and rate limiting. This
synchronous processing created a bottleneck where the entire peer
message processing pipeline would stall, potentially causing timeouts
and disconnections.
The new design adds a timestampRangeQueue channel with a capacity of 1
message and a dedicated goroutine for processing these messages
asynchronously. This follows the established pattern used for other
message types in the syncer. When the queue is full, we drop messages
and log a warning rather than blocking indefinitely, providing graceful
degradation under extreme load conditions.
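A rough sketch of the queue and its worker goroutine (field and method
names approximate the design described above, not lnd's exact code):

```go
// Illustrative sketch of the per-syncer queue and its handler.
type gossipSyncer struct {
	timestampRangeQueue chan *lnwire.GossipTimestampRange
	quit                chan struct{}
}

func newGossipSyncer() *gossipSyncer {
	return &gossipSyncer{
		// Capacity of one pending filter request per peer.
		timestampRangeQueue: make(chan *lnwire.GossipTimestampRange, 1),
		quit:                make(chan struct{}),
	}
}

// timestampRangeHandler drains the queue in its own goroutine so that
// ApplyGossipFilter can block on the semaphore, the database, or rate
// limiting without stalling the peer's readHandler.
func (g *gossipSyncer) timestampRangeHandler() {
	for {
		select {
		case msg := <-g.timestampRangeQueue:
			if err := g.ApplyGossipFilter(msg); err != nil {
				log.Warnf("Unable to apply gossip filter: %v",
					err)
			}

		case <-g.quit:
			return
		}
	}
}
```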
In this commit, we make the gossip filter semaphore capacity configurable
through a new FilterConcurrency field. This change allows node operators
to tune the number of concurrent gossip filter applications based on
their node's resources and network position.
The previous hard-coded limit of 5 concurrent filter applications could
become a bottleneck when multiple peers attempt to synchronize
simultaneously. By making this value configurable via the new
gossip.filter-concurrency option, operators can increase this limit
for better performance on well-resourced nodes or maintain conservative
values on resource-constrained systems.
We keep the default value at 5 to maintain backward compatibility and
avoid unexpected resource usage increases for existing deployments. The
sample configuration file is updated to document this new option.
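A minimal sketch of how such a configurable counting semaphore can be
built from a buffered channel (the default constant and helper name are
illustrative):

```go
// Illustrative sketch; lnd's internals may differ in naming.
const DefaultFilterConcurrency = 5

// newFilterSemaphore builds a counting semaphore whose capacity is the
// configured FilterConcurrency, falling back to the previous
// hard-coded value of 5 when the option is unset.
func newFilterSemaphore(filterConcurrency int) chan struct{} {
	if filterConcurrency <= 0 {
		filterConcurrency = DefaultFilterConcurrency
	}

	sem := make(chan struct{}, filterConcurrency)
	for i := 0; i < filterConcurrency; i++ {
		sem <- struct{}{}
	}
	return sem
}

// Each filter application takes a token before doing work and returns
// it when finished:
//
//	<-sem
//	defer func() { sem <- struct{}{} }()
```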