We may have already persisted node announcements that have multiple DNS
addresses since we may have received them before updating our code to
check for this. So here we just make sure not to send these on to our
peers.
Use the new `t.Context()` helper from Go 1.24 and fix linter warnings.
This change was produced by:
- running golangci-lint run --fix
- sed 's/context.Background/t.Context/' -i `git grep -l context.Background | grep test.go`
- manually fixing broken tests
- itest, lntest: use ht.Context() where ht or hn is available
- in HarnessNode.Stop() we keep using context.Background(), because it is
called from a cleanup handler in which t.Context() is canceled already.
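A minimal sketch of the pattern the sed command above applies; `doSomething` is a hypothetical stand-in for any context-taking helper in a test:

```go
package example

import (
	"context"
	"testing"
)

// doSomething is a hypothetical helper that stands in for any function
// which accepts a context.
func doSomething(ctx context.Context) error {
	return ctx.Err()
}

func TestUsesTestContext(t *testing.T) {
	// Previously: ctx := context.Background().
	// t.Context() (new in Go 1.24) is canceled just before cleanup
	// functions run, which is why HarnessNode.Stop() keeps using
	// context.Background().
	ctx := t.Context()

	if err := doSomething(ctx); err != nil {
		t.Fatalf("doSomething: %v", err)
	}
}
```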
When users set `gossip.ban-threshold` to 0, it's now treated as setting
the ban threshold to max uint64, which effectively disables banning. We
still want to record the peer's ban score in case we need it for future
debugging.
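A rough sketch of that mapping, assuming a helper named `effectiveBanThreshold` (the name is illustrative, not the actual lnd code):

```go
package example

import "math"

// effectiveBanThreshold maps a configured gossip.ban-threshold of 0 to the
// maximum uint64 value, which effectively disables banning while still
// letting us record peers' ban scores for debugging.
func effectiveBanThreshold(configured uint64) uint64 {
	if configured == 0 {
		return math.MaxUint64
	}
	return configured
}
```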
In this commit, we add a new atomic bool to only permit a single gossip
backlog goroutine per peer. If we get a new request that needs a backlog
while we're still processing the other, then we'll drop that request.
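A sketch of the guard under the assumption that a `CompareAndSwap` on an `atomic.Bool` gates the goroutine; the type and method names are illustrative:

```go
package example

import "sync/atomic"

// peerSyncer is a hypothetical stand-in for the per-peer gossip syncer state.
type peerSyncer struct {
	// backlogActive ensures at most one backlog goroutine runs at a time.
	backlogActive atomic.Bool
}

// maybeStartBacklog launches the backlog work only if no backlog goroutine
// is currently running; otherwise the request is dropped.
func (p *peerSyncer) maybeStartBacklog(run func()) bool {
	if !p.backlogActive.CompareAndSwap(false, true) {
		// A backlog is already in flight, drop this request.
		return false
	}

	go func() {
		defer p.backlogActive.Store(false)
		run()
	}()

	return true
}
```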
In this commit, we complete the integration of the asynchronous
timestamp range queue by modifying ProcessRemoteAnnouncement to use
the new queuing mechanism instead of calling ApplyGossipFilter
synchronously.
This change ensures that when a peer sends a GossipTimestampRange
message, it is queued for asynchronous processing rather than
blocking the gossiper's main message processing loop. The modification
prevents the peer's readHandler from blocking on potentially slow
gossip filter operations, maintaining connection stability during
periods of high synchronization activity.
If the queue is full when attempting to enqueue a message, we log
a warning but return success to prevent peer disconnection. This
design choice prioritizes connection stability over guaranteed
delivery of every gossip filter request, which is acceptable since
peers can always resend timestamp range messages if needed.
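A sketch of the non-blocking enqueue described above, with placeholder types standing in for the real message and syncer state:

```go
package example

import "log"

// gossipTimestampRange stands in for lnwire.GossipTimestampRange here.
type gossipTimestampRange struct{}

// syncerQueue is a hypothetical sketch of the syncer's queue state.
type syncerQueue struct {
	timestampRangeQueue chan *gossipTimestampRange
}

// enqueueTimestampRange performs a non-blocking send. When the queue is
// full, the message is dropped with a warning and nil is still returned so
// that the peer's readHandler never blocks and the peer is not disconnected.
func (s *syncerQueue) enqueueTimestampRange(msg *gossipTimestampRange) error {
	select {
	case s.timestampRangeQueue <- msg:
	default:
		log.Println("timestamp range queue full, dropping message")
	}
	return nil
}
```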
In this commit, we introduce an asynchronous processing queue for
GossipTimestampRange messages in the GossipSyncer. This change addresses
a critical issue where the gossiper could block indefinitely when
processing timestamp range messages during periods of high load.
Previously, when a peer sent a GossipTimestampRange message, the
gossiper would synchronously call ApplyGossipFilter, which could block
on semaphore acquisition, database queries, and rate limiting. This
synchronous processing created a bottleneck where the entire peer
message processing pipeline would stall, potentially causing timeouts
and disconnections.
The new design adds a timestampRangeQueue channel with a capacity of 1
message and a dedicated goroutine for processing these messages
asynchronously. This follows the established pattern used for other
message types in the syncer. When the queue is full, we drop messages
and log a warning rather than blocking indefinitely, providing graceful
degradation under extreme load conditions.
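A rough sketch of the capacity-1 queue plus dedicated drain goroutine; the field and function names here are assumptions, not the exact syncer code:

```go
package example

// timestampRangeMsg stands in for lnwire.GossipTimestampRange here.
type timestampRangeMsg struct{}

// rangeQueue sketches the new syncer state: a capacity-1 channel plus a
// quit channel.
type rangeQueue struct {
	timestampRangeQueue chan *timestampRangeMsg
	quit                chan struct{}
}

func newRangeQueue() *rangeQueue {
	return &rangeQueue{
		timestampRangeQueue: make(chan *timestampRangeMsg, 1),
		quit:                make(chan struct{}),
	}
}

// drain runs in its own goroutine and applies the gossip filter for each
// queued message, so slow filter application never stalls the gossiper's
// main message processing loop.
func (q *rangeQueue) drain(applyFilter func(*timestampRangeMsg)) {
	for {
		select {
		case msg := <-q.timestampRangeQueue:
			applyFilter(msg)

		case <-q.quit:
			return
		}
	}
}
```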
In this commit, we make the gossip filter semaphore capacity configurable
through a new FilterConcurrency field. This change allows node operators
to tune the number of concurrent gossip filter applications based on
their node's resources and network position.
The previous hard-coded limit of 5 concurrent filter applications could
become a bottleneck when multiple peers attempt to synchronize
simultaneously. By making this value configurable via the new
gossip.filter-concurrency option, operators can increase this limit
for better performance on well-resourced nodes or maintain conservative
values on resource-constrained systems.
We keep the default value at 5 to maintain backward compatibility and
avoid unexpected resource usage increases for existing deployments. The
sample configuration file is updated to document this new option.
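A sketch of how a configurable semaphore capacity can be wired up, assuming a buffered channel is used as the counting semaphore (names are illustrative, not the actual lnd code):

```go
package example

// defaultFilterConcurrency preserves the previous hard-coded limit of 5.
const defaultFilterConcurrency = 5

// filterSemaphore sketches a counting semaphore whose capacity comes from
// the new gossip.filter-concurrency option.
type filterSemaphore chan struct{}

func newFilterSemaphore(concurrency int) filterSemaphore {
	if concurrency <= 0 {
		concurrency = defaultFilterConcurrency
	}
	return make(filterSemaphore, concurrency)
}

// acquire blocks until a slot is free; release frees a slot.
func (s filterSemaphore) acquire() { s <- struct{}{} }
func (s filterSemaphore) release() { <-s }
```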
In this commit, we move the serialisation details of a channel's
features to the DB layer and change the `models` field to instead use a
more useful `*lnwire.FeatureVector` type.
This makes the features easier to work with and moves the serialisation
to where it is actually used.
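The shape of the change, sketched with an illustrative struct (the real struct and field names differ):

```go
package example

import "github.com/lightningnetwork/lnd/lnwire"

// channelInfoSketch illustrates the direction of the change: instead of
// carrying raw serialized feature bytes, the models-level struct holds a
// typed *lnwire.FeatureVector, with (de)serialisation handled at the DB
// layer.
type channelInfoSketch struct {
	// Before: Features []byte (serialized on the model itself).
	Features *lnwire.FeatureVector
}
```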
Create an abstract hashAccumulator interface for the Channel graph
bootstrapper so that we can later introduce a deterministic accumulator
to be used during testing.
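A hedged sketch of such an interface; the actual method set in lnd may differ:

```go
package example

// hashAccumulator sketches the abstraction: production code can mix in
// randomness, while tests can plug in a deterministic implementation.
type hashAccumulator interface {
	// rotate refreshes any internal randomness between sampling rounds.
	rotate()

	// hashNodeKey maps a node's serialized public key to the hash used
	// when sampling bootstrap candidates.
	hashNodeKey(pubKey [33]byte) [32]byte
}
```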
If a method returns an error, we should assume all other return values are
nil unless the documentation explicitly says otherwise. So here, we fix a
log line that dereferences an object which will be nil due to an error
being returned.
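A small example of the bug class and its fix, using hypothetical names:

```go
package example

import (
	"errors"
	"log"
)

// policy and fetchPolicy are hypothetical; on error the returned pointer
// is nil.
type policy struct{ ID uint64 }

var errNotFound = errors.New("policy not found")

func fetchPolicy() (*policy, error) {
	return nil, errNotFound
}

func logPolicy() {
	p, err := fetchPolicy()
	if err != nil {
		// Wrong: p is nil on this path, so p.ID would panic:
		//   log.Printf("failed to fetch policy %d: %v", p.ID, err)
		// Right: only reference values that are valid when err != nil.
		log.Printf("failed to fetch policy: %v", err)
		return
	}

	log.Printf("fetched policy %d", p.ID)
}
```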
Replace all usages of the "github.com/go-errors/errors" and
"github.com/pkg/errors" packages with the standard lib's "errors"
package. This ensures that error wrapping and `errors.Is` checks will
work as expected.
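A minimal example of the standard-library pattern this switch enables:

```go
package example

import (
	"errors"
	"fmt"
)

var errEdgeNotFound = errors.New("edge not found")

// Wrapping with fmt.Errorf and the %w verb keeps the error chain intact so
// that errors.Is works as expected, which was the motivation for dropping
// the third-party error packages.
func lookupEdge() error {
	return fmt.Errorf("lookup failed: %w", errEdgeNotFound)
}

func isNotFound(err error) bool {
	return errors.Is(err, errEdgeNotFound)
}
```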
Remove the previously added TODOs which would extract InboundFee info
from the ExtraOpaqueData of a ChannelUpdate at the time of
ChannelEdgePolicy construction. These can now be replaced by using the
newly added InboundFee record on the ChannelUpdate message.
In this commit, we make sure to set the new field wherever appropriate.
This will be any place where the ChannelEdgePolicy is constructed other
than its disk deserialisation.
We add logging so we can draw conclusions about how long the processing of
gossip messages takes, and potentially see whether the syncer's buffer
channel size is a bottleneck in processing.
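A minimal sketch of the kind of duration logging meant here; the helper name is illustrative:

```go
package example

import (
	"log"
	"time"
)

// timed wraps a processing step with duration logging so we can see how
// long gossip message handling takes.
func timed(name string, process func()) {
	start := time.Now()
	process()
	log.Printf("%s took %v", name, time.Since(start))
}
```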
The `GraphSource` interface in the `autopilot` package is directly
implemented by the `graphdb.KVStore` and so we will eventually thread
contexts through to this interface. So in this commit, we start updating
the autopilot system to thread contexts through in preparation for
passing the context through to any calls made to the GraphSource.
Two context.TODOs are added here which will be addressed in follow up
commits.
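A sketch of the direction of the change, with an illustrative method rather than the actual interface:

```go
package example

import "context"

// graphSourceSketch shows methods gaining a context parameter so that a
// real context can eventually be threaded down to the GraphSource.
type graphSourceSketch interface {
	ForEachNode(ctx context.Context, cb func(node [33]byte) error) error
}

// caller shows a call site that still uses context.TODO(); these TODOs are
// replaced with real contexts in follow-up commits.
func caller(g graphSourceSketch) error {
	return g.ForEachNode(context.TODO(), func(node [33]byte) error {
		return nil
	})
}
```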
For any method that takes a context and contains a select that listens on
the system's quit channel, we should also listen on the ctx, since we
should not need to worry about whether the context is derived internally or
externally.
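A minimal example of the pattern, with hypothetical channel and error names:

```go
package example

import (
	"context"
	"errors"
)

var errShuttingDown = errors.New("subsystem shutting down")

// waitForResult listens on the subsystem's quit channel and also on the
// passed-in context, whether that context was derived internally or
// supplied by an external caller.
func waitForResult(ctx context.Context, quit <-chan struct{},
	result <-chan int) (int, error) {

	select {
	case r := <-result:
		return r, nil

	case <-quit:
		return 0, errShuttingDown

	case <-ctx.Done():
		return 0, ctx.Err()
	}
}
```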