In preparation for adding a NodeAnnouncement2 struct along with a
NodeAnnouncement interface, this commit renames the existing
NodeAnnouncement struct to NodeAnnouncement1.
In this commit, we update the mockChannelGraphTimeSeries to implement
the new iterator-based UpdatesInHorizon interface. The mock maintains
its existing behavior of receiving messages through a channel and
returning them to the caller, but now wraps this in an iterator
function.
The implementation creates an iterator that pulls the entire message
slice from the mock's response channel, then yields each message
individually. This preserves the test semantics while conforming to the
new interface, ensuring all existing tests continue to pass without
modification.
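A sketch of the wrapper, assuming the mock's existing channel-based
fields (the type and field names here are illustrative, not the
verbatim test code):

```go
func (m *mockChannelGraphTimeSeries) UpdatesInHorizon(chain chainhash.Hash,
	startTime, endTime time.Time) iter.Seq2[lnwire.Message, error] {

	return func(yield func(lnwire.Message, error) bool) {
		// As before, hand the query to the test harness and block
		// on the canned response slice...
		m.horizonReq <- horizonQuery{chain, startTime, endTime}
		msgs := <-m.horizonResp

		// ...then yield each message individually.
		for _, msg := range msgs {
			if !yield(msg, nil) {
				return
			}
		}
	}
}
```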
In this commit, we update ApplyGossipFilter to leverage the new
iterator-based UpdatesInHorizon method. The key innovation here is using
iter.Pull2 to create a pull-based iterator that allows us to check if
any updates exist before launching the background goroutine.
This approach provides several benefits over the previous implementation.
First, we avoid the overhead of launching a goroutine when there are no
updates to send, which was previously unavoidable without materializing
the entire result set. Second, we maintain lazy loading throughout the
sending process, only pulling messages from the database as they're
needed for transmission.
The implementation uses Pull2 to peek at the first message, determining
whether to proceed with sending updates. If updates exist, ownership of
the iterator is transferred to the goroutine, which continues pulling
and sending messages until exhausted. This design ensures memory usage
remains bounded regardless of the number of updates being synchronized.
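Sketched out, the pattern looks roughly like this (send is a
hypothetical stand-in for the syncer's real transmission path):

```go
next, stop := iter.Pull2(updates)

// Peek at the first message: if there's nothing to send, we never
// launch the goroutine at all.
firstMsg, err, ok := next()
if err != nil || !ok {
	stop()
	return err
}

// Updates exist, so ownership of next/stop moves to the goroutine.
go func() {
	defer stop()

	for msg := firstMsg; ; {
		send(msg)

		var (
			err error
			ok  bool
		)
		msg, err, ok = next()
		if err != nil || !ok {
			return
		}
	}
}()
```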
In this commit, we complete the iterator conversion work started in PR
10128 by threading the iterator pattern through to the higher-level
UpdatesInHorizon method. This change converts the method from returning
a fully materialized slice of messages to returning a lazy iterator that
yields messages on demand.
The new signature uses iter.Seq2 to allow error propagation during
iteration, eliminating the need for a separate error return value. This
approach enables callers to handle errors as they occur during iteration
rather than failing upfront.
The implementation now lazily processes channel and node updates,
yielding them as they're generated rather than accumulating them in
memory. This maintains the same ordering guarantees (channels before
nodes) while significantly reducing memory pressure when dealing with
large update sets during gossip synchronization.
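A hedged sketch of the resulting shape (the inner helpers shown here
are assumptions about structure, not the exact lnd code):

```go
func (c *ChannelGraph) UpdatesInHorizon(startTime,
	endTime time.Time) iter.Seq2[lnwire.Message, error] {

	return func(yield func(lnwire.Message, error) bool) {
		// Channel updates come first, preserving the existing
		// ordering guarantee.
		for msg, err := range c.chanUpdatesInHorizon(startTime, endTime) {
			if !yield(msg, err) || err != nil {
				return
			}
		}

		// Node announcements follow, yielded as they're read
		// rather than accumulated in memory.
		for msg, err := range c.nodeUpdatesInHorizon(startTime, endTime) {
			if !yield(msg, err) || err != nil {
				return
			}
		}
	}
}
```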
In this commit, we update all callers of NodeUpdatesInHorizon and
ChanUpdatesInHorizon to use the new iterator-based APIs. The changes
use fn.Collect to maintain existing behavior while benefiting from the
memory efficiency of iterators when possible.
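fn.Collect simply drains a sequence back into a slice; a stand-in with
the rough shape (the real fn signature may differ):

```go
// Collect materializes a lazy sequence for call sites that still
// need the full result set in memory.
func Collect[T any](seq iter.Seq[T]) []T {
	var vals []T
	for v := range seq {
		vals = append(vals, v)
	}
	return vals
}
```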
In this commit, we refactor the ChanUpdatesInHorizon method to return
an iterator instead of a slice. This change significantly reduces
memory usage when dealing with large result sets by allowing callers to
process items incrementally rather than loading everything into memory
at once.
In this commit, we refactor the NodeUpdatesInHorizon method to return
an iterator instead of a slice. This change significantly reduces
memory usage when dealing with large result sets by allowing callers to
process items incrementally rather than loading everything into memory
at once.
The new implementation uses Go 1.23's iter.Seq type to provide a
standard iterator interface. The method now supports configurable batch
sizes through functional options, allowing fine-tuned control over
memory usage and performance characteristics.
Rather than reading all the entries from disk into memory (before this
commit, we did consult the cache for most entries, skipping those disk
hits), we now expose a chunked iterator.
We also make filtering for public nodes a first-class part of the
query, which saves many newly created DB transactions later.
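A sketch of the option plumbing described above (the option and field
names are assumptions):

```go
type horizonCfg struct {
	batchSize  int
	publicOnly bool
}

// HorizonOption tunes how updates are fetched.
type HorizonOption func(*horizonCfg)

// WithBatchSize controls how many entries each chunk reads per
// database transaction, trading memory for round trips.
func WithBatchSize(n int) HorizonOption {
	return func(c *horizonCfg) { c.batchSize = n }
}

// WithPublicOnly pushes the public-node filter into the query itself,
// avoiding a follow-up transaction per node.
func WithPublicOnly() HorizonOption {
	return func(c *horizonCfg) { c.publicOnly = true }
}
```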
We may have already persisted node announcements that contain multiple
DNS addresses, since we could have received them before updating our
code to check for this. So here we just make sure not to relay these
to our peers.
Use Go 1.24's new t.Context() feature and fix linter warnings.
This change was produced by:
- running golangci-lint run --fix
- sed 's/context.Background/t.Context/' -i `git grep -l context.Background | grep test.go`
- manually fixing broken tests
- itest, lntest: use ht.Context() where ht or hn is available
- in HarnessNode.Stop() we keep using context.Background(), because it is
called from a cleanup handler in which t.Context() is canceled already.
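For illustration, the pattern the sed pass introduces (doRPC is a
hypothetical stand-in for any context-aware call under test):

```go
func TestFetchInfo(t *testing.T) {
	// t.Context() is canceled automatically when the test finishes,
	// replacing context.Background() plus manual cancellation.
	ctx := t.Context()

	if err := doRPC(ctx); err != nil {
		t.Fatalf("rpc failed: %v", err)
	}
}
```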
When users set `gossip.ban-threshold` to 0, it's now treated as setting
the ban score to max uint64, which effectively disables the banning. We
still want to record the peer's ban score in case we need it for future
debugging.
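In sketch form (field names are illustrative):

```go
banThreshold := cfg.Gossip.BanThreshold
if banThreshold == 0 {
	// Zero means "never ban": the recorded score can never reach
	// the threshold, but we keep tracking it for debugging.
	banThreshold = uint64(math.MaxUint64)
}
```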
In this commit, we add a new atomic bool to only permit a single
gossip backlog goroutine per peer. If we get a new request that needs
a backlog while we're still processing a prior one, then we'll drop
that request.
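A sketch of the single-flight gate (the surrounding names are
illustrative):

```go
type gossipSyncer struct {
	// backlogActive guards the lone in-flight backlog goroutine.
	backlogActive atomic.Bool
}

func (g *gossipSyncer) maybeSendBacklog(req backlogReq) {
	// CompareAndSwap succeeds for exactly one caller at a time;
	// any concurrent request is dropped rather than queued.
	if !g.backlogActive.CompareAndSwap(false, true) {
		log.Warnf("backlog goroutine already active, dropping request")
		return
	}

	go func() {
		defer g.backlogActive.Store(false)
		g.sendBacklog(req)
	}()
}
```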
In this commit, we complete the integration of the asynchronous
timestamp range queue by modifying ProcessRemoteAnnouncement to use
the new queuing mechanism instead of calling ApplyGossipFilter
synchronously.
This change ensures that when a peer sends a GossipTimestampRange
message, it is queued for asynchronous processing rather than
blocking the gossiper's main message processing loop. The modification
prevents the peer's readHandler from blocking on potentially slow
gossip filter operations, maintaining connection stability during
periods of high synchronization activity.
If the queue is full when attempting to enqueue a message, we log
a warning but return success to prevent peer disconnection. This
design choice prioritizes connection stability over guaranteed
delivery of every gossip filter request, which is acceptable since
peers can always resend timestamp range messages if needed.
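The enqueue path, in sketch form:

```go
select {
case g.timestampRangeQueue <- msg:
default:
	// Queue full: drop and warn, but report success so the peer
	// isn't disconnected; it can always resend the filter.
	log.Warnf("timestamp range queue full, dropping message")
}
return nil
```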
In this commit, we introduce an asynchronous processing queue for
GossipTimestampRange messages in the GossipSyncer. This change addresses
a critical issue where the gossiper could block indefinitely when
processing timestamp range messages during periods of high load.
Previously, when a peer sent a GossipTimestampRange message, the
gossiper would synchronously call ApplyGossipFilter, which could block
on semaphore acquisition, database queries, and rate limiting. This
synchronous processing created a bottleneck where the entire peer
message processing pipeline would stall, potentially causing timeouts
and disconnections.
The new design adds a timestampRangeQueue channel with a capacity of 1
message and a dedicated goroutine for processing these messages
asynchronously. This follows the established pattern used for other
message types in the syncer. When the queue is full, we drop messages
and log a warning rather than blocking indefinitely, providing graceful
degradation under extreme load conditions.
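The queue and its dedicated consumer, roughly (simplified; the real
method signatures may carry a context):

```go
g.timestampRangeQueue = make(chan *lnwire.GossipTimestampRange, 1)

go func() {
	defer g.wg.Done()

	for {
		select {
		case msg := <-g.timestampRangeQueue:
			// Slow semaphore and database work now happens
			// here instead of in the peer's readHandler.
			if err := g.ApplyGossipFilter(msg); err != nil {
				log.Warnf("unable to apply gossip filter: %v", err)
			}

		case <-g.quit:
			return
		}
	}
}()
```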
In this commit, we make the gossip filter semaphore capacity configurable
through a new FilterConcurrency field. This change allows node operators
to tune the number of concurrent gossip filter applications based on
their node's resources and network position.
The previous hard-coded limit of 5 concurrent filter applications could
become a bottleneck when multiple peers attempt to synchronize
simultaneously. By making this value configurable via the new
gossip.filter-concurrency option, operators can increase this limit
for better performance on well-resourced nodes or maintain conservative
values on resource-constrained systems.
We keep the default value at 5 to maintain backward compatibility and
avoid unexpected resource usage increases for existing deployments. The
sample configuration file is updated to document this new option.
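Under the hood this is the usual buffered-channel semaphore, now sized
by the option (a sketch; e.g. gossip.filter-concurrency=10 in the
config file):

```go
// Capacity comes from the new config field instead of a constant.
filterSema := make(chan struct{}, cfg.FilterConcurrency)

// Acquire a slot before applying a filter, release when done.
filterSema <- struct{}{}
defer func() { <-filterSema }()
```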
In this commit, we move the serialisation details of a channel's
features to the DB layer and change the `models` field to instead use a
more useful `*lnwire.FeatureVector` type.
This makes the features easier to work with and moves the serialisation
to where it is actually used.
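Rough before/after of the field (its placement within the struct is
illustrative):

```go
type ChannelEdgeInfo struct {
	// Before: raw bytes that every reader decoded by hand.
	//   Features []byte

	// After: a decoded vector; (de)serialisation now lives only in
	// the DB layer's read/write paths.
	Features *lnwire.FeatureVector
}
```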
Create an abstract hashAccumulator interface for the channel graph
bootstrapper so that we can later introduce a deterministic
accumulator to be used during testing.
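A minimal sketch of the abstraction (the method set shown is an
assumption; the point is that tests can swap in a deterministic
implementation):

```go
type hashAccumulator interface {
	// rotate re-seeds the accumulator's internal state.
	rotate()

	// hash maps a node key through the current state, fixing the
	// sampling order used during bootstrap.
	hash(data []byte) [32]byte
}
```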
If a method returns an error, we should assume all other return values
are nil unless the documentation explicitly says otherwise. So here,
we fix a log line that dereferenced an object guaranteed to be nil
because an error was returned.
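The bug pattern in miniature (FetchNode is a stand-in):

```go
node, err := graph.FetchNode(pub)
if err != nil {
	// Wrong: node is nil on this path, so node.Alias panics.
	//   log.Errorf("unable to fetch node %v: %v", node.Alias, err)

	// Right: reference only values known to be valid here.
	log.Errorf("unable to fetch node %x: %v", pub, err)
	return err
}
```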
Replace all usages of the "github.com/go-errors/errors" and
"github.com/pkg/errors" packages with the standard lib's "errors"
package. This ensures that error wrapping and `errors.Is` checks will
work as expected.
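With the standard library, wrapping with %w preserves the chain that
errors.Is inspects (ErrEdgeNotFound is a stand-in sentinel):

```go
if err := updateEdge(tx, edge); err != nil {
	return fmt.Errorf("unable to update edge: %w", err)
}

// Callers can then match through any number of wraps:
if errors.Is(err, ErrEdgeNotFound) {
	// Handle the missing edge gracefully.
}
```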