We may have already persisted node announcements that have multiple DNS
addresses since we may have received them before updating our code to
check for this. So here we just make sure not to send these on to our
peers.
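A minimal sketch of the relay-time check, using a stand-in address type rather
than the actual wire-level DNS address type:

```go
package sketch

import "net"

// dnsAddress is a stand-in for the wire-level DNS address type; the
// real type lives in lnwire.
type dnsAddress struct {
	Hostname string
	Port     uint16
}

func (d *dnsAddress) Network() string { return "dns" }
func (d *dnsAddress) String() string  { return d.Hostname }

// shouldRelayNodeAnn returns false if the announcement's address list
// carries more than one DNS address, since such announcements may have
// been persisted before the stricter check existed.
func shouldRelayNodeAnn(addrs []net.Addr) bool {
	var dnsCount int
	for _, addr := range addrs {
		if _, ok := addr.(*dnsAddress); ok {
			dnsCount++
		}
	}

	return dnsCount <= 1
}
```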
In this commit, we complete the integration of the asynchronous
timestamp range queue by modifying ProcessRemoteAnnouncement to use
the new queuing mechanism instead of calling ApplyGossipFilter
synchronously.
This change ensures that when a peer sends a GossipTimestampRange
message, it is queued for asynchronous processing rather than
blocking the gossiper's main message processing loop. The modification
prevents the peer's readHandler from blocking on potentially slow
gossip filter operations, maintaining connection stability during
periods of high synchronization activity.
If the queue is full when attempting to enqueue a message, we log
a warning but return success to prevent peer disconnection. This
design choice prioritizes connection stability over guaranteed
delivery of every gossip filter request, which is acceptable since
peers can always resend timestamp range messages if needed.
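A minimal sketch of the non-blocking enqueue described above, with simplified
placeholder types rather than the gossiper's actual ones:

```go
package sketch

import "log"

// gossipTimestampRange stands in for lnwire.GossipTimestampRange.
type gossipTimestampRange struct {
	FirstTimestamp uint32
	TimestampRange uint32
}

type gossipSyncer struct {
	// filterQueue buffers gossip filter requests so the peer's
	// readHandler never blocks on ApplyGossipFilter.
	filterQueue chan *gossipTimestampRange
}

// enqueueGossipFilter attempts a non-blocking enqueue. If the queue is
// full we log a warning but still return nil, so the caller does not
// disconnect the peer; the peer can resend the filter later.
func (s *gossipSyncer) enqueueGossipFilter(msg *gossipTimestampRange) error {
	select {
	case s.filterQueue <- msg:
		return nil

	default:
		log.Printf("gossip filter queue full, dropping request")
		return nil
	}
}
```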
In this commit, we move the serialisation details of a channel's
features to the DB layer and change the features field in the `models`
package to instead use the more useful `*lnwire.FeatureVector` type.
This makes the features easier to work with and moves the serialisation
to where it is actually used.
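A rough sketch of what the encode/decode pair at the storage boundary could
look like; the exact on-disk encoding may differ:

```go
package sketch

import (
	"bytes"

	"github.com/lightningnetwork/lnd/lnwire"
)

// encodeFeatures serialises the in-memory feature vector at the storage
// boundary, using the wire-level encoding of the raw vector.
func encodeFeatures(features *lnwire.FeatureVector) ([]byte, error) {
	var b bytes.Buffer
	if err := features.Encode(&b); err != nil {
		return nil, err
	}

	return b.Bytes(), nil
}

// decodeFeatures rebuilds the *lnwire.FeatureVector from persisted
// bytes.
func decodeFeatures(data []byte) (*lnwire.FeatureVector, error) {
	raw := lnwire.NewRawFeatureVector()
	if err := raw.Decode(bytes.NewReader(data)); err != nil {
		return nil, err
	}

	return lnwire.NewFeatureVector(raw, lnwire.Features), nil
}
```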
If a method returns an error, we should assume all its other return
values to be nil unless the documentation explicitly says otherwise. So
here, we fix a log line that dereferences an object which will be nil
due to the error being returned.
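A small illustration of the bug class, using a hypothetical fetchChannelInfo
helper:

```go
package sketch

import (
	"errors"
	"log"
)

type channelInfo struct {
	ChannelID uint64
}

// fetchChannelInfo is a hypothetical lookup used only to illustrate the
// pattern being fixed.
func fetchChannelInfo(chanID uint64) (*channelInfo, error) {
	return nil, errors.New("channel not found")
}

func logChannel(chanID uint64) error {
	info, err := fetchChannelInfo(chanID)
	if err != nil {
		// Wrong: info must be assumed nil here, so referencing
		// info.ChannelID in the log line would panic.
		//
		// Right: log only values we know are valid.
		log.Printf("unable to fetch channel %d: %v", chanID, err)
		return err
	}

	log.Printf("fetched channel %d", info.ChannelID)
	return nil
}
```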
Remove the previously added TODOs which would extract InboundFee info
from the ExtraOpaqueData of a ChannelUpdate at the time of
ChannelEdgePolicy construction. These can now be replaced by using the
newly added InboundFee record on the ChannelUpdate message.
In this commit, we make sure to set the new field wherever appropriate.
This will be any place where the ChannelEdgePolicy is constructed other
than its disk deserialisation.
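A minimal sketch, with assumed field names, of constructing the policy from
the typed record instead of re-parsing ExtraOpaqueData:

```go
package sketch

// fee, channelUpdate and channelEdgePolicy are simplified placeholders;
// the real types live in lnwire and the graph models package.
type fee struct {
	BaseFee int32
	FeeRate int32
}

type channelUpdate struct {
	// InboundFee is the typed record carried by the update.
	InboundFee *fee
}

type channelEdgePolicy struct {
	InboundFee *fee
}

// newEdgePolicy copies the inbound fee straight from the update instead
// of extracting it from the update's ExtraOpaqueData.
func newEdgePolicy(upd *channelUpdate) *channelEdgePolicy {
	return &channelEdgePolicy{
		InboundFee: upd.InboundFee,
	}
}
```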
We add logging so we can draw conclusions about how long the processing
of gossip messages takes, and potentially see whether the syncer's
buffer channel size is a bottleneck in processing.
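A minimal sketch of the kind of duration logging meant here:

```go
package sketch

import (
	"log"
	"time"
)

// processGossipMsg stands in for the gossiper's actual processing path.
func processGossipMsg(msg interface{}) {}

// handleMsg wraps processing with a duration log so we can see how long
// each gossip message takes and whether the syncer's buffer channel is
// a bottleneck.
func handleMsg(msg interface{}) {
	start := time.Now()
	processGossipMsg(msg)
	log.Printf("processed %T in %v", msg, time.Since(start))
}
```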
With this, we move a context.TODO() out of the gossiper and into the
brontide package - this will be removed in a future PR which focuses on
threading contexts through that code.
The `GossipSyncer` makes various calls to the `ChannelGraphTimeSeries`
interface which threads through to the graph DB. So in preparation for
threading context through to all the methods on that interface, we
update the GossipSyncer accordingly by passing contexts through.
Two `context.TODO()`s are added in this commit. They will be removed in
the upcoming commits.
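A simplified sketch of what threading the context through such an interface
looks like; the method shown is illustrative rather than the full interface:

```go
package sketch

import "context"

// shortChannelID is a placeholder for lnwire.ShortChannelID.
type shortChannelID uint64

// channelGraphTimeSeries sketches how the interface changes once a
// context is threaded through; only one illustrative method is shown.
type channelGraphTimeSeries interface {
	HighestChanID(ctx context.Context) (*shortChannelID, error)
}

// gossipSyncer passes its context straight through to the time series,
// which will eventually hand it to the graph DB.
type gossipSyncer struct {
	channelSeries channelGraphTimeSeries
}

func (s *gossipSyncer) synchronise(ctx context.Context) error {
	_, err := s.channelSeries.HighestChanID(ctx)
	return err
}
```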
Pass the parent LND context to the gossiper, let it derive a child
context that gets cancelled on Stop. Pass the context through to any
methods that will eventually thread it through to any graph DB calls.
One `context.TODO()` is added here - this will be removed in the next
commit.
NOTE: for any internal methods that the context gets passed to, if those
methods already listen on the gossiper's `quit` channel, then they don't
need to also listen on the passed context's Done() channel because the
quit channel is closed at the same time that the context is cancelled.
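A simplified sketch of the Start/Stop wiring described above:

```go
package sketch

import (
	"context"
	"sync"
)

// gossiper sketches deriving a child context from the parent LND
// context on Start and cancelling it on Stop, alongside the existing
// quit channel.
type gossiper struct {
	cancel context.CancelFunc
	quit   chan struct{}
	wg     sync.WaitGroup
}

func (g *gossiper) Start(ctx context.Context) error {
	// Derive a child context that is cancelled when Stop is called.
	ctx, cancel := context.WithCancel(ctx)
	g.cancel = cancel
	g.quit = make(chan struct{})

	g.wg.Add(1)
	go g.networkHandler(ctx)

	return nil
}

func (g *gossiper) Stop() {
	// The context is cancelled at the same time the quit channel is
	// closed, so internal methods that already select on quit do not
	// also need to select on ctx.Done().
	g.cancel()
	close(g.quit)
	g.wg.Wait()
}

func (g *gossiper) networkHandler(ctx context.Context) {
	defer g.wg.Done()

	select {
	case <-ctx.Done():
	case <-g.quit:
	}
}
```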
This commit is a pure refactor. We move the transaction validation
(existence, spentness, correctness) from the `graph.Builder` to the
gossiper since this is where all protocol level checks should happen.
All tests involved are also updated/moved.
As we move the funding transaction validation logic out of the builder
and into the gossiper, we want to ensure that the behaviour stays
consistent with what we have today. So we should acquire this lock
before performing any expensive checks such as building the funding tx
or validating it.
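A simplified sketch of the ordering being preserved; the gossiper actually
uses a per-channel multimutex, a single mutex is used here for brevity:

```go
package sketch

import "sync"

// chanValidator sketches the ordering: the channel lock is acquired
// before any expensive work such as building or validating the funding
// transaction.
type chanValidator struct {
	chanMtx sync.Mutex
}

func (v *chanValidator) validateFundingTx(scid uint64) error {
	v.chanMtx.Lock()
	defer v.chanMtx.Unlock()

	// Only now perform the expensive steps: fetch the block, build
	// the expected funding script and check existence/spentness.
	return nil
}
```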
In preparation for an upcoming commit which will move all channel
funding tx validation to the gossiper, we first move the helper method
which helps build the expected funding transaction script based on the
fields in the channel announcement. We will still want this script later
on in the builder for updating the ChainView though, and so we pass this
field along with the ChannelEdgeInfo. With this change, we can remove
the TapscriptRoot field from the ChannelEdgeInfo since the only reason
it was there was so that the builder could reconstruct the full funding
script.
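A sketch along the lines of the helper being moved, showing only the P2WSH
case built via the `input` package helpers; the taproot variant (which needs
the tapscript root) is omitted:

```go
package sketch

import "github.com/lightningnetwork/lnd/input"

// fundingScriptForAnn builds the expected P2WSH funding script from the
// two bitcoin keys carried in a channel announcement.
func fundingScriptForAnn(bitcoinKey1, bitcoinKey2 [33]byte) ([]byte, error) {
	witnessScript, err := input.GenMultiSigScript(
		bitcoinKey1[:], bitcoinKey2[:],
	)
	if err != nil {
		return nil, err
	}

	return input.WitnessScriptHash(witnessScript)
}
```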
In preparation for moving funding transaction validation from the
Builder to the Gossiper in a later commit, we first convert these graph
Error Codes to normal error variables. This will help make the later
commit a pure code move.
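A small example of the conversion, with an illustrative variable name:

```go
package sketch

import (
	"errors"
	"fmt"
)

// ErrNoFundingTransaction illustrates the conversion: a plain error
// variable that callers can match with errors.Is, instead of a graph
// ErrorCode.
var ErrNoFundingTransaction = errors.New(
	"unable to find the funding transaction",
)

func validateFundingTx(found bool) error {
	if !found {
		return fmt.Errorf("channel validation failed: %w",
			ErrNoFundingTransaction)
	}

	return nil
}
```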
The `netann` package is a more appropriate place for this code to live.
Also, once the funding transaction code is moved out of the
`graph.Builder`, then no `lnwire` validation will occur in the `graph`
package.
This commit does two things:
- removes the concept of allow / deny. Having this in place was a
minor optimization and removing it makes the solution simpler.
- changes the job dependency tracking to track sets of abstract
parent jobs rather than individual parent jobs.
As a note, the purpose of the ValidationBarrier is that it allows us
to launch gossip validation jobs in goroutines while still ensuring
that the validation order of these goroutines is adhered to when it
comes to validating ChannelAnnouncement _before_ ChannelUpdate and
_before_ NodeAnnouncement.
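A simplified sketch of tracking sets of parent jobs:

```go
package sketch

import "sync"

// jobID abstracts a parent validation job, e.g. all jobs for a given
// short channel ID can share one ID.
type jobID uint64

// validationBarrier is a simplified sketch of tracking sets of abstract
// parent jobs per child job rather than individual parent jobs.
type validationBarrier struct {
	mu sync.Mutex

	// parents maps a child job to the done-channels of every parent
	// it must wait for, e.g. a ChannelUpdate waiting on its
	// ChannelAnnouncement.
	parents map[jobID]map[jobID]chan struct{}
}

// waitForParents blocks until every parent of the child job has
// signalled completion.
func (b *validationBarrier) waitForParents(child jobID) {
	b.mu.Lock()
	deps := b.parents[child]
	b.mu.Unlock()

	for _, done := range deps {
		<-done
	}
}
```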
We add a new config option to set the `ProofMatureDelta` so that users
can tune their graph based on their own preference for the number of
confirmations required before announcement signatures are processed.
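A rough sketch of how such an option could be surfaced; the flag name is
illustrative rather than the exact lnd flag:

```go
package sketch

// gossipConfig sketches how the option could be exposed to users.
type gossipConfig struct {
	// AnnouncementConf is the number of confirmations we require on
	// a channel's funding output before we process its announcement
	// signatures; it is used to set the gossiper's ProofMatureDelta.
	AnnouncementConf uint32 `long:"announcement-conf" description:"confs required before processing announcement signatures"`
}
```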
This commit fixes the following race,
1. syncer(state=syncingChans) sends QueryChannelRange
2. remote peer replies ReplyChannelRange
3. ProcessQueryMsg fails to process the remote peer's msg as its state
is neither waitingQueryChanReply nor waitingQueryRangeReply.
4. syncer marks its new state waitingQueryChanReply, but too late.
The historical sync will now fail, and the syncer will be stuck in this
state. Worse, it can no longer forward channel announcements to other
connected peers, as it skips broadcasting during the initial graph
sync.
This is now fixed to make sure the following two steps are atomic,
1. syncer(state=syncingChans) sends QueryChannelRange
2. syncer marks its new state waitingQueryChanReply.
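A simplified sketch of making the two steps atomic; the real syncer may use a
different synchronisation primitive:

```go
package sketch

import "sync"

type syncerState uint32

const (
	syncingChans syncerState = iota
	waitingQueryChanReply
)

// chanSyncer sketches the fix: sending QueryChannelRange and recording
// the new state happen under one lock, so a fast ReplyChannelRange can
// never observe the stale syncingChans state.
type chanSyncer struct {
	mu    sync.Mutex
	state syncerState

	send func(msg interface{}) error
}

func (s *chanSyncer) queryChannelRange(msg interface{}) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if err := s.send(msg); err != nil {
		return err
	}

	// Mark the state transition before releasing the lock.
	s.state = waitingQueryChanReply

	return nil
}
```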
and the same for ChannelStateDB.FetchChannel. Most of the calls to these
methods provide a `nil` Tx anyway. The only place that currently
provides a non-nil tx is in the `localchans.Manager`. It takes the
transaction provided to the `ForAllOutgoingChannels` callback and passes
it to its `updateEdge` method. Note, however, that the
`ForAllOutgoingChannels` call is a call to the graph DB while the call
to `updateEdge` is a call to the `ChannelStateDB`. There is no reason
these two calls need to happen under the same transaction, since they
read from two completely disjoint databases. And so, in the effort to
completely untangle the relationship between the two databases, we no
longer use the same transaction for these two calls.
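A simplified sketch of the decoupling, with placeholder types:

```go
package sketch

// edgePolicy and the interfaces below are simplified placeholders for
// the real graph DB and channel state DB types.
type edgePolicy struct {
	ChanID uint64
}

type graphDB interface {
	// ForAllOutgoingChannels runs the callback inside the graph
	// DB's own transaction.
	ForAllOutgoingChannels(cb func(*edgePolicy) error) error
}

type chanStateDB interface {
	// UpdateEdge no longer takes the graph transaction; it opens
	// its own transaction against the channel state DB.
	UpdateEdge(policy *edgePolicy) error
}

// updatePolicies shows the decoupling: the graph read and the channel
// state write each run in their own database transaction.
func updatePolicies(g graphDB, c chanStateDB) error {
	return g.ForAllOutgoingChannels(func(p *edgePolicy) error {
		return c.UpdateEdge(p)
	})
}
```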
All the structs defined in the `channeldb/models` package are graph
related. So once we move all the graph CRUD code to the graph package,
it makes sense to have the schema structs there too. So this just moves
the `models` package over to `graph/db/models`.