Remove the previously added TODOs which would extract InboundFee info
from the ExtraOpaqueData of a ChannelUpdate at the time of
ChannelEdgePolicy construction. These can now be replaced by using the
newly added InboundFee record on the ChannelUpdate message.
In this commit, we make sure to set the new field wherever appropriate.
This will be any place where the ChannelEdgePolicy is constructed other
than its disk deserialisation.
We add logging so we can draw conclusions about how long the processing
of gossip messages takes and potentially see whether the syncer
buffer channel size is a bottleneck in processing.
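As a rough sketch of the kind of measurement this enables (names here are illustrative, not the actual gossiper types), the handler can record a start time and log the elapsed duration together with the current backlog of the buffer channel:
```
package main

import (
	"log"
	"time"
)

// processGossipMsgs drains a buffered channel of gossip messages and logs
// how long each one takes to handle, plus the remaining backlog, so we can
// tell whether the channel size is the bottleneck.
func processGossipMsgs(msgChan chan string, handle func(string)) {
	for msg := range msgChan {
		start := time.Now()
		handle(msg)

		log.Printf("processed gossip msg in %v (backlog=%d)",
			time.Since(start), len(msgChan))
	}
}
```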
The `GraphSource` interface in the `autopilot` package is directly
implemented by the `graphdb.KVStore` and so we will eventually thread
contexts through to this interface. So in this commit, we start updating
the autopilot system to thread contexts through in preparation for
passing the context through to any calls made to the GraphSource.
Two context.TODOs are added here which will be addressed in follow up
commits.
For any method that takes a context and has a select that listens on
the system's quit channel, we should also listen on the ctx, since we
should not need to worry about whether this context is derived internally
or externally.
With this, we move a context.TODO() out of the gossiper and into the
brontide package - this will be removed in a future PR which focuses on
threading contexts through that code.
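The resulting select pattern, sketched below with illustrative types, simply listens on both signals:
```
package main

import (
	"context"
	"errors"
)

var errShuttingDown = errors.New("subsystem shutting down")

type subsystem struct {
	msgChan chan string
	quit    chan struct{}
}

// waitForMsg listens on the passed ctx as well as the quit channel, so the
// method behaves correctly regardless of whether the ctx was derived
// internally or externally.
func (s *subsystem) waitForMsg(ctx context.Context) (string, error) {
	select {
	case msg := <-s.msgChan:
		return msg, nil
	case <-ctx.Done():
		return "", ctx.Err()
	case <-s.quit:
		return "", errShuttingDown
	}
}
```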
The `GossipSyncer` makes various calls to the `ChannelGraphTimeSeries`
interface which threads through to the graph DB. So in preparation for
threading context through to all the methods on that interface, we
update the GossipSyncer accordingly by passing contexts through.
Two `context.TODO()`s are added in this commit. They will be removed in
the upcoming commits.
Pass the parent LND context to the gossiper, let it derive a child
context that gets cancelled on Stop. Pass the context through to any
methods that will eventually thread it through to any graph DB calls.
One `context.TODO()` is added here - this will be removed in the next
commit.
NOTE: for any internal methods that the context gets passed to, if those
methods already listen on the gossiper's `quit` channel, then they don't
need to also listen on the passed context's Done() channel because the
quit channel is closed at the same time that the context is cancelled.
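The lifecycle wiring this describes, sketched with illustrative names, looks roughly as follows:
```
package main

import (
	"context"
	"sync"
)

// gossiperSketch derives a child context from the parent LND context on
// Start and cancels it on Stop, at the same time the quit channel is
// closed.
type gossiperSketch struct {
	cancel context.CancelFunc
	quit   chan struct{}
	wg     sync.WaitGroup
}

func (g *gossiperSketch) Start(ctx context.Context) error {
	ctx, g.cancel = context.WithCancel(ctx)
	g.quit = make(chan struct{})

	g.wg.Add(1)
	go g.networkHandler(ctx)

	return nil
}

func (g *gossiperSketch) Stop() {
	// The context is cancelled and quit is closed together, which is why
	// internal methods only need to listen on one of the two.
	g.cancel()
	close(g.quit)
	g.wg.Wait()
}

func (g *gossiperSketch) networkHandler(ctx context.Context) {
	defer g.wg.Done()
	<-ctx.Done()
}
```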
In this commit, we revamp the old message based rate limiting. First, we
move to meter by bytes/s instead of messages/s. The old logic had an
error in that it limited groups of message replies, instead of each
message. With this new approach, we'll use the newly added
SerializedSize method to implement fine-grained bandwidth metering.
We need to pick two values: the burst rate and the msg bytes rate. The
burst rate is the max amount that can be sent in a given period of time. We
need to set this above 65 KB, the max msg size, otherwise no
messages can be sent. The bucket starts with this many tokens (bytes).
As those are depleted, the amount of tokens is refilled at the msg
bytes rate.
As conservative values, we've chosen 200 KB as the burst rate, and 100
KB/s as the limit.
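As a sketch of per-byte metering with those numbers (using the standard golang.org/x/time/rate token bucket rather than lnd's actual implementation):
```
package main

import (
	"context"

	"golang.org/x/time/rate"
)

const (
	// Refill at 100 KB/s with a 200 KB burst, comfortably above the
	// 65 KB max message size so a full-size message can always be sent.
	msgBytesPerSec = 100 * 1024
	msgBytesBurst  = 200 * 1024
)

func newMsgByteLimiter() *rate.Limiter {
	return rate.NewLimiter(rate.Limit(msgBytesPerSec), msgBytesBurst)
}

// meterMsg blocks until enough byte tokens are available for a message of
// the given serialized size.
func meterMsg(ctx context.Context, l *rate.Limiter, msgSize int) error {
	return l.WaitN(ctx, msgSize)
}
```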
Here we introduce the access manager which has caches that will
determine the access control status of our peers. Peers that have
had their funding transaction confirm with us are protected. Peers
that only have pending-open channels with us are granted temporary access
and can have their access revoked. The rest of the peers are granted
restricted access.
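The three access levels could be modelled along these lines (illustrative identifiers, not lnd's actual ones):
```
package main

// peerAccessStatus enumerates the access levels described above.
type peerAccessStatus int

const (
	// peerStatusRestricted is the default for peers with no channels
	// with us.
	peerStatusRestricted peerAccessStatus = iota

	// peerStatusTemporary is for peers that only have pending-open
	// channels with us; this access can be revoked.
	peerStatusTemporary

	// peerStatusProtected is for peers whose funding transaction has
	// confirmed with us.
	peerStatusProtected
)
```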
Previously, we would set the state of the syncer after sending the msg,
which has the following flow,
1. In state `queryNewChannels`, we send the msg `QueryShortChanIDs`.
2. Once the msg is sent, we change to state `waitingQueryChanReply`.
But there's no guarantee the remote won't reply back in between the two
steps. When that happens, our syncer would still be in state
`queryNewChannels`, causing the following error,
```
[ERR] DISC gossiper.go:873: Process query msg from peer [Alice] got unexpected msg *lnwire.ReplyShortChanIDsEnd received in state queryNewChannels
```
To fix it, we now make sure the state is updated before sending the msg.
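Sketched with illustrative types, the reordering looks like this:
```
package main

import "sync/atomic"

type syncerState uint32

const (
	queryNewChannels syncerState = iota
	waitingQueryChanReply
)

// syncerSketch transitions state before the message is sent, so a fast
// reply can never observe the old state.
type syncerSketch struct {
	state atomic.Uint32
	send  func(msg any) error
}

func (s *syncerSketch) sendQueryShortChanIDs(msg any) error {
	// 1. Mark the new state first.
	s.state.Store(uint32(waitingQueryChanReply))

	// 2. Only then send the query; a reply racing in now finds the
	//    syncer in the correct state.
	return s.send(msg)
}
```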
This commit adds a test to demonstrate that if we receive two identical
updates (which can happen if we get the same update from two peers in
quick succession), then our rate limiting logic will be hit early as
both updates might be counted towards the rate limit. This will be fixed
in an upcoming commit.
This commit is a pure refactor. We move the transaction validation
(existence, spentness, correctness) from the `graph.Builder` to the
gossiper since this is where all protocol level checks should happen.
All tests involved are also updated/moved.
As we move the funding transaction validation logic out of the builder
and into the gossiper, we want to ensure that the behaviour stays
consistent with what we have today. So we should acquire this lock before
performing any expensive checks such as building the funding tx or
validating it.
In preparation for an upcoming commit which will move all channel
funding tx validation to the gossiper, we first move the helper method
which helps build the expected funding transaction script based on the
fields in the channel announcement. We will still want this script later
on in the builder for updating the ChainView though, and so we pass this
field along with the ChannelEdgeInfo. With this change, we can remove
the TapscriptRoot field from the ChannelEdgeInfo since the only reason
it was there was so that the builder could reconstruct the full funding
script.
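For the legacy (P2WSH) channel case, the expected funding script derived from the two bitcoin keys in the announcement is roughly a 2-of-2 multisig wrapped in P2WSH, as in the sketch below (key sorting and the taproot path involving the TapscriptRoot are omitted):
```
package main

import (
	"crypto/sha256"

	"github.com/btcsuite/btcd/txscript"
)

// fundingScript builds the expected pk script for a P2WSH channel from the
// two bitcoin keys carried in the channel announcement.
func fundingScript(bitcoinKey1, bitcoinKey2 []byte) ([]byte, error) {
	witnessScript, err := txscript.NewScriptBuilder().
		AddOp(txscript.OP_2).
		AddData(bitcoinKey1).
		AddData(bitcoinKey2).
		AddOp(txscript.OP_2).
		AddOp(txscript.OP_CHECKMULTISIG).
		Script()
	if err != nil {
		return nil, err
	}

	// P2WSH: OP_0 <sha256(witnessScript)>.
	scriptHash := sha256.Sum256(witnessScript)

	return txscript.NewScriptBuilder().
		AddOp(txscript.OP_0).
		AddData(scriptHash[:]).
		Script()
}
```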
Here, we add a new fundingTxOption modifier which will configure how we
set up expected calls to the mock Chain once we have moved funding tx
logic to the gossiper. Note that in this commit, these modifiers don't
yet do anything.
This is in preparation for the commit where we move across all the
funding tx validation so that we can test that we are correctly updating
the zombie index.
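The modifier shape follows the usual functional-option pattern, roughly (all names illustrative):
```
package main

// fundingTxCfg drives how the mock chain is primed for a test.
type fundingTxCfg struct {
	blockHeight uint32
	spent       bool
}

type fundingTxOption func(*fundingTxCfg)

// withFundingTxSpent marks the funding output as spent so the mock chain
// can be set up to report it as such.
func withFundingTxSpent() fundingTxOption {
	return func(cfg *fundingTxCfg) {
		cfg.spent = true
	}
}

func newFundingTxCfg(opts ...fundingTxOption) *fundingTxCfg {
	cfg := &fundingTxCfg{blockHeight: 100}
	for _, opt := range opts {
		opt(cfg)
	}

	return cfg
}
```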
The `graph.Builder`'s `addZombieEdge` method is currently called during
funding transaction validation for the case where the funding tx is not
found. In preparation for moving this code to the gossiper, we export
the method and add it to the ChannelGraphSource interface so that the
gossiper will be able to call it later on.
This is in preparation for adding more modifiers: we want to later add a
modifier that will tweak the errors returned by the mock chain once
funding transaction validation has been moved to the gossiper.
This is in preparation for moving the funding transaction validation
code to the gossiper from the graph.Builder since then the gossiper will
start making GetBlockHash/GetBlock and GetUtxo calls.
Convert a bunch of the helper functions to instead be methods on the
testCtx type. This is in preparation for adding a mockChain to the
testCtx that these helpers can then use to add blocks and utxos to.
See `notifications_test.go` for an idea of what we are trying to emulate
here. Once the funding tx code has moved to the gossiper, then the logic
in `notifications_test.go` will be removed.
In preparation for moving funding transaction validation from the
Builder to the Gossiper in a later commit, we first convert these graph
Error Codes to normal error variables. This will help make the later
commit a pure code move.
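The shape of the conversion is the usual error-code-to-sentinel-error move, sketched below with illustrative names; callers then match with errors.Is instead of comparing codes:
```
package main

import "errors"

// Plain error variables replace the integer error codes.
var (
	ErrNoFundingTransaction = errors.New("funding transaction not found")
	ErrChannelSpent         = errors.New("channel output has been spent")
)

// shouldMarkZombie shows the caller side: errors.Is matches through
// wrapping, which code comparisons could not.
func shouldMarkZombie(err error) bool {
	return errors.Is(err, ErrNoFundingTransaction) ||
		errors.Is(err, ErrChannelSpent)
}
```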
The `netann` package is a more appropriate place for this code to live.
Also, once the funding transaction code is moved out of the
`graph.Builder`, then no `lnwire` validation will occur in the `graph`
package.
This commit does two things:
- removes the concept of allow / deny. Having this in place was a
minor optimization and removing it makes the solution simpler.
- changes the job dependency tracking to track sets of abstract
parent jobs rather than individual parent jobs.
As a note, the purpose of the ValidationBarrier is that it allows us
to launch gossip validation jobs in goroutines while still ensuring
that the validation order of these goroutines is adhered to when it
comes to validating ChannelAnnouncement _before_ ChannelUpdate and
_before_ NodeAnnouncement.
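A rough sketch of the set-based tracking (illustrative names): a ChannelUpdate or NodeAnnouncement job waits until every job in its parent set of ChannelAnnouncement jobs has signalled completion:
```
package main

import "sync"

// parentJobSet lets a child validation job wait on a whole set of abstract
// parent jobs rather than on individual ones.
type parentJobSet struct {
	wg sync.WaitGroup
}

func newParentJobSet(numParents int) *parentJobSet {
	s := &parentJobSet{}
	s.wg.Add(numParents)

	return s
}

// parentDone is called as each parent validation job finishes.
func (s *parentJobSet) parentDone() { s.wg.Done() }

// waitForParents blocks the child job until the entire set is done.
func (s *parentJobSet) waitForParents() { s.wg.Wait() }
```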
We add a new config option to set the `ProofMatureDelta` so users
can tune their graphs based on their own preference over the number of
confs found in the announcement signatures.
The mocked peer used here blocks on `sendToPeer`, which is not the
behavior of `SendMessageLazy` on `lnpeer.Peer`. To reflect
reality, we now make sure `sendToPeer` is non-blocking in the tests.
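In sketch form, the mock simply queues the message with enough buffer to never block the caller (illustrative types):
```
package main

type mockPeer struct {
	sentMsgs chan any
}

func newMockPeer() *mockPeer {
	// A generous buffer keeps sends non-blocking for the test's
	// purposes, mirroring SendMessageLazy's queue-and-return behaviour.
	return &mockPeer{sentMsgs: make(chan any, 100)}
}

func (p *mockPeer) sendToPeer(msg any) error {
	p.sentMsgs <- msg
	return nil
}
```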
This commit fixes the following race,
1. syncer(state=syncingChans) sends QueryChannelRange
2. remote peer replies ReplyChannelRange
3. ProcessQueryMsg fails to process the remote peer's msg as its state
is neither waitingQueryChanReply nor waitingQueryRangeReply.
4. syncer marks its new state waitingQueryChanReply, but too late.
The historical sync will now fail, and the syncer will be stuck at this
state. What's worse is it cannot forward channel announcements to other
connected peers now as it will skip the broadcasting during initial
graph sync.
This is now fixed to make sure the following two steps are atomic,
1. syncer(state=syncingChans) sends QueryChannelRange
2. syncer marks its new state waitingQueryChanReply.
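Sketched with illustrative types, the send and the state transition now happen under one lock, so a reply can never be processed while the syncer still appears to be in syncingChans:
```
package main

import (
	"errors"
	"sync"
)

var errUnexpectedMsg = errors.New("unexpected msg for current state")

type chanSyncer struct {
	mu    sync.Mutex
	state string
	send  func(msg any) error
}

func (s *chanSyncer) sendQueryChannelRange(msg any) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if err := s.send(msg); err != nil {
		return err
	}
	s.state = "waitingQueryChanReply"

	return nil
}

func (s *chanSyncer) processReply(handle func() error) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if s.state != "waitingQueryChanReply" {
		return errUnexpectedMsg
	}

	return handle()
}
```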