We plan to later add an option for a remote graph source that will
be managed from the ChannelGraph. In such a set-up, a node would rely on
the remote graph source for graph updates instead of on gossip sync.
In this scenario, however, our topology subscription logic should still
notify clients of all updates and so it makes more sense to have the
logic as part of the ChannelGraph so that we can send updates we receive
from the remote graph.
The test as it stands today does not make sense: it adds a
partial/shell node to the graph via AddLightningNode, which will never
happen in practice since that method is only ever triggered by the
gossiper, and the gossiper only calls it with a full node announcement.
Shell/partial nodes are only ever added via AddChannelEdge, which
inserts a partial node if we are adding a channel edge whose node pub
keys we don't yet have a node entry for. So we adjust the test to use
this more accurate flow.
We do this in preparation for moving the channel cache population logic
out of the constructor and into the Start method. Later on (when
topology subscription is moved to the ChannelGraph), we will also have
a goroutine that needs to be kicked off and stopped.
Here, we move the graph cache writes for AddLightningNode,
DeleteLightningNode, AddChannelEdge and MarkEdgeLive to the
ChannelGraph. Since these are writes, the cache is only updated if the
DB write is successful.
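A minimal sketch of the write-path pattern, with simplified signatures
(the real methods take more arguments):

```go
// AddChannelEdge first writes the edge via the KVStore and only
// touches the in-memory cache once the DB write has succeeded.
func (c *ChannelGraph) AddChannelEdge(edge *models.ChannelEdgeInfo) error {
	if err := c.KVStore.AddChannelEdge(edge); err != nil {
		return err
	}

	if c.graphCache != nil {
		c.graphCache.AddChannel(edge, nil, nil)
	}

	return nil
}
```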
This commit moves the graph cache checks for FetchNodeFeatures,
ForEachNodeDirectedChannel, GraphSession and ForEachNodeCached from the
KVStore to the ChannelGraph. Since the ChannelGraph is currently just a
pass-through for the KVStore methods, all that is needed for calls to
go via the ChannelGraph instead of directly to the KVStore is for the
ChannelGraph to implement those methods itself.
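The read path is the mirror image of the write path above: consult the
cache when present, fall back to the store otherwise. A sketch,
assuming the cache exposes a GetFeatures helper:

```go
// FetchNodeFeatures returns the features of the given node. The cache,
// if configured, answers directly; otherwise we fall through to disk.
func (c *ChannelGraph) FetchNodeFeatures(node route.Vertex) (
	*lnwire.FeatureVector, error) {

	if c.graphCache != nil {
		return c.graphCache.GetFeatures(node), nil
	}

	return c.KVStore.FetchNodeFeatures(node)
}
```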
In this commit, we let the ChannelGraph be responsible for populating
the graphCache and then passing it to the KVStore. This is a first step
in moving the graphCache completely out of the KVStore layer.
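Roughly, the constructor now does something like the following
(populateCache, setGraphCache and the config fields are illustrative
names, not necessarily the real ones):

```go
func NewChannelGraph(cfg *Config) (*ChannelGraph, error) {
	store, err := NewKVStore(cfg.Db)
	if err != nil {
		return nil, err
	}

	// The cache is now built and filled at the ChannelGraph layer...
	graphCache := NewGraphCache(cfg.PreAllocCacheNumNodes)
	if err := store.populateCache(graphCache); err != nil {
		return nil, err
	}

	// ...and only then handed down to the KVStore, which still
	// consults it for now.
	store.setGraphCache(graphCache)

	return &ChannelGraph{KVStore: store}, nil
}
```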
We introduce a Config struct and use it to pass NewChannelGraph
anything it needs to initialise the KVStore that it houses. This will
allow us to add ChannelGraph-specific options.
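A plausible shape for that struct (field names are illustrative):

```go
// Config holds everything NewChannelGraph needs to build the KVStore
// it houses, and gives us a place to hang ChannelGraph-specific
// options later without touching every call-site again.
type Config struct {
	// Db is the kvdb backend the KVStore is built on.
	Db kvdb.Backend

	// KVStoreOpts are forwarded verbatim to NewKVStore.
	KVStoreOpts []KVStoreOptionModifier
}
```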
Since we have renamed a file housing some very old code, the linter has
now run on all of this code for the first time. So we need to do some
clean-up work here to make it happy.
In this commit, we rename the existing ChannelGraph struct to KVStore to
better reflect its responsibilities as the CRUD layer. We then introduce
a new ChannelGraph struct which will eventually be the layer above the
CRUD layer in which we will handle caching and topology subscriptions.
For now, however, it houses only the KVStore. This means that all calls
to the KVStore will now go through this layer of indirection first. This
will allow us to slowly move the graph Cache management out of the
KVStore and into the new ChannelGraph layer.
We introduce the new ChannelGraph and rename the old one in the same
commit so that all existing call-sites don't need to change at all :)
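In sketch form, the rename plus the new wrapper look like this;
embedding is what keeps every call-site compiling unchanged:

```go
// KVStore is the old ChannelGraph under its new name: the pure CRUD
// layer.
type KVStore struct {
	db kvdb.Backend
	// ...
}

// ChannelGraph is the new layer that sits above the CRUD layer and
// will eventually own caching and topology subscriptions. Embedding
// the KVStore means every existing method is still reachable through
// the new type, so call-sites need no changes.
type ChannelGraph struct {
	*KVStore
}
```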
Rename it to kv_store.go so that we can re-use the graph.go file name
later on. We will use it to house the _new_ ChannelGraph when the
existing ChannelGraph is renamed to more clearly reflect its
responsibilities as the CRUD layer.
The exposed AddNode, AddEdge and UpdateEdge methods of the Builder are
currently synchronous: even though they pass messages to the network
handler, which spins off the handling in a goroutine, the public
methods still wait for a response from that handling before returning.
The only part that is actually done asynchronously is the topology
notifications.
We previously tried to simplify things in [this
commit](d757b3bcfc)
but soon realised that there was a reason for sending the messages to
the central/synchronous network handler first: it ensures consistency
for topology clients, i.e. the ordering of new topology subscriptions
and cancellations needs to be consistent and handled synchronously with
new network updates. For example, if a new update comes in right after
a topology client cancels its subscription, then it should _not_ be
notified. Similarly for new subscriptions. So that commit was reverted
soon after.
We can, however, still simplify things, as is done in this commit, by
noting that it is _only topology subscriptions and notifications_ that
need this serialised handling; the actual network updates do not. So
that is what is done here.
This refactor will make moving the topology subscription logic to a new
subsystem later on much easier.
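A sketch of the dedicated loop this leaves us with (channel and helper
names are illustrative):

```go
// handleTopologyUpdates serialises exactly two things: client
// subscribe/unsubscribe requests and the notifications sent to those
// clients. Keeping both in one goroutine preserves the ordering
// guarantee described above, while network updates no longer have to
// queue behind it.
func (b *Builder) handleTopologyUpdates() {
	defer b.wg.Done()

	for {
		select {
		// A topology client subscribed or cancelled.
		case update := <-b.ntfnClientUpdates:
			b.applyClientUpdate(update)

		// A validated network update produced a notification
		// for the current set of clients.
		case ntfn := <-b.topologyUpdates:
			b.notifyClients(ntfn)

		case <-b.quit:
			return
		}
	}
}
```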
This commit restricts the graph CRUD interface such that one can only
add a proof to a channel announcement and not update any other fields.
This pattern is better suited for SQL land too.
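Sketchwise, the narrowed surface looks something like this (signature
approximate, setEdgeProof a hypothetical helper):

```go
// AddEdgeProof attaches an announcement proof to an existing channel
// edge. No other field of the edge can be modified through this
// method, which maps cleanly onto a single-column update in SQL.
func (c *ChannelGraph) AddEdgeProof(chanID lnwire.ShortChannelID,
	proof *models.ChannelAuthProof) error {

	// Fetch the stored edge, set the proof, and write it back; any
	// other mutation has to go through the dedicated methods.
	return c.setEdgeProof(chanID, proof)
}
```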
This commit fixes a bug: UpdateChannelEdge checked whether the passed
`edge` argument is nil when it should actually be checking whether the
`edges` bucket is nil.
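The fix, side by side (sketch):

```go
// Before: the nil-check is on the caller-supplied edge, which can
// never be nil at this point.
edges := tx.ReadWriteBucket(edgeBucket)
if edge == nil {
	return ErrEdgeNotFound
}

// After: it is the bucket lookup that can fail and must be guarded.
edges = tx.ReadWriteBucket(edgeBucket)
if edges == nil {
	return ErrEdgeNotFound
}
```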
In this commit, we use the available kvdb `View` method directly for
starting a graph session instead of manually creating and committing
the transaction. Note that this changes behaviour, since a failed tx
create/commit will now error instead of just being logged.
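A sketch of the new shape, using the kvdb package's View helper (the
session-wrapping callback is illustrative):

```go
// GraphSession now runs its body inside kvdb.View, so transaction
// creation and commit are handled (and error-checked) for us.
func (c *ChannelGraph) GraphSession(
	cb func(graph NodeTraverser) error) error {

	return kvdb.View(c.db, func(tx kvdb.RTx) error {
		return cb(newNodeTraverserSession(c, tx))
	}, func() {})
}
```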
In this commit, we add a `GraphSession` method to the `ChannelGraph`.
This method provides a caller with access to a `NodeTraverser`. This is
used by pathfinding to create a graph "session" over which to perform a
set of queries for a single pathfinding attempt. With this refactor, we
hide details such as DB transaction creation and transaction commits
from the caller. So with this, pathfinding does not need to remember to
"close the graph session". With this commit, the `graphsession` package
may be completely removed.
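From pathfinding's side, usage now looks roughly like this; there is
nothing to close when the callback returns:

```go
err := graph.GraphSession(func(g NodeTraverser) error {
	// Every query in here shares one read transaction.
	if _, err := g.FetchNodeFeatures(target); err != nil {
		return err
	}

	// ... run the path search against g ...
	return nil
})
if err != nil {
	return err
}
```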
The `graphsession.NewRoutingGraph` method was used to create a
RoutingGraph instance with no consistent read transaction across calls.
But now that the ChannelGraph directly implements this, we can remove
the NewRoutingGraph method.
In preparation for having the ChannelGraph directly implement the
`routing.Graph` interface, we rename the `ForEachNodeChannel` method to
`ForEachNodeDirectedChannel`: the ChannelGraph already uses the
`ForEachNodeChannel` name, and the new name is more appropriate given
that the ChannelGraph currently has a `ForEachNodeDirectedChannelTx`
method that passes the same DirectedChannel type to the given call-back.
Add the `Tx` suffix to both ForEachNodeDirectedChannelTx and
FetchNodeFeatures temporarily so that we free up the original names for
other use. The renamed methods will be removed or unexported in an
upcoming commit. The aim is to have no exported methods on the
ChannelGraph that accept a kvdb.RTx as a parameter.
For consistency in the graphsession.graph interface, we let the
FetchNodeFeatures method take a read transaction, just like
ForEachNodeDirectedChannel. This is nice because then all calls in the
same pathfinding transaction use the same read transaction.
This commit is a pure refactor. We move the transaction validation
(existence, spentness, correctness) from the `graph.Builder` to the
gossiper since this is where all protocol level checks should happen.
All tests involved are also updated/moved.
In preparation for an upcoming commit which will move all channel
funding tx validation to the gossiper, we first move the helper method
that builds the expected funding transaction script from the fields in
the channel announcement. We will still want this script later on in
the builder for updating the ChainView, though, and so we pass it along
with the ChannelEdgeInfo. With this change, we can remove the
TapscriptRoot field from the ChannelEdgeInfo, since the only reason it
was there was so that the builder could reconstruct the full funding
script.
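The moved helper, in rough outline (exact parameters may differ):

```go
// makeFundingScript rebuilds the expected funding script purely from
// announcement-level data, which is why it can live in the gossiper.
func makeFundingScript(bitcoinKey1, bitcoinKey2 []byte,
	features *lnwire.RawFeatureVector,
	tapscriptRoot fn.Option[chainhash.Hash]) ([]byte, error) {

	// Derive the 2-of-2 multisig (or taproot) funding script from
	// the two bitcoin keys advertised in the announcement.
	// ...
	return nil, nil
}
```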
This is in preparation for the commit where we move across all the
funding tx validation so that we can test that we are correctly updating
the zombie index.
The `graph.Builder`'s `addZombieEdge` method is currently called during
funding transaction validation for the case where the funding tx is not
found. In preparation for moving this code to the gossiper, we export
the method and add it to the ChannelGraphSource interface so that the
gossiper will be able to call it later on.
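The interface gains the newly exported method; the name below follows
the "mark zombie" wording used elsewhere in the package and is
illustrative:

```go
type ChannelGraphSource interface {
	// ...existing methods elided...

	// MarkZombieEdge marks the channel with the given channel ID
	// as a zombie in the zombie index.
	MarkZombieEdge(chanID uint64) error
}
```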
With the previous commit, the AddNode method was removed, and since
that was the only method making use of ForEachChannel on the
GraphCacheNode interface, we can remove that method too. The only two
methods left just expose the node's pub key and features, so the
interface really is not required anymore, and the entire thing can be
removed along with its implementation.
The AddNode method on the GraphCache calls `AddNodeFeatures` underneath
and then iterates through all the node's persisted channels and adds
them to the cache too via `AddChannel`.
This is, however, not required: at the time the cache is populated in
`NewChannelGraph`, it is populated with all persisted nodes and all
persisted channels. Then, once any new channel comes in via
`AddChannelEdge`, it is added to the cache via AddChannel. If any new
node comes in via `AddLightningNode`, the cache's AddNode method is
currently called, which both adds the node and again iterates through
all its persisted channels and re-adds them to the cache. This is
definitely redundant, since the initial cache population plus the
updates via AddChannelEdge keep the cache fresh in terms of channels.
So we remove this for 2 reasons: 1) to remove the redundant DB calls
and 2) because it requires a kvdb.RTx to be passed in to the GraphCache
calls, which will make it hard to extract the cache out of the CRUD
layer and use it more generally.
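After the removal, handling a new node announcement reduces to a pure
in-memory feature update; a sketch of the before and after in the
cache-update path:

```go
func (c *ChannelGraph) AddLightningNode(node *models.LightningNode) error {
	if err := c.KVStore.AddLightningNode(node); err != nil {
		return err
	}

	// Before: c.graphCache.AddNode(tx, node), which re-read and
	// re-added every one of the node's channels under a kvdb.RTx.
	// After: a feature update is enough, and no transaction is
	// needed, since AddChannelEdge keeps the channel set fresh.
	c.graphCache.AddNodeFeatures(node.PubKeyBytes, node.Features)

	return nil
}
```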
The AddNode method made sense when the cache was first added to the
code-base
[here](369c09be61 (diff-ae36bdb6670644d20c4e43f3a0ed47f71886c2bcdf3cc2937de24315da5dc072R213)),
since back then, during graph cache population, nodes and channels were
added to the cache in a single DB transaction. This was, however,
changed [later
on](352008a0c2)
to be done in 2 separate DB calls for efficiency reasons.
In this commit, a new NodeRTx interface is added which represents
consistent access to a persisted models.LightningNode. The
ForEachChannel method of the interface gives the caller access to the
node's channels under the same read transaction (if any) that was used
to fetch the node in the first place. The FetchNode method returns
another NodeRTx which again will have the same underlying read
transaction.
The main point of this interface is to provide this consistent access
without needing to expose the `kvdb.RTx` type as a method parameter.
This will then make it much easier in future to add new implementations
of this interface that are backed by other databases (or RPC
connections) where the `kvdb.RTx` type does not apply.
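A sketch of the interface matching the description above (exact
signatures may differ):

```go
// NodeRTx provides consistent access to a persisted node: every method
// operates under the same read transaction (if any) that the node was
// originally fetched with.
type NodeRTx interface {
	// Node returns the persisted models.LightningNode.
	Node() *models.LightningNode

	// ForEachChannel visits the node's channels under the read
	// transaction used to fetch the node.
	ForEachChannel(func(*models.ChannelEdgeInfo,
		*models.ChannelEdgePolicy,
		*models.ChannelEdgePolicy) error) error

	// FetchNode fetches another node, returning a NodeRTx that
	// shares the same underlying read transaction.
	FetchNode(node route.Vertex) (NodeRTx, error)
}
```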
We will make use of the new interface in the `autopilot` package in
upcoming commits in order to remove the `autopilot`'s dependence on the
pointer to the `*graphdb.ChannelGraph` which it has today.
In preparation for moving funding transaction validation from the
Builder to the Gossiper in a later commit, we first convert these graph
error codes to normal error variables. This will help make the later
commit a pure code move.
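A sketch of the conversion; the names here illustrate the pattern
rather than giving an exact listing:

```go
// Before: graph-specific ErrorCode values. After: plain sentinel
// errors that the gossiper can return, wrap, and test with errors.Is.
var (
	// ErrNoFundingTransaction is returned when we are unable to
	// find the funding transaction described by a channel
	// announcement.
	ErrNoFundingTransaction = errors.New("unable to find funding " +
		"transaction")

	// ErrChannelSpent is returned when the funding output of the
	// channel has already been spent on-chain.
	ErrChannelSpent = errors.New("channel output has been spent")
)
```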