Implement ForEachChannelCacheable, which is like ForEachChannel but its
callback takes the cached versions of channel info & policies. This is
then used during graph cache population. This will be useful once the
SQL implementation is added so that we can reduce the number of DB trips
on cache population.
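Roughly, the shape could be as follows (the cached type names and exact signature here are assumptions, not the real API):

```go
// ForEachChannelCacheable mirrors ForEachChannel, but hands the callback the
// cache-friendly forms of the edge info and policies so that cache population
// does not first have to build (or convert) the full edge records.
//
// NOTE: sketch only; the cached type names and exact signature are assumed.
func (c *KVStore) ForEachChannelCacheable(cb func(*models.CachedEdgeInfo,
	*models.CachedEdgePolicy, *models.CachedEdgePolicy) error) error {

	// Iterate every channel once and invoke cb with the cached
	// representations, just like ForEachChannel does with the full ones.
	// ...
	return nil
}
```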
For any call-site where we extract inbound fees from a
models.ChannelEdgePolicy object that was deserialised from disk, we can
now just use the new InboundFee field on the object since we know that
it would have been populated at deserialisation time.
Note that at all of these call-sites, if decoding of the TLV stream
previously failed, the error was ignored and the edge was simply skipped.
That behaviour is preserved by the way ErrParsingExtraTLVBytes is handled
at the DB layer.
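As a small sketch (the consumer helper shown is hypothetical), a call-site after this change just reads the field directly:

```go
// Sketch of a call-site after this change: the fee no longer has to be
// re-decoded from policy.ExtraOpaqueData (with the edge skipped on a decode
// error); it is read straight off the policy.
func applyPolicyInboundFee(policy *models.ChannelEdgePolicy) {
	// InboundFee was populated at deserialisation time, if present.
	applyInboundFee(policy.InboundFee) // hypothetical consumer of the fee
}
```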
Here we add an explicit InboundFee field to the ChannelEdgePolicy
struct. Then, in the graph KVStore, at deserialisation time, we extract
the InboundFee from the ExtraOpaqueData. Currently we do this at higher
levels, but we are going to move it to the DB layer so that when we add
the SQL implementation of the graph store, we can have explicit columns
for inbound fees. We need to account for the fact that invalid TLV may
already be persisted, though, and we don't necessarily want to fail when
deserialising it. So we now return ErrParsingExtraTLVBytes if we fail to
parse the extra bytes as TLV, and we let the callers handle it, meaning
they don't necessarily fail when they receive one of these errors.
As of this commit, we can now expect the InboundFee field of a
ChannelEdgePolicy to be set (if inbound fees are set on the policy) for
any update that we read from disk.
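A minimal sketch of the decode-time step, with the error wrapping being the important part (the parsing helper is a stand-in for the real TLV decoding):

```go
// populateInboundFee sketches the deserialisation-time step: parse the extra
// opaque bytes once and wrap any failure in ErrParsingExtraTLVBytes, so that
// callers can choose to skip the offending edge rather than abort the whole
// read. parseInboundFeeRecord stands in for the real TLV decoding.
func populateInboundFee(policy *models.ChannelEdgePolicy) error {
	if len(policy.ExtraOpaqueData) == 0 {
		return nil
	}

	fee, err := parseInboundFeeRecord(policy.ExtraOpaqueData)
	if err != nil {
		return fmt.Errorf("%w: %v", ErrParsingExtraTLVBytes, err)
	}
	policy.InboundFee = fee

	return nil
}
```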
In this commit, we start validating the extra opaque data of a channel
edge policy before persisting it. We just check that the data is valid
TLV.
NOTE: we recently [started
validating](1410a0949d)
this at the lnwire level. So really, no new update will reach the DB
layer without this already being checked. But we check it again here so
that the DB API behaves correctly as its own unit.
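In sketch form, the write-side check amounts to the following (the parsing helper is a stand-in for the decoding the store already does elsewhere):

```go
// validateExtraOpaqueData rejects an update whose opaque bytes do not form a
// valid TLV stream, so the DB API enforces this even if a caller skipped the
// lnwire-level validation.
func validateExtraOpaqueData(extra []byte) error {
	if len(extra) == 0 {
		return nil
	}

	if _, err := parseTLVStream(extra); err != nil {
		return fmt.Errorf("invalid extra opaque data: %w", err)
	}

	return nil
}
```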
In preparation for having consistency with the structs created by the
SQLStore and the KVStore (so that they have the same behaviour when
tested by the unit tests), here we make sure not to init the
ExtraOpaqueData field of the LightningNode struct unless there are
actually bytes to set.
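The change itself is tiny; an illustrative helper form of it:

```go
// Only assign the field when there are actual bytes, so a node read back by
// the KVStore looks the same as one built by the future SQLStore when the
// extra data is absent. Illustrative helper, not the real lnd code.
func setExtraOpaqueData(node *models.LightningNode, extra []byte) {
	if len(extra) > 0 {
		node.ExtraOpaqueData = extra
	}
}
```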
In this commit, we update the batch scheduler so that it has the ability
to do read-only calls. It will do a best effort attempt at keeping a
transaction in read-only mode and then if any requests get added to a
batch that require a read-write tx, then the entire batch's tx will be
upgraded to use a read-write tx.
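A rough sketch of that upgrade behaviour (the batch internals here are simplified and illustrative):

```go
// A batch starts out read-only. The moment a request that needs writes is
// queued, the whole batch is flagged read-write, and its transaction will be
// opened as a read-write tx when the batch is executed.
type request struct {
	readOnly bool
	do       func() error
}

type pendingBatch struct {
	readOnly bool
	reqs     []*request
}

func (b *pendingBatch) add(r *request) {
	if len(b.reqs) == 0 {
		// The first request decides the initial mode.
		b.readOnly = r.readOnly
	} else if !r.readOnly {
		// Any read-write request upgrades the entire batch.
		b.readOnly = false
	}

	b.reqs = append(b.reqs, r)
}
```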
In preparation for using the same logic for non-bbolt backends, we adapt
the batch.Scheduler to be more generic.
The only user of the scheduler at the moment is the KVStore in the
`graph.db` package. This store instantiates the bbolt implementation of
the scheduler.
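The generic shape might look roughly like this (names are illustrative; the point is that the transaction type becomes a type parameter rather than a bbolt-specific one):

```go
// Scheduler batches requests that all run against the same transaction type
// Q, instead of being tied to a bbolt transaction.
type Scheduler[Q any] interface {
	Execute(ctx context.Context, req *Request[Q]) error
}

// Request carries the work to run inside the batched transaction.
type Request[Q any] struct {
	Do func(tx Q) error
}
```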
In this commit, we add a `test_kvdb.go` file with a single definition of
the `NewTestDB` function. A new version of `MakeTestGraph` (called
`MakeTestGraphNew`) is added which makes use of this `NewTestDB` function
to create the backing `V1Store` passed to the `ChannelGraph` for tests.
Later on, we will add new implementations of this method backed by
sqlite and postgres. When those are added, then build flags will
control which version of `NewTestDB` is called.
With this change, the only test call-site of `NewKVStore` is the new
`test_kvdb.go` file.
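Sketched out, the build-flag arrangement could look like this (the file layout, tag names and kvdb-backed constructor are assumptions):

```go
//go:build !test_db_postgres && !test_db_sqlite

package graphdb

import "testing"

// NewTestDB builds the V1Store implementation used by MakeTestGraphNew. With
// different build tags, an equivalent definition backed by sqlite or postgres
// would be compiled in instead of this kvdb-backed one.
func NewTestDB(t *testing.T) V1Store {
	return newTestKVStore(t) // hypothetical kvdb-backed helper
}
```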
In this commit, we unify how all unit tests that make use of the graph
create their test ChannelGraph instance. This will make it easier to
ensure that, once we plug in different graph DB implementations, all
unit tests are run against all variants of the graph DB.
With this commit, `NewChannelGraph` is now only called via
`MakeTestGraph` for all tests _except_ for `TestGraphLoading` which
needs to be able to reload a ChannelGraph with the same backend. This
will be addressed in a follow-up commit once more helpers are defined.
Note that in all previous packages where we created a test graph using
`kvdb.GetBoltBackend`, we now need to add a `TestMain` function with a
call to `kvdb.RunTest` since the `MakeTestGraph` helper uses
`GetTestBackend` (which requires an embedded postgres instance to be
running) instead of `kvdb.GetBoltBackend`.
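The added TestMain is just the standard harness (the exact signature of the helper named above is assumed here):

```go
func TestMain(m *testing.M) {
	// Sets up (and tears down) whatever backend the kvdb test helpers
	// need, e.g. an embedded postgres instance when that backend is
	// selected. Helper signature assumed for illustration.
	kvdb.RunTest(m)
}
```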
Currently none of the calls to MakeTestGraph make use of the
KVStoreOptionModifier options, but later on we will want to make use of
the `WithUseGraphCache` ChannelGraphOption, and so we take this
opportunity to switch out the functional parameters that the helper
function takes.
Instead of returning an error and needing to call `require.NoError` for
each call to `MakeTestGraph`, rather just use the available testing
variable to require no error within the function itself.
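Put together with the previous change, the helper ends up shaped roughly like this (backend setup and constructor call shown schematically; the backend helper is hypothetical):

```go
// MakeTestGraph, reshaped: it asserts internally instead of returning an
// error and forwards ChannelGraph-level options such as WithUseGraphCache.
func MakeTestGraph(t *testing.T, opts ...ChannelGraphOption) *ChannelGraph {
	t.Helper()

	backend := newTestBackend(t) // hypothetical wrapper around GetTestBackend

	graph, err := NewChannelGraph(backend, opts...)
	require.NoError(t, err)

	return graph
}
```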
In this commit, we introduce the `V1Store` interface which the existing
`graphdb.KVStore` implements today. The idea is to eventually create a
SQL DB backed implementation of this interface.
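The rough idea (the interface shown is a tiny excerpt with assumed method signatures; the real one covers the KVStore's full exported CRUD surface):

```go
// V1Store is the interface the persisted graph store must satisfy. The
// existing KVStore already implements it, and a SQL-backed store can later be
// dropped in behind the same interface.
type V1Store interface {
	AddLightningNode(node *models.LightningNode,
		op ...batch.SchedulerOption) error

	ForEachChannel(cb func(*models.ChannelEdgeInfo,
		*models.ChannelEdgePolicy, *models.ChannelEdgePolicy) error) error

	// ... remaining graph CRUD methods ...
}

// Compile-time check that the KVStore satisfies the interface.
var _ V1Store = (*KVStore)(nil)
```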
Currently, a few of the graph KVStore methods take the
`batch.SchedulerOptions` param. This is only used to set the LazyAdd
option. A SchedulerOption is a functional option that takes a
`batch.Request` which has bolt-specific fields in it. This commit
restructures things a bit such that the `batch.Request` type is no
longer part of the `batch.SchedulerOptions` - this will make it easier
to implement the graph store with a different DB backend.
Since we have now removed all call-sites that make use of this
parameter, we can remove it. This helps hide DB-specific details from
the interface we will introduce for the graph store.
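Sketched, the reshaped option type no longer knows about batch.Request at all (names approximate):

```go
// schedulerOptions is a plain, backend-agnostic bag of options.
type schedulerOptions struct {
	// lazy indicates the request may wait for the next batch interval
	// rather than triggering an immediate flush.
	lazy bool
}

// SchedulerOption mutates the options bag instead of a bolt-flavoured
// batch.Request, so non-bbolt scheduler implementations can interpret it too.
type SchedulerOption func(*schedulerOptions)

// LazyAdd keeps its existing meaning under the new shape.
func LazyAdd() SchedulerOption {
	return func(o *schedulerOptions) {
		o.lazy = true
	}
}
```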
In preparation for creating a clean interface for the graph store, we
want to hide anything that is DB specific from the exposed methods on
the interface. Currently the `ForEachNodeChannel` and the
`FetchOtherNode` methods of the `KVStore` expose a `kvdb.RTx` parameter
which is bbolt specific. There is only one call-site of
`ForEachNodeChannel` that actually makes use of the passed `kvdb.RTx`
parameter, and that is in the `establishPersistentConnections` method of
the `server` which then passes the tx parameter to `FetchOtherNode`.
So, to clean up the interface such that the `kvdb.RTx` is no longer
exposed, we instead create one new method called
`ForEachSourceNodeChannel` which can be used to replace the above
mentioned call-site. So as of this commit, all the remaining call-sites
of `ForEachNodeChannel` pass in a nil param for `kvdb.RTx` - meaning we
can remove the parameter in a future commit.
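The new method, sketched (the callback shape is an assumption): the peer-node lookup happens inside the method's own read transaction, so no kvdb.RTx leaks through the API:

```go
// ForEachSourceNodeChannel iterates over the source node's channels, handing
// the callback each channel point along with the already-fetched peer node,
// all within a single internal read transaction.
func (c *KVStore) ForEachSourceNodeChannel(cb func(chanPoint wire.OutPoint,
	havePolicy bool, otherNode *models.LightningNode) error) error {

	return kvdb.View(c.db, func(tx kvdb.RTx) error {
		// Look up the source node, then walk its channels, fetching
		// the counterparty node with the same tx before invoking cb.
		// ...
		return nil
	}, func() {})
}
```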
Later on we will create an interface for the persisted graph data. We
want this interface to be as small and as neat as possible. In
preparation for this, we remove this unused `Wipe` method.
We do this in preparation for moving channel cache population logic out
of the constructor and into the Start method. Later on (when topology
subscription is moved to the ChannelGraph), we will also have a
goroutine that will need to be kicked off and stopped.
Here, we move the graph cache writes for AddLightningNode,
DeleteLightningNode, AddChannelEdge and MarkEdgeLive to the
ChannelGraph. Since these are writes, the cache is only updated if the
DB write is successful.
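The write path now follows a simple write-through pattern, roughly (the cache mutation shown is illustrative):

```go
// AddLightningNode persists the node first and only mirrors the change into
// the in-memory graph cache once the DB write has succeeded.
func (c *ChannelGraph) AddLightningNode(node *models.LightningNode) error {
	if err := c.KVStore.AddLightningNode(node); err != nil {
		return err
	}

	if c.graphCache != nil {
		// Exact cache call assumed for illustration.
		c.graphCache.AddNodeFeatures(node.PubKeyBytes, node.Features)
	}

	return nil
}
```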
This commit moves the graph cache checks for FetchNodeFeatures,
ForEachNodeDirectedChannel, GraphSession and ForEachNodeCached from the
KVStore to the ChannelGraph. Since the ChannelGraph is currently just a
pass-through for any of the KVStore methods, all that needs to be done
for calls to go via the ChannelGraph instead of directly to the KVStore is
for the ChannelGraph to go and implement those methods.
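The read path is the mirror image, for example (cache accessor assumed):

```go
// FetchNodeFeatures serves from the graph cache when one is configured and
// otherwise falls straight through to the KVStore.
func (c *ChannelGraph) FetchNodeFeatures(node route.Vertex) (
	*lnwire.FeatureVector, error) {

	if c.graphCache != nil {
		return c.graphCache.GetFeatures(node), nil
	}

	return c.KVStore.FetchNodeFeatures(node)
}
```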
In this commit, we let the ChannelGraph be responsible for populating
the graphCache and then passing it to the KVStore. This is a first step
in moving the graphCache completely out of the KVStore layer.
And use this struct to pass NewChannelGraph anything it needs to be able
to init the KVStore that it houses. This will allow us to add
ChannelGraph specific options.
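Sketched, the constructor now takes a single config struct (the field set shown is illustrative):

```go
// Config bundles everything NewChannelGraph needs in order to build the
// KVStore it houses, and leaves room for ChannelGraph-specific options to be
// added alongside it later.
type Config struct {
	// KVDB is the backend the internal KVStore is built on.
	KVDB kvdb.Backend

	// KVStoreOpts are forwarded to NewKVStore.
	KVStoreOpts []KVStoreOptionModifier
}

func NewChannelGraph(cfg *Config) (*ChannelGraph, error) {
	store, err := NewKVStore(cfg.KVDB, cfg.KVStoreOpts...)
	if err != nil {
		return nil, err
	}

	return &ChannelGraph{KVStore: store}, nil
}
```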
Since we have renamed a file housing some very old code, the linter has
now run on all this code for the first time. So we need to do some
clean-up work here to make it happy.
In this commit, we rename the existing ChannelGraph struct to KVStore to
better reflect its responsibilities as the CRUD layer. We then introduce
a new ChannelGraph struct which will eventually be the layer above the
CRUD layer in which we will handle caching and topology subscriptions.
For now, however, it houses only the KVStore. This means that all calls
to the KVStore will now go through this layer of indirection first. This
will allow us to slowly move the graph Cache management out of the
KVStore and into the new ChannelGraph layer.
We introduce the new ChannelGraph and rename the old one in the same
commit so that all existing call-sites don't need to change at all :)
Rename it to kv_store.go so that we can re-use the graph.go file name
later on. We will use it to house the _new_ ChannelGraph when the
existing ChannelGraph is renamed to more clearly reflect its
responsibilities as the CRUD layer.