Finally, we make the channel-policy part of the SQL migration idempotent
by adding a migration-only policy insert query which will not error out
if the policy already exists and its timestamp is not newer than the
existing record's timestamp. To keep the commit simple, an
insertChanEdgePolicyMig function is added which is essentially identical
to the updateChanEdgePolicy function except that it uses the newly
added query. In the next commit, it will be simplified even more.
In this commit, we make the channel part of the graph SQL migration
idempotent (retry-safe!). We do this by adding a migration-only channel
insert query that will not error out if the query is called and a
channel with the given SCID & version already exists. We also ensure
that errors are not thrown if existing channel features & extra types
are re-added.
There is no need to use the "collect-then-update" pattern for node
insertion during the SQL migration since, if we do have any previously
persisted data for a node and happen to re-run the insertion for that
node, the data will be exactly the same. So we can make use of "On
conflict, do nothing" here too.
In this commit, the graph SQL migration is updated so that the node
migration step is retry-safe. This is done by using migration-specific
logic & queries that do not use the same node-update constraint as the
normal node upsert logic. For normal "run-time" logic, we always expect
a node update to have a newer timestamp than any previously stored one.
But for the migration, we will only ever be dealing with a single
announcement for a given node &, to make things retry-safe, we don't
want the query to error if we re-insert the exact same node.
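To make the contrast concrete, here is a minimal sketch of the two
query shapes (the table, columns, and conflict target are illustrative
assumptions, not the exact schema):

```sql
-- Run-time upsert (sketch): only ever apply strictly newer updates.
INSERT INTO nodes (pub_key, version, last_update)
VALUES ($1, $2, $3)
ON CONFLICT (pub_key, version) DO UPDATE
SET last_update = EXCLUDED.last_update
WHERE EXCLUDED.last_update > nodes.last_update;

-- Migration-only insert (sketch): re-inserting the exact same node on
-- a retry is a silent no-op rather than an error.
INSERT INTO nodes (pub_key, version, last_update)
VALUES ($1, $2, $3)
ON CONFLICT (pub_key, version) DO NOTHING;
```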
Finally, we update the migrateZombieIndex function to use batch
validation, just as was done in the previous commits. Here, we
additionally make sure to validate the entire zombie index entry and
not just the SCID.
As was done in the previous commits for nodes & channels, we update the
migrateClosedSCIDIndex function here so that it validates migrated
entries in batches rather than one-by-one.
As was done in the previous commits for nodes & channels, we update the
migratePruneLog function here so that it validates migrated entries in
batches rather than one-by-one.
Restructure the `migrateChannelsAndPolicies` function so that it does
the validation of migrated channels and policies in batches. So instead
of fetching each channel and its policies individually after migrating
it, we wait for a minimum batch size to be reached and then validate a
batch of them together. This lets us make way fewer DB round trips.
Restructure the `migrateNodes` function so that it does the validation
of migrated nodes in batches. So instead of fetching each node
individually after migrating it, we wait for a minimum batch size to be
reached and then validate a batch of nodes together. This lets us make
way fewer DB round trips.
In this commit, we add the queries that will be needed to batch-fetch
the data of a set of nodes. The logic for using these new queries is
also added but not used yet.
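As a sketch, such a batch-fetch query can use the `/*SLICE:...*/`
workaround so that a whole batch of nodes is fetched in one round trip
(the query name and columns here are assumptions, not the actual
queries added):

```sql
-- name: GetNodesByIDs :many
-- Fetch a whole batch of nodes in a single round trip rather than
-- issuing one query per node. Illustrative names only.
SELECT *
FROM nodes
WHERE id IN (/*SLICE:ids*/?);
```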
In this commit, we remove the LEFT JOIN query that was used for
fetching a node's addresses. The reason it was used before was to
ensure that we'd get an empty address list if the node did exist but
had no addresses. This was for the purposes of the `AddrsForNode`
method, since it needs to return true/false to indicate whether the
given node exists.
Use the new `SLICE` directive to add a DeleteChannels query which takes
a set of DB channel IDs. Then replace all our calls to DeleteChannel
with a paginated call to DeleteChannels.
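A sketch of what that query can look like (the table and column names
are assumptions; only the DeleteChannels name comes from the change
itself):

```sql
-- name: DeleteChannels :exec
-- Delete a whole page of channels by their DB-level IDs in one call.
DELETE FROM channels
WHERE id IN (/*SLICE:ids*/?);
```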
This commit adds a new GetChannelsByOutpoints query which takes a slice
of outpoint strings. This then lets us update PruneGraph to use
paginated calls to GetChannelsByOutpoints instead of making one DB call
per outpoint.
Here, a new query (GetChannelsByOutpoints) is added which makes use of
the /*SLICE:outpoints*/ directive & the accompanying workaround. This
is then used in a test to demonstrate how the ExecutePagedQuery helper
can be used to wrap a query like this such that calls are done in
pages. The query that has been added will also be used by live code
paths in an upcoming commit.
We need to explicitly store the entire bitfields since we may have
channel_updates with bitfields containing bits we just don't need or
understand, but we still need to store the full bitfield so that the
reconstructed announcement remains valid.
This commit only adds the new columns but does not use them yet. NOTE:
this is ok since the migration adding this schema is not available in
the production build yet.
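A sketch of the shape of this change (the table and column names are
assumptions, not the exact migration):

```sql
-- Persist the raw flag bitfields verbatim so that unknown bits survive
-- a round trip through the DB. Illustrative names only.
ALTER TABLE channel_policies ADD COLUMN message_flags SMALLINT;
ALTER TABLE channel_policies ADD COLUMN channel_flags SMALLINT;
```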
This lets us run `TestNodeIsPublic` against our SQL DB backends.
Note that we need to tweak the tests a little bit so that
`AddLightningNode` for the same node is always called with a newer
LastUpdate time else it will fail the SQL constraint that only allows
the upsert if the update is newer than the persisted one.
Here we implement the SQLStore methods:
- MarkEdgeZombie
- MarkEdgeLive
- IsZombieEdge
- NumZombies
These will be tested in the next commit, as one more method
implementation is required first.
Here we add the `ForEachNodeDirectedChannel` and `ForEachNodeCacheable`
SQLStore implementations which then lets us run
`TestGraphTraversalCacheable` and `TestGraphCacheForEachNodeChannel`
against SQL backends.
In this commit, the ForEachSourceNodeChannel implementation of the
SQLStore is added. Since this is the first method of the SQLStore that
fetches channel and policy info, it also adds all the helpers that are
required to do so. These will be re-used in upcoming commits as more
"For"-type methods are added.
With this implementation, we convert `TestForEachSourceNodeChannel` so
that it runs against SQL backends.
In this commit, we define the various SQL queries that we will need in
order to implement the SQLStore UpdateEdgePolicy method. Channel
policies can be "replaced", and so we use the upsert pattern for them
with the rule that any new channel policy must have a timestamp greater
than the previous one we persisted.
As is done in the KVStore implementation, we use the batch scheduler
for this method.
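For illustration, the core upsert can be expressed as a single
sqlc query of roughly this shape (the query name, columns, and conflict
target are assumptions):

```sql
-- name: UpsertEdgePolicy :exec
-- Insert a policy, or replace the stored one only if the incoming
-- policy carries a strictly newer timestamp. Illustrative names only.
INSERT INTO channel_policies (channel_id, node_id, version, last_update)
VALUES ($1, $2, $3, $4)
ON CONFLICT (channel_id, node_id, version) DO UPDATE
SET last_update = EXCLUDED.last_update
WHERE EXCLUDED.last_update > channel_policies.last_update;
```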
In this commit, the `AddChannelEdge` method of the SQLStore is
implemented. Like the KVStore implementation, it makes use of the
available channel `batch.Scheduler` and also updates the reject and
channel caches.
This then lets us convert the following 2 unit tests to run against the
SQL backends:
- TestPartialNode
- TestAddChannelEdgeShellNodes
In this commit, we add the `source_nodes` table. It points to entries in
the `nodes` table. This table will store one entry per protocol version
that we are announcing a node_announcement on.
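A minimal sketch of such a table (illustrative only; the real
definition may differ):

```sql
-- One row per protocol version on which we announce ourselves, each
-- pointing at the corresponding entry in the nodes table.
CREATE TABLE IF NOT EXISTS source_nodes (
    node_id BIGINT PRIMARY KEY REFERENCES nodes (id)
);
```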
With this commit, we can run the TestSourceNode unit test against our
SQL backends.
In this commit, we add the necessary SQL queries and then implement the
SQLStore's NodeUpdatesInHorizon method. This lets us run the
TestNodeUpdatesInHorizon unit test against SQL backends.
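For illustration, the backing query can be a simple range scan of
roughly this shape (the query name, column, and parameter names are
assumptions):

```sql
-- name: GetNodesByLastUpdateRange :many
-- Fetch all nodes whose last update falls within the given horizon.
SELECT *
FROM nodes
WHERE last_update >= @start_time
  AND last_update <= @end_time;
```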
In this commit, we add the various sqlc queries that we need in order
to implement the following V1Store methods:
- AddLightningNode
- FetchLightningNode
- HasLightningNode
- AddrsForNode
- DeleteLightningNode
- FetchNodeFeatures
These are implemented by SQLStore which then lets us use the SQLStore
backend for the following unit tests:
- TestNodeInsertionAndDeletion
- TestLightningNodePersistence
In this commit, the various SQL schemas required to store graph
node-related data are defined. Specifically, the following tables are
added:
- nodes
- node_extra_types
- node_features
- node_addresses
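A condensed sketch of how these tables can hang together (illustrative
columns only; the real schema carries more fields, indexes, and
constraints):

```sql
-- One row per (pub_key, protocol version) node announcement.
CREATE TABLE IF NOT EXISTS nodes (
    id BIGINT PRIMARY KEY,
    version SMALLINT NOT NULL,
    pub_key BYTEA NOT NULL,
    alias TEXT,
    last_update BIGINT,
    UNIQUE (pub_key, version)
);

-- Feature bits advertised by a node. node_extra_types and
-- node_addresses follow the same child-table pattern.
CREATE TABLE IF NOT EXISTS node_features (
    node_id BIGINT NOT NULL REFERENCES nodes (id) ON DELETE CASCADE,
    feature_bit INTEGER NOT NULL,
    UNIQUE (node_id, feature_bit)
);
```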