We promote this new schema update to the main line and bump the versions
of the schema updates that are currently only available in dev builds.
Since schema migrations must be applied in chronological order, we also
need to renumber the files accordingly.
Finally, we update the migrateZombieIndex function to use batch
validation, just as was done in the previous commits. Here, we
additionally make sure to validate the entire zombie index entry and not
just the SCID.
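The following is a minimal sketch of the batched validation step, with hypothetical type and query names (the real batch query would be a sqlc-generated lookup by SCID set): the whole batch is fetched in one round trip and each full entry, not only its SCID, is compared against what was migrated.

```go
package migration

import (
	"bytes"
	"context"
	"fmt"
)

// zombieEntry is a hypothetical stand-in for a migrated zombie index entry.
type zombieEntry struct {
	scid    uint64
	pubKey1 [33]byte
	pubKey2 [33]byte
}

// fetchZombiesFunc stands in for a generated batch query; its signature is
// an assumption for illustration.
type fetchZombiesFunc func(ctx context.Context,
	scids []uint64) (map[uint64]zombieEntry, error)

// validateZombieBatch fetches all entries of the batch at once and checks
// that the entire entry (not just the SCID) round-tripped correctly.
func validateZombieBatch(ctx context.Context, fetch fetchZombiesFunc,
	batch []zombieEntry) error {

	scids := make([]uint64, 0, len(batch))
	for _, e := range batch {
		scids = append(scids, e.scid)
	}

	// One DB round trip covers the whole batch.
	got, err := fetch(ctx, scids)
	if err != nil {
		return err
	}

	for _, want := range batch {
		have, ok := got[want.scid]
		if !ok {
			return fmt.Errorf("zombie entry %d missing after "+
				"migration", want.scid)
		}

		// Compare the full entry, not only the SCID.
		if !bytes.Equal(have.pubKey1[:], want.pubKey1[:]) ||
			!bytes.Equal(have.pubKey2[:], want.pubKey2[:]) {

			return fmt.Errorf("zombie entry %d mismatch after "+
				"migration", want.scid)
		}
	}

	return nil
}
```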
As was done in the previous commits for nodes & channels, we update the
migrateClosedSCIDIndex function here so that it validates migrated
entries in batches rather than one-by-one.
As was done in the previous commits for nodes & channels, we update the
migratePruneLog function here so that it validates migrated entries in
batches rather than one-by-one.
Restructure the `migrateChannelsAndPolicies` function so that it
validates migrated channels and policies in batches. Instead of fetching
each channel and its policies individually after migrating it, we wait
for a minimum batch size to be reached and then validate the batch
together. This lets us make far fewer DB round trips.
Restructure the `migrateNodes` function so that it validates migrated
nodes in batches. Instead of fetching each node individually after
migrating it, we wait for a minimum batch size to be reached and then
validate a batch of nodes together. This lets us make far fewer DB round
trips.
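A minimal sketch of the accumulate-and-flush structure shared by these batched migrations, shown here for nodes; the names and the batch-size constant are illustrative, not the codebase's actual ones.

```go
package migration

import "context"

// minBatchSize is illustrative; the real threshold is a tuning choice.
const minBatchSize = 1000

// node is a hypothetical stand-in for a migrated node record.
type node struct {
	id int64
}

// migrateNodesInBatches migrates each node but defers validation until a
// full batch has accumulated, flushing any remainder at the end. This
// replaces one fetch per node with one fetch per batch.
func migrateNodesInBatches(ctx context.Context, nodes []node,
	migrate func(context.Context, node) error,
	validateBatch func(context.Context, []node) error) error {

	batch := make([]node, 0, minBatchSize)
	for _, n := range nodes {
		if err := migrate(ctx, n); err != nil {
			return err
		}

		batch = append(batch, n)
		if len(batch) < minBatchSize {
			continue
		}

		// One DB round trip validates the whole batch.
		if err := validateBatch(ctx, batch); err != nil {
			return err
		}
		batch = batch[:0]
	}

	// Flush the final partial batch, if any.
	if len(batch) > 0 {
		return validateBatch(ctx, batch)
	}

	return nil
}
```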
The following performance gains were measured using the new benchmark
test.
```
name                                      old time/op  new time/op  delta
ChanUpdatesInHorizon-native-sqlite-10       18.5s ± 3%   2.0s ± 5%  -89.11%  (p=0.000 n=9+9)
ChanUpdatesInHorizon-native-postgres-10     59.0s ± 3%   0.8s ±10%  -98.65%  (p=0.000 n=10+9)
```
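For context, a benchmark of this shape could look roughly like the sketch below; the `fakeStore` stand-in only keeps the sketch self-contained, and the real benchmark would run `ChanUpdatesInHorizon` against a populated SQL-backed store.

```go
package graphdb

import (
	"testing"
	"time"
)

// horizonStore abstracts the one method under test; the concrete store
// type and its exact return type are assumptions here.
type horizonStore interface {
	ChanUpdatesInHorizon(start, end time.Time) ([]struct{}, error)
}

// fakeStore keeps the sketch compilable; swap in the real store, seeded
// with channel updates, to reproduce measurements like the ones above.
type fakeStore struct{}

func (fakeStore) ChanUpdatesInHorizon(start,
	end time.Time) ([]struct{}, error) {

	return nil, nil
}

func BenchmarkChanUpdatesInHorizon(b *testing.B) {
	var store horizonStore = fakeStore{}

	start := time.Unix(0, 0)
	end := time.Now()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := store.ChanUpdatesInHorizon(start, end); err != nil {
			b.Fatal(err)
		}
	}
}
```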
In this commit, we add the queries that will be needed to batch-fetch
the data for a set of nodes. The logic that uses these new queries is
also added, but it is not wired up yet.
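A minimal sketch of the batch-loading idea, using hypothetical sqlc-style signatures: instead of issuing one query per node, a single query takes the whole set of node IDs and the rows are grouped by node in memory.

```go
package graphdb

import "context"

// nodeAddress pairs a node ID with one of its addresses; both the type
// and the batch query below are assumptions for illustration.
type nodeAddress struct {
	NodeID  int64
	Address string
}

type nodeBatchQueries interface {
	// GetNodeAddressesByIDs stands in for a generated batch query such
	// as: SELECT node_id, address FROM node_addresses
	//     WHERE node_id IN (/*SLICE:ids*/?).
	GetNodeAddressesByIDs(ctx context.Context,
		ids []int64) ([]nodeAddress, error)
}

// batchLoadAddresses fetches the addresses for a whole set of nodes in a
// single round trip and groups the results by node ID.
func batchLoadAddresses(ctx context.Context, q nodeBatchQueries,
	ids []int64) (map[int64][]string, error) {

	rows, err := q.GetNodeAddressesByIDs(ctx, ids)
	if err != nil {
		return nil, err
	}

	byNode := make(map[int64][]string, len(ids))
	for _, row := range rows {
		byNode[row.NodeID] = append(byNode[row.NodeID], row.Address)
	}

	return byNode, nil
}
```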
In this commit, we remove the LEFT JOIN query that was used for fetching
a node's addresses. It was previously used to ensure that we would get
an empty address list if the node existed but had no addresses. This
mattered for the `AddrsForNode` method, since it needs to return a
boolean indicating whether the given node exists.
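A sketch of the two-step replacement for the LEFT JOIN, with hypothetical query names: existence is established by fetching the node itself, after which an empty address list is a perfectly valid answer.

```go
package graphdb

import (
	"context"
	"database/sql"
	"errors"
)

type nodeQueries interface {
	GetNodeIDByPubKey(ctx context.Context, pubKey []byte) (int64, error)
	GetNodeAddresses(ctx context.Context, nodeID int64) ([]string, error)
}

// AddrsForNode returns the node's addresses along with a boolean
// indicating whether the node exists at all.
func AddrsForNode(ctx context.Context, q nodeQueries,
	pubKey []byte) (bool, []string, error) {

	nodeID, err := q.GetNodeIDByPubKey(ctx, pubKey)
	switch {
	case errors.Is(err, sql.ErrNoRows):
		// Unknown node: report non-existence rather than an error.
		return false, nil, nil

	case err != nil:
		return false, nil, err
	}

	addrs, err := q.GetNodeAddresses(ctx, nodeID)
	if err != nil {
		return false, nil, err
	}

	// The node exists even if it has no addresses.
	return true, addrs, nil
}
```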
Use the new `SLICE` directive to add a DeleteChannels query that takes a
set of DB channel IDs. Then, replace all our calls to DeleteChannel with
paginated calls to DeleteChannels.
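A minimal sketch of the paginated delete, assuming a hypothetical page size and a sqlc-generated DeleteChannels that accepts a slice of channel IDs via the `/*SLICE:ids*/` directive.

```go
package graphdb

import "context"

// pageSize is illustrative; the real limit is a tuning choice bounded by
// the backend's parameter limits.
const pageSize = 500

type channelQueries interface {
	DeleteChannels(ctx context.Context, ids []int64) error
}

// deleteChannelsPaged replaces N calls to DeleteChannel with
// ceil(N/pageSize) calls to DeleteChannels.
func deleteChannelsPaged(ctx context.Context, q channelQueries,
	ids []int64) error {

	for len(ids) > 0 {
		page := ids
		if len(page) > pageSize {
			page = page[:pageSize]
		}

		if err := q.DeleteChannels(ctx, page); err != nil {
			return err
		}

		ids = ids[len(page):]
	}

	return nil
}
```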
This commit adds a new GetChannelsByOutpoints query which takes a slice
of outpoint strings. This lets us update PruneGraph to make paginated
calls to GetChannelsByOutpoints instead of making one DB call per
outpoint.
Here, a new query (GetChannelsByOutpoints) is added which makes use of
the /*SLICE:outpoints*/ directive and its accompanying workaround. This
is then used in a test to demonstrate how the ExecutePagedQuery helper
can wrap such a query so that calls are executed in pages. The query
added here will also be used by live code paths in an upcoming commit.
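The following sketches how a slice-based query such as GetChannelsByOutpoints might be wrapped so that calls happen in pages; the generic signature shown here is an assumption, not the actual API of the ExecutePagedQuery helper.

```go
package graphdb

import "context"

// executePagedQuery is a hypothetical stand-in for the ExecutePagedQuery
// helper: it splits the inputs into pages, invokes the slice-based query
// once per page, and concatenates the results.
func executePagedQuery[I, R any](ctx context.Context, pageSize int,
	inputs []I,
	query func(context.Context, []I) ([]R, error)) ([]R, error) {

	var results []R
	for len(inputs) > 0 {
		page := inputs
		if len(page) > pageSize {
			page = page[:pageSize]
		}

		res, err := query(ctx, page)
		if err != nil {
			return nil, err
		}
		results = append(results, res...)

		inputs = inputs[len(page):]
	}

	return results, nil
}
```

A caller would then pass the outpoint strings and the generated query directly, e.g. `executePagedQuery(ctx, 500, outpoints, q.GetChannelsByOutpoints)` under these assumed signatures.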
We need to explicitly store the entire bitfields, since we may receive
channel_updates whose bitfields contain bits we do not need or
understand; we must still store the full bitfield so that the
reconstructed announcement remains valid.
This commit only adds the new columns but does not use them yet. NOTE:
this is ok since the migration adding this schema is not available in
the production build yet.
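A sketch of the idea behind the new columns, with hypothetical struct and column names: the raw bitfields are persisted byte-for-byte so a reconstructed channel_update stays covered by its original signature, while known bits can still be read out for our own use.

```go
package graphdb

// channelUpdateRow models the new columns. Storing only the bits we
// understand (e.g. the "disabled" bit) would silently drop unknown bits
// and invalidate the reconstructed announcement's signature.
type channelUpdateRow struct {
	// MessageFlags and ChannelFlags hold the complete bitfields exactly
	// as received on the wire, including bits we do not interpret.
	MessageFlags uint8
	ChannelFlags uint8
}

// disabled extracts one known bit for local use; the full bitfields stay
// intact for reconstruction.
func (r channelUpdateRow) disabled() bool {
	// Bit 1 of channel_flags is the disable bit per BOLT 7.
	return r.ChannelFlags&(1<<1) != 0
}
```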
This lets us run `TestNodeIsPublic` against our SQL DB backends.
Note that we need to tweak the tests a little so that
`AddLightningNode` for the same node is always called with a newer
LastUpdate time; otherwise it will violate the SQL constraint that only
allows the upsert if the update is newer than the persisted one.
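A tiny sketch of the test tweak, with a hypothetical node type: each repeated `AddLightningNode` call for the same node bumps LastUpdate so that the "only upsert if newer" constraint is satisfied.

```go
package graphdb

import "time"

// lightningNode is a stand-in for the node announcement type used in the
// tests.
type lightningNode struct {
	LastUpdate time.Time
}

// nextUpdate returns a copy of the node with a strictly newer LastUpdate,
// suitable for a repeated upsert in tests.
func nextUpdate(n lightningNode) lightningNode {
	n.LastUpdate = n.LastUpdate.Add(time.Second)
	return n
}
```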
Here we implement the SQLStore methods:
- MarkEdgeZombie
- MarkEdgeLive
- IsZombieEdge
- NumZombies
These will be tested in the next commit as one more method
implementation is required.
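A minimal sketch of two of these methods, assuming hypothetical sqlc-generated queries against the zombie table; MarkEdgeLive and NumZombies would follow the same shape.

```go
package graphdb

import (
	"context"
	"database/sql"
	"errors"
)

// zombieRow is a hypothetical stand-in for a row of the zombie table.
type zombieRow struct {
	NodeKey1 []byte
	NodeKey2 []byte
}

type zombieQueries interface {
	UpsertZombieChannel(ctx context.Context, scid []byte,
		nodeKey1, nodeKey2 []byte) error
	GetZombieChannel(ctx context.Context, scid []byte) (zombieRow, error)
}

// MarkEdgeZombie records the channel as a zombie together with the node
// keys, so a later resurrection check knows which nodes may revive it.
func MarkEdgeZombie(ctx context.Context, q zombieQueries, scid []byte,
	nodeKey1, nodeKey2 []byte) error {

	return q.UpsertZombieChannel(ctx, scid, nodeKey1, nodeKey2)
}

// IsZombieEdge reports whether the channel is a known zombie; a missing
// row simply means "not a zombie" rather than an error.
func IsZombieEdge(ctx context.Context, q zombieQueries,
	scid []byte) (bool, zombieRow, error) {

	row, err := q.GetZombieChannel(ctx, scid)
	switch {
	case errors.Is(err, sql.ErrNoRows):
		return false, zombieRow{}, nil

	case err != nil:
		return false, zombieRow{}, err
	}

	return true, row, nil
}
```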
Note that this table will only contain entries for channels that we have
deleted from the `channels` table, which is why we cannot use foreign
keys. Similarly, we may no longer have entries in the nodes table for
the nodes referenced here.
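For illustration, the table might look roughly like the sketch below (column names are hypothetical, not the actual schema): because the referenced channel and node rows may already be gone, the node keys are denormalized into the table instead of being foreign keys.

```go
package sqldb

// createZombieChannelsTable is an illustrative sketch, not the real
// migration SQL.
const createZombieChannelsTable = `
CREATE TABLE IF NOT EXISTS zombie_channels (
    scid BLOB NOT NULL,
    version SMALLINT NOT NULL,
    -- Node keys are stored inline rather than as foreign keys: the
    -- corresponding channel row is already deleted and the node rows may
    -- no longer exist either.
    node_key_1 BLOB,
    node_key_2 BLOB,
    PRIMARY KEY (scid, version)
);`
```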