We need to explicitly store the entire bitfields since we may receive
channel_updates whose bitfields contain bits we don't need or
understand. We must still persist the complete bitfield so that the
reconstructed announcement remains valid.
This commit only adds the new columns but does not use them yet. NOTE:
this is ok since the migration adding this schema is not available in
the production build yet.
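For illustration only, the kind of columns this refers to might look like the following (the table and column names here are assumptions, not the actual schema):

```sql
-- Store the raw bitfield values from the channel_update verbatim instead
-- of decoding them into individual boolean columns, so the original
-- message can be reassembled bit-for-bit.
ALTER TABLE channel_policies ADD COLUMN message_flags SMALLINT;
ALTER TABLE channel_policies ADD COLUMN channel_flags SMALLINT;
```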
This lets us run `TestNodeIsPublic` against our SQL DB backends.
Note that we need to tweak the tests slightly so that
`AddLightningNode` for the same node is always called with a newer
LastUpdate time, otherwise it would violate the SQL constraint that only
allows an upsert if the update is newer than the persisted one.
Here we implement the SQLStore methods:
- MarkEdgeZombie
- MarkEdgeLive
- IsZombieEdge
- NumZombies
These will be tested in the next commit, since one more method
implementation is required first.
Note that this table will only contain entries for channels that we have
deleted from the `channels` table, which is why we cannot use foreign
keys. Similarly, we may no longer have entries in the `nodes` table for
the nodes referenced here.
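A minimal sketch of such a zombie index table (names and types are assumptions) might look like this:

```sql
-- Note the absence of FOREIGN KEY constraints: the referenced channel row
-- has already been deleted from `channels`, and the corresponding node
-- rows may be gone as well.
CREATE TABLE IF NOT EXISTS zombie_channels (
    scid BLOB NOT NULL,
    version SMALLINT NOT NULL,
    node_key_1 BLOB,
    node_key_2 BLOB,
    PRIMARY KEY (scid, version)
);
```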
Here we add the `ForEachNodeDirectedChannel` and `ForEachNodeCacheable`
SQLStore implementations which then lets us run
`TestGraphTraversalCacheable` and `TestGraphCacheForEachNodeChannel`
against SQL backends.
In this commit, the ForEachSourceNodeChannel implementation of the
SQLStore is added. Since this is the first SQLStore method that fetches
channel and policy info, the commit also adds all the helpers required
to do so. These will be re-used in upcoming commits as more "For"-type
methods are added.
With this implementation, `TestForEachSourceNodeChannel` is converted so
that it also runs against SQL backends.
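As a rough illustration of the kind of query the helpers issue (table and column names are assumptions):

```sql
-- Fetch every channel the source node participates in; the policy rows
-- for each returned channel can then be loaded by the shared helpers.
SELECT c.*
FROM channels AS c
JOIN source_nodes AS s
    ON s.node_id IN (c.node_id_1, c.node_id_2);
```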
In this commit, we define the various SQL queries needed to implement
the SQLStore UpdateEdgePolicy method. Channel policies can be
"replaced", so we use the upsert pattern for them with the rule that any
new channel policy must have a timestamp greater than the previous one
we persisted.
As in the KVStore implementation of the method, we use the batch
scheduler here.
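A hedged sketch of the upsert pattern (table and column names are assumptions, not the actual query):

```sql
-- The UPDATE branch only fires when the incoming policy is strictly
-- newer than the stored one; otherwise the statement is a no-op.
INSERT INTO channel_policies (
    channel_id, node_id, timelock, fee_ppm, last_update
) VALUES (
    $1, $2, $3, $4, $5
)
ON CONFLICT (channel_id, node_id) DO UPDATE SET
    timelock    = EXCLUDED.timelock,
    fee_ppm     = EXCLUDED.fee_ppm,
    last_update = EXCLUDED.last_update
WHERE EXCLUDED.last_update > channel_policies.last_update;
```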
In this commit, the `AddChannelEdge` method of the SQLStore is
implemented. Like the KVStore implementation, it makes use of the
available channel `batch.Scheduler` and also updates the reject and
channel caches.
This then lets us convert the following 2 unit tests to run against the
SQL backends:
- TestPartialNode
- TestAddChannelEdgeShellNodes
In this commit, we define the SQL schemas for storing graph channel
data. This includes a new `channels` table and a new `channel_features`
table along with various indices.
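A hedged sketch of the general shape of these tables (names, types and indices are assumptions; the actual schema may differ):

```sql
-- One row per channel, unique per (scid, version), with both endpoints
-- referencing the nodes table.
CREATE TABLE IF NOT EXISTS channels (
    id INTEGER PRIMARY KEY,
    version SMALLINT NOT NULL,
    scid BLOB NOT NULL,
    node_id_1 BIGINT NOT NULL REFERENCES nodes (id),
    node_id_2 BIGINT NOT NULL REFERENCES nodes (id),
    capacity BIGINT
);
CREATE UNIQUE INDEX IF NOT EXISTS channels_unique
    ON channels (scid, version);

-- One row per feature bit advertised for a channel.
CREATE TABLE IF NOT EXISTS channel_features (
    channel_id BIGINT NOT NULL REFERENCES channels (id) ON DELETE CASCADE,
    feature_bit INT NOT NULL,
    PRIMARY KEY (channel_id, feature_bit)
);
```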
In this commit, we add the `source_nodes` table. It points to entries in
the `nodes` table and will store one entry per protocol version on which
we are announcing a node_announcement.
With this commit, we can run the TestSourceNode unit test against our
SQL backends.
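A minimal sketch of the table (names are assumptions):

```sql
-- Each row references a row in `nodes`; since `nodes` itself is keyed
-- per protocol version, this yields one source-node entry per version we
-- announce on.
CREATE TABLE IF NOT EXISTS source_nodes (
    node_id BIGINT NOT NULL REFERENCES nodes (id) ON DELETE CASCADE,
    PRIMARY KEY (node_id)
);
```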
In this commit we add the necessary SQL queries and then implement the
SQLStore's NodeUpdatesInHorizon method. This lets us run the
TestNodeUpdatesInHorizon unit tests against SQL backends.
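An illustrative horizon query (query and column names are assumptions); sqlc turns this into a typed Go method the SQLStore can call:

```sql
-- Return all nodes whose last_update falls within the given horizon,
-- ordered by update time.
-- name: GetNodesByLastUpdateRange :many
SELECT *
FROM nodes
WHERE last_update >= sqlc.arg(start_time)
    AND last_update <= sqlc.arg(end_time)
ORDER BY last_update;
```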
In this commit, we add the various sqlc queries that we need in order
to implement the following V1Store methods:
- AddLightningNode
- FetchLightningNode
- HasLightningNode
- AddrsForNode
- DeleteLightningNode
- FetchNodeFeatures
These are implemented by SQLStore which then lets us use the SQLStore
backend for the following unit tests:
- TestNodeInsertionAndDeletion
- TestLightningNodePersistence
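Two illustrative sqlc queries of the kind added here (query names and columns are assumptions, not the exact definitions):

```sql
-- Fetch a single node row by its public key and protocol version.
-- name: GetNodeByPubKey :one
SELECT *
FROM nodes
WHERE pub_key = $1 AND version = $2;

-- Fetch all feature bits stored for a node.
-- name: GetNodeFeatures :many
SELECT feature_bit
FROM node_features
WHERE node_id = $1;
```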
In this commit, the various SQL schemas required to store graph node
related data are defined. Specifically, the following tables are defined:
- nodes
- node_extra_types
- node_features
- node_addresses
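A hedged sketch of the general shape of some of these tables (names and types are assumptions; the actual schema may differ, and `node_extra_types` follows the same `node_id`-keyed pattern):

```sql
-- One row per node per protocol version.
CREATE TABLE IF NOT EXISTS nodes (
    id INTEGER PRIMARY KEY,
    version SMALLINT NOT NULL,
    pub_key BLOB NOT NULL,
    alias TEXT,
    last_update BIGINT
);
CREATE UNIQUE INDEX IF NOT EXISTS nodes_unique ON nodes (pub_key, version);

-- One row per advertised feature bit.
CREATE TABLE IF NOT EXISTS node_features (
    node_id BIGINT NOT NULL REFERENCES nodes (id) ON DELETE CASCADE,
    feature_bit INT NOT NULL,
    PRIMARY KEY (node_id, feature_bit)
);

-- One row per advertised address, ordered by position within a type.
CREATE TABLE IF NOT EXISTS node_addresses (
    node_id BIGINT NOT NULL REFERENCES nodes (id) ON DELETE CASCADE,
    type SMALLINT NOT NULL,
    position INT NOT NULL,
    address TEXT NOT NULL,
    PRIMARY KEY (node_id, type, position)
);
```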
Previously, we applied replacements to our schema definitions
to make them compatible with both SQLite and Postgres backends,
as the files were not fully compatible with either.
With this change, the only replacement required for SQLite has
been moved to the generator script. This adjustment ensures
compatibility by enabling auto-incrementing primary keys that
are treated as 64-bit integers by sqlc.
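To illustrate the dialect difference being handled (the exact strings the generator script rewrites are an assumption here):

```sql
-- Postgres flavour: a 64-bit auto-incrementing primary key.
CREATE TABLE example (
    id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY
);

-- SQLite flavour after the replacement: INTEGER PRIMARY KEY aliases the
-- 64-bit rowid and auto-increments, and sqlc maps it to int64 on both
-- backends.
CREATE TABLE example (
    id INTEGER PRIMARY KEY
);
```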
The current sqlc GetInvoice query experiences incremental slowdowns during
the migration of large invoice databases, primarily due to its complex
predicate set. For this specific use case, a streamlined GetInvoiceByHash
function provides a more efficient solution, maintaining near-constant
lookup times even with extensive table sizes.
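A minimal sketch of such a query (query name and column are assumptions):

```sql
-- A single indexed equality predicate keeps the lookup close to constant
-- time regardless of table size.
-- name: GetInvoiceByHash :one
SELECT *
FROM invoices
WHERE hash = $1;
```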
Previously we intentionally did not set settled_at and settle_index when
inserting a new invoice, as those fields are set when we settle an
invoice through the usual invoice update. As the migration requires that
we set these nullable fields, we can safely add them.
Previously, the SQL implementation of the invoice query simply
converted the start and end timestamps to time and used them
in SQL queries to check for inclusivity. However, this logic
failed when the start and end timestamps were equal.
This commit corrects that issue.
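An illustrative version of the inclusive-bounds pattern (parameter and column names are assumptions):

```sql
-- Inclusive bounds with explicit NULL checks: an unset bound matches
-- everything, and start == end still matches rows created at exactly
-- that timestamp.
SELECT *
FROM invoices
WHERE (
    created_at >= sqlc.narg('created_after') OR
    sqlc.narg('created_after') IS NULL
) AND (
    created_at <= sqlc.narg('created_before') OR
    sqlc.narg('created_before') IS NULL
);
```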
Previously, the SQL invoice updater ignored the set ID hint when
updating an AMP invoice, resulting in update subscriptions returning all
of the AMP state as well as all AMP HTLCs. This commit synchronizes the
behavior with the KV implementation such that we now only return the
relevant AMP state and HTLCs when updating an AMP invoice.
Previously, when using the native schema, invoice expiries were incorrectly
stored as 64-bit values (expiry in nanoseconds instead of seconds), causing
overflow issues. Since we cannot determine the original values, we will set
the expiries for existing invoices to 1 hour with this migration.
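A hedged sketch of such a migration statement (table and column names are assumptions):

```sql
-- The original second values cannot be recovered from the overflowed
-- nanosecond figures, so every existing row is reset to a one-hour
-- expiry.
UPDATE invoices SET expiry = 3600;
```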
Previously, if the `reverse` named arg was unset (value of NULL), then
SQL would order by NULL instead of ID, causing undefined ordering of the
returned rows. To fix that, we check for NULL and also make sure to set
the `reverse` arg explicitly in the code, since in the generated code it
is an `interface{}` instead of a `bool`.
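An illustrative ordering clause (names are assumptions):

```sql
-- Treat an unset `reverse` as FALSE and order by ID in both branches so
-- the returned order stays well defined.
SELECT *
FROM invoices
ORDER BY
    CASE WHEN COALESCE(sqlc.narg('reverse'), FALSE) = FALSE
        THEN id
    END ASC,
    CASE WHEN COALESCE(sqlc.narg('reverse'), FALSE) = TRUE
        THEN id
    END DESC;
```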
This commit attempts to fix some issues with the invoice store's schema that we
couldn't foresee before the implementation was finished. This is safe as the
schema has not been instantiated yet outside of unit tests. Furthermore, the
commit updates the invoice store SQL queries according to the schema fixes and
to prepare for the higher-level implementation in the upcoming commits.