In this commit, the ForEachSourceNodeChannel implementation of the
SQLStore is added. Since this is the first SQLStore method that
fetches channel and policy info, it also adds all the helpers
required to do so. These will be re-used in upcoming commits as more
"For"-type methods are added.

With this implementation, we convert the `TestForEachSourceNodeChannel`
test so that it also runs against SQL backends.
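As a rough illustration of the shape of such a "For"-type method,
here is a minimal sketch; the table, columns, and callback signature
are illustrative assumptions, not lnd's actual schema or API:

```go
package main

import "database/sql"

// forEachSourceNodeChannel is a hedged sketch of the iteration
// pattern: fetch the source node's channels in SQL and drive a
// callback once per row. Schema and names are illustrative.
func forEachSourceNodeChannel(db *sql.DB, sourceNodeID int64,
	cb func(scid int64, hasPolicy bool) error) error {

	rows, err := db.Query(
		`SELECT scid, has_policy
		 FROM channels
		 WHERE node_id_1 = $1 OR node_id_2 = $1`,
		sourceNodeID,
	)
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var (
			scid      int64
			hasPolicy bool
		)
		if err := rows.Scan(&scid, &hasPolicy); err != nil {
			return err
		}
		if err := cb(scid, hasPolicy); err != nil {
			return err
		}
	}

	return rows.Err()
}
```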
In this commit, we define the various SQL queries that we will need
in order to implement the SQLStore UpdateEdgePolicy method. Channel
policies can be "replaced", and so we use the upsert pattern for
them, with the rule that any new channel policy must have a timestamp
greater than the previous one we persisted.

As is done for the KVStore implementation, we use the batch scheduler
for this method as well.
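As a sketch of the upsert-with-timestamp-guard pattern described
above, shown as a Go query constant; the table and column names are
illustrative assumptions, not lnd's actual schema:

```go
package graphdb

// upsertChannelPolicy is a hedged sketch: the WHERE clause on the
// conflict action ensures that only a strictly newer policy replaces
// the persisted one. Table and columns are illustrative.
const upsertChannelPolicy = `
INSERT INTO channel_policies (
    channel_id, node_id, fee_ppm, timelock_delta, last_update
) VALUES ($1, $2, $3, $4, $5)
ON CONFLICT (channel_id, node_id) DO UPDATE SET
    fee_ppm        = EXCLUDED.fee_ppm,
    timelock_delta = EXCLUDED.timelock_delta,
    last_update    = EXCLUDED.last_update
WHERE EXCLUDED.last_update > channel_policies.last_update`
```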
This commit removes the `batchReplayBkt`, as its only effect is to
allow re-forwarding HTLCs during startup.

Normally, for every incoming HTLC added, its shared secret is used as
the key saved into the `sharedHashBucket`, which is later used to
check for replays. In addition, the fwdPkg's ID (SCID+height) is also
saved to the `batchReplayBkt` bucket. Since replays of HTLCs cannot
happen at the same commitment height, when a replay happens the
`batchReplayBkt` simply doesn't have this info, and we again rely on
the `sharedHashBucket` to detect it. This means that most of the time
the `batchReplayBkt` is a list of SCID+height keys with empty values.

The `batchReplayBkt` was previously used as a mechanism to check for
re-forwardings during startup: when re-forwarding HTLCs, we query
this bucket and find an empty map, which tells us this is a
re-forwarding, so the check in `sharedHashBucket` is skipped. Given
that we now use a bool flag to explicitly skip the replay check, this
bucket is no longer useful.
We now rely on the forwarding package's state to decide whether a
given packet is a re-forwarding or not. If we know it's a
re-forwarding packet, there's no need to check for replays in the
`sharedHashes` bucket, which behaves the same as if we were querying
the `batchReplayBkt`.
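A minimal sketch of the resulting control flow, assuming a bool flag
derived from the fwdPkg's state (all names here are illustrative):

```go
package main

import "errors"

// checkReplay sketches the decision described above: reforwarded
// HTLCs were already checked and recorded when first processed, so
// re-checking the shared-secret set would flag a false replay.
func checkReplay(sharedHashes map[[32]byte]struct{},
	sharedSecret [32]byte, isReforwarding bool) error {

	// The explicit flag replaces the old batchReplayBkt lookup.
	if isReforwarding {
		return nil
	}

	if _, ok := sharedHashes[sharedSecret]; ok {
		return errors.New("htlc replay detected")
	}
	sharedHashes[sharedSecret] = struct{}{}

	return nil
}
```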
In this commit, we prevent partial mutation of the current node
announcement during announcement signing. Previously, if node
announcement signing failed, the current node announcement was left
in an inconsistent state.
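A minimal sketch of the copy-then-swap pattern this implies, with
illustrative stand-in types (not lnd's actual structs):

```go
package main

// Announcement is an illustrative stand-in for the node
// announcement message.
type Announcement struct {
	Alias     string
	Signature []byte
}

type node struct {
	currentAnn *Announcement
	sign       func(*Announcement) ([]byte, error)
}

// updateAndSignAnnouncement mutates and signs a copy, swapping it in
// only on success, so a signing failure leaves the current
// announcement untouched.
func (n *node) updateAndSignAnnouncement(apply func(*Announcement)) error {
	// Work on a copy; n.currentAnn stays consistent on failure.
	newAnn := *n.currentAnn
	apply(&newAnn)

	sig, err := n.sign(&newAnn)
	if err != nil {
		return err
	}
	newAnn.Signature = sig

	// Only now is the current announcement replaced.
	n.currentAnn = &newAnn

	return nil
}
```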
In this commit, we introduce a SQLStoreConfig struct which, for the
time being, only has the ChainHash of the genesis block of the chain
this node is running on. This is used to reconstruct lnwire messages
from what we have persisted in the DB. This means we don't need to
persist the chain-hash of gossip messages, since we know it will
always be the same for a given node. If a node were to be started on
a different network, the lnwire messages it reconstructs for gossip
would be invalid.
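A sketch of the struct as described; the real definition may grow
more fields over time:

```go
package graphdb

import "github.com/btcsuite/btcd/chaincfg/chainhash"

// SQLStoreConfig holds runtime configuration for the SQLStore.
type SQLStoreConfig struct {
	// ChainHash is the genesis hash of the chain this node runs
	// on. It is injected into lnwire messages reconstructed from
	// the DB instead of being persisted with every gossip row.
	ChainHash chainhash.Hash
}
```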
Previously, when deciding whether a UTXO was spent, we accepted a
height hint as the starting block at which to look for the spending
tx during the rescan process. If the UTXO was already found to be
spent before the rescan started, we would update the height hint to
the spend height, but only if the latter was greater. This meant that
if the user-specified hint was greater than the actual spending
height, the UTXO would never be found as spent. We now fix this by
always using the spend height as the hint.
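A minimal sketch of the before/after behavior, with illustrative
variable names:

```go
package main

// rescanStartHeight sketches the fix: a known spend height always
// overrides the caller's hint, instead of only doing so when it is
// greater than the hint (a zero spend height means no known spend).
func rescanStartHeight(heightHint, spendHeight uint32) uint32 {
	if spendHeight != 0 {
		return spendHeight
	}
	return heightHint
}
```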
This commit adds incoming and outgoing channel ID filters to the
forwarding history request, so that events can be filtered to those
received from or forwarded to particular channels.
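A hedged sketch of the filtering semantics; the field and parameter
names are illustrative, not the exact lnrpc API:

```go
package main

// matchesChanFilter reports whether an event passes the channel ID
// filters. Empty filter sets match all events.
func matchesChanFilter(inChanID, outChanID uint64,
	incomingIDs, outgoingIDs map[uint64]struct{}) bool {

	if len(incomingIDs) > 0 {
		if _, ok := incomingIDs[inChanID]; !ok {
			return false
		}
	}
	if len(outgoingIDs) > 0 {
		if _, ok := outgoingIDs[outChanID]; !ok {
			return false
		}
	}

	return true
}
```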
In this commit, we fix a flake in the
`TestRbfCloseClosingNegotiationLocal/send_offer_rbf_wrong_local_script`
test.
This flake can happen if the test shuts down _before_ the state
machine has actually been able to process the sent event. In that
case, the mock expectations are triggered, and we find that the error
was never sent.

To resolve this, we create a new wrapper function that uses a
synchronous channel send to assert that the error has been sent
before we exit the test.
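A minimal sketch of what such a wrapper can look like, assuming a
test-side error channel (all names here are illustrative):

```go
package chancloser

import (
	"testing"
	"time"
)

// assertErrorSent blocks until the state machine has actually
// delivered the error, so the test cannot shut down early and trip
// its mock expectations before the send happens.
func assertErrorSent(t *testing.T, errChan <-chan error,
	timeout time.Duration) {

	t.Helper()

	select {
	case err := <-errChan:
		if err == nil {
			t.Fatal("expected a non-nil error")
		}
	case <-time.After(timeout):
		t.Fatal("state machine never sent the error")
	}
}
```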
Implement ForEachChannelCacheable, which is like ForEachChannel
except that its callback takes the cached versions of the channel
info & policies. This is then used during graph cache population. It
will be useful once the SQL implementation is added, since it lets us
reduce the number of DB trips on cache population.
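A hedged sketch of the new entry point's shape, with illustrative
stand-in types (lnd's real definitions live in its graph models
package):

```go
package graphdb

// Illustrative stand-ins for the cached representations.
type CachedEdgeInfo struct{ ChannelID uint64 }
type CachedEdgePolicy struct{ FeePPM uint64 }

// Graph sketches the iteration shape: the callback receives the
// cached types directly, so cache population never materializes the
// full edge records.
type Graph interface {
	ForEachChannelCacheable(cb func(info *CachedEdgeInfo,
		policy1, policy2 *CachedEdgePolicy) error) error
}
```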
Define a new CachedEdgeInfo type and let the graph cache's AddChannel
use it. This will later let us (for the SQL implementation of the
graph DB) load from the DB only what the graph cache actually needs.
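A minimal sketch of such a slimmed-down type; the exact field set is
an illustrative assumption, the point being that the cache only
needs routing essentials rather than the full edge info:

```go
package graphdb

// CachedEdgeInfo keeps only the fields the graph cache consumes.
type CachedEdgeInfo struct {
	// ChannelID is the short channel ID of the edge.
	ChannelID uint64

	// NodeKey1Bytes and NodeKey2Bytes identify the two channel
	// endpoints by their serialized public keys.
	NodeKey1Bytes [33]byte
	NodeKey2Bytes [33]byte

	// Capacity is the channel capacity in satoshis.
	Capacity int64
}
```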