Here, we move the graph cache writes for AddLightningNode,
DeleteLightningNode, AddChannelEdge and MarkEdgeLive to the
ChannelGraph. Since these are writes, the cache is only updated if the
DB write is successful.
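As a rough sketch (the types and signatures here are stand-ins, not lnd's real API), the write path now looks like this: the cache is touched only once the DB write has succeeded, so a failed write can never poison it.

```
package graphsketch

// Stand-ins for lnd's real types; field sets are illustrative only.
type Node struct {
	PubKey [33]byte
}

// KVStore is the CRUD layer; AddLightningNode performs the DB write.
type KVStore struct{}

func (s *KVStore) AddLightningNode(n *Node) error {
	// ... persist the node to the backing DB ...
	return nil
}

// graphCache is the in-memory cache being lifted out of the KVStore.
type graphCache struct {
	nodes map[[33]byte]*Node
}

func (c *graphCache) AddNode(n *Node) {
	if c.nodes == nil {
		c.nodes = make(map[[33]byte]*Node)
	}
	c.nodes[n.PubKey] = n
}

// ChannelGraph owns the cache and only updates it once the underlying
// DB write has succeeded.
type ChannelGraph struct {
	db    *KVStore
	cache *graphCache
}

func (g *ChannelGraph) AddLightningNode(n *Node) error {
	if err := g.db.AddLightningNode(n); err != nil {
		return err
	}

	g.cache.AddNode(n)

	return nil
}
```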
This commit moves the graph cache checks for FetchNodeFeatures,
ForEachNodeDirectedChannel, GraphSession and ForEachNodeCached from the
KVStore to the ChannelGraph. Since the ChannelGraph is currently just a
pass-through for any of the KVStore methods, all that is needed for
calls to go via the ChannelGraph instead of directly to the KVStore is
for the ChannelGraph to implement those methods itself.
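Continuing the stand-in types from the sketch above, a read such as FetchNodeFeatures (the real signature differs) now lives on the ChannelGraph: consult the cache first, and only fall through to the KVStore on a miss.

```
// FeatureVector is a stand-in for lnwire.FeatureVector.
type FeatureVector struct{}

func (s *KVStore) FetchNodeFeatures(pub [33]byte) (*FeatureVector, error) {
	// ... fall back to a DB read ...
	return &FeatureVector{}, nil
}

func (c *graphCache) GetFeatures(pub [33]byte) (*FeatureVector, bool) {
	if _, ok := c.nodes[pub]; !ok {
		return nil, false
	}
	return &FeatureVector{}, true
}

// FetchNodeFeatures is now implemented on ChannelGraph itself, so the
// cache check happens in this layer rather than in the KVStore.
func (g *ChannelGraph) FetchNodeFeatures(pub [33]byte) (*FeatureVector, error) {
	if g.cache != nil {
		if feats, ok := g.cache.GetFeatures(pub); ok {
			return feats, nil
		}
	}

	return g.db.FetchNodeFeatures(pub)
}
```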
In this commit, we let the ChannelGraph be responsible for populating
the graphCache and then passing it to the KVStore. This is a first step
in moving the graphCache completely out of the KVStore layer.
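Sketched with the same stand-in types, the constructor change looks roughly like this; ForEachNode and setGraphCache are hypothetical helpers standing in for the real population and hand-off steps.

```
func newGraphCache() *graphCache {
	return &graphCache{nodes: make(map[[33]byte]*Node)}
}

// ForEachNode and setGraphCache are hypothetical KVStore helpers.
func (s *KVStore) ForEachNode(f func(*Node) error) error { return nil }

func (s *KVStore) setGraphCache(c *graphCache) {}

// NewChannelGraph now owns cache construction: it populates the cache
// from the DB, then passes it to the KVStore so the existing write
// paths keep it in sync until the migration is complete.
func NewChannelGraph(db *KVStore) (*ChannelGraph, error) {
	cache := newGraphCache()

	if err := db.ForEachNode(func(n *Node) error {
		cache.AddNode(n)
		return nil
	}); err != nil {
		return nil, err
	}

	db.setGraphCache(cache)

	return &ChannelGraph{db: db, cache: cache}, nil
}
```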
We then use this struct to pass NewChannelGraph anything it needs to
init the KVStore that it houses. This will allow us to add
ChannelGraph-specific options.
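Something like the following, where the field names are invented purely for illustration:

```
// Config bundles everything NewChannelGraph needs in order to init the
// KVStore that it houses; the constructor becomes
// NewChannelGraph(cfg *Config). Fields here are illustrative only.
type Config struct {
	// Backend stands in for the kv backend the KVStore opens.
	Backend interface{}

	// UseGraphCache toggles the in-memory graph cache. Future
	// ChannelGraph-specific options slot in alongside it without
	// widening the constructor's parameter list.
	UseGraphCache bool
}
```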
Since we have renamed a file housing some very old code, the linter has
now run on all of this code for the first time. So we need to do some
clean-up work here to make it happy.
In this commit, we rename the existing ChannelGraph struct to KVStore to
better reflect its responsibilities as the CRUD layer. We then introduce
a new ChannelGraph struct which will eventually be the layer above the
CRUD layer in which we will handle caching and topology subscriptions.
For now, however, it houses only the KVStore. This means that all calls
to the KVStore will now go through this layer of indirection first. This
will allow us to slowly move the graph cache management out of the
KVStore and into the new ChannelGraph layer.
We introduce the new ChannelGraph and rename the old one in the same
commit so that all existing call-sites don't need to change at all :)
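A compact sketch of the rename-plus-wrap (methods shown are illustrative): the old ChannelGraph becomes KVStore, and the new ChannelGraph embeds it, so Go's embedding forwards every method and existing call sites compile unchanged.

```
// KVStore is the old ChannelGraph: the pure CRUD layer.
type KVStore struct{}

func (s *KVStore) MarkEdgeLive(chanID uint64) error {
	// ... flip the zombie flag in the DB ...
	return nil
}

// ChannelGraph is the new layer above the CRUD layer. For now it only
// houses the KVStore; a call such as graph.MarkEdgeLive(id) now passes
// through this indirection without any call-site changes.
type ChannelGraph struct {
	*KVStore
}
```

Individual methods can then be shadowed on ChannelGraph one at a time as cache management migrates up, which is exactly what the cache commits above do.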
Rename it to kv_store.go so that we can re-use the graph.go file name
later on. We will use it to house the _new_ ChannelGraph when the
existing ChannelGraph is renamed to more clearly reflect its
responsibilities as the CRUD layer.
With what we added in the prior commit, we can significantly shrink the
size of this test. We also make it easier to extend in the future: the
test will now fail if a new message is added that doesn't have the
needed methods, as long as MsgEnd is updated.
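A hedged sketch of the test's shape, reusing the Message and TestMessage stand-ins from the sketch under the next commit message (which this commit builds on); MessageType, MsgEnd, and makeEmptyMessage stand in for the lnwire details.

```
import "testing"

// MessageType and MsgEnd are stand-ins for the lnwire equivalents.
type MessageType uint16

const MsgEnd MessageType = 1000

func makeEmptyMessage(t MessageType) (Message, error) {
	// ... construct the zero message for this type, or error for
	// gaps in the message-type space ...
	return nil, nil
}

// By sweeping every type up to MsgEnd, the test fails as soon as a new
// message is added without the needed methods, provided MsgEnd is kept
// up to date.
func TestAllMessagesImplementTestMessage(t *testing.T) {
	for mt := MessageType(0); mt < MsgEnd; mt++ {
		msg, err := makeEmptyMessage(mt)
		if err != nil || msg == nil {
			continue // Unused message type.
		}

		if _, ok := msg.(TestMessage); !ok {
			t.Fatalf("message type %d does not implement "+
				"TestMessage", mt)
		}
	}
}
```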
In this commit, we add a new `TestMessage` interface for use in property
tests. With this, we'll be able to generate a random instance of a given
message, using the rapid byte stream. This can also eventually be useful
for fuzzing.
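A minimal sketch of the interface, assuming the pgregory.net/rapid property-testing package; the method name and shape are assumptions, not lnd's confirmed API.

```
package lnwiresketch

import "pgregory.net/rapid"

// Message is a stand-in for lnwire.Message.
type Message interface{}

// TestMessage augments a wire message so property tests (and,
// eventually, fuzzers) can produce a random, structurally valid
// instance of it from rapid's byte stream.
type TestMessage interface {
	Message

	// RandTestMessage draws every field of the message from the
	// rapid generator and returns the populated instance.
	RandTestMessage(t *rapid.T) Message
}
```

A property test can then call rapid.Check(t, func(rt *rapid.T) { ... }) and build a fresh random message inside the callback, with the same generators available to seed a fuzzer later.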
This commit makes sure that when processing resolutions (e.g., settling
invoices) while the link is already broken, the process exits with an
error. This fixes the issue we found in the itest, where an unexpected
empty remote pending commitment was created even though the remote peer
was already offline.
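Roughly, as a sketch with hypothetical names (lnd's real resolution handling is more involved): fail loudly when the link is gone instead of carrying on.

```
package sketch

import "fmt"

// Stand-in types for the resolution and link machinery.
type resolution struct {
	ChanID uint64
}

type htlcLink interface {
	ProcessResolution(res resolution) error
}

type peer struct {
	links map[uint64]htlcLink
}

// processResolution now exits with an error when the link is already
// broken, rather than proceeding and creating an empty remote pending
// commitment against an offline peer.
func (p *peer) processResolution(res resolution) error {
	link, ok := p.links[res.ChanID]
	if !ok || link == nil {
		return fmt.Errorf("link for channel %v is broken, "+
			"cannot process resolution", res.ChanID)
	}

	return link.ProcessResolution(res)
}
```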
In this commit, we add a new field `LockedIn` on HTLCs so it can be used
to decide whether an HTLC found on the local commitment has been
committed on the remote commitment.
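A small sketch of where the field lives and how it might be consulted; the surrounding types are stand-ins.

```
// HTLC is a stand-in showing the new field.
type HTLC struct {
	// LockedIn is true once the HTLC has been committed on the
	// remote commitment, not just the local one.
	LockedIn bool
}

// remotelyCommitted filters the HTLCs found on the local commitment
// down to those the remote party has also committed to.
func remotelyCommitted(htlcs []HTLC) []HTLC {
	var out []HTLC
	for _, h := range htlcs {
		if h.LockedIn {
			out = append(out, h)
		}
	}
	return out
}
```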
This fixes an issue in the itests in the restart case. We'd see an error
like:
```
2025-03-12 23:41:10.754 [ERR] PFSM state_machine.go:661: FSM(rbf_chan_closer(2f20725d9004f7fda7ef280f77dd8d419fd6669bda1a5231dd58d6f6597066e0:0)): Unable to apply event err="invalid state transition: received *chancloser.SendOfferEvent while in ClosingNegotiation(local=LocalOfferSent(proposed_fee=0.00000193 BTC), remote=ClosePending(txid=07229915459cb439bdb8ad4f5bf112dc6f42fca0192ea16a7d6dd05e607b92ae, party=Remote, fee_rate=1 sat/vb))"
```
We resolve this by waiting to send the new request until the old one
has been completed.
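In sketch form, with hypothetical names: the next request blocks until the prior RBF round has fully resolved, which avoids hitting the FSM with a SendOfferEvent while it is still in ClosingNegotiation.

```
package sketch

import (
	"fmt"
	"time"
)

const defaultTimeout = 30 * time.Second

// rbfCloser is a stand-in for the caller's view of the close machinery.
type rbfCloser struct {
	prevDone chan struct{} // Closed when the prior round resolves.
}

// requestBump waits for the previous RBF round to complete before
// sending the next offer.
func (c *rbfCloser) requestBump(sendOffer func() error) error {
	select {
	case <-c.prevDone:
	case <-time.After(defaultTimeout):
		return fmt.Errorf("prior close request still pending")
	}

	return sendOffer()
}
```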
The itest has both sides try to close multiple times, each time with
increasing fee rates. We also test the reconnection case, bad RBF
updates, and instances where the local party can't actually pay for
fees.
In this commit, we alter the existing co-op close flow to enable RBF
bumps after reconnection. With the new RBF close flow, it's possible
that after a successful round _and_ a reconnection, either side wants to
do
another fee bump. Typically we route these requests through the switch,
but in this case, the link no longer exists in the switch, so any
requests to fee bump again would find that the link doesn't exist.
In this commit, we implement a workaround wherein if we have an RBF
chan closer active, and the link isn't in the switch, then we just route
the request directly to the chan closer via the peer. Once we have the
chan closer, we can use the exact same flow as prior.
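A sketch of the routing decision, with hypothetical names for the peer, switch, and chan closer pieces:

```
package sketch

import "fmt"

type closeReq struct {
	ChanPoint string
	FeeRate   uint64
}

type chanCloser interface {
	ProposeNewFeeRate(feeRate uint64) error
}

type peer struct {
	linkInSwitch func(chanPoint string) bool
	rbfClosers   map[string]chanCloser
}

func (p *peer) routeViaSwitch(req closeReq) error { return nil }

// handleCloseRequest routes through the switch as before when the link
// exists, and otherwise hands the fee bump straight to an active RBF
// chan closer. From there, the flow is exactly the same as prior.
func (p *peer) handleCloseRequest(req closeReq) error {
	if p.linkInSwitch(req.ChanPoint) {
		return p.routeViaSwitch(req)
	}

	if closer, ok := p.rbfClosers[req.ChanPoint]; ok {
		return closer.ProposeNewFeeRate(req.FeeRate)
	}

	return fmt.Errorf("no link or active RBF closer for %v",
		req.ChanPoint)
}
```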
We'll properly handle a protocol error due to user input by halting and
sending the error back to the user.
When a user goes to issue a new update, based on which state we're in,
we'll either kick off the shutdown or attempt a new offer. This matches
the new spec update where we'll only send `Shutdown` once per
connection.
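A sketch of that dispatch, with hypothetical state names: `Shutdown` goes out at most once per connection, later updates jump straight to a fresh offer, and anything else halts with an error returned to the user.

```
package sketch

import "fmt"

type closeState int

const (
	// stateNoShutdownSent: we haven't sent Shutdown on this
	// connection yet.
	stateNoShutdownSent closeState = iota

	// stateShutdownSent: Shutdown already went out, so new updates
	// become fresh offers.
	stateShutdownSent
)

type rbfCloser struct {
	state        closeState
	sendShutdown func() error
	sendOffer    func(feeRate uint64) error
}

// handleUserUpdate dispatches on the current state.
func (c *rbfCloser) handleUserUpdate(feeRate uint64) error {
	switch c.state {
	case stateNoShutdownSent:
		return c.sendShutdown()

	case stateShutdownSent:
		return c.sendOffer(feeRate)

	default:
		return fmt.Errorf("cannot apply update in state %v",
			c.state)
	}
}
```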