We add an extra option to the AddLocalAlias method that only controls
whether we store a reverse lookup from the alias back to the base scid
it corresponds to. The previous "gossip" flag is still maintained and
in a way supersedes the new flag: it also stores the base scid lookup
even if the base lookup flag isn't set. The only call that sets
this option is the XAddLocalChanAlias RPC endpoint, where we want to
make sure that a reverse lookup is stored in the alias manager in order
to later expose it via the new RPC method.
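A minimal sketch of the flag interaction, with illustrative names and
fields rather than lnd's exact API:
```
package aliasmgr // sketch only

import (
	"sync"

	"github.com/lightningnetwork/lnd/lnwire"
)

// Manager sketch: just the two lookup maps relevant here.
type Manager struct {
	sync.Mutex

	baseToSet   map[lnwire.ShortChannelID][]lnwire.ShortChannelID
	aliasToBase map[lnwire.ShortChannelID]lnwire.ShortChannelID
}

// NewManager initialises the lookup maps.
func NewManager() *Manager {
	return &Manager{
		baseToSet:   make(map[lnwire.ShortChannelID][]lnwire.ShortChannelID),
		aliasToBase: make(map[lnwire.ShortChannelID]lnwire.ShortChannelID),
	}
}

// AddLocalAlias: storeReverse is the new option. The existing gossip
// flag supersedes it, so the reverse lookup is written if either flag
// is set.
func (m *Manager) AddLocalAlias(alias, baseScid lnwire.ShortChannelID,
	gossip, storeReverse bool) error {

	m.Lock()
	defer m.Unlock()

	// Forward mapping from the base scid to its aliases, as before.
	m.baseToSet[baseScid] = append(m.baseToSet[baseScid], alias)

	// Reverse lookup from the alias back to the base scid, later
	// exposed via the new RPC method.
	if gossip || storeReverse {
		m.aliasToBase[alias] = baseScid
	}

	return nil
}
```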
This is the final step: we actually call the interface and either
provide or retrieve the custom features over the message. We also notify
the aux components when channel reestablish is received.
This commit removes `defaultQuiescenceTimeout` and makes the timeout
configurable, as different nodes have different network environments. In
addition the default timeout has been increased from 30s to 60s.
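A sketch of the shape of the change; the config field and flag name are
assumptions:
```
package htlcswitch // sketch; actual flag name may differ

import "time"

// DefaultQuiescenceTimeout is the new default, raised from 30s to 60s.
const DefaultQuiescenceTimeout = 60 * time.Second

// Config carries the now-tunable timeout.
type Config struct {
	// QuiescenceTimeout is how long we wait for a channel to become
	// quiescent before failing the operation; nodes in slower network
	// environments can raise it.
	QuiescenceTimeout time.Duration `long:"quiescencetimeout"`
}
```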
Like the previous commit, here we can start directly using the
InboundFee on the models.ChannelEdgePolicy object since we know we read
it from disk and so the InboundFee field will be populated accordingly.
NOTE: unlike the previous commit, the behaviour is slightly different
here: previously we would error out if TLV parsing failed, whereas now
the DB call just skips the error and returns a nil policy. This
should be ok since this is explicitly only dealing with our own updates
and so our TLV should always be valid.
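For illustration only (the helper is hypothetical; only the InboundFee
field and the nil-policy behaviour come from the change above):
```
package graphdb // sketch

import (
	"fmt"

	"github.com/lightningnetwork/lnd/graph/db/models"
)

// logOwnInboundFee illustrates the simplified read path: InboundFee
// comes populated straight off the disk read, no TLV decoding needed.
func logOwnInboundFee(policy *models.ChannelEdgePolicy) {
	// A nil policy means the DB skipped an update whose TLV stream
	// failed to parse. Since this path only deals with our own
	// updates, our TLV should always be valid.
	if policy == nil {
		return
	}

	fmt.Printf("own update inbound fee: %v\n", policy.InboundFee)
}
```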
We add logging so we can see how long the processing of a gossip
message takes and potentially determine whether the syncer's buffer
channel size is a bottleneck in processing.
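A minimal sketch of the kind of instrumentation added, using the
package logger; the wrapper itself is illustrative:
```
package discovery // sketch

import (
	"time"

	"github.com/lightningnetwork/lnd/lnwire"
)

// timeGossipMsg wraps message processing with a duration log so we can
// tell how long handling takes and whether the syncer's buffer channel
// is the bottleneck.
func timeGossipMsg(process func(lnwire.Message), msg lnwire.Message,
	buf chan lnwire.Message) {

	start := time.Now()
	process(msg)

	log.Debugf("Processed gossip msg %T in %v (buffer %d/%d)",
		msg, time.Since(start), len(buf), cap(buf))
}
```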
In this commit, we start to signal the prod bit for the rbf coop close
feature. We keep our signaling of the staging bit in place to ensure
the protocol continues to work with those nodes in the wild that are
still signaling the staging bit.
Fixes https://github.com/lightningnetwork/lnd/issues/9852
In this commit, we unify how all unit tests that make use of the graph
create their test ChannelGraph instance. This will make it easier to
ensure that once we plug in different graph DB implementations, all
unit tests are run against all variants of the graph DB.
With this commit, `NewChannelGraph` is mainly only called via
`MakeTestGraph` for all tests _except_ for `TestGraphLoading` which
needs to be able to reload a ChannelGraph with the same backend. This
will be addressed in a follow-up commit once more helpers are defined.
Note that in all previous packages where we created a test graph using
`kvdb.GetBoltBackend`, we now need to add a `TestMain` function with a
call to `kvdb.RunTest` since the `MakeTestGraph` helper uses
`GetTestBackend` instead of `kvdb.GetBoltBackend`, and the former
requires an embedded postgres instance to be running.
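The per-package boilerplate looks like this (sketched assuming
`kvdb.RunTests` as the runner's exact spelling):
```
package somepkg_test // hypothetical package using MakeTestGraph

import (
	"testing"

	"github.com/lightningnetwork/lnd/kvdb"
)

// TestMain hands control to the kvdb test runner so that backends such
// as the embedded postgres instance are started and torn down around
// the package's tests.
func TestMain(m *testing.M) {
	kvdb.RunTests(m)
}
```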
With this, we move a context.TODO() out of the gossiper and into the
brontide package; this will be removed in a future PR which focuses on
threading contexts through that code.
In this commit, we add a new CLI option to control whether we
disconnect on slow pongs. Due to the existence of head-of-the-line blocking at
various levels of abstraction (app buffer, slow processing, TCP kernel
buffers, etc), if there's a flurry of gossip messages (eg: 1K channel
updates), then even with a reasonable processing latency, a peer may
still not read our ping in time.
To give users another option, we add a flag that allows users to disable
this behavior. The default behavior remains unchanged.
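A hypothetical sketch of the knob; the actual flag name and wiring in
lnd may differ:
```
// PongConfig is an illustrative config fragment.
type PongConfig struct {
	// NoDisconnectOnPongFailure, when true, keeps the connection
	// open even if the peer fails to answer a ping in time. It
	// defaults to false, preserving the existing disconnect
	// behavior.
	NoDisconnectOnPongFailure bool `long:"no-disconnect-on-pong-failure"`
}

// At the failure site, the flag simply gates the disconnect.
func maybeDisconnect(cfg *PongConfig, disconnect func(error), err error) {
	if cfg.NoDisconnectOnPongFailure {
		// Log and carry on instead of tearing the connection down.
		return
	}

	disconnect(err)
}
```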
In this commit, we implement logic to downgrade to the legacy coop close
for taproot channels. Before this commit, we wouldn't allow nodes to
start up with both the taproot flag and the rbf flag activated.
In the future, once we implement the spec updates, we'll add support for
this combo, and can revert parts of this commit.
We do this in preparation for moving channel cache population logic out
of the constructor and into the Start method. Later on, when topology
subscription is moved to the ChannelGraph, we will also have a
goroutine that needs to be kicked off and stopped.
We use this struct to pass NewChannelGraph anything it needs to be able
to init the KVStore that it houses. This will allow us to add
ChannelGraph specific options.
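An illustrative shape for this; the field names are assumptions:
```
package graphdb // sketch

import "github.com/lightningnetwork/lnd/kvdb"

// Config houses everything NewChannelGraph needs to init the KVStore
// it wraps, leaving room for ChannelGraph-specific options later.
type Config struct {
	// KVDB is the backend for the underlying KVStore.
	KVDB kvdb.Backend

	// KVStoreOpts are applied when constructing the KVStore.
	KVStoreOpts []KVStoreOptionModifier
}

func NewChannelGraph(cfg *Config) (*ChannelGraph, error) {
	store, err := NewKVStore(cfg.KVDB, cfg.KVStoreOpts...)
	if err != nil {
		return nil, err
	}

	return &ChannelGraph{KVStore: store}, nil
}
```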
This fixes an issue in the itests in the restart case. We'd see an error
like:
```
2025-03-12 23:41:10.754 [ERR] PFSM state_machine.go:661: FSM(rbf_chan_closer(2f20725d9004f7fda7ef280f77dd8d419fd6669bda1a5231dd58d6f6597066e0:0)): Unable to apply event err="invalid state transition: received *chancloser.SendOfferEvent while in ClosingNegotiation(local=LocalOfferSent(proposed_fee=0.00000193 BTC), remote=ClosePending(txid=07229915459cb439bdb8ad4f5bf112dc6f42fca0192ea16a7d6dd05e607b92ae, party=Remote, fee_rate=1 sat/vb))"
```
We resolve this by waiting to send in the new request until the old one
has been completed.
In this commit, we alter the existing co-op close flow to enable RBF
bumps after reconnection. With the new RBF close flow, it's possible
that after a successful round _and_ a reconnection, either side wants to do
another fee bump. Typically we route these requests through the switch,
but in this case, the link no longer exists in the switch, so any
requests to fee bump again would find that the link doesn't exist.
In this commit, we implement a workaround wherein if we have an RBF
chan closer active, and the link isn't in the switch, then we just route
the request directly to the chan closer via the peer. Once we have the
chan closer, we can use the exact same flow as prior.
We'll properly handle a protocol error due to user input by halting
and sending the error back to the user.
When a user goes to issue a new update, based on which state we're in,
we'll either kick off the shutdown, or attempt a new offer. This matches
the new spec update where we'll only send `Shutdown` once per
connection.
In this commit, we fully integrate the new RBF close state machine into
the peer.
For the restart case after shutdown, we can short circuit the existing
logic as the new FSM will handle retransmitting the shutdown message
itself, and doesn't need to delegate that duty to the link.
Unlike the existing state machine, we're able to restart the flow to
sign a coop close with a new higher fee rate. In this case, we can now
send multiple updates to the RPC caller, one for each newly signed coop
close transaction.
To implement the async flush case, we'll launch a new goroutine to wait
until the state machine reaches the `ChannelFlushing` state, then we'll
register the hook. We don't do this at start up, as otherwise the
channel may _already_ be flushed, triggering an invalid state
transition.
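A sketch of that sequencing, with `waitForState` as an assumed helper
and hook/event names following the chancloser style:
```
go func() {
	// Wait until the FSM has actually entered the ChannelFlushing
	// state. Registering the hook at startup instead risks the
	// channel already being flushed, which would trigger an invalid
	// state transition.
	err := waitForState(ctx, rbfCloser, &chancloser.ChannelFlushing{})
	if err != nil {
		return
	}

	// Now it's safe to register the hook: once the link reports the
	// flush is complete, feed the ChannelFlushed event into the
	// state machine.
	link.OnFlushedOnce(func() {
		rbfCloser.SendEvent(ctx, &chancloser.ChannelFlushed{})
	})
}()
```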
In this commit, we add a new composite chanCloserFsm type. This'll allow
us to store a single value that might be a negotiator or an rbf-er.
In a follow up commit, we'll use this to conditionally create the new
rbf closer.
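One possible shape, sketched with an Either-style container like lnd's
`fn.Either` (assumed here):
```
// chanCloserFsm holds exactly one of the two closer variants.
type chanCloserFsm = fn.Either[*chancloser.ChanCloser, *chancloser.RbfChanCloser]

// Call sites then branch on whichever variant is present, e.g.:
//
//	closer.WhenLeft(func(c *chancloser.ChanCloser) { /* legacy flow */ })
//	closer.WhenRight(func(r *chancloser.RbfChanCloser) { /* rbf flow */ })
```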
In the next commit, we'll start checking feature bits to decide how to
init the chan closer.
In the future, we can move the current chan closer handling logic into
an `MsgEndpoint`, which'll allow us to get rid of the explicit chan
closer map and direct handling.
In this commit, we update `chooseDeliveryScript` to generate a new
script if needed. This allows us to fold the few lines that always
followed calls to this function into the expanded function itself.
The tests have been updated accordingly.
In this commit, we add an implementation of a new interface the rbf coop
state machine needs. We take care to accept interfaces everywhere, to
make this easier to test, and decouple from the concrete types we'll end
up using elsewhere.
Here we introduce the access manager, which has caches that will
determine the access control status of our peers. Peers that have had
their funding transaction confirmed with us are protected. Peers that
only have pending-open channels with us have temporary access, which
can be revoked. The rest of the peers are granted restricted access.
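Illustratively, the three tiers map to something like:
```
// peerAccessStatus sketches the three access tiers described above
// (names are illustrative).
type peerAccessStatus int

const (
	// peerStatusRestricted is the default tier for peers with no
	// channels with us.
	peerStatusRestricted peerAccessStatus = iota

	// peerStatusTemporary is for peers with only pending-open
	// channels; their access can be revoked.
	peerStatusTemporary

	// peerStatusProtected is for peers whose funding transaction has
	// confirmed with us.
	peerStatusProtected
)
```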
Find all nolint instances referring to the `lll` linter and replace
them with `ll`, the name of our custom version of the `lll` linter,
which can be used to ignore log lines during linting.
The next commit will do the configuration of the custom linter and
disable the default one.
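After the rename, directives look like this:
```
//nolint:ll
log.Debugf("a very long log line that would otherwise trip the "+
	"line-length linter: %v", value)
```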
All the structs defined in the `channeldb/models` package are graph
related, so once we move all the graph CRUD code to the graph package,
it makes sense to have the schema structs there too. This commit simply
moves the `models` package over to `graph/db/models`.