This commit updates all related tests to reflect the latest anchor
sweeping behavior. Previously, anchor sweeping was always attempted via
CPFP when a force close was broadcast; now it only happens when the
deadline is less than 144 blocks. Non-CPFP anchor sweeping will instead
happen one block after the force close transaction confirms, as the
anchor is then resent to the sweeper with a floor fee rate, making it
economical to sweep.
Since we now only perform CPFP when both the fee rate is higher and the
deadline is less than 144 blocks, we need to update the test to reflect
that Bob will not CPFP the force close tx for the channel Alice->Bob.
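A minimal sketch of the new rule (the helper and its arguments are
hypothetical; only the 144-block threshold comes from this change):

```go
// sweepAnchorDeadlineThreshold is the deadline, in blocks, below which
// the commitment is considered under time pressure and a CPFP anchor
// sweep is forced.
const sweepAnchorDeadlineThreshold = 144

// shouldCPFPAnchor is a hypothetical helper illustrating the decision:
// CPFP the force close tx only when the sweeper's fee rate estimate is
// higher than the commitment's and the deadline is under the threshold.
func shouldCPFPAnchor(deadlineBlocks uint32,
	sweepFeeRate, commitFeeRate uint64) bool {

	return sweepFeeRate > commitFeeRate &&
		deadlineBlocks < sweepAnchorDeadlineThreshold
}
```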
This commit changes the behavior from always sweeping the anchor for a
local force close to only doing so when there is actual time pressure.
After this change, a forced anchor sweep will only be attempted when
the deadline is less than 144 blocks.
This commit sorts wallet UTXOs by their values when using them as
sweeping inputs. This way we avoid locking up large UTXOs when sweeping
inputs and also create an opportunity to aggregate wallet UTXOs.
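A minimal sketch of the ordering (the `Utxo` type here is a
hypothetical stand-in for the wallet's UTXO record):

```go
import "sort"

// Utxo is a hypothetical stand-in for the wallet's UTXO record.
type Utxo struct {
	Value int64 // value in satoshis
}

// sortUtxosByValue orders candidate wallet UTXOs from smallest to
// largest so sweeping consumes small outputs first, leaving large
// UTXOs unlocked and aggregating small outputs along the way.
func sortUtxosByValue(utxos []Utxo) {
	sort.Slice(utxos, func(i, j int) bool {
		return utxos[i].Value < utxos[j].Value
	})
}
```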
The link will send an `update_fail_malformed_htlc`, so we need to set
the BADONION bit. Since there isn't a replay-specific error, we set the
failure code to `InvalidOnionVersion`, which has the BADONION bit set.
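For reference, the BOLT 4 flag bits involved (the constants shown here
mirror the spec; `lnwire` defines the equivalents):

```go
// BOLT 4 failure-code flag bits.
const (
	FlagBadOnion uint16 = 0x8000 // returned via update_fail_malformed_htlc
	FlagPerm     uint16 = 0x4000 // permanent failure
)

// invalid_onion_version is defined as BADONION|PERM|4 in BOLT 4, so
// reusing it for replays guarantees the BADONION bit is set.
const CodeInvalidOnionVersion = FlagBadOnion | FlagPerm | 4
```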
This PR is a follow-up to a [follow
up](https://github.com/lightningnetwork/lnd/pull/7938) of an [initial
concurrency issue](https://github.com/lightningnetwork/lnd/pull/7856)
fixed in the peer goroutine.
In #7938, we noticed that the introduction of `p.startReady` can cause
`Disconnect` to block, since `Disconnect` cannot proceed until
`p.startReady` has been closed. `Disconnect` is also called from
`InboundPeerConnected` (the case of concurrent peers, where we need to
remove one of the connections) while the main server mutex is held. If
`p.Start` blocks for any reason, this leads to a deadlock: we can't
disconnect until we've finished starting, and we can't finish starting
until the disconnect caller exits, as it holds the mutex.
In this commit, we make the call to `prunePersistentPeerConnection`
async. That call eventually wants to grab the server mutex, which is
what triggers the circular-wait scenario above.
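A minimal sketch of the change (call site simplified; the argument name
is illustrative):

```go
// Before: calling this synchronously from the peer's start/disconnect
// path could complete the circular wait described above, since it
// eventually acquires the main server mutex:
//
//	s.prunePersistentPeerConnection(compressedPubKey)
//
// After: dispatch it on its own goroutine so nothing reachable from
// p.Start blocks on the server mutex.
go s.prunePersistentPeerConnection(compressedPubKey)
```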
The main lesson here is that nothing called from `p.Start` can be
allowed to block on the main server mutex path. This is more or less a
stopgap to resolve the issue initially introduced in v0.16.4. Assuming
we want to move forward with this fix, we should re-examine
`p.startReady` altogether, and also revisit the attempt to refactor
this section of the code to eliminate the mega mutex in the server in
favor of a dedicated event loop.
In this commit, we modify the incoming contest resolver to use a
concurrent queue, ensuring that the invoice registry subscription loop
never blocks. The change is meant to be minimal and implements option
`5` as outlined here:
https://github.com/lightningnetwork/lnd/issues/8023.
With this change, the inner loop of the subscription dispatch method in
the invoice registry will no longer block, as the concurrent queue uses
a fixed-size buffer and overflows into a second, unbounded queue when
that buffer fills up.
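A self-contained sketch of that pattern (not the actual
`queue.ConcurrentQueue` code): a fixed-size buffered output channel
that spills into an unbounded slice, so producers are never blocked by
a slow consumer:

```go
// ConcurrentQueue sketches a queue whose producers never block: items
// flow into a fixed-size buffer and spill into an overflow slice.
type ConcurrentQueue struct {
	chanIn  chan interface{} // producers send here
	chanOut chan interface{} // consumers receive here
	quit    chan struct{}
}

func NewConcurrentQueue(bufferSize int) *ConcurrentQueue {
	q := &ConcurrentQueue{
		chanIn:  make(chan interface{}),
		chanOut: make(chan interface{}, bufferSize),
		quit:    make(chan struct{}),
	}
	go q.run()
	return q
}

// run shuttles items from chanIn to chanOut, appending to the overflow
// slice whenever chanOut's fixed buffer is full, and draining the
// overflow first to preserve FIFO order.
func (q *ConcurrentQueue) run() {
	var overflow []interface{}
	for {
		if len(overflow) == 0 {
			select {
			case item := <-q.chanIn:
				select {
				case q.chanOut <- item:
				default:
					overflow = append(overflow, item)
				}
			case <-q.quit:
				return
			}
		} else {
			select {
			case item := <-q.chanIn:
				overflow = append(overflow, item)
			case q.chanOut <- overflow[0]:
				overflow = overflow[1:]
			case <-q.quit:
				return
			}
		}
	}
}
```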
Fixes https://github.com/lightningnetwork/lnd/issues/7917
In this commit, we add the ability to obtain blocking and mutex
profiles. The blocking profile shows which goroutines are consistently
blocked on synchronization primitives such as channels or I/O. The
mutex profile shows which mutexes are heavily contended.
The blocking profile can be enabled with a new arg: `--blockingprofile`.
The mutex profile can be enabled with a new arg: `--mutexprofile`. These
are both ignored if the profile port isn't set.
Activating these profiles requires the caller to pass in a sampling
rate. For now I've set it to just `1` to test things out.
Unfortunately, documentation is rather scarce, so there aren't any good
guides on what these values should be set to. AFAICT, these add more
overhead than the other profiling options, so they shouldn't
necessarily be enabled persistently in production.
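These args presumably wire through to the Go runtime's sampling hooks;
a minimal sketch, with hypothetical config field names:

```go
import "runtime"

// Only enable the profiles when the profile port is set. A rate of 1
// records every blocking event and every contended mutex, which is why
// these profiles add noticeable overhead.
if cfg.Profile != "" {
	if cfg.BlockingProfile > 0 {
		runtime.SetBlockProfileRate(cfg.BlockingProfile)
	}
	if cfg.MutexProfile > 0 {
		runtime.SetMutexProfileFraction(cfg.MutexProfile)
	}
}
```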