In this commit, we make sig job handling when signing a next commitment
non-blocking by allowing the shutdown of a channel link to interrupt any
wait on sig jobs by the channel state machine. This addresses possible
cases where the aux signer may be shut down via a separate quit signal,
in which case the state machine could otherwise block indefinitely
waiting on the result of a sig job.
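As a rough illustration of the pattern (a minimal sketch with
hypothetical names; the real sig job and quit plumbing lives in lnd's
lnwallet and htlcswitch packages):

```go
package sketch

import (
	"errors"
	"fmt"
)

// sigJobResp is a stand-in for the response a sig job delivers once the
// (aux) signer has produced a signature.
type sigJobResp struct {
	Sig []byte
	Err error
}

var errLinkShuttingDown = errors.New("link shutting down")

// waitForSig blocks until the signer responds or the link signals
// shutdown, so a signer that has already quit can no longer wedge the
// channel state machine.
func waitForSig(resp <-chan sigJobResp, quit <-chan struct{}) ([]byte, error) {
	select {
	case r := <-resp:
		if r.Err != nil {
			return nil, fmt.Errorf("sig job failed: %w", r.Err)
		}
		return r.Sig, nil

	case <-quit:
		return nil, errLinkShuttingDown
	}
}
```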
This is a requirement for replacing the quit channel with a Context.
The Done() channel of a Context is always recv-only, so all users of
that channel must not expect a bidirectional channel.
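For comparison, the same wait expressed against a Context, reusing the
sigJobResp stand-in from the sketch above (add `context` to the
imports):

```go
func waitForSigCtx(ctx context.Context,
	resp <-chan sigJobResp) ([]byte, error) {

	select {
	case r := <-resp:
		if r.Err != nil {
			return nil, r.Err
		}
		return r.Sig, nil

	// ctx.Done() is declared as <-chan struct{}: receive-only, so any
	// code that previously closed or sent on the quit channel itself
	// must be reworked before the swap.
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}
```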
This fixes an issue where the switch's forwarding logic would consider
the available bandwidth insufficient to forward a custom channel HTLC,
because we only overwrote the HTLC's amount and not the packet's (which
is just a shortcut struct member anyway).
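Roughly, the fix keeps the shortcut field in sync whenever the HTLC's
amount is rewritten (trimmed stand-in types, not the actual switch
code):

```go
package sketch

import "github.com/lightningnetwork/lnd/lnwire"

// htlcPacket is a trimmed stand-in for the switch's internal packet,
// which keeps its own copy of the amount purely as a shortcut.
type htlcPacket struct {
	amount lnwire.MilliSatoshi
	htlc   *lnwire.UpdateAddHTLC
}

// applyCustomAmount rewrites the amount produced for a custom channel
// HTLC. Both the HTLC and the packet's shortcut field must be updated,
// otherwise the switch's bandwidth check compares against a stale value
// and wrongly reports insufficient bandwidth.
func applyCustomAmount(pkt *htlcPacket, amt lnwire.MilliSatoshi) {
	pkt.htlc.Amount = amt
	pkt.amount = amt
}
```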
This commit updates the invoice registry to utilize the settlement
interceptor during the invoice settlement routine. It allows the
interceptor to capture the invoice, providing interception clients an
opportunity to determine the settlement outcome.
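A rough sketch of the flow (the interface, names, and string invoice
reference are illustrative stand-ins rather than the registry's actual
API):

```go
package sketch

import (
	"context"
	"errors"
)

// SettlementOutcome is the verdict an interception client returns for a
// held settlement.
type SettlementOutcome int

const (
	OutcomeSettle SettlementOutcome = iota
	OutcomeReject
)

// SettlementInterceptor hands an invoice to an external client and
// waits for its verdict before settlement is finalized.
type SettlementInterceptor interface {
	Intercept(ctx context.Context, invoiceRef string) (SettlementOutcome, error)
}

// settleWithInterceptor consults the interceptor, when one is present,
// before finalizing settlement, giving the client the chance to decide
// the outcome.
func settleWithInterceptor(ctx context.Context, i SettlementInterceptor,
	invoiceRef string, markSettled func(string) error) error {

	if i != nil {
		outcome, err := i.Intercept(ctx, invoiceRef)
		if err != nil {
			return err
		}
		if outcome == OutcomeReject {
			return errors.New("settlement rejected by interceptor")
		}
	}

	return markSettled(invoiceRef)
}
```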
It doesn't make sense to do multiple encode/decode round trips on the
custom data of an HTLC. So we just use the same custom record type
everywhere, which also simplifies some of the code again.
In this commit, we start to use the new AuxSigner to obtain+verify aux
sigs for all second level HTLCs. This is similar to the existing
SigPool, but we'll only attempt to do this if the AuxSigner is present
(won't be for most channels).
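In sketch form, the optional wiring looks roughly like this (interface
and method names are illustrative, not the actual lnwallet API):

```go
package sketch

// AuxSigner is a stand-in for the aux signer interface; only its shape
// matters for this sketch (the real interface lives in lnwallet).
type AuxSigner interface {
	SubmitSecondLevelSigBatch(jobs []AuxSigJob) error
}

// AuxSigJob is a placeholder for a single second-level HTLC sig job.
type AuxSigJob struct{}

// maybeSignAux only consults the aux signer when one is configured;
// most channels won't have one, in which case only the regular SigPool
// path runs.
func maybeSignAux(auxSigner AuxSigner, jobs []AuxSigJob) error {
	if auxSigner == nil {
		// No aux signer configured for this channel: nothing to do.
		return nil
	}

	// Submit the batch so the aux signer can produce (and later
	// verify) the auxiliary second-level signatures.
	return auxSigner.SubmitSecondLevelSigBatch(jobs)
}
```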
Introduce `ResumeModified` action to resume standard behavior of a p2p
message with optional modifications as specified by the client during
interception.
- Introduce the field `CustomRecords` to the type `UpdateFulfillHtlc`.
- Encode and decode the new field into the `ExtraData` field of the
`update_fulfill_htlc` wire message.
- An empty `ExtraData` field is set to `nil`.
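Conceptually, the encode path looks like the following sketch, here
written directly against lnd's tlv package (the actual commit goes
through the lnwire custom records helpers):

```go
package sketch

import (
	"bytes"
	"sort"

	"github.com/lightningnetwork/lnd/tlv"
)

// packCustomRecords encodes the per-HTLC custom records as a TLV stream
// and returns the bytes destined for the update_fulfill_htlc ExtraData
// field. When there are no records, the field stays nil, matching the
// "empty ExtraData is nil" rule above.
func packCustomRecords(records map[uint64][]byte) ([]byte, error) {
	if len(records) == 0 {
		return nil, nil
	}

	// TLV streams must be encoded in ascending type order.
	types := make([]uint64, 0, len(records))
	for t := range records {
		types = append(types, t)
	}
	sort.Slice(types, func(i, j int) bool { return types[i] < types[j] })

	recs := make([]tlv.Record, 0, len(records))
	for _, t := range types {
		value := records[t]
		recs = append(recs, tlv.MakePrimitiveRecord(tlv.Type(t), &value))
	}

	stream, err := tlv.NewStream(recs...)
	if err != nil {
		return nil, err
	}

	var buf bytes.Buffer
	if err := stream.Encode(&buf); err != nil {
		return nil, err
	}

	return buf.Bytes(), nil
}
```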
Create our error encrypter with a wrapped type if we have a blinding
point present. Doing this in the iterator allows us to track this
information when we have both pieces of information available to us,
compared to trying to handle this later down the line:
- Downstream link on failure: we know that we've set a blinding point
for our outgoing HTLC, but not whether we're the introduction node or
not
- Upstream link on failure: once the failure packet has been sent
through the switch, we no longer know whether we were the introduction
point (without looking it up / examining our payload again /
propagating this information through the switch).
Introduce two wrapper types for our existing SphinxErrorEncrypter
that are used to represent error encrypters where we're a part of a
blinded route. These encrypters are functionally the same as a sphinx
encrypter, and are just used as "markers" so that we know that we
need to handle our error differently due to our different role.
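A minimal sketch of the idea (all names are hypothetical stand-ins
rather than the actual htlcswitch types):

```go
package sketch

// sphinxErrorEncrypter is a stand-in for the existing encrypter that
// onion-encrypts failure messages.
type sphinxErrorEncrypter struct{}

// introductionErrorEncrypter marks an encrypter for the case where we
// are the introduction node of a blinded route. It behaves exactly like
// the embedded encrypter; the type itself is the marker.
type introductionErrorEncrypter struct {
	sphinxErrorEncrypter
}

// relayingErrorEncrypter marks an encrypter for the case where we are a
// relaying node inside a blinded route.
type relayingErrorEncrypter struct {
	sphinxErrorEncrypter
}

// newErrorEncrypter picks the wrapper based on where the blinding point
// was found, which is exactly the information the iterator still has on
// hand when the encrypter is created.
func newErrorEncrypter(blindingInPayload, blindingInUpdateAdd bool) any {
	base := sphinxErrorEncrypter{}

	switch {
	// Blinding point in the onion payload: we're the introduction
	// node of the blinded route.
	case blindingInPayload:
		return &introductionErrorEncrypter{base}

	// Blinding point in update_add_htlc: we're a relaying node inside
	// the blinded route.
	case blindingInUpdateAdd:
		return &relayingErrorEncrypter{base}

	// No blinding point at all: plain sphinx encrypter.
	default:
		return &base
	}
}
```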
We need to persist this information to account for restart cases where
we've resolved the outgoing HTLC, then restart and need to handle the
error for the incoming link. Specifically, this is relevant for:
- On chain resolution messages received after restart
- Forwarding packages that are re-forwarded after restart
This is also generally helpful, because we can store this information
in one place (the circuit) rather than trying to reconstruct it in
various places when forwarding the failure back over the switch.
We need to know what role we're playing to be able to handle errors
correctly, but the information that we need for this is held by our
iterator:
- Whether we had a blinding point in update add (blinding kit)
- Whether we had a blinding point in payload
As we're now going to use the route role return value even when
err != nil, we rename the error to signal that we're using less
canonical Go here.
An alternative to this approach is to attach a RouteRole to our
ErrInvalidPayload. The downsides of that approach are:
- Having to propagate context through parsing (whether we had a
blinding point in update_add_htlc)
- Clumsy handling for errors that are not of type ErrInvalidPayload
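With the chosen approach, the caller side looks roughly like this (a
hedged sketch; the constant and function names are stand-ins):

```go
package sketch

import "fmt"

// RouteRole describes the role we play in a possibly blinded route.
type RouteRole int

const (
	RouteRoleCleartext RouteRole = iota
	RouteRoleIntroduction
	RouteRoleRelaying
)

// handleParseError shows why the role is returned alongside the error:
// even when payload parsing fails, the failure we send back depends on
// whether we were the introduction node, a relaying node inside the
// blinded route, or a plain cleartext hop.
func handleParseError(role RouteRole, parseErr error) error {
	switch role {
	case RouteRoleIntroduction:
		return fmt.Errorf("fail as introduction node: %w", parseErr)
	case RouteRoleRelaying:
		return fmt.Errorf("fail as blinded relay: %w", parseErr)
	default:
		return parseErr
	}
}
```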
When handling blinded errors, we need to know whether there was a
blinding key in our payload when we successfully parsed our payload
but then found an invalid set of fields. The combination of
parsing and validation in NewPayloadFromReader means that we don't know
whether a blinding point was available to us by the time the error is
returned.
This commit splits parsing and validation into two functions so that
we can take a look at what we actually pulled off the payload in
between parsing and TLV validation.
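In sketch form, with hypothetical signatures mirroring the described
split:

```go
package sketch

import "io"

// parsedTypes records which TLV types were present in the payload and
// their raw values, so later validation can see what the sender set.
type parsedTypes map[uint64][]byte

// payload is a stand-in for the decoded hop payload.
type payload struct {
	// ... decoded forwarding fields ...
}

// parsePayload only deserializes the TLV stream; no semantic checks
// happen here, so the caller learns exactly what was present (for
// example, whether a blinding point type was set) even if the contents
// later turn out to be invalid.
func parsePayload(r io.Reader) (*payload, parsedTypes, error) {
	// Decode the TLV stream and record every type we saw.
	// (Details elided in this sketch.)
	return &payload{}, parsedTypes{}, nil
}

// validateParsedTypes checks the presence/absence rules against the
// context the payload was parsed in: final hop or not, and whether a
// blinding point arrived in update_add_htlc.
func validateParsedTypes(types parsedTypes, isFinalHop,
	blindingInUpdateAdd bool) error {

	// e.g. reject a blinding point set both in the payload and in
	// update_add_htlc, require forwarding fields for non-final hops,
	// and so on. (Details elided in this sketch.)
	return nil
}
```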
This commit moves all our validation related to the presence of fields
into ValidateParsedPayloadTypes so that we can handle them in a single
place. We draw the distinction between:
- Validation of the payload (and the context within which it's being
parsed, final hop / blinded hop etc)
- Processing and validation of encrypted data, where we perform
additional cryptographic operations and validate that the fields
contained in the blob are valid.
This helps draw the line more clearly between the two validation types,
rather than splitting some payload-related blinded hop processing
into the encrypted data processing part. The downside of this approach
(vs doing the blinded path payload check _after_ payload validation)
is that we have to pass additional context into payload validation
(i.e., whether we got a blinding point in our UpdateAddHtlc - as we
already do for isFinalHop).
This commit adds handling for malformed HTLC errors related to blinded
paths. We expect to receive these errors _within_ a blinded path,
because all non-introduction nodes are instructed to return malformed
errors for failures.
Note that we may actually switch back to a malformed error later on if
we too are a relaying node in the route, but we handle that case on the
incoming link.
This commit moves `DetermineFeePerKw` into the `Estimate` method on
`FeePreference`. A few callsites that previously called
`DetermineFeePerKw` without the max fee rate are now also temporarily
fixed by forcing them to use `Estimate` with the default sweeper max
fee rate.
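An illustrative call site after the move (sketched under the
assumption that `Estimate` takes a fee estimator plus an explicit max
fee rate cap; exact parameters may differ):

```go
package sketch

import (
	"github.com/lightningnetwork/lnd/lnwallet/chainfee"
	"github.com/lightningnetwork/lnd/sweep"
)

// estimateSweepFee resolves a fee preference into a concrete fee rate,
// capped at the provided maximum. Callers that previously used
// DetermineFeePerKw without any cap now pass the sweeper's default max
// fee rate here.
func estimateSweepFee(feePref sweep.FeePreference,
	estimator chainfee.Estimator,
	maxFeeRate chainfee.SatPerKWeight) (chainfee.SatPerKWeight, error) {

	return feePref.Estimate(estimator, maxFeeRate)
}
```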
Reject any HTLCs that use us as an introduction point in a blinded
route if we have disabled route blinding. We have to do this after
we've processed the payload, because we only know we're an introduction
point once we've processed the payload itself.
If we received a payload with an encrypted data blob set, our
forwarding information should be set from the information in that
encrypted blob. This behavior is the same for introduction and relaying
nodes in a blinded route.
To separate blinded route parsing from payload parsing, we need to
return the parsed types map so that we can properly validate blinded
data payloads against what we saw in the onion.
When we have an HTLC that is part of a blinded route, we need to include
the next ephemeral blinding point in UpdateAddHtlc for the next hop. The
way that we handle the addition of this key is the same for introduction
nodes and relaying nodes within the route.
This commit introduces a blinding kit which abstracts over the
operations required to decrypt, deserialize and reconstruct forwarding
data from an encrypted blob of data included for nodes in blinded
routes.
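Roughly, the kit bundles the pieces needed to turn an encrypted blob
into forwarding info (stand-in types; the real kit lives in the hop
package):

```go
package sketch

import "github.com/btcsuite/btcd/btcec/v2"

// blindingKit abstracts the operations needed to turn the encrypted
// data in a blinded route into concrete forwarding information.
type blindingKit struct {
	// decryptBlob decrypts the encrypted_data blob using the blinding
	// point that applies to this hop.
	decryptBlob func(blinding *btcec.PublicKey, blob []byte) ([]byte, error)

	// updateAddBlinding is the blinding point from update_add_htlc,
	// set when we're a relaying node inside the blinded route.
	updateAddBlinding *btcec.PublicKey

	// incomingAmt is the incoming HTLC amount, needed to compute the
	// amount to forward from the blob's fee parameters.
	incomingAmt uint64
}

// forwardingInfo decrypts the blob and returns the plaintext that the
// forwarding information (next hop, amount, expiry) is reconstructed
// from. Deserialization and derivation of the next blinding point are
// elided in this sketch.
func (k *blindingKit) forwardingInfo(blinding *btcec.PublicKey,
	blob []byte) ([]byte, error) {

	plaintext, err := k.decryptBlob(blinding, blob)
	if err != nil {
		return nil, err
	}

	return plaintext, nil
}
```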
Add an option to disable route blinding, failing back any HTLC with
a blinding point set when we haven't got the feature enabled.
Note that this commit only handles the case where we're chosen as the
relaying node (where the blinding point is in update_add_htlc), we'll
add handling for the introduction node case once we get to handling of
blinded payloads.
When we have payments inside of a blinded route, we need to know
the incoming amount to be able to back-calculate the amount that
we need to forward using the forwarding parameters provided in the
blinded route encrypted data. This commit adds the payment amount
to our DecodeHopIteratorRequest so that it can be threaded down to
payment forwarding information creation in later commits.
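For reference, the back-calculation defined by the route blinding spec,
as a hedged sketch (placeholder names; overflow handling elided):

```go
package sketch

import "errors"

// calcAmtToForward back-calculates the amount to forward from the
// incoming HTLC amount using the fee parameters carried in the blinded
// route's encrypted data:
//
//	amt_to_forward = ceil((incoming - base_fee_msat) * 1e6 /
//	                      (1e6 + fee_proportional_millionths))
func calcAmtToForward(incomingMsat, baseFeeMsat,
	propFeeMillionths uint64) (uint64, error) {

	if incomingMsat < baseFeeMsat {
		return 0, errors.New("incoming amount below base fee")
	}

	numerator := (incomingMsat - baseFeeMsat) * 1_000_000
	denominator := 1_000_000 + propFeeMillionths

	// Round up so the relaying node never collects less than its
	// advertised fee.
	return (numerator + denominator - 1) / denominator, nil
}
```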