We check the context of the payment lifecycle at the beginning of
the `resumePayment` loop. This makes sure we always have the latest
state of the payment before deciding on the next steps in the
function `decideNextStep`.
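A minimal sketch of the loop shape (the `payment` type and
`fetchPayment` helper are illustrative stand-ins, not lnd's actual
API):

    package main

    import "fmt"

    // payment is a minimal stand-in for the payment's latest state.
    type payment struct {
        state string
    }

    // fetchPayment stands in for reloading the payment's state from
    // the database on every iteration.
    func fetchPayment() payment {
        return payment{state: "inflight"}
    }

    // decideNextStep decides based on the freshly loaded state only.
    func decideNextStep(p payment) string {
        if p.state == "inflight" {
            return "wait for result"
        }
        return "exit loop"
    }

    func main() {
        // resumePayment re-fetches the state at the top of every
        // iteration, so decideNextStep never acts on stale data.
        for i := 0; i < 2; i++ {
            p := fetchPayment()
            fmt.Println(decideNextStep(p))
        }
    }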
Replace all usages of the "github.com/go-errors/errors" and
"github.com/pkg/errors" packages with the standard library's "errors"
package. This ensures that error wrapping and `errors.Is` checks work
as expected.
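For illustration, the standard-library pattern this enables: wrapping
with fmt.Errorf's %w verb keeps a sentinel error matchable via
errors.Is.

    package main

    import (
        "errors"
        "fmt"
    )

    // ErrPaymentNotFound is a sentinel error callers can match on.
    var ErrPaymentNotFound = errors.New("payment not found")

    func fetchPayment() error {
        // Wrap with %w so the sentinel stays matchable down the
        // call stack.
        return fmt.Errorf("fetching payment: %w", ErrPaymentNotFound)
    }

    func main() {
        err := fetchPayment()

        // errors.Is unwraps the chain built by the %w verb.
        if errors.Is(err, ErrPaymentNotFound) {
            fmt.Println("matched:", err)
        }
    }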
Mission control may have outdated success/failure amounts for node
pairs whose channels have differing capacities. In that case we assume
we will still find the liquidity distributed as before and rescale the
amounts to the corresponding range.
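A sketch of the rescaling idea (the helper is illustrative, not lnd's
actual code): a recorded amount keeps its relative position within the
channel's capacity range.

    package main

    import "fmt"

    // rescaleAmount linearly maps an amount recorded at capacity
    // oldCap onto the range of newCap, preserving its relative
    // position.
    func rescaleAmount(amt, oldCap, newCap int64) int64 {
        if oldCap == 0 {
            return 0
        }
        return amt * newCap / oldCap
    }

    func main() {
        // A success amount at 60% of a 1M msat channel maps to 60%
        // of a 2M msat channel.
        fmt.Println(rescaleAmount(600_000, 1_000_000, 2_000_000))
    }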
We skip the evaluation of probabilities when the amount is lower than
the last success amount, since the probability would evaluate to 1 in
that case.
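Sketched (names are illustrative):

    package main

    import "fmt"

    // successProbability short-circuits amounts at or below the last
    // proven success amount: liquidity was already demonstrated
    // there, so the probability is 1 and the model is not evaluated.
    func successProbability(amt, successAmount int64,
        evalModel func(int64) float64) float64 {

        if amt <= successAmount {
            return 1.0
        }
        return evalModel(amt)
    }

    func main() {
        model := func(amt int64) float64 { return 0.5 } // stand-in
        fmt.Println(successProbability(100, 200, model)) // 1, skipped
        fmt.Println(successProbability(300, 200, model)) // 0.5
    }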
If the success and fail amounts indicate that a channel doesn't obey a
bimodal distribution, we fall back to a uniform/linear success
probability model. This also helps to avoid numerical normalization
issues with the bimodal model.
This is achieved by adding a very small summand, 1/c, to the balance
distribution P(x) ~ exp(-x/s) + exp((x-c)/s), which helps to
regularize the probability distribution. The distribution becomes
finite for intermediate balances where the exponentials would
otherwise evaluate to an exact (float) zero. This regularization is
effective in edge cases and amounts to falling back to a uniform model
should the bimodal model fail.
This changes the normalization to s * (-2 * exp(-c/s) + 2 + 1/s) and
adds an extra term x/(c*s) to the primitive function.
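Spelled out, consistent with the formulas above:

    int_0^c [exp(-x/s) + exp((x-c)/s) + 1/c] dx
        = s*(1 - exp(-c/s)) + s*(1 - exp(-c/s)) + 1
        = s * (-2*exp(-c/s) + 2 + 1/s),

which matches the new normalization. With the factor s pulled out, the
primitive becomes

    H(x) = s * (-exp(-x/s) + exp((x-c)/s) + x/(c*s)),

whose derivative recovers the regularized P(x); the 1/c summand is
exactly what contributes the extra x/(c*s) term.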
The previously added fuzz seed is expected to pass with this change.
This test adds a previously found fuzz seed, demonstrating an error
that will be fixed in an upcoming commit.
The following fuzz test is expected to fail:
go test -v -fuzz=Prob ./routing/
The bimodal model doesn't depend on the unit, which is why updating to
more realistic values doesn't require changes in tests.
We pin the scale in the fuzz test to not invalidate the corpus.
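A minimal Go fuzz sketch of the pinning idea (the model function is a
stand-in; lnd's actual test differs):

    package probability

    import "testing"

    // probability is a stand-in for the model under test.
    func probability(scale, capacity, successAmt, amt uint64) float64 {
        if capacity == 0 || amt > capacity {
            return 0
        }
        if amt <= successAmt {
            return 1
        }
        _ = scale // the real model evaluates the bimodal formula

        return 0.5
    }

    // FuzzProb pins the scale to a constant instead of deriving it
    // from fuzz input, so changing the production default does not
    // invalidate the saved corpus.
    func FuzzProb(f *testing.F) {
        const pinnedScale = 300_000 // msat; pinned, not fuzzed

        f.Fuzz(func(t *testing.T, capacity, successAmt, amt uint64) {
            p := probability(pinnedScale, capacity, successAmt, amt)
            if p < 0 || p > 1 {
                t.Fatalf("probability out of range: %v", p)
            }
        })
    }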
We can remove paymentFailureInfo since we can gate the result
interpretation on the source index: if we don't have a source index,
we are dealing with an unknown payment outcome because we couldn't
pinpoint the failing hop.
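An illustrative sketch of the gating (not lnd's actual types):

    package main

    import "fmt"

    // interpretResult gates the result interpretation on the source
    // index: with no index we could not pinpoint the failing hop, so
    // the payment outcome is unknown.
    func interpretResult(sourceIdx *int) string {
        if sourceIdx == nil {
            return "unknown payment outcome"
        }
        return fmt.Sprintf("failure attributed to hop %d", *sourceIdx)
    }

    func main() {
        fmt.Println(interpretResult(nil))

        idx := 2
        fmt.Println(interpretResult(&idx))
    }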
Whether we let the aux bandwidth manager decide a channel's bandwidth
should depend not only on the channel but also on whether the HTLC is
a custom HTLC.
To get more precise bandwidth reports, we also need to provide this
method with the latest HTLC view. Since aux data is committed to in
the channel commitment, some uncommitted HTLCs may not be accounted
for, so we need to provide them manually via the HTLC view.
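A sketch tying both points together (all names are assumptions, not
lnd's actual API):

    package htlcswitch

    // htlcView carries the HTLCs not yet committed to the channel
    // commitment.
    type htlcView struct {
        uncommitted []int64 // amounts of uncommitted HTLCs
    }

    // auxBandwidthManager reflects the two changes: the decision is
    // gated on both the channel and the HTLC, and the bandwidth
    // query receives the latest HTLC view so uncommitted HTLCs are
    // accounted for.
    type auxBandwidthManager interface {
        // shouldHandle reports whether the aux manager decides the
        // bandwidth for this channel/HTLC combination.
        shouldHandle(isCustomChannel, isCustomHTLC bool) bool

        // bandwidth reports the usable bandwidth given the latest
        // HTLC view.
        bandwidth(view *htlcView) int64
    }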
For legacy payments, the hash field will be nil, and we need to use
the payment identifier instead. We have multiple ways to fix this:
A trivial solution is to simply call `sharder.GetHash` in
`collectResult` and pass the hash to `attempt.Circuit()`, which ends
up with multiple methods taking the hash. This is bad because it's
confusing why the methods of `HTLCAttempt` need to take another hash
value when it already has the info via `HTLCAttempt.Hash`. We don't
want an exceptional case to influence our main flow.
We could instead patch the field `HTLCAttempt.Hash` and set it to the
payment hash if it's nil, which can be done in `collectResult`. This
is also suboptimal, as it means every HTLC attempt, legacy or not, now
needs to bear this context.
The best way is to patch the field in `reloadInflightAttempts`, since
we are sure any new payments made won't be legacy, and the only source
of legacy payments is reloading existing payments.
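A sketch of the chosen approach (the types are trimmed stand-ins for
lnd's actual ones):

    package routing

    // hash mirrors lnd's 32-byte lntypes.Hash, trimmed to keep the
    // sketch self-contained.
    type hash = [32]byte

    type htlcAttempt struct {
        // Hash is nil for legacy attempts loaded from disk.
        Hash *hash
    }

    // reloadInflightAttempts patches a nil hash on reload with the
    // payment identifier. New attempts are never legacy, so the main
    // flow stays untouched.
    func reloadInflightAttempts(attempts []*htlcAttempt,
        paymentID hash) {

        for _, a := range attempts {
            if a.Hash == nil {
                id := paymentID
                a.Hash = &id
            }
        }
    }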
We do this in preparation for moving the channel cache population
logic out of the constructor and into the Start method. Later on
(when topology subscription is moved to the ChannelGraph) we will also
have a goroutine that needs to be kicked off and stopped.
In this commit, we let the ChannelGraph be responsible for populating
the graphCache and then passing it to the KVStore. This is a first step
in moving the graphCache completely out of the KVStore layer.
We use this struct to pass NewChannelGraph anything it needs to
initialize the KVStore that it houses. This will allow us to add
ChannelGraph-specific options.
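A sketch of the config-plus-options shape this enables (field and
option names are illustrative assumptions):

    package graphdb

    // Config houses everything NewChannelGraph needs to init the
    // KVStore it wraps.
    type Config struct {
        KVStorePath string
    }

    // ChannelGraph houses the KVStore (trimmed stand-in).
    type ChannelGraph struct {
        useCache bool
    }

    // ChanGraphOption configures ChannelGraph-specific behavior,
    // kept separate from the KVStore's own options.
    type ChanGraphOption func(*ChannelGraph)

    // WithGraphCache is an illustrative ChannelGraph-level option.
    func WithGraphCache(use bool) ChanGraphOption {
        return func(g *ChannelGraph) { g.useCache = use }
    }

    // NewChannelGraph takes a single Config plus variadic options,
    // so new ChannelGraph-specific options can be added without
    // touching existing callers.
    func NewChannelGraph(cfg *Config,
        opts ...ChanGraphOption) *ChannelGraph {

        g := &ChannelGraph{}
        for _, opt := range opts {
            opt(g)
        }
        _ = cfg // the real constructor inits the KVStore from cfg

        return g
    }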
In this commit, we add a `GraphSession` method to the `ChannelGraph`.
This method provides the caller with access to a `NodeTraverser`,
which pathfinding uses to create a graph "session" over which to
perform the queries of a single pathfinding attempt. With this
refactor, we hide details such as DB transaction creation and commits
from the caller, so pathfinding no longer needs to remember to "close
the graph session". With this commit, the `graphsession` package can
be removed completely.
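A sketch of the session pattern (the method and type names follow the
text; the bodies and the readTx type are assumptions):

    package graphdb

    // readTx models a database read transaction (lnd uses kvdb.RTx).
    type readTx interface {
        Rollback() error
    }

    // NodeTraverser gives pathfinding read access to the graph for
    // the duration of one session.
    type NodeTraverser struct {
        tx readTx
    }

    // ChannelGraph is trimmed to what the sketch needs.
    type ChannelGraph struct {
        beginReadTx func() (readTx, error)
    }

    // GraphSession runs cb against a single read transaction and
    // hides the transaction lifecycle: the transaction is created
    // here and always released, so pathfinding cannot forget to
    // close the session.
    func (c *ChannelGraph) GraphSession(
        cb func(graph *NodeTraverser) error) error {

        tx, err := c.beginReadTx()
        if err != nil {
            return err
        }
        defer func() { _ = tx.Rollback() }()

        return cb(&NodeTraverser{tx: tx})
    }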
In preparation for the next commit, where we will start hiding
underlying graph details (such as the fact that a graph session needs
to be "closed" after pathfinding is done with it), we refactor things
here so that the main pathfinding logic runs in a callback.
In preparation for having the ChannelGraph directly implement the
`routing.Graph` interface, we rename the `ForEachNodeChannel` method
to `ForEachNodeDirectedChannel`. The ChannelGraph already uses the
`ForEachNodeChannel` name, and the new name is more fitting since the
ChannelGraph currently has a `ForEachNodeDirectedChannelTx` method
that passes the same DirectedChannel type to the given callback.
Add the `Tx` suffix to both ForEachNodeDirectedChannelTx and
FetchNodeFeatures temporarily so that we free up the original names for
other use. The renamed methods will be removed or unexported in an
upcoming commit. The aim is to have no exported methods on the
ChannelGraph that accept a kvdb.RTx as a parameter.
For consistency in the graphsession.graph interface, we let the
FetchNodeFeatures method take a read transaction, just like
ForEachNodeDirectedChannel. This is nice because then all calls within
the same pathfinding transaction use the same read transaction.
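Sketched as an interface (signatures are assumptions based on the
description; rTx stands in for kvdb.RTx):

    package graphsession

    // rTx stands in for kvdb.RTx.
    type rTx interface{}

    // graph is the view pathfinding consumes: both methods accept
    // the same read transaction, so every query in one pathfinding
    // session can run inside a single transaction.
    type graph interface {
        ForEachNodeDirectedChannel(tx rTx, node [33]byte,
            cb func(channel interface{}) error) error

        FetchNodeFeatures(tx rTx, node [33]byte) ([]byte, error)
    }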