This can be used to allow any system to send a message to the RBF chan
closer if it knows the proper service key. In the future, we can use
this to redo the msgmux.Router in terms of the new actor abstractions.
In this commit, we implement the actor.ActorBehavior interface for
StateMachine. This enables the state machine executor to be registered
as an actor and have messages sent to it via a unique ServiceKey
that a concrete instance will set.
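A minimal sketch of what that contract could look like, assuming a
generic Receive-style interface (ActorBehavior, StateMachine, and the
exact signatures here are illustrative, not the package's actual API):

    package actorsketch

    import "context"

    // ActorBehavior is an assumed rendering of the behavior contract:
    // anything that can receive a typed message (and produce a typed
    // reply) can be driven as an actor.
    type ActorBehavior[M any, R any] interface {
        Receive(ctx context.Context, msg M) (R, error)
    }

    // StateMachine stands in for the real state machine executor; its
    // internal goroutine drains the events channel.
    type StateMachine[M any] struct {
        events chan M
    }

    // Receive satisfies the assumed interface by handing the message
    // to the state machine's event loop, respecting cancellation.
    func (s *StateMachine[M]) Receive(ctx context.Context,
        msg M) (struct{}, error) {

        select {
        case s.events <- msg:
            return struct{}{}, nil
        case <-ctx.Done():
            return struct{}{}, ctx.Err()
        }
    }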
In this commit, we now register the rbfCloseActor when we create the rbf
chan closer state machine. Now the RPC server no longer needs to
traverse a series of maps and pointers (rpcServer -> server -> peer ->
activeCloseMap -> rbf chan closer) to trigger a new fee bump.
Instead, it just creates the service key that it knows the closer can
be reached at, and sends a message to it using the returned
actorRef/router. We also hide additional details regarding the various
methods in play, as we only care about the types of messages we expect
to send and receive.
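Concretely, the flattened call path might look like the following
sketch; every name here (the service key scheme, Receptionist,
ActorRef, Ask/Await, the message types) is an assumption standing in
for the actual actor package API:

    package closersketch

    import "context"

    // BumpFeeReq/BumpFeeResp are hypothetical message types.
    type BumpFeeReq struct {
        FeeRate uint64
    }
    type BumpFeeResp struct{}

    // Future is the read side of an in-flight reply.
    type Future[R any] interface {
        Await(ctx context.Context) (R, error)
    }

    // ActorRef is a handle used to send messages to a located actor.
    type ActorRef[M any, R any] interface {
        Ask(ctx context.Context, msg M) Future[R]
    }

    // Receptionist resolves service keys to live actor references.
    type Receptionist interface {
        Find(key string) (ActorRef[BumpFeeReq, BumpFeeResp], error)
    }

    // bumpFee derives the well-known service key, looks up the
    // closer, sends the request, and awaits the reply. No pointer
    // chasing through server/peer/activeCloseMap is needed.
    func bumpFee(ctx context.Context, r Receptionist, chanPoint string,
        feeRate uint64) (BumpFeeResp, error) {

        ref, err := r.Find("rbf-closer/" + chanPoint)
        if err != nil {
            return BumpFeeResp{}, err
        }

        return ref.Ask(ctx, BumpFeeReq{FeeRate: feeRate}).Await(ctx)
    }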
In this commit, we create a new rbfCloseActor wrapper struct. This will
wrap the RPC operations to trigger a new RBF close bump within a new
actor. In the next commit, we'll register this actor and clean up the
call graph from the RPC server to this actor.
In this commit, we add a README that gives a general introduction to
the package and the motivation behind it. It serves as a manual for
developers who may wish to interact with the package.
In this commit, we add the actor system (along with the receptionist)
and the router.
An actor can be registered with the system, which allows other callers
to locate it via the receptionist in order to send messages to it.
Custom routers can be created for cases where multiple actors rely on
the same service key and request+response types. This can be used to
implement something similar to a worker pool.
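As a sketch of that worker pool idea, reusing the assumed
ActorRef/Future shapes from the sketch above, a custom router could
fan requests out round-robin across same-keyed actors:

    package routersketch

    import (
        "context"
        "sync"
    )

    // Future and ActorRef repeat the assumed shapes from the earlier
    // sketch.
    type Future[R any] interface {
        Await(ctx context.Context) (R, error)
    }

    type ActorRef[M any, R any] interface {
        Ask(ctx context.Context, msg M) Future[R]
    }

    // workerRouter exposes the same Ask surface as a single actor,
    // but distributes requests round-robin over a set of actors that
    // registered under one service key: a simple worker pool.
    type workerRouter[M any, R any] struct {
        mu      sync.Mutex
        next    int
        workers []ActorRef[M, R]
    }

    func (w *workerRouter[M, R]) Ask(ctx context.Context,
        msg M) Future[R] {

        w.mu.Lock()
        defer w.mu.Unlock()

        ref := w.workers[w.next%len(w.workers)]
        w.next++

        return ref.Ask(ctx, msg)
    }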
In this commit, we add the actual Actor implementation. We define a
series of types and interfaces that, in concert, describe our actor. An
actor has some ID, a reference (used to send messages to it), and also a
set of defined messages that it'll accept.
An actor can be implemented using a simple function if it's stateless.
Otherwise, a struct can implement the Receive method and handle its
internal message passing and state that way.
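A sketch of the two styles, with the Receive contract assumed as in
the earlier sketches:

    package behaviorsketch

    import "context"

    // ReceiveFunc adapts a bare function into the assumed Receive
    // contract: ideal for stateless actors.
    type ReceiveFunc[M any, R any] func(ctx context.Context,
        msg M) (R, error)

    func (f ReceiveFunc[M, R]) Receive(ctx context.Context,
        msg M) (R, error) {

        return f(ctx, msg)
    }

    // counter is a stateful actor: its state is only ever touched
    // from inside Receive, so the message loop is the single
    // synchronization point.
    type counter struct {
        n int
    }

    func (c *counter) Receive(_ context.Context,
        delta int) (int, error) {

        c.n += delta
        return c.n, nil
    }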
In this commit, we add two new fundamental data structures: Future[T]
and Promise[T].
A future is a response that might be ready at some point in the future.
This is already a common pattern in Go; we just make a type-safe
wrapper around the typical operations: block with a timeout, add a
callback for execution, and pipeline the response into a new future.
A promise is an intent to complete a future. Typically the caller
receives the future, and the callee is able to complete the future using
a promise.
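A minimal self-contained sketch of the pair (the real package's API
may differ in the details):

    package futuresketch

    import (
        "context"
        "sync"
    )

    // Future is the read handle: a result that may become ready later.
    type Future[T any] struct {
        done chan struct{}
        val  T
        err  error
    }

    // Promise is the write handle: the callee completes it once.
    type Promise[T any] struct {
        f    *Future[T]
        once sync.Once
    }

    func NewPromise[T any]() *Promise[T] {
        return &Promise[T]{f: &Future[T]{done: make(chan struct{})}}
    }

    func (p *Promise[T]) Future() *Future[T] { return p.f }

    // Complete resolves the future; extra calls are no-ops.
    func (p *Promise[T]) Complete(val T, err error) {
        p.once.Do(func() {
            p.f.val, p.f.err = val, err
            close(p.f.done)
        })
    }

    // Await blocks until the result is ready or the context expires.
    func (f *Future[T]) Await(ctx context.Context) (T, error) {
        select {
        case <-f.done:
            return f.val, f.err
        case <-ctx.Done():
            var zero T
            return zero, ctx.Err()
        }
    }

    // OnComplete registers a callback to run once the result is set.
    func (f *Future[T]) OnComplete(cb func(T, error)) {
        go func() {
            <-f.done
            cb(f.val, f.err)
        }()
    }

    // Then pipelines this future's result into a new future.
    func Then[T, U any](f *Future[T], fn func(T) (U, error)) *Future[U] {
        p := NewPromise[U]()
        f.OnComplete(func(val T, err error) {
            if err != nil {
                var zero U
                p.Complete(zero, err)
                return
            }
            p.Complete(fn(val))
        })
        return p.Future()
    }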
In this commit, we add a new CLI option to control whether or not we
disconnect on slow pongs. Due to the existence of head-of-the-line
blocking at various levels of abstraction (app buffer, slow processing,
TCP kernel buffers, etc.), if there's a flurry of gossip messages (e.g.
1K channel updates), then even with a reasonable processing latency, a
peer may still not read our ping in time.
To give users another option, we add a flag that allows users to
disable this behavior. The default behavior remains unchanged.
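For illustration, opting out might look as follows in lnd.conf; the
flag name below is an assumption, so check `lnd --help` for the
authoritative spelling:

    [Application Options]
    ; Assumed flag name: keep peers connected even when their pongs
    ; arrive late or not at all.
    no-disconnect-on-pong-failure=true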
In this commit, we ensure that any topology update is forced to go via
the `handleTopologySubscriptions` handler so that client subscriptions
and updates are handled correctly and in the correct order.
This fixes a bug where a client could miss a notification about a
channel being closed: if `PruneGraph` is called shortly after the
client subscribes, it notifies all subscribed clients, possibly before
the new client's subscription has actually been persisted.
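A self-contained sketch of the serialization idea (types and payloads
simplified, names other than handleTopologySubscriptions assumed):
both subscription requests and topology notifications funnel through
one goroutine, so a new subscription can never race a notification:

    type topologyManager struct {
        newSubs chan chan string // subscription requests
        updates chan string      // topology change notifications
    }

    // handleTopologySubscriptions is the single owner of the client
    // set; handling both event kinds in one loop fixes the ordering.
    func (t *topologyManager) handleTopologySubscriptions(
        quit <-chan struct{}) {

        var clients []chan string
        for {
            select {
            case sub := <-t.newSubs:
                clients = append(clients, sub)

            case update := <-t.updates:
                // Clients are assumed to drain their (buffered)
                // channels promptly in this sketch.
                for _, c := range clients {
                    c <- update
                }

            case <-quit:
                return
            }
        }
    }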
We remove the mutex that was previously held between DB calls and
calls that update the graphCache. This is done so that the underlying
DB calls can make use of batch requests, which they currently cannot
since the mutex prevents multiple requests from calling the methods at
once.
The cacheMu was originally added during a code refactor that moved the
`graphCache` out of the `KVStore` and into the `ChannelGraph`. The aim
was to have a best-effort way of ensuring that updates to the DB and
updates to the graphCache were as consistent/atomic as possible.
In this commit, we update the `tlv` package version which includes type
constraints on the `tlv.SizeBigSize` method parameter. This exposes a
bug in the MilliSatoshi Record method which is fixed here.
This was not caught in tests before since currently only our TLV
encoding code makes use of this SizeFunc (so we would write a 0 size to
disk). When we then read the bytes from disk and decode, we don't use
the SizeFunc: our MilliSatoshi decode method makes direct use of the
`tlv.DBigSize` function, which _currently does not make use of the `l`
length variable passed to it_. So it currently does correctly read the
data.
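To make the failure mode concrete, here is a self-contained
illustration (not the actual tlv code) of why an interface-based size
function returns 0 for a named type like MilliSatoshi, while a
constraint-based one cannot:

    package tlvsketch

    // MilliSatoshi mirrors the named uint64 type from lnwire.
    type MilliSatoshi uint64

    // sizeBigSizeLoose mimics the old interface{}-based approach: it
    // only recognizes *uint64, so a *MilliSatoshi silently yields a
    // size of 0, which is what was written to disk.
    func sizeBigSizeLoose(val interface{}) uint64 {
        switch v := val.(type) {
        case *uint64:
            return bigSizeLen(*v)
        default:
            return 0
        }
    }

    // sizeBigSizeTyped mimics the new constrained version: any type
    // whose underlying type is uint64 is accepted, so the caller can
    // no longer hit the silent-zero path.
    func sizeBigSizeTyped[T ~uint64](val *T) uint64 {
        return bigSizeLen(uint64(*val))
    }

    // bigSizeLen returns the length of the BigSize encoding of v.
    func bigSizeLen(v uint64) uint64 {
        switch {
        case v < 0xfd:
            return 1
        case v <= 0xffff:
            return 3
        case v <= 0xffffffff:
            return 5
        default:
            return 9
        }
    }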
Unmarshaling large mission control data sets can take so long that the
macaroon expires during that phase. We postpone the connection
establishment to give the user more time to answer the prompt.
Mission control may have outdated success/failure amounts for node pairs
that have channels with differing capacities. In that case we assume
the liquidity is still distributed as before and rescale the amounts to
the corresponding range.
We skip the evaluation of probabilities when the amount is lower than
the last success amount, as the probability would be evaluated to 1 in
that case.
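A sketch of both behaviors, with names assumed and the model
evaluation stubbed out:

    // successProbability rescales recorded amounts from the capacity
    // they were observed at to the current capacity, then
    // short-circuits amounts we already know succeed.
    func successProbability(amt, successAmt, failAmt, prevCap,
        nowCap float64) float64 {

        // Rescale the recorded amounts to the corresponding range of
        // the current capacity.
        scale := nowCap / prevCap
        successAmt *= scale
        failAmt *= scale

        // Anything at or below a previously successful amount is
        // assumed to still succeed, so skip the model evaluation.
        if amt <= successAmt {
            return 1
        }

        // Otherwise evaluate the bimodal model on the rescaled
        // amounts.
        return evaluateBimodal(amt, successAmt, failAmt, nowCap)
    }

    // evaluateBimodal is a stand-in for the full model evaluation;
    // the regularized density it integrates is sketched further below.
    func evaluateBimodal(amt, successAmt, failAmt, c float64) float64 {
        return 0 // omitted
    }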
If the success and fail amounts indicate that a channel doesn't obey a
bimodal distribution, we fall back to a uniform/linear success
probability model. This also helps to avoid numerical normalization
issues with the bimodal model.
This is achieved by adding a very small summand 1/c to the balance
distribution, P(x) ~ exp(-x/s) + exp((x-c)/s) + 1/c, which helps to
regularize the probability distribution. The distribution becomes finite
for intermediate balances where the exponentials would be evaluated to
an exact zero (float) otherwise. This regularization is effective in
edge cases and leads to falling back to a uniform model should the
bimodal model fail.
This affects the normalization to be s * (-2 * exp(-c/s) + 2 + 1/s) and
the primitive function to receive an extra term x/(cs).
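Transcribing the stated formulas into a small sketch (x is the
balance, s the scale, c the capacity):

    package bimodalsketch

    import "math"

    // pdf is the regularized balance distribution:
    // P(x) ~ exp(-x/s) + exp((x-c)/s) + 1/c for x in [0, c]. The 1/c
    // summand keeps P(x) finite where both exponentials underflow.
    func pdf(x, s, c float64) float64 {
        return math.Exp(-x/s) + math.Exp((x-c)/s) + 1/c
    }

    // norm integrates pdf over [0, c], matching
    // s * (-2*exp(-c/s) + 2 + 1/s) as stated above.
    func norm(s, c float64) float64 {
        return s * (-2*math.Exp(-c/s) + 2 + 1/s)
    }

    // primitive is the antiderivative of pdf/norm; the regularization
    // contributes the extra x/(c*s) term inside the factored bracket.
    func primitive(x, s, c float64) float64 {
        unnormed := s * (-math.Exp(-x/s) + math.Exp((x-c)/s) + x/(c*s))
        return unnormed / norm(s, c)
    }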
The previously added fuzz seed is expected to be fixed by this change.
This test demonstrates an error found via fuzzing by adding a
previously discovered seed; the error will be fixed in an upcoming
commit.
The following fuzz test is expected to fail:
go test -v -fuzz=Prob ./routing/