In this commit, we revamp the old message-based rate limiting. First, we
move to metering by bytes/s instead of messages/s. The old logic had an
error in that it limited groups of message replies rather than each
individual message. With this new approach, we'll use the newly added
SerializedSize method to implement fine-grained bandwidth metering.
We need to pick two values: the burst rate and the msg bytes rate. The
burst rate is the maximum amount that can be sent in a given period of
time. We need to set this above 65 KB (the maximum message size),
otherwise no messages can be sent at all. The bucket starts with this
many tokens (bytes). As those are depleted, the tokens are refilled at
the msg bytes rate.
As conservative values, we've chosen 200 KB as the burst rate, and 100
KB/s as the limit.
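To make the token bucket concrete, here's a minimal sketch using Go's
golang.org/x/time/rate limiter (a stand-in for lnd's actual
implementation; the msgSize value stands in for a message's
SerializedSize):

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

const (
	// burstBytes is the bucket size: the max number of bytes that can
	// be sent at once. This must exceed the 65 KB max message size,
	// otherwise a max-sized message could never pass the limiter.
	burstBytes = 200 * 1024

	// byteRate is the steady-state refill rate of the bucket, in
	// bytes per second.
	byteRate = 100 * 1024
)

func main() {
	// The bucket starts full (burstBytes tokens) and refills at
	// byteRate tokens/s. Each token represents one byte.
	limiter := rate.NewLimiter(rate.Limit(byteRate), burstBytes)

	// For each outgoing message, charge its serialized size against
	// the bucket, blocking until enough tokens have accumulated.
	msgSize := 4096 // stand-in for msg.SerializedSize()
	if err := limiter.WaitN(context.Background(), msgSize); err != nil {
		fmt.Println("rate limit wait failed:", err)
		return
	}
	fmt.Printf("sent %d bytes within the rate limit\n", msgSize)
}
```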
We plan to later add an option for a remote graph source, which will be
managed from the ChannelGraph. In such a set-up, a node would rely on
the remote graph source for graph updates instead of on gossip sync. In
this scenario, however, our topology subscription logic should still
notify clients of all updates, so it makes more sense to have that logic
live in the ChannelGraph, where we can also send out the updates we
receive from the remote graph.
The test as it stands today does not make sense: it adds a partial/shell
node to the graph via AddLightningNode, which will never happen in
practice since that method is only ever triggered by the gossiper, which
only calls it with a full node announcement. Shell/partial nodes are
only ever added via AddChannelEdge, which will insert a partial node if
we are adding a channel edge whose node pub keys we don't yet have a
node entry for. So we adjust the test to use this more accurate flow.
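The adjusted flow looks roughly like this (helper names are
hypothetical, shown only to illustrate the shape of the test):

```go
// Sketch of the adjusted test flow. createTestGraph, randomChannelEdge
// and assertShellNode are hypothetical helpers used for illustration,
// not lnd's actual test utilities.
func TestShellNodeCreatedByAddChannelEdge(t *testing.T) {
	graph := createTestGraph(t)

	// Build a channel edge whose two node pub keys have no existing
	// node entries in the graph.
	edge, node1, node2 := randomChannelEdge(t)

	// AddChannelEdge is the only code path that creates shell nodes:
	// it inserts a partial node entry for each unknown pub key.
	require.NoError(t, graph.AddChannelEdge(edge))

	// Both endpoints should now exist as shell/partial nodes rather
	// than full nodes.
	assertShellNode(t, graph, node1)
	assertShellNode(t, graph, node2)
}
```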
We do this in preparation for moving the channel cache population logic
out of the constructor and into the Start method. Later on (when
topology subscription is moved to the ChannelGraph), we will also have a
goroutine that needs to be kicked off and stopped.
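The lifecycle being prepared for is the standard Go start/stop pattern,
sketched below; populateCache and handleTopologySubscriptions are
placeholders, not the actual lnd methods:

```go
package graph

import "sync"

// ChannelGraph sketch: only the lifecycle fields are shown.
type ChannelGraph struct {
	started sync.Once
	stopped sync.Once

	// quit is created in the constructor and closed in Stop to
	// signal background goroutines to exit.
	quit chan struct{}
	wg   sync.WaitGroup
}

// Start populates the caches and kicks off background goroutines.
func (c *ChannelGraph) Start() error {
	var err error
	c.started.Do(func() {
		// Cache population moves here from the constructor.
		if err = c.populateCache(); err != nil {
			return
		}

		// Later: the topology subscription goroutine is launched
		// here as well.
		c.wg.Add(1)
		go func() {
			defer c.wg.Done()
			c.handleTopologySubscriptions()
		}()
	})
	return err
}

// Stop signals all goroutines to exit and waits for them.
func (c *ChannelGraph) Stop() error {
	c.stopped.Do(func() {
		close(c.quit)
		c.wg.Wait()
	})
	return nil
}

// Stubs so the sketch compiles; the real logic lives elsewhere.
func (c *ChannelGraph) populateCache() error          { return nil }
func (c *ChannelGraph) handleTopologySubscriptions()  {}
```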
Here, we move the graph cache writes for AddLightningNode,
DeleteLightningNode, AddChannelEdge and MarkEdgeLive from the KVStore to
the ChannelGraph. Since these are writes, the cache is only updated if
the underlying DB write succeeds.
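After the move, the write path looks roughly like this (a sketch; the
real method and cache signatures in lnd differ):

```go
// Sketch: persist first, and only touch the in-memory cache once the
// DB write has succeeded, so a failed write can never leave the cache
// out of sync with the database.
func (c *ChannelGraph) AddLightningNode(node *LightningNode) error {
	// Write through to the backing KVStore.
	if err := c.store.AddLightningNode(node); err != nil {
		return err
	}

	// The DB write succeeded, so mirror the change into the cache.
	c.cache.AddNode(node)

	return nil
}
```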
This commit moves the graph cache checks for FetchNodeFeatures,
ForEachNodeDirectedChannel, GraphSession and ForEachNodeCached from the
KVStore to the ChannelGraph. Since the ChannelGraph is currently just a
pass-through for the KVStore methods, all that needs to be done for
calls to go via the ChannelGraph instead of directly to the KVStore is
for the ChannelGraph to implement those methods itself.
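The read path is the mirror image of the write path (again a sketch
with approximate signatures):

```go
// Sketch: reads are served from the cache when it's enabled, falling
// back to a direct KVStore read otherwise.
func (c *ChannelGraph) FetchNodeFeatures(node route.Vertex) (
	*lnwire.FeatureVector, error) {

	// Serve from the in-memory cache when it's available.
	if c.cache != nil {
		return c.cache.GetFeatures(node), nil
	}

	// Otherwise fall back to a direct DB read.
	return c.store.FetchNodeFeatures(node)
}
```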
In this commit, we let the ChannelGraph be responsible for populating
the graphCache and then passing it to the KVStore. This is a first step
in moving the graphCache completely out of the KVStore layer.
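Roughly, the construction flow becomes the following (a sketch; the
WithGraphCache option and other names here are illustrative):

```go
// Sketch: the ChannelGraph now builds the cache itself and hands it to
// the KVStore, which keeps it in sync with DB writes until the cache
// is fully lifted out of that layer.
func NewChannelGraph(db kvdb.Backend) (*ChannelGraph, error) {
	cache := NewGraphCache(defaultPreAllocCacheNumNodes)

	store, err := NewKVStore(db, WithGraphCache(cache))
	if err != nil {
		return nil, err
	}

	return &ChannelGraph{
		cache: cache,
		store: store,
	}, nil
}
```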
And use this struct to pass NewChannelGraph anything it needs to be able
to init the KVStore that it houses. This will allow us to add
ChannelGraph-specific options.
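A sketch of the struct (field names illustrative):

```go
// Config holds everything NewChannelGraph needs to init the KVStore it
// houses, and leaves room for ChannelGraph-specific options to be
// added alongside later.
type Config struct {
	// Db is the database backend the housed KVStore is built on.
	Db kvdb.Backend

	// KVStoreOpts are passed through to the KVStore constructor.
	KVStoreOpts []KVStoreOptionModifier
}
```

Callers then construct the graph with NewChannelGraph(&Config{...})
rather than a growing list of positional parameters.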