In this commit, we introduce a SQLStoreConfig struct which, for the time
being, only holds the ChainHash of the genesis block of the chain this
node is running on. This is used to reconstruct lnwire messages from
what we have persisted in the DB. It means we don't need to persist the
chain hash of gossip messages, since we know it will always be the same
for a given node. If a node were to be started on a different network,
the lnwire messages it reconstructs for gossip would be invalid.
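As a rough sketch (the exact field set and package layout are assumptions, not the actual lnd code), the config could look something like this:

```go
package graphdb

import "github.com/btcsuite/btcd/chaincfg/chainhash"

// SQLStoreConfig holds configuration for the SQL-backed graph store. For
// now it only carries the genesis block hash of the chain this node runs
// on, which is re-attached to gossip messages when they are reconstructed
// from the DB.
type SQLStoreConfig struct {
	// ChainHash is the genesis block hash of the backing chain.
	ChainHash chainhash.Hash
}
```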
Previously, when deciding whether a UTXO is spent, we accepted a height
hint as the starting block at which to look for the spending tx during
the rescan process. If the UTXO was already found to be spent before the
rescan started, we would update the height hint to the spend height, but
only if the latter was greater. This meant that if the user-specified
hint was greater than the actual spending height, the UTXO would never
be found as spent. We now fix this by always using the spend height as
the hint.
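A minimal sketch of the corrected hint selection (names are illustrative, not the actual lnd code):

```go
// pickRescanHeight returns the block height the spend rescan should start
// from. If the outpoint is already known to be spent, the spend height is
// always used, regardless of the caller-provided hint. Previously the
// spend height was only used when it exceeded the hint, so a hint above
// the true spend height meant the spend was never reported.
func pickRescanHeight(heightHint, spendHeight uint32, alreadySpent bool) uint32 {
	if alreadySpent {
		return spendHeight
	}

	return heightHint
}
```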
This commit adds incoming and outgoing channel ID filters to the forwarding history request, allowing events to be filtered by the particular channel they were received from or forwarded to.
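A sketch of how such filters might be applied when assembling the response (the event type and field names here are trimmed-down stand-ins, not the exact lnd types):

```go
// forwardingEvent is a trimmed, hypothetical stand-in for the real event
// type; only the fields relevant to filtering are shown.
type forwardingEvent struct {
	IncomingChanID uint64
	OutgoingChanID uint64
}

// keepEvent returns true if the event matches the requested incoming and
// outgoing channel ID filters. An empty filter matches everything.
func keepEvent(e forwardingEvent, incoming, outgoing map[uint64]struct{}) bool {
	if len(incoming) > 0 {
		if _, ok := incoming[e.IncomingChanID]; !ok {
			return false
		}
	}
	if len(outgoing) > 0 {
		if _, ok := outgoing[e.OutgoingChanID]; !ok {
			return false
		}
	}

	return true
}
```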
In this commit, we fix a flake in the
`TestRbfCloseClosingNegotiationLocal/send_offer_rbf_wrong_local_script`
test.
This flake can happen if the test shuts down _before_ the state machine
has actually processed the sent event. In that case, the expectations
fire and we find that the error was never sent.
To resolve this, we add a new wrapper function that uses a synchronous
channel send to assert that the error has been sent before we exit the
test.
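A rough sketch of the pattern (helper name and timeout are illustrative, not the actual test code):

```go
import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// assertErrorSent is a sketch of the wrapper idea: the error is delivered
// over a channel, and the test blocks until it arrives, so the state
// machine is guaranteed to have processed the sent event before the test
// shuts down.
func assertErrorSent(t *testing.T, errChan <-chan error) {
	t.Helper()

	select {
	case err := <-errChan:
		require.Error(t, err)
	case <-time.After(5 * time.Second):
		t.Fatal("timed out waiting for state machine error")
	}
}
```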
Implement ForEachChannelCacheable which is like ForEachChannel but whose
callback takes the cached versions of the channel info and policies. This
is then used during graph cache population. It will be useful once the
SQL implementation is added, since it lets us reduce the number of DB
round trips during cache population.
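A sketch of the shape of the two iterators side by side (signatures are assumptions based on this and the following commits; the real lnd definitions may differ slightly):

```go
// graphStore shows the intended relationship between the two iterators:
// ForEachChannelCacheable mirrors ForEachChannel, but its callback only
// receives the cached representations that the graph cache needs.
type graphStore interface {
	ForEachChannel(cb func(*models.ChannelEdgeInfo,
		*models.ChannelEdgePolicy, *models.ChannelEdgePolicy) error) error

	ForEachChannelCacheable(cb func(*models.CachedEdgeInfo,
		*models.CachedEdgePolicy, *models.CachedEdgePolicy) error) error
}
```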
Define a new CachedEdgeInfo type and let the graph cache's AddChannel
use it. This will later let us (for the SQL implementation of the graph
DB) load from the DB only what we actually need for the graph cache.
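A sketch of what such a trimmed-down type might carry (the exact field set is an assumption; the point is that the cache only needs a small subset of the full edge info):

```go
import (
	"github.com/btcsuite/btcd/btcutil"
	"github.com/lightningnetwork/lnd/routing/route"
)

// CachedEdgeInfo is a cut-down view of a channel edge holding only what
// the in-memory graph cache uses.
type CachedEdgeInfo struct {
	// ChannelID is the short channel ID of the edge.
	ChannelID uint64

	// NodeKey1Bytes and NodeKey2Bytes identify the two endpoints.
	NodeKey1Bytes route.Vertex
	NodeKey2Bytes route.Vertex

	// Capacity is the total capacity of the channel.
	Capacity btcutil.Amount
}
```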
Update the GraphCache.UpdatePolicy method to take a
`models.CachedEdgePolicy` instead of a `models.ChannelEdgePolicy`.
Doing this will allow us later on to only fetch the necessary info for
populating the CachedEdgePolicy when we are populating the cache via
UpdatePolicy.
Remove the previously added TODOs which would extract InboundFee info
from the ExtraOpaqueData of a ChannelUpdate at the time of
ChannelEdgePolicy construction. These can now be replaced by using the
newly added InboundFee record on the ChannelUpdate message.
Also add a temporary replace directive for the tlv package, which can be
removed as soon as the PR that includes this commit is merged and a new
tag for the tlv package has been created.
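For reference, a temporary replace of this kind typically points the module at the in-tree copy until a tag exists; the exact form used in the PR may differ:

```
// go.mod: temporary, to be dropped once a new tlv tag is published.
replace github.com/lightningnetwork/lnd/tlv => ./tlv
```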
Now that we know that the InboundFee on the ChannelEdgePolicy is always
set appropriately, we can update the GraphCache UpdatePolicy method to
take the InboundFee directly from the ChannelEdgePolicy object.
In this commit, we make sure to set the new field wherever appropriate.
This will be any place where the ChannelEdgePolicy is constructed other
than its disk deserialisation.
Like the previous commit, here we can start directly using the
InboundFee on the models.ChannelEdgePolicy object since we know we read
it from disk and so the InboundFee field will be populated accordingly.
NOTE: unlike the previous commit, the behaviour here changes slightly:
previously we would error out if TLV parsing failed, whereas now the DB
call simply skips the error and returns a nil policy. This should be OK
since this code explicitly deals only with our own updates, and so our
TLV should always be valid.
For any call-site where we extract inbound fees from a
models.ChannelEdgePolicy object that was deserialised from disk, we can
now just use the new InboundFee field on the object since we know that
it would have been populated at deserialisation time.
Note that for all these call-sites, if decoding of the TLV stream
previously failed, the error was ignored and the edge was simply skipped.
This behaviour remains the same, given how ErrParsingExtraTLVBytes is
handled at the DB layer.
Here we add an explicit InboundFee field to the ChannelEdgePolicy
struct. Then, in the graph KVStore, at deserialisation time, we extract
the InboundFee from the ExtraOpaqueData. Currently we do this at higher
levels but we are going to move it to the DB layer so that when we add
the SQL implementation of the graph store, we can have explicit columns
for inbound fees. We do need to account for the fact that we may already
have invalid TLV persisted, though, and we don't necessarily want to fail
when deserialising it. So we now return ErrParsingExtraTLVBytes if we
fail to parse the extra bytes as TLV, and we let callers handle that
error themselves; they don't necessarily need to fail when they receive
one.
As of this commit, we can now expect the InboundFee field of a
ChannelEdgePolicy to be set (if inbound fees are set on the policy) for
any update that we read from disk.
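A loose sketch of the described flow, using hypothetical helper names rather than the exact lnd plumbing:

```go
// hydrateInboundFee is a hypothetical helper illustrating the new DB-layer
// behaviour: parse the persisted ExtraOpaqueData as a TLV stream, surface
// ErrParsingExtraTLVBytes (instead of failing hard) when the bytes are not
// valid TLV, and populate the new InboundFee field when the record is
// present. Callers may treat the error as non-fatal and skip the policy.
func hydrateInboundFee(policy *ChannelEdgePolicy) error {
	records, err := decodeTLVRecords(policy.ExtraOpaqueData) // hypothetical
	if err != nil {
		return fmt.Errorf("%w: %v", ErrParsingExtraTLVBytes, err)
	}

	if fee, ok := records.inboundFee(); ok { // hypothetical accessor
		policy.InboundFee = &fee
	}

	return nil
}
```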
In this commit, we start validating the extra opaque data of a channel
edge policy before persisting it. We just check that the data is valid
TLV.
NOTE: we recently [started
validating](1410a0949d)
this at the lnwire level. So really, no new update will reach the DB
layer without this already being checked. But we check it again here so
that the DB API behaves correctly as its own unit.
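Sketched, the pre-persist check is just (helper name hypothetical):

```go
// validatePolicyExtraBytes is a hypothetical sketch of the new check run
// before a channel edge policy is written: the extra opaque data must at
// least decode as a well-formed TLV stream, mirroring the validation that
// already happens at the lnwire layer.
func validatePolicyExtraBytes(extra []byte) error {
	if _, err := decodeTLVRecords(extra); err != nil { // hypothetical
		return fmt.Errorf("invalid extra opaque data: %w", err)
	}

	return nil
}
```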
This commit simplifies the code of the ForwardingLog.Query method by
removing a confusing for-loop. The for-loop makes it seem as though
multiple events could be encoded under a single timestamp. But from the
time that this forwarding log was introduced, it was never possible to
encode multiple events under the same timestamp and so this loop will
never execute successfully more than once per timestamp and can thus be
removed. This paves the way for future expansions of the method to be
added easily.
See the initial commit that introduced this code [here](f2cd668bcf).
In that commit you can see that, from the start, it was never possible to
have more than one event under a single timestamp, since any previous
event at that timestamp would be overwritten. Then see [this commit](97c73706b5)
where even more protection was added to ensure that each event had a
unique timestamp.
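A sketch of the simplified read path (bucket interface and decode helper are hypothetical stand-ins): because each key in the events bucket is a unique timestamp holding exactly one encoded event, a single decode per key suffices and no inner loop is needed.

```go
// readForwardingEvents is a hypothetical sketch of the simplified query
// loop: one decode per timestamp key, no nested loop over events.
func readForwardingEvents(eventsBucket kvBucket) ([]forwardingEvent, error) {
	var events []forwardingEvent

	err := eventsBucket.ForEach(func(timestamp, eventBytes []byte) error {
		event, err := decodeForwardingEvent(eventBytes) // hypothetical
		if err != nil {
			return err
		}

		events = append(events, event)
		return nil
	})

	return events, err
}
```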