This commit introduces the ResultOpt type, which represents an operation
that can either succeed with an optional final value or fail with an
error.
The .gitignore file is also updated to exclude specific files related to
the `aider` tool.
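A minimal sketch of what such a type could look like, assuming generic helpers in the spirit of lnd's fn package; the constructor names below are illustrative, not the actual API:

```go
// Option models a value that may or may not be present.
type Option[T any] struct {
	value T
	isSet bool
}

// ResultOpt represents an operation that either succeeds with an optional
// final value or fails with an error.
type ResultOpt[T any] struct {
	value Option[T]
	err   error
}

// NewSuccess returns a ResultOpt carrying a concrete value.
func NewSuccess[T any](v T) ResultOpt[T] {
	return ResultOpt[T]{value: Option[T]{value: v, isSet: true}}
}

// NewSuccessOpt returns a successful ResultOpt with no value set.
func NewSuccessOpt[T any]() ResultOpt[T] {
	return ResultOpt[T]{}
}

// NewFailure returns a ResultOpt carrying an error.
func NewFailure[T any](err error) ResultOpt[T] {
	return ResultOpt[T]{err: err}
}
```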
Implement ForEachChannelCacheable, which is like ForEachChannel except that
its callback takes the cached versions of the channel info and policies.
This is then used during graph cache population. It will be useful once the
SQL implementation is added, since it lets us reduce the number of DB trips
needed to populate the cache.
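A hedged sketch of how cache population could use the new method (the ChannelGraph receiver and its db/graphCache fields are illustrative names, not the exact lnd code):

```go
func (c *ChannelGraph) populateCache() error {
	return c.db.ForEachChannelCacheable(func(info *models.CachedEdgeInfo,
		policy1, policy2 *models.CachedEdgePolicy) error {

		// Only the pared-down cached structs are loaded here, which
		// keeps DB round trips small for the upcoming SQL backend.
		c.graphCache.AddChannel(info, policy1, policy2)

		return nil
	})
}
```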
Define a new CachedEdgeInfo type and let the graph cache's AddChannel use it.
This will later let us (for the SQL implementation of the graph DB) load from
the DB only what the graph cache actually needs.
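Illustrative sketch only: the field set below is an assumption about what the cache minimally needs (using lnd's route and btcd's btcutil types), not the exact definition.

```go
// CachedEdgeInfo carries just the static channel data the graph cache needs,
// so the SQL backend can later populate it without loading the full
// ChannelEdgeInfo.
type CachedEdgeInfo struct {
	ChannelID     uint64
	NodeKey1Bytes route.Vertex
	NodeKey2Bytes route.Vertex
	Capacity      btcutil.Amount
}
```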
Update the GraphCache.UpdatePolicy method to take a
`models.CachedEdgePolicy` instead of a `models.ChannelEdgePolicy`.
Doing this will later allow us to fetch only the info necessary for
populating the CachedEdgePolicy when the cache is populated via
UpdatePolicy.
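A before/after sketch of the signature change (the parameters beyond the policy itself are assumptions about the existing signature):

```go
// Before: the cache pulled what it needed out of the full policy.
func (c *GraphCache) UpdatePolicy(policy *models.ChannelEdgePolicy,
	fromNode, toNode route.Vertex, edge1 bool) {
	// ...
}

// After: callers hand over only the pared-down cached policy.
func (c *GraphCache) UpdatePolicy(policy *models.CachedEdgePolicy,
	fromNode, toNode route.Vertex, edge1 bool) {
	// ...
}
```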
Remove the previously added TODOs which would extract InboundFee info
from the ExtraOpaqueData of a ChannelUpdate at the time of
ChannelEdgePolicy construction. These can now be replaced by using the
newly added InboundFee record on the ChannelUpdate message.
Also add a temporary replace for the tlv package which can be removed as
soon as the PR that includes this commit is merged and a new tag for the
tlv package has been created.
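For illustration, a temporary replace of this kind in go.mod typically looks something like the following (the version/pseudo-version here is a placeholder, not the actual one used):

```
replace github.com/lightningnetwork/lnd/tlv => github.com/lightningnetwork/lnd/tlv v1.x.x-0.20yymmddhhmmss-<commit>
```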
Now that we know that the InboundFee on the ChannelEdgePolicy is always
set appropriately, we can update the GraphCache UpdatePolicy method to
take the InboundFee directly from the ChannelEdgePolicy object.
In this commit, we make sure to set the new field wherever appropriate, i.e.
any place where the ChannelEdgePolicy is constructed other than at its disk
deserialisation.
As in the previous commit, here we can start using the InboundFee on the
models.ChannelEdgePolicy object directly, since we know it was read from disk
and so the InboundFee field will have been populated accordingly.
NOTE: unlike the previous commit, the behaviour changes slightly here:
previously we would error out if TLV parsing failed, whereas now the DB call
just skips the error and returns a nil policy. This should be OK since this
code explicitly deals only with our own updates, and so our TLV should always
be valid.
For any call-site where we extract inbound fees from a
models.ChannelEdgePolicy object that was deserialised from disk, we can
now just use the new InboundFee field on the object since we know that
it would have been populated at deserialisation time.
Note that for all these call-sites, if a failure previously happened when
decoding the TLV stream, the error would be ignored and the edge would just
be skipped. This behaviour remains the same given how
ErrParsingExtraTLVBytes is handled at the DB layer.
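A hedged before/after sketch of such a call-site (the "before" shape is paraphrased, and the exact record/field types may differ):

```go
// Before: re-derive the inbound fee from the raw extra TLV bytes and skip
// the edge if the bytes cannot be parsed.
//
//	var inboundFee lnwire.Fee
//	_, err := policy.ExtraOpaqueData.ExtractRecords(&inboundFee)
//	if err != nil {
//		return nil // Skip this edge.
//	}
//
// After: just read the field that was populated at deserialisation time.
inboundFee := policy.InboundFee
```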
Here we add an explicit InboundFee field to the ChannelEdgePolicy
struct. Then, in the graph KVStore, at deserialisation time, we extract
the InboundFee from the ExtraOpaqueData. Currently we do this at higher
levels, but we are going to move it to the DB layer so that when we add
the SQL implementation of the graph store, we can have explicit columns
for inbound fees. We need to account for the fact that invalid TLV may
already be persisted, and we don't necessarily want to fail when
deserialising it. So we now return ErrParsingExtraTLVBytes if we fail to
parse the extra bytes as TLV, and we leave it to the callers to handle that
error, meaning we don't necessarily fail when we receive one.
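A condensed sketch of the deserialisation-time handling described above (the raw-deserialisation and fee-extraction helpers are illustrative names, not the exact ones):

```go
func deserializeChanEdgePolicy(r io.Reader) (*models.ChannelEdgePolicy, error) {
	// Decode the fixed fields and the raw ExtraOpaqueData as before.
	edge, err := deserializeChanEdgePolicyRaw(r)
	if err != nil {
		return nil, err
	}

	// Interpret the opaque bytes as a TLV stream and pull out the inbound
	// fee record. Invalid TLV may already be persisted, so instead of hard
	// failing we return the edge along with a dedicated error and let the
	// caller decide whether to skip it.
	inboundFee, err := extractInboundFee(edge.ExtraOpaqueData)
	if err != nil {
		return edge, fmt.Errorf("%w: %v", ErrParsingExtraTLVBytes, err)
	}
	edge.InboundFee = inboundFee

	return edge, nil
}
```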
As of this commit, we can now expect the InboundFee field of a
ChannelEdgePolicy to be set (if inbound fees are set on the policy) for
any update that we read from disk.
In this commit, we start validating the extra opaque data of a channel
edge policy before persisting it. We just check that the data is valid
TLV.
NOTE: we recently [started
validating](1410a0949d)
this at the lnwire level. So really, no new update will reach the DB
layer without this already being checked. But we check it again here so
that the DB API behaves correctly as its own unit.
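A minimal sketch of what "the data is valid TLV" can mean in practice, using the tlv package (the helper name is illustrative):

```go
func validateExtraOpaqueData(data []byte) error {
	if len(data) == 0 {
		return nil
	}

	// Decoding into a stream with no known records simply walks the bytes
	// and verifies that they form a well-formed TLV stream.
	stream, err := tlv.NewStream()
	if err != nil {
		return err
	}

	return stream.Decode(bytes.NewReader(data))
}
```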
This commit simplifies the code of the ForwardingLog.Query method by
removing a confusing for-loop. The for-loop makes it seem as though
multiple events could be encoded under a single timestamp. However, from
the time the forwarding log was introduced, it has never been possible to
encode multiple events under the same timestamp, so this loop can never
execute successfully more than once per timestamp and can thus be removed.
This paves the way for future expansions of the method to be added more
easily.
See the initial commit that introduced this code [here](f2cd668bcf).
In this commit you can see that from the start it was never possible to
have more than one event in a single timestamp since any previous event
in that timestamp would be overwritten. Then see [this commit](97c73706b5)
where even more protection was added to ensure that each event had a
unique timestamp.
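For illustration, the read path can then be a single decode per cursor key along these lines (helper names are placeholders, not the exact lnd functions):

```go
// Each event is stored under a unique 8-byte nanosecond timestamp key, so one
// key always decodes to exactly one event and no inner loop is needed.
for k, v := cursor.Seek(startKey); k != nil && bytes.Compare(k, endKey) <= 0; k, v = cursor.Next() {
	var event ForwardingEvent
	if err := decodeForwardingEvent(bytes.NewReader(v), &event); err != nil {
		return err
	}

	events = append(events, event)
}
```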
In this commit, we add an option to allow a conf req caller to receive
the full block. This is useful if the caller wants to be able to create
an SPV proof.
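A hedged usage sketch, assuming the option is exposed along these lines (the option, field names and the proof helper are assumptions and may differ from the final API):

```go
confEvent, err := notifier.RegisterConfirmationsNtfn(
	&txid, pkScript, numConfs, heightHint,
	chainntnfs.WithIncludeBlock(),
)
if err != nil {
	return err
}

conf := <-confEvent.Confirmed

// With the option set, the confirmation details carry the full block of the
// confirming transaction, which is enough to build an SPV (merkle) proof.
buildSPVProof(conf.Block, conf.TxIndex)
```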
This commit introduces a new generic type assertion function
`assertState` to the state machine tests. This function asserts that the
state machine is currently in the expected state type and returns the
state cast to that type. This allows us to directly access the fields of
the state without having to perform a type assertion manually.
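A minimal sketch of such a helper, assuming a State interface and a StateMachine type like those used by the state machine under test:

```go
// assertState asserts that the state machine is currently in a state of type
// T and returns the state cast to that type.
func assertState[T State](t *testing.T, m *StateMachine, expectedState T) T {
	t.Helper()

	state, err := m.CurrentState()
	require.NoError(t, err)
	require.IsType(t, expectedState, state)

	typedState, ok := state.(T)
	require.True(t, ok)

	return typedState
}
```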
In this commit, we add a new ConfMapper which is useful for state
machines that want to project some of the confirmation attributes into a new
event to be sent once the confirmation arrives.
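A hedged sketch of the shape such a mapper could take: a function that projects the confirmation details into a state-machine event (the exact generic signature in lnd may differ; the example event type is illustrative):

```go
// ConfMapper maps the confirmation details of a transaction into a custom
// event to be delivered to the state machine once the transaction confirms.
type ConfMapper[Event any] func(*chainntnfs.TxConfirmation) Event

// Example: a state machine that only cares about the confirmation height.
type spendConfirmedEvent struct {
	blockHeight uint32
}

var mapper ConfMapper[*spendConfirmedEvent] = func(
	conf *chainntnfs.TxConfirmation) *spendConfirmedEvent {

	return &spendConfirmedEvent{blockHeight: conf.BlockHeight}
}
```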