My key recently expired. In this commit, we update the keys to their
refreshed versions. These are the same keys, but with an expiry further
out.
Here's a clearsigned message containing the latest Bitcoin block hash:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
000000000000000000013215ef7c32bc0427f388fc83623affe712f388
-----BEGIN PGP SIGNATURE-----
iHUEARYKAB0WIQQpYhJoGq3wVlaize6QUl997uCthgUCaPi6xwAKCRCQUl997uCt
hpqNAQC5VnnbO6h/PjywGhU4LLRvH8SdgdDEMSc7xrtWd1vgPgD+IDrHqiAb+h38
ORBnUVJCVuZrPebtdnYXVQhII91eaw4=
=WRbl
-----END PGP SIGNATURE-----
In this commit, we add a call to "go clean -cache" after each platform
build in the release script to prevent the Go build cache from consuming
unbounded disk space during the sequential 15-platform build process.
When building for multiple platforms in sequence with "go build -v", Go
creates intermediate build artifacts and caches compiled packages for each
target platform. While this caching improves build performance within a
single platform build, it causes the cache to grow substantially when
building for many platforms sequentially. With 15 different platform/
architecture combinations, each with its own cached artifacts, this
accumulation was contributing to the disk space exhaustion.
By clearing the build cache after each platform completes, we prevent this
unbounded growth while still allowing each individual platform build to
benefit from caching during its own compilation. The module cache is
preserved (we only clear the build cache), so dependencies don't need to be
re-downloaded between platforms.
In this commit, we replace the basic inline cleanup command in the release
workflow with the comprehensive cleanup-space action that was previously
only used in the main CI workflow. The previous release workflow cleanup
simply removed the hostedtoolcache directory, which freed only a few
gigabytes and proved insufficient for multi-platform release builds.
By switching to the cleanup-space action (now enhanced to free 20-25GB),
the release workflow will have substantially more disk space available
before beginning the build process. This should resolve the disk space
exhaustion issues that were occurring during the Windows ARM build phase,
which is one of the final platforms in the 15-platform build sequence.
In this commit, we significantly expand the cleanup-space GitHub Actions
workflow to free up substantially more disk space on GitHub runners. The
previous cleanup removed only three large toolsets (dotnet, android, and
ghc), freeing roughly 14GB. This enhancement adds the removal of several
additional large packages and caches, bringing the total freed space to
approximately 20-25GB.
The specific additions include removing Swift and Julia language runtimes,
the hosted toolcache directory, all Docker images, numerous large apt
packages (aspnetcore, llvm, php, mongodb, mysql, azure-cli, browsers, and
development tools), and various cache directories. We also add disk space
reporting before and after cleanup to provide visibility into how much
space is actually being freed during workflow runs.
This enhancement was motivated by release builds running out of disk space
when building for all 15 supported platforms (darwin, freebsd, linux,
netbsd, openbsd, windows across multiple architectures). The sequential
builds with verbose output were consuming more space than the basic cleanup
could provide.
Older LND versions had a bug that created HTLCs with a locktime of 0.
The utxonursery has problems dealing with such HTLC outputs because we
do not allow height hints of 0. We now fetch the closeSummary of the
channel and use a conservative height for rescanning.
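A hedged sketch of the fallback (the helper and variable names here are
illustrative, not from the actual diff):

    // If a legacy HTLC carries a zero height hint, fall back to the
    // channel's close height as a conservative rescan starting point.
    heightHint := htlcOutput.heightHint
    if heightHint == 0 {
        closeSummary, err := fetchCloseSummary(chanPoint) // hypothetical
        if err != nil {
            return err
        }
        heightHint = closeSummary.CloseHeight
    }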
It can happen that we handle two announcements for the same node in the
same batch transaction. In that case, our `UpsertNode` conflict
assertion may fail, so we need to handle this case gracefully.
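One plausible way to handle it (a sketch only; the actual fix may
differ) is to deduplicate the batch before writing, keeping only the
newest announcement per node:

    // Collapse duplicate announcements for the same node within the
    // batch, keeping the one with the highest timestamp, so UpsertNode
    // sees each node at most once per transaction.
    latest := make(map[[33]byte]*lnwire.NodeAnnouncement)
    for _, ann := range anns {
        existing, ok := latest[ann.NodeID]
        if !ok || ann.Timestamp > existing.Timestamp {
            latest[ann.NodeID] = ann
        }
    }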
In this commit, the lnwire.NodeAnnouncement2 type is defined. This will
be used to represent the `node_announcement_2` message used in the
Gossip 2 (1.75) protocol.
We leave a TODO that should be addressed after a discussion at the spec
meeting. For now, having the incorrect TLV type is not a problem since
this ChannelUpdate2 type is not used in production.
In preparation for adding a NodeAnnouncement2 struct along with a
NodeAnnouncement interface, this commit renames the existing
NodeAnnouncement struct to NodeAnnouncement1.
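Schematically, the rename frees the old name for an interface (the
interface body below is a placeholder, not from the source):

    // NodeAnnouncement will become an interface implemented by both
    // the v1 and the upcoming v2 messages (method set is a placeholder).
    type NodeAnnouncement interface {
        Message
    }

    // NodeAnnouncement1 is the existing gossip v1 message, renamed
    // from NodeAnnouncement.
    type NodeAnnouncement1 struct {
        Signature Sig
        Features  *RawFeatureVector
        Timestamp uint32
        NodeID    [33]byte
        // ... remaining v1 fields unchanged.
    }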
Add a new OutPoint type that wraps wire.OutPoint and provides TLV
encoding/decoding capabilities through the tlv.RecordProducer interface.
This enables OutPoint to be used in TLV streams of messages.
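A minimal sketch of the wrapper (the TLV type number and the exact
encoder/decoder split are assumptions):

    package lnwire

    import (
        "io"

        "github.com/btcsuite/btcd/wire"
        "github.com/lightningnetwork/lnd/tlv"
    )

    // OutPoint wraps wire.OutPoint so it can appear in the TLV streams
    // of wire messages.
    type OutPoint wire.OutPoint

    // Record implements the tlv.RecordProducer interface. The record is
    // a fixed 36 bytes: the 32-byte txid followed by the 4-byte index.
    func (o *OutPoint) Record() tlv.Record {
        return tlv.MakeStaticRecord(1, o, 36, encodeOutPoint, decodeOutPoint)
    }

    func encodeOutPoint(w io.Writer, val interface{}, buf *[8]byte) error {
        op := val.(*OutPoint)
        if _, err := w.Write(op.Hash[:]); err != nil {
            return err
        }
        return tlv.EUint32(w, &op.Index, buf)
    }

    func decodeOutPoint(r io.Reader, val interface{}, buf *[8]byte,
        l uint64) error {

        op := val.(*OutPoint)
        if _, err := io.ReadFull(r, op.Hash[:]); err != nil {
            return err
        }
        return tlv.DUint32(r, &op.Index, buf, 4)
    }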
In this commit, we update the mockChannelGraphTimeSeries to implement
the new iterator-based UpdatesInHorizon interface. The mock maintains
its existing behavior of receiving messages through a channel and
returning them to the caller, but now wraps this in an iterator
function.
The implementation creates an iterator that pulls the entire message
slice from the mock's response channel, then yields each message
individually. This preserves the test semantics while conforming to the
new interface, ensuring all existing tests continue to pass without
modification.
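The shape of the mock, sketched (the channel field names are assumed
from the surrounding test code):

    func (m *mockChannelGraphTimeSeries) UpdatesInHorizon(chain chainhash.Hash,
        startTime, endTime time.Time) iter.Seq2[lnwire.Message, error] {

        // Preserve the old behavior: issue the query up front.
        m.horizonReq <- horizonQuery{chain, startTime, endTime}

        return func(yield func(lnwire.Message, error) bool) {
            // Pull the entire slice from the response channel once,
            // then yield each message individually.
            for _, msg := range <-m.horizonResp {
                if !yield(msg, nil) {
                    return
                }
            }
        }
    }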
In this commit, we update ApplyGossipFilter to leverage the new
iterator-based UpdatesInHorizon method. The key innovation here is using
iter.Pull2 to create a pull-based iterator that allows us to check if
any updates exist before launching the background goroutine.
This approach provides several benefits over the previous implementation.
First, we avoid the overhead of launching a goroutine when there are no
updates to send, which was previously unavoidable without materializing
the entire result set. Second, we maintain lazy loading throughout the
sending process, only pulling messages from the database as they're
needed for transmission.
The implementation uses Pull2 to peek at the first message, determining
whether to proceed with sending updates. If updates exist, ownership of
the iterator is transferred to the goroutine, which continues pulling
and sending messages until exhausted. This design ensures memory usage
remains bounded regardless of the number of updates being synchronized.
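In schematic form (the surrounding names are illustrative, not the
actual diff):

    updates := g.cfg.channelSeries.UpdatesInHorizon(chain, start, end)

    // Peek at the first update so we only launch the goroutine if
    // there's actually something to send.
    next, stop := iter.Pull2(updates)
    firstMsg, err, ok := next()
    if err != nil {
        stop()
        return err
    }
    if !ok {
        // Nothing in the horizon: no goroutine needed.
        stop()
        return nil
    }

    // Ownership of the iterator now transfers to the goroutine.
    go func() {
        defer stop()

        msg, msgErr, ok := firstMsg, error(nil), true
        for ok {
            if msgErr != nil {
                return // real code would log the error
            }
            if err := sendToPeer(msg); err != nil {
                return
            }
            msg, msgErr, ok = next()
        }
    }()
    return nil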
In this commit, we complete the iterator conversion work started in PR
10128 by threading the iterator pattern through to the higher-level
UpdatesInHorizon method. This change converts the method from returning
a fully materialized slice of messages to returning a lazy iterator that
yields messages on demand.
The new signature uses iter.Seq2 to allow error propagation during
iteration, eliminating the need for a separate error return value. This
approach enables callers to handle errors as they occur during iteration
rather than failing upfront.
The implementation now lazily processes channel and node updates,
yielding them as they're generated rather than accumulating them in
memory. This maintains the same ordering guarantees (channels before
nodes) while significantly reducing memory pressure when dealing with
large update sets during gossip synchronization.
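The resulting shape, sketched (the helper functions are hypothetical,
and the lower-level iterators are assumed to carry errors as well):

    func (c *ChanSeries) UpdatesInHorizon(chain chainhash.Hash,
        startTime, endTime time.Time) iter.Seq2[lnwire.Message, error] {

        return func(yield func(lnwire.Message, error) bool) {
            // Channel updates come first, preserving the existing
            // channels-before-nodes ordering guarantee.
            for edge, err := range c.graph.ChanUpdatesInHorizon(
                startTime, endTime,
            ) {
                if err != nil {
                    yield(nil, err)
                    return
                }
                for _, msg := range edgeToMsgs(edge) { // hypothetical
                    if !yield(msg, nil) {
                        return
                    }
                }
            }

            // Node announcements follow.
            for node, err := range c.graph.NodeUpdatesInHorizon(
                startTime, endTime,
            ) {
                if err != nil {
                    yield(nil, err)
                    return
                }
                ann := nodeToAnn(node) // hypothetical
                if !yield(ann, nil) {
                    return
                }
            }
        }
    }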
In this commit, we update all callers of NodeUpdatesInHorizon and
ChanUpdatesInHorizon to use the new iterator-based APIs. The changes
use fn.Collect to maintain existing behavior while benefiting from the
memory efficiency of iterators when possible.
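A hedged usage sketch (the exact fn.Collect signature and the error
handling details are assumptions):

    // Materialize the lazy sequence back into a slice where callers
    // still expect one, trading memory for the old behavior.
    nodeIter := graph.NodeUpdatesInHorizon(startTime, endTime)
    nodeAnns := fn.Collect(nodeIter)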
In this commit, we update the SQL store implementation to support the
new iterator-based API for ChanUpdatesInHorizon. This includes adding
SQL query pagination support and helper functions for efficient batch
processing.
The SQL implementation uses cursor-based pagination with configurable
batch sizes, allowing efficient iteration over large result sets without
loading everything into memory. The query is optimized to use indexes
effectively and minimize database round trips.
The new GetChannelsByPolicyLastUpdateRange SQL query supports the
following (sketched after this list):
- Cursor-based pagination using (max_update_time, id) compound cursor
- Configurable batch sizes via MaxResults parameter
- Efficient batch caching with updateChanCacheBatch helper
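The compound-cursor mechanics, in a self-contained sketch (the row type
and fetch function are stand-ins for the generated query; the same
pattern applies to the node query in the following commit):

    import "iter"

    // row stands in for a query result; rows are assumed sorted by
    // (updateTime, id), as the index guarantees.
    type row struct {
        updateTime int64
        id         int64
    }

    // fetchBatch stands in for GetChannelsByPolicyLastUpdateRange: it
    // returns up to limit rows strictly after (cursorTime, cursorID).
    func fetchBatch(all []row, cursorTime, cursorID int64, limit int) []row {
        var batch []row
        for _, r := range all {
            after := r.updateTime > cursorTime ||
                (r.updateTime == cursorTime && r.id > cursorID)
            if after {
                batch = append(batch, r)
                if len(batch) == limit {
                    break
                }
            }
        }
        return batch
    }

    // paginate yields rows lazily, one batch at a time, advancing the
    // compound cursor to the last row of each completed batch.
    func paginate(all []row, limit int) iter.Seq[row] {
        return func(yield func(row) bool) {
            cursorTime, cursorID := int64(-1), int64(-1)
            for {
                batch := fetchBatch(all, cursorTime, cursorID, limit)
                if len(batch) == 0 {
                    return
                }
                for _, r := range batch {
                    if !yield(r) {
                        return
                    }
                }
                last := batch[len(batch)-1]
                cursorTime, cursorID = last.updateTime, last.id
            }
        }
    }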
In this commit, we update the SQL store implementation to support the
new iterator-based API for NodeUpdatesInHorizon. This includes adding a
new SQL query that supports efficient pagination through result sets.
The SQL implementation uses cursor-based pagination with configurable
batch sizes, allowing efficient iteration over large result sets without
loading everything into memory. The query is optimized to use indexes
effectively and minimize database round trips.
The new GetNodesByLastUpdateRange SQL query supports the following:
* Cursor-based pagination using (last_update, pub_key) compound cursor
* Optional filtering for public nodes only
* Configurable batch sizes via MaxResults parameter
In this commit, we refactor the ChanUpdatesInHorizon method to return
an iterator instead of a slice. This change significantly reduces
memory usage when dealing with large result sets by allowing callers to
process items incrementally rather than loading everything into memory
at once.
In this commit, we refactor the NodeUpdatesInHorizon method to return
an iterator instead of a slice. This change significantly reduces
memory usage when dealing with large result sets by allowing callers to
process items incrementally rather than loading everything into memory
at once.
The new implementation uses Go 1.23's iter.Seq type to provide a
standard iterator interface. The method now supports configurable batch
sizes through functional options, allowing fine-tuned control over
memory usage and performance characteristics.
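A hedged consumption sketch (the option and handler names are
illustrative; error propagation details are elided):

    // Only one batch of entries is resident in memory at a time
    // while ranging over the sequence.
    for node := range s.NodeUpdatesInHorizon(
        startTime, endTime, WithBatchSize(500),
    ) {
        handleNodeUpdate(node) // hypothetical handler
    }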
Rather than reading all the entries from disk into memory (before this
commit, we did consult the cache for most entries, skipping those disk
hits), we now expose a chunked iterator.
We also make filtering for public nodes a first-class operation. This
saves many newly created db transactions later.
In this commit, we introduce a new options pattern for configuring
iterator behavior in the graph database. This includes configuration
for batch sizes when iterating over channel and node updates, as well
as an option to filter for public nodes only.
The new functional options pattern allows callers to customize iterator
behavior without breaking existing APIs. Default batch sizes are set to
1000 entries for both channel and node updates, which provides a good
balance between memory usage and performance.
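A minimal sketch of the pattern (the option and field names are
assumptions, not the actual API):

    // iteratorCfg holds the tunables for the update iterators.
    type iteratorCfg struct {
        batchSize  int
        publicOnly bool
    }

    // defaultIteratorCfg mirrors the defaults described above.
    func defaultIteratorCfg() *iteratorCfg {
        return &iteratorCfg{batchSize: 1000}
    }

    // IteratorOption is a functional option for the update iterators.
    type IteratorOption func(*iteratorCfg)

    // WithBatchSize overrides the default batch size of 1000.
    func WithBatchSize(n int) IteratorOption {
        return func(c *iteratorCfg) { c.batchSize = n }
    }

    // WithPublicNodesOnly restricts node iteration to public nodes.
    func WithPublicNodesOnly() IteratorOption {
        return func(c *iteratorCfg) { c.publicOnly = true }
    }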