With MergeLinearizations() gone and the LIMO-based Linearize() replaced by SFL,
we no longer need a class (LinearizationChunking) that maintains an
incrementally-improving chunk set.
Replace it with a function (ChunkLinearizationInfo) that just computes the chunks as
SetInfos once, and returns them as a vector. This simplifies several call sites too.
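For illustration, a minimal sketch of what such a function can look like,
modeled on the classic chunking rule (the exact signature, index type, and
SetInfo operations in the codebase may differ):

```cpp
// Sketch: compute the chunks of a given linearization once, as SetInfos.
// SetInfo pairs a transaction subset with its combined feerate; ">>" is
// FeeFrac's strictly-higher-feerate comparison. DepGraph/SetInfo are the
// codebase's types; this is not the actual implementation.
template<typename SetType>
std::vector<SetInfo<SetType>> ChunkLinearizationInfo(
    const DepGraph<SetType>& depgraph,
    std::span<const DepGraphIndex> linearization)
{
    std::vector<SetInfo<SetType>> chunks;
    for (auto idx : linearization) {
        SetInfo<SetType> cur(depgraph, idx); // singleton chunk for idx
        // While the new chunk's feerate exceeds the previous chunk's,
        // merge them: chunk feerates must be non-increasing.
        while (!chunks.empty() && cur.feerate >> chunks.back().feerate) {
            cur |= chunks.back();
            chunks.pop_back();
        }
        chunks.push_back(std::move(cur));
    }
    return chunks;
}
```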
This places equal-feerate chunks (with no dependencies between them) in random
order in the linearization output, hiding information about DepGraph insertion
order from the output. Likewise, it randomizes the order of transactions within
chunks for the same reason.
This replaces the existing LIMO linearization algorithm (which internally uses
ancestor set finding and candidate set finding) with the much more performant
spanning-forest linearization algorithm.
This removes the old candidate-set search algorithm, along with several of
its tests, benchmarks, and supporting utility code.
The worst-case time per iteration is similar to the previous algorithm's, so
ACCEPTABLE_ITERS is unchanged.
The changes made here were:
| From | To |
|-------------------|------------------|
| `m.count(k)` | `m.contains(k)` |
| `!m.count(k)` | `!m.contains(k)` |
| `m.count(k) == 0` | `!m.contains(k)` |
| `m.count(k) != 0` | `m.contains(k)` |
| `m.count(k) > 0` | `m.contains(k)` |
This commit contains only the trivial, mechanical refactors, where it does
not matter whether the container can hold multiple equal elements.
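For example, on a std::map (C++20 adds contains() to the standard
associative containers):

```cpp
#include <map>
#include <string>

void Example(const std::map<int, std::string>& m)
{
    // Before: membership tested via count(), which returns 0 or 1 here.
    if (m.count(5) > 0) { /* key present */ }
    // After: contains() states the intent directly.
    if (m.contains(5)) { /* key present */ }
    if (!m.contains(5)) { /* key absent */ }
}
```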
Co-authored-by: Jan B <608446+janb84@users.noreply.github.com>
Change BlockBuilderImpl's m_excluded_clusters to an unordered set, since
ordering is not used.
Change the set to a set of sequence numbers for a modest
stability increase under fuzz testing.
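In effect, a sketch of the resulting member (the element type is an
assumption):

```cpp
// Before (sketch): ordered set of Cluster pointers, order never used.
// std::set<Cluster*> m_excluded_clusters;
// After (sketch): unordered set keyed by cluster sequence number, which
// is stable across runs, unlike pointer values.
std::unordered_set<uint64_t> m_excluded_clusters;
```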
This adds a specialized Cluster implementation for singleton clusters, saving
a significant amount of memory by avoiding the need for m_depgraph, m_mapping,
and m_linearization, and their overheads.
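A sketch of the idea (names and members here are illustrative, not the
actual class layout):

```cpp
// Hypothetical sketch: a Cluster specialization holding exactly one
// transaction. With a single transaction the dependency graph is trivial,
// the index mapping is the identity, and the only valid linearization is
// the transaction itself, so m_depgraph, m_mapping, and m_linearization
// can all be omitted.
class SingletonClusterSketch
{
    /** The one transaction's graph index (illustrative member). */
    GraphIndex m_graph_index;
    /** Its fee and size, enough to answer chunk feerate queries. */
    FeePerWeight m_feerate;
};
```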
This adds 4 functions to Cluster to help implement Merge() and Split()
without needing access to the internals of the other Cluster. This is
preparation for a follow-up that will make Cluster a virtual class whose
internals are abstracted away.
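A hypothetical shape for such helpers (all names below are invented for
illustration; the real four functions may differ):

```cpp
// Each helper operates only on *this, so Merge()/Split() can be written
// against this interface instead of reaching into the other Cluster's
// private members. Names are hypothetical illustrations.
class Cluster
{
public:
    void AppendTxsFrom(Cluster& other);   // absorb other's transactions
    void ExtractTxsInto(Cluster& other);  // move a subset out to other
    // ... plus two more helpers along these lines ...
};
```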
This reduces per-Cluster memory usage by making Clusters unaware of their
own level. Instead, track it in calling code, or infer it from the
transactions they contain.
This adds an `iters` parameter to DoWork(), which controls how much work it is
allowed to do right now.
Additionally, DoWork() won't stop once everything is ACCEPTABLE: if there
is work budget left, it will also attempt to get every cluster linearized
optimally.
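A hedged usage sketch (parameter and return semantics assumed: the return
value indicating whether all work is finished):

```cpp
// Spend up to `budget` units of linearization work now. If everything is
// already ACCEPTABLE and budget remains, keep improving clusters toward
// optimal linearizations.
uint64_t budget{100'000};
bool finished = tx_graph->DoWork(budget);
if (!finished) {
    // More work remains; a later DoWork() call can continue it.
}
```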
In the existing Trim function, as soon as the set of accepted transactions
would exceed the max cluster size or count limit, the acceptance loop
stops, removing all later transactions. However, it is possible that
excluding some of those transactions splits the would-be cluster apart
into multiple would-be clusters, and those clusters may well permit far
more transactions before their limits are reached.
Take this into account by using a union-find structure inside TrimTxData to
keep track of the count/size of all would-be clusters that would be formed
at any point, and only reject transactions which would cause these resulting
partitions to exceed their limits.
This is not an optimization in terms of CPU usage or memory; it just
improves the quality of the transactions removed by Trim().
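A minimal sketch of the bookkeeping involved (not the actual TrimTxData
code): a union-find where each partition root carries the transaction
count and combined size of its would-be cluster.

```cpp
#include <cstdint>
#include <numeric>
#include <utility>
#include <vector>

// Sketch: union-find over transactions, where each partition root tracks
// how many transactions its would-be cluster has and their combined size.
struct PartitionUF {
    std::vector<uint32_t> parent; //!< parent link per transaction
    std::vector<uint32_t> count;  //!< at roots: transactions in partition
    std::vector<uint64_t> size;   //!< at roots: combined transaction size

    explicit PartitionUF(const std::vector<uint64_t>& tx_sizes)
        : parent(tx_sizes.size()), count(tx_sizes.size(), 1), size(tx_sizes)
    {
        std::iota(parent.begin(), parent.end(), 0); // each tx its own root
    }

    uint32_t Find(uint32_t i)
    {
        while (parent[i] != i) i = parent[i] = parent[parent[i]]; // halving
        return i;
    }

    // Merge two partitions, accumulating count and size at the new root.
    void Union(uint32_t a, uint32_t b)
    {
        a = Find(a); b = Find(b);
        if (a == b) return;
        if (count[a] < count[b]) std::swap(a, b);
        parent[b] = a;
        count[a] += count[b];
        size[a] += size[b];
    }
};
```

Before accepting a candidate transaction, one would Find() the roots of
the accepted transactions it connects to, sum the merged count/size plus
the candidate's own, and reject the candidate only if either limit would
be exceeded.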
During reorganisations, when linking the new from-block transactions to
the old from-mempool transactions, it is possible that dependencies get
added which result in clusters that violate limits (count, size). Unlike
RBF scenarios, we cannot simply reject these policy violations when they
are due to received blocks. To accommodate this, add a Trim() function to
TxGraph, which removes transactions (including their descendants) in order
to make all resulting clusters satisfy the limits.
In the initial version of the function added here, the following approach
is used (sketched in code after the list):
- Lazily compute a naive linearization for the to-be-merged cluster (using
an O(n log n) algorithm, optimized for far larger groups of transactions
than the normal linearization code).
- Initialize a set of accepted transactions to {}.
- Iterate over the transactions in this cluster one by one:
- If adding the transaction to the set makes it exceed the max cluster size
or count limit, stop.
- Add the transaction to the set.
- Remove all transactions from the cluster that were not included in the set
(note that this necessarily includes all descendants too, because they
appear later in the naive linearization).
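In code, the loop looks roughly like this (naive_linearization, GetTxSize,
and the limit variables are illustrative names, not actual identifiers):

```cpp
// Walk the naive linearization, accepting transactions until either the
// cluster count limit or the cluster size limit would be exceeded.
uint64_t acc_size{0};
uint32_t acc_count{0};
std::vector<uint32_t> accepted;
for (uint32_t tx : naive_linearization) {
    if (acc_count + 1 > max_cluster_count) break;
    if (acc_size + GetTxSize(tx) > max_cluster_size) break;
    accepted.push_back(tx);
    ++acc_count;
    acc_size += GetTxSize(tx);
}
// Everything not in `accepted` gets removed. Descendants of a removed
// transaction always appear later in the linearization, so they are
// necessarily removed as well.
```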
Co-authored-by: Greg Sanders <gsanders87@gmail.com>
This removes the restriction added in the previous commit that individual
transactions do not exceed the max cluster size limit.
With this change, the responsibility for enforcing cluster size limits can
be localized purely in TxGraph, without callers (and in particular, tests)
needing to duplicate the enforcement for individual transactions.
This is integrated with the oversized property: the graph is oversized
when any connected component within it contains more transactions than the
cluster count limit, or when their combined size/weight exceeds the
cluster size limit.
It becomes disallowed to call AddTransaction with a size larger than this
limit, though this restriction will be lifted in the next commit.
In addition, SetTransactionFeeRate becomes SetTransactionFee, so that we
do not need to deal with the possibility that a call to this function
affects oversizedness.
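An illustrative predicate for the oversized property described above
(Component and its fields are hypothetical, not actual types):

```cpp
#include <cstdint>
#include <vector>

struct Component {       // hypothetical summary of one connected component
    uint32_t tx_count;   // number of transactions in it
    uint64_t total_size; // their combined size/weight
};

// The graph is oversized if any connected component exceeds either limit.
bool IsOversized(const std::vector<Component>& components,
                 uint32_t max_cluster_count, uint64_t max_cluster_size)
{
    for (const auto& c : components) {
        if (c.tx_count > max_cluster_count) return true;
        if (c.total_size > max_cluster_size) return true;
    }
    return false;
}
```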
This interface lets one iterate efficiently over the chunks of the main
graph in a TxGraph, in the same order as CompareMainOrder. Each chunk
can be marked as "included" or "skipped" (and in the latter case,
dependent chunks will be skipped).
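A hedged usage sketch (method names assumed to be along these lines;
consult the actual header for the exact API):

```cpp
// Iterate the main graph's chunks from best to worst, deciding per chunk.
auto builder = tx_graph->GetBlockBuilder();  // accessor name assumed
while (auto chunk = builder->GetCurrentChunk()) {
    if (ChunkFits(*chunk)) {  // ChunkFits(): hypothetical caller-side test
        builder->Include();   // take the chunk; dependents stay eligible
    } else {
        builder->Skip();      // reject it and everything depending on it
    }
}
```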
This is preparation for a next commit which will introduce a class whose
objects hold references to internals in TxGraphImpl, which disallows
modifications to the graph while such objects exist.