This adds an `iters` parameter to DoWork(), which controls how much work it is
allowed to do right now.
Additionally, DoWork() no longer stops once everything is ACCEPTABLE: if
work budget is left, it will also attempt to get every cluster linearized
optimally.
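
In outline, the budgeted loop behaves like the following self-contained
sketch (illustrative only; the Quality enum and the per-unit work
accounting stand in for the real linearization machinery):

    #include <cstdint>
    #include <vector>

    enum class Quality { NEEDS_WORK, ACCEPTABLE, OPTIMAL };

    //! Spend up to `iters` units of work: first raise every cluster to
    //! ACCEPTABLE, then use leftover budget to raise them to OPTIMAL.
    //! Returns true if all clusters end up optimally linearized.
    bool DoWork(std::vector<Quality>& clusters, uint64_t iters)
    {
        for (Quality target : {Quality::ACCEPTABLE, Quality::OPTIMAL}) {
            for (auto& quality : clusters) {
                while (quality < target) {
                    if (iters == 0) return false; // budget exhausted
                    --iters; // one unit of linearization work (simplified)
                    quality = static_cast<Quality>(static_cast<int>(quality) + 1);
                }
            }
        }
        return true;
    }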
In the existing Trim function, the acceptance loop stops as soon as the
set of accepted transactions would exceed the max cluster size or count
limit, removing all later transactions. However, it is possible that
excluding some of those transactions splits the would-be cluster apart
into multiple would-be clusters, and those clusters may well permit far
more transactions before their limits are reached.
Take this into account by using a union-find structure inside TrimTxData to
keep track of the count/size of all would-be clusters that would be formed
at any point, and only reject transactions which would cause these resulting
partitions to exceed their limits.
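
In miniature, the technique looks like this (a self-contained sketch;
the real TrimTxData code tracks additional state and works on TxGraph's
internal structures):

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <numeric>
    #include <vector>

    //! Union-find whose roots aggregate the count and combined size of
    //! the would-be cluster they represent.
    struct PartitionSet {
        std::vector<uint32_t> m_parent;
        std::vector<uint32_t> m_count; //!< transactions per root
        std::vector<uint64_t> m_size;  //!< combined tx size per root

        explicit PartitionSet(const std::vector<uint64_t>& sizes)
            : m_parent(sizes.size()), m_count(sizes.size(), 1), m_size(sizes)
        {
            std::iota(m_parent.begin(), m_parent.end(), uint32_t{0});
        }
        uint32_t Find(uint32_t i)
        {
            while (m_parent[i] != i) i = m_parent[i] = m_parent[m_parent[i]];
            return i;
        }
    };

    //! Accept tx only if fusing its partition with those of its already
    //! accepted ancestors stays within both limits; if so, perform the
    //! merges and return true, otherwise reject and return false.
    bool TryAccept(PartitionSet& ps, uint32_t tx,
                   const std::vector<uint32_t>& accepted_ancestors,
                   uint32_t max_count, uint64_t max_size)
    {
        // Gather the distinct partitions that accepting tx would fuse.
        std::vector<uint32_t> roots{ps.Find(tx)};
        for (uint32_t anc : accepted_ancestors) {
            uint32_t root{ps.Find(anc)};
            if (std::find(roots.begin(), roots.end(), root) == roots.end()) {
                roots.push_back(root);
            }
        }
        uint32_t total_count{0};
        uint64_t total_size{0};
        for (uint32_t root : roots) {
            total_count += ps.m_count[root];
            total_size += ps.m_size[root];
        }
        if (total_count > max_count || total_size > max_size) return false;
        for (size_t i = 1; i < roots.size(); ++i) {
            // All elements of roots are roots, so linking is enough.
            ps.m_parent[roots[i]] = roots[0];
            ps.m_count[roots[0]] += ps.m_count[roots[i]];
            ps.m_size[roots[0]] += ps.m_size[roots[i]];
        }
        return true;
    }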
This is not an optimization in terms of CPU usage or memory; it just
improves the quality of the transactions removed by Trim().
During reorganisations, it is possible that dependencies get added which
result in clusters that violate limits (count, size), when linking the
new from-block transactions to the old from-mempool transactions.
Unlike RBF scenarios, we cannot simply reject these policy violations
when they are due to received blocks. To accommodate this, add a Trim()
function to TxGraph, which removes transactions (including descendants)
in order to make all resulting clusters satisfy the limits.
In the initial version of the function added here, the following approach
is used:
- Lazily compute a naive linearization for the to-be-merged cluster (using
an O(n log n) algorithm, optimized for far larger groups of transactions
than the normal linearization code).
- Initialize a set of accepted transactions to {}
- Iterate over the transactions in this cluster one by one:
- If adding the transaction to the set makes it exceed the max cluster size
or count limit, stop.
- Add the transaction to the set.
- Remove all transactions from the cluster that were not included in the set
(note that this necessarily includes all descendants too, because they
appear later in the naive linearization).
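
Roughly, the acceptance loop amounts to the following sketch (simplified;
the real code operates on the lazily computed linearization):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    //! Given the per-transaction sizes of a naive linearization (where
    //! ancestors always precede descendants), return how many leading
    //! transactions can be kept; all later ones get removed.
    size_t AcceptPrefix(const std::vector<uint64_t>& lin_sizes,
                        size_t max_count, uint64_t max_size)
    {
        size_t accepted{0};
        uint64_t total_size{0};
        while (accepted < lin_sizes.size()) {
            // Stop as soon as adding the next transaction would exceed
            // either limit.
            if (accepted + 1 > max_count) break;
            if (total_size + lin_sizes[accepted] > max_size) break;
            total_size += lin_sizes[accepted];
            ++accepted;
        }
        // Since descendants appear after their ancestors in the naive
        // linearization, dropping everything from position `accepted`
        // onward necessarily drops all descendants of dropped
        // transactions too.
        return accepted;
    }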
Co-authored-by: Greg Sanders <gsanders87@gmail.com>
This removes the restriction added in the previous commit that individual
transactions do not exceed the max cluster size limit.
With this change, the responsibility for enforcing cluster size limits can
be localized purely in TxGraph, without callers (and in particular, tests)
needing to duplicate the enforcement for individual transactions.
This is integrated with the oversized property: the graph is oversized
when any connected component within it contains more transactions than
the cluster count limit, or when the combined size/weight of a
component's transactions exceeds the cluster size limit.
It becomes disallowed to call AddTransaction with a size larger than this
limit, though this restriction will be lifted in the next commit.
In addition, SetTransactionFeeRate becomes SetTransactionFee, so that we
do not need to deal with the possibility that a call to this function
affects whether the graph is oversized.
This interface lets one iterate efficiently over the chunks of the main
graph in a TxGraph, in the same order as CompareMainOrder. Each chunk
can be marked as "included" or "skipped" (and in the latter case,
dependent chunks will be skipped).
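
A simplified, self-contained model of the include/skip semantics (the
names and the weight-limit policy here are illustrative; the real
interface works through TxGraph and its Ref type):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Chunk {
        uint64_t weight;
        std::vector<size_t> parents; //!< indices of earlier chunks this one depends on
    };

    //! Visit chunks in quality order; greedily include those that fit,
    //! and skip the rest. Skipping a chunk forces all chunks depending
    //! on it to be skipped as well.
    std::vector<bool> SelectChunks(const std::vector<Chunk>& chunks, uint64_t max_weight)
    {
        std::vector<bool> included(chunks.size(), false);
        std::vector<bool> skipped(chunks.size(), false);
        uint64_t used{0};
        for (size_t i = 0; i < chunks.size(); ++i) {
            bool blocked{false};
            for (size_t parent : chunks[i].parents) blocked = blocked || skipped[parent];
            if (blocked || used + chunks[i].weight > max_weight) {
                skipped[i] = true; // dependents of this chunk are now blocked
            } else {
                included[i] = true;
                used += chunks[i].weight;
            }
        }
        return included;
    }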
This is preparation for a later commit which will introduce a class
whose objects hold references to TxGraphImpl internals; modifications to
the graph are disallowed while such objects exist.
In order to make it possible for higher layers to compare transaction quality
(ordering within the implicit total ordering on the mempool), expose a comparison
function and test it.
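
For illustration, such a three-way comparison can drive ordinary sorting
in higher layers (`cmp` stands in for the exposed comparison function,
returning a negative, zero, or positive value):

    #include <algorithm>
    #include <functional>
    #include <vector>

    //! Sort transaction handles by mempool quality using a three-way
    //! comparison (negative means a sorts before b).
    template <typename Tx>
    void SortByMempoolOrder(std::vector<Tx>& txs,
                            const std::function<int(const Tx&, const Tx&)>& cmp)
    {
        std::sort(txs.begin(), txs.end(),
                  [&](const Tx& a, const Tx& b) { return cmp(a, b) < 0; });
    }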
Before this commit, when a TxGraph::Ref object is destroyed, it becomes
impossible to refer to the corresponding transaction, but the transaction
node itself remains in the TxGraph indefinitely, as there is no way to
remove it.
Fix this by making the destruction of a TxGraph::Ref trigger immediate
removal of the corresponding transaction from the TxGraph, both in main
and in staging if it exists.
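
In miniature, the new lifetime rule behaves like this (an illustrative
model, not the real implementation):

    #include <unordered_set>

    struct MiniGraph {
        std::unordered_set<int> m_main;    //!< nodes in the main graph
        std::unordered_set<int> m_staging; //!< nodes in the staging graph
        void Remove(int node) { m_main.erase(node); m_staging.erase(node); }
    };

    //! Handle whose destruction removes the node from both graphs.
    class MiniRef {
        MiniGraph* m_graph;
        int m_node;
    public:
        MiniRef(MiniGraph& graph, int node) : m_graph{&graph}, m_node{node} {}
        MiniRef(const MiniRef&) = delete;
        MiniRef& operator=(const MiniRef&) = delete;
        ~MiniRef() { m_graph->Remove(m_node); }
    };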
In order to make it easy to evaluate proposed changes to a TxGraph,
introduce a "staging" mode, where mutators (AddTransaction,
AddDependency, RemoveTransaction) do not modify the actual graph, but
just a staging version of it. That staging graph can then be committed
(replacing the main one with it), or aborted (discarding the staging).
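
A toy model of the mechanism (illustrative; method names mirror the
description above, and unlike the real implementation it copies the
whole graph):

    #include <optional>
    #include <set>
    #include <utility>

    struct MiniTxGraph {
        std::set<int> m_main;                   //!< the actual graph
        std::optional<std::set<int>> m_staging; //!< proposed version, if any

        //! Mutators apply to staging when it exists, to main otherwise.
        std::set<int>& Target() { return m_staging ? *m_staging : m_main; }

        void StartStaging() { m_staging = m_main; }
        void AddTransaction(int tx) { Target().insert(tx); }
        void RemoveTransaction(int tx) { Target().erase(tx); }
        void CommitStaging() { m_main = std::move(*m_staging); m_staging.reset(); }
        void AbortStaging() { m_staging.reset(); }
    };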
Move a number of related modifications to TxGraphImpl into a separate
function for removal of transactions. This is preparation for a later
commit where this will be useful in more than one place.
This is preparation for a later commit where a TxGraph will start
representing two potentially distinct graphs (a main one, and a staging
one with proposed changes).
When transactions are removed from the tail of a cluster, we know the
existing linearization remains acceptable (if it already was), but may
merely need splitting and postlinearization, so special-case these
situations into separate quality levels.
Chunk-based information (primarily, chunk feerates) is never accessed
without first bringing the relevant Clusters to an "acceptable" quality
level. Thus, while operations are ongoing and Clusters are not yet
acceptable, we can omit computing their chunkings and chunk feerates.
Since m_deps_to_add has been sorted by child Cluster* already, all
dependencies with the same child will be processed consecutively. Take
advantage of this by remembering the last partition merged with, and
reusing it when applicable.
The m_deps_to_add vector is sorted by child Cluster*, which matches the
order of an_clusters. This means we can walk through m_deps_to_add while
doing the representative lookups for an_clusters, and reuse them.
Instead of leaving the responsibility on higher layers to guarantee that
no connected component within TxGraph (a barely exposed concept, except through
GetCluster()) exceeds the cluster count limit, move this responsibility to
TxGraph itself:
* TxGraph retains a cluster count limit, but it becomes configurable at construction
time (this primarily helps with testing that it is properly enforced).
* It is always allowed to perform mutators on TxGraph, even if they would cause the
cluster count limit to be exceeded. Instead, TxGraph exposes an IsOversized()
function, which queries whether it is in a special "oversize" state.
* During oversize state, many inspectors are unavailable, but mutators remain valid,
so the higher layer can "fix" the oversize state before continuing.
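
A toy model of the resulting contract (illustrative; removal is omitted
and only the count limit is modeled):

    #include <cstdint>
    #include <vector>

    //! Mutators never fail; IsOversized() reports whether any connected
    //! component exceeds the count limit chosen at construction.
    class OversizedGraph {
        uint32_t m_max_count;
        std::vector<uint32_t> m_parent;
        std::vector<uint32_t> m_count;
        uint32_t m_num_oversized{0}; //!< components over the limit

        uint32_t Find(uint32_t i)
        {
            while (m_parent[i] != i) i = m_parent[i] = m_parent[m_parent[i]];
            return i;
        }
    public:
        explicit OversizedGraph(uint32_t max_count) : m_max_count{max_count} {}
        uint32_t AddTransaction()
        {
            uint32_t idx{static_cast<uint32_t>(m_parent.size())};
            m_parent.push_back(idx);
            m_count.push_back(1);
            return idx;
        }
        //! Always allowed, even if the merged component exceeds the limit.
        void AddDependency(uint32_t parent, uint32_t child)
        {
            uint32_t a{Find(parent)}, b{Find(child)};
            if (a == b) return;
            m_num_oversized -= (m_count[a] > m_max_count) + (m_count[b] > m_max_count);
            m_parent[b] = a;
            m_count[a] += m_count[b];
            m_num_oversized += (m_count[a] > m_max_count);
        }
        bool IsOversized() const { return m_num_oversized > 0; }
    };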
Instead construct a single vector with the list of all clusters in all groups,
and then store per-group offset/range in that list.
For dependencies, reuse m_deps_to_add, and store offset/range into that.
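
In outline, the layout change amounts to this (names illustrative):

    #include <cstddef>
    #include <vector>

    struct Cluster; // opaque here

    //! Range of one group within the shared flat vector.
    struct GroupEntry {
        size_t m_cluster_offset; //!< start of this group's clusters
        size_t m_cluster_count;  //!< number of clusters in this group
    };

    struct GroupedClusters {
        std::vector<Cluster*> m_all_clusters; //!< all groups, concatenated
        std::vector<GroupEntry> m_groups;     //!< offset/range per group
    };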
To make testing more powerful, expose a function to perform an internal sanity
check on the state of a TxGraph. This is especially important as TxGraphImpl
contains many redundantly represented pieces of information:
* graph contains clusters, which refer to entries, but the entries refer back
* graph maintains pointers to Ref objects, which point back to the graph.
This lets us make sure they are always in sync.
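
An illustrative shape of such a check (the real SanityCheck verifies far
more invariants):

    #include <cassert>
    #include <vector>

    struct Entry;
    struct Cluster { std::vector<Entry*> m_entries; };
    struct Entry { Cluster* m_cluster; };

    //! Entries referred to by a cluster must refer back to that cluster.
    void SanityCheckClusters(const std::vector<Cluster*>& clusters)
    {
        for (const Cluster* cluster : clusters) {
            for (const Entry* entry : cluster->m_entries) {
                assert(entry->m_cluster == cluster);
            }
        }
    }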
This adds an initial version of the txgraph module, with the TxGraph class.
It encapsulates knowledge about the fees, sizes, and dependencies between all
mempool transactions, but nothing else.
In particular, it lacks knowledge about txids, inputs, outputs, CTransactions,
... and so forth. Instead, it exposes a generic TxGraph::Ref type to reference
nodes in the TxGraph, which can be passed around and stored by layers on top.
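
A toy illustration of the Ref concept (not the real interface): callers
keep opaque handles, and any txid mapping lives in the layer above.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct ToyTxGraph {
        struct Ref { size_t m_index; };
        struct Entry {
            int64_t m_fee;
            uint32_t m_size;
            std::vector<size_t> m_parents; //!< dependencies (parents)
        };
        std::vector<Entry> m_entries;

        //! The graph learns only fee and size; it never sees a txid.
        Ref AddTransaction(int64_t fee, uint32_t size)
        {
            m_entries.push_back({fee, size, {}});
            return Ref{m_entries.size() - 1};
        }
        void AddDependency(Ref parent, Ref child)
        {
            m_entries[child.m_index].m_parents.push_back(parent.m_index);
        }
    };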