Merge #15741: Batch write imported stuff in importmulti

0db94e55d wallet: Pass WalletBatch to CWallet::UnsetWalletFlag (João Barbosa)
6cb888b37 Apply the batch treatment to CWallet::SetAddressBook via ImportScriptPubKeys (Ben Woosley)
6154a09e0 Move some of ProcessImport into CWallet::Import* (Ben Woosley)
ccb26cf34 Batch writes for importmulti (Andrew Chow)
d6576e349 Have WalletBatch automatically flush every 1000 updates (Andrew Chow)
366fe0be0 Add AddWatchOnlyWithDB, AddKeyOriginWithDB, AddCScriptWithDB functions (Andrew Chow)

Pull request description:

  Instead of writing each item to the wallet database individually, write them in batches so that the import runs faster.
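
  The general shape of the change, as a minimal sketch rather than the actual wallet code (`WalletBatch::WriteWatchOnly` and the `ImportScripts` helper below are illustrative stand-ins), is to thread one shared batch through the whole import instead of opening a fresh batch per record:

  ```
  #include <string>
  #include <vector>

  // Stand-in for the wallet's database batch; the real one wraps a BerkeleyBatch.
  struct WalletBatch {
      bool WriteWatchOnly(const std::string& script)
      {
          // ... one record written through the shared batch ...
          return true;
      }
  };

  // Hypothetical helper: import many scripts through a single shared batch,
  // so the database is not opened/flushed once per record.
  bool ImportScripts(WalletBatch& batch, const std::vector<std::string>& scripts)
  {
      for (const auto& script : scripts) {
          if (!batch.WriteWatchOnly(script)) return false;
      }
      return true; // the batch is committed once, after the whole import
  }
  ```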

  This was tested by importing a ranged descriptor for 10,000 keys.

  Current master:

  ```
  $ time src/bitcoin-cli -regtest -rpcwallet=importbig importmulti '[{"desc": "sh(wpkh([73111820/44h/1h/0h]tpubDDoT2SgEjaU5rerQpfcRDWPAcwyZ5g7xxHgVAfPwidgPDKVjm89d6jJ8AQotp35Np3m6VaysfUY1C2g68wFqUmraGbzhSsMF9YBuTGxpBaW/1/*))#3w7php47", "range": [0, 10000], "timestamp": "now", "internal": true, "keypool": false, "watchonly": true}]'
  ...

  real	7m45.29s
  ```

  This PR:

  ```
  $ time src/bitcoin-cli -regtest -rpcwallet=importbig4 importmulti '[{"desc": "pkh([73111820/44h/1h/0h]tpubDDoT2SgEjaU5rerQpfcRDWPAcwyZ5g7xxHgVAfPwidgPDKVjm89d6jJ8AQotp35Np3m6VaysfUY1C2g68wFqUmraGbzhSsMF9YBuTGxpBaW/1/*)#v65yjgmc", "range": [0, 10000], "timestamp": "now", "internal": true, "keypool": false, "watchonly": true}]'
  ...

  real	3.93s
  ```

  Fixes #15739

ACKs for commit 0db94e:
  jb55:
    utACK 0db94e5
  ariard:
    Tested ACK 0db94e5
  Empact:
    re-utACK 0db94e55dc - only change is re the privacy of `UnsetWalletFlagWithDB` and `AddCScriptWithDB`.

Tree-SHA512: 3481308a64c99b6129f7bd328113dc291fe58743464628931feaebdef0e6ec770ddd5c19e4f9fbc1249a200acb04aaf62a8d914d53b0a29ac1e557576659c0cc
Committed by MeshCollider on 2019-05-29 18:54:07 +12:00
5 changed files with 163 additions and 89 deletions


```
@@ -607,7 +607,9 @@ void BerkeleyBatch::Flush()
     if (fReadOnly)
         nMinutes = 1;

-    env->dbenv->txn_checkpoint(nMinutes ? gArgs.GetArg("-dblogsize", DEFAULT_WALLET_DBLOGSIZE) * 1024 : 0, nMinutes, 0);
+    if (env) { // env is nullptr for dummy databases (i.e. in tests). Don't actually flush if env is nullptr so we don't segfault
+        env->dbenv->txn_checkpoint(nMinutes ? gArgs.GetArg("-dblogsize", DEFAULT_WALLET_DBLOGSIZE) * 1024 : 0, nMinutes, 0);
+    }
 }

 void BerkeleyDatabase::IncrementUpdateCounter()
```
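
Related to the hunk above, commit d6576e349 makes WalletBatch flush periodically during long imports. A rough sketch of that idea follows; the names are illustrative and the 1000-update threshold is taken from the commit message, not from the exact Bitcoin Core API:

```
#include <cstdint>

// Illustrative sketch only: a batch that checkpoints the database log every
// N successful writes, so a very large import cannot grow the log without bound.
class WalletBatch {
public:
    static constexpr uint64_t FLUSH_EVERY = 1000;

    bool Write(/* key, value */)
    {
        const bool ok = true; // placeholder for the underlying BerkeleyBatch write
        if (ok && ++m_updates % FLUSH_EVERY == 0) {
            Flush(); // periodic checkpoint, skipped for dummy databases in tests
        }
        return ok;
    }

private:
    void Flush() { /* env->dbenv->txn_checkpoint(...), guarded for a null env */ }
    uint64_t m_updates{0};
};
```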