8937221304 doc: add release notes for 29415 (Vasil Dimov)
582016fa5f test: add unit test for the private broadcast storage (Vasil Dimov)
e74d54e048 test: add functional test for private broadcast (Vasil Dimov)
818b780a05 rpc: use private broadcast from sendrawtransaction RPC if -privatebroadcast is ON (Vasil Dimov)
eab595f9cf net_processing: retry private broadcast (Vasil Dimov)
37b79f9c39 net_processing: stop private broadcast of a transaction after round-trip (Vasil Dimov)
2de53eee74 net_processing: handle ConnectionType::PRIVATE_BROADCAST connections (Vasil Dimov)
30a9853ad3 net_processing: move a debug check in VERACK processing earlier (Vasil Dimov)
d1092e5d48 net_processing: modernize PushNodeVersion() (Vasil Dimov)
9937a12a2f net_processing: move the debug log about receiving VERSION earlier (Vasil Dimov)
a098f37b9e net_processing: reorder the code that handles the VERSION message (Vasil Dimov)
679ce3a0b8 net_processing: store transactions for private broadcast in PeerManager (Vasil Dimov)
a3faa6f944 node: extend node::TxBroadcast with a 3rd option (Vasil Dimov)
95c051e210 net_processing: rename RelayTransaction() to better describe what it does (Vasil Dimov)
bb49d26032 net: implement opening PRIVATE_BROADCAST connections (Vasil Dimov)
01dad4efe2 net: introduce a new connection type for private broadcast (Vasil Dimov)
94aaa5d31b init: introduce a new option to enable/disable private broadcast (Vasil Dimov)
d6ee490e0a log: introduce a new category for private broadcast (Vasil Dimov)

Pull request description:

_Parts of this PR are isolated in independent smaller PRs to ease review:_

* [x] _https://github.com/bitcoin/bitcoin/pull/29420_
* [x] _https://github.com/bitcoin/bitcoin/pull/33454_
* [x] _https://github.com/bitcoin/bitcoin/pull/33567_
* [x] _https://github.com/bitcoin/bitcoin/pull/33793_

---

To improve privacy, broadcast locally submitted transactions (from the `sendrawtransaction` RPC) to the P2P network only via short-lived Tor or I2P connections, or to IPv4/IPv6 peers reached through the Tor network.

* Introduce a new connection type for private broadcast of transactions with the following properties:
  * opened whenever there are local transactions to be sent
  * opened to Tor or I2P peers, or to IPv4/IPv6 peers via the Tor proxy
  * opened regardless of the maximum connection limits
  * after the handshake is completed, one local transaction is pushed to the peer, `PING` is sent, and after receiving `PONG` the connection is closed
  * all incoming messages after the completed handshake are ignored (except `PONG`)
* Broadcast transactions submitted via `sendrawtransaction` using this new mechanism, to a few peers. Keep doing this until we receive the transaction back from one of our ordinary peers (this takes about 1 second on mainnet).
* The transaction is stored in peerman and does not enter the mempool (a rough sketch of this bookkeeping follows the list below).
* Once we get an `INV` from one of our ordinary peers, the normal flow executes: we request the transaction with `GETDATA`, receive it with a `TX` message, put it in our mempool and broadcast it to all our existing connections (as if we saw it for the first time).
* Only after we receive the full transaction as a `TX` message, in reply to our `GETDATA` request, is it considered to have propagated through the network; it is then removed from the storage in peerman, ending the private broadcast attempts.
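For orientation, a rough sketch of the per-transaction bookkeeping described above; the names and types here are illustrative only and are not the PR's actual data structures:

```cpp
// Illustrative only: not the PR's actual types. Sketches the state kept per
// privately-broadcast transaction until the round trip completes.
#include <chrono>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct PrivateBroadcastEntry {
    std::vector<unsigned char> raw_tx;                     // serialized transaction to push to recipients
    std::size_t confirmed_sends{0};                        // short-lived peers that answered our PING with PONG
    std::chrono::steady_clock::time_point last_attempt{};  // used to decide when to retry with one more peer
};

// Keyed by wtxid (a hex string here, for simplicity). An entry is erased once
// the transaction comes back from an ordinary peer (INV -> GETDATA -> TX),
// which ends the private broadcast attempts.
using PrivateBroadcastStorage = std::map<std::string, PrivateBroadcastEntry>;
```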
The message exchange should look like this:

```
tx-sender >--- connect -------> tx-recipient
tx-sender >--- VERSION -------> tx-recipient (dummy VERSION with no revealing data)
tx-sender <--- VERSION -------< tx-recipient
tx-sender <--- WTXIDRELAY ----< tx-recipient (maybe)
tx-sender <--- SENDADDRV2 ----< tx-recipient (maybe)
tx-sender <--- SENDTXRCNCL ---< tx-recipient (maybe)
tx-sender <--- VERACK --------< tx-recipient
tx-sender >--- VERACK --------> tx-recipient
tx-sender >--- INV/TX --------> tx-recipient
tx-sender <--- GETDATA/TX ----< tx-recipient
tx-sender >--- TX ------------> tx-recipient
tx-sender >--- PING ----------> tx-recipient
tx-sender <--- PONG ----------< tx-recipient
tx-sender disconnects
```

Whenever a new transaction is received from the `sendrawtransaction` RPC, the node sends it to a few (`NUM_PRIVATE_BROADCAST_PER_TX`) recipients right away. If after some time we still have not heard anything about the transaction from the network, it is sent to 1 more peer (see `PeerManagerImpl::ReattemptPrivateBroadcast()`).

A few considerations:

* The short-lived private broadcast connections are very cheap and fast with respect to network traffic. It is expected that some of those peers could blackhole the transaction; just one honest/proper peer is enough for successful propagation.
* The peers that receive the transaction could deduce that this is the initial broadcast from the transaction originator. This is OK; they cannot identify the sender.

---

<details>
<summary>How to test this?</summary>

Thank you, @stratospher and @andrewtoth!

Start `bitcoind` with `-privatebroadcast=1 -debug=privatebroadcast`.

Create a wallet and get a new address, go to the Signet faucet and request some coins to that address:

```bash
build/bin/bitcoin-cli -chain="signet" createwallet test
build/bin/bitcoin-cli -chain="signet" getnewaddress
```

Get a new address for the test transaction recipient:

```bash
build/bin/bitcoin-cli -chain="signet" loadwallet test
new_address=$(build/bin/bitcoin-cli -chain="signet" getnewaddress)
```

Create the transaction:

```bash
# Option 1: `createrawtransaction` and `signrawtransactionwithwallet`:
txid=$(build/bin/bitcoin-cli -chain="signet" listunspent | jq -r '.[0] | .txid')
vout=$(build/bin/bitcoin-cli -chain="signet" listunspent | jq -r '.[0] | .vout')
echo "txid: $txid"
echo "vout: $vout"
tx=$(build/bin/bitcoin-cli -chain="signet" createrawtransaction "[{\"txid\": \"$txid\", \"vout\": $vout}]" "[{\"$new_address\": 0.00001000}]" 0 false)
echo "tx: $tx"
signed_tx=$(build/bin/bitcoin-cli -chain="signet" signrawtransactionwithwallet "$tx" | jq -r '.hex')
echo "signed_tx: $signed_tx"

# OR Option 2: `walletcreatefundedpsbt` and `walletprocesspsbt`:
# This avoids having to worry about inputs and also automatically sends change back to the wallet.
# Start `bitcoind` with `-fallbackfee=0.00003000` for instance, for a 3 sat/vbyte fee.
psbt=$(build/bin/bitcoin-cli -chain="signet" walletcreatefundedpsbt "[]" "[{\"$new_address\": 0.00001000}]" | jq -r '.psbt')
echo "psbt: $psbt"
signed_tx=$(build/bin/bitcoin-cli -chain="signet" walletprocesspsbt "$psbt" | jq -r '.hex')
echo "signed_tx: $signed_tx"
```

Finally, send the transaction:

```bash
raw_tx=$(build/bin/bitcoin-cli -chain="signet" sendrawtransaction "$signed_tx")
echo "raw_tx: $raw_tx"
```

</details>

---

<details>
<summary>High-level explanation of the commits</summary>

* New logging category and config option to enable private broadcast:
  * `log: introduce a new category for private broadcast`
  * `init: introduce a new option to enable/disable private broadcast`
* Implement the private broadcast connection handling on the `CConnman` side:
  * `net: introduce a new connection type for private broadcast`
  * `net: implement opening PRIVATE_BROADCAST connections`
* Prepare `BroadcastTransaction()` for private broadcast requests:
  * `net_processing: rename RelayTransaction to better describe what it does`
  * `node: extend node::TxBroadcast with a 3rd option`
  * `net_processing: store transactions for private broadcast in PeerManager`
* Implement the private broadcast connection handling on the `PeerManager` side:
  * `net_processing: reorder the code that handles the VERSION message`
  * `net_processing: move the debug log about receiving VERSION earlier`
  * `net_processing: modernize PushNodeVersion()`
  * `net_processing: move a debug check in VERACK processing earlier`
  * `net_processing: handle ConnectionType::PRIVATE_BROADCAST connections`
  * `net_processing: stop private broadcast of a transaction after round-trip`
  * `net_processing: retry private broadcast`
* Engage the new functionality from `sendrawtransaction`:
  * `rpc: use private broadcast from sendrawtransaction RPC if -privatebroadcast is ON`
* New tests:
  * `test: add functional test for private broadcast`
  * `test: add unit test for the private broadcast storage`

</details>

---

**This PR would resolve the following issues:**

* https://github.com/bitcoin/bitcoin/issues/3828 Clients leak IPs if they are recipients of a transaction
* https://github.com/bitcoin/bitcoin/issues/14692 Can't configure bitcoind to only send tx via Tor but receive clearnet transactions
* https://github.com/bitcoin/bitcoin/issues/19042 Tor-only transaction broadcast onlynet=onion alternative
* https://github.com/bitcoin/bitcoin/issues/24557 Option for receive events with all networks, but send transactions and/or blocks only with anonymous network[s]?
* https://github.com/bitcoin/bitcoin/issues/25450 Ability to broadcast wallet transactions only via dedicated oneshot Tor connections
* https://github.com/bitcoin/bitcoin/issues/32235 Tor: TX circuit isolation

**Issues that are related, but (maybe?) not to be resolved by this PR:**

* https://github.com/bitcoin/bitcoin/issues/21876 Broadcast a transaction to specific nodes
* https://github.com/bitcoin/bitcoin/issues/28636 new RPC: sendrawtransactiontopeer

---

Further extensions:

* Have the wallet do the private broadcast as well; https://github.com/bitcoin/bitcoin/issues/11887 would have to be resolved.
* Have the `submitpackage` RPC do the private broadcast as well, [draft diff in the comment below](https://github.com/bitcoin/bitcoin/pull/29415#pullrequestreview-2972293733), thanks ismaelsadeeq!
* Add some stats via RPC, so that the user can better monitor what is going on during and after the broadcast. Currently this can be done via the debug log, but that is not convenient.
* Make the private broadcast storage, currently in peerman, persistent over node restarts.
* Add an (optional) random delay before starting to broadcast the transaction, in order to avoid correlating unrelated transactions based on the time when they were broadcast. Suggested independently of this PR [here](https://github.com/bitcoin/bitcoin/issues/30471).
* Consider periodically sending transactions that did not originate from the node as decoys, discussed [here](https://github.com/bitcoin/bitcoin/pull/29415#discussion_r2035414972).
* Consider waiting for the peer's FEEFILTER message; if the transaction that was sent to the peer is below that threshold, assume the peer is going to drop it, and use this knowledge to retry more aggressively with another peer instead of waiting the current 10 minutes. See [comment below](https://github.com/bitcoin/bitcoin/pull/29415#issuecomment-3258611648).
* It may make sense to be able to override the default policy -- e.g. so submitrawtransaction can go straight to the mempool and relay, even if txs are normally privately relayed. See [comment below](https://github.com/bitcoin/bitcoin/pull/29415#issuecomment-3427086681).
* As a side effect we have a new metric available: the time it takes for a transaction to reach a random node in the network (from the point of view of the private broadcast recipient, the tx originator is a random node somewhere in the network). This can be useful for monitoring, unrelated to the privacy characteristics of this feature.

---

_A previous incarnation of this can be found at https://github.com/bitcoin/bitcoin/pull/27509. It puts the transaction in the mempool and (tries to) hide it from outside observers. This turned out to be too error prone, or maybe even impossible._

ACKs for top commit:

* l0rinc: code review diff ACK 8937221304
* andrewtoth: ACK 8937221304
* pinheadmz: ACK 8937221304
* w0xlt: ACK 8937221304 with nit https://github.com/bitcoin/bitcoin/pull/29415#discussion_r2654849875
* mzumsande: re-ACK 8937221304

Tree-SHA512: d51dadc865c2eb080c903cbe2f669e69a967e5f9fc64e9a20a68f39a67bf0db6ac2ad682af7fa24ef9f0942a41c89959341a16ba7b616475e1c5ab8e563b9b96
605 lines · 20 KiB · C++
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2009-present The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include <logging.h>
#include <memusage.h>
#include <util/check.h>
#include <util/fs.h>
#include <util/string.h>
#include <util/threadnames.h>
#include <util/time.h>

#include <array>
#include <cstring>
#include <map>
#include <optional>
#include <utility>

using util::Join;
using util::RemovePrefixView;

const char * const DEFAULT_DEBUGLOGFILE = "debug.log";
constexpr auto MAX_USER_SETABLE_SEVERITY_LEVEL{BCLog::Level::Info};

BCLog::Logger& LogInstance()
{
/**
 * NOTE: the logger instance is leaked on exit. This is ugly, but will be
 * cleaned up by the OS/libc. Defining a logger as a global object doesn't work
 * since the order of destruction of static/global objects is undefined.
 * Consider if the logger gets destroyed, and then some later destructor calls
 * LogInfo, maybe indirectly, and you get a core dump at shutdown trying to
 * access the logger. When the shutdown sequence is fully audited and tested,
 * explicit destruction of these objects can be implemented by changing this
 * from a raw pointer to a std::unique_ptr.
 * Since the ~Logger() destructor is never called, the Logger class and all
 * its subclasses must have implicitly-defined destructors.
 *
 * This method of initialization was originally introduced in
 * ee3374234c60aba2cc4c5cd5cac1c0aefc2d817c.
 */
    static BCLog::Logger* g_logger{new BCLog::Logger()};
    return *g_logger;
}

bool fLogIPs = DEFAULT_LOGIPS;

static int FileWriteStr(std::string_view str, FILE *fp)
{
    return fwrite(str.data(), 1, str.size(), fp);
}

bool BCLog::Logger::StartLogging()
{
    StdLockGuard scoped_lock(m_cs);

    assert(m_buffering);
    assert(m_fileout == nullptr);

    if (m_print_to_file) {
        assert(!m_file_path.empty());
        m_fileout = fsbridge::fopen(m_file_path, "a");
        if (!m_fileout) {
            return false;
        }

        setbuf(m_fileout, nullptr); // unbuffered

        // Add newlines to the logfile to distinguish this execution from the
        // last one.
        FileWriteStr("\n\n\n\n\n", m_fileout);
    }

    // dump buffered messages from before we opened the log
    m_buffering = false;
    if (m_buffer_lines_discarded > 0) {
        LogPrintStr_(strprintf("Early logging buffer overflowed, %d log lines discarded.\n", m_buffer_lines_discarded), SourceLocation{__func__}, BCLog::ALL, Level::Info, /*should_ratelimit=*/false);
    }
    while (!m_msgs_before_open.empty()) {
        const auto& buflog = m_msgs_before_open.front();
        std::string s{buflog.str};
        FormatLogStrInPlace(s, buflog.category, buflog.level, buflog.source_loc, buflog.threadname, buflog.now, buflog.mocktime);
        m_msgs_before_open.pop_front();

        if (m_print_to_file) FileWriteStr(s, m_fileout);
        if (m_print_to_console) fwrite(s.data(), 1, s.size(), stdout);
        for (const auto& cb : m_print_callbacks) {
            cb(s);
        }
    }
    m_cur_buffer_memusage = 0;
    if (m_print_to_console) fflush(stdout);

    return true;
}
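// Illustrative note on the early-logging buffer handled by StartLogging()
// above (no additional behaviour implied): messages logged before
// StartLogging() are queued in m_msgs_before_open without their
// timestamp/category prefix, bounded by m_max_buffer_memusage. Once the caller
// has configured the sinks and calls StartLogging(), the queued messages are
// formatted and flushed to every active sink, preceded by a single
// "Early logging buffer overflowed" notice if any lines had to be discarded.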
void BCLog::Logger::DisconnectTestLogger()
{
    StdLockGuard scoped_lock(m_cs);
    m_buffering = true;
    if (m_fileout != nullptr) fclose(m_fileout);
    m_fileout = nullptr;
    m_print_callbacks.clear();
    m_max_buffer_memusage = DEFAULT_MAX_LOG_BUFFER;
    m_cur_buffer_memusage = 0;
    m_buffer_lines_discarded = 0;
    m_msgs_before_open.clear();
}

void BCLog::Logger::DisableLogging()
{
    {
        StdLockGuard scoped_lock(m_cs);
        assert(m_buffering);
        assert(m_print_callbacks.empty());
    }
    m_print_to_file = false;
    m_print_to_console = false;
    StartLogging();
}

void BCLog::Logger::EnableCategory(BCLog::LogFlags flag)
{
    m_categories |= flag;
}

bool BCLog::Logger::EnableCategory(std::string_view str)
{
    BCLog::LogFlags flag;
    if (!GetLogCategory(flag, str)) return false;
    EnableCategory(flag);
    return true;
}

void BCLog::Logger::DisableCategory(BCLog::LogFlags flag)
{
    m_categories &= ~flag;
}

bool BCLog::Logger::DisableCategory(std::string_view str)
{
    BCLog::LogFlags flag;
    if (!GetLogCategory(flag, str)) return false;
    DisableCategory(flag);
    return true;
}

bool BCLog::Logger::WillLogCategory(BCLog::LogFlags category) const
{
    return (m_categories.load(std::memory_order_relaxed) & category) != 0;
}

bool BCLog::Logger::WillLogCategoryLevel(BCLog::LogFlags category, BCLog::Level level) const
{
    // Log messages at Info, Warning and Error level unconditionally, so that
    // important troubleshooting information doesn't get lost.
    if (level >= BCLog::Level::Info) return true;

    if (!WillLogCategory(category)) return false;

    StdLockGuard scoped_lock(m_cs);
    const auto it{m_category_log_levels.find(category)};
    return level >= (it == m_category_log_levels.end() ? LogLevel() : it->second);
}
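// Illustrative behaviour of WillLogCategoryLevel() above: with the NET
// category enabled, (NET, Debug) passes when the effective threshold for the
// category is Debug or more verbose, and (NET, Trace) only when that threshold
// is Trace; (any category, Info/Warning/Error) always passes, even when the
// category is not enabled.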
bool BCLog::Logger::DefaultShrinkDebugFile() const
{
    return m_categories == BCLog::NONE;
}

static const std::map<std::string, BCLog::LogFlags, std::less<>> LOG_CATEGORIES_BY_STR{
    {"net", BCLog::NET},
    {"tor", BCLog::TOR},
    {"mempool", BCLog::MEMPOOL},
    {"http", BCLog::HTTP},
    {"bench", BCLog::BENCH},
    {"zmq", BCLog::ZMQ},
    {"walletdb", BCLog::WALLETDB},
    {"rpc", BCLog::RPC},
    {"estimatefee", BCLog::ESTIMATEFEE},
    {"addrman", BCLog::ADDRMAN},
    {"selectcoins", BCLog::SELECTCOINS},
    {"reindex", BCLog::REINDEX},
    {"cmpctblock", BCLog::CMPCTBLOCK},
    {"rand", BCLog::RAND},
    {"prune", BCLog::PRUNE},
    {"proxy", BCLog::PROXY},
    {"mempoolrej", BCLog::MEMPOOLREJ},
    {"libevent", BCLog::LIBEVENT},
    {"coindb", BCLog::COINDB},
    {"qt", BCLog::QT},
    {"leveldb", BCLog::LEVELDB},
    {"validation", BCLog::VALIDATION},
    {"i2p", BCLog::I2P},
    {"ipc", BCLog::IPC},
#ifdef DEBUG_LOCKCONTENTION
    {"lock", BCLog::LOCK},
#endif
    {"blockstorage", BCLog::BLOCKSTORAGE},
    {"txreconciliation", BCLog::TXRECONCILIATION},
    {"scan", BCLog::SCAN},
    {"txpackages", BCLog::TXPACKAGES},
    {"kernel", BCLog::KERNEL},
    {"privatebroadcast", BCLog::PRIVBROADCAST},
};

static const std::unordered_map<BCLog::LogFlags, std::string> LOG_CATEGORIES_BY_FLAG{
    // Swap keys and values from LOG_CATEGORIES_BY_STR.
    [](const auto& in) {
        std::unordered_map<BCLog::LogFlags, std::string> out;
        for (const auto& [k, v] : in) {
            const bool inserted{out.emplace(v, k).second};
            assert(inserted);
        }
        return out;
    }(LOG_CATEGORIES_BY_STR)
};

bool GetLogCategory(BCLog::LogFlags& flag, std::string_view str)
{
    if (str.empty() || str == "1" || str == "all") {
        flag = BCLog::ALL;
        return true;
    }
    auto it = LOG_CATEGORIES_BY_STR.find(str);
    if (it != LOG_CATEGORIES_BY_STR.end()) {
        flag = it->second;
        return true;
    }
    return false;
}

std::string BCLog::Logger::LogLevelToStr(BCLog::Level level)
{
    switch (level) {
    case BCLog::Level::Trace:
        return "trace";
    case BCLog::Level::Debug:
        return "debug";
    case BCLog::Level::Info:
        return "info";
    case BCLog::Level::Warning:
        return "warning";
    case BCLog::Level::Error:
        return "error";
    }
    assert(false);
}

static std::string LogCategoryToStr(BCLog::LogFlags category)
{
    if (category == BCLog::ALL) {
        return "all";
    }
    auto it = LOG_CATEGORIES_BY_FLAG.find(category);
    assert(it != LOG_CATEGORIES_BY_FLAG.end());
    return it->second;
}

static std::optional<BCLog::Level> GetLogLevel(std::string_view level_str)
{
    if (level_str == "trace") {
        return BCLog::Level::Trace;
    } else if (level_str == "debug") {
        return BCLog::Level::Debug;
    } else if (level_str == "info") {
        return BCLog::Level::Info;
    } else if (level_str == "warning") {
        return BCLog::Level::Warning;
    } else if (level_str == "error") {
        return BCLog::Level::Error;
    } else {
        return std::nullopt;
    }
}

std::vector<LogCategory> BCLog::Logger::LogCategoriesList() const
{
    std::vector<LogCategory> ret;
    ret.reserve(LOG_CATEGORIES_BY_STR.size());
    for (const auto& [category, flag] : LOG_CATEGORIES_BY_STR) {
        ret.push_back(LogCategory{.category = category, .active = WillLogCategory(flag)});
    }
    return ret;
}

/** Log severity levels that can be selected by the user. */
static constexpr std::array<BCLog::Level, 3> LogLevelsList()
{
    return {BCLog::Level::Info, BCLog::Level::Debug, BCLog::Level::Trace};
}

std::string BCLog::Logger::LogLevelsString() const
{
    const auto& levels = LogLevelsList();
    return Join(std::vector<BCLog::Level>{levels.begin(), levels.end()}, ", ", [](BCLog::Level level) { return LogLevelToStr(level); });
}

std::string BCLog::Logger::LogTimestampStr(SystemClock::time_point now, std::chrono::seconds mocktime) const
{
    std::string strStamped;

    if (!m_log_timestamps)
        return strStamped;

    const auto now_seconds{std::chrono::time_point_cast<std::chrono::seconds>(now)};
    strStamped = FormatISO8601DateTime(TicksSinceEpoch<std::chrono::seconds>(now_seconds));
    if (m_log_time_micros && !strStamped.empty()) {
        strStamped.pop_back();
        strStamped += strprintf(".%06dZ", Ticks<std::chrono::microseconds>(now - now_seconds));
    }
    if (mocktime > 0s) {
        strStamped += " (mocktime: " + FormatISO8601DateTime(count_seconds(mocktime)) + ")";
    }
    strStamped += ' ';

    return strStamped;
}
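// Illustrative output of LogTimestampStr() above (note the trailing space):
//   "2025-01-02T03:04:05Z "                                      default
//   "2025-01-02T03:04:05.123456Z "                               with microsecond timestamps enabled
//   "2025-01-02T03:04:05Z (mocktime: 2016-01-01T00:00:00Z) "     when a mock time is set
// and an empty string when timestamps are disabled.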
namespace BCLog {
    /** Belts and suspenders: make sure outgoing log messages don't contain
     * potentially suspicious characters, such as terminal control codes.
     *
     * This escapes control characters except newline ('\n') in C syntax.
     * It escapes instead of removes them to still allow for troubleshooting
     * issues where they accidentally end up in strings.
     */
    std::string LogEscapeMessage(std::string_view str) {
        std::string ret;
        for (char ch_in : str) {
            uint8_t ch = (uint8_t)ch_in;
            if ((ch >= 32 || ch == '\n') && ch != '\x7f') {
                ret += ch_in;
            } else {
                ret += strprintf("\\x%02x", ch);
            }
        }
        return ret;
    }
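    // For example, a message carrying an ANSI colour escape,
    //   "peer=3 sent \x1b[31mALERT\x1b[0m",
    // is logged as
    //   "peer=3 sent \\x1b[31mALERT\\x1b[0m",
    // i.e. the ESC (0x1b) bytes are written out as "\x1b" instead of being
    // interpreted by the terminal, while an embedded '\n' is kept as-is.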
} // namespace BCLog

std::string BCLog::Logger::GetLogPrefix(BCLog::LogFlags category, BCLog::Level level) const
{
    if (category == LogFlags::NONE) category = LogFlags::ALL;

    const bool has_category{m_always_print_category_level || category != LogFlags::ALL};

    // If there is no category, Info is implied
    if (!has_category && level == Level::Info) return {};

    std::string s{"["};
    if (has_category) {
        s += LogCategoryToStr(category);
    }

    if (m_always_print_category_level || !has_category || level != Level::Debug) {
        // If there is a category, Debug is implied, so don't add the level

        // Only add separator if we have a category
        if (has_category) s += ":";
        s += Logger::LogLevelToStr(level);
    }

    s += "] ";
    return s;
}
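// Illustrative prefixes produced by GetLogPrefix() above (with
// m_always_print_category_level unset):
//   (no category, Info)    -> ""               no prefix: uncategorised Info is the default
//   (no category, Warning) -> "[warning] "
//   (NET, Debug)           -> "[net] "         Debug is implied when a category is present
//   (NET, Trace)           -> "[net:trace] "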
static size_t MemUsage(const BCLog::Logger::BufferedLog& buflog)
{
    return memusage::DynamicUsage(buflog.str) +
           memusage::DynamicUsage(buflog.threadname) +
           memusage::MallocUsage(sizeof(memusage::list_node<BCLog::Logger::BufferedLog>));
}

BCLog::LogRateLimiter::LogRateLimiter(uint64_t max_bytes, std::chrono::seconds reset_window)
    : m_max_bytes{max_bytes}, m_reset_window{reset_window} {}

std::shared_ptr<BCLog::LogRateLimiter> BCLog::LogRateLimiter::Create(
    SchedulerFunction&& scheduler_func, uint64_t max_bytes, std::chrono::seconds reset_window)
{
    auto limiter{std::shared_ptr<LogRateLimiter>(new LogRateLimiter(max_bytes, reset_window))};
    std::weak_ptr<LogRateLimiter> weak_limiter{limiter};
    auto reset = [weak_limiter] {
        if (auto shared_limiter{weak_limiter.lock()}) shared_limiter->Reset();
    };
    scheduler_func(reset, limiter->m_reset_window);
    return limiter;
}

BCLog::LogRateLimiter::Status BCLog::LogRateLimiter::Consume(
    const SourceLocation& source_loc,
    const std::string& str)
{
    StdLockGuard scoped_lock(m_mutex);
    auto& stats{m_source_locations.try_emplace(source_loc, m_max_bytes).first->second};
    Status status{stats.m_dropped_bytes > 0 ? Status::STILL_SUPPRESSED : Status::UNSUPPRESSED};

    if (!stats.Consume(str.size()) && status == Status::UNSUPPRESSED) {
        status = Status::NEWLY_SUPPRESSED;
        m_suppression_active = true;
    }

    return status;
}
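// Illustrative lifecycle of the rate limiter above: each source location
// (file, line, function) gets its own byte budget of m_max_bytes per reset
// window. The first message that exceeds the budget returns NEWLY_SUPPRESSED
// (the caller then logs a final "Excessive logging detected" warning),
// subsequent messages from that location return STILL_SUPPRESSED and are
// skipped for the file sink only, and Reset(), scheduled via the
// SchedulerFunction passed to Create(), restores the budget and reports how
// many bytes were dropped.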
void BCLog::Logger::FormatLogStrInPlace(std::string& str, BCLog::LogFlags category, BCLog::Level level, const SourceLocation& source_loc, std::string_view threadname, SystemClock::time_point now, std::chrono::seconds mocktime) const
{
    if (!str.ends_with('\n')) str.push_back('\n');

    str.insert(0, GetLogPrefix(category, level));

    if (m_log_sourcelocations) {
        str.insert(0, strprintf("[%s:%d] [%s] ", RemovePrefixView(source_loc.file_name(), "./"), source_loc.line(), source_loc.function_name_short()));
    }

    if (m_log_threadnames) {
        str.insert(0, strprintf("[%s] ", (threadname.empty() ? "unknown" : threadname)));
    }

    str.insert(0, LogTimestampStr(now, mocktime));
}

void BCLog::Logger::LogPrintStr(std::string_view str, SourceLocation&& source_loc, BCLog::LogFlags category, BCLog::Level level, bool should_ratelimit)
{
    StdLockGuard scoped_lock(m_cs);
    return LogPrintStr_(str, std::move(source_loc), category, level, should_ratelimit);
}

// NOLINTNEXTLINE(misc-no-recursion)
void BCLog::Logger::LogPrintStr_(std::string_view str, SourceLocation&& source_loc, BCLog::LogFlags category, BCLog::Level level, bool should_ratelimit)
{
    std::string str_prefixed = LogEscapeMessage(str);

    if (m_buffering) {
        {
            BufferedLog buf{
                .now = SystemClock::now(),
                .mocktime = GetMockTime(),
                .str = str_prefixed,
                .threadname = util::ThreadGetInternalName(),
                .source_loc = std::move(source_loc),
                .category = category,
                .level = level,
            };
            m_cur_buffer_memusage += MemUsage(buf);
            m_msgs_before_open.push_back(std::move(buf));
        }

        while (m_cur_buffer_memusage > m_max_buffer_memusage) {
            if (m_msgs_before_open.empty()) {
                m_cur_buffer_memusage = 0;
                break;
            }
            m_cur_buffer_memusage -= MemUsage(m_msgs_before_open.front());
            m_msgs_before_open.pop_front();
            ++m_buffer_lines_discarded;
        }

        return;
    }

    FormatLogStrInPlace(str_prefixed, category, level, source_loc, util::ThreadGetInternalName(), SystemClock::now(), GetMockTime());
    bool ratelimit{false};
    if (should_ratelimit && m_limiter) {
        auto status{m_limiter->Consume(source_loc, str_prefixed)};
        if (status == LogRateLimiter::Status::NEWLY_SUPPRESSED) {
            // NOLINTNEXTLINE(misc-no-recursion)
            LogPrintStr_(strprintf(
                             "Excessive logging detected from %s:%d (%s): >%d bytes logged during "
                             "the last time window of %is. Suppressing logging to disk from this "
                             "source location until time window resets. Console logging "
                             "unaffected. Last log entry.",
                             source_loc.file_name(), source_loc.line(), source_loc.function_name_short(),
                             m_limiter->m_max_bytes,
                             Ticks<std::chrono::seconds>(m_limiter->m_reset_window)),
                         SourceLocation{__func__}, LogFlags::ALL, Level::Warning, /*should_ratelimit=*/false); // with should_ratelimit=false, this cannot lead to infinite recursion
        } else if (status == LogRateLimiter::Status::STILL_SUPPRESSED) {
            ratelimit = true;
        }
    }

    // To avoid confusion caused by dropped log messages when debugging an issue,
    // we prefix log lines with "[*]" when there are any suppressed source locations.
    if (m_limiter && m_limiter->SuppressionsActive()) {
        str_prefixed.insert(0, "[*] ");
    }

    if (m_print_to_console) {
        // print to console
        fwrite(str_prefixed.data(), 1, str_prefixed.size(), stdout);
        fflush(stdout);
    }
    for (const auto& cb : m_print_callbacks) {
        cb(str_prefixed);
    }
    if (m_print_to_file && !ratelimit) {
        assert(m_fileout != nullptr);

        // reopen the log file, if requested
        if (m_reopen_file) {
            m_reopen_file = false;
            FILE* new_fileout = fsbridge::fopen(m_file_path, "a");
            if (new_fileout) {
                setbuf(new_fileout, nullptr); // unbuffered
                fclose(m_fileout);
                m_fileout = new_fileout;
            }
        }
        FileWriteStr(str_prefixed, m_fileout);
    }
}

void BCLog::Logger::ShrinkDebugFile()
{
    // Amount of debug.log to save at end when shrinking (must fit in memory)
    constexpr size_t RECENT_DEBUG_HISTORY_SIZE = 10 * 1000000;

    assert(!m_file_path.empty());

    // Scroll debug.log if it's getting too big
    FILE* file = fsbridge::fopen(m_file_path, "r");

    // Special files (e.g. device nodes) may not have a size.
    size_t log_size = 0;
    try {
        log_size = fs::file_size(m_file_path);
    } catch (const fs::filesystem_error&) {}

    // If the debug.log file is more than 10% bigger than RECENT_DEBUG_HISTORY_SIZE,
    // trim it down by saving only the last RECENT_DEBUG_HISTORY_SIZE bytes
    if (file && log_size > 11 * (RECENT_DEBUG_HISTORY_SIZE / 10))
    {
        // Restart the file with some of the end
        std::vector<char> vch(RECENT_DEBUG_HISTORY_SIZE, 0);
        if (fseek(file, -((long)vch.size()), SEEK_END)) {
            LogWarning("Failed to shrink debug log file: fseek(...) failed");
            fclose(file);
            return;
        }
        int nBytes = fread(vch.data(), 1, vch.size(), file);
        fclose(file);

        file = fsbridge::fopen(m_file_path, "w");
        if (file)
        {
            fwrite(vch.data(), 1, nBytes, file);
            fclose(file);
        }
    }
    else if (file != nullptr)
        fclose(file);
}
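// Worked example for ShrinkDebugFile() above: RECENT_DEBUG_HISTORY_SIZE is
// 10,000,000 bytes, so the trim threshold is 11 * (10,000,000 / 10) =
// 11,000,000 bytes. A 25 MB debug.log is therefore rewritten with only its
// last 10,000,000 bytes, while a 10.5 MB one is left untouched.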
void BCLog::LogRateLimiter::Reset()
{
    decltype(m_source_locations) source_locations;
    {
        StdLockGuard scoped_lock(m_mutex);
        source_locations.swap(m_source_locations);
        m_suppression_active = false;
    }
    for (const auto& [source_loc, stats] : source_locations) {
        if (stats.m_dropped_bytes == 0) continue;
        LogPrintLevel_(
            LogFlags::ALL, Level::Warning, /*should_ratelimit=*/false,
            "Restarting logging from %s:%d (%s): %d bytes were dropped during the last %ss.",
            source_loc.file_name(), source_loc.line(), source_loc.function_name_short(),
            stats.m_dropped_bytes, Ticks<std::chrono::seconds>(m_reset_window));
    }
}

bool BCLog::LogRateLimiter::Stats::Consume(uint64_t bytes)
{
    if (bytes > m_available_bytes) {
        m_dropped_bytes += bytes;
        m_available_bytes = 0;
        return false;
    }

    m_available_bytes -= bytes;
    return true;
}

bool BCLog::Logger::SetLogLevel(std::string_view level_str)
{
    const auto level = GetLogLevel(level_str);
    if (!level.has_value() || level.value() > MAX_USER_SETABLE_SEVERITY_LEVEL) return false;
    m_log_level = level.value();
    return true;
}

bool BCLog::Logger::SetCategoryLogLevel(std::string_view category_str, std::string_view level_str)
{
    BCLog::LogFlags flag;
    if (!GetLogCategory(flag, category_str)) return false;

    const auto level = GetLogLevel(level_str);
    if (!level.has_value() || level.value() > MAX_USER_SETABLE_SEVERITY_LEVEL) return false;

    StdLockGuard scoped_lock(m_cs);
    m_category_log_levels[flag] = level.value();
    return true;
}