To add to the other answer: Bitcoin Core 24.0 introduced an additional protection against low-difficulty header spam, called header pre-syncing.
To recapitulate: since the headers-first synchronization introduced in Bitcoin Core 0.10.0, blocks are never downloaded before their headers are known and verified to have sufficient work (which means: enough to come within one day of the active chain tip, and more than the preconfigured minimum chain work). So low-difficulty block spam is already not a concern: blocks are simply not downloaded unless they are part of a chain that has proven to be good enough.
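For illustration, here is a minimal C++ sketch of that kind of gating check. The names (`SufficientWorkForBlockDownload`, `MINIMUM_CHAIN_WORK`) and the 64-bit work values are made up for this example; Bitcoin Core uses 256-bit chain work and its own internal functions, so treat this as a sketch of the idea, not the real code.

```cpp
// Illustrative sketch only; not Bitcoin Core's actual code.
#include <cstdint>

// Simplified: chain work is really a 256-bit arith_uint256 in Bitcoin Core.
using ChainWork = unsigned long long;

// Stand-in for the hardcoded consensus.nMinimumChainWork parameter.
static const ChainWork MINIMUM_CHAIN_WORK = 1'000'000ULL;

// Blocks on a candidate header chain are fetched only if that chain has
// more work than the hardcoded minimum, and comes within roughly one
// day's worth of work (~144 blocks) of our current tip.
bool SufficientWorkForBlockDownload(ChainWork candidate_work,
                                    ChainWork active_tip_work,
                                    ChainWork work_per_block)
{
    const ChainWork one_day_of_work = 144 * work_per_block;
    if (candidate_work <= MINIMUM_CHAIN_WORK) return false;
    if (candidate_work + one_day_of_work < active_tip_work) return false;
    return true;
}

int main()
{
    // Usage: a chain that clears the minimum but lags the tip by far more
    // than a day's worth of work is rejected.
    bool ok = SufficientWorkForBlockDownload(/*candidate*/ 2'000'000ULL,
                                             /*tip*/       5'000'000ULL,
                                             /*per block*/ 10'000ULL);
    return ok ? 0 : 1;
}
```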
Yet a weaker problem remained: a peer could start giving us (multiple) chains of headers that never amount to anything valuable. Since headers are sent in forward order, there is no way to know at the beginning how good the resulting chain will be. The old solution (checkpoints) is unsatisfactory: it relies on updated software with hardcoded overrides on which chain is acceptable. As of Bitcoin Core 26.0 the checkpoints remain, but they haven't been updated since 2014, and mining has since become so much cheaper per hash that an attacker could realistically start such an attack after the last checkpoint.
Since Bitcoin Core 24.0, a new solution has been implemented: header pre-syncing.
The idea is that header synchronization (which precedes block synchronization) is split into two phases; a simplified code sketch of the flow follows the list:
- In the first phase, header pre-syncing, headers from a peer are downloaded and verified, but not stored, because we don't yet know whether they'll end up being good enough. Instead, only a very compact (salted) hash of these headers is kept (in per-peer memory, discarded upon disconnect).
- In the second phase, header redownloading, the same headers are downloaded again from the same peer and compared against the stored hashes(*). If they match, they're fed to full header validation, which stores them and triggers block download.
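To make the two phases concrete, here is a rough, simplified C++ sketch with hypothetical types and names. The real logic lives in Bitcoin Core's src/headerssync.cpp and is considerably more involved (among other things it keeps only 1 bit per commitment, as explained in the footnote below).

```cpp
// Hedged sketch of the two-phase flow; not Bitcoin Core's actual code.
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct Header {
    std::string hash;       // simplified: real block headers are 80 bytes
    uint64_t    work = 1;   // work contributed by this header
};

class HeadersSyncSketch {
public:
    explicit HeadersSyncSketch(uint64_t salt) : m_salt(salt) {}

    // Phase 1 (pre-sync): record a compact salted commitment for each
    // header, but do not store the header itself.
    void PreSync(const Header& h)
    {
        m_commitments.push_back(Commit(h));
        m_total_work += h.work;
    }

    // Did the peer's chain prove enough total work to be worth storing?
    bool EnoughWork(uint64_t minimum_chain_work) const
    {
        return m_total_work > minimum_chain_work;
    }

    // Phase 2 (redownload): the same headers arrive again, in order.
    // Each must match the commitment recorded during phase 1; only then
    // would it be handed to full validation (and permanent storage).
    bool RedownloadAndCheck(const std::vector<Header>& headers) const
    {
        if (headers.size() != m_commitments.size()) return false;
        for (size_t i = 0; i < headers.size(); ++i) {
            if (Commit(headers[i]) != m_commitments[i]) return false;
        }
        return true;
    }

private:
    uint64_t Commit(const Header& h) const
    {
        // Salted hash; the real design keeps only 1 bit per ~600 headers,
        // here a full hash is kept purely for illustration.
        return std::hash<std::string>{}(h.hash) ^ m_salt;
    }

    uint64_t m_salt;                      // per-node random salt
    uint64_t m_total_work = 0;            // accumulated claimed work
    std::vector<uint64_t> m_commitments;  // per-peer, freed on disconnect
};
```

The important design point is that the commitments take only a very small amount of per-peer memory and are discarded on disconnect, so a peer feeding us a worthless chain costs us almost nothing to track.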
This approach comes at a cost: the header synchronization is effectively performed twice, doubling its bandwidth cost (which is still small compared to full block download). In exchange, it removes the last functional reliance on checkpoints in the codebase; they will likely be removed in a future version.
(*) A simple hash of the whole set wouldn't be sufficient, as we need the ability to verify headers along the way, not just at the end. The actual structure consists of a single salted 1-bit hash for every ~600 headers. To compensate for the (extremely) small hashes, during redownload a buffer of ~14000 headers is accumulated before validation. Only when all ~23 of the 1-bit hashes covering those 14000 headers match is the beginning of the buffer fed to validation. This means every header has roughly 23 bits checked before validation, so an attacker would need millions of attempts to get a bogus chain accepted.
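A rough illustration of those numbers follows; the constants are approximations, the names are hypothetical, and an ordinary std::hash stands in for the salted hash function the real implementation uses.

```cpp
// Rough sketch of the 1-bit-per-~600-headers commitment scheme; not the real code.
#include <cstdint>
#include <deque>
#include <functional>
#include <string>
#include <vector>

static const size_t HEADER_COMMITMENT_PERIOD = 600;    // ~1 bit stored per 600 headers
static const size_t REDOWNLOAD_BUFFER_SIZE   = 14000;  // headers buffered before validation

// One salted bit per committed header (illustrative hash, not the real salted hash).
bool CommitmentBit(const std::string& header_hash, uint64_t salt)
{
    return ((std::hash<std::string>{}(header_hash) ^ salt) & 1) != 0;
}

// During redownload, headers are buffered; commitment bits must keep matching
// the bits stored during pre-sync before anything leaves the buffer for validation.
class RedownloadBuffer {
public:
    RedownloadBuffer(uint64_t salt, std::vector<bool> stored_bits)
        : m_salt(salt), m_stored_bits(std::move(stored_bits)) {}

    // Returns headers that are now safe to hand to validation (i.e. roughly
    // 23 commitment bits among the headers after them have been checked),
    // or nothing if a commitment mismatch was detected.
    std::vector<std::string> Add(const std::string& header_hash)
    {
        // Check the commitment bit if this header index carries one.
        if (m_index % HEADER_COMMITMENT_PERIOD == 0) {
            size_t bit_pos = m_index / HEADER_COMMITMENT_PERIOD;
            if (bit_pos >= m_stored_bits.size() ||
                CommitmentBit(header_hash, m_salt) != m_stored_bits[bit_pos]) {
                m_failed = true;                  // peer lied; abandon the sync
                return {};
            }
        }
        ++m_index;
        m_buffer.push_back(header_hash);

        std::vector<std::string> ready;
        // Once the buffer exceeds ~14000 headers (~23 commitment bits), the
        // oldest header has had enough bits checked after it to release it.
        while (!m_failed && m_buffer.size() > REDOWNLOAD_BUFFER_SIZE) {
            ready.push_back(m_buffer.front());
            m_buffer.pop_front();
        }
        return ready;
    }

private:
    uint64_t m_salt;
    std::vector<bool> m_stored_bits;   // 1 bit per ~600 headers, from pre-sync
    std::deque<std::string> m_buffer;  // headers awaiting enough checked bits
    size_t m_index = 0;
    bool m_failed = false;
};
```

With ~23 one-bit checks per header, a peer that sends different headers during redownload would need on the order of 2^23 (about 8 million) attempts to slip a forged header past the buffer, which matches the "millions of tries" figure above.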