
Why does Bitcoin Core contributor Peter Todd think that increasing the Bitcoin block size would lead to a more centralised system?

“The system doesn’t scale, and you just have to accept that and do something smarter. Every time you do a [block size limit] increase, you’re making the system more centralized.”

How does Bitcoin block size affect centralisation?

Derek Mahar

1 Answer


The problem arises from the uneven propagation of information in the network. Small blocks propagate quickly, i.e., every miner learns about a new block at approximately the same time, but this is no longer true for large blocks.

Large blocks take a long time to be forwarded to every node in the network. Since these nodes include miners, any miner that has not yet received the new block continues mining on top of the old parent, i.e., it works on a competitor of the newly found block. Such miners are actively wasting their computational resources on an obsolete solution if the propagating block is accepted into the blockchain.
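
To make that waste concrete, here is a minimal back-of-envelope sketch (not from the paper; it assumes block discovery is a Poisson process with the usual 10-minute mean interval, and the delay values are illustrative only): the probability that a competing block is found while the new block is still propagating grows roughly linearly with the propagation delay.

    import math

    BLOCK_INTERVAL = 600.0  # average seconds between Bitcoin blocks

    def stale_probability(propagation_delay_s):
        """Chance that some other miner finds a competing block while
        the new block is still in flight, assuming Poisson arrivals."""
        return 1.0 - math.exp(-propagation_delay_s / BLOCK_INTERVAL)

    # Illustrative delays, not measured values: a block that takes
    # 4x longer to propagate carries roughly 4x the fork risk.
    for delay in (5, 20, 60):
        print(f"{delay:3d} s delay -> {stale_probability(delay):.2%} fork risk")

With a 5-second delay the fork risk is below 1%; at 60 seconds it approaches 10%, which is exactly the wasted work described above.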

Big miners are likely to have better connectivity to the Bitcoin network, resulting in them losing less of their computational time to stale work. This in turn puts small miners at a disadvantage, since they are now less competitive.

The extreme case would be a single strong miner with close to a majority of the computational resources, creating blocks so big that everybody else learns about them very late, while the strong miner has already spent considerable time finding the follow-up block.
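
A toy revenue model (all numbers here are assumptions for illustration, not measurements) shows how unequal stale rates translate into an advantage for the well-connected miner:

    def revenue_shares(hash_shares, stale_rates):
        """Toy model: each miner loses a fraction of its blocks to
        staleness (well-connected miners lose fewer); revenue is the
        share of surviving blocks."""
        surviving = [h * (1 - s) for h, s in zip(hash_shares, stale_rates)]
        total = sum(surviving)
        return [x / total for x in surviving]

    # Assumed: a 30% miner with good connectivity (1% stale rate),
    # a 5% miner with poor connectivity (8%), and the rest at 2%.
    print(revenue_shares([0.30, 0.05, 0.65], [0.01, 0.08, 0.02]))
    # -> roughly [0.303, 0.047, 0.650]: the small miner earns less
    #    than its 5% hashrate share, the big miner earns more.

The effect compounds over time: revenue funds hardware, so a persistent stale-rate gap slowly concentrates hashrate with the best-connected miners.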

The uneven propagation is due to both network latency and the verification time of the block being propagated, so this is a compound problem of slow verifiers and high-latency/low-bandwidth links.
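
The following sketch combines the two effects in the simple hop-by-hop model implied above (a node fully verifies a block before relaying it); the latency, bandwidth, and verification-speed numbers are assumptions, not measurements:

    def propagation_delay(block_bytes, hops, latency_s=0.1,
                          bandwidth_Bps=1_000_000, verify_s_per_MB=0.5):
        """Per-hop delay = link latency + transfer time + verification
        time; because a block is only relayed after verification, the
        delays add up across hops."""
        per_hop = (latency_s
                   + block_bytes / bandwidth_Bps
                   + verify_s_per_MB * block_bytes / 1e6)
        return hops * per_hop

    for mb in (1, 2, 8):
        print(f"{mb} MB block over 6 hops: {propagation_delay(mb * 1e6, 6):.1f} s")

Both the transfer and verification terms scale with block size, so the total delay, and with it the fork risk from the first sketch, grows roughly linearly in block size times path length.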

For more detail see Information Propagation in the Bitcoin Network (disclosure: I am the author of that paper) and Majority is not Enough: Bitcoin Mining is Vulnerable.

cdecker
  • 1 MB per 10 minutes is about 1.7 KB per sec, and 2 MB blocks would require 3.4 KB per sec. Anyone who doesn't have several orders of magnitude more bandwidth than that should not be part of any online network in the first place (but should instead get a decent connection). Old-fashioned mobile 3G or even the lowest-grade budget consumer internet connections offer a thousand times faster speeds than that. (A back-of-envelope check of these figures appears after this comment thread.) – RocketNuts Mar 06 '16 at 15:55
  • Oh, and I know it's not like that in China yet, but what's preventing them from running their relaying server outside China? The actual mining work (i.e., their factories full of mining hardware) does not need the full block data at all, just the Stratum work, which is independent of block size. If all it takes is a $5/month VPS to mitigate any bandwidth issues for a serious mining operation, you can't seriously claim that it hurts decentralization. – RocketNuts Mar 06 '16 at 15:57
  • Also, I agree 8 MB or 20 MB blocks might be a bit steep for some people (although even then the effects should be minimal and easily circumvented) but 2 MB instead of 1 MB would be absolutely no problem whatsoever. – RocketNuts Mar 06 '16 at 16:01
  • Yes, each individual node will trivially have the required bandwidth for 2MB, but since these blocks are relayed over multiple hops, the simple bandwidth of a single node is not the bottleneck. Rather, each node adds to the propagation delay, and the longer the distance, the longer it takes. For example, take the propagation data from last week (http://bitcoinstats.com/network/propagation/2016/03/03): after a minute there were still 10% of nodes on average that had not heard about a block. – cdecker Mar 06 '16 at 16:28
  • @cdecker If propagation delay is such a problem, why don't bitcoin developers, miners, and other stakeholders consider decreasing the block size? – Derek Mahar Sep 12 '16 at 10:27
  • Well, there is a sweet spot between fork rate and usability. Decreasing the block size further reduces the transaction rate that the network can handle, without increasing consistency much. On the other hand, if we increase the block size we get a super-linear decrease in consistency. There certainly is a bit of wiggle room, but we simply cannot use block size to scale beyond a certain point. – cdecker Sep 13 '16 at 05:22
  • @cdecker The cited paper is dated 2013 (13th IEEE International Conference on Peer-to-Peer Computing, IEEE P2P 2013, Trento, Italy, September 9-11, 2013); bandwidth has increased by circa 30% a year since. – OneArb Apr 24 '23 at 00:20
  • @cdecker as per your own paper conclusion: "Our measurements show that a single node implementing these changes reduces the number of blockchain forks in the network by over 50%". Would the prominent change be: "[...] connect to every node in the network creating a star sub-graph [...] speeding up the propagation [...]"? – OneArb Apr 24 '23 at 00:31
  • That is correct, however you need to keep in mind that an average 30% yearly increase in bandwidth on individual connections does not translate into an equivalent network-wide bandwidth increase. Also noteworthy is that at some point the network latency (speed of light) starts dominating the overall network propagation. – cdecker Apr 25 '23 at 10:27
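
A quick back-of-envelope check of the figures debated in this thread (the per-hop numbers are assumptions for illustration, not measurements): the average throughput a node needs really is tiny, but what drives the fork rate is the burst delay of relaying and verifying a block hop by hop.

    block_mb = 2
    avg_rate_kBps = block_mb * 1e3 / 600  # 2 MB every 10 minutes
    print(f"average throughput: {avg_rate_kBps:.1f} kB/s")  # ~3.3 kB/s

    # Assumed 0.5 s of transfer + verification per hop, over 6 hops:
    per_hop_s, hops = 0.5, 6
    print(f"relay delay on a 6-hop path: {per_hop_s * hops:.1f} s")

This is why the answer and the comments talk past each other: the 3.4 kB/s average is trivially affordable for any node, yet the multi-hop relay delay of a full block is what determines how much mining work goes stale.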