In early 2013, Bitcoin miners were pressured into raising their soft limit for the first time. The network was under heavy load from SatoshiDice; with wallet software woefully unprepared for the emerging fee market, arguments were made for quick action by pool operators to accommodate more transactions in their blocks. In an unfortunate twist, the accidental fork of March 2013 provided an early warning of how ill-prepared the network was for larger loads in the absence of foundational groundwork and thorough risk analysis.

As expected, demand for cheap transactions quickly consumed the additional space made available by the soft-limit increases. Statistical data shows this growth coinciding with periods of heavy consolidation of Bitcoin’s hashing power in the top three mining pools.

Similarly, when evaluating historical peer data, one cannot help but conclude that the decline in peer count correlates directly with the rising cost of running a node, driven by the growth in block size over that period.

While everyone’s attention has now turned to the modification of a single constant, the block size limit, the efforts previously put into other protocol optimizations are no less valuable, and they deserve more consideration. Without them, empirical observations indicate that the network would likely be unable to support its current maximum theoretical load of 1 megabyte per block without externalizing costs at the margins and incentivizing further centralization.

In Part 1 of a post just published to the bitcoincore.org website, I highlight some of the most notable development milestones that have made their way into the Bitcoin Core project, explain their impact on the network’s performance, and describe how they allowed us to cope with the increased load.

Today, it would be unimaginable to ask a new user to run a validating node on a version of the software predating the technology described in this post. The average system would have trouble synchronizing to the tip of the chain without significant tweaking and, even if it managed to, would only barely keep up.

Earlier concerns about transaction throughput would have seemed premature had we all known then what we know today about how the system reacts to increased pressure. With the benefit of hindsight, the initial proposals pushing for an arbitrary increase of the block size limit, at a time when the software was far less safe in handling even 1-megabyte blocks, appear even more hazardous than some had anticipated.

Scaling had to happen first through improvements to the algorithms and basic operations most critical to keeping the network’s existing functions sustainable. By adhering to basic computer science precepts, Bitcoin developers delivered several orders-of-magnitude improvements in system performance. In doing so, they cut the bootstrap time required for new users to join the network even as the blockchain grew significantly larger.

Only with this groundwork in place can we afford to consider a more ambitious roadmap of upgrades that further improve the user experience at the transactional level. With the segregated witness soft fork awaiting activation, we tackle yet another major bottleneck by fixing the quadratic scaling of sighash operations. More importantly, we open the door to further optimization of how the system’s resources are used and accelerate the deployment of second-layer solutions that support massive scale intelligently.
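To see why legacy sighash cost grows quadratically, consider that signing each input of a pre-segwit transaction requires hashing a message nearly the size of the entire transaction, while segwit (per BIP143) hashes shared data once and reuses it across inputs. The sketch below is a back-of-the-envelope cost model, not consensus code; the byte-size constants (`INPUT_SIZE`, `per_input`, etc.) are illustrative assumptions rather than exact serialization rules.

```python
# Rough model of total bytes hashed during signing, not consensus code.
# All size constants below are illustrative assumptions.

INPUT_SIZE = 148   # approx. size of a legacy input, bytes (assumption)
OUTPUT_SIZE = 34   # approx. size of an output, bytes (assumption)
OVERHEAD = 10      # version, locktime, counts (assumption)

def legacy_sighash_bytes(n_inputs, n_outputs=2):
    """Pre-segwit signing: each input's sighash re-serializes nearly
    the whole transaction, so total bytes hashed grow ~ O(n^2)."""
    tx_size = OVERHEAD + n_inputs * INPUT_SIZE + n_outputs * OUTPUT_SIZE
    return n_inputs * tx_size  # one near-full-transaction hash per input

def segwit_sighash_bytes(n_inputs, n_outputs=2):
    """BIP143-style signing: prevouts, sequences, and outputs are hashed
    once and the digests reused, so total work grows ~ O(n)."""
    shared = OVERHEAD + n_inputs * 36 + n_outputs * OUTPUT_SIZE  # hashed once
    per_input = 156  # fixed-size digest preimage per input (assumption)
    return shared + n_inputs * per_input

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, legacy_sighash_bytes(n), segwit_sighash_bytes(n))
```

With these assumed sizes, growing a transaction from 100 to 1,000 inputs multiplies the legacy hashing work by roughly 100x, but the segwit-style work by only about 10x, which is the difference between quadratic and linear scaling that the soft fork addresses.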

There is an old adage that one must learn from history to chart the path ahead. A study of the protocol’s evolution indicates that working smarter rather than harder has paid dividends. The network’s scalability challenges are best addressed by efficient constructions that reduce big-O complexity rather than changes that needlessly expand the resource requirements of end users.