This is the third of four articles outlining BitPay’s current thoughts and plans regarding the block size issue. It follows yesterday’s article: Bitcoin as a Settlement System.

This article covers several topics that didn’t fit neatly into the other articles but are important and relevant to the ongoing debates about scalability, upgradeability, and the future of Bitcoin in general.

The Fear of Hard Forks

The Bitcoin community is paralyzed with fear when it comes to hard forks. That fear is understandable, but it’s something we need to get past. Unless you believe that Bitcoin can have a future without any hard forks, we must be able to manage a hard fork successfully, or Bitcoin will eventually perish. The Bitcoin community has always held the view that if there is a fatal flaw in Bitcoin, it’s better to discover it sooner rather than later (when the stakes are potentially much higher). Perhaps it is likewise better that we deploy a hard fork sooner rather than later. It would be reassuring to know that we can successfully incorporate new innovations into Bitcoin.

Bitcoin is Not Inherently Less Efficient than Centralized Payment Networks

Many people believe that Bitcoin is inherently less efficient than centralized payment networks like Visa and MasterCard. It isn’t. When comparing Bitcoin with these systems, people typically point out that there is a lot of waste and inefficiency because so many nodes perform redundant work. But Bitcoin doesn’t require lots of nodes to validate transactions.

Bitcoin would work perfectly well if only one node were validating transactions and building the block chain. And due to the nature of a Bitcoin transaction, you don’t need to employ costly machine learning algorithms to combat fraud (as the card networks do). However, there is one caveat: you have to fully trust the one node performing the validation. But that’s no different from the trust you have to place in the card network operators.

To eliminate the need to trust a single, centralized validation node, you can choose to run your own node and perform your own validation. That is a feature of Bitcoin that the card networks don’t have. There are only as many nodes on the Bitcoin network as there is demand to perform independent, trustless validation of transactions. Yes, at a macro level, a lot of redundant work is being performed, but all of that extra work has value to someone.

In the degenerate case of only one centralized node, which is most comparable to the legacy card networks, Bitcoin is probably more efficient.

The Value of Running a Full Node

When you run your own Bitcoin node, you have an added level of safety compared with someone using a lightweight wallet or a Bitcoin bank. You aren’t relying on a third party to verify that the bitcoins someone has sent to you are valid. You aren’t relying on them to tell you when that transaction has been confirmed in a block (or how many confirmations it has).

If your wallet is using a third party service to interface with the Bitcoin network, you are placing trust in that third party to tell you the truth about the state of the Bitcoin system. If such a service wanted to defraud you, they could tell you that you’ve received a payment that has been confirmed, when in fact it hasn’t. That might be an acceptable risk if you’re dealing in small amounts of value. Or it might be acceptable if you have a close relationship with the third party (maybe it’s your employer or a small organization you’re a part of). But in other circumstances, it would be completely unacceptable.

At BitPay, for example, we run a lot of Bitcoin nodes and are able to verify transactions without having to trust any third party. Likewise, many individuals, organizations, and service providers run their own nodes because of the value of that independence. It reduces third party risk. The more widely used and significant Bitcoin becomes, the more people and companies will have a need to perform their own, independent transaction verification.

Shameless plug: if you use the Copay wallet, consider running your own instance of the bitcore wallet service: bitcore.io

A Fee Market Already Exists

The Bitcoin block size limit that Satoshi set back in 2010 was a stopgap measure intended to prevent a trivial denial-of-service attack. It was not intended to create an arbitrary and artificial scarcity of space in the block chain. At 1 MB, this limit was well above the market demand for transaction volume at the time, which allowed a proper transaction fee market to develop.

Miners already have plenty of disincentive to include transactions in a block. Each additional transaction increases the risk that the block might be orphaned. Therefore, miners will prefer to include only transactions that provide enough fee revenue to offset the increased orphan risk.
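This trade-off can be made concrete with some back-of-the-envelope arithmetic. The sketch below is an illustrative model only, not anything from Bitcoin’s consensus code: it assumes blocks arrive as a Poisson process with a 600-second mean interval, that each extra byte adds propagation delay at some effective relay rate, and that a block is orphaned if a competitor appears during that extra delay. The specific numbers (25 BTC reward, 50 kB/s relay throughput) are assumptions chosen for illustration.

```python
import math

def min_fee_per_tx(tx_size_bytes, block_reward_btc, bytes_per_second,
                   block_interval_s=600):
    """Estimate the smallest fee that offsets the marginal orphan risk
    a transaction adds to a block. Illustrative model, not consensus code."""
    # Extra propagation delay the transaction's bytes add to the block.
    delay = tx_size_bytes / bytes_per_second
    # Probability a competing block is found during that extra delay,
    # assuming blocks arrive as a Poisson process with mean interval T.
    orphan_prob = 1 - math.exp(-delay / block_interval_s)
    # Expected loss = orphan probability * total block revenue at stake.
    return orphan_prob * block_reward_btc

# Example: a 250-byte transaction, 25 BTC block reward,
# 50 kB/s effective relay throughput.
fee_floor = min_fee_per_tx(250, 25.0, 50_000)
```

Under these assumptions, a typical 250-byte transaction needs roughly 0.0002 BTC in fees before including it is rational. The point is not the specific number, but that a fee floor emerges from orphan risk alone, without any block size cap.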

Today, the demand for transactions is bumping up against this artificial limit, impairing the proper functioning of the Bitcoin network. As a result of this artificial scarcity, miners are being forced to include only transactions with artificially high fees. Eventually, Bitcoin transactions will be priced out of the market, and alternative cryptocurrencies (or perhaps a fork of the Bitcoin block chain) will take market share. The transaction fee market is alive and well. If Bitcoin fails to deliver a product people want at a price they find attractive, there are many other options.

The Market for Bitcoin Scalability Enhancements

If the block size limit were a consensus rule that adjusted with market demand, miners could process increasingly large volumes of transactions to meet that demand. Eventually they would bump up against the limits of the current Bitcoin implementations and incur increasing costs as a larger percentage of their blocks end up as orphans. Miners would constrain the size of their blocks while developers work to improve scalability and performance. The increasing demand from users would provide the resources and incentive needed to invest in new technology to improve the scalability of Bitcoin.
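The self-limiting behavior described above can be sketched as a greedy policy: a miner sorts the mempool by feerate and includes a transaction only if its fee covers the marginal orphan cost of its bytes. This is a hypothetical illustration; the function name, the Poisson block-arrival model, and the relay-throughput parameter are all assumptions, not actual Bitcoin Core behavior.

```python
import math

def choose_block_txs(mempool, block_reward_btc, bytes_per_second,
                     block_interval_s=600):
    """Greedy sketch of a miner's block-building policy under orphan risk.

    mempool: list of (fee_btc, size_bytes) tuples. A transaction is
    included only if its fee covers the expected revenue lost to the
    extra orphan risk its bytes add to the block.
    """
    chosen = []
    # Highest feerate first, as miners actually prioritize.
    for fee, size in sorted(mempool, key=lambda tx: tx[0] / tx[1], reverse=True):
        delay = size / bytes_per_second  # extra propagation time from this tx
        # Chance a competing block appears during that delay (Poisson arrivals).
        orphan_prob = 1 - math.exp(-delay / block_interval_s)
        if fee >= orphan_prob * block_reward_btc:
            chosen.append((fee, size))
    return chosen

# A generous fee is worth including; a dust fee is not.
mempool = [(0.001, 250), (0.00000001, 250)]
block = choose_block_txs(mempool, block_reward_btc=25.0, bytes_per_second=50_000)
```

The key property is that block size is bounded economically: as a block grows, its orphan risk grows with it, so low-fee transactions get left out, no hard-coded cap required.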

Continue Reading: A Simple, Adaptive Block Size Limit