It’s tough to make predictions, especially about the future. Throw a dynamic and interconnected social-technical distributed currency network into the mix, and even the famously epigrammatic Yogi Berra would be at a loss to package the issue in an appropriately pithy witticism.

The core developers of the Bitcoin protocol face a difficult technical decision. How they decide to respond to an impending 1MB limit on the maximum block size allowed in the block chain will fundamentally impact how and even whether the technology can be used for everyday purchases or institutional transfers, by average people or professional investors alike.

There are no easy answers. The many exceptional minds in the technical conversation raise divergent and understandable objections to the most likely proposals. Adding to the complexity is the unfortunate fact that, given Bitcoin's short existence, many well-considered arguments remain purely theoretical for now, yet describe outcomes that become urgently possible, and potentially quite detrimental, once the limit is reached. It is a question fraught with pressure, but one that must be addressed nevertheless.

We here at Plain Text want to help clear up some of these uncertainties. Earlier this month, fellow editor Eli proposed that combinatorial prediction markets can be used to provide insights into the unknown. Now, we have assembled a model market of relevant block size predictions on SciCast, a research project based at George Mason University. We created a website, blocksizedebate.com, summarizing the most recent predictions that developers can observe in real time.

In this post, I’ll explain how prediction markets like SciCast can provide insight into the likely outcomes of the complex factors involved in the block size debate before describing our model and our plans to expand on this project in the future.

The debate, in brief

The debate hinges on an arbitrary protocol limit on the maximum block size, added by Satoshi Nakamoto as an anti-spam measure in the network's early years. By temporarily capping allowable blocks at 1MB, Satoshi prevented malicious actors from undermining the young network with pointless transactions.

This was not a problem for most of Bitcoin's history, since the market-determined equilibrium block size remained far below this short-term production quota. Moreover, Satoshi and other early users expected that a higher block size limit would gradually be phased in as the network matured and block sizes inched closer to the limit.

However, the block size limit was never actually changed. Now, average Bitcoin block sizes creep noticeably closer to the protocol limit by the week. If current trends hold, it is projected that the network will hit the 1MB production quota by late 2016.

Some in the Bitcoin community, notably lead developer Gavin Andresen, advocate increasing the block size limit through a hard fork of the Bitcoin protocol. Whether they support a one-time rule instituting steadily increasing block sizes, a series of ad hoc increases as needed, or something in between, this camp worries that the increased costs in time and transaction fees caused by an inefficient production quota on block sizes could render Bitcoin all but unusable for non-institutional users, thereby potentially undermining the technology's core value proposition and even its future.

Others worry about the potential unintended consequences of such a change. Sufficiently large block sizes could centralize mining and volunteer nodes, rendering the network vulnerable to the same kinds of third-party intermediaries it was created to route around. Network security could be compromised as well if a "race to the bottom" in transaction fees slices miners' margins so thin that enough of them leave the market and the hash rate drops to dangerous levels.

More existentially, some are concerned about the general effects of any hard-fork change to the protocol. Earlier glitches could be contained through the coordination of a smaller, perhaps more community-minded network; a similar incident today may cause more chaos than the larger network can manage.

These issues are complex, interrelated, and extremely potent — some of the least attractive qualities anyone would want in a decision they must make. Fortunately, prediction markets incentivize individuals to provide useful information that can help shed light on the likeliest outcomes.

It’s what Thomas Bayes would want.

How can prediction markets help?

It can be hard to determine, and properly weight, which opinions are grounded in accurate evidence and probability and which rest on less reliable factors.

Economic decisions are much less fraught with these uncertainties. Market prices promote rational economic decision-making by summarizing complex information about relative material scarcity, production, and demand into an easy-to-digest number that actors factor into their own plans — thereby influencing the price with the information of their own actions (or lack thereof).

The rhetoric of the prediction itself does not compel action, as is often the case with non-market social coordination in politics, culture, and business. Rather, market actors compare the content of any prediction to the reality reflected in prevailing prices in order to adjust or maintain their market behaviors, which is communicated to others as an updated price.

Scientists studying human decision-making believe that prices can also be used to coordinate information and guide informed decisions about non-material questions in politics, business, and technology, an approach to decision-making that economist Robin Hanson has termed "futarchy."

How can we “price” such information about the Bitcoin block size debate? Through good, old-fashioned betting.

Prediction markets provide a messaging space for individuals to submit, observe, and react to such evidence in the form of “prices” on specific questions. In encouraging individuals to put their “money where their mouths are” by attaching a monetary incentive to opinions, prediction markets can promote more accurate projections than those arrived at solely through debate and rhetoric.
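To make "prices as probabilities" concrete, here is a minimal sketch of Hanson's logarithmic market scoring rule (LMSR), one common mechanism behind such markets. The liquidity parameter `b` and the share quantities are illustrative assumptions, not SciCast's actual configuration.

```python
import math

def lmsr_prices(q, b=100.0):
    """Instantaneous prices (interpretable as probabilities)
    for the vector q of outstanding shares per outcome."""
    exps = [math.exp(qi / b) for qi in q]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_trade_cost(q, delta, b=100.0):
    """Tokens a trader pays to buy `delta` shares on top of `q`:
    the change in the market maker's cost function C(q) = b*ln(sum(exp(q_i/b)))."""
    def cost(shares):
        return b * math.log(sum(math.exp(s / b) for s in shares))
    q_new = [qi + di for qi, di in zip(q, delta)]
    return cost(q_new) - cost(q)

# A fresh yes/no question starts at 50/50.
print(lmsr_prices([0.0, 0.0]))                       # [0.5, 0.5]
# Buying 50 "yes" shares costs ~28 tokens and moves "yes" to ~0.62.
print(lmsr_trade_cost([0.0, 0.0], [50.0, 0.0]))
print(lmsr_prices([50.0, 0.0]))
```

The key property: the more a trade moves the price, the more it costs, so traders are rewarded for correcting mispricings and penalized for pushing the price away from their true beliefs.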

What is SciCast?

These ideas form the motivating assumptions behind the SciCast project.

While not a full prediction market — strict US regulations all but prohibit the operation of information markets with full monetary rewards — SciCast does allow users to bet and earn tokens by correctly projecting the answers to ongoing scientific questions (and provides some compensation through contests for accuracy and complex predictions). The platform therefore emulates market prices that serve as projections for each question.

For example, let’s say you work for a public health organization and you want to get a better idea of where to focus resources over the next year. You can put up a question asking whether increasing polio outbreaks in certain parts of the world will lead to enhanced eradication efforts by the end of the year. SciCast users can read the question to understand how the answer will be determined before researching the issue and betting tokens on their best hunch. They can mull over the issue with other bettors in the discussion forum, reacting to relevant news stories and adjusting their predictions accordingly. Projections vary over time as new evidence arrives for evaluation; as time inches closer to the resolution date, you should have a better idea of whether or not it makes sense to dispatch polio eradication efforts in these zones. This particular question indeed resolved in the affirmative by the expiration date; the most recent prediction affirmed the realized trend.

What’s more, SciCast is a combinatorial prediction market. This means that users can not only weigh in on individual questions, but can also make predictions on one question assuming that another question resolves in a specific way.

Users can view the network of questions that relate to the one they are currently researching, since the answer to other questions may influence the direction of the answer to the current question in many different ways. SciCasters can provide added precision about linked possibilities and therefore a richer matrix of projections to guide decision-making. You can find more detailed explanations of how to use SciCast at its website.
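A combinatorial market effectively maintains a joint distribution over linked questions, from which conditional projections follow. Here is a toy illustration in Python; the two questions and all probabilities are made up for the example.

```python
# Hypothetical joint probabilities over two linked yes/no questions:
# ("fork" = hard fork happens by a date, "high" = price above some level).
joint = {
    ("fork", "high"): 0.45,
    ("fork", "low"): 0.15,
    ("no_fork", "high"): 0.10,
    ("no_fork", "low"): 0.30,
}

def marginal(var_index, value):
    """Unconditional probability of one variable taking a value."""
    return sum(p for k, p in joint.items() if k[var_index] == value)

def conditional(target_index, target_value, given_index, given_value):
    """P(target = target_value | given = given_value) via Bayes' rule."""
    num = sum(p for k, p in joint.items()
              if k[target_index] == target_value and k[given_index] == given_value)
    return num / marginal(given_index, given_value)

print(marginal(1, "high"))                   # unconditional: 0.55
print(conditional(1, "high", 0, "fork"))     # given a fork: 0.75
print(conditional(1, "high", 0, "no_fork"))  # without one: 0.25
```

A conditional bet on SciCast is, in effect, a trade against one slice of this joint distribution, which is why linked questions can move together.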

Here’s what we did

Over the past few weeks, Eli and I have been formulating relevant questions about the Bitcoin block size debate to understand what impact two specific block size proposals would have on the Bitcoin price, hash rate, number of transactions, and number of node operators at various points in 2016. They were published late last week and are open for predictions.

The two proposals we are studying are (1) increasing the block size through a hard fork and (2) the “replace by fee” patch proposed by Peter Todd. Other proposals can certainly be added for analysis over time, but we decided to begin with these two as a proof of concept.

The official question for (1) reads: “Will a hard fork in the Bitcoin protocol introduce a change to the block size limit by DATE?”

The official question for (2) reads: “Will the replace by fee patch be adopted into the Bitcoin reference client by DATE?”

The questions were duplicated four times to resolve as either “yes” or “no” on one of the following dates:

- March 31, 2016;

- June 30, 2016;

- September 30, 2016; or

- December 31, 2016.

Additionally, questions asking users to predict a specific number or range for the Bitcoin price, network hash rate, daily transaction volume, and number of nodes were duplicated to resolve on each of those four dates. This allows users to make predictions about what will happen to each of these key network variables if either of the two protocol changes we are analyzing occurs, or does not occur, by the last day of every third month over the next year.

Here’s how everything fits together.

Let’s say that you believe that the block size limit needs to be increased through a hard fork as soon as possible. If it is not increased, you believe that the price will crash. If it is increased, you think that the price will at least remain stable.

You might start by looking at the prevailing price projection for March 31, 2016. Currently, the market projects a price of $358. If the block size limit is not increased by that date, the market projects a price of $338, only slightly lower than the basic projection. You can bet on your hunch that the price of Bitcoin will crash if the block size is not increased by March 31, 2016 by setting the question about a hard fork to resolve as false and submitting a projection of only $60, as shown in the example to the left.

Next, you could reflect your prediction that the price of Bitcoin will not change very much if the block size limit increase is enacted. Simply set the question about a hard fork to resolve as true and then select a Bitcoin price closer to the basic market projection of $358, as shown in the second example to the left.
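The conditional and unconditional projections are tied together by the law of total expectation: the market's unconditional forecast should equal the probability-weighted average of its conditional forecasts. A quick sketch using the $358 and $338 figures from the market, with a made-up 60% fork probability:

```python
def implied_conditional(unconditional, p_fork, cond_no_fork):
    """Back out E[price | fork] from the law of total expectation:
    E[price] = p_fork * E[price | fork] + (1 - p_fork) * E[price | no fork]."""
    return (unconditional - (1 - p_fork) * cond_no_fork) / p_fork

# $358 unconditional and $338 conditional-on-no-fork are the market's numbers;
# the 0.6 probability of a fork is an illustrative assumption.
e_fork = implied_conditional(358.0, 0.6, 338.0)
print(round(e_fork, 2))  # ~371.33
```

If your own conditional estimates imply a different unconditional price than the market's, that gap is exactly the mispricing your bets can profit from.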

You could then repeat the same prediction process for the prices resolving on June 30, September 30, and December 31, perhaps with even more dramatic price movements as the problem grows greater over time. By clicking on the “network” tab of any question, you can see how questions relate to each other and affect connected predictions.

Perhaps you discover that the market projects a patterned association between movements in price and the Bitcoin hash rate. To earn extra tokens, you might place bets on hash rate questions that were initially unrelated to your main thesis that the Bitcoin price will crash without a block size increase.

Or maybe resolutions to other Bitcoin questions inform and change your opinions on the likely impact of the total number of transactions if the replace by fee patch is implemented. The possibilities are not exactly endless, but they are pretty open-ended. SciCast allows users to learn and earn from the market of predictions, resulting in a dynamic probability matrix that developers can use to get insights about the likelihoods of their actions given certain assumptions.

A table displaying the current predictions for all of the questions resolving at the end of 2016 can be accessed at blocksizedebate.com.

The future of our mild futarchy

Since the questions were published on SciCast last week, a number of users have cast their tokens to project the likely outcomes. But we still need more. Because projections are more likely to be accurate when questions receive a sizable number of predictions, we are hoping to spread the word about our project so that plenty of people get involved and add their insights.

SciCast is a unique tool because of its combinatorial prediction options, but it is only one of many ways to generate market projections. We will continue to monitor and add questions to SciCast as appropriate. At the same time, we will explore new and alternative options to harness the power of markets and provide answers to these difficult questions.

Ideally, we will develop a combinatorial prediction market with true monetary risks and rewards — perhaps on one of the blockchain-based platforms currently in development.

We’ll post analyses of the results and updates about our progress on Plain Text and at blocksizedebate.com — stay tuned, and happy predicting!