./bitcoin-cli getchaintips | grep height | wc -l

3793

./bitcoin-cli getchaintips | grep height | wc -l

1

Guess which one is the chain whose difficulty adjustment algorithm is, at least in part, independent of the blocks before it?

Recently, as Bitcoin Cash’s price remained low and volatile hashrate – caused mostly by unstable profitability – caused a stir, there was renewed debate about the suitability of its Difficulty Adjustment Algorithm (DAA) – which currently averages the difficulty target over the last 144 blocks – in curbing such volatility. Debates about whether to weight later blocks more heavily, for a more responsive algorithm, have come up; Bobtail, an interactive protocol, has been mentioned, kindling discussions about whether such an implementation changes some fundamental assumptions, and whether it’s worth it; the Chinese community has proposed shortening the block interval, which trades off some other things (mostly software-overhaul investment and smart-contract migration complexity) for more granularity and smarter responses.
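For readers who haven’t internalized how a moving-average DAA works, here is a toy sketch in the spirit of BCH’s 144-block rule. This is NOT the actual consensus code – real implementations work on chainwork and median timestamps, and the constants here are just the familiar 600-second / 144-block parameters:

```python
TARGET_SPACING = 600  # seconds; desired average block interval

def next_difficulty(difficulties, timestamps, window=144):
    """Toy moving-average retarget: scale the average difficulty of the
    last `window` blocks by how far their actual elapsed time deviated
    from the ideal elapsed time."""
    assert len(difficulties) > window and len(timestamps) > window
    elapsed = timestamps[-1] - timestamps[-1 - window]
    avg_difficulty = sum(difficulties[-window:]) / window
    ideal = TARGET_SPACING * window
    # Blocks came too fast -> elapsed < ideal -> difficulty goes up.
    return avg_difficulty * ideal / elapsed
```

The important property for this discussion: every input is taken from blocks that already exist. Nothing about the validity of the next block depends on anyone’s local clock.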



Such debates, while tiring for some, are mostly a good thing – they are all ideas that at least get examined extensively, and that carry different sets of tradeoffs worth educating ourselves about. Recently, however, we came across a proposal that makes a radical claim:

We describe a class of retargeting algorithms for which the sole inter-block time input is that of the block being searched for.

Wow!

What if, imagine this, Satoshi and the rest of us – literally everyone, including the guys worried about Selfish Mining – have been getting it all wrong? What if mining can be an activity that is history-free?

It’s natural to be skeptical. But we’re all wannabe scientists at heart, so let’s take a closer look anyway.

The article starts with a long description of the woes of the current algorithm’s unresponsiveness – which is all fine; no algorithm is perfect, and there’s probably a better set of tradeoffs to be had under any circumstances. Right afterwards, it gives us a very important definition:

Progress-freedom is distinct from being memoryless. We define progress-freedom to be the property that a miner’s chance of finding a block at any moment is independent of how much work he has already done since the appearance of block n – 1.

Hmm, sounds quite reasonable – although we’re not sure how exactly this is going to be done without also being memoryless. Note that the two are only meaningfully different if difficulty requirements change on the fly, before the next block is even found! Perhaps there is some global, Avalanche- or weak-blocks-style coordination that allows us to have a sense of time as we get farther and farther away from the previous block? That’s how Bobtail does it, after all – that might be interesting. Let’s look closer!

By “time t” we refer to idealized global wall-clock time, and we assume for the moment that timestamps are always recorded as the global time of the respective event. The accuracy of this assumption depends on the incentive framework we create. This is discussed in Section 5.

Wait, what?

For people sophisticated enough to work on this problem, we gotta assume they know how unreliable timestamps are, right?



https://twitter.com/pokkst/status/1170952157534834688, linked without permission

Okay, there is supposedly some incentive framework that will tackle this in Section 5. The next sections spend a lot of time describing a sophisticated algorithm based on a local, totally synchronized clock – “with each passing second, the instantaneous block production rate increases”. We anxiously scroll down, nodding in amazement at the author’s superb ability to tighten T to what we truly want. A cute device that separates the actual difficulty from what we pretend is the difficulty used for targeting is described in painstaking detail. Let’s overlook that for a second.
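To make the core idea concrete, here is our own toy rendering of the “real-time” scheme – our formulation, not the authors’ exact algorithm, with made-up constants and an exponential loosening schedule standing in for whatever curve the paper actually uses. The key property is that the effective target grows monotonically with wall-clock time since the parent block:

```python
BASE_TARGET = 2**220      # hypothetical base target, for illustration only
TARGET_SPACING = 600      # seconds

def effective_target(seconds_since_parent):
    """The target loosens (difficulty drops) continuously as local time
    passes, so the instantaneous block production rate rises every second.
    The exponential schedule here is our stand-in, not the paper's."""
    return BASE_TARGET * 2 ** (seconds_since_parent / TARGET_SPACING)

def accepts(block_hash_value, seconds_since_parent):
    # A header is valid iff its hash beats the target *at this moment* --
    # which is exactly why two nodes with skewed clocks can disagree on
    # whether the same block is valid.
    return block_hash_value < effective_target(seconds_since_parent)
```

Notice what `accepts` takes as an argument: the validator’s own clock. That is the detail to keep in mind as we go on.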

…until we reach the all-important Section 5.

We doubt anyway that miners will be eager to accept a peer’s “block from the future” that was easier to find than allowed right now.

How well the committed timestamps match actual time is an emergent result of the system, but since there is strong incentive to choose the latest possible non-future timestamp, i.e. a timestamp of “now”, we have reason to be optimistic.

Wait, that’s it? “We have reason to be optimistic”?

We’re trying to solve an already annoying oscillating miner-gaming situation, and here we are staring at the grim possibility that under the new system, miners can manipulate timestamps dozens of ways, mass-ditch the chain (the more centralized they are, the more easily) until it becomes profitable again – because this mechanism is explicitly not memoryless – “stock up” on future blocks, and have difficulty telling whether an incoming block is actually valid because you don’t know how well-synced you yourself are – and we’re being told “we have reason to be optimistic”?

This is how you get 3793 tips, guys. It’s not an accident; testnet allows a difficulty reset to 1 when a block’s timestamp is more than 20 minutes past its parent’s, which is a trial-by-fire testcase of how well “real-time” algorithms work – it ain’t well. Especially without coordination. Especially when the author doesn’t even try to take it seriously.
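The testnet escape hatch mentioned above looks roughly like this sketch – simplified from the actual consensus logic, which also handles retarget boundaries and resumes the scheduled difficulty afterwards:

```python
TARGET_SPACING = 600    # seconds
MIN_DIFFICULTY = 1      # testnet's proof-of-work floor

def required_difficulty(scheduled, parent_time, block_time):
    """Simplified testnet rule: if a block claims a timestamp more than
    twice the target spacing (20 minutes) after its parent's, it may be
    mined at minimum difficulty instead of the scheduled one."""
    if block_time > parent_time + 2 * TARGET_SPACING:
        return MIN_DIFFICULTY   # the "real-time" escape hatch
    return scheduled
```

Even this single, tiny timestamp-dependent rule – on a network with no money at stake – is enough to produce thousands of stale tips. That is the empirical baseline any fully clock-driven DAA has to answer for.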

Maybe the paper has room for improvement. I don’t know; I can’t rule things out completely – people’s ingenuity has always surprised me. Perhaps there’s a hidden corollary between the lines that I overlooked, and if so I apologize.

But as far as difficulty adjustment algorithms go – even as an overall description – I’m pretty sure the paper is a garbage fire. You do not need to have fully reproduced its results and examined each algorithm closely to know that – some things immediately jump out. Note that:

For a paper that purports to make radical modifications to difficulty adjustment – to some, arguably the greatest innovation of Bitcoin – it fails to dissect miners’ expected behavior in detail. Which is more important – irregular blocktimes, or the risk of a fractured and dead chain? If we handwave that away, we can pretty much handwave anything, including the original 2016-block algorithm. Surely no death spirals can happen, right? Even if you accept the handwaves at face value, they are themselves appallingly naive. What do you mean, “we doubt anyway that miners will be eager to accept a block from the future”? How do they even know who is correct? Perhaps it’s simpler to just not mine BCH? For all its grandiose claims about being “progress-free”, is it actually progress-free? Or is it based on the assumption of miners dumbly mining as they do today, with absolutely honest timestamps, without shenanigans?

The number one job of a difficulty adjustment algorithm is not to make sure your blocktime stays within some tight error range ε; its number one job is to make sure miners keep mining your chain, that it survives, and that they’re not incentivized to violate your basic assumptions – such as the supply schedule, resistance to mass censorship, or one-conf reliability – in adverse ways. Note how the Bobtail paper makes a great effort to cover incentives and scenarios; if a paper does not cover them, I don’t know what place it has within a blockchain context. I might as well just read wiki pages about control algorithms for my enjoyment.

You do not have to be a foremost expert in blockchain to know this.

“But wait, im_uname, this is unfair! It’s a radically new thing, and they’re at least attempting to solve something, no? Should we not give people who make an effort the benefit of the doubt?”

I generally appreciate people trying new things. As long as an honest attempt was made, it can even turn out to be a dumpster fire and it’ll be fine; such is science. But what if you proclaim you want to build a plane, then proceed to build a grandiose wingless, engineless, alien-device-less hunk of metal?

“Don’t worry about it being able to fly; just look at this, we solved the problem of fuel consumption!”



“Excuse me? What’s the point of saving fuel for a plane if you don’t care about flying?”



“This is a new thing, don’t be negative!”

I’m not even that upset that garbage fires have been written. The history of science is long; people have published garbage fires and will continue to do so long after we’re all dead. But it does get pretty upsetting when garbage fires are paraded around because they look super nice in all the wrong places, with the mob crowning their authors as “foremost experts”. What kind of community do we want to be? Do we just grab any control-theory grad student, have them type things up in proper LaTeX, and call them an expert doing groundbreaking work now? Do we want to be taken seriously or not?

