Welcome. Remember our very first post?

Recall that we didn't like the way the Neblio team (didn't) communicate. We published a vulnerability in their code and waited for their reaction. At first it seemed like they acknowledged the problem with poor communication, but after five months we can say that nothing really improved.

Moreover, you might expect that the vulnerability we published would be fixed by now. But nope: they attempted to fix it and failed, twice. We notified them about it, but it changed nothing. As of today, their node is still vulnerable.

Today, we are going to explain why their attempts to fix the published bug are insufficient.

Fix Attempt #1

Shortly after we disclosed the vulnerability, Neblio made their first attempt to fix it with this commit. Notably, this fix is what currently runs in production on the Neblio mainnet.

If you recall what the original vulnerability was about: Neblio attempted to prevent the Fake Stake attack by implementing in-memory rewinding of transactions from all blocks on the main chain down to the common block with the alternative chain branch presented to the node. Presenting an alternative chain that forked the main chain at a very early block kept the node busy with this rewinding for about five minutes (give or take, depending on the hardware) per presented block.

If we look into the code of this fix, we can see that the only thing they did was to prevent presenting alternative chains with very early fork points, by adding a requirement that the fork point must be after the last hardcoded checkpoint:

```cpp
// protect against a possible attack where an attacker sends predecessors of very early blocks in the
// blockchain, forcing a non-necessary scan of the whole blockchain
int64_t maxCheckpointBlockHeight = Checkpoints::GetLastCheckpointBlockHeight();
if (nBestHeight > maxCheckpointBlockHeight + 1) {
    const uint256 prevBlockHash = this->hashPrevBlock;
    auto it = mapBlockIndex.find(prevBlockHash);
    if (it != mapBlockIndex.cend()) {
        int64_t newBlockPrevBlockHeight = it->second->nHeight;
        if (newBlockPrevBlockHeight + 1 < maxCheckpointBlockHeight) {
            return DoS(
                25,
                error("Prevblock of block %s, which is %s, is behind the latest checkpoint block "
                      "height: %" PRId64 "\n",
                      this->GetHash().ToString().c_str(), prevBlockHash.ToString().c_str(),
                      maxCheckpointBlockHeight));
        }
    }
}
```

Obviously, this fix completely misses the point. We can no longer present a block that causes the whole main chain to be rewound in memory, but we can still present a block that forks the chain right after the last hardcoded checkpoint, which still causes the node to spend a lot of resources rewinding all the blocks down to that point.

We should note that the whole operation inside VerifyInputsUnspent is extremely demanding on resources because it does so many things. Loading the blocks from the database is just one step: the function processes each block and all of its transactions. This means that instead of leaving the function to work through mostly empty blocks, the attacker can spend a tiny bit of money to fill blocks with many useless transactions. If the attacker fills enough blocks with transactions, rewinding the transactions of those blocks in memory becomes just as demanding as rewinding the whole chain down to an early block was before.

So this obviously does not work.

Fix Attempt #2

To be fair, we should also consider the second attempt Neblio made. This one has not been completed as of now: it has never been merged into the master branch and was never deployed. The code lives on a development branch, VerifyInputsUnspent_DoSFix, which contains just two new commits and has not been touched for four months. So it is unclear whether it is even intended to be merged into master eventually, or whether it has been abandoned. Nevertheless, let's quickly check what is in there.

Only one commit is interesting: Fix the possible DoS of rewinding a very long chain, by only unspending transactions that are in the fork. As its name suggests, it removes the expensive loading and processing of blocks from the main chain.

Recall the original problem this is trying to solve: the Fake Stake attack. Briefly, at the moment AcceptBlock is called (at the end of which resources are allocated for the new block), it is not possible to decide whether the coinstake kernel (the first input of the second transaction in the block) belongs to the UTXO set or not. So the original code only checked whether the coinstake kernel referenced an output that existed at some point in the past; if it did, it was allowed to be used, even if it had already been spent.

The VerifyInputsUnspent fix was supposed to perform this check so that previously spent outputs could not be used. However, since the block being processed could be on an alternative branch, the current UTXO set that the node holds for its active chain tip cannot be used directly, because some outputs could have been spent after the fork point of the alternative branch. This is why the original VerifyInputsUnspent rewound all main-chain blocks down to the fork point with the alternative chain: to create a valid view of the UTXO set for the alternative chain.

The second fix attempt no longer rewinds all main-chain blocks down to the fork point. Instead, the new idea is to process only the transactions relevant to the alternative chain, reducing the total amount of work that needs to be done. That is a reasonable idea, but by itself it is not enough. The attacker still controls the alternative chain and can make it heavy enough (by filling its blocks with many transactions) to get the node stuck. So even if only relevant transactions are processed, there can be enough of them that the processing is still too expensive.

So this does not work either.

Summary

As of today, Neblio has not presented any viable fix for the VerifyInputsUnspent vulnerability we published more than five months ago. There are also other problems with their fix attempts, which we may cover in the future. So far it does not seem that these developers know what they are doing, so the chances are small that they will actually fix it correctly by themselves.