In this visual essay, we explore a short paper by Vitalik on Discouragement attacks. Discouragement attacks are a particularly effective way of bringing down blockchains. How to best defend against them is still an active area of research.

A discouragement attack consists of an attacker acting maliciously inside a consensus mechanism in order to reduce other validators' revenue, even at some cost to themselves, in order to encourage the victims to drop out of the mechanism.

In other words, a discouragement attack consists of an attacker acting dishonestly inside a consensus mechanism in order to try to discourage other validators from participating.

Essentially, an attacker may choose to lose money in the short term, if they believe they can discourage other validators from participating, eventually forcing some of them to drop out. There are two reasons why an attacker might do this:

(1) - They will make more money in the long term - If the consensus mechanism is designed in such a way that validators dropping out increases revenue to the remaining ones -- in other words, if it contains a sliding incentive structure -- then the more validators drop out, the more money the attacker stands to make.

(2) - They are preparing for a 51% attack - As more validators drop out, there are fewer honest validators left to ward off the attacker. This lowers the cost of a 51% attack.


This isn't just theoretical: CSW's attempted "I'll mine a lot on BCH to make it unprofitable for everyone else, get them to hop off, and that way take over the chain" play was arguably, in part, a discouragement attack.

In either case, it’s clear that this kind of two-step strategy is potentially a much cheaper way of bringing down blockchains than a direct frontal attack.

The paper starts by considering a discouragement attack on a simple proof-of-stake blockchain. While it chooses to describe the attack in abstract terms, here we'll focus on a concrete example.

Imagine we have a proof-of-stake blockchain with the following rules:

Griefing factors

To help us break down these types of attacks, the paper introduces a useful concept called the griefing factor. The griefing factor of an attack is defined as the losses to other participants divided by the losses to the attacker.

To analyze attacks like these, a common tool that we use is the griefing factor. A mechanism in some equilibrium has a griefing factor of N if the attacker has a mechanism by which they can sacrifice \$1 of their own funds to cause \$N of losses to the other participants. If attackers can harm others costlessly or profitably, the griefing factor is \infty . With respect to attackers with \geq \frac{1}{2} of all validators, the toy proof of stake mechanism we described above has griefing factor \infty .

Why is this a useful metric? In simple terms, the griefing factor offers important information about the economic incentives facing attackers.

For example, in Proof-of-Work-based chains, the lower the value of the cryptocurrency, the lower the incentives for miners to add new blocks, and so the lower the overall hashrate (total attempts per second across the network to add a block). This matters because a lower hashrate means less competition for attackers (in the race to add a new block to the chain). And less competition for attackers means a less secure chain.

The griefing factor gives us an answer to the following question: for every dollar spent on an attack, how much money can an attacker cause the other participants to lose?

For example, say an attacker -- in a fairly unattractive Proof-of-Work chain -- spends $100 (on mining equipment) in order to cause $1000 of losses to the other stakeholders on the chain.

In this case, every dollar spent by the attacker results in $10 of losses (or grief) to the other participants. More formally, $100 bought $1000 of losses, and so the griefing factor was 1000 / 100 = 10 .

Note that in cases where the attacker can harm others costlessly or profitably -- in other words, when the attacker is able to attack a chain at no cost to herself, or even profit from the attack -- the griefing factor is \infty . This was the case for the simple proof-of-stake chain we explored in the introduction.
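The definition above is simple enough to capture in a few lines of Python. This is just a sketch of the ratio described in the text; the function name is our own shorthand, not the paper's:

```python
def griefing_factor(attacker_loss, others_loss):
    """Losses inflicted on the other participants per unit of loss to the attacker."""
    if attacker_loss == 0:
        # Costless (or profitable) attacks grief infinitely.
        return float("inf")
    return others_loss / attacker_loss

# The example above: $100 spent by the attacker causes $1000 of losses.
print(griefing_factor(100, 1000))  # → 10.0

# The simple proof-of-stake chain from the introduction: attacking costs nothing.
print(griefing_factor(0, 1000))    # → inf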

Reward distribution functions

The good news is that we can do much better than the 🙂😡 example we considered in the introduction. It turns out we can design mechanisms inside proof-of-stake chains that keep griefing factors within certain bounds.

... it is possible to modify the mechanism to bound the griefing factor: if M out of N validators are seen signing a message, then each validator that signs only earns \frac{R}{N} \times \frac{M}{N} . The key here lies in how the mechanism chooses to distribute the total reward, R , among the N validators each round.

Let's briefly revisit the introductory example. Remember we used R to refer to the total reward, and N to refer to the total number of validators. And that each validator, during every round, had a chance to earn a reward of \frac{R}{N} if they signed a message that contributed to the chain’s consensus during that round.

While \frac{R}{N} wasn't a bad first attempt at a reward distribution function, we discovered that it could lead to ugly situations where the griefing factor could conceivably be \infty .

Before we move on, let's define two terms that we'll use going forward:

- A majority attack is a discouragement attack where the attacker controls a majority of the validators.
- A minority attack is a discouragement attack where the attacker controls a minority of the validators.

In this section, the paper analyzes discouragement attacks on a chain with a modified reward distribution function: one that bounds the griefing factor between 2 and 3 under a majority attack , and between \frac{1}{2} and 1 under a minority attack .

It achieves this with one small tweak: each validator's reward is now tied to the proportion of all validators seen signing a message.

More formally, instead of earning a reward of \frac{R}{N} each round, each visible signing validator earns a reward of \frac{R}{N} \times \frac{M}{N} , where M is the total number of validators seen signing a message.

To make the difference clear, let's relate this back to the example we looked at in the introduction.

In the introduction, we saw that on a chain with a reward distribution function of \frac{R}{N} the attacker can harm the other validators at no cost to herself:

However, look what happens if we replay the same attack on a chain with reward distribution function \frac{R}{N} \times \frac{M}{N} :

We see that, to censor the other validators, the attacker is forced to inflict economic damage to herself.

The number of validators, N , is still 10, and the total reward, R , is still 30, but there are now only M = 6 signing validators. This means each of the attacker's validators now earns \frac{R}{N} \times \frac{M}{N} = \frac{30}{10} \times \frac{6}{10} = 1.8 coins per round.
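As a quick sanity check, here is the modified reward distribution function in Python, evaluated on this example's numbers (a sketch; the function name is our own shorthand):

```python
def reward_per_validator(R, N, M):
    """Per-round reward for each visibly signing validator under R/N * M/N.

    Written as R * M / (N * N), which equals (R/N) * (M/N) but keeps the
    arithmetic exact for these inputs.
    """
    return R * M / (N * N)

# All 10 validators seen signing: the full R/N payout.
print(reward_per_validator(30, 10, 10))  # → 3.0

# Only M = 6 validators seen signing, as in the censorship example above.
print(reward_per_validator(30, 10, 6))   # → 1.8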

The key point here is the following: the fewer validators seen signing a message, the lower everyone's reward.

This ensures an attacker can't censor other validators without harming herself. It's this link between the individual and the group outcome that limits the griefing factor.

Don't worry if it's not entirely clear to you why this is the case. We're about to go through an example that should help clarify things.

Majority attacks: an overview

As we mentioned in the previous section, a majority attack is just a discouragement attack where the attacker controls a majority of the validators.

Before we dive into the analysis, here are a few useful reminders:

(1) - As we saw in the introduction, majority attacks have the potential to bring down blockchains -- since the attacker controls a majority of the validators, she has the power to censor messages coming from other validators, reducing their rewards to zero.
(2) - This results in honest validators being discouraged from participating (hence the term discouragement attack). Eventually they drop out, giving the attacker even more control over the chain.
(3) - We introduced the term reward distribution function to refer to the function that determines the reward received by (visibly) signing validators.
(4) - The chain we analyzed in the introduction had a reward distribution function of \frac{R}{N} . This resulted in its griefing factor being unbounded, in other words \infty .
(5) - By changing the reward distribution function to \frac{R}{N} \times \frac{M}{N} -- in other words, by tying each individual validator's reward to the total number of validators seen signing -- we can bound the griefing factor between 2 and 3.

A majority attack always splits the validators into three groups:

Group 1 - The validators controlled by the attacker
Group 2 - The validators censored by the attacker
Group 3 - The remaining uncensored validators

Note that in the case where the attacker censors all other validators, group 3 is empty -- this is what happened in our introductory example.

In the previous section we touched on the link between the reward distribution function and the effectiveness of a majority attack .

The key point here is the following: under a majority attack on a chain with reward distribution function \frac{R}{N} the attacker can attack the chain without incurring any losses; this is simply not possible on a chain with reward distribution function \frac{R}{N} \times \frac{M}{N} .

By tying each validator's reward to the proportion of all validators seen signing a message, \frac{R}{N} \times \frac{M}{N} ensures validators from all three groups -- the attacker's included -- stand to lose:

Group 1 - The validators controlled by the attacker lose because there are fewer validators seen signing: they still receive \frac{R}{N} \times \frac{M}{N} , but M (the number of validators seen signing) has decreased.
Group 2 - The validators censored by the attacker lose out because they are not seen signing: remember, validators that aren't seen signing receive nothing.
Group 3 - The remaining uncensored validators also lose because there are fewer validators seen signing: like group 1, they still receive \frac{R}{N} \times \frac{M}{N} but M has decreased.

Two natural questions to ask are: how much does the attacker stand to lose relative to the other validators? And what are the variables that influence the magnitude of these losses?

It turns out we need to answer these questions to show that a reward distribution function of \frac{R}{N} \times \frac{M}{N} bounds the griefing factor between 2 and 3.

Let's take a look at an explorable example to develop our intuition.

Majority attacks: explorable example

An explorable example is an example that allows you (the reader) to explore a model. It makes the abstract concrete, and allows you to develop an intuition for how a system works.


Suppose that the total reward per round is R = 200 coins. And suppose there are N = 100 validators that, at first, are all signing messages honestly. Now imagine that at some point, an attacker with \frac{N}{2} = 50 validators starts ignoring messages from k = 25 honest validators. What is the griefing factor of such an attack?

Remember that a majority attack always splits the validators into three groups, and that the griefing factor of an attack is defined as the losses to the other participants (groups 2 and 3) divided by the losses to the attacker (group 1). So in order to get the griefing factor, we first need to figure out how much each group loses because of the attack.

Before the censoring attack, every validator is seen signing every message honestly. So each validator -- regardless of which group they are a part of -- earns B = \frac{R}{N} \times \frac{M}{N} = \frac{200}{100} \times \frac{100}{100} = 2 \times 1 = 2 coins each round. The three groups are affected differently by the attack, however.

Group 1: Validators controlled by the attacker

When k validators have been censored, the reward for all online (visibly signing) validators -- including the attacker's -- falls to A = \frac{R}{N} \times \frac{N-k}{N} = 1.5 coins. This means the attacker's loss per validator per round is B - A = 0.5 coins. And since she controls C = \frac{N}{2} = 50 validators, her total loss is L1 = C \times (B - A) = 25 coins.

Group 2: Validators censored by the attacker

Since they aren't seen signing, validators in this group have their reward cut to A' = 0 coins. This means the loss per validator per round in this group is B - A' = 2 coins. And since there are C' = k = 25 validators in this group, their total loss is L2 = C' \times (B - A') = 50 coins.

Group 3: Remaining uncensored validators

We've seen that the reward for all online (visibly signing) validators -- i.e. all validators in this group -- falls to A = \frac{R}{N} \times \frac{N-k}{N} = 1.5 coins. So, as with group 1, the loss per validator per round in this group is B - A = 0.5 coins. And since there are C'' = \frac{N}{2} - k = 25 validators in this group, their total loss is L3 = C'' \times (B - A) = 12.5 coins.

Now that we've calculated the losses to all groups involved, the griefing factor basically falls out. We've seen that the attacker loses L1 and the other participants lose L2 + L3. Since the griefing factor of an attack is defined as the losses to the other participants (groups 2 and 3) divided by the losses to the attacker (group 1), the griefing factor is simply equal to \frac{L2 + L3}{L1} = \frac{50 + 12.5}{25} = 2.5.
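The whole calculation above can be condensed into a short Python function. This is a sketch using the essay's reward distribution; the function and variable names are our own shorthand, not the paper's notation:

```python
def majority_attack_griefing_factor(R, N, attacker, k):
    """Griefing factor when `attacker` validators censor k honest validators,
    under the R/N * M/N reward distribution."""
    M = N - k                          # validators still seen signing
    before = R * N / (N * N)           # per-validator reward before the attack (= R/N)
    after = R * M / (N * N)            # per-validator reward during the attack
    L1 = attacker * (before - after)            # group 1: the attacker's own loss
    L2 = k * before                             # group 2: censored validators (reward falls to 0)
    L3 = (N - attacker - k) * (before - after)  # group 3: remaining uncensored validators
    return (L2 + L3) / L1

# The worked example: R = 200, N = 100, attacker controls 50, censors 25.
print(majority_attack_griefing_factor(200, 100, 50, 25))  # → 2.5
```

Sweeping k from 1 up to 49 (with the attacker fixed at \frac{N}{2} = 50 validators) yields griefing factors that fall from just under 3 down towards 2, matching the 2-to-3 bound this section describes.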

Minority attacks: an overview

We'll finish off with a quick look at a second type of discouragement attack: a minority attack . Remember, a minority attack is just a discouragement attack where the attacker controls a minority of the validators.

Unlike a majority attack, where an attacker chooses to censor other validators, under a minority attack she simply chooses to take her validators offline.

It might not be immediately obvious to you why this counts as an attack.

The logic is the following: even if just a minority of validators go offline, there are fewer validators available to sign each message; and on a chain with reward distribution function \frac{R}{N} \times \frac{M}{N} , fewer validators signing ( M ) means a lower reward for everyone.

It turns out when we use a reward distribution function of \frac{R}{N} \times \frac{M}{N} we can limit the griefing factor under this sort of attack to 1 for small attacks and \frac{1}{2} for attacks with size approaching half the entire validator set.

In plain English, this means we can make sure an attacker always causes at least as much (financial) damage to herself as she does to the other validators -- and possibly twice as much.

Minority attacks: explorable example

To develop some intuition for these bounds, let's take a look at another explorable example .