The broken EIP security incentive

And how to fix it…

Constantinople was delayed, and it was delayed because of a fairly obvious flaw. Although the flaw was obvious, the decision to delay the upgrade came at the last minute, because the bug itself was found at the last minute. Had the flawed EIP been actively reviewed by a broader community earlier, the flaw would probably have been abundantly clear (yes, there is some hindsight bias to consider here).

Finding and reporting flaws in an EIP draft carries little to no incentive, economic or social. There is almost no social capital to be gained from finding a flaw in an EIP, so why would anyone make the effort?

In fact, as a security researcher, I have both more social and more economic incentive to wait as long as possible before reporting a bug in a hard fork: the later the report, the larger the bounty and the more attention the bug receives.

During the early phases, only implementers and core developers really care about bugs. Later, the broader community starts to care, and finally, if a bug report is published at the last minute, it gets the attention of the entire community. This must change: consistently delaying hard forks for last-minute bugs is not a trend we should normalize.

Proposed Solution

The solution I propose is neither novel nor especially interesting, but it is simple and I believe it would work. We need to offer enough economic incentive at the beginning to compensate for the economic and social incentives earned at later stages. To do this, we simply introduce an exponential decay on the bounty value offered by the Ethereum Foundation.

Decay: Bounty over time

The proposed initial bounty compensation should be two times the current rate, slowly decaying to meet the current rate. This decay period would start as soon as an EIP is accepted and scheduled for a hard fork, or ideally before implementers have begun implementing the proposal (which is currently the case).
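To make the mechanism concrete, here is a minimal sketch of such a decay schedule. The function name, the 30-day half-life, and the 10,000-unit base rate are all hypothetical parameters chosen for illustration; the post only specifies that the bounty starts at double the current rate and decays toward it.

```python
import math

def bounty(t_days: float, base_rate: float, half_life_days: float = 30.0) -> float:
    """Bounty value t_days after the decay period starts.

    Starts at 2x the base rate and decays exponentially toward the
    base rate. half_life_days (an assumed parameter) is how long it
    takes for the extra bonus above the base rate to halve.
    """
    bonus = math.exp(-math.log(2) * t_days / half_life_days)
    return base_rate * (1 + bonus)

# At t=0 the bounty is double the base rate; after one half-life the
# bonus has halved; long afterwards it approaches the base rate.
print(bounty(0, 10_000))    # 20000.0
print(bounty(30, 10_000))   # 15000.0
print(bounty(365, 10_000))  # ~10002, close to the current rate
```

The half-life is the only tuning knob: a short one rewards very early review sharply, while a long one keeps the bonus meaningful through implementation and testing.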

This would give researchers enough incentive to start looking for problems before they become critical, and would foster healthy peer review of accepted EIPs.

Although I’ve presented only this one solution, there are other measures we can take to ensure EIPs are safe to implement, and we should adopt as many of them as possible.