The intention of this post is to explore a possible way to make the Ethereum ecosystem more friendly without requiring any contentious changes to the base-layer protocol or community philosophy.

Personally, I’m against institutionalizing processes for irregular state changes to Ethereum, as I believe such a process undermines the core value proposition of a base-layer blockchain protocol.

However, it is clear that there is an issue that needs to be addressed to ensure that Ethereum lives up to its potential as a platform that is friendly to both developers who are creating valuable applications and to the users of those applications.

Who is responsible for smart contract security?

Do we blame Parity for being negligent and not following best practices, or do we blame the users of the Parity multi-sig wallet for choosing to trust unaudited code? Or do we say it's nobody's fault and collectively clean up the mess, again and again?

Who is to blame when funds are lost?

The operation of smart contracts on Ethereum is a unique risk. Developers deploy open source code but have no liability when it fails. Users are putting their funds at stake but are not sophisticated enough to assess the risk involved. So who do we hold accountable when things break?

Some of the most significant losses of funds, including the Parity multi-sig issue, have boiled down to trivial mistakes that would almost certainly have been caught by a professional auditing team. But developers are building public infrastructure for free. Requiring development teams to pay for expensive audits and bug bounties on principle may not be feasible, and is likely to deter positive contributions in the future.

Looking at the issue strictly from a user’s perspective, whenever a user makes a transaction on the blockchain they are already required to make an assessment of the risk of that action (being your own bank is kind of hard, huh?).

Most users are not equipped to examine the contracts they are using themselves, nor would it be scalable for every user to personally examine every contract even if they had the required skills. So users rely on the reputation and trust of development teams as a heuristic, but what if instead they could pay an insurance provider to take on the risk on their behalf?

The Standard Model of Insurance Markets

Generally an insurance market consists of a single entity (the insurer) which provides insurance agreements (policies) to many entities (policy holders). For each insurance policy the insurer sets a premium which must be paid on a periodic basis, or the agreement becomes void. The policy specifies a maximum liability amount as well as the conditions for payout. The insurer's business model is predicated on its ability to assess the risk of individual policies and maintain a pool of funds to cover expected payouts. From the policy holder's perspective, they are paying a consistent fee in order to hedge against a significant risk of loss.
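The solvency arithmetic behind this model can be made concrete with a minimal sketch. All names and numbers below are hypothetical; the point is simply that the insurer's premium income must exceed the expected payouts its pool needs to cover.

```python
# Minimal sketch of the standard insurance model described above.
# All figures are hypothetical and purely illustrative.
from dataclasses import dataclass

@dataclass
class Policy:
    max_liability: float     # maximum payout the insurer is liable for
    annual_loss_prob: float  # insurer's estimate of a claim occurring in a year
    premium: float           # periodic payment made by the policy holder

def expected_annual_payout(policies):
    """Expected payouts that the insurer's pool of funds must cover."""
    return sum(p.max_liability * p.annual_loss_prob for p in policies)

def annual_premium_income(policies):
    """Total premiums collected from all policy holders."""
    return sum(p.premium for p in policies)

# A hypothetical book of two policies:
book = [
    Policy(max_liability=1_000_000, annual_loss_prob=0.01, premium=15_000),
    Policy(max_liability=250_000, annual_loss_prob=0.02, premium=7_500),
]

# The business is viable only if premium income exceeds expected payouts
# plus operating costs.
print(expected_annual_payout(book))  # expected payouts: 15000.0
print(annual_premium_income(book))   # premium income:   22500
```

The gap between the two totals is what funds the insurer's risk-assessment work and profit margin.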

Insurability of Smart Contract Bugs

A key consideration for the feasibility of this proposal is whether smart contract bugs can be considered an insurable risk.

Large number of similar exposure units: There are many instances where users are exposed to the risk of bugs, and their risks depend on the specific contracts they are engaging with. These risks can be classified into various groups and types of exposure by an insurance provider.

Definite Loss: The expected behavior of a contract can be summarized in human readable language, and if the actual result on chain differs from the expected result expressed in the policy then the loss can be shown to have occurred at a specific time and from a specific cause.

Accidental Loss: From the perspective of an insurance provider the exploitation of a bug is similar to theft, which is commonly insured. The event is not necessarily random or accidental, but it can be proven to have occurred through no fault of the insured party.

Large Loss: It only makes sense to insure against losses large enough that the individual entity cannot simply self-insure by saving. This is probably not relevant for contracts which will only ever handle small amounts, but there are definitely cases where contracts handle significant value.

Affordable Premium: If the event is so likely to occur, or its cost so large, that the resulting premium is unaffordable, then no one will buy insurance. In this particular case, the risk of a carefully developed contract should approach zero, so carefully constructed contracts that follow industry best practices should be relatively cheap to insure even for very large amounts.

Calculable Loss: Based on a loss event, there should be significant evidence to calculate the damages/loss on the impacted party. In the event of a smart contract bug, it should be pretty easy to quantify losses based on public blockchain records.

Limited risk of catastrophically large losses: An initial distinction between application layer bugs and protocol layer bugs is necessary, as insurance is probably not a good mechanism to mitigate risk for protocol layer / consensus failure, but such failures would likely result in a non-contentious hardfork anyway. Application-layer bugs are less likely to affect all users simultaneously, and insurers can help mitigate these risks further by selectively insuring contracts and pushing developers to implement safety features like trustless global freeze functions.

With minor caveats, smart contract bugs at the application layer do appear to be an insurable risk. So what would an insurance provider and policy actually look like?

Auditors as Insurance Providers

Successful insurance providers will be experts at assessing the risks of smart contract failures, and can monetize an information asymmetry advantage with regard to the security of contracts.

Unlike other types of risk, a code vulnerability is a binary outcome, and once it is uncovered the probability of it being exploited… goes from 0 to 100, real quick. The risk assessment being made by the insurance provider is not about the risk of various known bugs being exploited; it is about the existence of an unknown vulnerability. A good auditing company should, for a given contract, be able to reach a point where it is confident that there is almost no risk. However, as with many problems, there is a significant difference in the level of effort required to be 95% confident versus 99% or 99.99% confident in the analysis.

Based on their private analysis and degree of confidence they can set a premium for each contract which they believe will be profitable, and direct ongoing efforts towards the contracts that will either attract new customers or represent an excessive risk exposure to the insurance pool.
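One way to picture the relationship between audit confidence and premium is the sketch below. The linear formula and the loading factor are my own assumptions, not anything from an actual insurer; the point is only that the residual probability of an unknown vulnerability drives the price.

```python
# Hypothetical pricing sketch: translating an auditor-insurer's private
# confidence level into an annual premium. The formula and `loading`
# factor are illustrative assumptions, not an actual pricing model.
def annual_premium(insured_amount, confidence, loading=1.5):
    """confidence = auditor's estimated probability that NO unknown
    vulnerability exists in the contract.

    Expected loss is the residual probability times the insured amount;
    `loading` covers operating costs and profit margin.
    """
    residual_risk = 1.0 - confidence
    return insured_amount * residual_risk * loading

# Pushing the audit from 95% to 99.99% confidence takes much more effort,
# but it dramatically changes the premium the insurer can safely offer
# on a 1,000,000-unit policy:
for conf in (0.95, 0.99, 0.9999):
    print(conf, annual_premium(1_000_000, conf))
```

This also illustrates why carefully constructed contracts should be cheap to insure even for very large amounts: the premium scales with residual risk, not with the insured amount alone.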

The result is that auditing companies that provide insurance will have a competitive interest in proactively reviewing significant and broadly useful contracts in order to attract customers rather than being contracted strictly as consultants by development teams.

A significant caveat for insurers, and a possible reason why such a system might fail, is that there is inherent information leakage that is both undesirable and unavoidable from the perspective of the insurance provider, who is attempting to capitalize on information asymmetry to make a profit.

If users see that insurance is offered for a contract, then they implicitly know that it has been reviewed and that the auditors are reasonably sure it is safe to use. This means users can significantly reduce risk simply by interacting only with contracts for which insurance is offered, despite never paying premiums. From the user's perspective this is simply a positive externality.

It’s also possible that this information leakage could be exploited by competing insurance providers that choose to spend the bulk of their operating budget on marketing, and simply copy the contracts and premiums offered by more legitimate firms — though such a strategy might prompt legitimate firms to lure their competitors into excessively risky positions so it’s hard to say what the equilibrium would end up looking like.

A need for smarter contracts

The proposal above could be implemented right now without any changes to the underlying protocol, but actually managing the policies would be fairly cumbersome.

For these insurance agreements to work, it is critical that we can tie the insured party to the loss directly, because we do not want someone to be able to insure the loss of someone else's funds. The most practical way to do this is for the policy to apply to specific Ethereum addresses and contracts. This way insurers do not need to worry about the messiness of sorting out cases where someone may have had their private key lost or stolen; they just need to look at public blockchain data to assess damages and send payouts.
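The address-bound policy described above can be sketched as a simple data model. Everything here is hypothetical (the record shape, the field names, the payout rule); it only illustrates that binding a policy to an (insured address, covered contract) pair lets the payout be computed entirely from public on-chain records.

```python
# Hypothetical data model for an address-bound policy. The payout is
# computed purely from publicly observable deposits and withdrawals,
# so no real-world identity resolution is needed.
from dataclasses import dataclass

@dataclass(frozen=True)
class AddressPolicy:
    insured: str           # Ethereum address of the policy holder
    covered_contract: str  # address of the contract the policy applies to
    max_liability: int     # payout cap, in wei

def assess_payout(policy, deposits, recovered):
    """Loss = what the insured address provably deposited into the covered
    contract minus what it got back, capped at the policy's max liability.
    `deposits` and `recovered` map (address, contract) pairs to amounts,
    as reconstructed from public blockchain data."""
    key = (policy.insured, policy.covered_contract)
    loss = max(deposits.get(key, 0) - recovered.get(key, 0), 0)
    return min(loss, policy.max_liability)
```

Because the loss is keyed to a specific address and contract, a third party cannot take out a policy on someone else's funds: a policy held by an address with no deposits in the covered contract simply pays out zero.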

We also want the contents of the insurance policy to be human readable and to contain language that cannot be interpreted by the EVM. Right now this would require a traditional legal agreement, but such an agreement is meaningless unless it is associated with a real-world identity in addition to the private key. Projects like OpenLaw and Mattereum are working on making it simple to connect traditional legal agreements to smart contracts.

But wait, what if we want to preserve the pseudonymous property of Ethereum, or we want to use an address which doesn't represent a legal entity but is instead a DAO? Then ideally this would be implemented using a collateralized, human-readable agreement that is enforceably arbitrated on-chain, without needing a traditional legal entity or intruding on the user's privacy.

Much of the research I’m doing for the Aragon Network applies here, as does the work of projects like Kleros and Delphi which are actively working on improving how to handle arbitration of subjective issues involving smart contracts.

Conclusion and comparison to other proposals

This proposal does not do anything for people who have had issues in the past; it is intended to be a self-sufficient, standalone model for insurance that does not depend on the expectation of any future state changes or fund-recovery proposals succeeding. Unlike other proposals it does not offer concrete or immediate remedies, because it is not a simple code fix. I'm simply pointing out a possible path forward that might prove to be less contentious.

A key distinction between this approach and proposals like EIP-999 is that with an insurance model users are still required to take 100 percent responsibility for their actions within the protocol. Insurance makes it easier and more practical to use the platform safely, but users' choices still have the same consequences. If they opt not to pay for insurance they are choosing to take on greater risk, and it should be clear that they will bear the consequences of that choice.

It also differs from Alex Van de Sande's insurance proposal, where the insurer issues tokens that essentially create a futures market based on a contract vulnerability being found and exploited within some time horizon. His approach is in some ways simpler, but may produce some strange side effects because it does not enforce the requirement of a definite and calculable loss. Specifically, the beneficiary in the event of a payout is not necessarily the individual who experiences the loss. This can lead to a situation where a previously unknown vulnerability is found, and the attacker can profit by buying up futures with the intention of exploiting the vulnerability themselves. That issue isn't necessarily too problematic, but in my opinion a more traditional insurance policy is better.

My hope is that if we continue to look for constructive solutions, we can arrive at a solution that is not contentious and which strengthens the community and technology moving forward. We should not rush into or force a contentious decision until we feel we have exhausted all alternative paths.