Staking Models and Incentivized Voting Systems: A Deep Dive

In our last update, we gave a quick introduction to our ongoing research into staking models and how regular people could apply them in the future. We increasingly believe that staking models are a core piece of the iterative (and revolutionary) value provided by blockchain applications, and that a strong framework for applying and understanding them will remain a long-term competitive advantage for whoever achieves it.

Today, we want to share some lessons on the topic — specifically, lessons related to game theoretic challenges and the variants that may solve them. We also wish to spark some conversation within the community on a few open questions.

As you probably know, game theory is the mathematical study of strategic environments. In cryptocurrency and blockchain land, it refers primarily to understanding how the incentives set up by a decentralized system interact with the real world to produce expected outcomes. Game theoretic thinking provides a lot of value when reasoning about the security, risk model and outcomes of a smart contract. It is important to note that game theoretic analyses often make assumptions such as a large number (think infinite) of purely rational actors. That assumption is usually the most conservative one, but in certain cases a developer must understand how their model holds up if there is only a small number of (say, initial) users.


We did a deep dive into existing research on staking models, voting systems, and collaboration and consensus mechanisms. In the public material, we often find the following problems:

No incentive to vote. This one is a killer. It is often the case that a staking model is zero-sum, or becomes zero-sum when it is not "complete." Since voting, staking and other participation in smart contracts costs gas at a minimum (and often requires risking your stake!), these models simply do not work.

No incentive to vote in the "plainly obvious" case. This is a more subtle variant of no incentive to vote. If the model relies on redistributing the stake of incorrect actors, it may be the case that paying gas, plus the intrinsic trust required in the functionality of the system (i.e., is the code even correct?), disincentivizes participation unless the outcome is likely to be contentious.

No (or partial) incentive to share information. In many staking models, the system works best if individuals who possess unique insight are motivated (incentivized) to place their stake and then share that insight with other users. The ideal outcome is a marketplace of information, where better-informed users are rewarded for participating in the vote and then sharing their information. In many designs, because users are compensated solely or partially from the value of the incorrect stakers, the incentive is to convince enough people to win, but not so many that you win nothing (optimally, convincing just 51% of people). This is a misincentive similar to the "plainly obvious" problem.

Malicious experts. For sufficiently complex problems under certain architectural designs, a malicious expert can exploit accrued trust to mislead the community. For example, they may create a subtly insecure smart contract, stake a large amount on it and "market" their expertise, causing others to stake on it as well. Then, at a future time, they remove their stake and reveal the issue.

Privacy and bandwagon effects. In many environments we would like users to share their information, but a model that requires a majority vote at some point to decide distributions is subject to bandwagon effects, especially with a smaller number of participants. For example, if the model distributes stake in proportion to the fraction of non-majority voters and you can observe the current vote state, you may be incentivized simply to vote with the majority, without putting any analytical effort into the problem at hand.
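To make the "no incentive to vote" problem concrete, here is a minimal payoff sketch. All numbers and the function itself are hypothetical, chosen only to show the shape of the problem, not taken from any particular deployed model:

```python
# Illustrative expected-payoff calculation for a zero-sum staking vote.
# Parameters are hypothetical; the point is the sign of the result.

def expected_payoff(stake, p_win, losing_pool_share, gas_cost):
    """Expected value of voting: win a share of the losing side's pool
    with probability p_win, lose your own stake otherwise, always pay gas."""
    return p_win * losing_pool_share - (1 - p_win) * stake - gas_cost

# "Plainly obvious" case: nearly everyone votes correctly, so the losing
# pool is tiny and the expected reward cannot cover the gas cost.
ev = expected_payoff(stake=100.0, p_win=0.99, losing_pool_share=0.5, gas_cost=1.0)
print(ev)  # negative: a rational actor abstains
```

When the losing pool is large relative to the stake at risk, the same formula turns positive, which is exactly why these models only attract participation when the outcome is contentious.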

These issues are common — but they’re not unaddressable. In the development of our simulation platform and tooling for testing staking models, we have come up with meaningful solutions for each — but they come with associated costs. As examples:

In a two-phase model, we can achieve privacy by having users place their votes fully anonymized in step one, then reveal them in step two, before any value is redistributed. The cryptographic and incentive considerations here are well understood.

In the iterated model, we can repeat this process, with the votes from a second round of voting determining the distributions for the first round. This creates a strong incentive to reveal and widely distribute the best information, solving both the "plainly obvious" and "51%/49%" problems.

By adding delegated votes and allowing users to allocate their value to copy the votes of some expert, we prevent a malicious expert from being able to throw their weight around.
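The two-phase model above is essentially a commit-reveal scheme. Here is a minimal sketch of the idea in Python; the hashing and salting mirror what an on-chain contract would do, but the function names and structure are illustrative, not production code:

```python
# Minimal commit-reveal sketch of the two-phase voting model.
import hashlib
import secrets

def commit(vote: str, salt: bytes) -> str:
    """Phase 1: publish only a hash, keeping the vote private."""
    return hashlib.sha256(salt + vote.encode()).hexdigest()

def reveal_ok(commitment: str, vote: str, salt: bytes) -> bool:
    """Phase 2: reveal vote and salt; anyone can check the commitment."""
    return commit(vote, salt) == commitment

salt = secrets.token_bytes(32)   # random salt prevents guessing common votes
c = commit("yes", salt)
assert reveal_ok(c, "yes", salt)      # honest reveal verifies
assert not reveal_ok(c, "no", salt)   # a changed vote is rejected
```

The salt matters: with only a handful of possible votes, an unsalted hash could be brute-forced from the commitment, defeating the privacy the first phase is meant to provide.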

We believe there’s some interesting opportunity to replace delegation with elected voters and other hierarchical voting models.
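To illustrate what delegated voting looks like mechanically, here is a hypothetical tally in the liquid-democracy style: each user either votes directly or delegates their full weight to another user. The data layout and cycle handling are our own assumptions for this sketch:

```python
# Hypothetical delegated-vote tally (liquid-democracy style).
def tally(weights, votes, delegates):
    """Resolve each user's delegation chain and sum weight per option."""
    def resolve(user, seen=()):
        if user in votes:
            return votes[user]
        if user in seen or user not in delegates:
            return None  # delegation cycle or dead end: weight is discarded
        return resolve(delegates[user], seen + (user,))

    totals = {}
    for user, weight in weights.items():
        choice = resolve(user)
        if choice is not None:
            totals[choice] = totals.get(choice, 0) + weight
    return totals

weights = {"a": 3, "b": 2, "c": 1}
votes = {"a": "yes", "c": "no"}
delegates = {"b": "a"}                   # b copies a's vote
print(tally(weights, votes, delegates))  # {'yes': 5, 'no': 1}
```

An elected-voter or hierarchical model would constrain where the `delegates` edges may point (e.g., only toward elected representatives) rather than allowing arbitrary delegation.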

We’ve built some initial contract code to prove these ideas. While this is the focus of only a small part of our team right now (with Rewards and the ERC20 creator being our immediate priorities), finding deep solutions to these problems, and building the infrastructure to create and manage staking and voting systems elegantly and with a great user experience, is likely to be a huge piece of our unique value proposition.

That’s it for this week! If you have any questions, be sure to join our Discord channel, tweet to us on Twitter or chime in on the /r/BlockCAT subreddit!