2. Requirements

Protocol Goal

The protocol's goal is twofold: 1) to significantly reward curators whose votes have measurably benefitted the overall network, and 2) to generate a shared goal policy among curators.

Protocol Assumption

There are four types of players in this model: consumers (people who consult the list to make decisions), listees (the objects or people currently on the list), candidates (objects or people applying to join the list) and token holders (the people curating the list).

In this model we assume that the list/registry is updated every X blocks (e.g. every 100 blocks). If the list is updated at timestamp T, the next update will come at timestamp T+X. We also assume that application periods, challenge periods and other mechanisms relying on voting are synchronised with this update cycle (e.g. if a candidate applies to the registry at block T+1, voters have 99 blocks to decide whether or not to accept the candidate). From T to T+X the public state of the list is frozen.

This creates a direct constraint: the model cannot work with a time-sensitive dynamic list (i.e. a list that updates itself every minute). To avoid a situation where candidates apply to the list just before the end of the cycle, we recommend introducing a minimum review period R (e.g. 10 blocks) that requires the application and review process to remain open for at least R blocks. That way, if someone applies during the T+X-R window, their application is automatically deferred to the next review period (ending at T+2X). S is the KPI-based state of the list before the update at T+X, and S’ is the state of the list at the end of the cycle.
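As a concrete sketch of this scheduling rule (the values of X and R and the function name are illustrative, not part of any specification):

```python
X = 100  # update cycle length, in blocks
R = 10   # minimum review period, in blocks

def next_review_update(application_block: int, last_update_block: int) -> int:
    """Block at which an application will be reviewed.

    Applications landing in the last R blocks of a cycle (the T+X-R
    window) are deferred to the following update, so every application
    stays open for review for at least R blocks.
    """
    elapsed = application_block - last_update_block
    if elapsed <= X - R:
        return last_update_block + X      # decided at the next update
    return last_update_block + 2 * X      # deferred one extra cycle
```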

The protocol can only work if consumers interact with the listees on-chain, so that there is proof of the activity resulting from the listing. For instance, listees could be the public ETH addresses of data providers or decentralised exchanges where payment settlement happens on-chain. A list of universities would not fit this model, simply because their cash flows are siloed in a private set-up and cannot be tracked by the network. Consumers and listees are free to use whichever blockchain best suits their needs, as long as it is public and verification is automatically computable. The last assumption is that there must be a finite number of objects on the list. An infinite list would defeat the purpose of a KPI-driven TCR.

Variables

This protocol is flexible in many respects, and the following variables should be empirically tested before deciding how to set them most effectively:

S’, the list’s new state

O, the objectivity rate

R, the minimum review period

X, the update cycle

NEG(T), the Network Expected Growth as a function of time
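For concreteness, the tunable variables can be gathered into a single configuration object. A minimal sketch, with names and default values that are placeholders rather than part of the proposal:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtocolConfig:
    objectivity_rate: float    # O: share of the slashed minority locked in the KPI pool
    min_review_blocks: int     # R: minimum review period, in blocks
    update_cycle_blocks: int   # X: blocks between two list updates

# S' is measured on-chain at the end of each cycle, and NEG(T) is a
# schedule chosen by the network, so neither is a fixed constant here.
cfg = ProtocolConfig(objectivity_rate=0.5, min_review_blocks=10, update_cycle_blocks=100)
```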

Protocol Setup

Consumers refer to the list for making decisions, token holders vote to determine what’s on the list and listees stake their tokens to be on the list. In our example, the list consists of smart contract public addresses for a given service. Consumers send money to these addresses for accessing the service.

Key Performance Indicators (KPIs)

KPIs are used in every organisation for a reason. A good working definition is provided to us by kpi.org: “Key Performance Indicators (KPIs) are the critical (key) indicators of progress toward an intended result. KPIs provide a focus for strategic and operational improvement, create an analytical basis for decision making and help focus attention on what matters most.”

As Peter Drucker said, “What gets measured gets done.”

This quote from Peter Drucker is an interesting one because, even though self-governing networks aren't comparable to standard companies, they are built for a specific purpose, and this is where KPIs can help. When measurable, these indicators can be used to direct focus onto a specific path, and chances are high that most blockchain networks are after measurable results. Governance mechanisms can enable network participants to vote on which KPI(s) should be their primary focus and act upon this decision.

The following proposal makes it possible to create a dynamic environment where curators are rewarded based on predefined KPIs for the network.

3. KPI TCR in practice

Alice participates in a vote to decide whether the network should accept or reject Bob's application to the list. Like her, 60% of the voters believe Bob should be added to the list. Here the list is an unordered TCR. The minority's stake (40% of the quorum) is not all directly distributed to the voting majority. Only half of it is automatically granted to the majority (the objectivity rate is set at 50%); the remaining tokens are locked in a smart contract. Alice is part of the majority during this vote, so she receives 50% of her potential profit in addition to what she originally staked. Bob is now added to the list at time T, and the list is frozen for X amount of time.
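The split in this worked example can be sketched as follows. This is a toy illustration, assuming token amounts are plain floats; with the 50% rate of the example, the two halves are equal:

```python
def split_slashed_stake(minority_stake: float, locked_fraction: float = 0.5):
    """Return (paid_to_majority, locked_in_pool) for a slashed minority stake."""
    locked = minority_stake * locked_fraction
    return minority_stake - locked, locked

# A 40-token minority stake at a 50% rate splits evenly:
paid, locked = split_slashed_stake(40.0)  # paid == 20.0, locked == 20.0
```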

During T+X, the economic activity of the list evolves, and this evolution can be measured with three different, complementary and non-exhaustive KPIs:

Transaction frequency (i.e. how many times money has been sent to the addresses on the list)

Transaction volume (how much value has been sent to these addresses)

New applicants (how many new candidates the list has attracted)

At the end of T+X, the protocol automatically computes a comparison between the list's previous state (S) and the list's new state (S’). No third-party reporter should be required; the comparison between S and S’ should be done by the Virtual Machine. The network expected growth as a function of time, NEG(T), is the network's KPI target. A simple model could be one where the system only cares about transaction volume. For instance, if the list before Bob joined generated $100,000 (S) in transaction volume (TV) and generates $120,000 (S’) after he joined, that's great! It means there has been 20% growth in volume during T+X. If NEG(T) was programmed to unlock the smart contract at +10% in TV, then the pool is released to the majority.
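This end-of-cycle check reduces to a growth comparison. A minimal sketch, with an illustrative function name:

```python
def pool_unlocked(tv_before: float, tv_after: float, neg_target: float) -> bool:
    """True if transaction-volume growth from S to S' meets the NEG(T) target."""
    growth = tv_after / tv_before - 1.0
    return growth >= neg_target

# The $100,000 -> $120,000 example against a +10% target:
pool_unlocked(100_000, 120_000, 0.10)  # 20% growth, pool released -> True
```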

Using the example above, there are now two possible outcomes (TV example). If the growth in TV between S and S’ meets or exceeds NEG(T), then Alice and the other majority voters receive the amount that was locked in the pool, i.e. the other half of the slashed minority, because there is proof that their vote has met the network's objectives. If the growth falls short of NEG(T), the pool remains locked in the smart contract and Alice doesn't receive the other half of the slashed minority. She needs to vote again if she wants a chance to earn what's in the smart contract.

As you may have noticed, the objectivity rate of 50% in this example is arbitrary. We call it the objectivity rate because, from an economic perspective, the amount that is automatically granted to the majority does not necessarily correspond to value creation for the network, so that part of the reward is based on a subjective concept. One could decide that no tokens should be granted to the majority without tangible economic value being created. I'd argue that this is a tricky situation, because the more that is locked in the pool, the less incentivised players are to vote in the first place. Although this protocol is best suited for economic value creation, players would have next to no incentive to vote if the objectivity rate were set close to 100%.

S’ can be determined by the community using any relevant on-chain information. If the second outcome occurs and growth falls short of NEG(T), new participants are incentivised to join because the potential reward for voting has increased. If, after the new voting round, S’ meets the network's KPI objective, the reward will consist of the newly slashed minority plus the amount locked in the smart contract, distributed proportionally to each voter's stake. One might wonder what would happen if no new voting round ever led S’ to meet or exceed NEG(T).

The answer is simple: people would have the Fear Of Missing Out. If the network's KPIs don't grow, the amount locked in the smart contract will. This means that if there have been 500 voting rounds and none has managed to meet the KPI objectives, half of the stakes slashed during these 500 rounds is still locked in a smart contract. This not only incentivises users to vote, but also encourages them to do everything they can to make the network meet its KPI objectives.
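The pool's growth across unsuccessful rounds is simple arithmetic. A sketch of the 500-round example, assuming (purely for illustration) that each round slashes a 10-token minority and locks half of it:

```python
def locked_pool_total(slashed_per_round, locked_fraction=0.5):
    """Total tokens locked after a series of rounds that all missed NEG(T)."""
    return sum(s * locked_fraction for s in slashed_per_round)

locked_pool_total([10.0] * 500)  # 500 failed rounds -> 2500.0 tokens locked
```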

4. Network Expected Growth

Graph (1) helps to understand how NEG(T) can change over time: let's imagine that you are launching a data marketplace where the listees are public ETH addresses of data providers. In the very early days of your network, you expect many people to join because your service is new and there's a lot of room to grow. Staking is cheap for applicants because there isn't much competition, and you want to incentivise early participation.

Consequently, you set NEG(T) at a high level, e.g. 10% for the first timestamp. This means that for the pool to be unlocked, transaction volume, frequency, or registry applications must increase by 10% during T+X. As time goes on, NEG(T) slightly decreases such that it is more in line with the state of the market. For instance, the red point on the graph might represent a time when NEG(T) is set at 2% for T+X.

As the market approaches maturity, NEG(T) is set closer and closer to zero but never actually reaches it. NEG(T) should never be equal to zero because you always want players to be incentivised by the pool, even under equilibrium conditions.
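One possible NEG(T) schedule with the shape described above starts high and decays toward a strictly positive floor that it never reaches. The initial level, floor and decay rate below are illustrative assumptions, not values from the proposal:

```python
import math

def neg(t: float, initial: float = 0.10, floor: float = 0.005, decay: float = 0.05) -> float:
    """Expected-growth target at time t: decays from `initial` toward `floor`."""
    return floor + (initial - floor) * math.exp(-decay * t)
```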

5. Challenges

The network might face several challenges at this stage. The first is that if the pool is so big that, say, one million people participate in the voting process, then the potential reward decreases proportionally to the number of voters. If one million people take part in the voting process and the pool is unlocked, the potential gains per voter are small, so the network faces an incentive problem. One way to solve this is to create a “lottery” function that randomly selects one or a finite number of public addresses (from among the majority in the most recent voting process) in case the S’ condition is met. That way, people are incentivised to keep playing because, irrationally, they hope to win the lottery and their potential reward is much higher than their cost. I'll argue that if one wants to add a lottery function to the protocol, then one must lower its objectivity rate.
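A sketch of the lottery variant: pick a small number of winners among the majority voters once the KPI condition is met. A production system would need an on-chain, manipulation-resistant randomness source; Python's random module here is purely illustrative.

```python
import random

def pick_lottery_winners(majority_addresses, n_winners, seed=None):
    """Randomly select up to n_winners distinct addresses from the majority."""
    rng = random.Random(seed)
    return rng.sample(majority_addresses, min(n_winners, len(majority_addresses)))
```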

When people participate in a real-world lottery, one reason they behave irrationally is that they have no work to do: they simply buy a ticket and get a chance to win. Here, participants must take part in the voting process (which requires work) and bear the risk of losing their stake before they can have a chance to win the pool. Another possible solution is to grant larger potential rewards to users who vote regularly and repeatedly. The more often a user bears the “risk of voting”, the higher their potential reward.

Another challenge can come from people waiting for the pool to grow. Just like in a real-world lottery, if no one finds the winning combination, the pool keeps growing. Naturally, as the lottery gets bigger, more people are incentivised to participate because the reward increases but the ticket price doesn't. In a tokenised ecosystem, we want to make sure that the ticket price increases, because otherwise early participants are discouraged from voting early in the process and encouraged to vote at a later stage.

To solve this problem, we can use a very useful tool called a bonding curve. Check out Simon de la Rouviere's post on the topic for more background. In our setting, we can use a bonding curve to make it increasingly expensive to participate in the voting process as the pool grows. At the end of each timestamp, for example, the minimum staking amount for voters increases if the pool isn't unlocked. But this is not the only place a bonding curve can be useful in this system.
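A minimal sketch of this "ticket price" curve: the minimum voting stake grows with the size of the locked pool, so voting stays costly as the jackpot grows. A linear curve is the simplest choice, and base_stake and slope are illustrative parameters:

```python
def min_voting_stake(pool_size: float, base_stake: float = 1.0, slope: float = 0.01) -> float:
    """Minimum stake required to vote, increasing with the locked pool size."""
    return base_stake + slope * pool_size
```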

One additional assumption we have to make is that in this model, which relies on on-chain activity, some players are going to wait until the end of each timestamp to vote for a particular outcome. Indeed, as time goes on, they can collect freely available information on the status of a given listee. Just like in prediction markets (e.g. Augur), market participants collect information that helps them determine the likelihood of an event happening.

They can consequently find out fairly easily whether a listee should be removed from the list as the end of the timestamp approaches. To solve this problem, we can also use a bonding curve: we can make it more and more expensive to take part in the voting process within each timestamp, such that it is much cheaper to vote right after the list has been updated than at the end of T+X.
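A sketch of this intra-cycle curve: the cost of casting a vote increases with the number of blocks elapsed since the last list update, so well-informed late votes cost more than early ones. Both parameters are illustrative assumptions:

```python
def vote_cost(blocks_since_update: int, base_cost: float = 1.0, slope: float = 0.05) -> float:
    """Cost of voting, rising linearly within the current update cycle."""
    return base_cost * (1.0 + slope * blocks_since_update)
```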

A bonding curve can also be implemented to determine the staking amount required for applicants to join the registry. That way, the cost of having a spot on the list is increasingly high, and applicants have an incentive to join the registry at an early stage. On the other hand, one can start with a very high objectivity rate that slowly decreases over time: for instance, 60% of the tokens from the slashed minority are sent to the pool at first, gradually going down to 10%.
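The decreasing schedule suggested above can be sketched as a simple linear taper: the share of the slashed minority sent to the pool falls from 60% to 10% over a fixed number of cycles (all values here are illustrative):

```python
def pool_share(cycle: int, start: float = 0.60, end: float = 0.10, n_cycles: int = 100) -> float:
    """Fraction of the slashed minority sent to the pool at a given cycle."""
    if cycle >= n_cycles:
        return end
    return start + (end - start) * cycle / n_cycles
```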

Conclusion

Alongside the other regular staking mechanisms, the pool greatly reduces velocity, which is a key determinant of a token's appreciation. A KPI-driven TCR also “gamifies” normal TCRs such that stakeholders are more likely to be entertained, and entertainment is something most crypto-economic systems currently lack.

Networks can now coordinate efforts around what they think is most important and measurable. At any point in time, new voters can come, cast their vote and hope that their decisions will meet the network’s KPIs. It’s also a fairer system: automatically rewarding the majority regardless of their vote’s impact is a problem and KPI driven TCRs might be a solution.

One problem that remains with this model, and with all TCRs, is the lack of automation in the curation process. The idea of collectively curating a list is great, but it relies heavily on human participation to work at all. Human-performed work creates friction. Bitcoin and Ethereum consensus mechanisms have only functioned so well because, for most of the work that needs to be done, they don't require humans and rely on automated, machine-performed processes.

This model requires further development and is of course not bulletproof so it’s great to have as many inputs as possible to improve it. Hence, I warmly welcome all comments and suggestions from the community.