Interest is growing in systems that coordinate behaviour and build trust by setting incentives for their users.

How do you design such a system? How do you reward and penalise users such that their dominant strategy is to be honest, reliable, unbiased and hard-working? It’s all about the incentives.

We might also want to decentralise our system such that nobody owns it, or rather, everyone owns it. Enter blockchain and smart contracts. News curation is just one application among many: DAOs, insurance, prediction markets, lending, storage, compute power and many more…

Getting the incentives right is all about mechanism design. Mechanism design is hard.

If there is no free lunch in machine learning, in mechanism design there is nothing to eat at all.

At Incentivai, we build a tool for testing the incentive structure of your system. We simulate your environment and observe the behaviour and failure modes identified by ML agents. One way to look at it is that the agents approach your smart contract system the way AlphaZero approaches chess.

See our case study, the first two blog posts and the concept paper to learn more.

Why Machine Learning?

The use of Machine Learning agents in simulations is critical for several reasons:

agent behaviour is real-world-like

agents are capable of identifying new failure modes

agent behaviour quantifies the importance of failure modes

Will they offer bribes? Will they accept bribes?

The recently published analysis looked at the Nexus Mutual system (a decentralised alternative to insurance). One of the key failure modes is the possibility of submitting false insurance claims and offering bribes to users who vote to accept them.

While it is obvious that such an attack exists in theory, it is crucial to simulate it: to check under what circumstances it becomes prevalent and how system parameters can be tuned to mitigate it.

For the attack to be a real threat, there need to be both users who find it beneficial to offer a bribe and those who are willing to accept it. During simulations, Machine Learning agents make decisions that are most likely to be beneficial for them.
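The two-sided condition above can be sketched as a toy expected-utility check. This is an illustrative model, not Incentivai's actual simulation or Nexus Mutual's mechanism; all function names and parameters (stake, slashing probability, claim value) are hypothetical:

```python
# Toy model of the bribery attack: it is only viable when BOTH the
# attacker profits from bribing and a rational voter profits from
# accepting. All parameters below are hypothetical illustrations.

def attacker_profit(claim_value, bribe, num_bribed_voters, success_prob):
    """Expected profit from a false claim after paying bribes."""
    return success_prob * claim_value - bribe * num_bribed_voters

def voter_accepts(bribe, stake, slash_prob):
    """A rational voter accepts if the bribe exceeds the expected
    loss of stake from voting dishonestly."""
    return bribe > slash_prob * stake

def attack_viable(claim_value, bribe, num_bribed_voters,
                  success_prob, stake, slash_prob):
    """The attack is a real threat only if both sides benefit."""
    return (attacker_profit(claim_value, bribe,
                            num_bribed_voters, success_prob) > 0
            and voter_accepts(bribe, stake, slash_prob))

# Raising the slashing risk (a tunable system parameter) can kill
# the attack even though the attacker's side still profits:
print(attack_viable(100.0, 5.0, 3, 0.8, 20.0, 0.2))  # → True
print(attack_viable(100.0, 5.0, 3, 0.8, 20.0, 0.9))  # → False
```

In the simulations, ML agents effectively learn decisions like these from experience rather than from a hand-written formula, which is what lets them surface the circumstances under which the attack pays off.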