Editor’s Note: These notes, based on talks at the Ethereum Foundation’s third annual developer’s conference earlier this month, were shared internally over email, as part of our ongoing sharing of ideas and learning inside a16z. They’ve now been reposted here, unedited, as a resource for those interested in ethereum technology, research, development, and more. For more about the Ethereum Foundation and resources, visit their site. For our own podcasts and writings, go here.

Ethereum in 25 Minutes (Vitalik Buterin)

Why ethereum?

Seeming public consensus since 2013 that blockchains are useful for… stuff. Not just money! Asset issuance, etc

Problem: most existing blockchain protocols were designed like a calculator — they can do one thing well. People then started creating protocols that worked like a Swiss army knife: come up with 25 different blockchain applications and protocols — but that breaks down when the 26th application comes around. Why not make a protocol that works like a smartphone? Instead of supporting a small suite of applications, support a programming language which gives you the ability to create applications. Anyone can write code, package it, and you have an app; anyone can download the app, use it, and run it



The concept — Ethereum is a blockchain with a few additions. There are two types of accounts:

User accounts (controlled by private keys). If transaction included in a block, it gets executed

Contracts (controlled by code): If thing A happens, send coins to person X. If you send coins into the account, the code is the only thing that can move coins around. Digital assets controlled by computer program. Can be used to represent arbitrarily complex business logic on the blockchain (e.g. on blockchain voting scheme, ENS (name service), issuer backed assets, etc)

Anyone can create an application with any rules by defining it as a contract

State vs history. State == “current” information: accounts mapped to balances (how much Ether), nonce (counter for replay protection), contract code, and contract storage (a mini database that any contract on the blockchain can use). History == things that happened: transactions, receipts. Currently, all “full” nodes store state; some store history and some do not

Code execution: Every transaction has a TO address it sends to (unless it’s creating a contract). The TO address’s code runs. Code can: send ETH to other contracts, read/write storage, call other contracts

Gas — Halting problem: cannot tell whether or not a program will run infinitely. Solution: charge fee per computational step (“gas”). Special gas fees applied to operations that take up storage. Not a currency (can’t transfer or hold), but a unit of measurement
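The gas mechanism above can be sketched as a metered execution loop; the instruction names and costs below are illustrative placeholders, not the real EVM fee schedule.

```python
# Sketch of gas metering: each operation deducts a fee, and execution
# halts deterministically when gas runs out — sidestepping the halting
# problem. Opcode names and costs are illustrative, not the EVM's.
GAS_COST = {"ADD": 3, "JUMP": 8, "SSTORE": 20000}  # storage ops cost much more

class OutOfGas(Exception):
    pass

def run(program, gas_limit):
    gas = gas_limit
    for op in program:            # even an infinite loop dies when gas hits 0
        cost = GAS_COST[op]
        if gas < cost:
            raise OutOfGas(f"needed {cost}, had {gas}")
        gas -= cost
    return gas_limit - gas        # total gas consumed

assert run(["ADD", "ADD", "JUMP"], 100) == 14
```

Note that gas is only a unit of measurement here — the caller pays for it in ether via the transaction’s gasprice.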

Gas limit — Counterpart to the block size limit in bitcoin, set by miners voting on it. Currently at 6.7m

Transactions contain: nonce (anti-replay — prevents a “send 10 ETH to Bob” transaction from being included in the blockchain 100 times), gasprice (amount of ether per unit gas), startgas (max gas consumable), to (destination address), value (amount of ETH to send), data (readable by contract code), v, r, s (ECDSA signature values)



Logs: Append-only, not readable by contracts. A Bloom filter protocol allows easier searching by topic. Stored in a Merkle tree for efficient light-client access to event records (domain name changed, etc)

EVM: Stack, Memory (temporary array of data VM can access), Storage (contract’s database), Environment variables (block number, timestamp), Logs, Sub-calling

High level languages (most of the time won’t be writing in EVM code): Viper, Solidity, LLL, Bamboo

The ABI

Function calls are compiled into transaction data

First 4 bytes: function ID

Next 64 bytes: first argument (32 bytes), second argument (32 bytes)

Create transaction, broadcast to network.
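The call-data layout above can be sketched in a few lines. Real selectors are the first 4 bytes of the keccak-256 hash of the function signature; since keccak-256 is not in the Python standard library, the well-known ERC20 `transfer(address,uint256)` selector `0xa9059cbb` is used as a fixed constant, and only integer arguments are handled.

```python
# Sketch of ABI call data: 4-byte function selector followed by
# arguments, each left-padded to 32 bytes. The selector is normally
# derived via keccak-256; here a known constant is used instead.
def encode_call(selector: bytes, *args: int) -> bytes:
    assert len(selector) == 4
    data = selector
    for a in args:
        data += a.to_bytes(32, "big")   # each argument padded to 32 bytes
    return data

TRANSFER = bytes.fromhex("a9059cbb")    # transfer(address,uint256)
calldata = encode_call(TRANSFER, 0xBEEF, 10)
assert len(calldata) == 4 + 32 + 32     # selector + two 32-byte arguments
```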

Merkle trees are singlehandedly responsible for making light clients possible. Allows for efficiently verifiable proofs that a transaction was included in a block (or verify that a piece of data included in a larger piece of data). Branch of hashes that allow you to check membership. State tree — Merkle tree of entire Ethereum state — contract code, storage, transactions, etc. Root hash goes into the block header — called the state root
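A minimal sketch of Merkle membership proofs, using sha256 as a stand-in for Ethereum’s keccak-256 (which is not in the Python standard library):

```python
import hashlib

def h(data: bytes) -> bytes:
    # sha256 stands in for Ethereum's keccak-256 here
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])     # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Branch of sibling hashes proving membership of leaves[index]
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], index % 2 == 1))  # (hash, node_is_right)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)                 # this is what goes in the block header
assert verify(b"tx3", merkle_proof(txs, 2), root)
```

A light client holding only the root can check any single transaction with a proof of O(log n) hashes instead of downloading the whole block.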

New in Byzantium (hard fork live Oct 16) — Privacy features: Ring signatures, zk-snarks, big number exponentiation (think: RSA)

Future directions

Casper

Sharding

EVM and other protocol upgrades

Introduction to Casper Implementation (Chang-Wu Chen)

Ethereum’s next-generation consensus: a deposit-based proof of stake protocol. Put ETH into a contract to become a validator.

Byzantine fault tolerant (BFT) style Proof of Stake. If more than 2/3 validators follow protocol, then no conflicting blocks will be finalized.

Two versions of Casper in internal research team:

Casper the Friendly Finality Gadget (Casper-FFG), designed by Vitalik

Casper the Friendly Ghost: Correct by Construction (Casper-CBC), designed by Vlad

How does Casper work?

Advantages: Energy efficiency (don’t need hardware for mining). Reduces the risk of centralization: you can only get 1 dollar’s worth of benefit per dollar, vs PoW, where cost per unit drops when you have high computational power. Economic finality (security)

BFT-based (Byzantine Fault Tolerance). If more than 2/3 validators follow protocol, then no conflicting blocks will be finalized. It also means if the majority is less than 2/3, the system will get stuck if no additional action is taken

Challenges: Goal: an ever-growing canonical chain. In PoW, a single miner can make progress over time. In PoS, system can get stuck if validators misbehave. Keep in mind that validators could malfunction or misbehave: Accountable safety, Plausible liveness

Any ETH holder can become a validator by simply depositing a fixed amount of ETH into the Casper smart contract. It is the validator’s job to reach consensus via votes. A validator’s vote weight is proportional to its share of the total deposit size

Reward: if the checkpoint is getting finalized, then the validator gets the reward

Hybrid Casper — layering PoS on top of PoW: checkpoints come from PoW. Every 100 blocks (an epoch) there is a checkpoint that validators vote for. Incentivizes miners to migrate to PoS

Vote message — A vote has four parameters: target epoch number, target checkpoint hash, source epoch number, source checkpoint hash

Supermajority link — An ordered pair of checkpoints with target and source (a, b), such that at least 2/3 of validators (by deposit) have published votes with source a and target b

Justified checkpoint — The genesis is always justified and finalized. If a chain has accepted votes from 2/3rds of validators for some checkpoint C that all use as a source the same ancestor C’, and C’ is justified, then C is justified

Finalized checkpoint — A checkpoint C is finalized if it is justified and has a direct child C’ that is justified

Accountable safety — At least 1/3 of total deposits will be slashed if two conflicting checkpoints are finalized

Avoid double vote — Don’t contradict yourself within the same epoch. Avoid surround vote — make sure targets in the DAGs are consistent with your previous vote
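The two slashing conditions above can be sketched as predicates over pairs of votes; this simplification tracks only epoch numbers and ignores checkpoint hashes.

```python
# Sketch of Casper-FFG slashing conditions. A vote names a source and a
# target checkpoint; here each is reduced to its epoch number.
def is_double_vote(v1, v2):
    # Two distinct votes with the same target epoch contradict each other
    return v1 != v2 and v1["target_epoch"] == v2["target_epoch"]

def is_surround_vote(v1, v2):
    # v1 surrounds v2 when source1 < source2 < target2 < target1
    return (v1["source_epoch"] < v2["source_epoch"]
            and v2["target_epoch"] < v1["target_epoch"])

a = {"source_epoch": 0, "target_epoch": 5}
b = {"source_epoch": 1, "target_epoch": 4}
c = {"source_epoch": 3, "target_epoch": 5}

assert is_surround_vote(a, b)   # a surrounds b -> slashable
assert is_double_vote(a, c)     # same target epoch, different vote -> slashable
assert not is_surround_vote(b, a)
```

A validator caught publishing a slashable pair loses its deposit, which is what makes two conflicting finalized checkpoints cost at least 1/3 of total deposits.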

Plausible liveness — Supermajority links can always be added to produce new finalized checkpoints, provided there exist children extending the finalized chain

Status — Casper contract: manages dynamic validator sets, vote records in each epoch, slash conditions, rewards and penalties. Proof of concept: Casper-FFG contract, testing language, run on testnet to evaluate correctness; no particular release date



Presenting Parity: A Light Client for a Heavy Chain (Robert Habermeier)

Classifications of Ethereum clients

Full nodes verify all blocks and transactions

Light clients verify block headers, but don’t verify transactions

Thin client — doesn’t check the consensus process, but has someone else do the checking and then fetches data about the chain

Security guarantees of a light client

Checks validity of headers, but not validity of state transitions (transaction execution, block reward, etc)

What kinds of attacks does this leave us open to? Targeted attacks toward individuals (not able to do this on the whole network), by sending them fake headers

When doing large transactions you would want the security of a full node

1. Backreferencing — We want non-homogeneous requests (e.g. account balance at block A, data at block A, etc) to be responded to in a single packet; this minimizes round trips

2. State proofs

Transaction execution (for gas estimation or eth_call) requires an unpredictable subset of the state trie

Naive requesting from the network leads to more round trips and duplicate info

Request an entire transaction proof from a peer

3. Request credits

Serving nodes give peers “credits” that can be used to make requests

Credits recharge slowly over time

Different requests have different costs: determined by server

Servers gather data on request serving times and adapt costs
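The credit scheme above is essentially a token bucket per peer; the capacity, recharge rate, and costs below are illustrative numbers, not Parity’s actual parameters.

```python
# Sketch of "request credits" flow control: each peer holds a credit
# balance that recharges at a fixed rate, and each request type has a
# server-chosen cost. All numbers are illustrative.
class CreditBucket:
    def __init__(self, capacity=1000, recharge_per_sec=50):
        self.capacity = capacity
        self.recharge = recharge_per_sec
        self.credits = capacity
        self.last = 0.0

    def _tick(self, now):
        self.credits = min(self.capacity,
                           self.credits + (now - self.last) * self.recharge)
        self.last = now

    def try_request(self, cost, now):
        self._tick(now)
        if self.credits >= cost:
            self.credits -= cost
            return True
        return False          # client must wait for credits to recharge

peer = CreditBucket(capacity=100, recharge_per_sec=10)
assert peer.try_request(cost=80, now=0.0)       # 100 -> 20
assert not peer.try_request(cost=50, now=1.0)   # 20 + 10 = 30 < 50
assert peer.try_request(cost=50, now=3.0)       # 30 + 20 = 50, allowed
```

Because the server sets costs, it can raise the cost of requests that its own measurements show are slow to serve.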

4. Pub/sub

Querying for data like events or contract state changes can be wasteful

Full node peers know better than we do when something has changed

Subscribing to changes lets us keep track better

Caveat: catching out peers who don’t publish when they’re supposed to

Canonical Hash Tries — This is how we avoid storing most headers on the light client

Block Pruning on Full Nodes

We already use DHT-style routing in the network protocols

K-buckets are sybil resistant

Make Node IDs assign responsibilities for batches of N historical blocks

Challenges Ahead for Smart Contracts (Emin Gun Sirer)

Ethereum today — “blockchain” is a word that needs no article, definite or indefinite

Cryptocurrencies are a new asset class, worth $100b

Ethereum has emerged as a platform for new projects — $1.4b traditional VC money invested in startups as of Jan 2017. More than $2b projected in ICOs this year

Strong foundation: EVM, Solidity, Dapps. No downtime since launch

Multiple implementations of EVM: Parity, Geth

Supportive, constructive, science-driven community (worth its weight in any currency, actual value proposition)

Significance

We are witnessing the emergence of a new class of systems: A new take on eliminating trust, where trustworthiness comes through auditability and transparency

As well as the emergence of a new class of financial instruments, instruments that are executable

And a new form of social organization: Crypto-backed corporations, trusts, partnerships

Challenges ahead for Ethereum

Scale — How do we pack more transactions/s without impacting decentralization?

Correctness — how do we make sure that contracts do what we think they do?

Privacy — how do we couple confidential, private data with smart contracts?

Scale

Blockchain operation — create interlinked blocks, append to ledger

Makes it infeasible to modify the past, as long as attacker does not control majority hash power

The Block Size dilemma — The Bitcoin community is having a civil war over the maximum size of a block. Bitcoin blocks cannot exceed 1MB; Ethereum blocks are issued every 14s. Throughput: 7–25 tx/s (the Bay Bridge probably processes ~7 tx/s in traffic). Latency: 14s. Simply increasing the block size is not a long-term option: very large blocks can lead to centralization.

Science of consensus protocols (vs. “design by gut”) — New metrics for probabilistic consensus protocols: mining power utilization; fairness (even to small miners); consensus delay (want it to be low); time to win and time to prune (latency of the consensus process from the point of view of a miner).

Apparatus for measurement and evaluation — Miniature world, a 1:1 replica (of the crypto world, in the basement of the Cornell CS department). Falcon network, a relay backbone (collects data from inside the Bitcoin network).



Ethereum findings — The Ethereum PoW system incurs many uncle blocks (would be nice to reduce). The Ethereum network would benefit from a relay network. The Ethereum mining ecosystem is more centralized than Bitcoin’s. What is the state of the art in on- and off-chain scaling?

Bitcoin-NG — (best) protocol for on-chain scaling. Reassigns roles to blocks: a leader elected in a key block verifies transactions as they come in. Key blocks are small and rare; microblocks are small and frequent. The critical part is getting the incentive structure just right so the subsequent miner will extend the longest chain; a set of constraints guides us to precise parameters. Bitcoin-NG has better fairness than Bitcoin as block size increases. Can yield ~300 tx/s — not quite VISA/Google scale. ByzCoin generalizes the NG idea to a coalition of leaders



Scaling, Off-chain

Lightning and Raiden — Take load off the blockchain to two parties that communicate in private and resolve back to the blockchain. Fundamental problem: the participants have access to old settlement transactions; they can push an old state, rewinding the channel. Ecosystem: requires constant monitoring of the chain (so parties don’t try to settle early) — starts to look more like a bank. Security: vulnerable to transaction malleability. Performance: requires multiple round trips.

Teechan and Teechain — Inside every recent Intel chip is a TEE (trusted execution environment), something far better than a Trezor/Ledger; also inside ARM and AMD chips. All memory is encrypted; if the program is hacked, access to the decryption keys is lost. Remote attestation — prove to a remote host what code you are running

Teechan — a secure, hardware-based protocol. Freeze the on-chain state of two contracts; exchange keys (safe because execution is in a TEE and the keys cannot be extracted); use peer-to-peer state changes off chain; settle and update the on-chain state of the two contracts.

Teechain — generalizes Teechan to multi-hop, atomic transactions. Implemented on Bitcoin, signed with SGX keys. >100k tx/s per channel, <30ms latency UK to US. Work underway for Ethereum



Correctness

Verify smart contracts do what you think they should

Typically done with the aid of a specification

A rich field, many decades old

Software verification as we know it falls short (so far it focuses on safety and liveness); we also need game-theoretic verification

Safety: proving that a predicate is always true at every point of every execution path: “sum total of tokens < N”, “token balances are conserved”

Liveness: proving that eventually something good happens on every path “smart contract does not get stuck”

Game theoretic properties: “this contract is fair to all participants”, “this contract is truthful or incentive-compatible”, “late voters in a DAO are not disadvantaged”, “DAO++ is designed to be regret-free”, “the contract maximizes social utility/outcomes”. These typically need to be coupled with models of utility, and almost always need to be evaluated with respect to other execution paths
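A safety predicate like “token balances are conserved” can be checked mechanically at every step of an execution; this toy example (names and amounts invented) shows the shape of such an invariant.

```python
# Toy safety check: the invariant "token balances are conserved" must
# hold at every state along every execution path.
def transfer(balances, frm, to, amount):
    if balances.get(frm, 0) < amount or amount < 0:
        return balances                      # reject invalid transfers
    nxt = dict(balances)
    nxt[frm] -= amount
    nxt[to] = nxt.get(to, 0) + amount
    return nxt

def conserved(balances, total):
    return sum(balances.values()) == total   # the safety predicate

state = {"alice": 60, "bob": 40}
total = sum(state.values())
for step in [("alice", "bob", 25), ("bob", "carol", 10), ("carol", "alice", 999)]:
    state = transfer(state, *step)
    assert conserved(state, total)           # invariant holds on every step
```

Game-theoretic properties are harder precisely because they cannot be phrased as a predicate over one execution path — they compare a path against the paths other strategies would have produced.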

We still need basic program verification. We still need oracles (virtual notary, Town Crier, Oraclize, Augur, …). We will need escape hatches — crowd-sourced ones are possible for use in decentralized settings. But we need to start tackling problems on the horizon — btw, the DAO suffered from multiple such problems that required human examination to find



Privacy

Private data and public blockchains do not mix — all contract state is public. Contracts cannot hold or exercise secret keys (e.g. API keys, interact with AWS, exchanges)

There is private data we’d love to store

CreDB — a new database, implemented on secure hardware. Not on-chain, but on the network; NoSQL-like. Eidetic — remembers everything that ever happened. Blockchain-driven — gets its instructions from the blockchain in encrypted form. Can store data, can query, can store methods.



Conclusions

Planetary scale decentralized computer that executes in tandem with integrity

There are exciting challenges ahead, we know at least the next few steps

With a science driven constructive community we are poised to tackle them and more

Intro to Cryptoeconomics (Karl Floersch)

When you have programmable incentives, you can program human behaviour

Cryptoeconomics

The application of economics and cryptography to achieve information security goals

Designing Incentives. “You cannot reason about the security of blockchain protocols without reasoning about economics” — Vitalik

Cryptography — hash functions, digital signatures, Merkle DAGs, zkSNARKS

Economics — tokens, voting rights, bribing attacker, auctions

Good outcomes — trusted execution, open access, fast finality, decentralized control, inexpensive

Bad outcomes — safety failure, censorship, etc

Casper is just one of many cryptoeconomic mechanisms — e.g. an automated market maker contract: a smart contract trading ETH for UNI. Design a mechanism, analyze incentives, make a website & logo, publish the analysis & website, observe behavior, record findings.
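An automated market maker of this kind can be captured in a few lines; the constant-product rule (reserve_eth × reserve_tok = k) and the 0.3% fee below are illustrative assumptions, not necessarily the exact contract from the talk.

```python
# Constant-product automated market maker sketch: the contract holds two
# reserves and prices trades so reserve_eth * reserve_tok stays constant,
# with a flat fee accruing to the exchange creator. Rule and fee are
# illustrative assumptions.
def buy_tokens(reserve_eth, reserve_tok, eth_in, fee=0.003):
    eth_effective = eth_in * (1 - fee)
    k = reserve_eth * reserve_tok
    new_eth = reserve_eth + eth_effective
    tok_out = reserve_tok - k / new_eth
    return tok_out, (reserve_eth + eth_in, reserve_tok - tok_out)

out, (re, rt) = buy_tokens(100.0, 1000.0, 10.0)
assert 0 < out < 1000            # larger trades get worse prices (slippage)
assert re * rt > 100.0 * 1000.0  # fees make the invariant grow over time
```

The growing invariant is the creator’s passive fee income; the price gap against other venues is what the arbitrageurs in the list below trade away.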

Participation incentive

Exchange creator: + passive fee income (race to the bottom? can model it out and play around with it; what tokens is it useful for? [don’t want to create a market maker for a token that will plummet]). − capital lockup. ? maintaining a 50/50 hedge

Token buyers: + minimal transactions, + trustless exchange, − exchange fee

Token arbitrageurs: + low-risk profit (how closely will the price track other exchanges?), + trustless exchange, − exchange fee

If the assumptions hold true, the exchange will produce the following outcomes within these bounds… good outcomes? Good — easier exchange, trustless trades, liquidity for small trades. Bad — no KYC, bots get richer, miner front-running

Going live — code, open source, publish. Anyone can deploy a contract which can be trusted with millions of dollars. Can recreate financial systems and anyone can do it

Learning from mistakes — miner front running

The assumptions were incorrect — we forgot about miner front-runners: “slip in my transaction before the trade closes, sell for a quick profit.” + super-low-risk profit, − dev costs

Disincentivize — flat rate per block? trusted tx ordering oracle? just accept it as a minor tax?

Token arbitrageurs are initially human, but are eventually replaced with bots

Programming incentives

Superpowers for makers

Incentives drive behavior

Program incentives which promote cooperation, equitability, not incentives which promote domination

Solving big problems — Energy, financial tools (decentralized exchanges), reward content creators with more interesting incentives

Casper the Friendly Ghost: A correct-by-construction blockchain (Vlad Zamfir)

Process for Casper the friendly finality gadget — taking existing consensus protocols and simplifying them, for consensus safety

Casper the friendly ghost — derived to satisfy

“Proof of work” blockchain consensus does not decide on one block at a time, more overhead

Casper: Each block finalized with async BFT consensus safety. And it only requires O(1) per node

Abstract consensus safety proof:

We need “protocol states”, “protocol state transitions”, and an “estimator” to define “estimate safety”. Protocol states are elements in a set, and protocol state transitions are directed paths through protocol states. An estimator maps protocol states to estimates — in binary consensus it maps to 0 or 1

An estimate is said to be safe if it is consistent with all future protocol states

If two estimates are safe at two states, and they have a future in common, then they are also safe at that future

Key intuition — we remove the states that have too many Byzantine faults, states have a common future because there are not more than N Byzantine faults

Binary Consensus

Protocol messages that are bits

Score of an estimate is the sum of the weights of validators whose latest message has that estimate; look at all recent validators

Detecting Byzantine faults — nodes attest to what they see from other nodes. Equivocation — find Byzantine faults via equivocations

Arrive at binary estimate safety
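The scoring rule above can be sketched directly; validator names and weights are invented for illustration.

```python
# Sketch of binary-consensus scoring: an estimate's score is the total
# weight of validators whose latest message carries that estimate; the
# estimator returns the higher-scoring bit.
def estimator(latest_messages, weights):
    scores = {0: 0.0, 1: 0.0}
    for validator, estimate in latest_messages.items():
        scores[estimate] += weights[validator]
    return max(scores, key=scores.get)

weights = {"a": 10, "b": 20, "c": 15}
latest = {"a": 1, "b": 0, "c": 1}
assert estimator(latest, weights) == 1   # weight 25 for bit 1 vs 20 for bit 0
```

Casper the Friendly Ghost replaces the bit with a block: the same scoring, applied down the block tree, yields GHOST (greedy heaviest observed sub-tree).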

Casper the friendly ghost

Blocks have another block in the estimate spot.

Score of a block is weight of validators whose latest messages have the block in their blockchains

Greedy heaviest observed sub-tree

Protocol states and state transitions are defined in the same way as before

Estimate safety — For all future protocol states, this block is in the blockchain we will choose in all future states

Nodes running either of these protocols will need to detect estimate safety in order to make decisions on safe estimates

Clique Oracle: Nodes in a clique can’t change each other’s estimates

Validator rotation

Define a block structure with weights mappings in the justification

We will score blocks according to the weights map in their parent/prev block

But none of the other definitions need to change, so we are done with adding validator rotation to consensus safety

Experimental observations: blockchain

Blocks can contribute to the finality of many blocks; blocks will agree and disagree with other blocks; this will determine how future safety turns out

Panel: Casper and Consensus (Vitalik Buterin, Vlad Zamfir, Peter Czaban, Emin Gun Sirer, Elaine Shi)

Casper principles: decentralization, Byzantine fault tolerance, economic security

Goal of Casper-FFG: as simple as possible, and as simple to graft onto PoW chains as possible, while being safe under asynchrony (BFT)

Proof of stake increases security because you get finality that’s safe in asynchrony (vs PoW). Attacking nodes attack at a higher cost than PoW

Vitalik: Thought experiment: an attacker spends $1b to buy ASICs and runs a 51% attack — they can spawn-camp the chain. Developers will change the PoW algorithm so the ASICs become useless. The attacker has lost $1b, but all the good miners have lost their useful hardware as well

Get another $1b and corner the market on GPUs. Launch another 51% attack — GPUs are general purpose, so developers have no countermove.

You have to make the cost of attack very high — survival through domination of hash power

Proof of stake attack: buy $100m of Ether. Break finality, censor transactions for a while — worst case: the community can coordinate a hard fork. This is legitimate, because the PoW crowd agreeing to change the algorithm is also a hard fork. The attacker loses money and the community is not affected; the attacker eventually runs out of money, and the price of Ether increases after multiple attacks.



PoW is at the mercy of hardware trends — as people come up with better ASIC tricks and heat/cooling, there is no knob to turn to adjust. PoS has the ability to put independent, individual knobs on individual participants in the system

Introducing the TrueBit Virtual Machine (Jason Teutsch)

Take C/C++ code, run verification code

Ethereum sacrificed computation speed for immutability + communication component

Scalability idea #1 — obliterate Ethereum’s gasLimit

gasLimit helps incentivize transaction verification

A world without gasLimit — a heavy transaction with a massive transaction fee

Verifier’s Dilemma — in the case of a heavy transaction, what should a rational miner do? Skipping verification affords an advantage in the mining race — therefore skip verification. But mining on the “wrong” chain means the other miners will ignore a found block — therefore do the verification. This dilemma prevents smart contracts from running for more than a fraction of a second



TrueBit gives Ethereum a computational boost — do heavy work off chain

Protocol goals:

Anyone can offer a reward for performing a computational task

Anyone can solve a task in exchange for a reward

The protocol guarantees correctness of solutions

Consists of an Ethereum smart contract and an off-chain computation framework

Unanimous consensus algorithm (unlike Ethereum, which is majority)

Anyone can challenge a computation by taking it to court — verification game

Judges enforce a binary search to pinpoint the step where the disagreement occurs

Scalability ideas #2 & #3

How can we ensure that verification actually occurs?

Idea 2: offer rewards for checking computations. Incentivizes participation, but doesn’t incentivize correctness

Idea 3: Offer rewards for finding bugs. Incentivizes correctness, but doesn’t incentivize participation (where there are no bugs)

Verifiers will stop paying attention if they don’t expect rewards

Scalability idea #4: forced errors

Offer a bug bounty and provide expectation that bugs will exist

TrueBit protocol occasionally pays “Solvers” for submitting incorrect solutions

Dispute resolution subroutine ensures correctness whenever “forced errors” are not in effect

Ideally, this subroutine is never used
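The forced-error incentive can be checked with back-of-the-envelope arithmetic; all numbers below are illustrative assumptions, not TrueBit’s actual parameters.

```python
# Incentive check for forced errors: a verifier who always checks earns
# the jackpot whenever a forced error appears, so the jackpot times the
# forced-error rate must beat the per-task cost of checking.
def expected_profit(forced_error_rate, jackpot, cost_per_check):
    return forced_error_rate * jackpot - cost_per_check

# With a 1-in-1000 forced-error rate, the jackpot must exceed 1000x the
# per-task verification cost for checking to pay off.
assert expected_profit(0.001, 2000.0, 1.0) > 0
assert expected_profit(0.001, 500.0, 1.0) < 0
```

This is why idea #4 combines ideas #2 and #3: the guaranteed stream of planted bugs keeps verifiers participating even when honest solvers make no mistakes.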

Three properties of TVM:

Tasks must compile and execute identically on all machines

A single computation step on the TVM runs comfortably within Ethereum’s gas limit

Space required to describe a TVM state change fits inside a single Ethereum transaction

A Turing machine would be ideal, but there’s no such compiler. Next best thing: compile to WebAssembly, with steps broken down into 16 sub-steps

Plasma (Joseph Poon)

At its core, Building a blockchain on top of a blockchain

Write a contract on Ethereum, taking a library and modifying it to suit your needs (e.g. social network, private blockchain that binds to public network — Ethereum becomes base chain for private chains, micropayments, decentralized exchange)

Initialize Plasma blockchain

Localized computation — only periodic commitments

Unique rules per Plasma blockchain

Decentralized applications at scale

Periodic commitments to the root chain. Not really trusting them — periodically commit small (hundreds of bytes) hashes. Not proof of existence, actually enforceable

Fraud proof enforcement. Submit proof on Ethereum — Merkleized attestation and proof. Can represent information very compactly

Core novelty in Plasma is around exits

Exiting Byzantine behavior — predesign contracts so all Plasma blockchains allow for orderly exits

Data unavailability. Can’t prove when someone is withholding data. Exit in the event blocks are withheld

When you discover blocks are being withheld (could be bad data or bad state transitions — you can’t prove which, because there is no data), you exit to another Plasma blockchain or the root blockchain as soon as possible

Allows computation with map reduce

Map — blockchains doing computation

Reduce computations back down — return values

Reach computation scale

Design goals

One blockchain can encompass all worldwide blockchains

Trust minimization

Payment and ledger scalability

Transaction Data Availability Problem

Transactions in a Plasma block — let’s say you wanted to create a simple payment in a Plasma block, and try to create a transaction similar to a standard payment on root-chain Ethereum. Potential problem: you cannot attest to this payment in the event of block withholding. Potential problem: if blocks are withheld, it is unclear whether an unconfirmed transaction has spent funds or not — it is difficult to prove double spends in a Plasma block and enforce them on Ethereum

Adding funds to Plasma block — same problem but when sending funds to a Plasma contract

Two-Phase Commit Mechanism — create transaction, sign and broadcast it

The Plasma block with the payment gets signed by validators — the Plasma block is confirmed and ultimately committed on Ethereum. At this point the transaction is not yet usable; Alice can still recover her funds from the old state. Alice commits to seeing the funds in a block — Alice signs a message confirming that she sees the block, and shares it with Bob or the Plasma chain. Bob receives the funds and the payment is treated as confirmed — Alice cannot take it back

Why do we do this? Alice needs to be certain her transaction occurred on the Plasma chain in order to do a withdrawal in the event of a chain halt or other Byzantine behavior. If money was sent to Bob, Bob needs the data to be available. This ensures that either Alice or Bob owns the funds. 3 parties: Alice, Bob, and the Plasma chain itself.

Let’s say Alice tries to maliciously withdraw. The Plasma chain has not confirmed the commitment yet. Alice sends a transaction on Ethereum to withdraw her 1 ETH from the Plasma chain contract. She must put up a bond and wait some time. During that time Bob notices that Alice is trying to take his money. Bob broadcasts a fraud proof. He can now take her bond and his money back
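The bonded exit-with-challenge flow described above can be sketched as a small state machine; the window length and bond amount are illustrative assumptions.

```python
# Sketch of a Plasma exit: a withdrawal is bonded and only pays out after
# a challenge window; a valid fraud proof during the window cancels the
# exit and awards the bond to the challenger. Numbers are illustrative.
class PlasmaExit:
    def __init__(self, owner, amount, start_block, window=100, bond=1.0):
        self.owner, self.amount, self.bond = owner, amount, bond
        self.ends_at = start_block + window
        self.challenged_by = None

    def challenge(self, challenger, block, proof_valid):
        if block < self.ends_at and proof_valid and not self.challenged_by:
            self.challenged_by = challenger        # challenger wins the bond

    def finalize(self, block):
        if self.challenged_by:
            return self.challenged_by, self.bond   # fraud: bond is slashed
        if block >= self.ends_at:
            return self.owner, self.amount         # honest exit pays out
        return None, 0.0                           # still inside the window

ex = PlasmaExit("alice", 1.0, start_block=0)
ex.challenge("bob", block=50, proof_valid=True)    # Bob proves the fraud
assert ex.finalize(block=200) == ("bob", 1.0)      # Bob takes Alice's bond
```

The confirmation signatures from the two-phase commit are exactly what Bob’s fraud proof is built from: they show Alice already committed to the later state she is trying to roll back.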

Attestation on information availability. Emerging theme when dealing with states when not everyone has information

Scaling the world computer. Root blockchain as the supreme court

ZoKrates – A Toolbox for zkSNARKs on Ethereum (Jacob Eberhardt)

Motivation: Off-chain transaction processing, result is written back to chain and verified to be correct on chain

Benefits of processing off chain

Scalability: If verification is cheaper than execution, throughput can be increased. The block gas limit does not apply to the off-chain computation (nor does the halting problem)

Confidentiality: Private information can be used on external node without revealing to public blockchain

Approaches: TrueBit, Non-interactive Zero-Knowledge Proofs (zkSNARKS)

zero-knowledge Succinct Non-interactive Arguments of Knowledge

Short and non-interactive proofs

Zero knowledge

Verification cost independent of computational complexity

Ethereum Byzantium zkSNARK precompiles — EC Add, Scalar multiplication, Pairing check

ZoKrates Vision

Provide usable abstraction and tooling for zkSNARKS

Support complete process: from program code to on-chain verification

Seamlessly integrate with Ethereum

ZoKrates — a toolbox for zkSNARKS

A high-level language

A compiler, which transforms programs to provable constraint systems

Tools — Setup phase, witness computation, proof generation, Verification Smart Contract generation

A Modest Proposal for Ethereum 2.0 (Vitalik Buterin)

Let’s look at Ethereum so far…

Successes — it works! Many applications, high adoption (>500k transactions/day == ~7 tx/s) — includes ERC20 tokens, dapps, smart contracts, and Byzantium (a whole bunch of privacy features). More than 20k nodes worldwide

What are the challenges? Scalability — trilemma claims blockchain systems can at most have two of the three properties:

Decentralization (defined as the system being able to run in a scenario where each participant only has access to O(c) resources, i.e. a regular laptop or small VPS)

Scalability (defined as being able to process O(n) > O(c) transactions)

Security (defined as being secure against attackers with up to O(n) resources)

Scalability

Every node processes every transaction

This means that the capacity of a blockchain is limited to the capacity of one node

Limited by disk reads

Likely solution: sharding

Split the blockchain state into N universes (“shards”)

Only allow asynchronous communication between shards

Each client only processes a small portion of transactions
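The split into shards can be sketched with a toy assignment rule; the modulo mapping and shard count below are illustrative assumptions, not the actual sharding design.

```python
# Minimal sketch of splitting state into N shards: each account maps to
# one shard, and a transaction touching accounts on different shards must
# go through asynchronous cross-shard communication. The modulo rule is
# an illustrative assumption.
NUM_SHARDS = 4

def shard_of(address: int) -> int:
    return address % NUM_SHARDS

def same_shard(tx_from: int, tx_to: int) -> bool:
    # False means the transfer needs an asynchronous cross-shard receipt
    return shard_of(tx_from) == shard_of(tx_to)

assert same_shard(4, 8)        # both on shard 0: normal synchronous call
assert not same_shard(4, 5)    # cross-shard: asynchronous only
```

Because each client validates only its own shard’s transactions, total capacity grows with the number of shards instead of being capped by one node.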

Governance / protocol evolution

High demand for protocol stability, hard forks making deep changes are hard. Long time to code, long time to test, high risk of consensus bugs

But to get to Ethereum 2.0, deep changes are exactly what we need, so what do we do?

Intuition: “One blockchain, two systems”

Add a validator manager contract. It runs a proof of stake system that maintains consensus for the 2-layer sharding system. It would not maintain a copy of all consensus rules; it enforces that data in the shards is available. At first it’s a low-risk thing to participate in: get a fairly significant portion of Ether stake participating, creating blocks in each shard periodically. Blocks in shards are called “collations”

2-layer structure: the top layer is the blockchain; each shard is its own universe, with the validator manager contract bridging them. It processes moving Ether from the main chain to a shard and from one shard to another

Scalability — main shard O(c) (one node) | new shards O(c^2)

Consensus main shard PoW –> Hybrid PoS –> PoS | new shards PoS through on-chain contract

Security Main shard Full | new shards 50% honest majority of on-chain PoS now, full security later

Governance main shard emphasize conservatism and strong immutability more than today | on-chain ETH voting governance through PoS, emphasize fast evolution now, conservatism later

Implementation roadmap/ Sharding roadmap

Implement as a proof of stake sidechain. Collation headers get verified and processed by the validator manager contract on the main chain. Block creation rights are assigned by simple proof of stake. Clients are randomly assigned. One-way ETH convertibility

Add two way convertibility

Move collation headers from contract position to uncle position

Tight coupling — reject blocks with invalid headers. Ideally would happen when things on sharding side are stabilized

Shards create a new address space and do not affect the existing address space. Incompatible

Changes to make:

Changing Merkle trees from hexary -> binary. 4x shorter Merkle proofs

Account/state tree redesigns

EVM upgrades: EVM 1.5, E-WASM

Parallelizability
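The “4x shorter Merkle proofs” figure from the hexary-to-binary change above is simple arithmetic: a hexary trie is 4x shallower, but each proof step must reveal 15 sibling hashes instead of 1. A quick sketch:

```python
def proof_hashes(n_leaves: int, branching: int) -> int:
    # A Merkle proof reveals (branching - 1) sibling hashes per level,
    # over ceil(log_branching(n_leaves)) levels (computed with exact ints).
    depth = 0
    capacity = 1
    while capacity < n_leaves:
        capacity *= branching
        depth += 1
    return depth * (branching - 1)

n = 2**30  # on the order of 10^9 accounts, as in the stateless-client estimate
hexary = proof_hashes(n, 16)  # 8 levels * 15 siblings = 120 hashes
binary = proof_hashes(n, 2)   # 30 levels * 1 sibling  = 30 hashes
print(hexary, binary, hexary / binary)  # 120 30 4.0
```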

Stateless clients: consensus nodes do not hold the entire state, only the state root. Transaction senders attach Merkle branches (“witnesses”) proving the state that they access; blocks get passed around with the witnesses. Substitutes a disk read with ~1kb of bandwidth (assuming 10^9 accounts, optimized Merkle branch). Makes it much easier to reshuffle validators when sharding. Allows us to care less about state size. Massively increases parallelizability. Allows us unlimited leeway along the traditional “big block” scaling route if we want it
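A minimal sketch of the witness idea, using sha256 as a stand-in for Ethereum's keccak256 and a plain binary Merkle tree rather than the actual state trie: the consensus node keeps only the root, and the sender's Merkle branch proves the state it accesses.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in for keccak256

def merkle_root(leaves):
    # Binary Merkle tree, bottom-up (assumes a power-of-two leaf count).
    level = [h(l) for l in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def branch(leaves, index):
    # Sibling hashes from leaf to root: the "witness" a sender attaches.
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    # All a stateless consensus node needs: the state root plus the witness.
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"acct0", b"acct1", b"acct2", b"acct3"]
root = merkle_root(leaves)     # the only state a consensus node must hold
witness = branch(leaves, 2)    # attached by the transaction sender
assert verify(root, b"acct2", 2, witness)
assert not verify(root, b"tampered", 2, witness)
```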



Split ongoing development into two layers: Layer 1: safe and conservative (getting Casper rolled out)

Layer 2: rapid development and experimentation. Get benefits of both at the same time in the short-medium term.

Using Ethereum for Secure Decentralized Optimization (Eric Munsing)

Solve hard math problems off chain — machine learning, market clearing, operation scheduling

Smart contracts coordinate consensus on global optimum

Guarantee security, feasibility, and optimality

Optimization problems — convex optimization, decentralized optimization, fully-decentralized optimization

Using smart contracts for security

Convex optimization: minimize/maximize subject to constraints

Break big tasks into small pieces: solve local problems on distributed hardware, bring local solutions into consensus

Create analytically solvable sub-problems, break into a set of smaller problems, respect privacy of local information

Weaknesses of aggregator-based consensus

Trust issues in market clearing, monopoly price distortion, single point of failure/attack

Alternative: fully decentralized optimization. Consensus with neighbors who share variables
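The consensus step can be illustrated with a toy gossip-averaging loop, a stand-in for the consensus update in decentralized optimization methods such as ADMM (not Munsing's actual algorithm): each node repeatedly averages with its ring neighbors, and all local values converge to the global mean without any central aggregator.

```python
# Four nodes on a ring, each starting from its own local solution.
values = [1.0, 5.0, 9.0, 13.0]
target = sum(values) / len(values)  # the global consensus value (7.0)

for _ in range(200):
    n = len(values)
    # Each node averages itself with its two ring neighbors; the averaging
    # matrix is doubly stochastic, so the mean is preserved each step.
    values = [(values[(i - 1) % n] + values[i] + values[(i + 1) % n]) / 3
              for i in range(n)]

assert all(abs(v - target) < 1e-6 for v in values)
print(values)
```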

Swarm team development update (Viktor Tron, Daniel Nagy, Aron Fischer, Louis Holbrook)

Network of computers that share data with each other, p2p decentralized storage

Same p2p connections as the Ethereum network, tightly integrated into Ethereum

Decentralized store of app data

Type address into URL bar bzz://some-dapp-site.eth

Use Ethereum ENS to turn name.eth into a hash

Use hash to query peers for data

Render Dapp/website

Integrity protected, resilient, censorship free, server less, p2p web
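The name-to-hash step above can be sketched along the lines of ENS's namehash algorithm (EIP-137), which hashes labels right-to-left starting from a zero root, with sha256 standing in here for the keccak256 that ENS actually uses:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in hash; ENS actually uses keccak256, not sha256.
    return hashlib.sha256(data).digest()

def namehash(name: str) -> bytes:
    # EIP-137-style recursion: node = h(node + h(label)), labels processed
    # right to left, starting from 32 zero bytes for the root.
    node = b"\x00" * 32
    if name:
        for label in reversed(name.split(".")):
            node = h(node + h(label.encode()))
    return node

# "some-dapp-site.eth" maps to a fixed 32-byte node used to look up the
# content hash, which Swarm peers are then queried for.
node = namehash("some-dapp-site.eth")
print(node.hex())
```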

Swarm under the hood

Split file into 4k chunks, send to different places for storage.

Content addressed: each chunk gets a hash as its “id”

Chunks stored at node with “closest” address

Data stored uniformly across the network
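The chunking-and-addressing step is easy to sketch, with sha256 standing in for Swarm's actual chunk hash and ignoring Swarm's Merkle chunk tree:

```python
import hashlib

CHUNK_SIZE = 4096  # 4k chunks, per the talk

def chunk_and_address(data: bytes):
    # Split content into 4k chunks; each chunk's hash doubles as its
    # address, so the node with the "closest" address stores it.
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

data = b"x" * 10000
addressed = chunk_and_address(data)
print(len(addressed))  # 3 chunks: 4096 + 4096 + 1808 bytes
```

A side effect of content addressing visible even in this sketch: the two full chunks here are identical bytes, so they hash to the same address and deduplicate for free.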

Swarm payment infrastructure is modular, flexible, and extensible

Can provide incentive structure for content delivery and content storage, but also many other services

Status

If you’ve tried swarm, you’ve been using Swarm PoC Client 0.2. Proof of concept. Shows basic idea works. Performance slow

Learned a lot, rewriting parts of the system

Scope of the project has grown beyond just storage — direct node to node communication, live streaming

Swarm is a devp2p network. Can use routed network to send messages between any two nodes connected by intermediary nodes

Ethereum Whisper serves the need for anonymity at the cost of performance

PSS (Postal Service over Swarm) is routed, so it knows the fastest way with minimum of network load, different purpose

Uses whisper (v5) for Topics, expiry, cryptography

FUSE integration and SWARM — Synchronize files and directories between own devices

Scalable Responsive Dapps with Swarm and ENS (Daniel Nagy)

The challenges of scaling / bottlenecks

The blockchain: information stored on every node. Requires immediate full ordering of events. Requires fast broadcast of transactions and blocks

Network links, network-wide broadcasts

Overcoming storage bottlenecks — keep data in swarm, store root hash in ENS

Overcoming transaction bottlenecks — Raiden style updates to ENS resolvers

Overcoming broadcast bottlenecks — clever use of PSS with pub/sub

Eventual consistency on blockchain

15 second block time x 12 confirmations ~= 3 minutes

Updates reach concerned nodes faster

Update structure requires clever design

Just sending a root hash is insufficient: concurrency and delay issues

Endpoint aggregation is often preferred to complicated updates

Rely on generic infrastructure, if possible

Enough redundancy even for unpopular/niche apps.

Make sure data eventually reaches the blockchain: ETH > LES > BZZ > PSS

Make sure that important state is eventually committed. Think about incentives!

Use deposits or other commitment to reduce provider churn

On-chain dispute resolution

Designing Future-proof Smart Contract Systems (Jorge Izquierdo)

Aragon: decentralized organizations platform built on Ethereum

Usable by nontechnical users

Allow extendability of the system with third party on-chain modules

The case against upgradeability

Changing the rules on a live contract

Trust required on the entity that can upgrade

Front-running attack with an upgrade

Upgradeable libraries

Pros: transparent for the developer, allows ‘modifying’ the linked library

Cons: main contract ABI cannot be modified, data structures are fixed

AragonOS: Tiny kernel, upgradeable business logic on edges –> apps

Exploring the Ethereum Blockchain (Matthew Tan)

Etherscan — Ethereum block explorer. 1m MAUs, 9b API calls/mo, launched 8/2015

Ethereum blockchain. Genesis block – July 30th 2015

>4.4m blocks, >71m txns, >37.5m “internal transactions” — message calls that resulted in a value transfer, >10k ERC20 tokens

Devcon #1 11/2015 — 8.5k txns/day.

Devcon#2 9/2016 46k txns/day

Devcon #3 11/2017 500k tx/day

Challenges in building an infrastructure level service

Scalability, security, support, “just keeping up”, nodes management

Ethereum nodes management

Multiple client implementations (Geth and Parity). Different feature sets + network robustness

Node deployment strategies for public nodes: use SSD drives, tune cache settings, monitor node response, load balancers with multiple smaller nodes



Deriving the “Ethereum user persona” with Google Analytics

US remains dominant user base.

China #2 in 2017

The Raiden Network (Augusto Hack, Jannik Luhn, Loredana Cirstea)

Scalability solution for Ethereum, up to 1m tx/s

Currently only on Linux (MacOS coming)

Not an Ethereum node: a separate process communicating with an Ethereum node, so you also need an Ethereum node running on the machine

API (for building applications on top of Raiden)

PUT /api/1/connections/<token> — automatic channel management; tells Raiden how much of the funds to use on the channel

PUT /api/1/channels — manually opens a new channel; manage it on your own

PATCH /api/1/channels/<address> — change the channel state: deposit/close

POST /api/1/transfers/<token>/<target> — (once connected to the network) make transfers, using identifiers

GET /api/1/events/tokens/<address> — polling for EventTransferReceivedSuccess, EventTransferSentSuccess, EventTransferSentFailed



Microraiden

Robust off chain payment channel framework

Easy to use off the shelf, unidirectional, many to one, so don’t need to think about routing issues or fees for people who have routing channels open

Fast and free off chain

Available now on the Kovan testnet, available on main net by end of month

Microraiden applications

Paywalled content — pay per use, pay per view for static resources or streamed videos

M2M services: utilities, data storage, data usage, API usage. Scheduled API queries, token field robots

How does it work? off chain payments

Sender and receiver keep copies of the channel data: block number, current balance. The receiver has the last balance proof received from the sender — a message signed by the sender containing the balance, address, block number, and the receiver’s address
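A rough sketch of such a balance proof, with HMAC standing in for the Ethereum-signed message µRaiden actually uses, and a made-up field layout:

```python
import hmac, hashlib, struct

def balance_proof(sender_key: bytes, receiver: bytes, open_block: int,
                  balance_wei: int) -> bytes:
    # Off-chain "balance proof": the sender signs the channel identifier
    # plus the cumulative balance. HMAC stands in for an ECDSA signature.
    msg = receiver + struct.pack(">QQ", open_block, balance_wei)
    return hmac.new(sender_key, msg, hashlib.sha256).digest()

key = b"sender-private-key"   # hypothetical key material
receiver = b"\x02" * 20
sig1 = balance_proof(key, receiver, 12345, 10)
sig2 = balance_proof(key, receiver, 12345, 20)  # each payment raises the balance
assert sig1 != sig2
print(sig2.hex())
```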

ERC223 option of using less gas

Towards a Permanent ENS Registrar (Nick Johnson)

Ethereum name service

Launched on May 4

To date: 180,822 names auctioned, 168k ETH deposited

Dispute resolution not initially built in. Decided to build in as a separate layer, so can implement blacklists

Permanent registrar design — differing costs of capital

User — 1 ETH (total cost for indefinite use), Investor 5%/year, Speculator 0

Conclusion: transition to rent based model — 1 ETH / year, equalizes cost for everyone

Rolling auctions — no more ‘open auction’, any name can be bid on at any time. Makes guessing the name a user is bidding on impossible. 48 hour bid period, 48 hour reveal
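The sealed-bid mechanic is a standard commit-reveal scheme. A sketch (simplified: the real registrar's bid hash also binds the bidder's address, and uses keccak256 rather than sha256):

```python
import hashlib, os

def seal_bid(name: str, amount_wei: int, salt: bytes) -> bytes:
    # Commitment hiding both the name and the amount during the 48h bid
    # period; the revealed (name, amount, salt) must hash to the same value.
    payload = name.encode() + amount_wei.to_bytes(16, "big") + salt
    return hashlib.sha256(payload).digest()

salt = os.urandom(32)
commitment = seal_bid("vitalik.eth", 10**18, salt)

# Reveal phase: re-hash and compare; any change is detected.
assert seal_bid("vitalik.eth", 10**18, salt) == commitment
assert seal_bid("someone.eth", 10**18, salt) != commitment
```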

DNSSEC Oracle — the user submits proofs to the oracle; for any record the oracle doesn’t know about, the proof contains the record and a signature. The user makes a claim with TXT records to the registrar, the registrar queries the oracle, and the oracle responds with the answer. The registrar sends setSubnodeOwner to ENS

Starting with .xyz domains, 3/4 of domains compatible with algorithm

Intro to Solidity (Hudson Jameson)

High-level language for Ethereum contracts

It looks like Javascript (with types)

Contracts look like classes

Code is compiled to the EVM; once deployed, it is completely isolated and cannot reach outside of the EVM. It’s deployed and run on every node in the Ethereum network.

Cannot have outside sources of information (like sports scores) unless you explicitly include it

Trusted data feeds / oracles

Easy to write contracts, hard to make sure they are secure

Contracts standards are beginning to emerge

ERC20 — interface for tokens to interoperate with others on Ethereum blockchain

Consensys has smart contract best practices writeup

Tools:

Metamask (online plugin for browser. Acts as a bridge between browser and Ethereum network)

Remix (online Solidity IDE — debugging, static analysis, call functions, etc)

Etherscan (block explorer)

More tools:

Solgraph (visualizing contracts talking to other contracts is hard; tells you the inputs and outputs of a contract), solidity repl, Solium (linter for Solidity), evmdis (EVM disassembler), Truffle, Populus (Dapp framework), Embark, Ethereum package management

Questions:

What do you think is the biggest problem of Solidity? It is a young language, and like all young languages, it’s missing a lot of features / advanced tools that programmers are used to

Questions about safety of Solidity, money that’s been lost. It’s an experiment and evolving

What upcoming feature are you most excited about? New features to get cryptographic primitives into EVM — zkSNARKS



Enigma: Building an Ethereum-assisted Decentralized Data Marketplace (Guy Zyskind)

Launched from MIT in 2015

Decentralized computation platform with better privacy

Data is everywhere — the challenge is building a marketplace where data providers create supply and data consumers create demand.

Catalyst — the first App on the Enigma protocol

Data marketplace protocol

Ethereum smart contracts handle dataset registration, and remuneration

Routing, data storage and processing is done completely off chain

Only occasional payments and disputes need to touch the chain

Registration (off chain)

When a data provider wants to register a data set, they send a request to the off-chain network with metadata on where the dataset lives

Data provider deposits ENG to serve as collateral

Incentives and data discovery

Value of data — data sets are denominated with ENG tokens

Work — storage and computational work is rewarded through fees paid from the earning data curator to the workers

Discovery and ranking — the system associates the rank of a dataset with the stake put on it (rank ~ subscribers x price + C x deposit)

Heavy lifting happens off chain. Global consensus is expensive

Shard data into logical segments

For each segment, select a random node (or a quorum) to store the data and run subsequent computations over it — using pseudo-random verifiable secret sharing, threshold signatures (e.g. DFINITY), public random beacons



Outsourcing private computation

Methods: blockchain, partial encryption, FHE, zkSNARKS, sMPC, Secure Hardware

Multi-party computation

In the real world, we can’t rely on one trusted machine

MPC shows that for at most t<n bad actors, data can remain confidential even in use, and the result is correct

E.g. multiplication needs computation between the nodes

With addition and multiplication, we can securely store/compute any circuit –> we can construct a secure VM
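The addition half of this is easy to demonstrate with additive secret sharing, a toy sketch only; multiplication, which as noted above needs computation between the nodes, is omitted:

```python
import random

P = 2**61 - 1  # a prime modulus for the toy field

def share(secret: int, n: int):
    # Additive secret sharing: n - 1 random shares plus one chosen so the
    # sum is the secret mod P. Any n - 1 shares reveal nothing.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a, b = 42, 100
sa, sb = share(a, 3), share(b, 3)
# Addition is local: each node adds its own shares; no communication needed.
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
assert reconstruct(sum_shares) == (a + b) % P
print(reconstruct(sum_shares))  # 142
```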

[zk] — Introduction to zkSNARKs (Christian Reitwiessner)

Why zkSNARKs and why they are important: blockchains scale better if they only verify, not compute.

zkSNARKs: verification is orders of magnitude faster than the computation being verified

In blockchains everything is public –> zkSNARKs are “zero knowledge”

2 parties: prover (tx sender), verifier (blockchain)

Prover wants to convince verifier that some statement is true without revealing why it is true (zero knowledge). E.g.: a given message is a valid transaction; there is some w such that f(x,w) = y for fixed f. E.g. a Mini Sudoku (4×4) board is solvable — the size of the puzzle is not relevant. Need: table, paper, opaque tape, dice

Prover has a secret solution to the Mini Sudoku puzzle. Step 1: Prover shuffles the numbers, replacing each number with a different one (with dice). The shuffled solution is still a solution, but the initially given numbers are different. Step 2: Prover writes the shuffled puzzle on a piece of paper and covers each number with a piece of tape, then puts the paper on the table for the verifier to see. Step 3: Verifier chooses one of: reveal a certain row, column, or sub-square, or reveal the initially filled cells and the shuffling. Step 4: Prover removes the tape on those cells. Step 5: Verifier checks the numbers and shuffling against the initially filled cells. The prover did not reveal any private information apart from the shuffling chosen. Repeat steps 1-5 until the verifier is convinced

Round 2: the covered shuffled solution. Still, the prover is not able to cheat (up to a certain probability — because the verifier does not look at the complete solution), because of the commitment to the solution (paper and tape). The probability of cheating successfully decreases exponentially with the number of rounds
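The exponential decay is just repeated independent trials: if one round catches a cheater with probability p, surviving r rounds has probability (1-p)^r. The value of p below is illustrative, not taken from the talk:

```python
# One planted error survives a single round with probability (1 - p);
# rounds are independent, so survival decays exponentially.
def survival(p: float, rounds: int) -> float:
    return (1 - p) ** rounds

p = 1 / 13  # illustrative: suppose one of 13 possible checks exposes the error
for r in (1, 10, 50, 100):
    print(r, survival(p, r))

assert survival(p, 100) < 1e-3  # after 100 rounds, cheating is very unlikely
```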

Still some problems with the Mini Sudoku algorithm: it does not work for generic computations, but you can take an arbitrary computational problem and transform it into the Sudoku problem (a reduction). This is a solved problem, but lots of engineering work is needed to make it efficient. Many rounds of interaction — especially problematic for blockchains. Reason: a single error can be hidden in any cell (which is why you need many rounds). Motivation: find a game where a single error can be found in one round. Can be done with polynomials

Use polynomials to make an error visible almost everywhere. Changing a coefficient changes the polynomial’s value at almost every point. Theorem: two different polynomials of degree up to n can coincide in at most n points; n is tiny compared to the number of possible evaluation points



How do SNARKs work?

Map computation to polynomial equations. Prover knows a (secret only known to prover) w such that for all x: a(x) * w(x) = b(x) * c(x)

Verifier chooses secret evaluation point

Prover evaluates polynomials…using homomorphic encryption

Verifier checks equality

Homomorphic encryption is computation on encrypted data (known function / program, perform computation on inputs, result in encrypted output)

Arbitrary computation not yet practical, but “pairings” can do arbitrary sums and a single multiplication at the end. a(x) * w(x) = b(x) * c(x) works if done separately for each power of x. Allows prover to evaluate polynomial at encrypted point and check if outputs are equal

Trusted Setup

Reduced to 2-round protocol, but would like no interaction

Randomly generate x, encrypt to [x], send [x] to prover, evaluate polynomial, check equality

Equality can be checked on encrypted form

Not required to know x, only [x] , and can be reused

Trusted setup: generate [x] in a distributed process a la [x] = [a] + [b] + [c] + [d] (= [a + b + c + d]). Distributed to improve security — all secrets would need to be published to break the encryption. x = a + b + c + d can only be recreated if a, b, c, d are all known

Zero knowledge is achieved by adding the same random number to both sides

Interesting thing in SNARKs is they allow speed up in verification

Zero knowledge is a triviality to add to protocol, comes for free

Most of the work in the protocol is in the speedup

Tiny proof: 8 group elements

Because random number is unencrypted, prover cannot cheat, so you achieve zero knowledge

Verification: 5 pairing checks plus some elliptic curve arithmetic

[zk] — Democratizing zk-SNARKs: Improved setup, performance and security (Sean Bowe)

zkSNARKs in production software. zcash brought zkSNARKs to production last year. Very expensive — the only ones in production so far.

Engineering intensive, auditing is complex

zkSNARK challenges: curve security, performance, trust

BN128

Originally targeted 128-bit security, now closer to 110 bits

Implemented in zcash and Ethereum

Not a very rigid construction; not particularly optimized for pairing performance

Designing a new curve BLS12-381

Moving to next version of zcash

Targets closer to 128-bit security with minimal performance overhead

Implemented in Rust library called ‘pairing’

Very rigid and optimized construction with extensive review; appears in a new paper (BGM17)

Performance improvements — newer ones use less than 8 group elements

Proving system performance

Streamed proving: public parameters of the zkSNARK are loaded from disk as the computation is performed when constructing a proof. Led to ~100x lower memory usage during proving

Jubjub curve

Proving systems

State of the art proving systems are based on Jens Groth’s 2016 pairing-based SNARK — 3 group elements

Improved papers followed — subversion resistant SNARKs, snarky signatures

Groth’s original construction performs the best. Clear path toward non-malleability and reduced assumptions, without added proving costs

Jubjub curve

Special “embedded” elliptic curve optimized for use in zkSNARKs built on top of BLS12-381

Construct efficient cryptographic primitives: Collision-resistant hash functions, Commitment schemes, Authentication

Use SHA256 for everything in zcash, but not necessarily efficient. New crypto primitives are algebraic primitives so more efficient (5-10x)

Trusted setup

Biggest complaint about zkSNARKs

STARKs are trustless (and quantum resistant) but still on academic horizon. Still large proofs. So probably deal with SNARKs for next few years

Zcash and other projects need to perform these setups and making them trustworthy is difficult

We have a new paper that addresses this problem for zkSNARKs today

Multiparty computation

zkSNARK parameters can be setup via multiparty computation

6 people get together, only one participant needs to be honest (all have to be compromised to compromise the final result — construct false proofs)

The more participants the less likely all are compromised or secretly colluding

Zcash’s original MPC

Commitment round, round-robin across the MPC group

Expensive FFT

Then 2 more random coefficient rounds

Disadvantages: couldn’t scale to a large number of participants; less flexibility for each participant (software, hardware, OS); coordination was difficult because participants had to be selected in advance and could not drop out



New MPC just published

Eliminates pre-commitment round, scales to an unbounded number of participants, much cheaper per circuit

Phase 1 (Powers of Tau): communal ceremony for all zkSNARK circuits. Expensive but scales to hundreds/thousands of participants

Phase 2: statement-specific zkSNARK MPC. Much cheaper



[zk] — Scalable, Transparent and Post-quantum Secure Computational Integrity, with Applications to Cryptocurrencies (Eli Ben-Sasson)

Interactive Oracle Proofs (IOP) — Mathematically clean model but unrealistic

Scalable Transparent IOP of Knowledge (STIK) — IOP with near-optimal complexity. Provably sound, no crypto, but still unrealistic

Scalable Transparent Argument of Knowledge (STARK)

Realization of STIK

ZK optional (doubles prover time)

requires crypto primitives/assumptions

Bob (Prover) wishes to prove to Alice (Verifier) in ZK:

I know private input w, such that running computation C on w and public input x reaches output y in T steps

E.g. zcash JoinSplit, proof of solvency, smart contracts

IOP, informally — a multi-round interactive protocol between P and V: P sends a USB stick, V sends randomness, etc.

Unrealistic model but mathematically clean — every language in the model (NEXP) has a fully scalable, transparent IOP of knowledge with perfect ZK. Doubly scalable: prover time is quasi-linear, T log^O(1)(T); verifier time is exponentially faster, log^O(1)(T). Transparent: V only uses public randomness. IOP of knowledge: w can be extracted from the USB sticks. Perfect ZK: the strongest form of ZK — assumes V limits the number of queries. The last two points are contradictory; can compromise on ZK



STARK (Scalable transparent argument of knowledge)

Crypto-based realizations of STIKs are called STARKs; two options. Interactive (iSTARK): P sends the root of a Merkle hash instead of a USB stick, V responds with public randomness (Kilian92). Noninteractive (nSTARK): P uses the root of the Merkle hash as V’s randomness (Mic00, BCS16). Open research question: are nSTARKs invariably SNARKs?

First STARK without ZK: called SCI presented at Eurocrypt 2016

First zkSTARK [BBHR17] PoC code ready, paper not yet



Practice: zkSTARK benchmark — forensic DNA profile search

FBI has DNA profile DB D

Knows hash(D) — Davies-Meyer AES160

FBI reports Andy’s DNA profile match result, along with zkSTARK proof

Arithmetization: use of low-degree polynomials to argue about computation

Goes back to Gödel’s incompleteness theorem (1930s)

Used by Razborov to prove circuit lower bounds (1980s)

Modified by Babai and Fortnow to interactive proof systems

Polynomials: P(X) = Sigma_(i<=d) a_i*X^i; the degree d is the maximal i with a non-zero term

Function f : S –> F is called “degree d” if it is the evaluation of a degree-d polynomial P: for all x_0 in S, f(x_0) = P(x_0)

Fact: two distinct polynomials of degree d intersect at <=d points (e.g. two distinct lines intersect at <= 1 point)

So: two distinct functions of degree d evaluated at 100 * d points are 99%-far in relative hamming distance

Corollary: the space of low-degree functions forms a linear error-correcting code, called the Reed-Solomon (RS) code

Succinct computational integrity and privacy

Given (i) program C, (ii) input x_in, (iii) time bound T. Bob claims C(x_in, x_aux) = x_out after T steps (x_aux is an auxiliary private input).

Three goals: Integrity: can we trust Bob’s output? Can he prove it’s correct? Privacy: will the proof leak Bob’s x_aux (medical / financial info)? Succinctness: can the proof be verified in time much less than T?

Bob claim 1: “I have a list with 10^6 integers, all in {1,…,10}”. Problem: can Alice check this with 2 queries to the file and 99% certainty? No, she has to check all million points.

Bob claim 2: “I have encoded the list as a degree-10^6 function f (low-degree extension), evaluated on 10^9 points”. Problem: can Alice check now with 2 queries to f and 99% certainty? No, not enough information — she would still need to read 10^6 points

Bob: “What more do you need to be convinced claim 2 is true?” Alice: “Evaluate one more polynomial g of degree 10^7 - 10^6, then I’ll query 1 random point in each of f and g; based on the answers I will verify claim 2 is true with 99% certainty.” Let C(X) = (X-1)(X-2)…(X-10) be the constraint polynomial of degree 10 that vanishes exactly on 1,…,10. Let D(X) be the domain polynomial of degree 10^6 that vanishes on the domain of interest {1,…,10^6}

Alice’s test: pick a random x_0 in {1,…,10^9}; query f(x_0) and g(x_0), and let alpha and beta be the answers. Accept claim 2 as true iff C(alpha) = beta * D(x_0)

Completeness — suppose Bob is honest. Let P(X) be the interpolant of f. Then C(P(X)) (a new polynomial) vanishes on all x_0 in {1,…,10^6}. Fact: Q(X) vanishes on x_0 iff there is some other polynomial Q’(X) (one degree less) such that Q(X) = (X – x_0)Q’(X). Corollary: Q(X) = C(P(X)) vanishes on the domain {1,…,10^6} only if there exists Q’(X) with degree(Q’) = degree(Q) – deg(D) <= 10^7 – 10^6

Soundness — suppose Bob cheats, say f(1) = 11. Let P(X) be the interpolant of f. Then C(P(X)) does not vanish on all of x_0 in {1,…,10^6}. For any polynomial Q’(X), the product Q’(X)D(X) does vanish on all x_0 in {1,…,10^6}, because D(x_0) = 0. So for any Q’(X) of degree 10^7 – 10^6, the polynomial Q’(X)D(X) differs from C(P(X)), of degree 10^7. The evaluations of C(P(X)) and Q’(X)D(X) disagree on 10^9 – 10^7 points out of 10^9 points. So for random x_0, the test fails with probability 99%
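The divisibility argument scales down nicely. Here is a toy version over the domain {1,2,3,4} with allowed values {1,2,3}, using exact rational arithmetic: it checks completeness (an honest list passes Alice's test) and the core of soundness (one out-of-range value breaks divisibility by D).

```python
from fractions import Fraction
from functools import reduce
import random

def poly_eval(coeffs, x):
    # coeffs[i] is the coefficient of X^i
    return sum(c * x**i for i, c in enumerate(coeffs))

def lagrange_interp(points):
    # Coefficient list of the polynomial through the given (x, y) points.
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]   # running product prod_{j != i} (X - xj)
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            basis = [Fraction(0)] + basis          # multiply by X
            for k in range(len(basis) - 1):
                basis[k] -= xj * basis[k + 1]      # subtract xj * old basis
            denom *= xi - xj
        for k in range(n):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

# Scaled-down example: claim "all list values are in {1,2,3}" over the tiny
# domain {1,2,3,4} (the talk uses 10^6 values and 10^9 evaluation points).
DOMAIN = [1, 2, 3, 4]

def C(v):   # constraint polynomial: C(v) = 0 iff v is an allowed value
    return reduce(lambda acc, a: acc * (v - a), [1, 2, 3], Fraction(1))

def D(x):   # domain polynomial: vanishes exactly on DOMAIN
    return reduce(lambda acc, d: acc * (x - d), DOMAIN, Fraction(1))

values = [2, 1, 3, 2]   # honest list: all values in {1,2,3}
P = lagrange_interp([(x, Fraction(v)) for x, v in zip(DOMAIN, values)])

# Honest prover: Q(X) = C(P(X)) vanishes on DOMAIN, so it is divisible by
# D(X), and g(x) = Q(x)/D(x) is a genuine low-degree polynomial.
x0 = random.randint(5, 10**9)   # Alice's random point, outside the domain
alpha = poly_eval(P, x0)
beta = C(alpha) / D(x0)         # honest g(x0)
assert C(alpha) == beta * D(x0)  # Alice's check passes

# Cheating prover: one out-of-range value breaks divisibility by D, so no
# committed low-degree g can satisfy the check at (almost) any random x0.
Pbad = lagrange_interp([(x, Fraction(v)) for x, v in zip(DOMAIN, [11, 1, 3, 2])])
assert C(poly_eval(Pbad, 1)) != 0   # Q = C(Pbad(X)) does not vanish at 1
```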



EtherDelta: Off-chain Order Book with On-chain Settlement (Zack Coburn)

Send funds into a smart contract

Off-chain orderbook

Order parameters tightly packed, hashed, signed

Order has an expiry block number

To trade, submit order and signature
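A sketch of the pack-hash-sign flow, with sha256 standing in for the keccak256 used on-chain and an illustrative (not EtherDelta's actual) field layout:

```python
import hashlib, struct

def order_hash(token_get: bytes, amount_get: int, token_give: bytes,
               amount_give: int, expires_block: int, nonce: int) -> bytes:
    # Tightly pack the order fields, then hash; the maker signs this
    # digest, and takers submit the order plus the signature.
    packed = (token_get + amount_get.to_bytes(32, "big") +
              token_give + amount_give.to_bytes(32, "big") +
              struct.pack(">QQ", expires_block, nonce))
    return hashlib.sha256(packed).digest()

digest = order_hash(b"\x01" * 20, 1000, b"\x02" * 20, 500,
                    expires_block=4_500_000, nonce=7)
print(digest.hex())
```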

Cost — placing an order is free (no gas, no trading fee), because you’re not actually making a transaction

Other operations involves a gas fee

Taking liquidity involves a trading fee (Etherdelta’s business model)

90k addresses that have traded with smart contracts

30k DAUs, 3.4m transactions

0x: An Open protocol for Trustless, Low Friction Exchange of ERC20 Tokens (Will Warren)

Background and timeline — started work over a year ago, released white paper in Feb

Motivation: the world’s value is becoming tokenized (currency, traditional assets, digital goods)

Value will be tokenized and moved into a financial market like the Ethereum blockchain

Exchange ecosystem – fractured

Each app must bootstrap liquidity

Each app uses custom smart contracts –> no interoperability

…Should be structured as modular building blocks, and open standards win in the long run

Existing work:

On chain order books are expensive, bloat the blockchain, high latency

Automated Market Makers — low throughput, front running, gameable

State channels — 3+ txs, security deposit, challenge period. Off-blockchain: aggregate a series of state changes and resolve on the blockchain. Great for day trading, but has friction for one-time use — you have to move onto a channel, put down a security deposit (makes sure everyone is behaving honestly), and wait out a challenge period when moving off. When you move onto a state channel you isolate yourself from interesting smart contracts

Off-chain order relay, on-chain settlement. Standard order schema + smart contract system for settlement. Arbitrary transport layer and message-passing semantics. Anyone can relay orders and charge trading fees — relayers keep 100% of the fees they charge. Relayer strategies — can use an open order book strategy (like EtherDelta), can have a centralized matching strategy, or someone acts as a reserve manager and bids/asks for a trading pair



Why create an open standard?

Can create really great developer tools — lower barrier to entry for developers and others who want to create decentralized exchanges — highly competitive on fees and UX, with a diversity of exchanges serving different niches

If every decentralized exchange talks to each other, there’s network effects — they talk to each other and benefit each other

dapps can be customers of relayers, like Kin Alpha (9/25), created by a single developer in spare time over a few weeks; Radar Relay (10/10/2017); Paradex (launch Q4 2017) — centralized matching strategy



Exchange module for the EVM

0x allows smart contracts to natively execute trades in a single line of Solidity: exchange.fill(…order, value)

Governance

0x is an open system with a variety of stakeholders, all of which benefit from network effects

Open system must remain future proof — Token voting for future upgrades to protocol

Decentralized application stack

Unbundling -> specialized networks with distinct incentives and tokens

External Data — oracle, inject real-world data into blockchain | Augur

Computation — scalable computation. Not feasible in EVM | TrueBit

File Storage — scalable data storage. Too expensive in EVM | Filecoin, Swarm

Dispatcher — Snow

Example app stack — 1. retrieve data D; 2. computation C(D); 3. inject result C(D); 4. process result P(C(D))

Token abstraction: As end user of decentralized application, instead of holding a variety of tokens, can hold onto tokenized fiat. Hold and pass into 0x orders –> dApp will exchange with 0x

v2: order generation by smart contracts, atomic order matching, fee splitting between relayers, protocol optimizations



The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.

This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments and certain publicly traded cryptocurrencies/ digital assets for which the issuer has not provided permission for a16z to disclose publicly) is available at https://a16z.com/investments/.

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.