I am concerned that “[contracts] written in Haskell and OCaml are closer to algebra and therefore easier to formally verify” may turn out to be a myth that is accepted without quantification or critique

– Andrew Miller https://twitter.com/socrates1024/status/1082284477395681280

Let’s first review how some projects have addressed the formal verification problem.

Verification of resources with refinement types: Zen Protocol

The Zen Protocol project decided to write its smart contracts directly in F*, a language designed for formal verification, with advanced features like dependent types and refinement types.

The advantage of doing so is that one can represent directly in the type system the resources consumed by the contract. Imagine a contract is a function int -> int, and let’s add to its type an extra int that represents how many “steps” are required to evaluate that contract. Something like:

```
x -> x + 1          (* 1 step;  type : (int -> int) * 1 *)
x -> 2 * x + 3      (* 2 steps; type : (int -> int) * 2 *)
```

When we write any function, say x -> 3 * x * x + 2 * x - 1, the type checker will be able to tell whether it can be evaluated in m steps.

A very simplified version of that in F* would look like this (you can type-check the code at https://www.fstar-lang.org/run.php/):

```fstar
module Resources

noeq type op : nat -> Type =
  | Op : (int -> int) -> n:nat -> op n

(* Identity takes 0 steps *)
val id : op 0
let id = Op (fun x -> x) 0

(* Addition of a constant adds 1 step *)
val (+) : #n:nat -> int -> op n -> op (n + 1)
let (+) #n = fun v (Op f n) -> Op (fun x -> v + f x) (n + 1)

(* Evaluation in less than m steps at point 0 *)
val eval : #n:nat -> m:nat{n <= m} -> op n -> int
let eval #_ = fun _ (Op f _) -> f 0

(* Let's test if the type-checker catches the number of steps *)
let r0 = eval 0 id        (* x -> x can be evaluated in 0 steps *)
let r1 = eval 1 id        (* x -> x can be evaluated in 1 step *)
let r2 = eval 1 (1 + id)  (* x -> 1 + x can be evaluated in 1 step *)

(* Type-check failure when uncommented *)
(* let r3 = eval 0 (1 + id) *)  (* x -> 1 + x cannot be evaluated in 0 steps *)
```

The upside is you get compile-time verification of the contract’s resource consumption. That allows you to compile contracts and run them natively, instead of interpreting them in an instrumented interpreter that counts the gas step by step.
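For contrast, the run-time alternative that this avoids can be sketched in OCaml. This is a minimal, hypothetical mini-language with a gas-metered interpreter, not Zen Protocol’s actual implementation:

```ocaml
(* A hypothetical mini expression language, metered at run time *)
type expr =
  | Const of int
  | Var                        (* the single input variable x *)
  | Add of expr * expr
  | Mul of expr * expr

exception Out_of_gas

(* Evaluate [e] at [x], decrementing [gas] one step per operation.
   Unlike the F* version above, gas exhaustion is only detected at run time. *)
let eval ~gas e x =
  let remaining = ref gas in
  let step () =
    if !remaining = 0 then raise Out_of_gas else decr remaining
  in
  let rec go = function
    | Const n -> n
    | Var -> x
    | Add (a, b) -> step (); go a + go b
    | Mul (a, b) -> step (); go a * go b
  in
  go e

(* (2 * x) + 3 at x = 5, within a budget of 2 steps *)
let _ = eval ~gas:2 (Add (Mul (Const 2, Var), Const 3)) 5
```

With `~gas:1` the same expression raises `Out_of_gas` midway through evaluation, which is exactly the check the refinement-typed version moves to compile time.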

The downside is people writing contracts need to understand F* and its refinement types.

F* could be used to prove other properties of the code, for instance that the function n -> 2n always returns an even number:

```fstar
module Even

open FStar.Mul

(* Defines even *)
val even : nat -> bool
let even = fun n -> n % 2 = 0

(* Requires the result to be a natural number satisfying [even n = true] *)
val double : nat -> n:nat{even n}
let double = fun n -> 2 * n
```

But as far as I am aware, Zen Protocol has mainly been working on resource consumption.

Pre / post conditions with an SMT solver: PACT from Kadena

PACT is a database-inspired smart-contract language with a LISP syntax. Its interpreter / compiler is written in Haskell.

It has been extended with pre / post conditions that are sent to an SMT solver, in the same way WhyML works: https://blog.monic.co/introducing-the-pact-property-checker/

Writing the pre / post conditions is left to the user; therefore you can verify any property you wish, as long as you are able to express it and the SMT solver is able to prove it.

WhyML has a gallery of verified programs (http://toccata.lri.fr/gallery/why3.en.html), so let’s pick a small one from there:

```whyml
module MaxAndSum

  use int.Int
  use ref.Ref
  use array.Array

  let max_sum (a: array int) (n: int) : (sum: int, max: int)
    requires { n = length a }
    requires { forall i. 0 <= i < n -> a[i] >= 0 }
    ensures  { sum <= n * max }
  = let sum = ref 0 in
    let max = ref 0 in
    for i = 0 to n - 1 do
      invariant { !sum <= i * !max }
      if !max < a[i] then max := a[i];
      sum := !sum + a[i]
    done;
    !sum, !max

end
```

WhyML requires you to:

Specify pre-conditions to the usage of the function

Specify post-conditions that the engine needs to prove

Help from time to time with some hints, in this case a loop invariant

[Not visible here] help the SMT with some lemmas for harder proofs

The upside is you have a powerful and flexible system that allows you to prove many properties about your program and the underlying language is a subset of OCaml.

The downside is the user is still required to express in logic the properties they want to prove, and to help the SMT solver with not-so-obvious lemmas when it cannot complete the proof by itself.

Embedding of a DSL into Coq: Zilliqa

Ilya Sergey and his colleagues from Zilliqa and NUS embedded a crowdfunding contract written in Scilla into Coq and proved various properties about it:

https://blog.zilliqa.com/zilliqas-first-progress-update-event-partnerships-project-grants-and-the-unveiling-of-scilla-74fd397ca4c6

The properties proven are (https://ilyasergey.net/papers/temporal-isola18.pdf):

No leaking funds

Donation record preservation

The backer can get refunded

The authors also consider an auction and a multi-party game, and suggest proving the following properties:

Auction

The balance of SimpleAuction should be greater than or equal to the sum of the highestBid and the values of all entries in pendingReturns

For any account a, the value of the corresponding entry in pendingReturns (if present) should be equal to the sum of values of all transfers a has made during its interaction with the contract

An account a, which is not the highest bidder, should be able to retrieve the full amount of its bids from the contract, and do so exactly once
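The first auction property above is the kind of invariant one can also state executably. Here is a minimal OCaml sketch, with hypothetical state and field names mirroring the ones above (this is not Scilla or Coq code):

```ocaml
(* Hypothetical auction state, mirroring the fields named above *)
type auction = {
  balance         : int;
  highest_bid     : int;
  pending_returns : (string * int) list;  (* account -> refundable amount *)
}

(* Property 1: balance >= highestBid + sum of all pendingReturns entries *)
let solvent (a : auction) : bool =
  let pending =
    List.fold_left (fun acc (_, v) -> acc + v) 0 a.pending_returns
  in
  a.balance >= a.highest_bid + pending

(* 100 >= 60 + (30 + 10) holds, so this state satisfies the invariant *)
let _ = solvent { balance = 100; highest_bid = 60;
                  pending_returns = [ ("alice", 30); ("bob", 10) ] }
```

The point of the Coq embedding is precisely to prove that every reachable contract state satisfies such a predicate, rather than checking it state by state at run time.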

The game

No other party besides player1, player2, or owner can be awarded the prize, which is equal to the contract’s balance, and the balance remains constant until then

Each player can only submit their non-trivial choice once, and this choice will have to be a key from payoffMatrix in order to be recorded in the corresponding contract field

They have also embedded the full Scilla language into Coq, and I believe they have now written the tactics required to automatically prove specific properties of interest that generalise, in some sense, the particular cases considered above: what the authors call prodigal (loses funds) and greedy (locks funds) contracts.

The upside is the system is completely automated.

The downside is the set of properties about contracts that can be proven is closed, and if Coq cannot complete a proof, you don’t really have an option besides rewriting your program until Coq can prove it.

Scilla’s syntax is reminiscent of OCaml’s, as the authors explain (https://arxiv.org/pdf/1801.00687.pdf):

As implementations of transitions in Figure 1 demonstrate, the language syntax includes binding of pure expressions (such as arithmetic and boolean operations, as well as manipulation with mappings) via OCaml-style let-in construct

Which is then mapped into Coq:

Coq’s programming component, Gallina, is an ML-family language […] Our embedding of Scilla to Coq is, thus, shallow [30]: we model the semantics of the pure (i.e., non message-passing) component of Scilla contracts via native Gallina functions, while the message-passing, state-transition aspect of Scilla is accounted for by encoding contract semantics as a relation in Coq. In essence, this is similar to implementing a regular domain-specific language with a tailored execution runtime on top of Lisp or Scala

Is OCaml really easier to formally verify?

All the formal verification methods discussed above map their code into some variant of ML (meta-language), which is the same basis on which OCaml is built:

If you ignore the types and some details (implicit parameters), F* is just a subset of OCaml

If you ignore the pre / post conditions and invariant annotations, WhyML is just a subset of OCaml

If you ignore some details, Gallina is just a subset of OCaml

Which obviously makes mapping / embedding OCaml into these systems much easier than other languages, as long as the OCaml code was written with that mapping in mind: it mostly means avoiding imperative features and objects, and keeping the code as simple as possible.

OCaml is also a better language

By default, OCaml already catches many mistakes that languages like C, C++, Java, Python or JavaScript don’t:

Typecasting: improper usage of a pointer

```cpp
int* a = new int;
char* b = a;   // improper pointer cast
```

Sub-typing: usage of a class in the wrong context, missing methods, due to interfaces / inheritance, covariance and contravariance

```java
Object[] a = new String[1];
a[0] = new Integer(0);   // compiles, but throws ArrayStoreException at run time
```

Memory leaks

At least most “modern” languages have adopted garbage collection, even if some of them, like Python, are notorious for getting it wrong. In older languages you malloc and free at your own risk.

Sum types (enums)

Most languages only support “label-like” enums and force you to use inner classes for structured ones. Also, without pattern matching, destructuring sum types is a nightmare, which is probably why most languages only support the “label-like” ones. Some languages like C have untagged union types, which are rarely used.
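In OCaml, by contrast, structured sum types and pattern matching come for free. A small illustrative example:

```ocaml
(* A structured sum type: each constructor can carry its own data *)
type shape =
  | Point
  | Circle of float                (* radius *)
  | Rectangle of float * float     (* width, height *)

(* Pattern matching destructures each case;
   the compiler warns if a case is missing *)
let area = function
  | Point -> 0.
  | Circle r -> 3.14159 *. r *. r
  | Rectangle (w, h) -> w *. h
```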

Product types (structs)

OCaml gives you light, intermediate and heavy product types in the form of anonymous pairs (x -> (x, x + 1)), records ({ name = "x"; value = 1 }) and modules. Just being able to return a pair of values without declaring a complex object to hold them is valuable and makes code shorter and cleaner.
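For instance, returning two values requires no auxiliary class at all:

```ocaml
(* Quotient and remainder returned together as an anonymous pair *)
let divmod a b = (a / b, a mod b)

(* Destructured directly at the binding site: q = 3, r = 2 *)
let (q, r) = divmod 17 5
```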

First class functions

Languages that don’t have first-class functions need to use function pointers or function objects, which end in a mess.
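In OCaml, functions are ordinary values that can be passed, returned and composed directly, with no pointer or object ceremony:

```ocaml
(* Composition builds and returns a new function *)
let compose f g = fun x -> f (g x)

let double x = 2 * x
let succ x = x + 1

(* (2 * x) + 1 as a single function value *)
let double_then_succ = compose succ double

let _ = double_then_succ 10   (* evaluates to 21 *)
```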

Polymorphism (generics)

Notoriously broken in Java and C#

But there is actually a reason: System F

Have you noticed that programmers are usually fluent in many programming languages? I have written industrial (deployed) code in C, C++, Perl, Java, C#, Python, JavaScript and OCaml. And I have written personal / informal code in Logo, Basic, Pascal, Fortran, Smalltalk, Scheme and Haskell. This is by no means unusual.

How many programmers do you know who can speak 15 natural languages?

How many programmers can just as easily write code in Lambda Prolog?

The Wikipedia page on Lambda Prolog gives some interesting links about its use in formal verification:

In 1997, Amy Felty wrote a tutorial on lambda Prolog and its Applications to Theorem Proving

John Hannan wrote a tutorial on Program Analysis in lambda Prolog for the 1998 PLILP conference

Just TRY IT! It’s mind-bending.

The trick to our apparent multilingualism is that all those languages are just surface syntax on top of a single language. They are all subtly broken versions of System F, which is why it is so easy to pick up new languages of the same class.

There is a theorem that morally says the following: via the Curry-Howard isomorphism, System F is isomorphic to second-order intuitionistic propositional logic. Without entering into the details of its expressive power, this mainly means System F is closer to the maths we all learnt in university and easier to formalise with said maths (actually, isomorphic to them). OCaml and Haskell being closer to System F than languages like C, C++, Java or Python, I am afraid it means this sentence is (kind of) correct:

Haskell and OCaml are closer to algebra and therefore easier to formally verify
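The Curry-Howard reading can be glimpsed even in plain OCaml (using OCaml’s polymorphic types as a stand-in for System F): a type is a proposition, and a program of that type is its proof.

```ocaml
(* The type 'a -> 'a is the proposition "A implies A";
   the identity function is its proof *)
let id : 'a -> 'a = fun x -> x

(* ('a -> 'b) -> ('b -> 'c) -> ('a -> 'c) reads as
   "(A => B) and (B => C) imply (A => C)": transitivity of implication.
   Function composition is the proof. *)
let trans : ('a -> 'b) -> ('b -> 'c) -> ('a -> 'c) =
  fun f g x -> g (f x)
```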

[updated on 2019-01-24: fixed some typos]
