As conventional simulation-based testing has increasingly struggled to cope with design complexity, strategies centered around formal verification have quietly evolved.

In this article, I review the origins and evolution of formal verification, which is a field with a fascinating history spanning over 70 years. I’ve been a user of formal methods for over two decades, and although I’ve not had a career as illustrious as some of the people whose contributions I cover in this column, I’ve certainly enjoyed using formal methods in all their glory, from interactive theorem proving to state-of-the-art model checking to equivalence checking. Along the way, I like to think that I’ve made my own small contributions.

The foundation of formal verification

As conventional simulation-based testing has increasingly struggled to cope with design complexity, strategies centered around formal verification methods have quietly evolved in parallel. Edsger Wybe Dijkstra famously coined the phrase “Testing shows the presence, not the absence, of bugs.” In April 1970, he challenged the design community to think differently. Even though his remark was made in the context of software verification, Dijkstra’s call to action has had much wider influence. Nor was Dijkstra alone in this train of thought. In fact, this quiet revolution in the U.S. and the U.K. can be traced back to the 1950s. In 1954, Martin Davis produced the first computer-generated mathematical proof of a theorem, working in a decidable fragment of first-order logic called Presburger arithmetic. The theorem itself was that the sum of two even numbers is even.
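
For flavor, here is what that very theorem looks like in a modern interactive prover. This is a hedged sketch in Lean 4, a tool that of course did not exist in 1954; the statement and proof are my own restatement, not Davis’s original formulation.

```lean
-- The theorem Davis's 1954 Presburger-arithmetic program proved,
-- restated and proved here in Lean 4 (illustrative sketch only).
theorem even_add_even (a b : Nat)
    (ha : ∃ k, a = 2 * k) (hb : ∃ k, b = 2 * k) :
    ∃ k, a + b = 2 * k := by
  obtain ⟨m, hm⟩ := ha        -- a = 2 * m
  obtain ⟨n, hn⟩ := hb        -- b = 2 * n
  refine ⟨m + n, ?_⟩          -- witness: k = m + n
  subst hm hn                 -- goal: 2 * m + 2 * n = 2 * (m + n)
  rw [Nat.mul_add]            -- distribute and close by reflexivity
```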



The notion of mathematical proof has been the cornerstone of formal methods. (Source: Axiomise.com)

Theorem proving

Beginning in the late 1960s, first-order theorem provers were applied to the problem of verifying the correctness of computer programs, initially for languages such as Pascal and later for Ada and Java. Notable among these early verification systems was the Stanford Pascal Verifier, which was developed by David Luckham at Stanford University and which was based on the Stanford Resolution Prover, itself built on J.A. Robinson’s Resolution Principle.

In Edinburgh, Scotland, in 1972, Robert S. Boyer and J. Strother Moore built the first successful machine-based prover, called Nqthm. Its logic was based on a dialect of Pure Lisp, it could prove mathematical theorems automatically, and it became the basis for ACL2. At almost the same time, Robin Milner built the original LCF system for proof checking at Stanford University. Descendants of LCF make up a thriving family of theorem provers, the most famous being HOL 4, built by Mike Gordon and Tom Melham; HOL Light, built by John Harrison; and Coq, built at INRIA in France, drawing on original work by Gérard Huet and Thierry Coquand.

Both ACL2 and the various provers such as HOL 4 and HOL Light have been used extensively for digital design verification of floating-point units, most notably at AMD and Intel. In relation to this, John Harrison, Josef Urban, and Freek Wiedijk have written a useful article covering the “History of Interactive Theorem Proving.”



Timeline of theorem proving (Source: Axiomise.com)

Another notable theorem prover that is not considered part of the LCF family is PVS. Developed by Sam Owre, John Rushby, and Natarajan Shankar at SRI International, PVS has been used extensively in the verification of safety-critical systems, especially on space-related work at NASA.

Theorem provers have long proved valuable and were once seen as the formal “tools of choice,” but they had a major shortcoming. If they couldn’t prove a theorem, they couldn’t say why. There was no way of generating a counter-example or any form of explanation as to why a conjecture could not be a lemma, i.e., a subsidiary or intermediate theorem in an argument or proof. This limited their applicability to formal experts with solid foundations in computer science and mathematics — so much so that many engineers used to quip, “You need a Ph.D. in formal just to use formal tools.”

Model checking

Whilst theorem proving was gaining attention for proving the properties of infinite systems, it clearly had its shortcomings. These began to be addressed by several people during the 1970s. Particularly notable in this group were Allen Emerson and Edmund Clarke at Carnegie Mellon University (CMU) in the U.S. and J.P. Queille and Joseph Sifakis in France. Also, in the late 1970s and early 1980s, Amir Pnueli, Susan Owicki, and Leslie Lamport began work toward creating languages to capture the properties of concurrent programs using temporal logic.

In 1981, Emerson and Clarke combined the state-space exploration approach with temporal logic in a way that provided the first automated model-checking algorithm. The result was fast, could prove properties of programs, and, more importantly, could provide counter-examples when it ran across something it could not prove. Moreover, it happily coped with partial specifications.



Timeline of model checking (Source: Axiomise.com)

Model checking finds bugs and builds exhaustive proofs through state-space search using automatic solvers. During the mid-1980s, several papers were written showing how model checking could be applied to hardware verification, and the user base for formal verification began to grow. Soon, however, a new challenge emerged: The size of the hardware designs that could be verified with model checking was limited by explicit state-space reachability, which must enumerate every reachable state one at a time.
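
To make the state-space-search idea concrete, here is a minimal explicit-state sketch in Python. The toy transition system, the safety property, and all the function names are my own illustrative assumptions, not taken from any real tool; the point is simply that every reachable state is enumerated one at a time, which is exactly why this approach runs out of steam on large designs.

```python
# A minimal explicit-state model-checking sketch (illustrative only).
from collections import deque

def successors(state):
    # Toy transition relation: a 3-bit counter that can increment or reset.
    return [(state + 1) % 8, 0]

def bad(state):
    # Safety property under check: "the counter never reaches 7".
    return state == 7

def model_check(initial):
    frontier = deque([initial])
    parent = {initial: None}          # visited set that also records the trace
    while frontier:
        s = frontier.popleft()
        if bad(s):
            trace = []                # rebuild the counter-example trace
            while s is not None:
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None                       # no bad state reachable: property proved

print(model_check(0))   # -> [0, 1, 2, 3, 4, 5, 6, 7], a counter-example trace
```

Symbolic model checking, discussed next, keeps the same search-and-prove contract but operates on sets of states encoded as Boolean formulas rather than on individual states.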

Around this time, Randall Bryant from CMU’s electrical engineering department began exploring the logic-level simulation of transistor circuits. Specifically, he considered using a three-valued simulation. Bryant also explored the use of symbolic encoding to compress simulation test vectors. The biggest challenge was an efficient encoding of these symbols and of the Boolean formulas built from them. In response, Bryant invented ordered binary decision diagrams (OBDDs). Ken McMillan, a graduate student working with Edmund Clarke on his Ph.D., learned of this work, and in 1993, symbolic model checking was born.

OBDDs provide a canonical form for Boolean formulas that is often substantially more compact than conjunctive or disjunctive normal forms, and very efficient algorithms have been developed for manipulating them. To quote Ed Clarke: “Because the symbolic representation captures some of the regularity in the state space determined by circuits and protocols, it is possible to verify systems with an extremely large number of states — many orders of magnitude larger than could be handled by the explicit-state algorithms.”
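
To give a feel for why this works, here is a small, hedged sketch of a reduced OBDD in Python. The class and function names are my own, not from any BDD package; the essential trick is the “unique table,” which shares nodes so that logically equivalent formulas reduce to the very same node, which is the canonicity property the quote above relies on.

```python
# A minimal reduced ordered BDD (ROBDD) sketch, illustrative only.
# Variables are indexed 0..num_vars-1 in a fixed order; node ids 0 and 1
# are the constant-false and constant-true terminals.

class BDD:
    def __init__(self, num_vars):
        self.num_vars = num_vars
        self.unique = {}             # (var, low, high) -> node id
        self.nodes = [None, None]    # ids 0 and 1 reserved for terminals
        self.FALSE, self.TRUE = 0, 1

    def mk(self, var, low, high):
        if low == high:              # redundant test: drop the node
            return low
        key = (var, low, high)
        if key not in self.unique:   # hash-consing gives canonicity
            self.unique[key] = len(self.nodes)
            self.nodes.append(key)
        return self.unique[key]

    def var(self, i):
        return self.mk(i, self.FALSE, self.TRUE)

    def apply(self, op, u, v, memo=None):
        memo = {} if memo is None else memo
        if (u, v) in memo:
            return memo[(u, v)]
        if u in (0, 1) and v in (0, 1):
            res = int(op(bool(u), bool(v)))
        else:
            # Split on the top-most (smallest-index) variable of u and v.
            ui = self.nodes[u][0] if u > 1 else self.num_vars
            vi = self.nodes[v][0] if v > 1 else self.num_vars
            i = min(ui, vi)
            u0, u1 = (self.nodes[u][1], self.nodes[u][2]) if ui == i else (u, u)
            v0, v1 = (self.nodes[v][1], self.nodes[v][2]) if vi == i else (v, v)
            res = self.mk(i,
                          self.apply(op, u0, v0, memo),
                          self.apply(op, u1, v1, memo))
        memo[(u, v)] = res
        return res

# Usage: "x AND y" and "y AND x" reduce to the very same node id.
bdd = BDD(2)
x, y = bdd.var(0), bdd.var(1)
f = bdd.apply(lambda a, b: a and b, x, y)
g = bdd.apply(lambda a, b: a and b, y, x)
assert f == g   # canonical form: equivalent formulas share one node
```

Production BDD packages add refinements such as complemented edges, garbage collection, and dynamic variable reordering, but the canonical-form idea is the same.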

Together with Carl J. Seger, Bryant went on to invent another very powerful model-checking technique called symbolic trajectory evaluation (STE), which has been used extensively for processor verification at companies such as Intel for more than 20 years. Several generations of Pentium floating-point units have been verified using STE, and Intel continues to use this technique to this day.

The main benefit that STE provides over conventional model checking is that it employs three-valued logic (0, 1, X), with a partial order over a lattice, to carry out symbolic simulation over finite time periods. The use of “X” to denote an unknown value in the design, along with the ability to perform Boolean operations on “X” values (e.g., 0 AND X = 0, 1 AND X = X), supports fast, automatic data abstraction. When coupled with BDDs, this provides a very fast algorithm for finite-state model checking.
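
The three-valued idea itself is easy to sketch. Below is a minimal, illustrative Python version of (0, 1, X) conjunction, disjunction, and negation; the function names are my own invention. The point to notice is that many outputs are fully determined even when some inputs are unknown, which is what gives STE its cheap, automatic data abstraction.

```python
# A minimal sketch of the three-valued (0, 1, X) logic used by STE-style
# symbolic simulation, where 'X' stands for an unknown value.
X = "X"

def and3(a, b):
    if a == 0 or b == 0:       # a controlling 0 forces the output to 0
        return 0
    if a == 1 and b == 1:
        return 1
    return X                   # otherwise the result is unknown

def or3(a, b):
    if a == 1 or b == 1:       # a controlling 1 forces the output to 1
        return 1
    if a == 0 and b == 0:
        return 0
    return X

def not3(a):
    return X if a == X else 1 - a

# Usage: 0 AND X is known to be 0, while 1 AND X stays unknown.
print(and3(0, X))   # -> 0
print(and3(1, X))   # -> X
print(or3(1, X))    # -> 1
```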

The main disadvantage of STE is that it cannot specify properties over unbounded time periods. This limits its applicability to verifying digital designs for bounded-time behavior.

Other prominent developments in formal technology that remain in use, combining model checking with some form of theorem proving for the verification of concurrent systems, include the following:

However, all of these languages and their supporting formal verification tools have primarily been applied to software or to the high-level modeling of systems, and less so to hardware. Furthermore, using them requires a substantial background in mathematics.

Model-checking applications for hardware verification have come a long way. Today, we can verify designs as big as 1 billion gates, with state spaces on the order of 10 to the power of 120 million. This is in stark contrast to the 1980s, when we could only verify designs comprising hundreds of states.
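
As a rough sanity check on that figure (my own back-of-the-envelope arithmetic, not a number taken from any tool report): a design with N state-holding flip-flops has up to 2^N states, and

$$2^{N} = 10^{N \log_{10} 2} \approx 10^{0.3N},$$

so a design with a few hundred million flip-flops has a potential state space on the order of 10 to the power of 100 million or more, which is where figures like “10 to the power of 120 million” come from.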



Axiomise reports finding bugs in seconds in designs with 338 million flip-flops (1.1 billion gates). (Source: Axiomise.com)

Equivalence checking

Both theorem proving and model checking continue to be used for different reasons on both hardware and software, but a third type of formal verification — equivalence checking — is also being used extensively.



Equivalence checking is one of the main formal technologies in use today. (Source: Axiomise.com)

This method relies on comparing two models of a design and producing an outcome that either proves they are equal (equivalent) or provides a counter-example showing where they disagree. Early forms of equivalence checking targeted combinational hardware designs, but scalable equivalence checkers now exist for sequential designs as well.
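
Here is a minimal sketch of that idea, assuming two tiny “designs” written as ordinary Python functions; the toy example and its names are mine. Production tools use SAT and BDD engines rather than brute-force enumeration, but the contract is the same: either every input assignment agrees, which constitutes an exhaustive proof of equivalence, or the first disagreement is reported as a counter-example.

```python
# A minimal combinational equivalence-checking sketch (illustrative only).
from itertools import product

def spec(a, b, c):
    # Reference model: a simple majority function.
    return (a and b) or (b and c) or (a and c)

def impl(a, b, c):
    # "Optimized" implementation to be checked against the reference.
    return (a and (b or c)) or (b and c)

def check_equivalence(f, g, num_inputs):
    for bits in product([False, True], repeat=num_inputs):
        if f(*bits) != g(*bits):
            return bits        # counter-example: inputs where the two differ
    return None                # exhaustive proof of equivalence

cex = check_equivalence(spec, impl, 3)
print("equivalent" if cex is None else f"mismatch at inputs {cex}")
```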

These tools are now widely used, most notably for the combinational equivalence checking of an RTL description and a netlist but also for sequential netlist synthesis verification. It is now common practice for hardware designers to use equivalence checkers to establish by comparison that an unoptimized digital design is functionally the same as its optimized counterpart, wherein the optimization algorithms may have applied power-saving features such as clock gating.

Summary

Hopefully, this quick tour has provided a sense of how formal verification techniques have steadily evolved, rising to each challenge set before them. Formal verification is an established approach that is coming into its own because it helps us confront ever-increasing complexity. Formal verification also has an elegance to it that is perhaps not as fully recognized as it should be.

At Axiomise, we love enabling people in the use of formal verification through a combination of training, consulting, and service offerings. Do you use formal verification, or are you considering using it in the future? I welcome your comments and questions and would love to hear your thoughts.