Gernot Heiser is CTO and founder of Open Kernel Labs.

Earlier this year, Open Kernel Labs and its research partner, NICTA (Australia's Information and Communications Technology Research Centre of Excellence), announced completion of groundbreaking long-term research and development providing formal mathematical proof of the correctness of a general-purpose operating-system microkernel and virtualization platform.

The project centered on the need to assure extremely high levels of reliability, safety, and security in mission-critical domains that include aerospace, national security, and transportation. The correctness proof will significantly reduce the effort required for certification under formal security or safety evaluation regimes, such as Common Criteria, and paves the way for deploying trustworthy virtualization technology for business-critical applications in mobile telephony, business intelligence, and mobile financial transactions.

Formal Verification: Key to Secure and Reliable Software

Existing certification regimes center on software processes, testing, and the conformance of software models to specifications. By contrast, the formal verification project actually proved the correctness of the code itself, using formal logic and machine-checked theorem proving -- the first time this has been achieved for a general-purpose operating-system (OS) kernel or hypervisor. The verification eliminated a wide range of exploitable errors in the kernel, including design flaws and coding errors such as buffer overflows, null-pointer dereferences and other pointer errors, memory leaks, arithmetic overflows, and unhandled exceptions.

The project entailed more than five years of work by NICTA and researchers from the University of New South Wales on behalf of OK Labs. The joint team verified 7,500 lines of source code, proving over 10,000 intermediate theorems in more than 200,000 lines of formal proof. The verified code base, which underlies the OK:Verified platform (project name seL4), is a newly developed OS microkernel. It is derived from the globally developed and deployed open source L4 project and is designed for use as a hypervisor that can support multiple virtual machines.

The code constitutes a third-generation microkernel and is influenced by the Extremely Reliable Operating System (EROS) project. It features abstractions for virtual address spaces, threads, inter-process communication (IPC), and, like EROS but unlike most commercial systems, capabilities for access control. Initial development and all verification work were performed on an ARMv6-based platform, with a subsequent port of the kernel (so far without proof) to the x86 architecture.
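The capability model mentioned above can be sketched in a few lines: every operation on a kernel object must be authorized by a capability that explicitly carries the required right, with no ambient authority to fall back on. The sketch below is purely illustrative; the class and function names are invented and do not reflect seL4's actual API.

```python
# Illustrative sketch of capability-based access control.
# Not seL4's actual API -- names and types are invented for this example.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    obj: str                 # the kernel object this capability names
    rights: frozenset        # the rights it conveys, e.g. {"send", "read"}

def invoke(cap: Capability, operation: str) -> str:
    # Access is granted only if the capability carries the required right;
    # possession of the capability is the sole source of authority.
    if operation not in cap.rights:
        raise PermissionError(f"capability for {cap.obj} lacks '{operation}' right")
    return f"{operation} on {cap.obj} permitted"

# A thread holding a send-only capability to an IPC endpoint can send,
# but any other operation is refused by the kernel.
endpoint_cap = Capability("ipc_endpoint_7", frozenset({"send"}))
print(invoke(endpoint_cap, "send"))   # send on ipc_endpoint_7 permitted
```

The design point this illustrates is that access decisions become purely local checks against an unforgeable token, which is what makes the model amenable to formal reasoning.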

The proof was machine-checked and is one of the largest proofs ever completed. Formal proofs for specific properties have been conducted for smaller kernels and code bases, but what the team achieved constitutes a general, functional correctness proof, which has never before been realized for software of this complexity or size.

To reach this milestone, the team invented new techniques in formal machine-checked proof methods, made advances in the mathematical understanding of real-world programming languages, and developed new methodologies for rapid prototyping of OS kernels. This work went beyond the usual checks for the absence of certain specific errors. Instead, it verified full compliance with the system specification. The project has yielded not only a verified microkernel but also a body of techniques that can be used to develop and verify other pieces and types of software.

Smaller is Better

Large code size and complexity are the archenemies of software reliability. Experience shows that even well-engineered software tends to exhibit two to five bugs per thousand lines of code (kLoC), and where there are bugs, there are almost certainly security or safety vulnerabilities. Since many mobile/wireless devices now field hundreds of thousands, even millions, of lines of code, they may literally harbor thousands of bugs, with concomitant vulnerabilities.
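The back-of-envelope arithmetic behind that claim is straightforward and worth making explicit:

```python
def expected_bug_range(kloc, low=2, high=5):
    """Expected bug count for a code base of `kloc` thousand lines,
    assuming the empirical rate of 2-5 bugs per kLoC cited above."""
    return kloc * low, kloc * high

# A million-line device software stack (1,000 kLoC):
print(expected_bug_range(1000))   # (2000, 5000)

# By contrast, a 10 kLoC trusted kernel:
print(expected_bug_range(10))     # (20, 50)
```

Even at the optimistic end of the range, a million-line code base carries thousands of latent bugs, which is why shrinking the privileged code matters so much.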

The security and reliability of a computer system can only be as good as that of the underlying OS kernel. The kernel, defined as the part of the system executing in the most privileged mode of the processor, has unlimited hardware access. Therefore, any fault in kernel implementation has the potential to undermine the correct operation of the entire system.

The existence of bugs in any sizeable code base is inevitable. As a consequence, when security or reliability is paramount, the usual approach is to reduce the amount of privileged code, in order to minimize the exposure to bugs.

The best way to manage this complexity and accompanying reliability challenges is to architect systems to limit the size of security- or safety-critical components. The trusted computing base (TCB) comprises the part of the system that can violate security policies. If the TCB is small enough, it becomes feasible to engineer it to the highest standards.

TCB reduction is a primary motivation behind security kernels and separation kernels, the MILS (multiple independent levels of security) approach, the use of small hypervisors as a minimal trusted base, as well as systems that require the use of type-safe languages for all code except some "dirty" core. It is the reason why Common Criteria, at the strictest evaluation level, requires the system under evaluation to have a "simple" design.

Easily said, but not as easily done. Rich functionality (like GUIs and sophisticated application programs) is generally implemented on top of powerful, function-rich OSes like Windows, Linux, or Android. These are themselves large and complex, and consequently full of bugs. But the OS is inherently part of the TCB of whatever program runs on top of it.

One simple and elegant solution to the challenge of TCB reduction is based on virtualization: A system is partitioned into several virtual machines (VMs), each containing a separate subsystem. For a typical mobile device, one VM would host a rich OS supporting the user-facing software, another a real-time OS (RTOS) supporting the wireless communication software, and a third VM would contain and protect security- or safety-critical software (possibly running directly on the virtual hardware without any OS). In such a setup, the TCB of the critical subsystem would consist of little more than the hypervisor that implements the virtual machines. The security and safety of the system would then hinge on the trustworthiness of a relatively modest-sized hypervisor. By adapting the hypervisor to the specific requirements of mobile systems (and the similar requirements of other consumer electronic devices), with careful design and implementation of a minimum set of primitives, a total TCB size of approximately 10 kLoC is achievable.
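The TCB reduction that virtualization buys can be made concrete with some rough numbers. The line counts below are hypothetical, chosen only to reflect the orders of magnitude discussed in this article (a roughly 10 kLoC hypervisor versus a multi-million-line rich OS):

```python
# Hypothetical component sizes in kLoC, for illustration only.
components_kloc = {
    "hypervisor": 10,        # small, carefully engineered privileged core
    "rich_os": 12_000,       # a Linux-class, function-rich OS
    "critical_app": 5,       # the security- or safety-critical subsystem
}

# Without virtualization, the critical subsystem runs on the rich OS,
# so the entire OS sits inside its trusted computing base:
tcb_monolithic = components_kloc["rich_os"] + components_kloc["critical_app"]

# With virtualization, the critical subsystem runs in its own VM
# (possibly with no OS at all), so its TCB is little more than
# the hypervisor itself:
tcb_virtualized = components_kloc["hypervisor"] + components_kloc["critical_app"]

print(tcb_monolithic)    # 12005 kLoC
print(tcb_virtualized)   # 15 kLoC
```

Shrinking the TCB by roughly three orders of magnitude is what makes it feasible to engineer, and ultimately to formally verify, the remaining privileged code.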

Good software-engineering techniques can keep the number of bugs in such a small code base low, and one can hope that none of them constitutes a vulnerability. However, with human lives and national security at stake, we should take a more pessimistic approach than hoping for the best, and assume that there are critical bugs unless proven otherwise.