In light of ever-evolving needs, the safety-critical community has spawned a variety of software certification standards. These standards typically define multiple levels of compliance based on the criticality of the application. One of these standards is DO-178B[1] for commercial avionics (Figure 1), where compliance with the highest level means demonstrating traceability from requirements to design to code, with extensive coverage analysis to verify that all requirements have been correctly implemented and that all code can be traced back to requirements.

Figure 1: DO-178B Life-cycle Processes and Safety Levels

Key to a system’s safety certification is that the programming language meets these requirements:

Reliability. The language should not contain “traps and pitfalls,” and it should provide features that promote early error detection (at compile time, if possible).

Analyzability. The language should have an unambiguous definition so that the effect of any program is predictable (thus, no features with unspecified semantics). Further, the language features should facilitate automated analysis techniques through which the developer can show that the program does what it is supposed to do and does not do what it shouldn’t.

Perhaps surprisingly, these requirements may conflict. Analyzability demands simplicity, whereas reliability sometimes calls for language features that bring semantic complexity. Thus, commonly used programming languages such as C, C++, Ada, and Java are too complex to be used in their entirety, but subsets can provide a viable and effective way to achieve reliability and analyzability in DO-178B certified applications.

Subsetting for safety

Defining and implementing subsets of a programming language is almost always a poor idea, because it defeats source code portability. A program that is written for one vendor’s compiler might not compile on another vendor’s compiler, since the two might implement different feature sets.

However, safety-critical systems present special circumstances. Languages that for various reasons might seem attractive candidates – for example C, C++, Ada, and Java – are all too large and complex to be used for code that must be certified to the highest levels of DO-178B. The problem with certification might be due to error-prone features (compromising reliability) or features with complex semantics (complicating analyzability). The only viable option for the developer is to restrict usage to a subset. However, the languages differ in how readily they lend themselves to subsetting, and in how suitable the resulting subsets are.

C-oriented technology

C and C++ are popular choices for many applications since compilers (and knowledgeable programmers) are widely available. However, C was not designed with the goal of supporting safety-critical applications: It has a variety of error-prone constructs – such as confusing type conversion rules and an absence of array index checking – that compromise reliability. In addition, loose type checking complicates analyzability. And C’s simplicity comes at the expense of useful features such as exception handling and concurrency.

C++

C++ can be regarded as a “better C.” It is based on C and shares C’s goal of efficient code generation but provides major extensions including full Object-Oriented Programming (OOP), generic templates, exception handling, and flexible modularization (namespaces). C++ was not designed with an explicit goal of supporting safety-critical programming, though, and shares many of C’s reliability issues.

MISRA C

The natural question is whether subsetting can address these C and C++ issues. Given the widespread usage of C and C++, it is not surprising that several efforts have been made with the goal of answering “yes.” For C, the best-known subset is MISRA C[2], which was designed to promote the safest possible use of C, to prevent or restrict C constructs with unpredictable behavior, and to encourage production of static checking tools that enforce compliance. The current version of MISRA C is based on the 1990 C standard. The next version of MISRA C will be based on the 1999 C standard.

On the positive side, MISRA C codifies “best practice” for C programming and has a large supply of tool vendors and service providers; it has become a de facto standard for safe C programming. On the other hand, since C was not initially designed to meet the reliability and analyzability requirements for safety-critical systems, retrofitting the language – even by subsetting as done by MISRA – is difficult. Further, some rules (such as for documentation) are not enforceable by an automated tool, and other rules are subject to interpretation on how they should be enforced. For example, the prohibition against “dangling references” (pointers to local variables that have been popped off the stack) can be enforced by a range of implementation techniques with varying generality.

MISRA C++

C++ presents a similar picture. There have been several proposed “safe C++” subsets, most recently MISRA C++[3]. The scope of MISRA C++ is broader than just safety-critical software; it is aimed at “production code in critical systems.” It comprises 228 rules, most of which are required but some of which are advisory or documentary, keyed to the sections of the 2003 C++ standard. MISRA C++ also includes an itemization of features that are not completely defined by the C++ standard, which serves as a valuable reference.

MISRA C++ has basically the same trade-offs as MISRA C. It codifies “best practice” for C++ programming and will likely attract a large pool of tool vendors. However, the rules are not always automatically enforceable, and some are subject to interpretation. Moreover, with its audience broader than just the safety-critical community, MISRA C++ does not address some of the issues that complicate certification (for example, the difficulties with inheritance and dynamic binding). It is (rightly) focused on reliability issues but comes up short for analyzability.

Ada-oriented technology

Ada was designed to support development of high-integrity systems and avoids the various reliability “traps and pitfalls” common to C. For example, in Ada an integer arithmetic overflow raises an exception instead of silently “wrapping around.” And Ada performs runtime checking on array indexing (raising an exception when the index is out of bounds).

A number of Ada features also help analyzability. For example, Ada allows the programmer to specify ranges on scalar data, allowing the compiler or supplemental tool (or a human reviewer) to use such information in data flow analysis. Nevertheless, Ada is still too large to be used in its entirety for a safety-critical program. Subsetting is necessary.

The Ada model for user-defined subsets, embodied in a feature known as the Restrictions pragma, is based on the “you only get what you pay for” philosophy. If a program specifies that it is not using a particular language feature, then the implementation should ensure that the executable does not contain code for that feature’s runtime support. This simplifies the certification effort. (Indeed, if a support library for an unused feature were included in an executable, then it would be regarded as dead code and would violate DO-178B.)

In addition to this à la carte style of subset definition, the SPARK language[4] and toolset offer another Ada-oriented technology for safety-critical applications. SPARK is basically an Ada subset augmented with a contract language (annotations expressed as Ada comments) that allows a program’s specification to be precisely expressed and verified. SPARK omits Ada features that are hard to analyze (such as pointers and exceptions). These Ada-based approaches present the opposite trade-offs from the MISRA C and MISRA C++ solutions: strong reliability and analyzability, but a smaller presence in the marketplace, with fewer vendors to choose from.

Java-oriented technology

Java is an interesting candidate for safety-critical systems. The language was designed with a focus on security and somewhat as a reaction to the perceived complexity of C++. It opts for reliability (runtime checks on array indices, absence of features that could result in dangling references), and some of its major design decisions help analyzability. For example, it has few of the implementation-dependent features found in Ada, C, and C++, at least among its sequential programming features.

On the other hand, safety certification requirements present some significant challenges to Java technology. The language’s concurrency semantics are rather loose, possibly resulting in priority inversions and missed deadlines for real-time programs. The essence of Java is OOP and dynamic allocation, both of which complicate program analysis. Java’s lexical and syntactic framework, as well as its arithmetic type model, is based on C, with the resulting traps and pitfalls such as silent wraparound on integer overflow. And there are the questions of whether the execution environment is a Java Virtual Machine (which would require certification) and how much of the class libraries would be included.

These issues are not necessarily insurmountable, and efforts under the auspices of Sun Microsystems’ Java Community Process have been underway for some time to address them. The Real-Time Specification for Java (RTSJ)[5] is an extension of the Java platform to make it amenable to real-time systems development. However, the RTSJ, as an extension of the Java platform and with complex features such as asynchronous transfer of control, is not appropriate for safety-critical systems. As with the other languages, though, subsetting seems a viable approach. To this end, an effort known as Safety-Critical Java Technology[6] is currently underway. It has defined three levels of features, reflecting the amount of generality/complexity that an application can tolerate; these levels correspond to particular features in the RTSJ and in the Java language and class libraries.

How to decide

No programming language stands out as a perfect choice for DO-178B safety-critical programming. The trade-off is generally between technical merit (how the language supports safety-critical subsets) and external factors. Properties of a language that might be advantages in other contexts – powerful dynamic features or extensive libraries – are disadvantages in developing safety-critical code. Here, reliability and analyzability “rule,” less is more, and subsetting is essential.

References

1. RTCA SC-167/EUROCAE WG-12. RTCA/DO-178B – Software Considerations in Airborne Systems and Equipment Certification, December 1992

2. MISRA-C: 2004. Guidelines for the use of the C language in critical systems. www.misra-c.com

3. MISRA-C++: 2008. Guidelines for the use of the C++ language in critical systems. www.misra-cpp.org

4. J. Barnes, High Integrity Software – The SPARK Approach to Safety and Security, Addison-Wesley, 2003

5. JSR-1: jcp.org/en/jsr/detail?id=1

6. JSR-302: jcp.org/en/jsr/detail?id=302

Ben Brosgol is a member of the senior technical staff at AdaCore, where he specializes in language technology for safety- and security-critical systems. He was involved with the design of Ada 83 and Ada 95 and is a member of the Expert Groups for real-time and safety-critical Java. He holds a PhD in Applied Mathematics from Harvard University. He can be reached at brosgol@adacore.com.