Safety-critical realtime with Linux


Doing realtime processing with a general-purpose operating system like Linux can be a challenge by itself, but safety-critical realtime processing ups the ante considerably. During a session at Open Source Summit North America, Wolfgang Mauerer discussed the difficulties involved in this kind of work and what Linux has to offer.

Realtime processing, as many have said, is not synonymous with "real fast". It is, instead, focused on deterministic response time and repeatable results. Getting there involves quantifying the worst-case scenario and being prepared to handle it — a 99% success rate is not good enough. The emphasis on worst-case performance is the core difference from performance-oriented computing, which uses caches, lookahead algorithms, pipelines, and more to optimize the average case.
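The gap between average-case and worst-case behavior can be made concrete with a toy calculation (the latency figures below are invented purely for illustration): a task can meet a deadline in 99.9% of runs while still being unusable for 100% realtime work.

```python
import statistics

# Hypothetical response-time samples (microseconds) for a task with a
# 100 µs deadline: 1000 runs, one of which hits a 250 µs outlier.
samples_us = [40] * 990 + [60] * 9 + [250]

deadline_us = 100
mean_us = statistics.mean(samples_us)                      # average case
p99_us = sorted(samples_us)[int(len(samples_us) * 0.99) - 1]  # 99th percentile
worst_us = max(samples_us)                                 # worst case

hit_rate = sum(s <= deadline_us for s in samples_us) / len(samples_us)
print(f"mean {mean_us:.1f} µs, p99 {p99_us} µs, worst {worst_us} µs")
print(f"fraction of runs meeting deadline: {hit_rate:.3f}")
print(f"worst case meets deadline: {worst_us <= deadline_us}")
```

The mean and the 99th percentile both sit comfortably under the deadline, yet the single outlier is all that matters for the 100% realtime case.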

Mauerer divided realtime processing into three sub-cases. "Soft realtime" is concerned with subjective deadlines, and is used in situations where "nobody dies if you miss the deadline", though the deadline should be hit most of the time; media rendering is an example of this type of work. "95% realtime" applies when the deadline must be hit most of the time, but an occasional miss can be tolerated. Data acquisition and stock trading are examples; a missed deadline might mean a missed trade, but life as a whole goes on. The 100% realtime scenario, instead, applies to areas like industrial automation and aviation; if a deadline is missed, bad things happen.

Getting to 100% realtime performance requires quite a bit of work. To the extent possible, the worst-case execution time (WCET) of any task must be determined; statistical testing is then used to validate that calculation. The best approach is formal verification, in which the response time is mathematically proved, but that is only feasible for the smallest applications and operating systems. Formal verification has been performed for the seL4 microkernel, but that kernel is only about 10,000 lines of code; it is simply not possible for a kernel the size of Linux.

100% realtime performance is hard enough to achieve, but "safety-critical" adds another dimension of reliability requirements. You do not, he said, want to see a segmentation fault when you hit the brakes. Safety-critical computing is not the same as realtime, but the two tend to go together. There is a long list of standards covering various aspects of safety-critical computing; they all come together under the IEC 61508 umbrella standard — "10,000 pages of bureaucratic poetry" according to Mauerer.

Compliance with those standards is one of three routes to safety. The second is called "proven in use", which is essentially a way of saying that a system has been used for twenty years and hasn't shown problems yet. It is scary how far the "proven in use" claim can be pushed, he said. The final approach is "compliant non-compliant development", which is how many of these systems are actually built.

Components of a realtime safety-critical system

There is a list of design patterns used to put together safety-critical realtime systems; Mauerer described some of them:

Run a traditional realtime operating system in a dedicated "side device" to handle the safety-critical work. There are a lot of these devices and systems available; they are simple and come pre-certified. The WCET of a given task can be calculated relatively easily. These systems can be hard to extend, though; they also suffer from vendor lock-in and unusual APIs.

Use a realtime-enhanced kernel — a solution that is common in the Linux community. With this approach, it's possible to keep and use existing Linux know-how and incorporate high-level technologies. The downside is that certification is difficult, the resulting systems are complex, and only statistical assurance is possible.

Run a "separation kernel" on hardware that enforces partitioning. This solution is common in the proprietary world. It offers a clean split between the realtime and non-realtime parts of the system, and there is a lot of certification experience with these systems. But there is strong coupling between the two parts at the hardware level, and there are vendor lock-in issues.

Run a co-kernel on the same hardware — like the separation kernel, but without the hardware partitioning. Once again, there is a clean division between the two parts of the system, and this solution is resource-efficient. But the necessary code is not in the mainline kernel, leading to maintenance difficulties, and there can be implicit couplings between the two kernels.

Use asymmetric multiprocessing; this solution is becoming more popular, he said. A multiprocessor system is partitioned, with some CPUs dedicated to realtime processing. Performance tends to be good, but there can be implicit coupling between the two parts. This solution is also relatively new and fast-moving, complicating maintenance.

One feature common to all of these approaches (excepting the second) is the use of some sort of partitioning to separate the realtime processing from everything else the system is doing. The exception, the realtime-enhanced kernel, instead achieves determinism by adding full preemption, deterministic timing behavior, and priority-inversion avoidance to Linux itself. All that was needed to accomplish this, he said, was the world's best programmers and about two decades of time.

If you want to create a system with such a kernel, you need to start by avoiding "stupid stuff". No page faults can be taken; memory must be locked into RAM. Inappropriate system calls — those involving networking, for example — must be avoided; access to block devices is also not allowed. If these rules are followed, a realtime Linux system can achieve maximum latencies of about 50µs on an x86 processor, or 150µs on a Raspberry Pi. This is far from perfect, but it is enough for most uses.
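The "avoid stupid stuff" rules translate into a small amount of setup code at process start. The sketch below (illustrative only; the function name and best-effort error handling are mine) calls mlockall() via ctypes to keep pages from being faulted in on the realtime path, and switches the process to the SCHED_FIFO realtime scheduling class — a real system written in C would do exactly the same two calls, and would treat any failure as fatal.

```python
import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

MCL_CURRENT, MCL_FUTURE = 1, 2  # flag values from <sys/mman.h> on Linux


def setup_realtime(priority=50):
    """Best-effort realtime setup; returns which steps succeeded."""
    results = {}
    # Lock all current and future pages into RAM so that no page faults
    # can occur on the realtime path.  Without privileges this may fail
    # against RLIMIT_MEMLOCK; a production system would abort here.
    results["mlockall"] = libc.mlockall(MCL_CURRENT | MCL_FUTURE) == 0
    # Move into the SCHED_FIFO realtime class (requires CAP_SYS_NICE or
    # a suitable RLIMIT_RTPRIO; again handled as best-effort here).
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        results["sched_fifo"] = True
    except PermissionError:
        results["sched_fifo"] = False
    return results


print(setup_realtime())
```

After this setup, the remaining rules — no networking or block-device system calls on the critical path — are a matter of application discipline rather than configuration.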

There are a lot of advantages to using a realtime Linux kernel. The patches are readily available, as is community support. Existing engineering knowledge can be reused. Realtime Linux offers multi-core scalability and the ability to run realtime code in user space. On the other hand, the resulting system is hard to certify. If much smaller latencies are required, one needs specialized deep knowledge — it's best to have Thomas Gleixner available. It is also easy to mix up the realtime and non-realtime parts of the system; this does happen in practice.

One could, instead, use Xenomai, which Mauerer described as providing "skins" that emulate traditional realtime operating-system APIs. It can run over Linux or in a co-kernel arrangement, but some patches need to be applied; the I-pipe layer is used to dispatch interrupts to the realtime core or to Linux as needed. Xenomai can achieve 10µs latencies on x86 systems, or 50µs on a Raspberry Pi. It offers a clean split between the parts of the system and is a lightweight solution. On the other hand, there are few developers working with Xenomai, and it tends to experience regressions when the upstream kernel changes.

Yet another approach, along the separation-kernel lines, is an ARM system with a programmable realtime unit (PRU). The PRU has its own simple RISC cores and its own memory, so there is no contention with the main CPU. The main core can run Linux and communicate with the PRU via the remoteproc interface. Such systems are highly deterministic, cleanly split out the realtime work, and are simple. But they are also hardware-specific and require more maintenance.

Getting to safety-critical

There are, he said, two common approaches to the creation of safety-critical Linux systems. The first is called SIL2LinuxMP; it works by partitioning the system's CPUs and running applications in containers. Dedicated CPUs can then be used to isolate the safety-critical work from the rest of the system. This work is aiming for certification at safety integrity level 2 (SIL2) only; SIL3 is considered to be too hard to reach with a Linux-based system.
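The dedicated-CPU idea can be sketched in user space with Linux's affinity interface (SIL2LinuxMP itself layers containers and further isolation on top of this; the helper below is a hypothetical illustration). A real deployment would pair this with boot parameters like isolcpus= or nohz_full= so that the kernel keeps ordinary work and timer ticks off the reserved CPU.

```python
import os


def pin_to_cpu(cpu: int) -> None:
    """Restrict the calling process to a single CPU — a userspace
    stand-in for the dedicated-CPU partitioning described above."""
    os.sched_setaffinity(0, {cpu})


# Pick the lowest-numbered CPU this process is currently allowed to use,
# then pin to it; afterward the scheduler may place us nowhere else.
target = min(os.sched_getaffinity(0))
pin_to_cpu(target)
print(f"pinned to CPU {target}: {os.sched_getaffinity(0)}")
```

Affinity alone only keeps the safety-critical task on its CPU; keeping everything else off that CPU is the job of the kernel-level isolation mentioned above.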

The alternative is the Jailhouse system. It is a hypervisor that uses hardware virtualization to partition the system. Realtime code can be run in one partition, while safety-critical code can run in another. There are a couple of Jailhouse-like systems, but they have some disadvantages. SafeG relies on the Arm TrustZone mechanism, and only supports two partitions. The Quest-V kernel [PDF] is purely a research system that is not suitable for real-world use. So Jailhouse is Mauerer's preferred approach.

The Jailhouse project is focused on simplicity, he said. It lets an ordinary Linux kernel bring up the system and deal with hardware initialization issues; the partitioning is done once things are up and running. The "regular" Linux system is then shunted over into a virtual machine that is under the hypervisor's control. It works, but there are still some issues to deal with. Jailhouse cannot split up memory-mapped I/O regions, for example, leading to implicit coupling between the system parts. There are other hardware resources that cannot be partitioned; he mentioned clocks as being particularly problematic. The "unsafe" part of the system might manage to turn off a clock needed by the safety-critical partition, for example.

Overall, though, he said that he is happy with the Jailhouse approach. It is able to achieve 15µs maximum latencies in most settings. The obvious conclusion of the talk was thus a recommendation that Jailhouse is the best approach for safety-critical systems at this time.

[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting your editor's travel to the Open Source Summit.]

