Safety and Security from a Language Engineering Perspective

A brief heads-up on today’s WJAX talk

When I started my writing here on Medium, I planned to write original stuff (as I did in all my posts so far), but also to “reuse” material I have been producing for other media. This is the first post of this second kind; it is shorter because it mostly references other material, slides in this case. I gave a talk at the WJAX conference today with Bernd Kolb called Safer Software Through Better Abstractions and Static Analysis.

A few years ago, Gary McGraw published a book called Software Security, in which he looks at flaws in the implementation of a software system that can be exploited maliciously. Traditionally, security is seen more as a process issue (education/awareness, reviews, pen testing) or a matter of architecture (authentication, encryption, DMZs, runtime monitoring). And of course, both of those are very relevant. But bugs in the implementation are also a problem, as we know from loads of security exploits in SSL libraries (Heartbleed, goto fail) or blockchain contracts (the DAO and several more recent ones). Many other examples exist. I liked Gary’s book, and in the talk we expand on this perspective by looking at language extensions that prevent security problems constructively, and at other extensions that make analysis or review simpler. Of course, many of the ideas are “the usual language engineering stuff”, but applying them to security is relevant, I think.

Check out the slides here; I add a few more comments below.

Looks like one cannot use images as links on Medium…

Here are some of the core ideas:

- Security, Safety and Robustness cannot be separated. In some sense, security issues are maliciously exploited robustness issues, while safety issues are just “unfortunate” ones. You also hear sentences like “this exploit constitutes a massive safety risk”, which further illustrates the relationship. Yes, there are some security-specific risks as well (e.g., making sure key material cannot be read from a memory image). But there is a lot of overlap.

- By using better languages, many low-level errors are avoided. Things like mbeddr’s state machines, a first-class extension for goto fail-style error handling, or wiping the stack when leaving a scope (for key material) can be supported by the language directly.

- Advanced type systems, such as those that support option types (to avoid null dereferencing), number ranges (to avoid overflows) or tagging (to track tainted data), are relatively easy to build and are a quick win.

- Program verification techniques, such as SMT solving or model checking, can find non-trivial problems. Sure, trying to use them for the high-hanging fruit is very non-trivial. But the low-hanging fruit is easier to reach and should be harvested.

- High-quality tests are crucial. Measure coverage, generate test cases, use mutation testing. The tools exist. Learn them!

- Better abstractions and notations, such as tables, state machines, or mathematical formulas, are much easier to read than “code”. They make review easier, and thus make the code more trustworthy.

- Make programs simulatable by stakeholders, because, again, this helps program understanding and thus reduces the likelihood of unintended behaviors.

Just to repeat: we are not suggesting that “classical” security techniques like penetration testing are not needed or not useful. But we do think the stuff in these slides is very relevant as well.