Last year, the U.S. National Highway Traffic Safety Administration (NHTSA) released an enormous report on its investigation into unintended acceleration in Toyota cars. It all seems very familiar, if you remember the Audi 5000. New electronic throttle control reaches the market. Reports of failure emerge from a few users. The vendor denies any problem. A few engineers quietly report having reproduced the problem. But an intensive publicly funded investigation finds nothing.

What makes this report particularly interesting is that the NHTSA called in an evaluation team from the space agency, NASA, to do the heavy lifting. And that team included a software evaluation group. While the hardware folks were shaking, baking, and irradiating cars and car parts, the software team had at the Engine Control Module code for the four-cylinder 2005 Camry: all 280K lines of ANSI C. The team's report, included as part of the larger "Report on Unintended Acceleration in Toyota Vehicles," could be a case study.

The NASA team applied static source-code analysis, formal logic model checking, and algorithm analysis through simulation. The report states, "The team's experience is that there is no single analysis technique today that can reliably intercept all vulnerabilities, but that it is strongly recommended to deploy a range of different leading tools."

For code analysis, the team used Coverity, CodeSonar, and Bell Labs' Uno to identify "common coding defects and suspicious coding patterns." They also used CodeSonar to compare Toyota's code against a Jet Propulsion Lab coding standard.

For model checking, the team used open-source Spin and its companion tool Swarm. Here, the tale gets more interesting. To use a formal model checker, you first have to write formal models. The team decided to build models only for those software modules they believed could be culprits, so the formal analysis depended upon human judgment of possible fault modes.

The algorithm analysis started with, once again, building models, this time in Matlab. That process was undertaken by reading Toyota documentation and talking with Toyota engineers. It then progressed to analyzing the source code, and finally testing the models against actual Camrys. Once the NASA team was satisfied with the models, they explored failure scenarios in Simulink and checked delays with AbsInt aiT.

Some conclusions suggest themselves. First, there are no silver bullets: Effective debugging means using everything you've got. Second, even when it's grounded in exhaustive and formal techniques, an evaluation is circumscribed by the evaluators' beliefs about the possible behavior of the system. Third, there is no certainty. Despite Toyota's great care in developing their code, NASA's analysis found significant errors, including serious underestimates of delays in the multiprocessing system. But the investigation could not link those errors to any proposed mechanism for unintended acceleration. Contrary to what you probably read in the papers, the NASA Executive Summary stated, "Because proof that the ETCS-I [that is, the Electronic Throttle Control -- Ed.] caused the reported UAs [unintended accelerations] was not found does not mean it could not occur."

Ron Wilson, our guest editor, is editorial director of Embedded.com, EDN, and the Designlines at UBM Electronics. This is a slightly revised version of an article that first appeared in EETimes.

Note: Andrew Binstock's editorial on writing multithreaded programs from small, discrete tasks will appear next week, as promised in "If Small Tasks Are the New Program Unit for a Multicore World, When Will We Assemble Programs From Them?"