You probably think, like I do, that anything can be fixed in software, but it appears that this latest class of attacks needs more than software to fix. Google's researchers are convinced that only hardware changes can do the job.

If you have been following the ongoing slow train wreck that is the attempt to fix the side-channel attacks discovered last year, then you will not be surprised by Google's conclusion. The problem is that modern processors run multiple programs on the same hardware without complete isolation between them. In fact, the lack of isolation is intentional because it can speed things up. For example, if one thread has data in the cache that another thread needs, then the data will be fetched more quickly than if the new thread had inherited a "cold" cache. These architectural features are a marvel, but they don't take into account how evil the world can be. A warm cache can reveal, just by timing how long it takes to retrieve data, what another thread was working with. It appears, after some very practical demonstrations, that this sort of timing attack can leak large amounts of data fairly reliably.

The details of all of these attacks are complicated, but the general idea is simple - shared state of any kind can leak data if we are inventive enough. At first sight these defects seem localized, some sort of error in the design of particular processors, but the researchers conclude that we have all fallen into the same trap:
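The simplest concrete instance of the trap is the variant-1 "bounds check bypass" gadget. The sketch below follows the shape published with the original Spectre work (the names array1, array2 and the 512-byte stride come from that proof of concept); it does nothing wrong architecturally, which is precisely the point:

```c
#include <stddef.h>
#include <stdint.h>

/* The classic Spectre variant-1 ("bounds check bypass") gadget shape.
 * Architecturally this code is safe: an out-of-range x never reads
 * array1. Speculatively, the CPU may predict the branch as taken,
 * read array1[x] out of bounds, and use the secret byte to index
 * array2 - leaving a secret-dependent cache line warm that an
 * attacker can later detect by timing. */
size_t array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 512];   /* one cache line (stride 512) per byte value */

uint8_t victim_function(size_t x) {
    if (x < array1_size) {               /* bounds check the CPU may speculate past */
        return array2[array1[x] * 512];  /* secret-dependent load: the side channel */
    }
    return 0;
}
```

Nothing here violates the language's rules, which is why static and dynamic checks cannot catch it: the leak happens in microarchitectural state the language cannot see.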

"As a result of our work on Spectre, we now know that information leaks may affect all processors that perform speculation, regardless of instruction set architecture, manufacturer, clock speed, virtualization, or timer resolution."

That is, the fault is part of the common microarchitecture that nearly all processors have adopted in an effort to become more efficient.

"Vulnerabilities from speculative execution are not processor bugs but are more properly considered fundamental design flaws, since they do not arise from errata. Troublingly, these fundamental design flaws were overlooked by top minds for decades."

The new paper from Google researchers comes to the conclusion that a universal read gadget exists on most of today's CPUs. What is more, the common fix of reducing the resolution of the timer available to a language doesn't help, as there are "amplification" procedures which can increase the time differences used in the attack.

To prove that the attacks are practical, Google's research team says:

"...we developed proofs of concept in C++, JavaScript, and WebAssembly for all the reported vulnerabilities. We were able to leak over 1KB/s from variant 1 gadgets in C++ using rdtsc with 99.99% accuracy and over 10B/s from JavaScript using a low resolution timer."

They also tried to implement defences against the attacks and came to the conclusion that none of them was perfect. Using memory barriers slowed things down by just under a factor of 3. The commonly used retpoline approach slowed things down by a factor of 1.5. In all cases the slowdown was unacceptable for most uses. More importantly, the vulnerability they call "variant 4" could not be mitigated in software at all.
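To see where the roughly 3x barrier cost comes from, here is a sketch of where a speculation barrier sits in the variant-1 gadget. On x86 the instruction used in practice is lfence; the non-x86 fallback below is only a compiler barrier, included so the sketch stays portable, and a real mitigation on other architectures would use their own instruction (e.g. CSDB on ARM):

```c
#include <stddef.h>
#include <stdint.h>

size_t array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 512];

/* Barrier-based mitigation: a speculation barrier between the bounds
 * check and the dependent load, so the load cannot issue until the
 * branch has actually resolved. Paying this stall on every guarded
 * access is what makes the approach nearly 3x slower. */
static inline void speculation_barrier(void) {
#if defined(__x86_64__) || defined(__i386__)
    __asm__ __volatile__("lfence" ::: "memory");
#else
    /* Compiler barrier only - NOT a speculation barrier on other
     * architectures; shown here just to keep the sketch compiling. */
    __asm__ __volatile__("" ::: "memory");
#endif
}

uint8_t victim_function(size_t x) {
    if (x < array1_size) {
        speculation_barrier();           /* blocks the speculative load */
        return array2[array1[x] * 512];
    }
    return 0;
}
```

The architectural behaviour is unchanged; only the speculative window is closed, which is why the fix costs time on every execution whether or not an attack is under way.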

The conclusion to the paper is also worrying:

"The community has assumed for decades that programming language security enforced with static and dynamic checks could guarantee confidentiality between computations in the same address space. Our work has discovered there are numerous vulnerabilities in today’s languages that when run on today’s CPUs allow construction of the universal read gadget, which completely destroys language-enforced confidentiality ...

Computer systems have become massively complex in pursuit of the seemingly number-one goal of performance. We’ve been extraordinarily successful at making them faster and more powerful, but also more complicated, facilitated by our many ways of creating abstractions. The tower of abstractions has allowed us to gain confidence in our designs through separate reasoning and verification, separating hardware from software, and introducing security boundaries. But we see again that our abstractions leak, side-channels exist outside of our models, and now, down deep in the hardware where we were not supposed to see, there are vulnerabilities in the very chips we deployed the world over. Our models, our mental models, are wrong; we have been trading security for performance and complexity all along and didn’t know it. It is now a painful irony that today, defense requires even more complexity with software mitigations, most of which we know to be incomplete. And complexity makes these three open problems all that much harder. Spectre is perhaps, too appropriately named, as it seems destined to haunt us for a long time."

It seems we have to live with the simple fact that all of our machines are susceptible to this sort of attack and, if a rogue process runs on the same machine alongside our process, it can read all of the memory contents. This is, of course, particularly worrying for cloud computing where it is common for virtual machines belonging to different companies to run on the same hardware.

Until the hardware manufacturers find a solution, it seems that this is the century of completely insecure computing.

More Information

Spectre is here to stay: An analysis of side-channels and speculative execution

Ross McIlroy, Jaroslav Sevcik, Tobias Tebbi, Ben L. Titzer and Toon Verwaest, all at Google

Related Articles

How Spectre Works

How Meltdown Works

Rowhammer - Changing Memory Without Accessing It

Poodle Is A Very Different Sort Of Security Breach

ShellShock - Yet Another Code Injection Vulnerability

Heartbleed - The Programmer's View
