By Andrew Mayo, 1E

"I’m fixing a hole where the rain gets in….” - The Beatles

Related: Important Lesson from Equifax: You’re not Secure Until You’re Sure

I recall owning a property where the roof was tiled with metal tiles. It leaked. The previous owners had tried valiantly to repair it over time. Each pinhole – and there were many – had been carefully sealed with silicone sealant. As new failures developed, the owners had placed an elaborate walkway of planks around the roof space to make it easy to get around and plug further leaks.

It was clear after a few trips up there that we were losing the battle. The entire roof had to be replaced. Sometimes you have to know when to quit patching. As new owners, we hadn’t invested time in the repairs and could see things from a fresh perspective. The previous owners had, perhaps understandably, become focused on ‘just one more leak’ and couldn’t see it was time to move on.

A recent article in The Register attacked Microsoft for “silently fixing” security holes in Windows 10 but “dumping Win 7 and 8 out in the cold.”

The author suggests that, at best, it can be months before security fixes are back-ported from Windows 10 to earlier releases of Windows, leaving customers potentially vulnerable if an exploit is developed for a known vulnerability.

This criticism, though, doesn’t take into account the way modern mission-critical software is developed and tested. Microsoft has stated that Windows 10 is to be the last version of Windows. As such, it has started on a long program of much-needed structural overhauls of each of the complex subsystems that interact with each other within Windows.

These overhauls focus, obviously, on security, along with resilience, performance, resource consumption, modularity, and, in many cases, compliance with external standards (such as those promulgated by W3C for web-based technologies such as HTML).

During a subsystem overhaul it wouldn’t be unusual for quite significant chunks of code to be rewritten. When security is a consideration, it’s normal to rip out old, unsafe APIs or library functions (such as the notorious sprintf C function, which does not check for buffer overflow conditions), replacing them with newer, safer alternatives which, by design, mitigate a host of exploits.

At a higher level, entire algorithms and data structures might be intrinsically unsound. It’s like finding dry rot in a wall. Sure, you can chop out the damaged wood and plunk some filler in there. But there comes a point where the integrity of the whole structure is compromised. At that point you have to pull the whole framing out and rebuild.

But software is far more complex than a timber-framed wall. Each subsystem must pass thousands, or in many cases tens of thousands, of tests. Modern software design focuses on automating these test suites so that changes to the software don’t introduce so-called ‘regressions’: bugs inadvertently introduced into pre-existing functionality by an otherwise unrelated change.

However, there’s no free lunch. Creating the tests adds significantly to development costs and also constrains the nature of changes; tests that rely on subsystem internals might need to be completely rewritten if those internals change.

At the end of this arduous process, we – hopefully – have a new subsystem that is faster, more secure, and possibly also refactored to ‘separate concerns’, ensuring that future changes will be easier, quicker and safer to make, as well as lessening the possibility that a future change will compromise security.

Now let’s consider security vulnerabilities and back-porting. If a subsystem is completely rewritten, we may well be able to show that a whole class of vulnerabilities has been mitigated. Hundreds of potential attacks may no longer be possible.

Meanwhile, the old code, still present in previous versions of the operating system, is vulnerable.

Can we ‘back-port’ the security fixes? Well, quite probably not. If we totally rewrote the subsystem (and possibly at the same time changed other subsystems with which it communicates), carrying those changes back could be very difficult. They might also introduce regressions into a previously stable operating system. For previous versions, the changes we make – or patches – are, by design, intended to be conservative.

We’re up on the roof replacing a broken tile, not replacing the whole roof.

We would then have to test, for each known vulnerability, that the patched code was no longer vulnerable. Our new subsystem might well be invulnerable by design; the old one may not be, since it still interacts with big chunks of code that have NOT been changed. How could you be sure you really had ‘carried back the fixes’?

It would be like taking your current car, say three years old, and expecting the manufacturer to retro-fit features from this year’s model next time you get it serviced.

So I think it’s unfair to criticize Microsoft over this issue. Instead, we should give it credit for the huge effort it is making with Windows 10, embarking on a massive structural re-engineering that has already significantly improved many areas of the operating system. Indeed, rather than worry about outmoded operating systems, we should be strategizing to stay current in a post-Windows 10 environment, with its exacting new release cadence.

We need to look at our legacy operating systems and plan now to upgrade, rather than complain that vulnerabilities aren’t patched. It’s like my leaky roof. At some point you just have to know when to cut your losses and “rip and replace”. Windows 10 is that time, so let’s come in from the rain and stop fixing holes.

About the Author

Andrew Mayo has been involved in IT, in both software and hardware roles, for enough years to have worked through the tail-end of the punched card and paper tape era, and the subsequent invention of the PC. Currently he’s working on the evolution of 1E’s Tachyon solution, looking in depth at both attack and defense strategies and the evolution of the threat landscape. Previously Team Lead for the AppClarity project, he has worked in various verticals including healthcare, finance and ERP. When he’s not wrangling with databases, he enjoys playing piano and hiking, especially when the destination is one of England’s picturesque pubs.