
How long do you put off restarting your computer, phone, or tablet for the sake of a security update or software patch? All too often, it's far too long.

Updating your devices is a tedious, time-consuming, and productivity-killing process. But in the vast majority of cases these updates fix bugs in operating systems, browsers, productivity suites, and mobile platforms, mitigating some of the worst vulnerabilities discovered in recent years.

Patches are good for you. According to Homeland Security's cyber-emergency unit, US-CERT, as many as 85 percent of all targeted attacks can be prevented by applying a security patch.

The problem is that far too many people have experienced a patch that went disastrously wrong. That's not just a short-term problem for the device owner; it creates a lasting trust issue with software giants and device makers.

In the past two years, there have been at least half-a-dozen major software screw-ups that have left users clamoring for answers, and -- in some cases -- with devices that stopped working altogether. That's a problem, not least because any security expert will tell you that antivirus software and strong passwords are good, but nothing prevents hacks and attacks better than up-to-date software.

Take Apple, Google, and Microsoft. In the past year or so, all three have released "botched" updates that either failed to fix the problem or caused new issues.

Apple's iOS 8.0.1 update was meant to fix initial problems with the eighth major version of Apple's mobile operating system, but killed cell service on affected phones -- leaving millions stranded until a fix was issued a day later. Google had to patch the so-called Stagefright flaw, which affected every Android device, a second time after the first fix failed to do the job. Meanwhile, Microsoft has seen more patch recalls in the past two years than in the past decade.

And there have been numerous other botched updates, affecting millions of users.

The trust that we put in tech companies to deliver patches free from flaws or issues has been dented. And we forget that many companies have unfettered update "backdoors" into their products to deliver patches and fixes to the masses. Get it wrong, and that trust can be wiped out entirely.

The other side to it isn't any less stressful for the companies offering the patches.

For software companies, patching a vulnerability is rarely as simple as fixing a line of code. The bigger the software, the more complicated it gets. And when something goes wrong, it's never just one person whose complaints they hear.

"Developers need to test to see if the changes they made had any impact on their ecosystem," said Dustin Childs, senior security content developer at Hewlett Packard Enterprise. "Any change in code could potentially bring a negative impact to surrounding code. Testing must be done to ensure patches do not cause problems to existing code developed by the vendor, or to third-party software that relies on that code."

Childs, a veteran of the security industry, was a senior technical evangelist and security program manager at Microsoft prior to his work at Hewlett Packard Enterprise. He knows all too well how important security patches are to the wider world, as one mistake can lead to a massive clean-up operation across hundreds of millions of PCs.

"Development and testing must be done with the understanding that the clock is ticking," said Childs. "Vendors cannot take too long in any section of the process. Once a vulnerability is found, the likelihood of someone else finding and exploiting it always increases."

It's why companies, more often than not, work to private disclosure timelines, which typically give them about three months from the point of private disclosure to develop and test a security patch. Keeping the disclosure private stops hackers from learning about the flaw and posing an immediate risk to customers.

But that's not how the general population sees it. One bad patch can leave a sour taste down the line. That can lead to users not updating as soon as they should be, which creates software and security fragmentation. Simply put, users aren't patching as often as they should be, which raises the risk of attack for not only them, but others as well.

More often than not, it's a balance between acceptable risk and trust in the software maker.

Mark Nunnikhoven, vice-president of cloud research at security firm Trend Micro, said Microsoft, Google, and Apple have a "fantastic track record" of successfully issuing patches. But the nature of the beast is that when you're dealing with almost a billion combined users, any slip-up is going to be widely noticed, which can "skew the user's perception of that company and their ability to deliver a successful patch."

Microsoft, for example, issued 135 security bulletins this year alone, patching thousands of separate vulnerabilities. All it takes is one or two of those patches to fail or break something -- which has happened -- to amount to a roughly 1 percent failure rate.

"When you put that into the larger context of the data those systems are protecting and the high volumes of attacks we see online every day, most people would consider that an acceptable risk," said Nunnikhoven.

But to the affected user -- whether an ordinary home PC user or an enterprise administrator overseeing hundreds or thousands of machines -- that risk is not acceptable.

The big question is: will this patch protect my computer, my network, my data, and my security and privacy, or will it break something functional in the process? In the rare cases when things do go wrong, that's exactly the bitter pill users have to swallow.

"For the user, you're evaluating the value of the fix versus the impact to the service you provide. Will this fix my problem? Will it cause other problems? In the case of a security patch, this gets a little trickier because now there's a new risk in play -- potential attackers," said Nunnikhoven.

At the heart of any software update is a trust relationship between the user and the company. When things go wrong, it can affect thousands or millions of users. Quietly pulling patches and ignoring the issue can undermine users' trust, which can damage the entire patching process going forward.

The key, according to Childs, is transparency and clarity, rather than obfuscation by design. And when things go wrong, companies should speak up and offer workarounds.

"Customers don't always expect vendors to be 100 percent perfect 100 percent of the time, or at least they shouldn't," said Childs. "However, if vendors are upfront and honest about the situation and provide actionable guidance, it goes a long way to reestablishing the trust that has been lost over the years."