I'm compiling a little list of architectural sins of the founders (between 1945 and 1990, more or less) that have bequeathed us the current mess. They're fundamental design errors in our computing architectures; their emergent side-effects have permitted the current wave of computer crime to happen ...

According to one estimate pushed by the FBI in 2006, computer crime costs US businesses $67 billion a year. And identity fraud in the US allegedly hit $52.6Bn in 2004.

Even allowing for self-serving reporting (the FBI would obviously find it useful to inflate the threat of crime, if only to justify their budget requests), that's a lot of money being pumped down a rat-hole. Extrapolate it worldwide and the figures are horrendous — probably nearer to $300Bn a year. To put it in perspective, it's like the combined revenue (not profits; gross turnover) of Intel, Microsoft, Apple, and IBM — and probably a few left-overs like HP and Dell — being lost due to deliberate criminal activity.

1) The Von Neumann architecture triumphed over Harvard Architecture in the design of computer systems in the late 1940s/early 1950s.

In the Von Neumann architecture, data and executable code are stored in the same contiguous memory in a computer; in Harvard Architecture machines, data and code have separate, disjoint areas of memory and never the twain shall meet. Von Neumann architectures are simpler and cheaper, hence were more popular for the first forty or fifty years of the computing revolution. They're also more flexible: allowing data and executable code to share the same address space permits self-modifying code and the execution of data as code. Sometimes these are useful, but they're horrible security holes insofar as they permit code injection attacks. There have been moves by the likes of Intel and AMD, in recent architecture iterations, to let chunks of memory be locked to one function or the other (the NX/XD bit), thus reducing the risk of code injection attacks; but it's too little, and much too late.

2) String handling in C uses null-terminated strings rather than pointer-delimited strings. A null character (ASCII 0) denotes the end of a string (a block of adjacent memory cells containing one character of data each) in the C programming language's memory management (cough, choke) system. What if you want to write a string containing ASCII 0, or read or write beyond a null? C will let you. (C will not only let you shoot yourself in the foot, it will hand you a new magazine when you run out of bullets.) Overwriting the end of a string or array with some code and then tricking an application into moving its execution pointer to that code is one of the classic ways of tricking a Von Neumann architecture into doing something naughty.

In contrast, we've known for many decades that if you want safe string handling, you use an array — and stick a pointer to the end of the array in the first word or so of the array. By enforcing bounds checking, we can make it much harder to scribble over restricted chunks of memory.

Why does C use null-terminated strings? Because ASCII NUL is a single byte, and a pointer needs to be at least two bytes (16 bits) to be any use. (Unless you want short strings, limited to 255 bytes by a one-byte length prefix.) Each string in C was thus a byte shorter than a pointer-delimited string, saving, ooh, hundreds or thousands of bytes of memory on those early 1970s UNIX machines.

(To those who might carp that C isn't really used much any more, I should reply that (a) yes it is, and (b) what do you think the first C++ compilers translated C++ into, before feeding it back to a C compiler to produce object code?)

3) TCP/IP lacks encryption at the IP packet level. Thank the NSA in the early 1980s for this particular clanger: our networking is fundamentally insecure, and slapping encryption on high-level protocols (e.g. SSL) doesn't address the underlying problem: if you are serious about comsec, you do not allow listeners to promiscuously log all your traffic and work at cracking it at their leisure. On the other hand, if you're the NSA, you don't want the handful of scientists and engineers using the NSF's backbone to hide things from you. And that's all TCP/IP was seen as, back in the early 80s.

If we had proper authentication and/or encryption of packets, distributed denial-of-service attacks would be a lot harder, if not impossible.

DNS lacked authentication until stunningly recently. (This is a sub-category of (3) above, but shouldn't be underestimated.)

4) The World Wide Web. Which was designed by and for academics working in a research environment who needed to share data, not by and for banks who wanted to enable their customers to pay off their credit card bills at 3 in the morning from an airport departure lounge. (This is a whole 'nother rant, but let's just say that embedding JavaScript within HTML is another instance of the same code/data exploit-inviting security failure as the Von Neumann/Harvard Architecture model. And if you don't use a web browser with scripting disabled for all untrusted sites, you are some random black hat hacker's bitch.)

5) User education, or the lack of it. (Clutches head.) I have seen a computer that is probably safe for most users; it's called an iPad, and it's the digital equivalent of a fascist police state: if you try to do anything dodgy, you'll find that it's either impossible or very difficult. On the other hand? It's rather difficult to do anything dodgy. There aren't, as yet, any viable malware species in the wild that target the curated one-app-store-to-rule-them-all world of Apple. (Jailbroken iOS devices are vulnerable, but that's the jailbreaker's responsibility. Do not point gun at foot unless you have personally ensured that it isn't loaded and you're wearing a bulletproof boot.)

In the meantime, the state of user interfaces is such that even folks with degrees in computer science often find them perplexing, infuriating, and misleading. It's hardly surprising that the digital illiterati have problems — but a few years of reading RISKS Digest should drive even the most Panglossian optimist into a bleak cynicism about the ability of human beings to chew gum and operate Turing Machines at the same time.

6) Microsoft.

Sorry, let me rephrase that: Bloody Microsoft.

Specifically, Microsoft started out on stand-alone microcomputers with a single user. They took a very long time to grasp multitasking, and much longer to grasp internetworking, and even longer to get serious about security. In fact, they got serious about memory protection criminally late — in the early to mid 2000s, a decade after the cat was out of the bag. Meanwhile, in their eagerness to embrace and extend existing protocols for networking, they weren't paying attention to the security implications (because security wasn't an obvious focus for their commercial activities until relatively recently).

We have a multiculture of software — even Microsoft's OSs aren't a monoculture any more — but there are many tens or hundreds of millions of machines running pre-Vista releases of Windows. Vista was a performance dog, but it was at least their first release to take security seriously. The old, bad, pre-security Microsoft OSs are still out there, though, and still prone to catching any passing worm or virus or spyware. And Microsoft, by dropping security support for older OSs, aren't helping matters.

Anyway, I'm now open for suggestions as to other structural problems that have brought us to the current sorry state of networking security. Not specific problems — I don't want to hear about individual viruses or worms or companies — but architectural flaws that have contributed to the current mess.

Where did we go wrong?