Technology regulation is notoriously tricky. It's always tempting to say that certain devices – say, firearms – should be illegal or restricted, and to license their manufacturers to ensure compliance. This works, up to a point – but only where the gizmos being regulated are difficult and/or expensive to make. Things that you can knock up in your sitting room or buy five for a pound in a blister pack by the newsagent's till are not good candidates: we don't try to regulate bleach and ammonia cleaners, even though mixing them together makes a potentially fatal poison gas. Indeed, where we try to regulate general-purpose objects as though they were special and tricky, all kinds of bad, weird stuff happens – as with Britain's failing war on knives, which makes it a fraught business for a young apprentice chef or model-maker to buy and transport the perfectly reasonable blade she legitimately needs, while still not making a dent in actual knife crime.

Historically, we've thought of computers as being expensive, complex things, and thus a good candidate for regulation. After all, there was a time not so long ago when you could say: "We will regulate a ballistics computer, but not a computer used in medical diagnostics or video games." But increasingly – and inevitably – all computing devices are composed of the same fundamental commodity components. Supercomputers are likely to be built out of stacks of the same sort of PC you've got on your desk; as are video game systems, hearing aids, personal stereos, automobile control systems, laser printers, phones and tablet PCs. Each of these has approximately the same guts, simply repackaged in a way that makes it better at one job than at others.

But their fundamental similarity means that a regulation you apply to one device will apply to every device that has a general-purpose computer at its heart.

Networks have also undergone this shift from specific to general: where we once maintained separate devices and infrastructure for different kinds of communications (voice, emergency services, faxes, television), now it's increasingly the case that all these uses are mere applications running on the internet.

This generalisation has lots going for it: not least that an investment in faster internet access or more robust computers for one application (say, medical telemetry or commercial video encoding) ends up benefiting all the other applications that thrive on general-purpose computers and networks. This has created an era of ubiquitous computation, in which people with all sorts of problems look for ways to make them more tractable through number-crunching – hence the enormous rush to bring it to fields as diverse as psychology and genomics, stock-trading and music recommendation.

The dark corollary of this is that anything that harms general-purpose computers and networks has the knock-on effect of undermining all the disciplines that thrive on them. For example, the entertainment industries have clamoured for years to extend copyright liability to "intermediaries" – that is, to make services such as YouTube liable if they allow a user to post infringing material. There's not really any way that Google (YouTube's owner) could police YouTube as thoroughly as litigants such as Viacom have demanded – more than 29 hours of video are uploaded to YouTube every minute, more than all the copyright experts in the world could hope to examine in detail. Practically speaking, the only way YouTube could rise to the standard sought by Viacom would be to offload the cost of confirming copyright status on to uploaders – either by requiring them to pay a fee to have their material vetted by YouTube's experts, or by requiring them to present some paid expert's assurance that the material won't infringe copyright. Both of these would "solve" the problem of YouTube's enormous volume of video by reducing it to the trickle represented by those with the money to pay for copyright clearance before their video goes live.
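The scale of the problem is easy to check with a back-of-envelope calculation. Taking the 29-hours-per-minute figure above, and assuming (purely for illustration) that a reviewer watches footage in real time for an eight-hour working day:

```python
# Back-of-envelope: how many full-time reviewers would it take just to
# *watch* YouTube's inflow once, in real time?
# Figure from the text: 29 hours of video uploaded per minute.
# Assumption (invented for illustration): an 8-hour reviewing shift.

UPLOAD_HOURS_PER_MINUTE = 29
MINUTES_PER_DAY = 60 * 24
SHIFT_HOURS = 8

video_hours_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
reviewers_needed = video_hours_per_day / SHIFT_HOURS

print(video_hours_per_day)       # 41760 hours of new video per day
print(round(reviewers_needed))   # 5220 reviewers, just to watch it once
```

That is over five thousand full-time staff merely to view each upload once – before any legal analysis of whether a given clip actually infringes, which is the hard part.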

But of course, only a tiny fraction of YouTube's volume has anything to do with the entertainment industries' products. The vast bulk is entirely outside that narrow interest – it's shaky cameraphone shots of police atrocities in the Middle East, exuberant student films, footage of technical conference presentations, and personal opinion and social messages produced by and for small groups of friends. The negative impact of any copyright enforcement that raises the bar to using YouTube will be felt by everyone who uses YouTube – not just pirates.

And if intermediary liability is increased – whether through lawsuits against YouTube, laws such as the Digital Economy Act, or secretive treaties such as the Anti-Counterfeiting Trade Agreement and the Trans-Pacific Partnership agreement – then every entity that hosts or carries content will be at risk. Universities will have to vet their students' papers for copyright infringement before allowing academic material to see the light of day. Should intermediary liability extend to monitoring and barring "bad" network traffic, every institution that provides network access, including private companies and schools, would have to invest in "traffic management" spyware that keeps users under surveillance, making this capacity standard in networking equipment. It would also cast automatic suspicion on those who use encryption and proxies to ensure the privacy and integrity of their communications, be they whistleblowers, revolutionaries in autocratic states, gay teens who don't want to out themselves to their parents or abused wives who don't want to reveal that they are researching shelters for battered women.

But if there's one thing that the filesharing wars have taught us, it's that this sort of prohibition is remarkably ineffective at stopping people who really want to get through. Even the staunchest defenders of anti-copying technology and network blocks will tell you that they are intended as "speed bumps" to discourage casual filesharers, without having any noticeable effect on people who are determined to get past them.

Despite the admitted failure of this model, calls to use it are on the increase. The traditional bogeymen of the information age – pornographers, pirates, mafias and terrorists – now head a list of pretexts for locking down the information society.

Today's business world is full of companies that have built their fortunes on the idea of designing a computer that can only run certain approved programs, from iPhones to PlayStations. Operating system vendors such as Apple and Microsoft are building in digital rights management – programs that run even when the user doesn't want them, that users can't inspect or remove – at the bottom layer of the software stack. Video game companies such as Blizzard bundle popular offerings such as World of Warcraft with spyware that can examine and tamper with every file on the player's computer, in the name of preventing cheaters.

Software-defined radios are already challenging the traditional regulatory model for radio emitters: formerly, a regulator would demand that a manufacturer limit a radio's emissions to certain bands and power levels. Now, software-based radios sold as, say, Wi-Fi cards can be reprogrammed to use bands reserved for emergency services or air-traffic control. No one's quite sure what to do about this – software radios are built out of commodity components, the sort of thing that would be nearly as expensive and impractical to regulate as a paper clip.

The growing realm of 3D printing will generate all sorts of new problems in search of solutions. From sex toys (banned in some southern US states) to kits to modify semi-automatic guns and render them automatic, new groups of would-be network/device cops will crop up every day. The list of problematic 3D objects is practically endless: anatomically correct Barbie torsos that can fit the standard head and limbs; keys for high-security locks; patented gizmos; even objects held sacred by indigenous people.

Around the corner are the bio-printers that can output organisms, pharmaceutical compounds, and biological material. The potential for these devices is enormous, but so are the problems, from patent infringement to bioweapons (inadvertent and deliberate).

The thing is, we'll be no more effective at building a bio-printer or a 3D printer or a software radio that can only execute certain programs than we were at building a PC that won't copy a copyrighted song. The flexibility of the universal computer and the universal network is fundamental and non-negotiable. Building a computer that can run every program is infinitely simpler than building a computer that can run any program except for naughty ones. Building a network that can carry every packet is infinitely simpler than building a network that carries all traffic except for the traffic you wish it wouldn't carry.
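The asymmetry is easy to illustrate. Any "run everything except the naughty programs" scheme amounts to recognising forbidden behaviour in advance, and even the crudest transformation of a program defeats a pattern-matching censor. A toy sketch – every name here is invented for illustration, and real obfuscation is far more sophisticated than this floor:

```python
import base64

# Toy sketch of a pattern-matching "censor" for programs, and how
# trivially it is evaded. All names are invented for illustration.

BANNED_PATTERNS = [b"copy_protected_song"]  # the "naughty" behaviour

def censor_allows(program_bytes: bytes) -> bool:
    """Naive blacklist: reject any program containing a banned pattern."""
    return not any(p in program_bytes for p in BANNED_PATTERNS)

# The "naughty" program, stated plainly -- the censor catches it...
plain = b"copy_protected_song()"
assert not censor_allows(plain)

# ...but the same program, trivially re-encoded (here with base64,
# to be decoded and run by a one-line loader), sails straight through.
encoded = base64.b64encode(plain)
assert censor_allows(encoded)
```

A computer that runs everything needs no such censor at all; a computer that runs "everything except" is locked into an arms race it cannot win, because deciding what an arbitrary program will do before running it is, in general, impossible.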

These are not moral statements, they're technical realities. There are plenty of things I hope people don't do with their computers, from outputting superbugs to interfering with my TV reception (or the reception on the radio in the ambulance that's rushing to scrape me off the pavement somewhere).

It's because the potential for harm is so great that we can't afford to put our faith in magic computer-controlling technology. By the time I need a hearing aid or an artificial joint, I fully expect it to include one or more networked, general-purpose computers. I don't want things in my body that are designed to run code against my wishes, or to prevent me from changing the programs that run on them. I don't want my relationships with my family and friends and colleagues moderated by a network that has been redesigned to spy on and suppress the "wrong" sort of speech.

Because even though these technologies won't stop dedicated bad guys, pirates or klutzes, they will be beyond the ability of many ordinary people to control. Which means that increasingly, our technological infrastructure will be designed to enforce policy against its users, without its users' consent or even knowledge. Even features added with the best of intentions are liable to put users at risk at some time in the future: mobile phones today are often designed with anti-theft measures like remote kill-switches or the ability to covertly activate the device's camera, GPS or microphone; Android devices are designed so that Google can remove malicious software remotely; Kindles are designed so that Amazon can delete ebooks if they are found to breach copyright.

For reasons good and bad, the things we rely on for our jobs, our political organising, our family affairs, our social lives and our cultural transactions are being rebuilt to control us and spy on us.

For each of these control measures, the question isn't whether they'll fail, but when they will, and who will hijack their capabilities. Virus writers have already noticed that their malicious software can get a free ride if it targets digital rights management technology that hides itself from the operating system. Will it be an identity thief next? A dodgy "private investigator" who wants to read an MP's email over her shoulder? A totalitarian government that wants to broadcast the kill-signal to phones being used to organise mass demonstrations?

I believe that we can find creative answers to our legitimate regulatory problems – for example, we could create software radio utilities that turn every device into part of a grid that detects malicious or badly configured radio devices, to help regulators catch and shut down bad actors. But we'll only arrive at those solutions once we stop reflexively demanding limits on the general functionality of a PC and a network – and the sooner we do, the sooner we'll legitimise a technology world whose first rule is "Obey your owner" and whose second rule is "Protect your owner's interests".