A couple of basic physical rules underlie the dizzying progress in electronics that we have seen over the past fifty years. The first, Moore's Law, attributed to Intel co-founder Gordon Moore, postulates that the number of transistors that can be placed on an integrated circuit of constant size doubles approximately every two years. Originally coined in 1965, Moore's Law has held more or less true ever since. It can't continue indefinitely, if only because we're getting close to the atomic scale; a silicon atom has a Van der Waals radius of around 200 picometres, and to build circuits that mediate electron transport we need discrete atomic-scale structures. It is not obvious that we can build electronics (or other molecular structures) with a resolution below one nanometre. So it's possible that Moore's Law will expire within another decade.

The second, Koomey's Law (named for Jonathan Koomey), observes that the energy efficiency of computation, measured in computations per joule, doubles roughly every eighteen months.

This efficiency improvement has held true for a long time; today's high-end microprocessors require far less power per instruction than those of a decade ago, much less two or three decades ago. A regular ARM-powered smartphone, such as an iPhone 4S, is some 12-13 orders of magnitude more powerful as a computing device than a late 1970s-vintage Cray 1 supercomputer, yet consumes milliwatts of power for its computing (as opposed to radio) operations, compared with the Cray's 115 kilowatts.

Having said that, predictions of the imminent demise of Moore's Law within a decade go back to the 1970s. And if we can't increase the two-dimensional structure count on an integrated circuit, we may still be able to increase the number of structures by building vertically.

The first consequence: we can expect our computing devices to become more powerful. This is no surprise — we have become used to the performance of our computers increasing by a factor of roughly 100 every decade for a long time now. If we assume another two decades before Moore's Law breaks down, then by 2022 we can expect our smartphones (or their equivalents) to be as powerful as today's leading-edge desktop workstations; and by 2032, to be capable of delivering petaflops of performance per processor core, with multiple cores as standard, terabytes of RAM, and multiple terabytes of non-volatile storage. Koomey's Law meanwhile implies that these devices will have battery life similar to, if not better than, today's equipment.
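To make the arithmetic explicit, here's a minimal back-of-the-envelope sketch in Python. It assumes nothing beyond the factor-of-100-per-decade figure above, with 2012 (the year of this talk) as the baseline:

```python
# Minimal sketch: project raw device performance forward, assuming the
# historical factor-of-roughly-100-per-decade improvement simply continues.
BASELINE_YEAR = 2012
GROWTH_PER_DECADE = 100  # assumed, per the figure quoted above

def projected_speedup(year: int) -> float:
    """Performance relative to a 2012 device, under the stated assumption."""
    return GROWTH_PER_DECADE ** ((year - BASELINE_YEAR) / 10)

for year in (2022, 2032):
    print(f"{year}: ~{projected_speedup(year):,.0f}x a 2012 device")
# Prints ~100x for 2022 and ~10,000x for 2032, matching the projections above.
```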

Interestingly, it is not impossible that Moore's Law — the increase in circuit density — will taper off before Koomey's law — the increasing performance per watt of power — follows suit. In this case, our computing devices will be somewhat more powerful, but vastly more energy efficient. This will tend to drive compute-intensive applications towards massively parallel architectures.

It's unlikely that we'll see Koomey's law taper off before Moore's law comes to an end, if only because a failure to improve power efficiency as we increase component counts will result in circuits that dissipate disproportionately more heat. We've been here before, and it doesn't end well.

Another side-effect of a slowdown in Moore's law is that as we near the limits of semiconductor node resolution, the fab lines in use will be amortized over very long production runs: if we can't move to a smaller node size because of the laws of physics, then existing node sizes will be exploited and the pressure to compete on performance will be replaced by pressure to compete on price.

So the most probable outcome I can see is that we will be entering an era of small, ultra-low-power wireless devices. I'm thinking in terms of RFID tags that are actual processor units — microprocessors with associated storage that are electrically powered by the radio frequency radiation that gets data into them, and modulate the radiation they re-emit to broadcast output from whatever programs they run. Or maybe they'll be powered by solar cells. Either way, they'll be churned out in such vast numbers that the price tends towards euro-cents. What can we do with them? Or rather, what will be done with them?

As a science fiction writer, I like scribbling crude calculations on the back of an envelope and seeing where they take me.

Let's start with a current generation low power microprocessor aimed at consumer tablet devices and smartphones — the NVidia Tegra 3. Built on a 40 nanometre process node, it has four ARM Cortex-A9 processor cores running at 1 to 1.5 GHz, along with a low power GPU. I don't have a figure for power consumption, but as it's a smartphone or tablet processor, it's likely to be on the order of 1 watt. In raw performance it's probably about as powerful as a desktop processor from a decade ago. NVidia's road map runs approximately three years into the future, by which time the fourth generation of this architecture should be established — they're promising roughly 75 times the performance of their 2010-vintage Tegra 2 series processor.

I'm picking on this processor purely because it's a useful reference point. If you have an Android tablet or high end smartphone today, this is the sort of processor that's running it.

What does 20 years of Koomey's law — on diminishing power consumption — get us, if we hold the performance side of the equation constant?

20 years of a process that doubles every 18 months gives us around 13 generations, or a factor of roughly 10,000. Holding performance constant, that corresponds to a ten-thousand-fold decrease in power. So by 2032 or thereabouts, we could expect a processor roughly as powerful as the one powering the tablet I'm giving this talk with to be available and to operate on around one hundred microwatts of electricity. Sensors and i/o and networking obviously make their own claims on power consumption, but we're talking about sub-milliwatt power consumption for the processor itself.
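The same sum as a sketch, taking the roughly 1 watt Tegra-class figure from earlier as the baseline and the 18-month doubling period for energy efficiency as the assumption:

```python
# Koomey-style sketch: hold performance constant and let power consumption
# fall, with efficiency doubling every 18 months over a 20 year span.
years = 20
doubling_period_years = 1.5
baseline_power_watts = 1.0           # rough Tegra-3-class figure from above

generations = years / doubling_period_years          # about 13 doublings
power_2032_watts = baseline_power_watts / 2 ** generations
print(f"{generations:.0f} doublings -> ~{power_2032_watts * 1e6:.0f} microwatts")
# Prints roughly 100 microwatts, i.e. four orders of magnitude below 1 watt.
```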

(Note that I don't think we can push our power efficiency much further. Eventually we run into problems with thermal noise, not to mention that, with an electron volt costing around 1.6 × 10⁻¹⁹ joules, we face a hard limit on how many individual electrons we can afford to move around our circuits. An electron volt is on the order of the same amount of energy as a covalent bond, and around fifty times the energy of a Van der Waals interaction between atoms; it's thus a rough approximation for how much energy it takes to kick an electron around. If we're trying to carry out on the order of a billion computational operations per second, and each operation relies on interactions that involve setting around a thousand electrons in motion, then we can't obviously get our power consumption down much below microwatt levels.)
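That floor is easy to put a number on, using the figures just quoted (a billion operations per second, a thousand electrons per operation, roughly one electron volt per electron moved):

```python
# Rough lower bound on processor power, using the talk's own figures.
EV_IN_JOULES = 1.602e-19       # energy of one electron volt, in joules

ops_per_second = 1e9           # a billion operations per second
electrons_per_op = 1e3         # about a thousand electrons set in motion per op
ev_per_electron = 1.0          # roughly one electron volt per electron moved

power_watts = ops_per_second * electrons_per_op * ev_per_electron * EV_IN_JOULES
print(f"~{power_watts * 1e6:.2f} microwatts")   # ~0.16 microwatts: a hard-ish floor
```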

Typical commercially available photovoltaic cells today deliver around 150 watts per square metre, or 150 microwatts per square millimetre. So it's reasonable to assume that a 2032 processor unit of the kind I'm sketching on my used envelope here, with a one square millimetre surface area, could just barely be powered by daylight — but if we increase it to two millimetres on a side it can probably produce sufficient surplus to charge a battery or capacitor for nighttime operation, and to run some significant i/o devices as well. And if one square millimetre doesn't supply enough electricity, we can always make it three or five millimetres on an edge, and gain an order of magnitude for our calculations.
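The power budget for different device sizes, as a sketch using the 150 microwatts per square millimetre figure above (peak daylight only; nothing here accounts for night, clouds, or conversion losses):

```python
# Daylight harvest for a square photovoltaic device, at ~150 microwatts/mm^2.
PV_MICROWATTS_PER_MM2 = 150

def harvest_microwatts(edge_mm: float) -> float:
    """Peak harvest for a square cell with the given edge length, in microwatts."""
    return PV_MICROWATTS_PER_MM2 * edge_mm ** 2

for edge_mm in (1, 2, 3, 5):
    print(f"{edge_mm} mm on a side: ~{harvest_microwatts(edge_mm):,.0f} microwatts")
# 1 mm is marginal against a ~100 microwatt processor plus i/o; 2 mm gives a
# comfortable surplus, and 5 mm buys well over an order of magnitude of headroom.
```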

The reason I picked the one millimetre dimension is simply that, from the eye level of a standing human, a one millimetre square device at ground level is all but invisible. Today we are used to the public sensors around us being noticeable, if we know what to look for. In 20 years' time this may no longer be the case, and the social implications are worth exploring.

I am not a semiconductor industry expert, so I have no direct sources for manufacturing costs — much less for costs we are likely to see in 30 years' time — but going by ARM's figures from 2006, the license fees per processor are on the order of four euro-cents; at the low end, and in massive production volumes, ARM-based processors can be priced in double-digit cents.

So. What sort of applications can we imagine that might result from this trend?

Let's look at London, a fairly typical large capital city. London has a surface area of approximately 1570 square kilometres, and around 7.5 million inhabitants (not counting outlying commuter towns). Let us assume that our hypothetical low-power processor costs 10 euro-cents per unit, in large volumes. Covering London in CPUs roughly as powerful as the brains of the Android tablet I'm reading this talk from, to a density of one per square metre, should therefore cost around €150M in 2032, or €20 per citizen.
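The envelope calculation, spelled out (the 10 euro-cent unit price is the assumption; the area and population are the figures above):

```python
# Cost of blanketing London with one processor per square metre.
area_km2 = 1570
population = 7.5e6
unit_cost_eur = 0.10                  # assumed price per processor

units = area_km2 * 1e6                # one per square metre
total_cost_eur = units * unit_cost_eur
print(f"{units / 1e9:.2f} billion units, "
      f"~€{total_cost_eur / 1e6:.0f}M total, "
      f"~€{total_cost_eur / population:.0f} per citizen")
# Prints ~1.57 billion units, ~€157M, ~€21 per citizen.
```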

To put this in perspective, in 2007 it was estimated that councils in London spent around £150M, or €190M, per year on removing chewing gum from the city streets.

So for the cost of removing chewing gum, a city in 2032 will be able to give every square metre of its streets the processing power of a 2012 tablet computer (or a 2002 workstation, or a 1982 supercomputer).

Getting data in and out of such open-air devices is an interesting issue.

Looking around at current standards, I note that Bluetooth Low Energy devices exhibit peak current draw in the tens of milliamps, but drop to nanoampere levels when inactive: overall average current draw is on the order of a microamp, and they provide data rates of around 200 kb/s at ranges of up to 50 metres. Taking a wild leap in the dark, I therefore conclude that if we are willing to reduce our range by an order of magnitude, we should be able to push up our data rate accordingly — and reduce power consumption at the same time. Obviously there are limits to how far we can reduce the power requirements for a radio or optical transceiver, but I would conservatively expect it to be possible to shift tens of megabits per second across ranges of up to five metres using microwatt levels of power within the next thirty years.
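A crude way to sanity-check that leap: assume free-space propagation (received power falling with the square of the range) and a fixed noise floor, and see what cutting the range from 50 metres to 5 metres buys. The 2 MHz channel width matches Bluetooth Low Energy; the 5 dB starting SNR is an illustrative guess, not a measured figure.

```python
import math

# Cutting range by 10x under free-space propagation buys ~20 dB of link margin,
# which can be spent on data rate, on lower transmit power, or on some of each.
def margin_gained_db(old_range_m: float, new_range_m: float) -> float:
    return 20 * math.log10(old_range_m / new_range_m)

def shannon_limit_mbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon capacity ceiling for the given bandwidth and signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10)) / 1e6

gain_db = margin_gained_db(50, 5)                    # ~20 dB
print(f"extra link margin: ~{gain_db:.0f} dB")
print(f"ceiling at 50 m: ~{shannon_limit_mbps(2e6, 5):.1f} Mb/s")
print(f"ceiling at 5 m:  ~{shannon_limit_mbps(2e6, 5 + gain_db):.1f} Mb/s")
# ~4.1 Mb/s becomes ~16.6 Mb/s: the range cut alone pushes the ceiling toward
# the tens of megabits per second suggested above.
```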

Alternatively: we are already beginning to see sodium vapour and halogen street lamps replaced with LED installations to save power. It's not unreasonable to use LEDs in public street lights both to illuminate open-air photovoltaic electronic devices (thereby powering them) and to broadcast data to them at optical wavelengths. Getting data out of such devices may require them to draw more current for radio or optical signalling, but again: some cities are already using street lighting installations as convenient structures to support public wireless internet routers. Just as we are never more than ten metres away from a rat, it's reasonable to expect that such low power devices will never be more than ten metres away from a street light.

Now. What can we do with a city that has 1.5 billion networked ambient-light-powered processors, or roughly 200 CPUs per resident?

In my previous talk at a TNG TechDay, I discussed the limits to data storage, the possibility of lifelogging — of using wearable devices to record and index everything we do — and the implications for civilization once everything that happens to everyone is recorded in perpetuity.

I also noted that the combined video and audio streams from the entire population of Germany, over a period of a century, would occupy on the order of a hundred kilograms of memory diamond — a hypothetical crystalline form of carbon used for data storage, in which each bit is represented positionally by an atom of one isotope or another (in this case, carbon-12 or carbon-13). With Avogadro's number of bits storable in 12.5 grams of carbon, if we can figure out how to read and write this stuff we can store roughly 0.5 petabytes in each gram of substrate.

(Using this yardstick, on a world-wide scale Google currently processes about 2 grams of data per hour.)

So, the first point to note is that if the world of 2032 has this level of ambient computing power at all, we're likely to have the data storage to go with it.

Let's assume we have found a use for our billion-CPU city, and we're running a billion operations per second on each CPU. If each operation generates one byte of useful output — from air quality sensors, or cameras, or whatever — then our city is producing 10¹⁸ bytes of data per second. That's heavy data: that's 2000 grams per second. We're really going to have to get our data de-duplication strategies under control, lest we build up memory diamond landfill at a rate of seven tons per hour! Luckily most computer programs don't generate anything like one byte of output per operation — that's a ridiculous edge condition. Given the bandwidth and power constraints on our tiny solar-powered processors, I'd be surprised if they averaged even a megabit per second of output — and even that would correspond to compressed high-definition video from every square metre of our city. So let's hack three or four orders of magnitude off that peak data output figure. Our city of 2032 is emitting as much information in a second as Google processes in an hour today: remarkable, but not outrageous in context.
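For the record, the arithmetic behind those figures, using the talk's own yardstick of roughly 0.5 petabytes of memory diamond per gram:

```python
# Mass of the city's data output in memory diamond, on the talk's yardstick.
BYTES_PER_GRAM = 0.5e15              # ~0.5 petabytes per gram, as above

cpus = 1e9
worst_case_bps = cpus * 1e9          # one byte per operation: 1e18 bytes/s
grams_per_sec = worst_case_bps / BYTES_PER_GRAM
print(f"worst case: {grams_per_sec:,.0f} g/s, "
      f"~{grams_per_sec * 3600 / 1e6:.1f} tonnes/hour")

tamer_bps = cpus * 1e6 / 8           # about a megabit per second per processor
google_per_hour = 2 * BYTES_PER_GRAM # the "2 grams per hour" figure above
print(f"tamer: {tamer_bps:.1e} bytes/s vs Google's ~{google_per_hour:.0e} bytes/hour")
# Prints 2,000 g/s (~7.2 tonnes/hour) worst case; the tamer figure lands within
# an order of magnitude of Google's current hourly intake.
```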

What can we do with all those chips?

For starters, we can monitor the hell out of everything. The price of genome sequencing is collapsing at present, in a manner eerily paralleling Moore's Law: this year, Oxford Nanopore will sell you a USB-connected sequencer for €700, and they are proposing to sell rackmounted systems that, for around €20,000, will be able to sequence an entire human genome in 5-6 hours in 2013. Large scale integration and lithography techniques developed in the semiconductor industry feed into the rapid improvements in sequence analysers. Taking 2013 as our baseline, and a cost of €1000 for a human genome, then applying the same scaling laws to genomics, we can conclude that by 2032 it should be possible to exhaustively sequence a personal genome for under a cent. The equipment to do so will be cheap and effectively solid-state. So why don't we monitor the city's genome?
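A sketch of that extrapolation. Note that a sub-cent genome by 2032 needs the cost to keep halving roughly every year, faster than Moore's Law but consistent with the collapse in sequencing costs seen between 2008 and 2012; the halving period below is therefore an assumption, not a datum:

```python
# Extrapolate the cost of a personal genome from €1000 in 2013, under an
# assumed halving period (1 year here; Moore-style halving would be ~2 years).
baseline_year = 2013
baseline_cost_eur = 1000.0
halving_period_years = 1.0           # assumption, chosen to match recent history

def projected_cost_eur(year: int) -> float:
    return baseline_cost_eur / 2 ** ((year - baseline_year) / halving_period_years)

print(f"2032, 1-year halving: ~€{projected_cost_eur(2032):.4f}")        # well under a cent
print(f"2032, 2-year halving: ~€{baseline_cost_eur / 2 ** (19 / 2):.2f}")  # ~€1.38
```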

A city does not consist solely of human beings. A myriad of macroscopic and microscopic organisms coexist with us. This month, my home city — Edinburgh — has fallen victim to an outbreak of Legionnaires' disease. The local health authorities believe the bacteria responsible are being emitted from cooling towers used by some local businesses, but are slowly and painstakingly trying to trace them. With this level of distributed processing, though, we should be able to conduct real-time epidemiological surveillance, tracking disease agents even before they have infected human or animal hosts (by sequencing DNA samples taken from airborne particles). Certainly, with 1.5 billion processors in a mesh network, performing sequence matching on the data from our street-level genome samplers should be practical.
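To make the sequence-matching job concrete, here's a toy sketch of the sort of check each node might run against its air-sampler reads. The k-mer length, the marker sequence, and the reads are invented placeholders, not real Legionella signatures:

```python
# Toy pathogen screening by k-mer matching: each node checks sequence reads
# from its air sampler against known marker k-mers. All sequences here are
# invented placeholders for illustration only.
K = 12

def kmers(seq: str, k: int = K) -> set[str]:
    """All overlapping substrings of length k in the sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

PATHOGEN_SIGNATURES = {
    "legionella_marker_1": kmers("ATGGCTAGCTAGGATCCGATCGTTAGC"),
}

def screen(read: str) -> list[str]:
    """Return the names of any signatures sharing a k-mer with this read."""
    read_kmers = kmers(read)
    return [name for name, sig in PATHOGEN_SIGNATURES.items() if sig & read_kmers]

print(screen("CCGATCGTTAGCAAAT"))   # overlaps the marker -> ['legionella_marker_1']
print(screen("TTTTTTTTTTTTTTTT"))  # no overlap -> []
```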

I can't emphasize the importance of this aspect too strongly. We are living through a very dangerous period, in which our long-neglected arsenal of antibiotics is under threat from rapidly developing antibiotic resistance. Some resistance factors, such as the plasmid-borne New Delhi metallo-beta-lactamase 1 (NDM-1) gene, render most available antibiotics useless and can spread horizontally between bacterial species. Other diseases such as extensively drug-resistant tuberculosis (XDR-TB) threaten a return to 19th century levels of mortality if they take hold. (XDR-TB is fatal in up to 90% of untreated cases, sometimes within a month; the treatment requires up to two years of intensive chemotherapy.) And to add to the fun, our increased long-distance mobility and the increased population density of our cities mean more opportunities for epidemics to emerge from isolated reservoirs of infection and spread globally. In 2003 we barely dodged a bullet in the shape of SARS, a respiratory virus with a fatality rate of around 10% that spread from person to person with alarming ease: it would be much, much better to deal with such dangerous pathogens by identifying and disinfecting potential infection sites *before* they get into a human population.

Monitoring air quality and dangerous pollutants at a local, square-metre level is a no-brainer — even anti-climactic in comparison.

We can set our ubiquitous processors to work in other ways, too.

Climatology and meteorology: being able to monitor environmental conditions down to the square metre level may give us truly local weather forecasting, not to mention a better handle on optimizing our energy conservation strategies.

Local monitoring: they can each track their square metre of ground, photographing and reporting changes. If a pavement or road surface is degrading, maintenance can be scheduled before a dangerous pot-hole develops.

Traffic control: if we have metre-level resolution in our monitoring, we can not only optimize our vehicle and pedestrian traffic flow (to the extent possible, within the hard limits of routing algorithms), we can also handle emergencies more effectively. A six-year-old chasing a ball towards a street may result in the local nodes notifying the autopilot of an approaching car to apply the brakes before the child runs out from between the row of parked vehicles ahead. Alternatively, automated vehicles can be diverted away from potential congestion choke-points before they develop, rather than blindly following a route in a map database.

Lately, urban planners in some towns and cities (I believe Bohmte near Bad Essen is one example) have been experimenting with removing road signage, in order to compel drivers and pedestrians to pay more attention to their surroundings, in an attempt to improve road safety.

If we have streets that are self-monitoring, we could see road markings vanish entirely, as people and vehicles rely on navigation services for guidance and cues from the streets for safety. Think in terms of city streets with no sidewalks and no road markings, where pedestrians have total right of way — but where cyclists have glasses that show them pedestrian-free safe cycling routes, and self-driving vehicles are directed around collision hazards by the street itself, in time to modify their courses before they reach the kids.

To an un-networked eye such streets would look very 19th century and dangerous — vehicles and people would appear to be mixing indiscriminately, not even driving on the correct side of the road but following random-looking paths — but it would all be monitored and controlled for public safety.

Speaking of public safety: we know that some human behaviours are unusual and may require intervention. An adult sitting or lying down on the sidewalk may require medical assistance. Signs of violent disorder may require police intervention. Rising background temperatures beyond safe limits may indicate a fire. And so on.

This is the picture for what we get from ultra-low powered devices on an infrastructure level. But there are other applications. Today, clothing sold in shops comes with labels that instruct the owner in how to wash and care for their garments. Wouldn't it make more sense if the garments had enough on-board intelligence to tell the washing machine how to treat them, so that it was impossible to accidentally damage an item of clothing? (Or if your phone could identify itself to the washing machine, and the washing machine could call you to extract the phone from the pocket you left it in, rather than giving you a soggy, expensive, and unpleasant surprise?)
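As a purely hypothetical illustration of what a garment might broadcast, here's a sketch of a machine-readable care profile; the fields and values are invented, and no real labelling standard is implied:

```python
# A purely hypothetical care profile a smart garment might broadcast to a
# washing machine; field names and values are invented for illustration.
from dataclasses import dataclass, asdict
import json

@dataclass
class CareProfile:
    garment_id: str
    max_wash_temp_c: int
    spin_ok: bool
    tumble_dry_ok: bool

profile = CareProfile("shirt-0042", max_wash_temp_c=30, spin_ok=True, tumble_dry_ok=False)
print(json.dumps(asdict(profile)))   # what the machine would receive and obey
```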

With ten cent processors we can see a variety of manufacturing and production uses, too. Every individual plant growing on a farm might have its own processor, monitoring its sunlight and nutrient availability, with a biosensor primed to detect chemicals released by insect damage or fungal attack. Today, we waste a lot of food before it is harvested simply because we can't individually inspect every plant. But if suitable sensors cost significantly less than the spoilage, then it makes sense to instrument everything.

And now for the bad consequences.

Our sensibilities are offended by ubiquitous cameras monitoring us as it is. How much worse is it going to be if the city itself monitors everybody's movement, every hour of every day? Worse: if every square metre of ground is aware of every RFID (radio frequency ID) tag passing over it, and can associate clusters of RFID chips (for example, the labels your underwear uses to tell your washing machine how to clean it without damaging it) uniquely with people? Wearing a hoodie will not help you regain your lost privacy when the hoodie itself helps define and identify you.

While a benign or well-intentioned government might choose to use the capabilities of such monitoring systems only for the public good, the question of what a dictatorship might do with them has an obvious answer. Anonymity is possible in crowds today, and even the surveillance cameras can't always break it. In a city with distributed processing and monitoring of everything down to the square metre level, anonymity breaks down because you just can't cram enough human bodies onto a square metre of sidewalk to blur the combinations of characteristics which identify us to the machines — even without ambient genome sampling.

It has been said that the internet means the death of privacy — but internet-based tracking technologies aren't useful if you leave your computer at home and switch off your smartphone. In contrast, the internet of things — the city wallpapered from edge to edge with sensors and communicating processors — really does mean the death of privacy. You'd have to lock yourself in a Faraday cage and switch off all the electrical devices near you in order to regain any measure of invisibility.

I don't want to dwell to excess on the uses dictatorships or police states might make of such technologies. It's enough to note that advertising agencies would kill to have access to such a surveillance network; just knowing which shop windows individuals spent most time lingering in front of in a shopping mall is valuable information to retailers.

One thing is for sure: even if our governments are benign, we're going to be subjected to more monitoring than most people today can possibly imagine.

Finally, I'd like to leave you with this question: what socially beneficial uses can you think of for a billion loosely coupled, low power microprocessors and their associated sensors? Because in 20 years' time, buying and deploying such a network will be cheap enough for city planners to consider it routine. The logical end-point of Moore's Law and Koomey's Law is a computer for every square metre of land area on this planet — within our lifetimes. And, speaking as a science fiction writer, trying to get my head around the implications of this technology for our lives is giving me a headache. We've lived through the personal computing revolution, and the internet, and now the advent of convergent wireless devices — smartphones and tablets. Ubiquitous programmable sensors will, I think, be the next big step, and I wouldn't be surprised if their impact is as big as that of all the earlier computing technologies combined.