There's a pretty common argument in tech that though of course there are billions more smartphones than PCs, and there will be many more still, smartphones are not really the next computing platform, just a computing platform, because smartphones (and the tablets that derive from them) are only used for consumption, where PCs are used for creation. You might look at your smartphone a lot, but once you need to create, you'll go back to a PC.

There are two pretty basic problems with this line of thinking. First, the idea that you cannot create on a smartphone or tablet assumes both that the software on the new device doesn't change and that the nature of the work won't change. Neither is a good assumption. You begin by making the new tool fit the old way of working, but then the tool changes how you work. More importantly, though, I think the whole idea that people create on PCs today, with today's tools and tasks, is flawed, and so, I think, is the idea that people aren't already creating on mobile. It's the other way around. People don't create on PCs - they create on mobile.

There are around 1.5bn PCs on earth today (using the term 'PC' in the broad sense, covering Wintel, Mac and Linux). Maybe as many as 100m PCs are being used for some kind of embedded product: elevators, points of sale, ATMs, machine tools, security systems etc. Setting those aside, the rest are split roughly evenly between corporate and consumer, and many of these (especially the consumer ones) are shared, such that there are over 3bn people online. But what are all those PCs being used for?

It's pretty clear that only a small proportion are actually being used for professional applications. Perhaps 50m people are using everything from Adobe to Autodesk to software development tools; adding in Office users is more complex, since there are notionally a billion installed copies, but ‘power’ Office users probably number a further 25-50m. So, there are perhaps 100m people who today engage in some form of complex creation using what one might call 'sophisticated professional software' on a windows + mouse + keyboard-based personal computer. (I’ve outlined my workings and sources for this at the bottom).

If less than 10% of PCs are actually doing professional, precise, complex creation, what are the other 90% being used for, if not creation?

Well, they do email, and the web. Some of the consumer ones also play games - there are over 125m 90-day active Steam accounts (which would be under 20% of consumer PCs - one could look at this as an analogue of the professional creation app users, except that there's probably a substantial overlap in the two sets). They do Facebook and buy groceries. The corporate ones perhaps do accounts payable and customer support, and SAP or Salesforce or Success Factors or dozens of other vertical business process applications. Many of those applications will still be around in a decade or two (if they’ve not been replaced by machine learning) - they might move to SaaS web apps if they're not there already, and might be accessed on Chromebooks or Android tablets or iPads or just on $250 Windows boxes, but it doesn’t really matter. They don't need a (user-accessible) file system and they don't need a 'precision pointer', a complex multi-window interface and all the other things that separate ‘real computers’ from the new generation, any more than email or a web browser do. Quite a lot of them just need a Gmail box. They probably need a biggish screen and perhaps a keyboard, but that’s not what makes a ‘PC’.

Conversely, what is being done on ‘phones’ - or rather, on these small touch-screen computers that we all carry around with us? We write - people have been writing more on phones than on PCs since the days of SMS - and we share, take pictures, create videos, play games and talk to our friends. That is, we do most of the things that those 90% of PCs are used for, but we also do everything that you can do with a touch screen and an internet-connected image sensor, and GPS, and all the other things a PC doesn't have, plus everything you can do with all of the billions of app downloads.

The big difference on mobile is that now people know how to do this. In my first term at Cambridge, in 1995, I explained to a future president of the Union that though he had been told to ‘download Netscape’, clicking repeatedly on the ‘download’ graphic on Netscape’s site had merely put 15 copies of the installer file onto his desktop, and he would also have to 'double-click' on one of them. That was pretty typical - installing software by yourself that added capabilities to your computer, or editing video (something every ten-year-old now does all day), was something for experts. More recently, I’ve seen data suggesting that a large proportion of people who owned digital cameras never loaded the pictures onto a computer (even if they owned one). They looked at the pictures on the camera screen, or got them printed at a kiosk - but didn't print them until the card was full, as they often thought that you couldn’t add more pictures to the card after you’d ‘developed’ it in this way. My father-in-law prints things out by taking a photo of the computer screen and then taking his camera to the kiosk in the supermarket. This piece from NNG last year provides some handy quantification of what computer literacy really looks like. These kinds of questions start to go away with mobile.