I strapped on a pair of virtual reality (VR) goggles and looked around the room. There were three PageWide printers sitting off in the corner, so I reached out and pressed a lever to open the paper tray. A voice off in the distance, like a ghost in the fog, told me to try lifting the copier lid as well.

I was at HP in Palo Alto, California, and the demo was meant to show me how their multi-function printers work. Interestingly, VR and augmented reality (AR) today do not take advantage of artificial intelligence (AI) as much as you might think. I wasn’t able to ask a question and have a bot respond. The VR looked ultra-realistic, but it was all self-contained within a structured environment. The demo was more like a 3D-rendered slideshow with some interactions. But in the future, AI will play a much bigger role.

Microsoft knows this, and that's why a recent announcement about adding an AI co-processor to the HoloLens 2 caught my eye. The chip will help with tagging data in the real world, which sounds just as complex and compute-intensive as you can imagine. And because that processing runs on the device itself instead of in the cloud, it will work much faster.


You can picture how this might work in a printer demo. The HoloLens is not VR; it uses AR instead. In a real office, the kit might show 3D-rendered printers and you might still reach out and interact with them. With more processing power for AI, the HoloLens 2 might show you stats such as how many sheets you can print in a minute. It might be able to take data from the office floor plan and calculate whether the printer will fit on a desk. You could see how much power the printer would use over a week, a month or a year. You could see other animations that show the network infrastructure and the actual wiring installed. And you could correlate all of this data and see how it all could work in your office, in real time.

This real-time processing could create a whole new paradigm of interaction. AI could calculate a lot more than printer output times. With a fast co-processor, I might be able to see how much the printer will cost in consumables, based on the office's actual output as measured from network usage. I might be able to quickly visualize who will print the most in an office: say, that marketing will use the printer much more often than accounting. I could see how the printer fits into a budget for all of the devices in a room, including laptops and phones.

Overall, the AI could power a new set of overlays on the real world that are incredibly useful. With the technology built into your car's dashboard as a head-up display (HUD) instead of worn as a headset, you could see traffic estimates based on a sensor network, and your autonomous car could analyze thousands of inputs to calculate the safest route through some thorny traffic. During a Skype call in a virtual room, you could ask a bot for information about those in attendance and see a summary for each person in real time. In a manufacturing demo, you could look at the parts of a new product and see their associated costs, how long each one would take to produce in the overall manufacturing process, and who could supply the part.

Of course, we know this is coming soon because there are already glimmers of how it will work. Yelp used AR in its app a few years ago. What's different now is that the chip could process the data much faster and locally, helping us interpret the world and react by making smarter business decisions. It could be the difference between a VR demo that shows how a printer works and an AR demo that helps us make a better decision. That's the ultimate win in the enterprise.