Somewhere on Steven Spielberg’s cutting-room floor are 20 minutes of footage shot for Minority Report, the sci-fi film from 2002 in which Tom Cruise memorably manoeuvred content around wall-sized computer screens by waving his hands. Those 20 minutes were written by John Underkoffler, who also came up with the idea for the “gesture interface” used by Cruise, in his role as a chief of police catching criminals before they could break the law.

The extra scenes – cut from the film – showed the back room: the people who processed all the data that was then routed upstairs to Cruise (playing police chief John Anderton), who would then waft it around the room.

But 13 years on, where are the wall-sized gestural interfaces? Or even laptop-screen-sized gestural interfaces? Why aren’t we all filing documents and doing ... whatever we want, by waving our hands around? Underkoffler, now chief executive, chief scientist and founder of Oblong Industries, says there have been two problems: getting the technology right, and integrating it into the “full stack” of tasks that we want to do.

Underkoffler’s advisory role on the film came about because he had worked at MIT’s Media Lab, where “I became sort of obsessed with user interfaces. I felt that by 1994 the UI [user interface] that the Mac had introduced [in 1984] and had come to be standard, well, after 10 years surely it’s time for something new?” He gave human hands top priority because “they are incredibly fine instruments”.

When Spielberg was looking around for advisers, the Media Lab was an obvious place to go because of its reputation as the source of many ideas and people that shaped the future. “They felt they wanted to get the direction for how computers might be operated in 50 years’ time. I was responsible for making sure all the technology in the film was coherent” – that you didn’t have gestures in one place yet clunky keyboards in others. “It was a simplification of the MIT work.”

The idea of throwing files, text, video and pictures around a giant screen suspended in front of you wowed audiences. But, despite a lot of effort since then, gesture-control systems haven’t, by and large, thrived. An independent company called Leap Motion, based in San Francisco, excited some in 2013 by releasing a small gesture detection system that plugged into Windows PCs and used infrared light to detect finger movement. Google has also demonstrated its intention to explore this space: at its Google I/O developer conference in May, there was a brief demonstration of “Project Soli”, a short-range radar-based system that can measure movement, velocity and distance at up to 10,000 frames per second. In theory it would let you control your smartphone, or any other equipped device, by waving your hand and waggling your fingers in a particular way.

And, of course, millions of Xbox owners also have the gesture- and voice-driven Kinect module on or near their TV sets for games interaction.

But Leap Motion has disappointed: even the developers to whom it was dished out have struggled to find compelling uses, and TechCrunch reported in May 2014 that only 500,000 units had been sold, far short of expectations. Leap Motion laid off 10% of its staff early in 2014. Meanwhile Project Soli remains an unknown: it might become part of future Android phones, though Google doesn’t have a strong track record in making hardware; for every Chromecast, there’s a Nexus Q and Google Glass.

As for the Kinect, even Microsoft seems to have lost heart. It’s not a requirement for the Xbox One, and few games make use of it: although it can work well, it isn’t totally reliable at interpreting motion, which makes precise gaming frustrating.

So what is the problem with gestural control? Why hasn’t it broken through, in the way that we so expected from Minority Report?

“I think there are two problems,” says Underkoffler. “First is getting the enabling technology. Leap Motion have done an impressive job with one piece of the technology, which is the detection of hands and fingers, but it stops there. It’s missing the full stack – how you manipulate things such as files. It’s as if you had a Logitech mouse but you didn’t have the operating system that could understand what it meant. You wouldn’t have much. We really need people who are thinking from end to end, from the raw mechanics to the input through to the human interface. It has to be expressed by the software and the OS [operating system]; that’s what the original Mac team did: they started with a bitmapped UI [where each point on the screen was represented by data in a memory location] that was also the OS.”

The other issue, he says, is using gestures for the right tasks. “For instance, touch isn’t popular on laptops or desktops, but it’s very, very good on phones. It’s the wrong mode for a laptop, but exactly right for a mobile device, where your expectation about interactions is very reduced. So the question becomes, where is a gesture the right way to get things done?”

In other words, what problems does a gestural system solve that a GUI with a more accurate pointing device such as a remote doesn’t? Underkoffler thinks that present UIs have “a clerical soul” – they’re about moving files around, generally, and are “anti-collaborative”.

That idea of collaboration is part of what got left behind on the Minority Report cutting-room floor: all those people working together to create the coherent picture that Anderton would then dissect. (The film was already long, though.) Now Underkoffler is trying to create an environment for it. Oblong Industries’ key product, Mezzanine, is “a commercial version of the Minority Report computers”.

Mezzanine is a collaborative conference-room system which melds standard presentations and videoconferencing, with interactions controlled by “wands”. So it’s not quite the gestural world that we thought we’d glimpsed.

But Underkoffler is sure that bigger screens, and gestures to control them, are going to become increasingly important as we move towards doing more ambitious tasks mediated directly through computers. “We started our digital lives about 30 years ago, with screens about the size of a laptop now. That’s about the size of an A4 piece of paper. Weirdly, the pages we work with have gotten smaller with tablets – and then phone screens, which are about the size of 3in by 5in cards.

“Different tasks fit into these different spaces more or less well. But if you’re working on a complex problem such as planning a difficult surgery, or doing urban planning, or bridge design, you wouldn’t plan those on three-by-five cards, you’d use a table or a whiteboard. You need space to display ideas. And it’s the same for their digital counterparts.”

Bigger screens are certainly becoming available. Microsoft has probably gone furthest, with its gigantic 234in (5.94m) “Cinema” interactive screen, intended for trade shows. By contrast Apple, like Microsoft with the Kinect, seems to have let the chance go by. Despite buying PrimeSense, an Israeli company specialising in 3D sensor technology (which underpinned the original Kinect), for around $350m in November 2013, there is no sign of any gesture-based devices or interfaces from the company: its new Apple TV interface, shown off at an event in September, uses the Siri voice control system and buttons, not gestures. (It’s not known whether the PrimeSense expertise is part of the motion-sensitive controller, which can be used for games.)

It may take the arrival – at a date unknown – of Google’s Project Soli to show whether gestural control has a future. Until then, Underkoffler will be unsatisfied. But we’ll always have Minority Report.