The Setup

Humans are great at categorizing objects spatially. We have bookshelves to arrange physical books, dressers to arrange clothes, and cabinets for food and utensils. These tools make retrieving objects fast and effortless. You can probably picture exactly where your forks and knives are in your kitchen.

However, with computers we are limited by the four edges of our small screens. This makes storing information spatially a challenge. Sure, you can scroll around a large window to give the illusion that your computer is bigger — but it’s not the same. On a single screen, it’s hard to quickly reference detailed information while you are working. You would likely have to switch tabs or windows, or minimize the view at hand, to see complementary information. You can’t glance at a chart or a stream of incoming information without interrupting your work.

The Current Solution

One solution to this problem is to have more than one screen. Multiple-monitor setups have become fairly common on desks and in the workplace, especially for programmers.

Organizations that need vast amounts of glance-able information take this to an extreme. NASA engineers and scientists, who have to quickly process lots of incoming information, use control rooms with dozens of monitors.

This solution has two challenges, though. First, every additional monitor is an additional expense, and most people cannot justify the cost of more than one or two. Second, it’s completely immobile. Multimonitor setups must sit on a desk; you can’t carry them around with your laptop.

A Better Solution

MakeMIT is MIT's hardware-focused hackathon. At MakeMIT 2015, last March, my team and I prototyped a solution to this problem in virtual reality (VR). VR has been called the ultimate display: it can fill your entire field of vision, and it appears to extend infinitely in all directions.

A multimonitor setup can be simulated in VR, letting you glance at different screens with just a turn of your head. Below is a video of our working prototype. As you can see, we have a 3x3 grid of nine monitors. Of course, a user could have as many screens as they want, without any additional cost. Also, the VR headset (in this case, an Oculus Rift) is as portable as the laptop it runs from, so you can take your multimonitor setup anywhere.
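Conceptually, each virtual monitor is just a flat panel placed at a fixed yaw/pitch offset from the user's forward gaze, so glancing at a neighbor is a small head turn. Here is a minimal sketch of that layout math; the function names and the field-of-view values are illustrative assumptions, not taken from our prototype:

```python
import math

def monitor_angles(rows=3, cols=3, h_fov_deg=30.0, v_fov_deg=20.0):
    """Return (yaw, pitch) in degrees for each virtual screen in a
    rows x cols grid, centered on the user's forward gaze.

    Each screen spans roughly h_fov_deg x v_fov_deg of the view, so
    the center screen sits at (0, 0) and neighbors are one small
    head turn away.
    """
    angles = []
    for r in range(rows):
        for c in range(cols):
            yaw = (c - (cols - 1) / 2) * h_fov_deg    # turn left/right
            pitch = ((rows - 1) / 2 - r) * v_fov_deg  # tilt up/down
            angles.append((yaw, pitch))
    return angles

def screen_center(yaw_deg, pitch_deg, radius=2.0):
    """Convert a (yaw, pitch) pair into a 3D point on a sphere of
    `radius` meters around the viewer (x right, y up, -z forward),
    where the screen quad would be rendered facing the user."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (radius * math.cos(pitch) * math.sin(yaw),
            radius * math.sin(pitch),
            -radius * math.cos(pitch) * math.cos(yaw))
```

Placing the screens on a sphere around the viewer, rather than on a flat wall, keeps every screen at the same distance and facing the user, which is easier on the eyes in a headset.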

The Future

I love the idea of augmented reality (AR). AR will free us from having to sit at a desk to be productive. We won't be limited to a 15" screen to access information about the real world. With AR, the information will be "in" the real world; it will blend the physical and digital worlds into one. But for now, we'll have to take baby steps while we wait for computer vision, high-quality see-through displays, and miniaturization to significantly improve.

So where's a good starting point? VR. Virtual reality doesn't require advanced computer vision, so it will be cheaper and will reach the masses much more quickly than AR. In the long run, I hope VR becomes a completely immersive environment to explore and think in, without any need for the concept of screens. But since that is many years away, this is a good short-term solution. A lot of people are focused on the gaming potential of VR — and I won't deny that gaming has pushed the boundary of computation for decades — but I would love to see more tools for "productivity" in the VR environment.

Special thanks to my teammates: Ben Chrobot, Ariel Wexler, Anthony Kawecki, and Ostin Zarse. At the same hackathon, we also made Baymax.