I’ve been wondering for a while about when user interfaces are really going to move out of their mostly Euclidean worldview, giving us something more like this:

(Click on the image to pop up a video in a new window… at least until I can figure out why WordPress isn’t letting me properly embed a Vimeo video.)

The reason I think an interface like this can be superior in many ways is that it lets you specify an area of interest where you get full, detailed information, yet you can still see the whole document/page/object at the same time. That means you can avoid the zoom(in/out)-scroll-zoom(in/out)-scroll cycle you often get stuck in when using, for example, the iPhone. In many ways it gives the user an analogue to the way peripheral vision works in the ‘real world’: you have an area of interest that you can focus on, but you’re also aware of the surroundings and glean information from them as well.
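To make that concrete, here’s a minimal sketch (in TypeScript) of the kind of lens transform I’m imagining: a classic graphical-fisheye distortion in the spirit of Sarkar & Brown’s fisheye views, where points near a focus get magnified and the viewport edges stay pinned so the whole page remains visible. All the names here (`fisheye`, `applyLens`) and the strength parameter are illustrative assumptions, not any particular library’s API:

```typescript
interface Point { x: number; y: number; }

// Distort a normalized distance d in [0, 1]: values near 0 (close to the
// focus) get magnified, while d = 1 (the viewport edge) maps to itself,
// so the whole document stays visible. k > 0 controls lens strength.
function fisheye(d: number, k: number): number {
  return ((k + 1) * d) / (k * d + 1);
}

// Map a document point into lens space around `focus`, axis by axis,
// within a viewport spanning [0, width] x [0, height].
function applyLens(p: Point, focus: Point, width: number, height: number, k = 3): Point {
  const warpAxis = (v: number, f: number, extent: number): number => {
    // Distance to the nearer viewport boundary sets the normalization
    // range, so the edges of the viewport never move.
    const range = v < f ? f : extent - f;
    if (range === 0) return v;
    const d = Math.abs(v - f) / range;
    const sign = v < f ? -1 : 1;
    return f + sign * fisheye(d, k) * range;
  };
  return { x: warpAxis(p.x, focus.x, width), y: warpAxis(p.y, focus.y, height) };
}

// Example: points near the focus spread apart (magnified detail),
// while the corners of the page stay put.
const focus = { x: 400, y: 300 };
console.log(applyLens({ x: 410, y: 300 }, focus, 800, 600)); // pushed well away from the focus
console.log(applyLens({ x: 800, y: 600 }, focus, 800, 600)); // corner stays at (800, 600)
```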

(The mockup I did above just shows a single point-of-interest, but it’s certainly expandable to multiple points if you’ve got a touchscreen or other such device. And there are all sorts of little refinements you’d want to implement if you really wanted to make it swank – drag&drop from one place to another might want to keep the source area zoomed but also follow the dragged object with a zoom region until you get to the appropriate destination. This all gets even sexier once eye-tracking becomes more available – the area you’re looking at would bubble up to full resolution, but you’d still be able to quickly scan the entire page and re-target the area of interest. Somebody get busy on this, okay?)
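For the multi-point case, one illustrative way to stretch the single-focus sketch above is to let whichever focus is nearest to a given point own the warp for it. This is a deliberately crude partition (you’d see seams where two lens regions meet, so a real implementation would blend between foci), but it shows the shape of the idea; `applyMultiLens` is hypothetical and builds on `Point` and `applyLens` from the earlier sketch:

```typescript
// Warp a point using whichever of several touch-point foci is closest.
// Simplest possible multi-focus scheme; behaves best when lens regions
// don't overlap much.
function applyMultiLens(p: Point, foci: Point[], width: number, height: number, k = 3): Point {
  if (foci.length === 0) return p;
  // Find the focus nearest to p...
  let nearest = foci[0];
  let bestDist = Infinity;
  for (const f of foci) {
    const d = Math.hypot(p.x - f.x, p.y - f.y);
    if (d < bestDist) { bestDist = d; nearest = f; }
  }
  // ...and apply that single lens, exactly as before.
  return applyLens(p, nearest, width, height, k);
}
```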

Of course it’s not like nobody’s started down this road – there are plenty of what I’d consider ‘minor’ examples, including the optional magnification behavior of the ‘Dock’ application launcher under OS X. (Although that particular implementation exists primarily to make the target icon easier to find rather than to add information in the enlarged area.) But the general concept of an adaptive interface that is smart about where it shows you more detail is really only in its infancy.

Following the same thoughts in a slightly different direction, I’m wondering whether anybody has done a video game yet where this sort of rendering is implemented. In such a scenario we’d have the bulk of the image presented in the normal fashion, but as you get nearer to the edge of the screen you’d have a much larger field-of-view (out to a full 180 degrees) compressed into a relatively small space. Yes, you wouldn’t be able to see a whole lot of detail about what exactly is going on to your extreme right or left, but you would be able to see/sense any anomalous motion along the borders… exactly the same sort of thing your peripheral vision provides. (A rough sketch of one way to implement the mapping follows the examples below.) Take a look at these examples: for this first one we’ve got a normal rendering of the scene. Looks safe enough out there. Relax.

Now consider the same setting where we’re rendering with peripheral vision implemented.

See the guy on the left-hand side? The one with the BIG GIANT GUN who’s getting ready to SHOOT YOU IN THE FACE…

(Click here to see a video comparison of what these look like in action.) [UPDATE, OCT 17: Vimeo has some silly policy about not allowing ‘Video Game Footage’ on their site, so they just took down the videos. This now links over to a slightly lower-rez version on Flickr.]
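And here’s the sketch of the screen-to-angle mapping I promised above, again in TypeScript. Everything here is an illustrative assumption of mine – the names (`screenToViewAngle`, `castRay`), the 80% central zone, and the ±45°/±90° split – and the per-column raycaster framing is just the simplest place to hang the idea; a modern polygon engine would more likely render to a wide-FOV target and warp it in a post-process pass:

```typescript
const CENTER_FRACTION = 0.8;          // central 80% of the screen renders normally...
const CENTER_HALF_FOV = Math.PI / 4;  // ...covering a conventional +/-45 degrees
const TOTAL_HALF_FOV = Math.PI / 2;   // edge bands compress the view out to +/-90 (180 total)

// Map a normalized screen coordinate sx in [-1, 1] to a view angle in
// radians. Inside the central zone the mapping is linear, like a normal
// projection; in the outer bands the remaining 45 degrees per side get
// squeezed into the last 10% of screen width -- the peripheral-vision strip.
function screenToViewAngle(sx: number): number {
  const s = Math.abs(sx);
  const sign = sx < 0 ? -1 : 1;
  if (s <= CENTER_FRACTION) {
    return sign * (s / CENTER_FRACTION) * CENTER_HALF_FOV;
  }
  const edgeT = (s - CENTER_FRACTION) / (1 - CENTER_FRACTION); // 0..1 across the edge band
  return sign * (CENTER_HALF_FOV + edgeT * (TOTAL_HALF_FOV - CENTER_HALF_FOV));
}

// Stand-in for the engine's per-column renderer: sample the world along
// `angle` and draw one vertical strip of pixels.
function castRay(angle: number): void {
  /* ...engine-specific rendering... */
}

// Cast one ray per screen column at the warped angle.
function renderFrame(screenWidth: number, playerYaw: number): void {
  for (let col = 0; col < screenWidth; col++) {
    const sx = (2 * col) / (screenWidth - 1) - 1; // -1 .. 1 across the screen
    castRay(playerYaw + screenToViewAngle(sx));
  }
}
```

The nice property of keeping the central zone linear is that the part of the scene you’re actually aiming at looks completely normal; only the edge strips get the nonlinear squeeze, which is exactly where you only care about motion, not detail.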

Clearly our survival as a species has relied on exactly this sort of wider field-of-view awareness of our surroundings, and having a game provide the same feeling (at least until we get to the point where fully immersive displays are common) would seem to be a compelling feature.