Part of my job as Chief Innovation Officer at The Campfire Union is to research concepts in virtual reality. Inspired by the article "Making Great VR: Six Lessons Learned From I Expect You To Die" by Jesse Schell, I decided to do a crash course in depth cues by deconstructing them in Blocked In from The Shoebox Diorama.

Here is a 10-minute video exploration I made on the topic.

I based my research on the following two articles I found on depth cues.



http://www.hitl.washington.edu/.../virtual-worlds/EVE/III.A.1.c.DepthCues.html

http://www.colorado.edu/physics/phys1230/phys1230_fa08/VisualCues.pdf

Here is a summary of the concepts presented in the Blocked In deconstruction.



Accommodation is the effect where, when you focus on an object up close, objects in the background blur out. This works well when you are looking at objects 2 meters or closer. Edit: Accommodation does not work in VR. It can be simulated, but only once eye tracking becomes part of the VR hardware spec.

Convergence is measured by how much your eyes rotate inward when looking at objects. This works well within 10 meters. Your brain creates depth by analyzing this eye movement.
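To put a rough number on it, here is a quick Python sketch. The ~6.5 cm spacing between the eyes is the typical figure mentioned later in this post; the formula is plain trigonometry, not anything taken from Blocked In:

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.065):
    """Total angle the eyes rotate inward to fixate an object
    at distance_m, given an interpupillary distance ipd_m."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

# The angle falls off quickly with distance, which is why
# convergence is only a useful cue within roughly 10 meters.
print(round(vergence_angle_deg(0.5), 2))   # ~7.44 degrees
print(round(vergence_angle_deg(10.0), 2))  # ~0.37 degrees
```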

Parallax works with one eye or two. When you move your head, you will notice that an object closer to you appears to move faster than an object far away. Many side scrollers use parallax to create depth.
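A minimal sketch of why this works (the distances and the 10 cm head movement are made-up illustrative values):

```python
import math

def angular_shift_deg(object_distance_m, head_move_m):
    """Apparent angular shift of a stationary object when the
    viewer's head translates sideways by head_move_m."""
    return math.degrees(math.atan(head_move_m / object_distance_m))

# The same 10 cm head movement shifts a near object far more
# than a distant one, and the brain reads that as depth.
print(round(angular_shift_deg(1.0, 0.1), 2))    # ~5.71 degrees
print(round(angular_shift_deg(100.0, 0.1), 2))  # ~0.06 degrees
```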

Binocular disparity takes advantage of the approximately 6.5 cm distance between our eyes. Our right and left eyes see the world from slightly different perspectives. When the brain combines these perspectives, it produces depth.
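Here is a toy pinhole-camera sketch of the idea. The 5 cm focal length is an arbitrary assumption; the point is simply that the gap between the left-eye and right-eye images shrinks with depth:

```python
def eye_image_offsets(depth_m, ipd_m=0.065, focal_m=0.05):
    """Horizontal image-plane position of a point straight ahead
    of the viewer, as seen separately by the left and right eye
    (simple pinhole model)."""
    half_ipd = ipd_m / 2
    left_eye = (half_ipd / depth_m) * focal_m    # appears right of center
    right_eye = (-half_ipd / depth_m) * focal_m  # appears left of center
    return left_eye, right_eye

# The gap between the two images (the disparity) shrinks with depth,
# so a large disparity reads as "near" and a small one as "far".
near = eye_image_offsets(0.5)
far = eye_image_offsets(10.0)
```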

Linear perspective can be observed when looking at straight lines like the edges of rooftops, sidewalks, or roads that converge toward the horizon. The brain uses linear perspective to understand the height of a tall building, or the distance of a street or sidewalk. This can be simulated on paper through one-, two-, or three-point perspective.
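The convergence of parallel lines falls straight out of perspective projection. A minimal sketch (the unit focal length and the 2-meter-wide "road" are illustrative assumptions):

```python
def project(x, y, z, focal=1.0):
    """Pinhole perspective projection: screen position shrinks as
    depth z grows, pulling everything toward the vanishing point."""
    return (focal * x / z, focal * y / z)

# Two parallel road edges at x = -1 m and x = +1 m squeeze toward
# the vanishing point at (0, 0) as they recede into the distance.
for z in (1.0, 10.0, 100.0):
    print(project(-1.0, 0.0, z), project(1.0, 0.0, z))
```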

Distance and size are related. Objects far away are perceived as smaller than objects that are up close.
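This relationship is just the visual angle an object subtends at the eye. A quick sketch (the 1.8 m person is an illustrative example, not from the source articles):

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Angle an object of a given size subtends at the eye."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# A 1.8 m tall person at 2 m versus 20 m: roughly ten times
# farther away means roughly ten times smaller on the retina.
print(round(visual_angle_deg(1.8, 2.0), 1))   # ~48.5 degrees
print(round(visual_angle_deg(1.8, 20.0), 1))  # ~5.2 degrees
```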

The size of common objects is another way we understand scale and distance. If you show common objects like mugs, pencils, and sandwiches in a scene, for example, our brain will start building depth cues based on its prior experience of how big those objects usually are.

A texture gradient is the management of texture detail in relation to the distance of an object. For example, objects up close need to have detailed textures because our brains expect detail when up close. Objects that are far away can be smoother and less detailed because we would normally see less detail in faraway objects.
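A common way engines manage this is mip-map-style level-of-detail selection. Here is a toy heuristic; the resolutions and distance thresholds are made up, not taken from Blocked In:

```python
import math

def pick_texture_resolution(distance_m, base_resolution=2048,
                            full_detail_m=4.0, floor=64):
    """Halve the texture resolution each time the object's distance
    doubles beyond full_detail_m, down to a minimum floor."""
    if distance_m <= full_detail_m:
        return base_resolution
    halvings = int(math.log2(distance_m / full_detail_m)) + 1
    return max(base_resolution >> halvings, floor)

# Close objects get the detailed texture the brain expects;
# distant ones can be smoother and cheaper to render.
print(pick_texture_resolution(1.0))    # 2048
print(pick_texture_resolution(8.0))    # 512
print(pick_texture_resolution(200.0))  # 64
```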

When an object occludes another object, this is called overlapping. My computer monitor is occluding the wall behind it. The wall is occluding the tree outside my office. The tree is occluding the street below. When objects overlap, the brain knows that the closest object is the one that is not occluded by anything. The brain uses overlapping to build depth.
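Renderers exploit the same logic. The classic painter's algorithm draws the farthest objects first so nearer ones overlap them; this sketch reuses the examples above, with invented depths:

```python
# Depths in meters are made-up illustrative values.
scene = [("monitor", 0.6), ("wall", 3.0), ("tree", 8.0), ("street", 30.0)]

# Painter's algorithm: sort back-to-front, then draw in that order
# so each nearer object occludes whatever is behind it.
draw_order = [name for name, depth in
              sorted(scene, key=lambda item: item[1], reverse=True)]
print(draw_order)  # ['street', 'tree', 'wall', 'monitor']
```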

Aerial perspective is observable when looking at objects that are very far away. When looking at something that is close to the horizon or a significant distance away, it looks like it's behind a haze. This is because of particulate matter in the air. Our brain will use this as a depth cue as well.
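Real-time graphics approximates this haze with distance-based fog. A minimal exponential-fog sketch (the density value is an arbitrary assumption):

```python
import math

def fog_blend(distance_m, density=0.02):
    """Fraction of the haze colour blended into an object's colour,
    using the classic exponential fog model: 1 - e^(-density * d)."""
    return 1.0 - math.exp(-density * distance_m)

# Nearby objects stay crisp; distant ones wash out toward the haze.
print(round(fog_blend(10.0), 3))   # ~0.181
print(round(fog_blend(200.0), 3))  # ~0.982
```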



Shades and shadows. If we know where the light source is, we can understand that objects that are lit and not in shadow are closer to the light source than objects that have fallen completely into shadow. Our brain uses this information to understand which objects are closer and farther away.

This is a quick overview of the types of depth cues that can help you understand and construct more compelling virtual environments. Hope you enjoyed learning, and stay tuned for more explorations like this in the future.

- by Lesley Klassen