The gaming industry is at the beginning of a transition to VR that is already prompting some deep re-thinking of the very fundamentals of game development. There’s intense activity in VR right now — from indie developers to triple-A giants (and all this before a single high-end VR headset has shipped to a single consumer). It’s becoming increasingly apparent that there are significant differences between developing a conventional video game and a VR title — differences that extend to every aspect of production, putting unique demands even on the 3D models that populate these environments.

Many great game models will also work perfectly as VR assets, but others will fail in surprising ways. Virtual Reality is a real-time art, and as such the models used to create these experiences must look great while also loading and rendering as quickly as possible. Sure, optimization of assets has always been important for game dev, but VR significantly increases the need for efficiency, while also introducing a host of visual considerations unique to VR.

Render Efficiency

Comfortably rendering VR content to a head-mounted display (HMD) asks a lot of the hardware driving the experience. The upcoming Oculus Rift and HTC Vive headsets each demand that 1,296,000 pixels be rendered per eye (2,592,000 pixels total) at a breathtaking 90 frames per second. That nearly doubles the pixel-throughput target of the 1080p/60fps standard for high-end PC and console gaming. But that’s just half the story: while VR’s throughput requirements are staggering in and of themselves, they’re also notoriously inflexible. In conventional gaming you can dial down the resolution a bit and still have a great experience, and if you dip below 60fps now and then it’s not the end of the world. In VR the stakes are considerably higher: failure to maintain minimum resolutions and frame rates can wrench the user out of the sense of presence that VR enables, and can even make users physically ill. The throughput requirements of mobile VR platforms such as Google Cardboard and Samsung Gear VR are somewhat lower than those of their PC and console counterparts, but tend to be even less forgiving when not met.
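As a sanity check on those numbers, here’s a quick back-of-the-envelope calculation (assuming the 1080×1200-per-eye panels of the Rift and Vive):

```python
# Back-of-the-envelope pixel throughput: VR (Rift/Vive) vs. 1080p/60.
EYE_W, EYE_H, VR_FPS = 1080, 1200, 90    # per-eye panel resolution, refresh rate
MON_W, MON_H, MON_FPS = 1920, 1080, 60   # conventional high-end target

vr_pixels_per_frame = 2 * EYE_W * EYE_H         # both eyes
vr_throughput = vr_pixels_per_frame * VR_FPS    # pixels shaded per second
monitor_throughput = MON_W * MON_H * MON_FPS

print(vr_pixels_per_frame)                  # 2,592,000 pixels per stereo frame
print(vr_throughput / monitor_throughput)   # 1.875x the 1080p/60 target
```

That ~1.9x figure also ignores the extra supersampling most VR runtimes apply to compensate for lens distortion, so the real-world gap is wider still.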

Stereoscopic Fidelity

Stereoscopy is the process by which your brain infers the spatial structure of your surroundings by comparing the slightly offset points of view of your left and right eyes. By interpreting that offset, your brain is able to build an intuitive understanding of your space and the objects within it. VR leverages this process by rapidly displaying pairs of offset images, each to the correct eye. In the presence of true depth perception, and with the nearly limitless freedom VR gives users to explore their virtual space, a few of the techniques that have been used for decades to optimize 3D models for real-time use can no longer be recommended.

3D Asset Creation — Important Considerations for VR

Baking Geometry into Textures Is Not Nearly as Effective as It Used to Be

When your eyes converge on a nearby object your brain receives critical spatial information. The first bit of info arrives as a pair of offset images: one from the right eye and one from the left. The slight differences between those two images, combined with the degree of convergence of each eye (how “crossed” your eyes are when the images were focused onto the retinas), provide you with an intuitive understanding of the 3D-ness of your environment.

In the absence of stereoscopic vision, you have no way of determining whether this surface has recessed bullet holes punched through the metal, or has simply had photorealistic bullet-hole decals applied.

When a game is displayed on a standard PC monitor (or any other single screen), the brain no longer has access to a pair of stereoscopic images, and regardless of whether you’re looking at distant or nearby objects on screen, your eyes don’t converge any differently, since really you’re focusing on screen pixels and not on physical objects in space. Game devs have long taken advantage of this spatial blindness by removing render-hogs like detailed geometry and replacing them with lightweight texture-based substitutes, and for all but the most oblique angles of view, this has worked well enough.

For conventional game development, the hack on the right works well enough in many cases. With the help of normal maps it will even dynamically receive light fairly well. Viewed stereoscopically, however, the illusion falls apart just as it would if it were an actual surface half a meter away from you with actual stickers applied — you would know, and you would know immediately.

Viewed close-up in VR, these texture-for-geo techniques can come off as cheap hacks and rapidly pull users out of in-world presence. With all that in mind, here are some things to consider:

Baked Geo Textures Remain an Option for Distant Objects

Items that will remain fairly distant (say, 10 meters or more) are seen by the left and right eye almost identically, with very little convergence. So, if you know that an object will remain distant, the old tricks still work. This is very context-dependent, however, so always consider feature size and complexity when deciding how far is far enough.
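One way to reason about “how far is far enough” is the vergence angle between the two eyes’ lines of sight, which shrinks rapidly with distance. A rough sketch, assuming a typical 64 mm interpupillary distance (the IPD value is an illustrative average, not a figure from this article):

```python
import math

IPD = 0.064  # assumed interpupillary distance in meters (~64 mm average)

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight to a point straight
    ahead at the given distance. Smaller angle = weaker stereo cue."""
    return math.degrees(2 * math.atan(IPD / (2 * distance_m)))

for d in (0.5, 2.0, 10.0, 50.0):
    print(f"{d:5.1f} m -> {vergence_angle_deg(d):.3f} deg")
# At 10 m the angle is only ~0.37 degrees, versus ~7.3 degrees at arm's length.
```

Which is why baked detail that screams “flat sticker” at half a meter can pass unnoticed across the courtyard.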

Parallax Mapping is a Great Compromise

Like displacement mapping, parallax mapping uses a height map to create depth, but does so without actually affecting geometry. At a similar GPU cost, parallax mapping delivers what bump and normal mapping cannot: the convincing appearance of self-occlusion and even fairly accurate shadow casting/receiving. It’s not perfect, and takes some wrangling, but parallax mapping can really deliver when you need the detail without the cost. The major game engines have all embraced parallax mapping (Unreal calls it Bump Offset) and there are now some great third-party tools for parallax mapping out there as well.
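At its core, classic single-sample parallax mapping is just a view-dependent texture-coordinate shift. Here’s that shader math sketched on the CPU in Python (the function name and sign convention are illustrative; engines differ in details like depth vs. height and offset limiting):

```python
def parallax_uv(uv, view_ts, height, scale=0.05):
    """Classic single-step parallax offset. `uv` is the sampled texcoord,
    `view_ts` a normalized tangent-space view vector, `height` the 0..1
    height-map sample, `scale` the artist-tuned depth scale."""
    u, v = uv
    vx, vy, vz = view_ts
    # Shift the texcoord along the view direction, proportional to height:
    # raised texels appear to lean toward the viewer as the view grazes.
    offset = height * scale
    return (u + vx / vz * offset, v + vy / vz * offset)

# Straight-on view: no shift. Oblique view: the sample point slides.
print(parallax_uv((0.5, 0.5), (0.0, 0.0, 1.0), 1.0))   # unchanged
print(parallax_uv((0.5, 0.5), (0.5, 0.0, 0.866), 1.0)) # shifted in U
```

The per-pixel cost is one extra texture fetch and a handful of multiplies, which is why it sits so close to normal mapping on the GPU budget.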

Normal map on the left, and parallax map on the right: each surface is composed of just two triangles

Avoid Planar Construction of Foliage and Ground Cover

Plant life is another place where mapping complex geometry to a simple surface is commonly used to avoid GPU-intensive geometry. Again, stereoscopy greatly diminishes the effectiveness of this technique. Trees are complicated, and there aren’t a ton of great alternatives, so use whatever density your scene can stand; but plant life, especially plant life seen up close, should be structured more like its real-world counterparts whenever possible.

The tree above was constructed by texturing three intersecting planes, as seen on the right. On a monitor, it looks sort of OK. In VR, it looks like three intersecting planes with trees painted on them, and could not possibly be mistaken for a tree.

Polygons in VR Look Exactly Like Polygons IRL

On a flat monitor you tend to smooth out rough edges and faces subconsciously, probably because you are consciously aware that you’re looking at a representation of reality and not reality itself. VR undermines that awareness of artifice. The uncanny valley used to be the exclusive haunt of human faces. No longer. When your brain detects polygonal faceting in VR it readily accepts it as real and intentional, making it possible for even the most mundane items to feel weird and off-putting if poly counts are insufficient. Interestingly, stylized polygonal objects are largely off the hook: they don’t seem to feel strange or incorrect, since they disarm any expectation of reality at a glance. The problem of uncanniness is mostly limited to environments that seek to pass as realistic rather than stylized.

When visible facets are part of the intended design of an object, like the one above, they look great in VR. When polygons are prominent on an object that was intended to be realistic, however, they don’t just look bad, they feel bad too.

Primary Models in VR Must be Complete

It’s tempting to shave expensive polygons by only including them “where needed”. In conventional games this is all well and good; I mean, they’ll only get so close, and they can only crouch so low, right? Well… in room-scale VR, if the user wants to lie down on her back and watch the clouds go by, she can (and, if thousands of play tests are any indication, she probably will), and when she does, she’ll see that the fountain/table/vehicle is missing polys underneath, or is using only rudimentary textures there. The ability in VR to not just explore your environment but really inspect it, and to live in it in a natural way, invites a level of scrutiny that game assets have seldom needed to accommodate in the past.

Transparency is a Massive Resource Hog

Any pixel of a rendered image covered by a transparent surface must be shaded and blended again for every transparent layer over it, and that overdraw can quickly overwhelm the GPU’s fill rate. At VR’s 90fps you really must consider how badly you need transparency in a given situation. Some game engines offer ways of simulating transparency at a reduced render cost (such as Unreal 4’s DitherTemporalAA), but in many cases you may decide that transparency can be sacrificed.
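A rough model makes the cost concrete: each alpha-blended layer forces the pixels it covers to be shaded and blended again. The coverage and layer counts below are illustrative, using the Rift/Vive stereo frame size from earlier:

```python
# Rough fill-rate model for alpha-blended overdraw. Illustrative numbers.
PIXELS_PER_STEREO_FRAME = 2_592_000   # Rift/Vive, both eyes
FPS = 90

def extra_shading_per_second(coverage, layers):
    """Additional fragment invocations per second caused by `layers`
    overlapping transparent surfaces, each covering `coverage` (0..1)
    of the frame."""
    return PIXELS_PER_STEREO_FRAME * coverage * layers * FPS

# Four stacked transparent quads covering a quarter of the view:
print(f"{extra_shading_per_second(0.25, 4):,.0f} extra fragments/sec")
```

Four modest layers over a quarter of the screen already costs as much fragment work per second as rendering one extra full stereo frame 90 times over, which is why particle-heavy effects are among the first things profiled on VR projects.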

VR Loves Texture Atlases

In all real-time applications it is critical to keep GPU draw calls to a minimum, so it’s always preferable to apply a few large textures rather than many smaller ones. With a texture atlas, many different textures are joined into one large texture map and then applied to different parts of the geometry via their coordinates on the main map. Of all the best practices you could adopt to make game assets more VR-friendly, this is arguably the most beneficial.

Four boxes, one draw-call
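Mechanically, atlasing just means remapping each mesh’s local UVs into its tile’s sub-rectangle of the shared map. A minimal sketch (the rect format and names are illustrative, mirroring what typical packing tools export):

```python
def atlas_uv(local_uv, tile_rect):
    """Remap a mesh's local 0..1 UVs into a sub-rectangle of a texture
    atlas. `tile_rect` = (u0, v0, width, height) in atlas coordinates."""
    u, v = local_uv
    u0, v0, w, h = tile_rect
    return (u0 + u * w, v0 + v * h)

# A 2x2 atlas of four box textures; the "crate" tile is the top-left quarter:
crate_rect = (0.0, 0.5, 0.5, 0.5)
print(atlas_uv((1.0, 1.0), crate_rect))  # -> (0.5, 1.0)
```

One caveat worth knowing: tiles usually need a few pixels of padding (gutters) so mipmapping and filtering don’t bleed neighboring textures across tile edges.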

It’s Really Not that Bad, and Help is on the Way

Most of these suggestions for VR will also hold benefits for real-time GPU rendering across the board, and once they become part of your workflow they shouldn’t add much more effort to your process. Also, the cavalry is on the way in the form of the next generation of GPUs. NVIDIA’s Pascal and AMD’s Polaris GPUs (both built on new FinFET processes) are expected to provide nearly ten times the power of current consumer cards, not to mention the huge efficiencies promised by the upcoming releases of DirectX 12 and Vulkan.