We’re still evolving and learning as we go. Every day presents a new challenge that keeps us on our toes and humble. We wanted to keep that “wow” factor, so we had to rethink what was important and what was mandatory. Even a minor change, like adding a single stationary light, can change performance in VR.

Purchasing content

We started working heavily on the first demo in December 2016 and by January 2017 we had a very rough proof of concept in the works. In early January, we hired our Executive Producer who has over 20 years of AAA experience. After he saw what we had, we talked about an aggressive plan to make GDC with a vertical slice of the game. The staff was on board and this was our first major challenge as a company. Could we make a basic prototype that would get the feeling of the game across in only 2 ½ months of development time?

The demo for GDC was done in Unity and we set an aggressive schedule to get it completed on time. The demo included an outside area, along with one level of the game to demonstrate travel, immersion and the unique elements of the game. The GDC demo had voice-overs, complex animations, effects, missions and some pretty cool AI mixed in. The reaction from the companies we met with, which included Microsoft, Sony, Valve and Twitch, to name a few, was very positive. That said, I was saying a little prayer to myself each time someone put the headset on, in hopes they would have a great experience.

Of course, we still had a tremendous amount of work to do after GDC. But we accomplished what we set out to do, which was to show we could quickly put together a vertical slice that captured the essence of the game.

In order to make GDC, we needed the internal team to focus on the core assets that defined the game, while finding alternative ways to create the secondary assets. One of the major stumbling blocks was the need for a fully modeled cityscape surrounding the rooftop your character starts on. At this point, we had about 10 people involved in the project. Instead of pulling those resources to create the city, I decided to check out TurboSquid to see if we could find some ready-built assets that fit into the game.

Because this was VR, we needed to make sure the assets were already optimized, met certain texture and material requirements and looked the part once you put the headset on. After searching and locating a few asset packs of sci-fi themed buildings, I had our level designer look at the details and verify they would work. What’s great about TurboSquid is the amount of detail you get when looking at their models. You can clearly see whether a model is game-ready, what its poly/tri count is, whether it has been textured, unwrapped or animated, what maps come with it and what file types are included. We made the purchase and decided to try the building pack out in game. Within a few hours we had a skyline surrounding the rooftop, and it saved us a couple of weeks of development time so we could stay on schedule and make GDC.

Once we returned from GDC, we switched to Unreal and have been recreating all of the assets. We want every model in the game to be unique, original and created by our art team. This includes the buildings and items we used in the original demo.

We’ve tested many ideas out in VR to get our level building in line with the vision of the game. For example, we wanted to test a camera moving along a cityscape with the player attached to it. Instead of needing to create a new city, we were able to reuse the existing TurboSquid buildings to quickly assemble a city skyline. This was a major time and cost saver for this small experiment.

Optimization

This has been the biggest learning curve to date. We could write several articles on the trials and tribulations of optimizing content for VR. One of the lessons we learned quickly was to profile the project constantly. We’ve had dozens of instances where a model, particle effect or animation looks fantastic onscreen and runs extremely well in the editor, but when you bring it into VR it kills performance.

Now, when we make any major changes or additions to the game, we profile the project immediately. We check everything against our previous baseline scores: FPS, latency, draw calls, lightmap density, shader complexity, triangles on screen and everything else. We record these results and refer to them when running a new set of profiling tests. With numerous people contributing to a project, we found it very helpful to constantly profile levels. Something as simple as adjusting a particle effect can wreak havoc with performance in VR.
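The baseline comparison described above can be sketched as a small script. This is a hypothetical illustration: the metric names, baseline values and 5% tolerance are made-up examples, not numbers from our project or any real profiler.

```python
# Hypothetical sketch: compare a new profiling capture against a stored
# baseline and flag any metric that regressed beyond a tolerance.
# Metric names, values and the tolerance are illustrative only.

BASELINE = {"fps": 90.0, "draw_calls": 850, "tris_on_screen": 1_200_000}

def find_regressions(current, baseline=BASELINE, tolerance=0.05):
    """Return metrics that got worse by more than `tolerance` (default 5%)."""
    regressions = {}
    for name, base in baseline.items():
        new = current.get(name)
        if new is None:
            continue
        # fps should not drop; every other metric should not rise
        worse = (base - new) if name == "fps" else (new - base)
        if worse > abs(base) * tolerance:
            regressions[name] = (base, new)
    return regressions

latest = {"fps": 82.0, "draw_calls": 870, "tris_on_screen": 1_150_000}
print(find_regressions(latest))  # flags the fps drop from 90 to 82
```

Recording each run this way makes it obvious which change introduced a regression, even when many people are contributing to the same level.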

There is also the extra challenge of getting things to run at a constant 90fps. We try to consider everything when building a level in VR. We’ve had intense discussions with the level designers about building specific geometry to maximize resources, reduce draw calls and limit drops in framerate. We’ve put real thought not only into how levels and models can “look cool”, but into how we can design them with VR in mind. For example, we are making an effort to design intentional, story-driven geometry that supports occlusion culling as part of our strategy.
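A constant 90fps translates to a hard per-frame budget of roughly 11.1 ms, which is a useful way to frame those design discussions. The cost breakdown below is purely illustrative; the arithmetic is the point.

```python
# Back-of-the-envelope frame budget at 90 fps: 1000 ms / 90 ≈ 11.11 ms.
# The per-system costs are made-up illustrative numbers, not measurements.

TARGET_FPS = 90
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ~11.11 ms per frame

costs_ms = {
    "geometry": 4.0,
    "lighting": 3.0,
    "particles": 2.5,
    "game_logic": 1.0,
}

total = sum(costs_ms.values())
print(f"budget: {FRAME_BUDGET_MS:.2f} ms, spent: {total:.2f} ms, "
      f"headroom: {FRAME_BUDGET_MS - total:.2f} ms")
```

With barely half a millisecond of headroom in a breakdown like this, it is easy to see how one extra particle effect or light can push a frame past budget and cause a visible drop.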

Scale

Getting scale correct was something we committed to right from the start. Unlike a traditional first-person shooter, VR surrounds you completely, which makes correct scale incredibly important. We found the key was to scale things according to real-world sizes, which meant actually measuring items in the engine.
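In practice, mapping real-world measurements into the engine is simple arithmetic: Unreal's default unit is one Unreal unit per centimeter, so meters convert by a factor of 100. The object list below is an illustrative example, not our actual asset measurements.

```python
# Unreal's default scale is 1 Unreal unit (uu) == 1 cm,
# so real-world sizes in meters convert by a factor of 100.
# The objects and sizes listed are illustrative examples.

UU_PER_METER = 100  # Unreal default: 1 uu = 1 cm

def meters_to_uu(meters):
    """Convert a real-world measurement in meters to Unreal units."""
    return meters * UU_PER_METER

real_world_m = {"door_height": 2.0, "table_height": 0.75, "ceiling": 2.4}
for name, size_m in real_world_m.items():
    print(f"{name}: {meters_to_uu(size_m):.0f} uu")
```

Building props to these measured sizes keeps the sense of scale consistent once the headset goes on, where even small mismatches are immediately noticeable.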