Wow – $9.1 million and climbing! The team at Cloud Imperium is glued to the site watching the counter; we simply can’t believe the response to the Aurora sale. There are three days left to get your Aurora LX, so please pledge if you are interested… and spread the word if you’re already a backer!

Mocap Stretch Goal

Today we’d like to talk more about our $10 million motion capture stretch goal. Some backers have noticed that this isn’t so much a reward as it is a statement. That’s the point: we want to share the development process with you in a mature fashion. There will be rewards for our longtime supporters, but we also want to tell you exactly what we’re doing with the additional money. In this case, that’s building a motion capture studio, something that will significantly enhance Star Citizen!

For those unfamiliar, motion capture is the process of digitally recording movement. Actors are fitted with sensors and recorded with special cameras that track how they move. Films like Avatar use this technology to let their CGI characters act just like humans: actors wearing sensor suits are filmed as though they were acting out regular scenes, then overlaid with their computer-generated characters. In gaming, the process is much the same, although not always so linear: the data is handed to the animation team so they can create more lifelike characters.

Games have been using motion capture for years: Origin Systems pioneered the process in the mid-1990s, utilizing “Flock of Birds” technology to build Bioforge. Since then, it has become a standard for serious game development that requires “actors” (largely supplanting the live action shoots that Chris Roberts popularized with Wing Commander III.) For a game like Star Citizen, motion capture would be used both to record cut scenes and to decide “moves” – how your interactive characters will react to various inputs and stimuli.

There are essentially three levels of mocap: body movement capture, facial capture and full performance capture, which combines the two. In body capture, we record full-body movements. In facial capture, we record nuanced facial expressions (including lips, to match voice actors.) Performance capture records all of this at once to make for the best possible high end performance.

We haven’t done any motion capture yet; the team is currently using reference moves that will need to be replaced before the game is finished. We did conduct a reference shoot to get a baseline early in the process, though. This was a much less complicated affair, using a model to generate footage we can refer to when building our characters. Here’s a Wingman’s Hangar segment explaining that shoot:

Unfortunately, motion capture is expensive. Very few studios have their own motion capture rigs: typically, development teams rent the technology, studio space and talent for a limited amount of time. A day of motion capture costs between $25,000 and $50,000 and yields roughly 200 “moves”: simple gestures, limb movements and so on. More complex shoots that require props, additional actors, finger tracking and other factors are significantly more expensive. Still more expensive are shoots that capture audio and facial movement. This expense-to-benefit ratio means that a great deal of preparation is required for a mocap shoot… and that messing up, or deciding later that you want something more in the game, means another chunk of money.
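To put those rental figures in perspective, the effective price per animation works out like this (a quick sketch using the ballpark numbers quoted above, not an official studio rate card):

```python
# Rough per-move cost of a rented mocap day, using the figures from this post
# (ballpark numbers, not a quote from any studio).
DAY_COST_LOW = 25_000   # USD, low end of a capture day
DAY_COST_HIGH = 50_000  # USD, high end of a capture day
MOVES_PER_DAY = 200     # simple gestures, limb movements and so on

per_move_low = DAY_COST_LOW / MOVES_PER_DAY
per_move_high = DAY_COST_HIGH / MOVES_PER_DAY
print(f"~${per_move_low:.0f}-${per_move_high:.0f} per move")  # ~$125-$250 per move
```

In other words, every simple gesture costs somewhere in the range of a hundred to a few hundred dollars before props, extra actors, faces or audio enter the picture.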

What we want to do is build our own studio: dedicate an area for mocap and purchase our own mocap system outright. Buying would cost more up front than we have currently budgeted for mocap leasing… but the result would improve the game significantly. With our own mocap system we could generate cutscenes and moves as we determine they are needed, which will be especially valuable for the Star Citizen live team charged with feeding the game constant content! It’s even conceivable that we could rent it out when not in use, ultimately funneling more money into Star Citizen’s development!

Lead Animator Bryan Brewer is currently looking at two potential mocap systems for body movement. The first is the Vicon system, which he calls the Ferrari of mocap rigs. We would purchase sixteen of their 2.0 megapixel T20S cameras and sixteen of their 1.0 megapixel T10S cameras for roughly $230,000. A second option is OptiTrack, the “Porsche” of mocap systems, which would pair twenty-four 4.1 megapixel Prime 41 cameras with two Prime 17 cameras for significantly less: $150,000.
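Set against the rental rates quoted earlier, either system pays for its camera hardware surprisingly quickly. Here’s a rough break-even sketch using only the figures in this post (it ignores studio space, rigging, staff and actors, so the true break-even point would be later):

```python
# Break-even estimate: how many rented mocap days each camera package
# would pay for. Uses only the ballpark figures quoted in this post;
# ignores studio space, rigging and actor costs.
RENTAL_DAY_LOW = 25_000   # USD per rented capture day (cheap end)
RENTAL_DAY_HIGH = 50_000  # USD per rented capture day (expensive end)
SYSTEMS = {"Vicon": 230_000, "OptiTrack": 150_000}

for name, price in SYSTEMS.items():
    fewest_days = price / RENTAL_DAY_HIGH  # if rentals are expensive
    most_days = price / RENTAL_DAY_LOW     # if rentals are cheap
    print(f"{name}: pays for itself after ~{fewest_days:.1f}-{most_days:.1f} rental days")
```

On those numbers, the hardware roughly equals somewhere between three and ten days of rented studio time.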

This hardware is just the start: it would track body motions, but an additional system would be needed for the game’s facial animations. For this, we are looking at a product called FaceWare. Other money would go towards a physical space to install the system and various pieces of rigging. Finally, some of the additional million would go to professional actors. While team members and other volunteers are suitable for simple motions, real actors are needed for some captures!

We genuinely believe our own mocap studio would significantly improve Star Citizen. The team is very excited about the possibilities for expanding the game beyond what our original budget allowed. We think this is a great example of how we can be responsible with additional money… and a way to continue showing our backers that we’re doing the right thing with their pledges!

Finally, we asked Bryan a few questions to clarify issues we felt fans would want addressed:

Why the Vicon motion capture system?

Vicon offers the latest in motion capture technology. The cameras shoot at over 120fps and can capture large volumes with multiple actors. The cameras also have a high resolution, which means we can use smaller markers, and smaller markers produce more accurate data. The Vicon solution also works with FaceWare head cams, the same technology used in movies like Avatar. The data produced by a high end system like this is much cleaner, with less cleanup needed in the solving. Other systems’ data is usually much more difficult to work with, causing longer solves and more editing to make animations game-ready. The motion capture stage we want to build is very similar to the one Naughty Dog built for their Uncharted games. This will offer us the ability to capture body, fingers, face, props and audio. Being able to do this in house rather than outsourcing to a motion capture studio will afford us the ability to create animations whenever we want and not be forced to hold off because of someone else’s schedule.

Are cheaper unconventional solutions possible?

I have looked into other capture solutions, like setting up two Kinects. The problem with that is the limited capture volume and only being able to capture one character at a time. Also, there is heavy cleanup needed on the data produced by those types of systems. It is good tech for prototyping, but not good for production.