WHAT IS IT?

You probably get that motion capture involves performers prancing around in tights that have ping-pong balls attached. But what the heck is going on, exactly? Simple: The producers of a game or film want to transmit the complex motion of the performer's body (and face) to an animated character. The process doesn't even need a computer. Animator Max Fleischer invented "rotoscoping" in 1914, a method of creating cartoons like Out of the Inkwell by tracing live-action footage, frame by tedious frame. The first use of rotoscoping in a feature film was in Disney's Snow White and the Seven Dwarfs from 1937.

Even when animators are creating character movements by hand, they often reference video footage, study someone acting out a scene or even look at themselves in a mirror. Creating digital animation by hand is known as "keyframing" -- or filling in the movement of a character between different "keyframe" poses over time.
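The "filling in" step above can be sketched in a few lines of code. This is a minimal, hypothetical example of in-betweening by linear interpolation between two keyframe poses; real animation software uses spline curves and quaternion rotations, but the core idea is the same.

```python
# Minimal sketch of keyframe "in-betweening": linearly interpolating
# a rig's joint rotations between two keyframe poses. The joint names
# and values here are made up for illustration.

def lerp(a, b, t):
    """Linear interpolation between values a and b at parameter t in [0, 1]."""
    return a + (b - a) * t

def inbetween(key_start, key_end, frame, start_frame, end_frame):
    """Blend every channel of two keyframe poses at a given frame."""
    t = (frame - start_frame) / (end_frame - start_frame)
    return {joint: lerp(key_start[joint], key_end[joint], t)
            for joint in key_start}

# Two keyframe poses: joint rotations in degrees (hypothetical rig).
pose_a = {"elbow": 0.0, "shoulder": 10.0}
pose_b = {"elbow": 90.0, "shoulder": 40.0}

# The animator poses frames 1 and 25; the software fills in frame 13.
print(inbetween(pose_a, pose_b, 13, 1, 25))  # {'elbow': 45.0, 'shoulder': 25.0}
```

Halfway between the two keyframes, every joint lands halfway between its two keyed values -- which is exactly the tedium mocap automates.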

To automate that process, animators looked to motion capture. Bio-kinetic researchers like Simon Fraser University's Tom Calvert were breaking new ground with mechanical capture suits. The Character Shop created the "Waldo" face and body capture devices (shown above), used by an actor to drive a Nintendo Mario avatar that interacted with crowds at trade shows. Meanwhile, the Massachusetts Institute of Technology developed its LED-based "graphical marionette," one of the first optical motion-tracking systems. An early animation exploiting that tech is the infamous, creepy Dozo music video from pioneering firm Kleiser-Walczak (seen below).


Early on, mocap was a studio-only process where tight-suited actors performed alone on barren sets surrounded by special cameras and lights. Avatar popularized "performance capture," which added multiple performers, facial expressions and lip movement. Games like L.A. Noire also drastically improved realism by combining facial and full-body capture. Lord of the Rings, meanwhile, brought mocap out of the studio and onto the set, allowing pioneering mocap actor Andy Serkis to interact with other actors as Gollum. On-set performance capture (including the face) is now the norm for feature films with digital characters, as seen below in Dawn of the Planet of the Apes. (Yes, that's Serkis again -- he's pretty popular.)

HOW DOES IT WORK?

Motion capture transfers the movement of an actor to a digital character. Systems that use tracking cameras (with or without markers) are called "optical," while systems that measure inertial or mechanical motion are "non-optical." An example of the latter is the Xsens MVN inertial capture suit worn by Seth Rogen playing the alien in Paul. Other tech has emerged lately, like Leap Motion's finger-tracking depth camera system and the Myo armband, which detects forearm muscle activity to read hand and wrist gestures. Project Tango from Google is being used mostly for mapping, but with its Kinect-like depth sensors, it also has the potential for mocap.

Optical systems work by tracking position markers or features in 3D and assembling the data into an approximation of the actor's motion. Active systems use markers that light up or blink distinctively, while passive systems use inert objects like white balls or just painted dots (the latter is often used for face capture). Markerless systems use algorithms from match-moving software to track distinctive features, like an actor's clothing or nose, instead of markers. Once captured, motion is then mapped onto a virtual "skeleton" of the animated character using software like Autodesk's MotionBuilder. The result? Animated characters that move like real-life performers.
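One small piece of that marker-to-skeleton step can be illustrated in code. The sketch below shows how a single joint angle could be recovered from three tracked marker positions; the marker coordinates are invented, and real solvers (in tools like MotionBuilder) fit an entire skeleton at once rather than one joint at a time.

```python
# Hypothetical sketch of one step in turning optical marker data into
# skeleton motion: computing the angle at a joint from three tracked
# 3D marker positions (e.g. shoulder, elbow and wrist markers).

import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by 3D markers a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]   # vector from elbow to shoulder
    v2 = [c[i] - b[i] for i in range(3)]   # vector from elbow to wrist
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Made-up marker positions (in meters) for one captured frame.
shoulder = (0.0, 1.4, 0.0)
elbow    = (0.3, 1.4, 0.0)
wrist    = (0.3, 1.1, 0.0)

print(round(joint_angle(shoulder, elbow, wrist)))  # 90 -- a right-angle elbow bend
```

Run per frame across every joint, angles like this drive the virtual skeleton, which in turn deforms the character mesh.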

It's difficult to predict how an actor's movement will translate to an animated character, so "virtual cinematography," developed by James Cameron for Avatar, is often used. In a nutshell, that shows the digital character moving with the actor in real time -- on a virtual set -- so the director can see a rough version of the "performance." That involves plenty of math, but computers and graphics cards are now fast enough to pull it off. The video below from Weta Digital for The Hobbit: The Desolation of Smaug illustrates the process.

HOW MUCH DOES IT COST?

Nothing to do with 3D animation is cheap, motion capture included. But, like anything digital, prices have come way down as of late. On the low end of the scale, you or I can do markerless motion capture at home with a Kinect and iPi Motion Capture software for $295. On the other end of the scale, EA's new Capture Lab (pictured below) covers 18,000 square feet, and uses the latest Vicon Blade mocap software and 132 Vicon cameras. We don't know exactly how much that cost them, but a two-camera Vicon system with one software license is $12,500. (Bear in mind that you'll also need software like MotionBuilder to map the capture data to a character, which runs about $4,200 per seat.) Despite those prices, doing motion capture reportedly costs anywhere from a quarter to half as much as keyframe animation, and results in more lifelike animation.

WHAT'S THE ARGUMENT?

Lifelike? Meh. Lots of folks hate mocap, plain and simple. If you're one of them, it's hard to beat classic Nintendo-style games and old-school, hand-animated cartoons like Spirited Away or the Warner Bros. Looney Tunes series. Those were done by animation giants like Chuck Jones and Hayao Miyazaki, who applied an artistic sensibility -- and thousands of hours of hand-drawn animation -- to create memorable characters. Though Serkis' mocap performance is indelibly etched into Gollum's split personality, considerable work was done by keyframe animators to refine the character. Serkis, however, famously took full credit and called the animators' work "digital makeup."

For producers, motion capture might be a tempting way to save money. But most of the time, mocap data isn't usable "out of the box," and often requires considerable (expensive) cleanup. The end result may also not be what producers expect. When a character's movement is almost, but not quite, human, you're in uncanny valley territory and risk repulsing your audience. By contrast, we recently saw a video game called Cuphead (below) that charmed us using 1930s-style, hand-drawn animation.

Yet motion capture has its place. Modern video games demand realistic character movement to ratchet up immersion. Mocap cinema characters like LOTR's Gollum, Captain Davy Jones from Pirates of the Caribbean and Benedict Cumberbatch's Smaug have all become classics, thanks in part to the actors who portrayed them. And that's the essence of mocap, isn't it? The best way to get a memorable digital character is from an equally memorable performance by a talented, larger-than-life performer.

WANT EVEN MORE?

Are you looking to get into 3D animation and/or motion capture? There's lots of free stuff! Autodesk will let you try most of its programs without restrictions free for 30 days (students get it free for three years) -- MotionBuilder and Maya or 3DS Max are good places to start. Autodesk also has plenty of tutorials and tips in its Area section. If you have a couple of Sony PS Eye cams or a Kinect lying around, you can also play with a free trial from iPi Soft. To skip that part and try motion files that have already been captured, Carnegie Mellon University has thousands of them here, and you can use a viewer like this to preview them. If you're just a fan of 3D and mocap, you can check the sites for digital effects and gaming shops like Weta Digital, EA's Capture Lab, Digital Domain and of course, Industrial Light & Magic.

[Image credits: Ethan Miller/Getty Images (Hockey mocap); Disney Studios/JamieLeto/Reddit (Snow White rotoscoping); The Character Shop (Waldo suit); Chernin Entertainment/20th Century Fox (Serkis in Dawn of the Planet of the Apes); EA Capture Lab (studio, dog)]