
Google’s Daydream Labs team has been prototyping ways of animating objects and characters in Blocks, the company’s recently released VR modelling tool. In a recent entry on the Google Blog, senior UX engineer Logan Olson describes how it could give users the power to “create expressive animations without needing to learn complex animation software.”

With its low-poly aesthetic and simple menu systems, Blocks is perhaps the least intimidating 3D modelling tool currently available for VR, and the Daydream Labs team looked to retain that approachability as they prototyped animation systems during their ‘one-week hackathon’. Olson explains that this boils down to three steps: preparing the model, controlling it, and finally recording sequences for playback.

First, the static models created in Blocks require some ‘prep’: adding appropriate control points and joints for inverse kinematics (for models with a rigid skeleton), or setting up a ‘shape matching’ technique that works better for ‘sentient blobs’ or anything with a less defined shape, which is good for ‘wiggling’. Olson explains that there is a short setup process for shape matching, but it “could eventually be automated”.
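To illustrate the kind of inverse kinematics a rigged model enables, here is a minimal sketch of analytic two-bone IK in 2D, the textbook building block behind dragging a hand or foot to a target and having the joints follow. This is a generic illustration, not code from Blocks:

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Analytic two-bone inverse kinematics in 2D.

    Given bone lengths l1 and l2 and a target (tx, ty) relative to the
    root joint, return (shoulder, elbow) angles in radians that place
    the end effector on (or as close as possible to) the target.
    """
    dist = math.hypot(tx, ty)
    # Clamp to the reachable range: between |l1 - l2| and l1 + l2.
    dist = max(abs(l1 - l2), min(l1 + l2, dist))
    # Law of cosines gives the elbow bend (relative angle of bone 2).
    cos_elbow = (dist**2 - l1**2 - l2**2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to target, offset by the interior angle
    # of the triangle formed by the two bones and the target line.
    cos_inner = (dist**2 + l1**2 - l2**2) / (2 * l1 * dist)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow
```

Dragging a control point in VR would call a solver like this every frame, so the skeleton tracks the user’s hand continuously.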

Once a model is prepared, controlling its movement is where VR is at its most intuitive: motion-tracked hardware makes a simple form of motion capture readily available, although it isn’t always appropriate, depending on what’s being animated. Olson references Mindshow, a creative app due to launch into open beta soon, which embraces this ‘puppeteering’ technique. “People loved ‘becoming’ the object when in direct control,” writes Olson. “Many would role-play as the character when using this interface.”
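At its core, puppeteering means driving an object’s transform from a tracked controller every frame, often with a little smoothing so the motion doesn’t look jittery. A minimal sketch of that idea (class and parameter names are illustrative, not from Blocks or Mindshow):

```python
class Puppet:
    """An object that mirrors a tracked controller's position each frame."""

    def __init__(self, smoothing=0.25):
        self.position = (0.0, 0.0, 0.0)
        # 0.0 = snap to the controller instantly; closer to 1.0 = heavier lag.
        self.smoothing = smoothing

    def update(self, controller_position):
        """Blend toward the controller's position (exponential smoothing)."""
        a = 1.0 - self.smoothing
        self.position = tuple(
            p + a * (c - p) for p, c in zip(self.position, controller_position)
        )
        return self.position
```

Calling `update()` once per rendered frame yields the ‘direct control’ feel Olson describes, with the smoothing constant trading responsiveness against steadiness.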

Alternatively, you can simply grab specific control points of objects and manipulate them, which also works well with multiple users, or you can directly pose the skeleton to set keyframes, which Olson notes is ‘much more intuitive’ than in traditional apps thanks to the spatial awareness and control afforded by VR.
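Posed keyframes only become an animation once something interpolates between them. The standard approach is linear interpolation over a sorted list of (time, value) keyframes, sketched below as a generic illustration:

```python
def sample_keyframes(keyframes, t):
    """Linearly interpolate a value from sorted (time, value) keyframes.

    Times outside the keyframe range are clamped to the first/last pose.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)  # normalized position in this segment
            return v0 + u * (v1 - v0)
```

A real tool would interpolate every joint angle of the posed skeleton this way (typically with easing curves rather than straight lines), but the sampling logic is the same.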

Finally, recording and playing back movements could be done ‘pose-to-pose’ or via ‘live-looping’: the former builds complex animations from a sequence of keyframe poses, while the latter suits simpler animations, recording movement in real time for playback in a repeating loop. “Press the record button, move, press the button again, and you’re done—the animation starts looping,” writes Olson. “We got these two characters dancing in under a minute.”
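The live-looping workflow Olson describes amounts to storing timestamped poses between the two button presses and then replaying them with the playback clock wrapped modulo the recording’s length. A minimal sketch of that idea (names are illustrative; Blocks’ actual implementation is not public):

```python
class LoopRecorder:
    """Record timestamped poses, then replay them on a repeating loop."""

    def __init__(self):
        self.frames = []      # list of (timestamp, pose) pairs, in order
        self.duration = 0.0   # length of the recording, in seconds

    def record(self, t, pose):
        """Append a pose captured at time t (seconds since record start)."""
        self.frames.append((t, pose))
        self.duration = t

    def sample(self, t):
        """Return the last recorded pose at or before loop time t."""
        t = t % self.duration if self.duration else 0.0
        pose = self.frames[0][1]
        for ft, fp in self.frames:
            if ft <= t:
                pose = fp
            else:
                break
        return pose
```

After recording, calling `sample()` with an ever-growing clock value replays the motion forever, which is exactly the “press record, move, press again” loop quoted above.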

As a proof of concept, the experimentation appears to be a success, although it will likely require further refinement before the team considers rolling out these features into a future Blocks update.