SIGGRAPH 2015 is taking place in LA right now, gathering the people pushing the boundaries of what's possible with computer graphics. We've already seen it produce a highly realistic new method for animating clothing, and now a couple of other fascinating projects have been revealed, both of which draw us deeper into the uncanny valley of human simulation.

The first among them is Dyna, a system for modeling the soft-tissue deformations caused by human motion. Basically, it adds real-world jiggle to the traditionally solid-as-a-rock 3D models of CGI. Researchers from the Max Planck Institute for Intelligent Systems captured roughly 40,000 scans of 10 subjects with varying body shapes and sizes. The ingenuity of their system is that it generalizes beyond the particular subjects used to develop the model: the data from a large, overweight man can, for example, lend realistic soft-tissue motion to a big and burly troll. Watch the video above for more examples of this adaptability and the creative ways in which jiggling can be exaggerated or minimized.

No less impressive is the skin stretch demo produced by a collaboration between the University of Southern California and Imperial College London. It's also concerned with simulating deformations, but focuses on the skin of the face and the microscopic changes in its texture and appearance as it's either stretched or compressed.

Digital faces are looking more human than ever

As with the Dyna model, the USC animation system begins with lots of highly detailed observations of real-world subjects. The researchers performed scans of various patches of skin at a 10-micron (one-hundredth of a millimeter) resolution, collecting precise information about the appearance of skin while it's stretched, relaxed, or contracted. Skin stretching was measured using a caliper and a custom 3D-printed stretching aperture.

All of that data was then combined into a pleasingly simple solution for enhancing realism: a displacement map, which determines the roughness of the rendered skin, is blurred where the skin is stretched and sharpened where it's compressed. The researchers admit that their method doesn't emulate the full spectrum of morphological changes in manipulated skin, but it does a fine job of recreating the small, almost imperceptible variations. The best thing about their approach, though, is that it can run in real time on conventional graphics cards. So it's just a matter of time before our video game heroes and villains become that extra bit more human.
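The core idea is simple enough to sketch in a few lines. The snippet below is an illustrative approximation, not the researchers' actual implementation: it splits a displacement map into a blurred base and a high-frequency detail layer, then scales the detail down where a (hypothetical) per-texel stretch map says the skin is stretched and up where it's compressed.

```python
import numpy as np

def box_blur(img, radius=1):
    """Simple separable-style box blur with edge padding (stand-in for
    whatever low-pass filter a real renderer would use)."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def adjust_displacement(disp, stretch, strength=1.0):
    """Blur the displacement map where skin is stretched (stretch > 1)
    and sharpen it where skin is compressed (stretch < 1).

    `stretch` is an assumed per-texel stretch ratio, with 1.0 meaning
    relaxed skin; the name and parameterization are illustrative."""
    blurred = box_blur(disp)
    detail = disp - blurred          # high-frequency micro-structure
    amount = strength * (1.0 - stretch)  # >0 sharpens, <0 blurs
    return blurred + (1.0 + amount) * detail
```

With a uniform stretch of 1.0 the map comes back unchanged; values above 1.0 wash out the micro-structure, and values below 1.0 exaggerate it, mimicking how pores and fine wrinkles flatten under tension and bunch up under compression.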