Building HMCs is no easy feat. Sensors need to be packed into headsets people will find comfortable. Illuminating the face with visible light leads to an unpleasant user experience, so the HMCs created at the Pittsburgh lab use infrared light, which is invisible to the human eye. “If the experience is to be indistinguishable from a physical face-to-face experience, we need to have comprehensive sensing ability while making sure the headset won’t limit users’ ability to gesture and express themselves,” says FRL Research Scientist Hernan Badino.

Software is an equally important part of the equation, and the team has built a suite of programs to work with data from HMCs. “A researcher might want to obtain very specific images from a device or have full control over the capture system to test a particular hypothesis,” says Badino. “The software our team developed gives us flexible control over the capture system, letting us focus on and study specific areas. Within the software there are also plenty of tools for deploying headsets within the lab, such as calibration, data diagnostics, and analysis tools.”

Keeping it safe

Trust is a critical component when talking to people in real life, and it shouldn’t be any different in virtual reality. The system needs to deliver lifelike avatars that people can trust immediately. A big part of this is accurately capturing the subtle expressions, like the way a person blinks or chuckles, so there’s no mistaking who’s behind the virtual face. “The only proof we have for what makes social engagements compelling, physical or otherwise, is authenticity. There is an implicit trust that you are receiving ‘real’ information from the other person,” says Sheikh.

Giving people a way to build their own lifelike avatars quickly and easily is only part of the challenge. Making sure people (and their avatars) stay safe is the other. The Pittsburgh team is mitigating potential issues through a combination of user authentication, device authentication, and hardware encryption. But it all starts with the proper handling of data. “This is incredibly important to all of us,” says Belko. “Before starting any collection efforts, we made sure we had a robust system in place for handling and storing data.”

One technology the team is keenly aware of is “deepfakes” — images and videos that use AI and preexisting footage to fabricate a scene, such as a person saying something they never actually said in real life. This technology will only improve in the future, making it hard to tell the difference between a real event, such as a live television interview, and one artificially created using deepfake technology. “Deepfakes are an existential threat to our telepresence project because trust is so intrinsically related to communication,” says Sheikh. “If you hear your mother’s voice on a call, you don’t have an iota of doubt that what she said is what you heard. You have this trust despite the fact that her voice is sensed by a noisy microphone, compressed, transmitted over many miles, reconstructed on the far side, and played by an imperfect speaker.”

FRL Pittsburgh is thinking pragmatically about safeguards to keep avatar data safe. For example, the team is exploring the idea of securing future avatars through an authentic account. How we work with real identities on our platforms will be a key part of this, and we have discussed several security and identity verification options for future devices. We’re still years away from this type of technology reaching consumer headsets, but FRL is already working through possible solutions.

The team also has regular reviews with privacy, security, and IT experts to make sure they’re following protocol and implementing the latest and most rigorous safeguards possible. “We’ve considered all possible use cases for this technology,” says Hoover. “We’re aware of the risk and routinely talk about the positive and negative impacts this technology can have. As a lab, we’re excited about making this technology, but only if it’s done the right way. Everyone knows how important this research is and how important it is that people trust it.”

Connecting with people anywhere

Imagine putting on a headset and being transported thousands of miles away to attend class, go to work, or celebrate a relative’s birthday. You’d be recognized immediately by everyone there because, for all intents and purposes, you’ve arrived at the event. You’ll look, move, and sound just like you do in real life. It’s not just a matter of convenience: a lifelike avatar can be somewhere you can’t be physically, whether because of circumstances or simple distance. It could help solve many of the challenges people face today in maintaining long-distance friendships and finding community.

The point isn’t to replace physical connection but rather to give people new tools when they can’t interact in person, just as telephone and video calls have. When you’re building a new way for people to spend time together at a distance — to see each other, talk to each other, and genuinely feel like they’re in the same room — there are plenty of issues to resolve and breakthroughs to make before lifelike avatars are ready for prime time.

But this kind of authentic closeness is exactly what the folks at FRL’s Pittsburgh office have been working on through Codec Avatars. “We have the resources to drive new ideas,” says Sheikh. “And when you add the chance to bring together diverse expertise to fully tackle these massive design challenges, it fuels a pace of innovation unlike anything I have seen before.”