Another day, another social media challenge with potentially murky consequences. If you’re one of the millions of people who recently downloaded FaceApp to take part in the “#FaceApp Challenge” and show the world what you’re going to look like when you’re old and grey, bad news: You may have unintentionally given malicious actors access to your likeness … to do whatever they want with it … for life.

What Is FaceApp?

FaceApp first blew up in 2017, when it was downloaded 80 million times, and is now experiencing a renewed level of virality thanks to the challenge. The app uses neural networks to simulate what you will look like as you age—think: adding wrinkles, coloring your teeth—and the challenge is the company’s marketing campaign encouraging you to share the image.

Seems like a fun game, right? Well, as soon as you upload your selfie to the app, you’re forking over your face and data to shadowy figures who could use it for potentially nefarious purposes.

Wireless Lab, the company behind FaceApp, has very expansive Terms of Service that raise a growing number of privacy concerns. Section 5 of the Terms of Service “grants FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you.”

Admittedly, this type of content ownership is pretty standard for app services. But FaceApp’s TOS is particularly vague.

FaceApp’s privacy policy gives it the ability to collect information sent by your device, including the websites you visit, add-ons, and other information that helps the app “improve its service.” That means FaceApp has wide-ranging access to your device, your photos, and more, even though the company told TechCrunch it has no intention of misusing your data or information.

But there’s an additional, potentially problematic wrinkle to the access issue: FaceApp just so happens to be based in Russia.

Who Is Behind FaceApp?

Wireless Lab is based out of St. Petersburg, Russia, and headed by Yaroslav Goncharov, an ex-employee of Yandex. Given the confirmed role that Russia and Russian companies played in the United States’ 2016 elections and ongoing propaganda war, security and privacy communities are understandably concerned about the levels of access given when you use FaceApp. While there’s no direct and explicit link to the Russian government, what if one existed? And what kind of impact could it have?

Why Should I Care About Giving My Image to a Russian Company?

“There is the very real possibility that applications like these are simply honeypots designed to get you to give up information about yourself,” says Marc Boudria, VP of Technology for AI company Hypergiant.

“You just sent them close up, well-lit images of your face,” he continues. “Now, they know your name and vital details and can create an annotated image record of you as a human. The next model would have no problem triangulating and verifying and adding more data from other sources like LinkedIn, which would then give them your education, your work history, sky's the limit.”

Current conversations about facial recognition software and deep fakes are highlighting the dangers of individual companies owning large data sets—particularly data sets of human faces that can power facial recognition technology.

A recent white paper from Moscow-based scientists details the development of a machine learning model that can create deep fakes from just a few images, or even a single one. Meanwhile, a recent article in the New York Times noted “documents released last Sunday revealed that Immigration and Customs Enforcement [ICE] officials employed facial recognition technology to scan motorists’ photos to identify undocumented immigrants. The FBI also spent more than a decade using such systems to compare driver’s license and visa photos against the faces of suspected criminals, according to a Government Accountability Office report last month.”

Now that another company has access to your data and a strong record that includes your likeness, that information can be weaponized by any actors who are interested in doing harm through a cyber-attack or a propaganda campaign. As we look to the upcoming 2020 election—but also the increasingly connected nature of our daily existence—this is a very real and vivid concern.

“There are obvious political concerns with sharing personal identifying information, especially given how Russia has weaponized information back at democracies, often in manipulated and false forms, and bent local business to its will,” says New America Senior Fellow Peter Singer. “But there’s also some major privacy issues that would be there, even beyond the Russia aspect. Like so much of social media, most of the users are just thinking about the fun aspects, not how it might be monetized and weaponized.”

Update 7/18/19: On July 17, U.S. Senate minority leader Chuck Schumer sent a letter to the FBI and FTC to investigate FaceApp, according to a Reuters report. The app, Schumer said, could pose “national security and privacy risks for millions of U.S. citizens.”

What Can I Do to Protect Myself?

For starters, don’t take an apathetic approach to personal security. We know it’s easy to breeze past privacy policies, but the sooner you start asking questions and paying attention, the sooner you can begin to safeguard your data.

“Consumers should use the latest versions of iOS and Android to help control these risks,” says Dan Guido, CEO of Trail of Bits. “iOS 13, coming out this fall, alerts users when apps collect their location data or activate Bluetooth in the background.”

Apps don’t have to be so insecure, Guido says. “The iOS Photos app uses on-device processing to recognize faces and places,” he says. “Building software this way is harder. Apple and Google should release software APIs that make on-device processing easier, then ask app developers for an explanation if they're not used.”

When platform providers take a strong stance on protecting human security, the burden on individuals to maintain their own security is lifted. Instead of pitting one person against all bad actors, a focus on security at every level—from Google and Apple to individual users—ensures a more resilient and safe community for all.
