Today, Google announced a new tool for selectively blurring sensitive parts in videos uploaded to YouTube, such as faces. This is an important option for preserving anonymity in visual media, at a time when video is taking over the web. To protect the ability to speak out in our visual age, we need obfuscation tools in the platforms we all use.

As Mark Zuckerberg recently said: “We’re reaching this period where video is going to be the primary thing we use on the Internet, because of the emotional weight of it.” The figures bear this out: an explosion of video on YouTube to more than 400 hours uploaded a minute, more than eight billion videos viewed daily on Facebook, six billion viewed on Snapchat, and 40 years of video watched on Periscope every day.

WIRED Opinion: Sam Gregory is the Program Director at WITNESS, the international human rights organization, and an Adjunct Lecturer at the Harvard Kennedy School of Government. Connect with him @samgregory.

All this video is powerful. Activists like myself use it to capture the world on camera and make our voices heard. But we must balance our desire to show what is happening in our communities with the need to protect people at the same time. This is particularly true for activists in repressive countries. For them, every time someone speaks out or is seen on camera, that person faces the risk that there will be a knock on their door in the coming days, and the authorities will come and take them away. Being able to expose atrocity while protecting the identities of the vulnerable or innocent is of vital importance.

Why Visual Anonymity Matters

The downsides of being recognized online have already been made all too clear for people in high-risk situations over the last few years. The Burmese military junta trawled online images to target activists as far back as the Saffron Revolution in 2007. The Iranian government did the same during the Green Revolution marches of 2009, and Syrian authorities singled out citizens who appeared in videos of anti-regime demonstrations.

In countries like Brazil, people caught on the margins of live streams shot during protests have seen that material used against them arbitrarily. And indeed the greater risk may be to the lone witness or victim who speaks out about abuse or injustice, from those denouncing politically motivated sexual violence to LGBT activists in hostile environments. Now that police from Brazil to Boston, and companies like Facebook, are trying out tools that can apply facial recognition on a large scale, the need for a new understanding of how to protect ourselves is critical.

We Need Better Tools

YouTube is taking the lead in the effort to make visual anonymity possible with the tool Google announced today. It's a custom blurring feature that allows anyone to easily and selectively obscure faces or regions, like license plates, in videos they upload. My organization, WITNESS, advocated for and then advised Google on the development of an initial version of this feature, which blurred all faces, in 2012. Since then we've continued to advocate for the feature launched today—one that can be used in a more targeted way to protect vulnerable people. As citizen witnesses become a more frequent source of news and as more people turn to the Internet as a tool for exposing abuses and defending their rights, the need for this tool has only grown. Other companies with a growing stake in visual media—from Facebook to Snapchat to Twitter—should follow YouTube's lead.

[Images: YouTube's new blurring tool; ObscuraCam in action.]

We need better tools and choices to support visual anonymity and to build it into the ways we shoot and share images. Another project I’ve been involved with is ObscuraCam. Built in collaboration with The Guardian Project, an open source mobile developers’ collective, it’s a tool that allows you to anonymize faces on a mobile phone immediately after you film something—from pixelating out faces or backgrounds to posting a Groucho Marx face on an individual to confuse facial recognition. It also scrubs the metadata in your image that might be just as damning—telling your tech-savvy oppressor just where you live.
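The two obfuscation steps described above—pixelating a region of an image and scrubbing identifying metadata—can be sketched roughly as follows. This is a minimal illustration in Python, not ObscuraCam's actual implementation (which is an open source Android app); the data structures and function names here are hypothetical.

```python
# A rough sketch of region pixelation and metadata scrubbing.
# The image representation here (a dict with a 2D pixel grid plus tags)
# is a stand-in for a real image format, purely for illustration.

def pixelate_region(pixels, left, top, right, bottom, block=4):
    """Pixelate a rectangular region of a 2D pixel grid by replacing
    each block-by-block tile with a single value, destroying the fine
    detail that face recognition depends on."""
    for y in range(top, bottom, block):
        for x in range(left, right, block):
            value = pixels[y][x]
            for yy in range(y, min(y + block, bottom)):
                for xx in range(x, min(x + block, right)):
                    pixels[yy][xx] = value
    return pixels

def strip_metadata(image):
    """Keep only the pixel data, dropping tags like GPS coordinates
    that could reveal where the filmer lives or was standing."""
    return {"pixels": image["pixels"]}

image = {
    "pixels": [[(x + y) % 256 for x in range(16)] for y in range(16)],
    "gps": (40.7, -74.0),        # the kind of tag that can expose a location
    "device": "ExamplePhone 3",  # hypothetical device tag
}
image["pixels"] = pixelate_region(image["pixels"], 4, 4, 12, 12)
image = strip_metadata(image)
```

In practice a tool like this would operate on real image files and EXIF headers, and would pixelate at a coarse enough block size that the region cannot be reconstructed.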

But beyond better tools for anonymity in visual media, we also need a wider understanding of why anonymity matters.

We need more guidance targeted to the people who use these digital platforms to help them understand how to use these tools—particularly people in high-risk contexts. And when it comes to the explosion of live video, we need the best engineering minds working on visual anonymity solutions for the bystander caught live on camera and compromised for life.

Is there also a risk in building these kinds of tools that they will be misused? Yes. However, the sad truth is that the worst of the worst of humanity— child molesters and child pornographers, as well as other rights abusers—have known how to use the existing professional versions of these types of tools to hide their own identities, and undoubtedly will continue to do so. This won’t change that. It’s the ever-growing number of citizens filming and being caught on camera worldwide that will benefit much more from a greater option for visual anonymity.

Now you may be thinking: what does this have to do with me? Well, one use of ObscuraCam has been parents selectively obscuring the images of other people's children in pictures they are about to upload to Facebook. And more dramatically, just think about those children and the world they'll live in in 10 or 15 years' time. It'll be a world of ever more ubiquitous image making. A world where facial recognition will tie a single act of defiance captured on camera to their identity for life. A world where, to stand up for their rights, they will need to be visible.

So will it be possible for them to exercise a much-needed right to selective anonymity and still be part of this world mediated by images? That's up to us to decide now, as we consider how to make the option of visual anonymity available to everyone.