We’re very happy to announce the beta release of ObscuraCam for Android. This is the first release from the SecureSmartCam project, a partnership with WITNESS, a leading human rights video advocacy and training organization. This is the result of an open-source development cycle, comprised of multiple sprints (and branches), that took place over the last five months. This “v1” release is just the first step towards the complete vision of the project.

The goal of the SecureSmartCam project is to design and develop a new type of smartphone camera app that makes it simple for the user to respect the visual privacy, anonymity and consent of the subjects they photograph or record, while also enhancing their own ability to control the personally identifiable data stored inside that photo or video. We also think an app that lets you pixelate your friends, disguise their faces and otherwise defend their privacy just a little bit is a lot of fun, and helps raise awareness about an important issue. In this first release we have focused on ‘obscura’ by optimizing the workflow of identity obfuscation in still images. Future releases will look at ‘informa,’ the process of properly gaining and recording informed consent from subjects, while also moving to video.

For those of you who just want to get to it, head over to the Android Market to grab the latest version of the app. You can also scan the QR code to the left, and it will take you in that direction.



For those without access to the Android Market, you can get the ObscuraCam.APK file from our public builds folder. The official signed release binary is also available here. For these options, be sure to check back for updates, because the app will not auto-update itself.

The “Cameras Everywhere” Initiative

In January, WITNESS launched their Cameras Everywhere initiative, in which they ask:

As more and more people film people speaking out and taking a stand against human rights crises, how can we protect victims and witnesses and ensure informed consent as much as possible? As more and more footage circulates from human rights crises around the world, how does powerful footage reach audiences in comprehensible ways that move people to action? And how do we know how to trust that footage? … Critical issues to address in this realm include safety and security in the use of video; ethical questions raised by the widespread capacity to shoot and circulate human rights video; challenges around the authenticity of video and the preservation of evidence; and the need for effective documentation around the use of video in advocacy.

Through our collaboration, WITNESS has decided to move beyond awareness, training and advocacy alone, and to help design a next generation of camera app software – one intended not just to capture and share more, but to let its operator stop, think, and stay in control of the media they are capturing.

A Primer on Visual Privacy and Anonymity

Visual privacy is the relationship between the collection and dissemination of visual information, the public expectation of privacy, and the legal issues surrounding both. It relates particularly to the increasing presence of large-scale still- and video-camera networks in everyday life. This includes not only the surveillance-oriented networks under the control of corporations and governments, but also the vast new network of citizen-controlled media capture devices – smartphones and handheld cameras – that has created a peer-to-peer, social-networking-based form of surveillance. While these networks have exploded in size, face detection and recognition technologies have also improved considerably, yet policy questions regarding the privacy and fair use of such systems and content, as well as the rights of those imaged by them, remain unresolved. The result is a situation in which massive amounts of media are captured every day with little to no protection of individual rights to privacy or anonymity – something that is especially detrimental to human rights efforts.

As Sam Gregory of WITNESS points out, most contemporary discussions around anonymous communication on the Internet focus on the data protection side – for instance options for data encryption or censorship circumvention. In the case of media content, a largely unaddressed question arises: what about the rights to anonymity and privacy for those people who appear, intentionally or not, in visual recordings? Visual privacy and anonymity may sound like a contradiction in terms, but people often wish to speak out and to ‘be seen’ while at the same time concealing their face and identifying surroundings. As human rights documentation and organizing increasingly involves media capture, how are people enabled to make purposeful choices about when they speak out and what degrees of anonymity they hold onto for themselves? Conversely, people caught in the background of a video or still may be unaware that they are even being filmed in that moment and have no option to protect themselves – particularly true in mass protest settings where the wave of group solidarity may overwhelm any sense of personal privacy. For those speaking out from marginalized positions, personal safety is a very real risk.

Some examples where visual privacy and anonymity is being diluted in the name of features or security:

The persecution later faced by bystanders and people who stepped in to film or assist Neda Agha-Soltan as she lay dying during the 2009 Iranian election protests.

Facebook’s opt-out feature for auto-detection and tagging of faces

British Columbia’s privacy watchdog OKs the use of facial recognition technology to identify rioters from video and still images of Vancouver’s 2011 hockey riots.

Viewdle’s Social Camera automatically tags your friends in photos based on the social networking profile pictures they have published

While some of these examples might seem harmless, or even a useful feature for law enforcement, the main issue is that the subjects of these photos and videos are never asked whether they wish to appear in them, let alone whether they want the images published online in the first place. The permanence of media on the Web means that any uploaded content can be pored over again and again to identify individuals – whether by old-fashioned investigative techniques, by crowd-sourcing, or by face detection/recognition software.

How ObscuraCam Helps

Part of the problem currently surrounding visual privacy and anonymity is the fact that many of the tools and applications that people use on an everyday basis do not have features built in to protect privacy. As a result, everyone with a smartphone, tablet or laptop – not to mention an actual video camera! – captures raw, unedited content that exposes the identities of participants and bystanders present at sensitive events or activities.

ObscuraCam is a mobile application for Android that makes it easy for anyone to protect the identity of individuals or groups represented in their photos by building obfuscation and redaction directly into the app. It can be used on photos taken directly from the app itself, or on any photo that your mobile device has access to, including local memory card images or linked Picasa albums. By moving a usually cumbersome post-production process into the daily workflow of those capturing sensitive images, it’s our hope that visual privacy will be respected when it really matters.
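To make the obfuscation idea concrete, here is a minimal, illustrative sketch in Python (not ObscuraCam’s actual Android code; the function name and the list-of-lists image representation are invented for this example) of the classic block-averaging approach behind pixelation: the tagged region is divided into fixed-size blocks, and every pixel in a block is replaced by that block’s average value, destroying fine detail such as facial features while keeping coarse shape.

```python
def pixelate_region(image, top, left, height, width, block=8):
    """Pixelate a rectangular region of a 2D grayscale image (list of lists).

    Hypothetical helper for illustration only; returns a new image and
    leaves the original untouched.
    """
    out = [row[:] for row in image]  # copy so the original stays intact
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            # clamp each block to the tagged region and the image bounds
            ys = range(by, min(by + block, top + height, len(image)))
            xs = range(bx, min(bx + block, left + width, len(image[0])))
            vals = [image[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out

# A tiny 4x4 "image": pixelating the whole frame with block=2 replaces
# each 2x2 quadrant with its average.
img = [
    [0, 10, 100, 110],
    [20, 30, 120, 130],
    [200, 210, 50, 60],
    [220, 230, 70, 80],
]
print(pixelate_region(img, 0, 0, 4, 4, block=2))
# → [[15, 15, 115, 115], [15, 15, 115, 115], [215, 215, 65, 65], [215, 215, 65, 65]]
```

The same mechanism inverted – averaging every block *outside* the tagged rectangle – gives the “pixelate everything but the region” behavior described below as bgPixelate.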

Using ObscuraCam

ObscuraCam features a simple, touch-based user interface for easy manipulation and redaction of images, as well as an automated removal of identifying metadata stored in the photo itself. The following steps walk through the process of capturing and sharing an obscured photo using ObscuraCam.

From the application home page, choose either to capture a new image or to select an existing image from your collections; these options simply launch your standard Camera and Gallery applications. When the photo is imported, identifying EXIF metadata stored in the file itself – such as GPS location, camera make and model, or timestamp – is removed. After you capture or open an image with ObscuraCam, it is automatically scanned to detect faces. Any detected faces are marked as tagged regions in the image, and the user can create as many additional tagged regions as they wish, either via the menu or by long-pressing the desired region. By default, tagged regions are set to be obscured via pixelation. Once a tagged region has been created, the user can interact with it by simply touching it to bring up a contextual menu. Options available from the contextual tagging menu include:

Edit – select to scale and move tagged regions

Redact – select to fully redact tagged region and replace with black space

Pixelate – select to selectively obfuscate identities of persons or situations

bgPixelate – select to easily obfuscate everything BUT the tagged region

Mask – select to pin a set of ‘Groucho Marx’ glasses on the tagged region – not only a bit of fun, but useful for quickly defeating facial recognition schemes.

Delete – delete the current tagged region

Once you’re done selecting and obfuscating tagged regions, you can use the options from the main application menu to see a preview of the finished image, save it to your local memory, or share the picture with any application on your handset that is configured to accept images. This includes applications like Facebook, Twitter, or the default Messaging app.
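The metadata-scrubbing step mentioned above can be sketched conceptually as filtering out identifying tags while keeping those needed to render the image. This is a simplified illustration in Python – the tag names are drawn from common EXIF fields, but the function and data are invented for this example and do not reflect ObscuraCam’s internals:

```python
# EXIF tags that can identify the photographer, device, time or place.
# (Illustrative subset, not an exhaustive list.)
IDENTIFYING_TAGS = {
    "GPSLatitude", "GPSLongitude", "Make", "Model",
    "DateTimeOriginal", "Software",
}

def scrub_metadata(exif: dict) -> dict:
    """Return a copy of an EXIF tag map with identifying tags removed,
    keeping only tags needed to display the image correctly."""
    return {tag: value for tag, value in exif.items()
            if tag not in IDENTIFYING_TAGS}

photo_exif = {
    "Make": "HTC", "Model": "Nexus One",
    "GPSLatitude": 40.7128, "GPSLongitude": -74.0060,
    "DateTimeOriginal": "2011:06:20 14:03:11",
    "ImageWidth": 2048, "ImageLength": 1536, "Orientation": 1,
}
print(scrub_metadata(photo_exif))
# → {'ImageWidth': 2048, 'ImageLength': 1536, 'Orientation': 1}
```

The key design point is that scrubbing happens automatically on import, so a user sharing in a hurry never has to remember to strip location data by hand.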

Share With Us and “Save Your Face”!

As threats to visual privacy continue to grow, help us get the word out that we can take back control over our online identities with ObscuraCam! We’ve set up a Facebook Page where you can share your creations with us, and with each other!

Source Code & Issue Reporting

We’re big fans of open source and living in public. As with all our projects, source code for the SecureSmartCam project, along with the ObscuraCam release, is available online at GitHub.

We also use GitHub to manage our development milestones and active bugs / issues. If you encounter any bugs or issues when testing out this beta build, please report them directly to us in the comments below or by filing directly on the Issues page.