You can take off that ninja mask now. A new facial-recognition algorithm created by researchers at the University of California at Berkeley and the University of Illinois at Urbana-Champaign is able to recognize faces with 90 to 95 percent accuracy, even if the eyes, nose and mouth are obscured.

"Most algorithms use what's known as meaningful facial features to recognize people – things like the eyes, nose and mouth," says Allen Yang, a postdoctoral researcher at UC Berkeley's College of Engineering who developed the new algorithm. "But that's incredibly limiting, because you're only looking at pixels from a designated portion of the face, and that set of pixels ends up being much smaller than the whole image. Our algorithm shows that you only need to randomly select pixels from anywhere on the face. If you select enough of them, you can produce extremely high accuracy."
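The random-selection idea Yang describes can be sketched in a few lines. This is an illustrative toy, not the researchers' code: the image here is random noise standing in for a face photo, and the pixel count is an arbitrary choice. The point is simply that the "feature vector" is a uniformly random subset of pixel positions, with no regard for where the eyes, nose or mouth fall.

```python
import numpy as np

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(64, 64))  # stand-in for a 64x64 face image

n_pixels = 500  # arbitrary; "enough of them" per Yang's description
# choose pixel positions uniformly at random, anywhere in the image --
# no eyes/nose/mouth detection involved
idx = rng.choice(image.size, size=n_pixels, replace=False)
features = image.ravel()[idx].astype(float)
```

The same index set would be applied to every image in the database, so all face vectors live in the same randomly chosen coordinate system.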

Yang's new algorithm, which was created with the help of a team of researchers at UIUC, could mark a quantum leap in face-recognition technology. Current feature-based systems have accuracy that tops out at 65 percent when some form of occlusion is introduced. They also require relatively high-resolution images and can easily be fooled by small changes such as adding a mustache, donning a hood or altering one's expression.

The secret sauce in Yang's new method is a mathematical technique for solving systems of linear equations whose solutions are sparse – called, appropriately enough, sparse representation. While most facial-recognition algorithms compare a given feature set against every entry in a database (generating percentages of likeness along the way), Yang's algorithm ignores all but the most compelling match from one subject – basically, its most confident choice.
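In broad strokes, the approach stacks the database's training images as columns of a matrix, expresses a query face as a sparse combination of those columns, and then keeps only the single class that reconstructs the query best. The sketch below is a simplified stand-in, not the published method: it uses a greedy matching-pursuit routine in place of the paper's ℓ1 minimization, and the `omp` and `classify` names, the sparsity level `k`, and the synthetic data are all this sketch's own assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Greedy sparse recovery: find a roughly k-sparse x with A @ x ≈ y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with what is still unexplained
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on all chosen columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x

def classify(A, labels, y, k=3):
    """Keep only each class's coefficients and return the class whose
    training columns best reconstruct the query -- the 'most confident
    choice' described in the article."""
    x = omp(A, y, k)
    best, best_err = None, np.inf
    for c in set(labels):
        xc = np.where(np.array(labels) == c, x, 0.0)
        err = np.linalg.norm(y - A @ xc)
        if err < best_err:
            best, best_err = c, err
    return best
```

Classifying by per-class reconstruction error, rather than by tallying similarity scores across the whole database, is the "one extra constraint" Yang credits for the jump in accuracy.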

"It sounds like a simple idea, but by enforcing that one extra constraint you can suddenly see a huge boost in the performance," Yang says.

As Shankar Sastry, the dean of UC Berkeley's College of Engineering, notes, Yang's new facial-recognition method also renders years of research in the field obsolete.

"The academic community is really upset," he says. "It sounds terrible. You don't care what features you choose? It flies in the face of many years of research."

Nevertheless, the new technique could pave the way for completely new models for online advertising, new ways of annotating video and still images, and new techniques for monitoring and identifying people in public places.

Yang says he's already been approached by one startup (which he wouldn't name) interested in adopting this technique for what he calls "preannotation." For instance, this technology could automatically add family members' names to each image in a massive photo library, Yang says, saving you the trouble of flipping through thousands of photos to find that one of Uncle Bill.

It's also easy to imagine search engines like Google being interested in automatically recognizing the faces of the humans portrayed in publicly available photos, adding the image data to the textual information surrounding those photos to produce yet another dimension for targeting advertisements. Looking at a party photo of Johnny Depp on a fan site? Google could display advertisements for Sweeney Todd.

This new technique is also bound to raise a series of red flags for privacy advocates, since what Yang has developed is a highly accurate way of recognizing people even with occlusion or distortion.

With more and more cities, retailers and employers deploying security cameras in public places, it's only a matter of time before face-recognition technology like Yang's gets added to these cameras. Then the question will be not just who is watching you – but whether they know exactly who you are.