Last Wednesday, Houston police arrested a man after finding explicit images of a child on his phone and tablet. The suspect, who was convicted of aggravated sexual assault on an 8-year-old child in 1994, has now been charged with the possession and promotion of child pornography. But according to Houston police, law enforcement would have been in the dark if it weren’t for the help of a very powerful tipster. It was Google that went to the authorities after detecting the image in an email message. Child abusers online should be absolutely terrified. For them, the Internet just got a whole lot smaller. But how is Google catching them?

Google isn’t saying exactly how it discovered the alleged criminal activity, but it is well known that the search giant works extensively with the Internet Watch Foundation and the National Center for Missing and Exploited Children. According to Detective David Nettles of the Houston Metro Internet Crimes Against Children Taskforce, Google first reported the alleged criminal activity to NCMEC, which then alerted his team. Police quickly obtained a search warrant, after which they found the images on the suspect’s devices.

Google has been working with NCMEC for years, creating tools specifically designed for tracking down images of child abuse online and bringing perpetrators to justice. In an interview with Houston-area news outlet KHOU, Detective Nettles said of Google: “I really don’t know how they do their job, but I’m just glad they do it.” Without Google’s assistance, authorities would never have been able to obtain a search warrant leading to the arrest.

Writing for the U.K.’s Daily Mail last November, Google Executive Chairman Eric Schmidt outlined some of the ways the company is combating child porn. (Disclosure: Schmidt is the chairman of the board of New America; New America is a partner with Slate and Arizona State University in Future Tense.) One detection and removal technique assigns a “unique digital fingerprint” to known illegal images, allowing Google’s search mechanism to immediately recognize and call attention to these images whenever and wherever they appear. Previously, Google implemented a similar system using “hashing” to combat abusive content.* But the new technology, which relies on more sophisticated fingerprinting techniques, is a team effort. Last year, the company announced a collaboration with other industry leaders (including Microsoft) to create and implement the new tools. It is unclear whether a similar tool was used to catch the Texas suspect, or whether the image he allegedly possessed included a fingerprint.
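To get a feel for how fingerprint matching works in principle, here is a minimal Python sketch. To be clear, this is an illustration, not Google’s actual system: the function names and the fingerprint value are invented for the demo, and production tools reportedly use robust *perceptual* hashes that survive resizing and re-encoding, whereas the plain cryptographic hash below only matches byte-identical files.

```python
import hashlib

# Hypothetical set of fingerprints of known abusive images, of the kind
# a clearinghouse such as NCMEC might distribute. The value below is a
# placeholder for the demo (it is simply the SHA-256 of b"test").
KNOWN_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex fingerprint of the raw image bytes.

    Real systems use perceptual hashing; SHA-256 is used here only
    because it is built into the standard library.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_image(image_bytes: bytes) -> bool:
    """Check an attachment's fingerprint against the known set."""
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS

# Example: scanning an email attachment (placeholder bytes).
attachment = b"test"
if matches_known_image(attachment):
    print("match: flag for human review and report")
else:
    print("no match")
```

The key design point this captures is that the scanner never needs to interpret the image’s content: it only compares a fingerprint against a list of fingerprints of already-identified material, which is consistent with Google’s statement that the technology targets known child abuse imagery rather than email content generally.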

In the past, privacy experts have expressed concern over Google’s automated processing systems, some of which scan Gmail messages for content related to its AdWords advertising program. These scans, detailed in Google’s terms of service, exist to “provide [the user] personally relevant product features.” The terms also make it very clear that illegal activity will not be tolerated, explaining: “We may review content to determine whether it is illegal or violates our policies, and we may remove or refuse to display content that we reasonably believe violates our policies or the law.” In other words, Google is not only purging child abuse images from its system. It’s actively pursuing those who create, store, and share them.

Last year, in a legal filing seeking dismissal of a class-action data-mining lawsuit, Google famously explained that “a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.” At the time, privacy activists howled. In light of last week’s arrest, it all seems to make a little more sense. While Google could always offer more specific clarification on the details in such cases, doing so could inadvertently help other child predators lurking online. Perhaps this is the rare case where privacy experts will actually agree with the actions of the Internet overlords.

Update, Aug. 5, 2014: In a statement issued to media outlets, Google confirmed the use of the digital fingerprinting technique in its email services as well as search. The statement read: “Google actively removes illegal imagery from our services—including search and Gmail—and immediately reports abuse to NCMEC. This evidence is regularly used to convict criminals. Each child sexual abuse image is given a unique digital fingerprint which enables our systems to identify those pictures, including in Gmail. It is important to remember that we only use this technology to identify child sexual abuse imagery, not other email content that could be associated with criminal activity (for example using email to plot a burglary).” Good for Google!

*Correction, Aug. 4, 2014: This article originally misidentified the technique previously used to identify similar images as “assigning hashtags.” The method used is called “hashing.”