Earlier this week, we reported that Google had alerted police after discovering a large quantity of child abuse images in a user's Gmail account. After obtaining a search warrant, police found a considerable stash of similar content on his home computer, and the man was arrested.

While the capture of such an individual is certainly cause for celebration, some have raised concerns about tech companies like Google intruding on users' privacy. The company defended its actions, clarifying that its inbox scanning is specifically tuned to identify content related to child abuse.

But Google isn't the only company to use such methods. As BBC News reports, details have now emerged of Microsoft identifying someone who was sharing child abuse images in recent days.

Microsoft found an image involving a "young girl" that had allegedly been saved to a OneDrive account belonging to a man in his twenties from Pennsylvania. The man was later discovered to be trying to send two illegal images from a Microsoft live.com email address.

He was arrested on July 31 after Microsoft contacted the National Center for Missing and Exploited Children's CyberTipline - the same process that Google used - which then referred the company's findings to the police. He will appear in court next week. According to the affidavit detailing his case, the man reportedly admitted to having obtained the pictures through chat app Kik Messenger, and to "trading and receiving images of child pornography on his mobile cellular device."

Pennsylvania State Police confirmed that Microsoft's actions had prompted the investigation, and acknowledged the details of the leaked affidavit.

Microsoft's terms and conditions of use for its services unambiguously state that it has the right to employ "automated technologies to detect child pornography or abusive behavior that might harm the system, our customers, or others."

Microsoft routinely uses PhotoDNA, a technology that it developed five years ago, as part of this effort. Mark Lamb, from Microsoft's Digital Crimes Unit, explained that PhotoDNA helps to "disrupt the spread of exploitative images of children, which we report to the National Center for Missing and Exploited Children, as required by law."
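PhotoDNA itself is proprietary, but the general flow it enables - fingerprinting an uploaded file and checking it against a database of hashes of known illegal images - can be sketched roughly as follows. This is a simplified, hypothetical illustration only: the hash database and function names are invented, and SHA-256 stands in for PhotoDNA's actual robust perceptual hashing, which, unlike a cryptographic hash, still matches when an image has been resized or re-encoded.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images, of the
# kind maintained with organisations such as NCMEC. Real systems use
# PhotoDNA's perceptual hashes; SHA-256 is a stand-in for illustration.
# (The seeded value below is simply the SHA-256 of the bytes b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of an uploaded file (simplified stand-in)."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known fingerprint and should
    be reported via the CyberTipline rather than stored silently."""
    return fingerprint(image_bytes) in KNOWN_HASHES

print(scan_upload(b"test"))         # True - matches the seeded hash
print(scan_upload(b"holiday-snap")) # False - no match, nothing flagged
```

Because only fingerprints of already-identified images are compared, a service can detect known material without a human ever inspecting ordinary users' files - which is the distinction Microsoft and Google draw when defending these scans.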

Microsoft has made this technology available to other software companies as well, including Google, which uses it alongside its own algorithms and tools to help crack down on images of child abuse. Facebook and Twitter are among other companies that also use PhotoDNA in their services.

While there are certainly compelling arguments to be made about privacy, actions such as those taken by Google and Microsoft in recent days are surely those of responsible companies doing what it takes to identify people sharing such imagery. Do you think tech companies are right to take these actions? Let us know your thoughts in the comments below.

Source: BBC News | image via Microsoft