“What if technology could help improve conversations online?”

That’s the lowkey Orwellian message that greets visitors to the website of Perspective, Google’s new AI system for detecting (and potentially deleting, hiding, or burying) “toxic” comments on the web.

Perspective is still in early days of development, but in the future, you may have to adjust your speech in order to satisfy the lofty standards of Google. Otherwise, the company’s faceless AI might just have to “improve” you. Where’s Sarah Connor when you need her?

The good news is that, for now at least, Perspective is about as effective as C-3PO with a lisp. Software engineer and columnist David Auerbach has found the program woefully inept at sorting “toxic” comments from ordinary ones. Because the AI currently focuses on words rather than meanings, inoffensive comments like, “Rape is a horrible crime,” or, “few Muslims are a terrorist threat,” were assigned “toxicity” ratings of over 75 percent.

Of course, even if Perspective could successfully sort “toxic” comments from innocuous ones, that doesn’t necessarily mean they’re going to be deleted or buried. According to the project’s homepage, the system performs no function other than detection.

But statements from the project’s developers make it clear that censorship is the end goal. Indeed, the system seems to have been developed to augment the left’s ongoing war on comments sections. The software was initially made available just to organizations that are part of Google’s Digital News Initiative, including the BBC, The Financial Times, and The Guardian, which promptly began testing the software to moderate their comments sections.

“News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labour and time,” says Jared Cohen, president of Jigsaw, the Google social incubator that built the tool. “As a result, many sites have shut down comments altogether. But they tell us that isn’t the solution they want.”

Google couldn’t be clearer: it’s a censorship bot. And just because it’s currently limited to news sites and comments sections doesn’t mean it won’t be rolled out to social networks and the rest of the web. Twitter, which just introduced yet another system to punish users who hurt celebrities’ feelings, would probably love to get its hands on a working version of Perspective.

Twitter already has a tremendous depth of data on its users, including gender, location, and personal interests. Imagine that data, combined with an AI tool designed to pinpoint inconvenient content, in the hands of a CEO who has done little to conceal his political biases.

The idea of an all-powerful Google robot watching over us all, making sure our speech is “improved,” has greatly excited mainstream media. Google, says the BBC, is going to “make talk less toxic.” According to WIRED, Perspective will put a stop to “abusive comments that silence vulnerable voices.” New York magazine portrays Perspective as a friendly robot, “a kind of Clippy for the comments section.” Our robot overlords are certainly getting a warm welcome.

The left is likely to be disappointed, though. If Auerbach’s early research on Perspective is any guide, the system is designed to filter out impoliteness, not political disagreement. Google’s censorbot might turn the comments section – and perhaps the web – into a grey, sanitized dystopia scrubbed of strong emotions and trollish humor, but it won’t get rid of facts.

In other words, myths about gender wage gaps, police racism, and “moderate” Islam are still going to get debunked. Even Skynet can’t keep some things quiet.

You can follow Allum Bokhari on Twitter and add him on Facebook. Email tips and suggestions to abokhari@breitbart.com.