





August 3, 2017



Google Labels Religious Beliefs 'Toxic' With Censorship AI Program - 'This Is The Empire Of Darkness With Google Acting As SkyNet'





By Susan Duclos - All News PipeLine



Back in February 2017, Wired highlighted a new Google AI tool, named "Perspective," supposedly created to help fight online "trolling." Once complete, Jigsaw claims, the API will give "any developer access to the anti-harassment tools that Jigsaw has worked on for over a year."





Enter a sentence into its interface, and Jigsaw says its AI can immediately spit out an assessment of the phrase's "toxicity" more accurately than any keyword blacklist, and faster than any human moderator.

To understand how widely this technology will be used online by news outlets: by June 2017, it was announced that The New York Times was "going to expand the availability of online comments from 10 percent of articles to 80 percent by the end of the year, without adding more moderators to its staff," by "rolling out a new structure of comment moderation using software from Google called Perspective," with a Moderator tool that would "automatically approve some comments and help moderators wade through others more quickly."





According to Google's "Perspective" website, the term "Toxic" is defined as "a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion."
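For readers curious about the mechanics, developers query Perspective over HTTPS with a small JSON body and get back a 0-to-1 "toxicity" score. The sketch below is hedged: the endpoint and field names follow Google's published v1alpha1 API, the key is a placeholder, and the sample response is illustrative rather than a real API result.

```python
import json

# Placeholder key; a real request also needs network access and an enabled API.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON body Perspective expects for a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response_body):
    """Pull the 0-1 summary score out of a Perspective-shaped response."""
    return (response_body["attributeScores"]["TOXICITY"]
            ["summaryScore"]["value"])

# Offline demonstration with a response shaped like the API's:
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.66, "type": "PROBABILITY"}}
    }
}

payload = build_request("An example comment to score.")
print(json.dumps(payload))
print(f"{extract_toxicity(sample_response):.0%} likely to be perceived as toxic")
```

The "Writing Experiment" widget discussed below is simply this same scoring pipeline with a text box in front of it.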



What we found when we began running direct quotes from the Bible through their "Writing Experiment," where people can type in something they might say in a comment section about an issue, was that innocuous Bible passages received low "toxic" ratings, while passages reflecting religious beliefs on marriage, infidelity, adultery, and other related issues are considered highly "toxic" by those who programmed this AI bot.











RELIGIOUS BELIEFS ON MORALITY ARE CONSIDERED 'TOXIC' BY GOOGLE



This morning I ran across an excellent experiment conducted by Conservative Tribune, which provides 27 examples of Google's censorship bot rating various statements and phrases as "toxic," such as the statement "We must learn to live together as brothers or perish together as fools," rated as 87% likely to be perceived as "toxic."



Other statements such as "Jews have a right to exist," "Hillary Clinton is a criminal," and "Obamacare is disastrous" all received high levels of perceived toxicity, yet comparison examples dealing with race issues show a startling bias.



For example: Conservative Tribune tested the statement "Blacks have no rights," which was judged as 63% likely to be perceived as toxic, yet flip the statement around to "Whites have no rights," and that statement is only 10% likely to be perceived as toxic.



All 27 examples can be seen here.



That experiment got me thinking about exactly what the globalists at Google would teach their bots about religion and religious beliefs. Many verses that were innocuous received low "toxic" ratings; others, in fact most that deal with any type of morality, are listed with extremely high percentages of "likely to be perceived as toxic."



For example, Revelation 3:20, "Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me," was only 2% likely to be perceived as toxic, but Revelation 21:8, "But the fearful, and unbelieving, and the abominable, and murderers, and whoremongers, and sorcerers, and idolaters, and all liars, shall have their part in the lake which burneth with fire and brimstone: which is the second death," is listed as "66% likely to be perceived as 'toxic'."



In other words, as was said to me in an email this morning while discussing this issue, "Anything that goes against perversion and the globalists is considered toxic. This is the empire of darkness with Google acting as SkyNet."



A few examples shown below:















The examples go on and on, with passages that do not express beliefs on morality being given low toxic ratings, yet religious beliefs that people of faith dare to state online in comments are considered a "rude, disrespectful, or unreasonable comment," by Google's very own definition of what "toxic" means to them.



Trolling programs and moderators alike generally consider a troll to be someone who attacks others and is rude and uncivil, yet the statements above are taken directly from the King James Bible; to the Google globalists programming these AI bots, the statements, in and of themselves, are toxic.







HOW 'TOXIC' ARE YOUR THOUGHTS AND BELIEFS ACCORDING TO GOOGLE



While all ANP articles are interactive by way of welcoming comments, on topic and off, with people encouraged to discuss the article itself or any breaking news of interest with each other, we are going to ask readers to interact in another way here as well.



Test Google's "Perspective" on "toxic" commenting by going to their site and scrolling down a little more than halfway, to where it says "Writing Experiment." Using the civility all of you are so wonderful about showing each other here, type in your own opinion on any news of the day, or on an article you have seen today, and see what level of toxicity Google assigns your comment. If possible, take a screen shot to show in the comments; if not, just copy/paste your statement, question or comment, and let us know what percentage Google gives your comment as being "perceived" to be toxic.



Here is mine:





I guess my opinion on Google's censorship practices is considered "rude, disrespectful and unreasonable."



BOTTOM LINE



These programs are going to be used on MSM sites, with the NYT and others already announcing that plan, with the specific intent of labeling any opinion that doesn't conform to liberal progressive, social justice warrior group-think as "trolling" or "toxic," so they can prevent other readers in their audience from seeing multiple points of view, control the narrative, and assassinate free speech.











Language warning for the video below, but the videographer does test a number of phrases and comments using comparison sentences and gets some surprising results: for example, disagreeing with a "black" person is rated as more toxic than disagreeing with a "white" person, among other differences.









Help Keep Independent Media Alive, Become A Patron for All News PipeLine at https://www.patreon.com/AllNewsPipeLine







