Data is capital. Technologists and privacy advocates have been warning about – and campaigning against – the prevailing, and arguably exponentially growing, tendency of both state-aided and private companies to mine, gather, repackage, and monetize user data.

Probably the biggest story of the week, a saga that has occupied the headlines of media outlets worldwide – the undercover Cambridge Analytica report caught on camera by Channel 4’s investigative reporters – shed light on the power and dangers of social media. Or rather, it shed light on how easy it is to propagandize and subvert entire populations.

Allowing tremendous, unprecedented reach, social networks have, since their inception, been considered a precious tool of the marketing industry. Where is the line drawn between marketing and propaganda, between real and fake news? Who is to blame – we who voluntarily give up our privacy, or social networks that bury consent forms under layers of legalese?

Variations of “If a service is free, you are the product” have, for years, been conventional wisdom in technological circles. Paradigmatic of this prevailing sentiment is the recent pushback and public outrage aimed at Facebook.

In the era of fake news, many have been asking questions about the power of artificial intelligence and its use in what some consider to be psychological warfare. Google’s AI and deep learning researcher, François Chollet, has attempted to answer at least some of them in an 800-word Twitter thread published yesterday.

The problem with Facebook is not *just* the loss of your privacy and the fact that it can be used as a totalitarian panopticon. The more worrying issue, in my opinion, is its use of digital information consumption as a psychological control vector. Time for a thread — François Chollet (@fchollet) March 21, 2018

We “pay” internet companies for their services by providing them with the data they crave. This, in turn, improves their artificial intelligence algorithms – algorithms which, as Facebook has so blatantly demonstrated, can be used to feed us the information it deems we want, or rather the information its puppet masters deem we should be exposed to.

“These two trends overlap at the level of the algorithms that shape our digital content consumption. Opaque social media algorithms get to decide, to an ever-increasing extent, which articles we read, who we keep in touch with, whose opinions we read, whose feedback we get,” Chollet wrote, adding that “by moving our lives to the digital realm, we become vulnerable to that which rules it — AI algorithms.”

Indeed, if Facebook gets to decide which news you will see, does it not create and cultivate an echo chamber, a feedback loop, controlling and shaping your political and other beliefs? None of this, Chollet argued, is news: Facebook has been running experiments aimed at controlling and predicting user decisions since 2013.

This, he wrote, spills over into the outside world, creating an “optimization loop for human behavior, a loop in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see.”

These “psychological attack patterns,” as Chollet calls them, have been used for a long time in advertising. They are used to attack and take over the fragile system that is our mind.

“The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume.”

AI has, according to Chollet, evolved considerably over the years, but it is still not nearly as sophisticated as it will be. Deep learning, something Facebook has been investing heavily in since 2016, will change this. “Who knows what will be next. It is quite striking that Facebook has been investing enormous amounts in AI research and development, with the explicit goal of becoming a leader in the field. What does that tell you?” Chollet wrote.

Google’s AI and deep learning researcher thinks we are “looking at a powerful entity that builds fine-grained psychological profiles of over two billion humans, that runs large-scale behavior manipulation experiments, and that aims at developing the best AI technology the world has ever seen.”

His message to those working in AI is: “Don’t help them. Don’t play their game. Don’t participate in their research ecosystem. Please show some conscience.”

Predictably, Twitter users used this as an opportunity to pose questions about Google’s data mining and collection habits. None of this, Chollet added, applies to Google, Apple, or Amazon – and, in practice, not to Twitter either.

Essentially nothing about the threat described applies to Google. Nor Amazon. Nor Apple. It could apply to Twitter, in principle, but in practice it almost entirely doesn't. — François Chollet (@fchollet) March 22, 2018

Facebook has, in his opinion, a “morally bankrupt” leadership. “If I had been working at FB, I would have left in 2017,” he concluded.

Google’s privacy statement, however, explicitly states that the company collects three separate kinds of user data: the things users do, the things users create, and the things “that make you you.” The first category includes search queries, websites users visit, videos they watch, ads they click on, IP addresses, browser cookies, and location data. The “Things you create” section lists emails, contacts, photos, videos, and documents, while name, password, birthday, and gender fall under the “Things that make you you” section of Google’s publicly available privacy policy.

For example, in January 2017, Google was accused of illegally mining the data of Mississippi public school students, as reported by Government Technology. Although the company has since decided not to personalize ads based on AI scanning of email messages, it “still doesn’t care about your privacy,” Fortune’s Joseph Turow argued in a June 2017 piece, writing that Google’s activities “may affect the ads you get, the deals you are exposed to, the purchases you make, the discounts you receive, the entertainment and news you see, and your very sense that surveillance is natural.”

As a primarily search-oriented technology company, Google may not use AI to create echo chambers and feedback loops the way Facebook clearly does, so Chollet’s claim that “nothing about the threat described applies to Google” may hold water in principle – but in practice, it does not seem to.

At the very least, Google has the means and the power to do what Facebook has done, even if it has not done so. Whether that is because its leadership is not as “morally bankrupt,” as Chollet puts it, as Facebook’s, or for some other reason, is up for debate.

Nonetheless, if living in an AI-powered future is inevitable, discussions need to be held and responsibility needs to be taken. Time will tell whether the Facebook–Cambridge Analytica scandal generates lasting discussion and debate. As things stand right now, Facebook may end up merely as a scapegoat.