Over the past year there has been much talk about the role AI has played, and continues to play, in swinging election results and spreading fake news. But what if a far more pernicious use of this technology lies just around the corner?

AI propaganda machines have been blamed for manipulating elections, disrupting democracy and getting specific candidates elected. While the specifics vary, the broad idea is that AI, by analysing our online profiles, can build an extremely accurate picture of our personalities and desires, and use this information to manipulate us en masse.

(Two great articles which break down the details are this by Hannes Grassegger and Mikael Krogerus and this by Berit Anderson.)

But what if these same techniques could be used to enact far greater social control than the furtherance of a questionable political agenda or the election of a corrupt leader?

In Extremis

The role of fake news has generated screeds of controversy, but one area that engenders very little disagreement is the need to remove extremist content online.

In response to pressure from governments and users, tech giants Facebook and Google have both implemented AI measures to prevent extremist content from appearing on their platforms.

The UK government has even branched into the realm of software development to create its own tools to detect terrorist groups’ propaganda, and AI is now being developed to recognise ideology in ways that would not have been thought possible just a few years ago.

At first glance this seems a purely good use of AI. After all, giant tech firms certainly have the funding and the capability to develop formidable AI tools, and if these can be used to curb the radicalisation of vulnerable people, surely that is only good news?

There are very few people, outside perhaps of ISIS training camps or the KKK, who would argue that YouTube should be showing more videos that might radicalise individuals and lead to the unprovoked killing of innocent people.

But this consensus papers over a key difference in approach between identifying violent content and identifying content which is ideologically motivated.

While the ultimate aim may be to prevent violence, recognising ideological content is an entirely different problem to crack, as it depends on first defining what counts as ideology.

And once we create such a definition, it immediately raises the question of whether there is anything to stop the same artificial intelligence and machine learning tools being used to manipulate, or indeed to create, new ideological content.
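To make that dependence concrete, here is a minimal sketch in Python of how a content classifier reduces ‘ideology’ to whatever labels and vocabulary its builder supplies. The labels and phrases below are invented placeholders, not any real system’s training data; production systems use far richer models, but the reliance on a human-supplied definition is the same.

```python
# A toy bag-of-words classifier: "ideology" is simply whatever
# labelled vocabulary the builder chooses to train it on.
from collections import Counter

def train(labelled_texts):
    """Count word frequencies per label from (label, text) pairs."""
    model = {}
    for label, text in labelled_texts:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def classify(model, text):
    """Return the label whose vocabulary best overlaps the text."""
    words = text.lower().split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in model.items()}
    return max(scores, key=scores.get)

# Whoever defines the labels defines what counts as "ideological".
model = train([
    ("ideology_a", "join the struggle martyrdom glorious cause"),
    ("neutral", "weather forecast sunny rain traffic update"),
])
print(classify(model, "a glorious cause worth the struggle"))  # ideology_a
```

The point is that nothing in the code distinguishes detection from promotion: retrain the same machinery on a different label set and it scores content for, rather than against, an ideology.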

“We may be building the infrastructure of authoritarianism… you build the infrastructure and it gets taken over by the people with money, with power, with authority.”

~ Zeynep Tufekci, techno-sociologist

A century of religion

Living in some parts of the secular West, it is easy to believe that religion is a long outdated concept, a quaint tradition that still persists in remote backwaters, but in essence an idea whose time has passed and one that will steadily wither as modern society progresses.

To those in the secular West, religion is often seen as a concept that was surpassed with the arrival of the Enlightenment. Post-Darwin, it is argued, we no longer need God to explain or permit either our existence or our behaviour. We live in the age of reason now.

If asked to consider religion, the average secular westerner might assume that, globally, religion will follow a similar path to that taken by Christianity in Western Europe over the last two hundred years: a steady decline from state religion to a harmless pastime for a small proportion of the population on a Sunday morning.

The likelihood of this secular vision of the future coming to pass, however, is in no way borne out by the evidence. In fact, research by the Pew Research Center suggests that by 2050 the proportion of the world’s population unaffiliated with any religion will have shrunk from 16% in 2010 to just 13%.

In other words, if the projection is correct, the proportion of people subscribing to religious doctrine globally is set to increase, not decrease. Even as things stand, religious believers outnumber non-believers several times over.

Religious practice and extremist terrorism are of course not the same thing, and attempts to use AI to tackle online extremism are not solely restricted to religious radicalisation. However, by stepping into the arena of ideological identification, AI opens the way for a very different sort of control than the merely political.

Using the power of machine learning to identify those vulnerable to radicalisation might seem like a valuable tool, but what happens when fundamentalists themselves seek to use these same tools?

If Google can identify individuals vulnerable to extremist propaganda, what is to stop evangelical fundamentalists harnessing the same tools?

Gen Alpha

One reason this might be of concern is the nature of the new ‘converts’ behind this projected growth: by far the largest source of new believers for any organised religion is children born to parents already in the faith.

While a small percentage of people may convert to a religion as adults, the vast majority of believers belong to a church because their parents do. The younger a child is introduced to the ideas, and the more pervasive those ideas appear, the more likely they are to retain those beliefs into adulthood.

In the sort of AI propaganda machines created to date, there has been little need to target children. A political campaign manager has no desire to target five-year-olds, who can’t vote, but organised religion generally takes a longer view of things.

In the right hands could AI tools be used to create targeted content over a period of decades rather than months? To inculcate particular messages aimed at spawning a generation of believers, adherents to a particular faith?

What sort of profile could an AI build of a child born in the era of social media, where almost every action from birth is digitally recorded? What changes could it effect if it subtly presented, and refined, targeted content over a period of years, or decades, as opposed to over a single election campaign?

At present, the means that organised religions have to ‘spread the faith’ are largely localised, with relatively little coordination in their online presence. But is there any reason to believe that the sort of AI algorithms that were able to identify and target potential Trump voters couldn’t be used to identify and target children with religious messaging?

What is the likelihood that religious zealots might want to invest heavily in such systems?

It might be argued that AI will struggle to approximate the symbolic richness required to convince anybody in the real world, but the beauty of any AI propaganda machine is that it never needs to ‘know’ anything about the subject matter.

All it needs to do is calculate which buttons to press to get a reaction from which user. It would not need to ‘understand’ religion to convert users any more than it needed to ‘understand’ politics to sway voters.
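As a rough illustration of that point, here is a minimal epsilon-greedy sketch in Python: the selector learns which of several message ‘buttons’ gets the most engagement without ever inspecting what any message says. The variant names and response rates are invented for illustration only.

```python
# Epsilon-greedy selection over message variants: the algorithm
# optimises engagement with no model of the messages' content.
import random

def pick_variant(clicks, shows, epsilon=0.1):
    """Mostly exploit the best-performing variant, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(shows))
    return max(shows, key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)

random.seed(0)
variants = ["message_a", "message_b", "message_c"]
true_rate = {"message_a": 0.02, "message_b": 0.10, "message_c": 0.05}
clicks = {v: 0 for v in variants}
shows = {v: 1 for v in variants}  # start at 1 to avoid division by zero

for _ in range(5000):
    v = pick_variant(clicks, shows)
    shows[v] += 1
    if random.random() < true_rate[v]:  # simulated user reaction
        clicks[v] += 1

best = max(variants, key=lambda v: clicks[v] / shows[v])
```

After a few thousand simulated interactions the selector converges on whichever variant provokes the most reactions; the strings being optimised could be political slogans or religious messaging, and the code would be identical.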

Doubtless the infrastructure and development required would be immensely expensive, but well-funded organisations, such as the Vatican bank or the Saudi state, have already shown an inclination to spend heavily to acquire new followers.

The Catholic church today operates the world’s largest non-governmental school system, supporting some 43,800 secondary schools and 95,200 primary schools.

What effect could billions of dollars of investment from religious institutions have on the educational, cultural and AI landscape?

What is the likelihood that a well-funded organised religious group that genuinely believes in the reality of its theology will invest in AI to reach potential converts?

For hearts and minds

Throughout history, organised religion has proven to be one of the most persistent and robust edifices of civilisation. The promise of a higher purpose has been intoxicating to the point that vast corruption in organised religion has often been overlooked.

In fact, religion has frequently been used to control populations and for overt political ends. What new power structures may develop if the process of evangelism can simply be scaled up at the click of a mouse?

For many, religious faith is the most deeply held belief, and so it may seem far-fetched that AI might somehow take over. But the question here is not about AI becoming the object of worship, so much as what religious leaders equipped with AI tools may seek to do.

The dangers posed are no reflection on the good or ill of organised religion per se, but a sign that the future landscape of spiritual discussion may be about to alter radically, and that there is an urgent need to consider and understand exactly what forces might shape that landscape and who will control it.

Equally, it is quite distinct from any discussion of a singularity, in which computers become more intelligent and wiser than us. The question of AI as a religious propaganda machine is much simpler: the age-old problem of those with power and money using it to indoctrinate the masses.

By drawing a line in the sand against extremism, AI may provide a valuable service, but at the same time does it plant the seeds for the development of a weaponised form of AI, designed to wage a war of ideology for the hearts and minds of potential converts?

“Anyone who can appease a man’s conscience can take his freedom away from him… we shall keep the secret, and for their happiness we shall allure them with the reward of heaven and eternity.”

~ Fyodor Dostoyevsky, The Grand Inquisitor

In a world where over 80% of the global population subscribes to a religion, how long before religious organisations become adept at using machine learning algorithms to spread the faith? How easy will it then be to distinguish ‘fake spirituality’ from true insight? What means are in place to prevent an arms race of religious proselytisation?
