Artificial intelligence has become a hot political and cultural topic.

How has this complex and seemingly niche subject moved to the forefront of mainstream debate?

The simple answer is that it is no longer niche. The pace of development of AI in recent years has accelerated enormously, driven by many factors including the introduction of powerful “deep learning” algorithms, a massive proliferation of data for these algorithms to learn from, and significant increases in investment.

Algorithmic processes now affect many aspects of our lives, whether we know it or not. This makes the notable absence of one player from the debate surrounding AI all the more glaring: civil society. This must be remedied if we are to maximise the positive potential of AI whilst minimising the risk of harm.

Improving lives

AI brings huge opportunities for civil society organisations (CSOs) to improve the lives of people and communities around the world.

The charity Parkinson’s UK is exploring whether machine learning could be applied to develop better early warning indicators for Parkinson’s disease.

The Lindbergh Foundation, meanwhile, has partnered with the start-up Neurala to apply machine learning to drone surveillance video from game reserves and develop algorithms that can predict poacher behaviour, enabling more effective interventions.

But there is also a growing body of examples of the risks AI poses to civil society.

Some stem from deliberately malign uses of the technology (as outlined in a recent paper) or the use of algorithms to generate targeted misinformation and propaganda in order to influence public opinion and elections.

Equally important is the risk of unintended negative consequences. It is increasingly apparent, for instance, that when machine learning algorithms are applied to data that contain historical statistical biases for factors like race or gender, they very quickly reflect and even strengthen those biases unless steps are taken to mitigate this danger.

With people outside the political and corporate worlds most vulnerable to this kind of machine-driven decision-making, it has never been more urgent to bring civil society into the wider AI debate.

Lack of awareness

Charities and non-profits do not have a seat at the table in many forums where these issues are being debated. At the same time, many CSOs may not yet be aware of the issues or understand their importance and relevance to their work.

We cannot just accept this.

CSOs represent many of the most marginalised individuals and communities in our society; and since these groups are likely to be hit soonest and hardest by the negative impacts of AI, it is vital that the organisations representing them are in a position to speak out on their behalf. If they do not, then not only will those CSOs be failing to deliver on their missions, but also the chances of minimising the wider harmful effects of AI will be significantly reduced.

So, what needs to be done to ensure that CSOs play their full part in shaping the development of AI for the better?

Partly it is an issue of education and skills. CSOs need support if they are to get to grips with AI, help identify ways in which it can be put to use for societal good, and play a key role in identifying risks and potential unintended consequences.

And this is important: the implications of getting AI wrong are so far-reaching that decisions about its future cannot simply be left up to technologists.

A broad range of communities and organisations representing different viewpoints must be brought into the debate; and if they require up-skilling to make that happen, then the onus is on governments and the tech industry to ensure they get the support they need.

It is also imperative that any civil society involvement is meaningful, and that CSOs are valued for the perspective they bring. This must be reflected in the approach of policymakers.

We are already seeing policy activity: the UK is keen to position itself as a world leader in the field of AI ethics. It is particularly well placed to do so, given the role the nation has played in the historical development of AI. A number of new partnership institutions have been established with names that reflect this rich heritage (e.g. The Alan Turing Institute, the Ada Lovelace Institute). There has also been a great deal of parliamentary interest, with groups established in both the House of Commons and the House of Lords to explore AI.

Any governmental strategy for AI or the wider Fourth Industrial Revolution should acknowledge the role that civil society must play in shaping the development of new technology, as well as the impact that these technologies might have on civil society itself.

The Charities Aid Foundation’s Future:Good project aims to play a part in addressing these challenges.

Through our work we have been helping to drive the debate over the impact of disruptive technologies like AI and blockchain on philanthropy and non-profits. We want to continue to act as a focal point that can help to inform CSOs about key issues, whilst also highlighting to governments and tech companies the value and importance of engaging with civil society.