The cancer-screening controversy resulting from the CervicalCheck debacle has devastated lives. It has also badly damaged the reputation of some health service organisations and their leadership.

Over the last 10 years, some three million tests have been performed on 1.8 million women in Ireland. About 280,000 women per year now undergo the test. It transpires that every one of these Irish smear slides was examined manually. Computer-based imaging and a separate HPV test, already used in many other countries, will now be introduced for Irish tests later this year.

However, the repetitive nature of screening, where suitably trained experts can be overwhelmed by the sheer number of tests involved, ultimately points to a fully automated process. A deep learning algorithm developed at Stanford University has already proven as effective at identifying skin cancers as certified dermatologists. Fully automated visual diagnosis of smear tests, skin cancers and many other medical conditions is surely imminent.

Replace jobs

Should we trust deep learning and artificial intelligence (AI) as much as we trust human decisions? The e-voting machine debacle in 2012 set down a marker for Irish society. Regardless, our society and our economy may yet be swept along by the current global tsunami of interest and investment in autonomous software systems. Driverless vehicles may replace our taxis and private cars. Driverless delivery trucks will replace jobs in transportation. Automated decision systems may replace not just medical test specialists, but also security staff, insurance and credit assessors, tax and social welfare officers, and others.

Some governments fear an AI arms race. Dominance in the technology could be used to probe a nation’s cyber defences, threatening a crippling catastrophic failure of national infrastructure. Mastery of deep learning could well win a war, both defensively in detecting threats and also offensively at no human risk to the aggressor. Because AI technology is becoming so strategic, there is concern that advanced research may “go dark”, remaining unpublished as national secrets.

Various governments have published national strategies on AI. China would appear to have the most concerted focus, with its state council publishing its “New Generation of Artificial Intelligence Plan” in July last year. Just in the last three months, France, Britain, South Korea and India have all announced national strategic initiatives.

The US has yet to announce any plan, and it is perhaps telling that the Trump administration has yet to appoint a president’s science adviser.

In Ireland in January, UCD announced a €4 million collaboration to help enable Samsung to create smarter products for its customers. The University of Limerick announced a new masters degree in AI. In February, Kerry’s Dairymaster announced a €2 million R&D partnership with IT Tralee and the Lero software research centre for intelligent autonomous systems for farming. SFI gave a €570,000 grant to the UCC INFANT research centre to use AI to monitor brain health in new-born children.

Co-ordinated research

Some indigenous Irish companies which are globally recognised for their expertise in AI include Artomatix (texture manipulation for gaming and also fashion design), Aylien (natural language processing), Movidius (edge computing, and now a part of Intel), Nuritas (peptide discovery) and Orreco (health management for professional athletes). Despite all these initiatives, there is not yet a published national Irish strategy for AI.

What could an Irish strategic plan for AI comprise? There is clearly a need for additional resourcing of nationally co-ordinated research, skills development and technology transfer from academia to industry. This would mirror the emphasis of national plans elsewhere. However, it is always challenging for a small nation to compete with the substantial resourcing available abroad.

Notwithstanding that, a national strategy for a powerful emerging technology should not just consider investment and skills development, but also the likely impact on, and challenges for, society at large. In both respects, Ireland could take a global leadership role.

The European Union has recently led the way on citizens’ rights to data privacy and to be forgotten, through the GDPR. The global software industry is changing in response.

With autonomous software and fully automated decision-making now imminent, is there not a need for a citizen’s “right to verify”? Computers are increasingly making life-impacting judgments, in vehicle safety, medical screening, financial and benefits assessments, and myriad other applications. If we are to trust the fairness of these decisions, free from human vagaries and prejudices, should there not be a citizen’s right to verify that any automated decision is fair and correct?

A software algorithm would thus, by law, be required to explain its decision if challenged (if necessary through the courts). The operator of an algorithm should be legally liable for any damaging discrepancy from a similar decision made by a well-informed human expert presented with the same evidence.
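To make the idea concrete, consider a minimal sketch (not drawn from any real system, with invented feature names, weights and threshold) of what a “right to verify” might demand of even a simple automated decision: the system must itemise exactly how each piece of evidence contributed to its outcome.

```python
# Hypothetical sketch: a transparent scoring decision that can
# itemise its reasoning. All weights, evidence and the threshold
# are invented purely for illustration.

def explain_decision(weights, evidence, threshold):
    """Return the decision plus a per-factor breakdown of the score."""
    contributions = {name: weights[name] * value
                     for name, value in evidence.items()}
    score = sum(contributions.values())
    return {
        "approved": score >= threshold,
        "score": score,
        "contributions": contributions,  # the auditable explanation
    }

decision = explain_decision(
    weights={"income": 0.5, "arrears": -2.0},
    evidence={"income": 3.0, "arrears": 1.0},
    threshold=0.0,
)
# score = 0.5 * 3.0 + (-2.0) * 1.0 = -0.5, so the application is
# refused, and the breakdown shows the arrears factor tipped it.
```

A linear score like this is trivially explainable; the open research challenge, as the article notes, is obtaining comparable explanations from deep learning systems.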

Judea Pearl, a computer scientist and philosopher, was a pioneer of Bayesian networks, which mathematically model the probability that one event causes another. His eminently accessible recent book The Book of Why observes that current deep learning systems struggle to justify their reasoning, and explains how a new approach to AI can and should be developed.
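The kind of reasoning a Bayesian network encodes can be illustrated with a single application of Bayes’ rule: given an observed effect, how likely is a suspected cause? The probabilities below are invented for illustration only.

```python
# Illustrative only: inferring the probability of a cause from an
# observed effect, the elementary building block of Bayesian-network
# reasoning. All probabilities are made up for this example.

def posterior_cause_given_effect(p_cause, p_effect_given_cause,
                                 p_effect_given_not_cause):
    """Bayes' rule: P(cause | effect)."""
    p_effect = (p_effect_given_cause * p_cause
                + p_effect_given_not_cause * (1 - p_cause))
    return p_effect_given_cause * p_cause / p_effect

# Example: did rain cause the grass to be wet?
p = posterior_cause_given_effect(
    p_cause=0.2,                  # P(rain)
    p_effect_given_cause=0.9,     # P(wet grass | rain)
    p_effect_given_not_cause=0.1, # P(wet grass | no rain)
)
# The prior belief in rain (0.2) rises to roughly 0.69 once
# the wet grass is observed.
```

Crucially, Pearl’s point is that such conditional probabilities alone capture association, not causation; his proposed new approach to AI builds causal structure on top of them.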

Maybe Ireland can take a lead both in legislation for the right to verify, and also in R&D on how software algorithms can be engineered to honour that right.