The Government AI Readiness Index is a quantitative toolkit designed to provide an overview of any government’s readiness to use AI. The Index incorporates a wide range of data, from desk research on the presence of AI strategies, to Crunchbase statistics on AI startups, to UN indices, and distills it into a single number. This facilitates global comparisons, as well as the ability to track a government’s progress in this area over time.

Like all indexes, however, it does not capture the full complexity of the picture on the ground. Ghana’s relatively low score, for example, does not acknowledge developments such as Google recently choosing to open its first AI research facility in Africa there. Belarus similarly fares poorly in the rankings, but has been making strides in opening tech hubs and working with China on AI R&D. The nature of the indicators means that some of these details will be excluded, something we have tried to compensate for through our regional analysis, contributed by local experts to bring additional context to the quantitative findings.

Approach and structure

To start the process of designing our 2019 Government AI Readiness Index methodology, we set out our ‘exam question’: how ready is a given government to implement AI in the delivery of public services to its citizens? From this, we devised a number of working hypotheses around what makes a government ‘ready’ to use AI in public service delivery.

Cluster: Governance
Hypothesis: Governments need to implement AI in a way that builds trust and legitimacy, which ideally requires legal and ethical frameworks to be in place for handling and protecting citizens’ data and algorithm use. A coherent national AI strategy is a good proxy for measuring the strength of AI-focused governance.

Cluster: Infrastructure and data
Hypothesis: Artificial intelligence systems are built on data. Therefore the quality and availability of data, as well as the ability of a government to work with it effectively, are critical.

Cluster: Skills and education
Hypothesis: In order to develop and implement AI in public service delivery, there is ideally a strong pool of in-country talent, which can be measured both through AI skills/education and the strength of the AI sector (which can be measured through a proxy such as the number of start-ups).

Cluster: Government and public services
Hypothesis: An AI-ready government will display both strong political will and capacity to push for innovation. This can be measured through the proxies of general effectiveness of the government, and the degree of innovation already in place through digital public services.

The approach and hypotheses for our 2017 Government AI Readiness Index formed the basis of our thinking about 2019’s Index. We also knew that there were a number of changes we wanted to make this time round, based both on our own ideas and helpful feedback from around the world that we received last year.

As a starting point, we wanted this year’s Index to be more globally representative than the previous Index, which covered only OECD governments, so we have included all UN countries, plus Taiwan. This was important in guiding our data selection, as we needed to find datasets covering as many of these countries as possible (some of the last Index’s datasets were OECD-specific).

We followed a similar structure to last year of high-level ‘clusters’ containing multiple indicators or proxies for measuring government AI readiness. This time, we added a fourth cluster that we felt was missing from last year’s Index: governance, to measure a government’s AI-related vision, policies, and ethical and legal frameworks, all of which are vital prerequisites for widespread AI implementation in public service delivery.

We added new indicators and removed some from our last Index, and have ended up with 11 indicators in total, up from nine in 2017:

Calculating the rankings

In most cases we worked with existing indexes whose data had already been cleaned. For these, we took the datasets for each indicator and normalised the scores for each country between zero and one to make them comparable. For AI startups, we mined the Crunchbase database, which skews towards Silicon Valley and the USA. To mitigate this, we applied a logarithmic scale (base 10) to the scores before normalising, to provide a fairer sense of the relative intensity of private sector capacity in each country. We then added the scores for each indicator together to get our final scores for government AI readiness. Based on the feedback we received when consulting on our methodology, we decided to weight each indicator equally, as it was felt that each was of equal importance.
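The scoring pipeline described above can be sketched in a few lines of code. The country names and raw values below are purely illustrative, not real Index data, and the function names are our own labels rather than anything from the Index itself:

```python
# Sketch of the scoring steps described above: min-max normalisation,
# base-10 log scaling for Crunchbase startup counts, and an equal-weight sum.
import math

def min_max_normalise(values):
    """Scale a {country: value} mapping into the 0-1 range."""
    lo, hi = min(values.values()), max(values.values())
    span = hi - lo
    return {c: (v - lo) / span if span else 0.0 for c, v in values.items()}

# Hypothetical raw counts of AI startups mined from Crunchbase.
startups = {"A": 4000, "B": 400, "C": 40}

# Log scale (base 10) first, to dampen the skew towards startup-heavy economies,
# then normalise as usual.
logged = {c: math.log10(v) for c, v in startups.items()}
startup_scores = min_max_normalise(logged)

# An already-clean indicator (e.g. an eGovernment score) is normalised directly.
egov = min_max_normalise({"A": 0.9, "B": 0.7, "C": 0.5})

# Equal weighting: the final score is the plain sum across indicators.
final = {c: startup_scores[c] + egov[c] for c in startups}
```

Note the effect of the log step: country B has 10x fewer startups than A but 10x more than C, so after log scaling it lands exactly midway between them rather than being crushed towards the bottom of the range.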

Limitations

Our methodology has certain limitations, which we outline here. We greatly welcome any feedback and ideas for how we can improve next year’s index: see below.

Missing data points

As we started with the aim of including all UN governments, we faced the problem of finding high quality datasets containing as wide a sample of countries as possible. Some datasets, such as the UN’s eGovernment Development Index, are complete and cover all countries in our survey. Others, such as the OKFN Open Data Index, contain much smaller samples of countries. Where we have included a dataset with gaps such as this one, it is only after a thorough search for better indicators or proxies to capture what we are trying to measure. In the absence of any alternative, we fell back on less comprehensive datasets that we still judged to be of high quality.

We did not attempt to estimate missing data points, as we did not feel able to carry out the interpolation sufficiently accurately, and we felt that the absence of this data from the Index was itself revealing. This does mean, however, that the scores of governments with missing data points have suffered as a result. Unfortunately, this tends to benefit countries with stronger economies, which were generally better represented in the data.
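The consequence of not interpolating can be illustrated with a toy example. Here we assume that a missing indicator simply contributes nothing to the equal-weight sum, which is our reading of the text rather than a documented implementation detail, and the countries and values are hypothetical:

```python
# Illustration of how missing data points depress a country's total when
# gaps are not interpolated. Values are hypothetical, not real Index data.
indicators = {
    "complete_country": {"egov": 0.8, "open_data": 0.6, "startups": 0.7},
    "gappy_country":    {"egov": 0.9, "open_data": None, "startups": 0.8},
}

def total(scores):
    # None marks a missing data point; it adds zero rather than being estimated.
    return sum(v for v in scores.values() if v is not None)

totals = {country: total(scores) for country, scores in indicators.items()}
# gappy_country outscores complete_country on every indicator where both have
# data, yet its total is lower purely because of the gap.
```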

In the case of China, which is not represented in the OKFN Open Data Index, the government received a lower score for AI readiness than we feel reflects reality. China has prioritised implementing AI in public service delivery, and already uses AI widely in a number of public service programmes. As a result, we would expect China to be at, or very near, the top of our rankings. Its actual place (20th) can therefore be attributed at least in part to missing data points. However, we felt that data availability was too vital a precondition for widespread AI implementation to leave out: it is both necessary for training and powering algorithms and an indicator of good governance, transparency and accountability. As we could not find a more complete dataset or proxy that sufficiently captured data availability, we made the decision to use the OKFN dataset, despite the missing data points.

Other limitations in the data

While most of our datasets are from 2018, some (the WEF Networked Readiness Report and the OKFN Open Data Index) are from 2016 or 2017. We decided to include these in the absence of any more recent high quality datasets capturing these vital aspects of our rankings. Given that our Government AI Readiness Index is the first of its kind in the world, and that we are not comparing results with last year’s Index due to the changes in scope and methodology, we judged these acceptable to include this year. For next year’s Index, however, we will need to reconsider including these indicators if more up-to-date data is not available, given the problems they would cause for comparability.

Future research and the limits of the quantitative method

There is a risk that indices such as these fuel a global race for AI. The higher rankings are predominantly held by countries from the Global North, which highlights the risk of cementing the global dominance of countries with a history of funding scientific and technological research and development.

We are well aware that the Government AI Readiness Index does not show a complete picture; rather, it offers one specifically quantitative view of a government’s AI readiness. A number of things that might make a government AI-ready are unquantifiable, and therefore outside the scope of our study. Further qualitative studies would hopefully draw out more of these less tangible elements, producing a more balanced view of global government AI readiness.

If you have any feedback or recommendations for next year’s Index, please get in touch with us at research@oxfordinsights.com.