This blog outlines a few examples that show some of the similarities and opportunities for borrowing good ideas.

We are researching the use of artificial intelligence (AI) in our school system, as part of our ambition to build a smarter education system where teachers and learners take advantage of technology and data. We know that AI has great potential to tackle some of our schools’ biggest problems - from excessive teacher workload to a one-size-fits-all approach to learning. Yet conversations about AI in education are less developed than in other sectors. Like all good researchers, we want to learn from the best ideas elsewhere. Discussions about AI in healthcare are more advanced (see, for example, Nesta’s recent “Confronting Dr Robot” report or the House of Lords Select Committee report on AI), so we started by looking at our health system. So what can AI in health tell us about AI in education?

Despite the very different contexts, the two sectors share some potential benefits.

1. Assisting with administrative tasks

AI tools such as virtual medical assistants can relieve physicians of administrative tasks, allowing them to focus on patients. AI can do the same for educators. For example, “Jill Watson” is an AI teaching assistant used by a professor at Georgia Tech to answer students’ questions posted on an online forum. While this example comes from a university, similar solutions could be implemented in schools. Companies such as SnatchBot provide tools that assist teachers with tasks such as scheduling and answering questions about lesson plans and deadlines. With AI lifting the administrative burden, teachers can focus more on students’ learning.

2. Personalisation

AI is helping to ‘personalise’ healthcare, from tailored advice to precision medicine. Similarly, AI is paving the way for a more personalised approach to education: adaptive learning platforms, for example, use insights gleaned from individual students to adjust the pace and focus of learning.

We also identified similar challenges in the two sectors, which means we can learn from what has (and hasn’t) worked in health.

1. Trust and privacy

Current forms of AI demand huge amounts of data, and finding a balance between reaping the benefits of AI and maintaining trust and privacy is tricky. Healthcare illustrated this when the Royal Free NHS Foundation Trust signed a five-year deal with DeepMind. The collaboration required the transfer of real patient data, and an ICO investigation found that the Trust did not comply with the Data Protection Act. What surfaced from the DeepMind fiasco is that users (in this case, patients) ultimately need to know how their data is going to be used.

Education needs a tailored approach to trust and privacy concerns to avoid a similar blunder. For example, AI in education involves the data of pupils who are minors, so a robust structure for responsible data sharing is essential. What can we learn from the health sector that might be useful for schools? The Lords’ AI report recommends that NHS Digital and the National Data Guardian for Health and Care construct a framework for data sharing. Such a framework could build public trust if it clearly sets out mechanisms that allow patients to understand how their data will be used. A similar, robust framework for data sharing and processing, perhaps overseen by an accountable public body, could be implemented in the education sector.

2. Public value and competitiveness

NHS data is extremely valuable. It is a unique source of value for the nation, with records going back decades. This creates a tension. If data can be shared safely with companies and leads to improved health outcomes, there are obvious benefits. Equally, if this data is shared with only a small number of companies and generates eye-watering profits for them rather than the public (since 2015, VC investment in healthcare AI has soared, reaching almost $1.3 billion across 103 deals in 2017), there are obvious downsides. Sensitive health data should not be shared lightly, and should be handled in a way that ensures value is recouped by the public. Small and medium-sized enterprises (SMEs) should also have access to health data, to prevent big tech corporations gaining a monopoly and to ensure competition in the healthcare AI space. The education sector faces a similar challenge: just as data needs to be shared across the health system, if we want AI tools to deliver benefits for our schools, we need to find ways for schools to share data safely.

3. Real-world testing and public scrutiny

Relevant stakeholders - in the health sector this could include citizens, clinical professionals and health policy experts - should be involved in the design and evaluation of AI systems. Without this, we risk weak accountability in the health sector, and the public may lose control over their healthcare data. Similarly, involving teachers, parents and students in the development and testing of AI in education would help ensure the systems meet their needs, and that stakeholders understand and agree with the mechanisms protecting the privacy and security of their data. Nesta recommends controlled, real-world experimentation with healthcare AI systems in designated test sites, comparing them with non-AI systems. Similar ‘test beds’ across schools could be used to improve AI tools and ensure they are suited to the needs of schools, teachers and the real world. Nesta has also called for public panels to ensure that AI tools take into account the demands and perspectives of citizens and healthcare professionals. Similar models, facilitating student, teacher and parental involvement in the development of AI tools, could perhaps be adopted in the education sector.

What's next?