Science and technology are upending how we learn. We separate the science from the snake oil and look at how parents, teachers, and policymakers respond.

There is a saying in education that you treasure what you measure. Going by the standardized tests that dominate schools in many countries around the world, we’re teaching children that we value only a very narrow definition of intelligence—the ability to solve word problems about train times, or identify the purpose of a World War I treaty on a multiple-choice test.

The truth is that human intelligence is vast and complex. Yet it is measured—and valued—crassly. And in an age when artificial intelligence is capable of nailing IQ tests and mastering knowledge-based curricula, we may be setting ourselves up to be outshone by our own technology.

“I think we are in danger of dumbing ourselves down,” says Rose Luckin, a professor of learning-centered design at University College London who has been studying artificial intelligence and learning for more than 25 years. Because we measure intelligence in very limited ways, “we are very impressed by the sort of intelligent behavior our technology can produce.”

Luckin’s latest book, Machine Learning and Human Intelligence: The Future of Education for the 21st Century, argues that if we want to avoid turning our kids—and their teachers—into robots, we have to radically redefine intelligence. She advocates using AI to help us develop and measure human intelligence in its various forms, to better prepare students for a workplace that requires constant adaptation and learning.

Redefining intelligence

Luckin identifies seven kinds of intelligence that kids will need to thrive in the future.

First, there’s interdisciplinary academic intelligence, the ability to tie subjects together rather than studying them in silos. (Finland, of course, is ahead of the curve on this front, having jettisoned the idea of teaching by subjects in favor of teaching students to make connections between math, history, economics, and language under umbrella topics like “The European Union.”)

Then there’s social intelligence, or developing an awareness of our own emotions and how we regulate them in a group. This is something humans can excel at; robots, not so much.

Luckin also says that there are four meta-intelligences:

Meta-knowing, or our relationship to knowledge. Do students “understand where knowledge comes from?” Luckin asks. “Do they see it as something they are given and they have to learn, or do they realize it is something they construct and is contextual?” Kids with this kind of intelligence understand what constitutes good evidence, and how to make judgments based on that evidence.

Metacognition, or knowing ourselves and regulating our cognitive processes. (For example, if I know I’m a procrastinator and someone who needs to write things down to learn them, I should not wait until one hour before a major exam to try to rewrite all my notes.)

Meta-subjective intelligence, or understanding our emotions and their relationship to our learning and well-being. Motivation is a key piece of this.

Meta-contextual intelligence, which is about the dynamic context in which learning takes place—not just in a class, but with people, things, and locations. “Our intelligence is not just in our brain,” Luckin says. “There’s an increasing amount of evidence that context is huge”—and context, she adds, is something AI can’t handle well.

Finally, there’s accurate perceived self-efficacy—our ability to assess our own abilities—which is perhaps the most important kind of intelligence. “Can we accurately predict whether we are likely to be successful at something, whether we are effective?” Luckin asks.

Humans are notoriously bad at predicting their own performance. In general, behavioral psychologists and economists have shown we are prone to overconfidence, among other biases. Luckin argues this is where AI comes in.

Bringing AI into the classroom

“AI is a powerful tool to open up the ‘black box of learning’ by providing a deep, fine-grained understanding of when and how learning actually happens,” Luckin writes in Nature. She proposes that AI systems can allow us to better develop this wider range of intelligences—in part because AI could help to measure things beyond knowledge, including collaboration, persistence, confidence, and motivation. It would also allow us to dispense with the one-time tests used to assess students. Instead, students could be assessed on a continuous basis, with a computer, phone, or tablet using tools to evaluate aspects of their social, interdisciplinary, and meta-intelligences. A more accurate portrait of what students can and cannot do would give both them and their teachers more efficient ways to improve.

“Often seeing some evidence for yourself about how you are doing is very instructive to shed a light on what you are doing,” Luckin says. This method would free up the teacher to focus on making sense of the data and working on key student issues like motivation and perseverance. While she acknowledges that AI can’t fully measure any of the intelligences, she believes “it can help us get better at all of them.”

Luckin offers some examples of how AI could help improve learning. In a paper published in Computer Sciences, she examines how to measure collaborative problem-solving, a skill that’s been much-touted as necessary for the modern workplace. But it’s impossible for one teacher to keep complete tabs on which students are working well together during classroom small-group activities.

In one experiment, she and her colleagues used cameras to film kids’ hand movements and head orientations as a way of measuring how effectively they were working together. The detection tool was then cross-checked by humans, who judged whether the groups were working collaboratively or not. The goal, Luckin says, was to build evidence of social interaction, one element of successful collaborative problem-solving. This evidence could feed a dashboard that flags to teachers which groups need their attention, allowing teachers to use their time more efficiently.

Another example of how AI could work in the classroom is offered by the UK learning platform Century Tech, which uses AI and big data to tailor educational content and activities to individual students’ areas of strength and weakness. Teachers get real-time updates on students’ progress, allowing them to target how best to support learners.

Developing every kind of intelligence

There are ways to develop a range of intelligences that go beyond AI, too. Some teachers seeking to build metacognitive intelligence are using a computer program called “Betty’s Brain.” In the program, science students teach a cartoon character named Betty about river-ecosystem processes, including the food chain, photosynthesis, and the waste cycle. They then test Betty to see what she has learned, and observe the role testing plays in learning. “In checking her, the students are really checking themselves and discovering that self-monitoring is an important strategy that applies to all learning situations,” Vanderbilt magazine explains. “In order to teach, they first have to learn,” says Gautam Biswas, the Vanderbilt professor of electrical engineering and computer science who developed the program.

Luckin also says that students can build their intelligences by studying AI itself. She points to how exploring IBM’s Watson, which has an enormous knowledge base, could help students build meta-knowing—the understanding that knowledge is not just information that we are presented with, but something we build. Watson can answer complex questions because it is programmed to make observations and build bodies of evidence, generate and evaluate hypotheses, and decide on the best possible answer. In other words, Watson learns in much the way that students should learn. “We can use that as a sandpit for learners to see that knowledge has been constructed,” Luckin says.

The downside of AI surveillance

There are some obvious obstacles to the AI-enabled world that Luckin envisions. For one thing, education systems are notoriously averse to change. For another, the idea of AI tracking students’ performance raises significant concerns about data privacy. If technology is constantly evaluating your child’s intelligences, it is also collecting data on their strengths and weaknesses.

It’s easy to imagine the ways that this could be used to pigeonhole students or deny them opportunities. A story in the Financial Times offers one instructive example, explaining how a high school in eastern China decided to track its students: “A surveillance system, powered by facial recognition and artificial intelligence, tracks the state school’s 1,010 pupils, informing teachers which students are late or have missed class, while in the café, their menu choices leave a digital dietary footprint that staff can monitor to see who is gorging on too much fatty food.” The school eventually halted the program because of local controversy, the Financial Times reports. But the portrait it paints is a scary one.

Luckin admits that data privacy is an enormous issue, though she doesn’t necessarily have a solution herself. “This is the big discussion that has to happen,” she says, and teachers and policymakers should join the academics and engineers who are already talking about how to make use of AI in education.

As for schools’ historical resistance to change, Luckin isn’t alone in believing that it’s inevitable AI will become more embedded in classrooms. Simon Balderson, an assistant headteacher at Wells Cathedral School in the UK, organizes an international conference about AI and education. He tells Tes, a UK website and magazine about teaching and learning:

“At the moment, we deliver content and assess pupils but, as AI infiltrates classrooms, this will change. AI is developing so rapidly that, in the future, it will be able to detect, for example, the micro-expressions that pass across someone’s face when they are struggling to understand a concept, and will pick up on that and adapt a lesson to take account of it.”

Like teachers, the AI would adapt its approach to each student. But it would do it consistently, on a constant basis, and for every pupil. “No teacher can do that with 30 children per class,” Balderson notes. “AI will also manage data for each pupil, ensuring that work is always pitched at exactly the right level for every student. Currently, that level of differentiation is impossible.”

The future of testing

It seems unlikely that schools will jettison high-stakes academic testing anytime soon. But there is a growing acknowledgment on both sides of the Atlantic that the exam system is broken: it rewards students for regurgitating information rather than making meaning from it, and incentivizes extrinsic—rather than intrinsic—motivation.

Luckin argues that AI is a viable option to replace some tests. “Now that we have ways of collecting and analyzing data that can help us to do very accurate formative, continual assessment,” she says, “there is a realistic alternative to exams if we want it.”

She’s excited by the possibility of how changing what we measure would change what our education system values: “If we can accept that we need to change that assessment system,” she says, “then it opens the door to that radical rethink about what the education system is for.”

It’s too soon to know whether Luckin’s vision is utopian, dystopian, or just plain off. But the conclusions of a recent House of Lords report on AI included this assertion:

All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

This kind of thinking bodes well for a shakeup in the way schools work. “When you unpack it,” Luckin says, “that’s huge.”