Artificial intelligence can predict a student’s ability to solve problems by looking at past performance – and help them learn better too

HOW do you show that you know what you know? Often, you have no choice but to take a test.

A new algorithm could both improve your knowledge and do away with formal tests altogether. Developed by researchers at Stanford University and Google in California, it analyses students’ performance on past practice problems, identifies where they tend to go wrong and forms a picture of their overall knowledge.

The idea of using software to track a student’s progress isn’t new. But few attempts so far have exploited deep learning, the cutting-edge discipline of making machines learn by digesting large amounts of data.

Chris Piech at Stanford and his team fed their system more than 1.4 million student answers to maths problems set on the online learning platform Khan Academy, along with whether each answer was right or wrong. They also trained a neural network to sort questions by type: those involving square roots, the slope of a graph, or finding where a line crosses the horizontal axis, for example.

With all this information, the system then began to learn each student’s capabilities on each question type.

The model could predict with up to 85 per cent accuracy whether a student would get a new exercise right or wrong, just by looking at a few dozen other questions they had already answered. Piech presented the results at the Neural Information Processing Systems conference in Montreal, Canada, last month.
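The article doesn't detail the model itself, which was far more sophisticated than anything shown here. But the core idea – keep a running estimate of each student's mastery of each question type from their past answers, then use it to predict whether they will get a new exercise right – can be illustrated with a toy sketch. The class and names below are invented for illustration and are not the published system:

```python
# Toy sketch of knowledge tracing: estimate a student's mastery of each
# question type from past answers, then predict new outcomes.
# This is an illustrative simplification, NOT the Stanford/Google model,
# which used a neural network trained on real Khan Academy data.
from collections import defaultdict

class ToyKnowledgeTracer:
    def __init__(self, prior=0.5, weight=1.0):
        self.prior = prior    # neutral starting guess for P(correct)
        self.weight = weight  # pseudo-count controlling how fast data overrides the prior
        # (student, skill) -> [number correct, number attempted]
        self.stats = defaultdict(lambda: [0.0, 0.0])

    def observe(self, student, skill, correct):
        """Record one practice answer."""
        s = self.stats[(student, skill)]
        s[0] += 1.0 if correct else 0.0
        s[1] += 1.0

    def predict(self, student, skill):
        """Smoothed estimate of P(correct) for this student on this skill."""
        c, n = self.stats[(student, skill)]
        return (c + self.prior * self.weight) / (n + self.weight)

tracer = ToyKnowledgeTracer()
# A student answers four square-root questions: right, right, wrong, right.
for correct in [True, True, False, True]:
    tracer.observe("alice", "square_roots", correct)

p = tracer.predict("alice", "square_roots")          # estimate after 4 answers
p_unseen = tracer.predict("alice", "graph_slopes")   # no data: falls back to the prior
```

For an unseen question type the estimate falls back to the neutral prior, which is roughly why the real system needs "a few dozen other questions" before its per-student predictions become useful.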

Piech envisions a more sophisticated version that not only predicts which questions a student is likely to get wrong, but also understands why. It would be nice, says Piech, "if we could all afford a really expensive tutor who could spend time thinking about what you should learn". That's not realistic for everyone, but this type of software could one day pinpoint where someone is struggling and help them improve.

Eventually, the system could become accurate enough to do away with exams altogether, he says. “Our intuition tells us if you pay enough attention to what a student did as they were learning, you wouldn’t need to have them sit down and do a test.”

“If you pay enough attention to the student as they learn, you don’t need to test them at the end”

The algorithm is a significant advance in the state of the art, says Tamara Sumner of the University of Colorado, Boulder. “What is particularly impressive is that this approach does not require significant human input to annotate training data or hand-craft models of expertise.”

Neil Heffernan, a computer scientist at Worcester Polytechnic Institute in Massachusetts, agrees that it’s important to develop better ways to predict students’ performance. But he wonders whether the new system is of any practical value: can it, for instance, tell us how to better teach students of different backgrounds or skill levels? “What does that mean, to be able to do a much better job at predicting stuff?” he asks. “I wish we could turn that into something that’s meaningful.”

(Image: Michael Gottschalk/Getty)

This article appeared in print under the headline “RoboTutor is a class act”