Imagine some software to teach basic math to grade-school kids. It’s a research project. Cutting edge stuff… in the year 1985.

It presents a math problem.

      87
    x 43
    ----

You put in the answer. The program checks your work. And here’s the clever part. If you miss it, the program analyzes your answer and helpfully tells you where you screwed up. Like if I type 32, Enter:

    Your answer is wrong. Possible causes of error:
    1. You multiplied the number in the multiplicand by the number
       directly beneath it in the multiplier, and you wrote down the
       carried number, ignoring the units number.

See, there’s your problem right there. Your answer is wrong. It’s the, you know, multiplicand…

Well. I imagine you’re thinking: This is an awful error message. Computers can’t talk to human beings like that. You need to tweak the wording, fix the tone, tighten up the explanation… But that’s because you and I, we’re software people. We have lots of experience, too much experience, with error messages. It turns out the error message is not the key thing that’s wrong here.

This story starts with a program. But it’s not about software or software people at all.
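Programs of this era typically diagnosed wrong answers by simulating a library of known "buggy" procedures and checking which one reproduces the student's answer. Here is a minimal sketch of that idea; the function names and the single bug rule are my own illustration, not the original program's code:

```python
def buggy_carry_product(a, b):
    """Simulate the bug from the error message above: multiply each
    column's digits and write down the carried (tens) digit, ignoring
    the units digit. Simplified to equal-length operands."""
    digits_a = [int(d) for d in str(a)]
    digits_b = [int(d) for d in str(b)]
    out = []
    for da, db in zip(digits_a, digits_b):
        out.append(str(da * db // 10))  # keep the carry, drop the units
    return int("".join(out))

# A "bug library": each entry maps a diagnosis to a buggy procedure.
BUG_LIBRARY = {
    "wrote the carried number, ignoring the units number":
        buggy_carry_product,
}

def diagnose(a, b, student_answer):
    """Match a wrong answer against known buggy procedures."""
    for description, procedure in BUG_LIBRARY.items():
        if procedure(a, b) == student_answer:
            return description
    return None
```

For 87 × 43, the buggy procedure yields 3 from 8 × 4 = 32 and 2 from 7 × 3 = 21, producing 32, so `diagnose(87, 43, 32)` matches the carry bug. That pattern match is all the "clever part" amounts to.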

The problem

The error message quoted above is real. This story is about a real person and what happened when he read that error message. His name is Mark Lepper. And he’s not a software person. He’s a social psychologist. An academic.

In the late 1980s and early ’90s, computer-based education was a big topic due to the PC, much as it is today due to rich Web apps. Lepper was interested. He saw a lot of these programs, and like us, he found them lacking. But he saw a different reason, a different underlying problem.

Lepper thought: the root problem here is that these programmers just don’t have the slightest idea what they’re doing. They clearly don’t understand the psychology of one-on-one teaching situations, particularly how a kid’s motivation can be undermined. I should have pity on them, send them a few papers on this.

So he did a literature search. And came up empty. A lot of research had been done on classroom teaching. But of one-on-one teaching, of tutoring, almost nothing was known.

The plan

Naturally, Lepper went to work building a team of researchers, and they ran studies and wrote papers on tutoring for the next decade. All the research had the same basic plan.

First, they would pick a topic in basic math. Say, fractions. They’d find kids in need of remedial tutoring in that topic and tutors with experience teaching it. They’d pre-test the kids for both math skills and motivation. The tutoring sessions would be videotaped. And they’d test the kids again afterwards. Sometimes they’d have the tutors watch the tape and provide a running commentary.

Note how this is set up. Because they measured the student’s skills and motivation before and after, they could objectively distinguish effective sessions from ineffective ones. Did the scores go up? Or stay flat? Then, they could watch the video and identify the strategies or techniques used in the good sessions.

The findings

So what did they find? Two things.

One, there is such a thing as a highly effective tutor. The best tutors were amazing.

    At their best, they were able to turn initially resistant, alienated,
    and seemingly helpless students into interested and excited
    participants in the learning process. At their best, they were able
    to help remedial students to progress through what would normally
    have been weeks or months of curriculum material in a very short
    time. Moreover, gains in students’ learning remained apparent
    following and outside of the tutoring situation, showing that these
    gains were not simply the result of […] scaffolding that tutors
    provided [directly].

Two. The best tutors had a surprisingly consistent approach to tutoring. In fact, Lepper and Woolverton list about twenty specific qualities and techniques of good tutors. We’ll just touch on a few of them here.

Math knowledge. The best tutors knew the subject matter surprisingly well, even for math tutors. They knew why things work the way they work. They knew details of math history. They had a deep pool of real-world analogies and examples to draw from. They knew in advance what the likely misunderstandings were.

Questions, not directions. Good tutors ask questions. They don’t tell the student what to do. The researchers counted what percentage of the remarks their best tutors made were questions, as opposed to statements or instructions. Eighty to ninety percent. This number is insane! Ninety percent questions! You can’t talk like that. Nobody talks like that. Except the very best tutors.

Hints, not answers. The best tutors never give away an answer. Instead, they give a hint, the tiniest hint they can think of, let you think it over, and if you’re still stuck, they’ll give you another hint. And they’re very patient. They want you to take the next step on your own. Good tutors may offer five or six hints in a row. The researchers wrote: “if we did not have clear outcome data establishing the great success of these same tutors, it would be easy to believe that such [a] strategy [was] dysfunctional.”

Productive vs. nonproductive errors. My favorite.

Good tutors would just ignore trivial errors. Quote: “Our less successful tutors, however, seemed unable to let any error pass, no matter how trivial or inconsequential.”

Furthermore: good tutors distinguished between errors that would lead to a wild goose chase and errors that would lead to a teaching moment. “[T]o these tutors, some student errors seemed ‘productive’ […] [They] would provide good occasions for students […] to discover their own mistakes[.] Such errors were therefore deliberately allowed by the tutors, so that they could then be systematically ‘debugged’.” Some tutors even intentionally chose problems that would reveal particular errors in the student’s understanding.

So. Devious. Like white-hat hackers, expert tutors are taking valuable supervillain skills and using them for good.

OK. There are some other things too, but first let’s review. We have an expert teacher who knows the material thoroughly,

asking lots of questions

and giving hints, instead of presenting answers from authority, and

leading the student into and out of the classic errors.

Does this sound familiar? Does it have a name? This is the Socratic method. Popularized by Plato around 380 BCE. This educational technology is over two thousand years old. It’s the state of the art. Computers can’t do this yet!

I think we’re fairly close to being able to do a little of it in a very shallow way. Doing this well requires a model of the student’s understanding of the material. You have to understand their misunderstandings and systematically debug them.

I like this picture of how we really learn. But this isn’t the whole picture. There’s a much more important lesson to learn here.
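Nothing in the research prescribes what such a student model would look like in software. As one very rough sketch (every name here is my own invention, not anything from Lepper's work), a program might track per-skill mastery estimates alongside diagnosed misconceptions, and use the weakest skill to decide what to probe next:

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    """A naive stand-in for 'a model of the student's understanding'."""
    mastery: dict = field(default_factory=dict)       # skill -> estimate in 0.0..1.0
    misconceptions: set = field(default_factory=set)  # names of diagnosed buggy procedures

    def update(self, skill, correct, bug=None):
        """Nudge the mastery estimate toward the latest result;
        record any diagnosed buggy procedure for later 'debugging'."""
        old = self.mastery.get(skill, 0.5)
        self.mastery[skill] = old + 0.1 * ((1.0 if correct else 0.0) - old)
        if bug:
            self.misconceptions.add(bug)

    def weakest_skill(self):
        """The skill a tutor-like program might probe next."""
        return min(self.mastery, key=self.mastery.get)
```

This is deliberately shallow: it captures *that* the student is struggling with carrying, not *why*, and the why is exactly what expert tutors are tracking.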

The deeper lesson

What else do good tutors do? They take a few minutes in each lesson to chat with the student. Not about math. Small talk.

They exhibit warmth and concern, and confidence that the student can succeed.

They present challenging problems, always problems that are just at the edge of what they think the student can do.

They are extremely careful with negative feedback. Some tutors never explicitly told a student that they got something wrong! Instead, they’d ask a question. “Did you remember to carry the one?” They’re also careful not to overdo it with positive feedback.

Now these traits are interesting because none of these are teaching techniques at all. They point at something else about learning. Tutoring, apparently, is about motivation.

It turns out your mind has to be in a very specific state in order for you to learn. Good tutors not only track the student’s understanding. They also track the student’s motivation. They know if a kid is bored, discouraged, frustrated, tired, distracted. And they act on that knowledge. Because if your mind is not engaged, you’re not learning.

Even the Socratic techniques are all about motivation! Let’s go back to them.

Why do good tutors allow students to make errors, even lead them into errors? Because once you gain a little confidence, detecting a bug in your own understanding is super motivating. You’re engaged. You want to fix it.

Why do good tutors give hints and not answers? Well, partly because struggling with a problem is how you learn. But just as importantly, giving the student the answer ends the exercise in failure. It tells the student: I give up on you. Whereas slowly guiding the student all the way through ends with them earning a win.

Why do good tutors ask questions? In a one-on-one situation, a question does something almost magical. It compels a response. It turns on the brain. A question makes us care.

So what?

And that’s what I know about tutoring. All of this comes from one chapter of one book: “The Wisdom of Practice: Lessons Learned from the Study of Highly Effective Tutors” by Mark Lepper and Maria Woolverton, in Improving Academic Achievement. It is packed with amazing discoveries, much more than I could write about here.

But… so what? Why should we, as programmers, care about all this?

This is a recording of a popular Web site that helps kids learn math. Millions of kids use this site. I recorded this in January 2014. I decided to pretend I was a kid who really didn’t get it, and see what it would do if I put in the wrong answer. Note the behavior when I click the button to submit the wrong answer.

Now… this is really no better or worse than the error message I showed you at the beginning, the one with the multiplicand. But to me, it’s really sad, because it’s so much less ambitious.

Think this over. Now that you know: what would an expert tutor do? What if it was you tutoring the kid in this movie? Is this a productive or nonproductive error? At what point would you intervene? What would you say? Is the computer doing it right? Does it even have the information it would need to do it right?

I don’t want to pick on this web site. They never claimed to be a substitute for a human tutor. They’ve put a lot of work in, and the site does some things quite well. My point is that educational software is not approaching a stable state where it’s good enough and we can work on other things. If we take human tutors as the gold standard, we have to admit that educational software just isn’t very good yet.

There is something exciting about this. We’re doing it wrong; and doing it right looks very hard given programming as we know it. New techniques may be needed. Excellent teaching technology seemed just around the corner in the 1960s. But I expect we will be laboriously catching up to Socrates for another hundred years.