All the quizzes were administered online, which made it relatively easy to manipulate the massing or spacing of questions (or, more accurately, it was easy after one of my colleagues put in many hours of prep work). Online administration also allowed us to run some students, taught side by side with their peers, through the entire course without spacing any of the questions. These students comprised a control group, taught exactly as in past semesters, that we could compare to our experimental group, in which some questions were spaced. The control and experimental groups were essentially identical in terms of demographics, average high school GPA, and average Math ACT performance.

We examined performance on the final exam in the precalculus course, on which students had to perform all the important operations learned throughout the semester. Because we had manipulated spacing within participants (the experimental group practiced some material with spacing, and other material without) and also between participants (the experimental group got some spacing, and the control group got none), we were able to look at the data from two perspectives.

Looking at the experimental group alone, we compared performance within participants on exam questions targeting spaced versus massed operations. We expected students to do better on questions targeting spaced operations than on those targeting massed ones, and that's exactly what we found. On average, students did about 3% better on spaced operations than on massed ones. That may not sound like an enormous effect, but it translates into about a third of a letter grade, enough to change a student's grade from, say, a C+ to a B-.

Looking across the two conditions, we were able to compare performance on the same operations when they were spaced in the experimental group versus massed in the control group. We expected better performance on operations that were spaced in the experimental group than on the same operations massed in the control group, and, again, that's what we found. In this analysis, the benefit associated with spacing was about 8%, suggesting that students taught in a traditional, all-massed format may be underperforming, by nearly a whole letter grade, what they could achieve with the help of spacing.

An especially nice feature of our study is that we were able to follow students from the precalculus course who advanced into a calculus course the next semester. If spacing really helped students retain their hard-won precalculus knowledge, then it should have set them up to do better in calculus. Note that we didn't manipulate anything in the calculus course; it was taught as it had been in past semesters, with no meddling by us. We examined performance on the cumulative final exam as a function of whether students had been in the experimental group or the control group in precalculus. The result was striking: Students who had been in the experimental group performed, on average, 10% better than students who had been in the control group. Moreover, 68% of students who had been in the experimental group earned at least a C in calculus, versus only 48% of students who had been in the control group. (Note that this difference, although practically important, did not reach statistical significance, most likely because we were limited to the 54 students from the precalculus class who both advanced into and completed the calculus class.)
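For readers curious why a difference as large as 68% versus 48% can fail to reach significance with only 54 students, here is a minimal sketch of a two-proportion z-test in Python. The group sizes are not reported above, so the counts below are hypothetical (27 students per group, chosen for illustration); the point is only that with samples this small, the test cannot rule out chance.

```python
import math

# Hypothetical counts: the study reports 68% vs. 48% earning at least a C,
# but not the group sizes; assume 27 students per group for illustration.
exp_success, exp_n = 18, 27   # ~67% of the experimental group
ctl_success, ctl_n = 13, 27   # ~48% of the control group

def two_proportion_z_test(s1, n1, s2, n2):
    """Two-sided two-proportion z-test using the pooled normal approximation."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(exp_success, exp_n, ctl_success, ctl_n)
print(f"z = {z:.2f}, p = {p:.3f}")  # p comes out well above 0.05
```

With these assumed numbers the p-value lands around 0.17: a 20-percentage-point gap that would easily reach significance in a large sample remains statistically inconclusive at this size, which is exactly the limitation described above.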

So, what’s the solution to the Mystery of the Failed Engineering Student? One answer might be that collegiate instruction typically doesn’t involve enough spaced retrieval practice. Our study showed that spacing can increase retention of a complex body of mathematical knowledge and thereby improve performance in genuine college classrooms. The notion of a simple, no-cost intervention that could help more students earn an engineering degree is, as Sam Spade says in The Maltese Falcon, the stuff that dreams are made of.