We’ll be honest, though: across many of these interventions, the empirical evidence of effectiveness was underwhelming. That’s not to say the interventions were necessarily useless. Rather, it was rare for even the best ideas to have been examined more than once, so there was little opportunity for converging evidence to accumulate. And even when they had been, a large share of the evidence came from questionnaires or focus groups of students, rather than from directly observing or measuring their behavior. As a result, the literature tells us a great deal about what learners think helps them to engage with feedback, but relatively little about what objectively does help.

Taking a step back from specific interventions and the evidence of their effectiveness, we began to think about a more fundamental question: Why should any intervention work? In other words, which learning skills should our interventions enhance, and how should this enhancement in turn affect students’ engagement with feedback? We examined each intervention to find an explicit or implicit rationale for its use. We then gathered these rationales together, and looked for common themes. Four such themes emerged, which we have called the SAGE processes:

Self-appraisal

Assessment literacy

Goal-setting and self-regulation

Engagement and motivation

Each of these SAGE processes represents a broad set of (meta)cognitive skills underlying engagement with feedback that researchers have tried to target in their interventions. Our review helps to map how practitioner-researchers have tried to tackle each of them. By extracting these conceptual themes, we can also begin to think about our interventions in more strategic ways. No single intervention is likely to cover all of these bases: potentially a ‘package’ of interventions is what’s needed. A teacher might, for example, use peer-assessment exercises as a means to promote students’ assessment literacy, but might also need something else if part of the problem relates to students’ goal-setting and self-regulation skills. An effective solution to the problem of weak engagement with feedback may be one that, by combining several interventions, maps strongly onto all of the prerequisite skills.

In September, the UK’s Higher Education Academy published our Developing Engagement with Feedback Toolkit (DEFT) (2). The DEFT is one such package of interventions (though by no means a perfect, comprehensive, or empirically validated one), which we developed in collaboration with students, drawing inspiration from the studies we encountered as we conducted our systematic review. The teaching resources in the DEFT are designed to tackle some of the key barriers that our students tell us prevent them from using feedback. It includes a feedback guide that our students played a major role in producing; discussion topics and activities for inclusion within a “Using Feedback Effectively” workshop; and resources for compiling a feedback portfolio. All of the resources can be downloaded in an editable format, so individual teachers can adapt whichever parts they see as potentially beneficial to their own students.

As we continue our search for effective interventions, an unexpected side-effect has become apparent: we are finding ourselves increasingly aware of our own aversive and defensive reactions when receiving critical feedback. Somehow this awareness feels valuable – by reflecting on the barriers that we put up in these situations, we might be better equipped to intervene when our students put up the same barriers (and indeed, better equipped to change our own behavior). Certainly there’s more we could do to model for students the fact that defensive reactions to feedback are human rather than a sign of inadequacy and to provide springboards for discussing helpful and unhelpful responses. So maybe next time you receive infuriating feedback, you could try thinking of it as a “teachable moment.”