A teacher’s guide to retrieval practice: Feedback and elaboration

Written by: Kristian Still

Continuing his series on the potential of retrieval practice, spaced learning, successive relearning, and metacognitive approaches in the classroom, Kristian Still turns to the importance of feedback, elaboration, and the use of hints in making retrieval practice a success

In this series, I am attempting to elaborate and share what the recipe of test-enhanced learning (more commonly known as retrieval practice), spaced learning, interleaving, feedback, metacognition, and motivation might look like in and out of the classroom.

I am reviewing the research and cognitive science behind these concepts and the modulators underpinning the effective retention of knowledge.

In writing this series, nine clear but interlinked elements emerged, and I am considering these elements across nine distinct but related articles.

I would urge readers to also listen to a recent episode of the SecEd Podcast (SecEd, 2022) looking at retrieval practice, spaced learning, and interleaving and featuring a practical discussion between myself, teacher Helen Webb, and Dr Tom Perry, who led the Education Endowment Foundation’s Cognitive Science Approaches in the Classroom review (Perry et al, 2021).

This series, in reviewing the evidence-base, seeks to help you reflect on what will work for you, your classroom, and your pupils. This is article five and it focuses on feedback, the power of hints, and the art of elaboration.


Feedback. A goliath of educational research. It is a critical component of any learning process because it allows learners to reduce the discrepancy between actual and desired knowledge (Black & Wiliam, 1998). Of course, it is not only learners who benefit: testing insights are equally valuable to teachers.

The fact of the matter is that we cannot discuss the pedagogy of test-enhanced learning (repeated retrieval practice, spacing or interleaving) without stopping off to focus on feedback and the potential gains of elaboration.

A broad view of feedback

Generally, feedback is reported to make learning more transparent, addressing knowledge gaps (Son & Kornell, 2008) and enhancing the experience of competence.

A meta-analysis (Wisniewski et al, 2020) of 435 studies revealed a huge variability in the effectiveness of feedback, but also showed a medium positive effect size (0.48). The authors concluded that feedback was “a complex and differentiated construct that includes many different forms with, at times, quite different effects on student learning”.

However, feedback can be detrimental to learning. Kluger and DeNisi’s (1996) review showed that in 38% of the well-designed studies they analysed, feedback actually made things worse – that is to say the learners would have done better if they had received no feedback at all. The study, which is worth reading, suggests that feedback effectiveness decreases as attention moves “closer to the self and away from the task”.

And take possibly two of the most common forms of feedback: grades and rankings. Neither tells pupils how to improve, only that improvement may be required. They are as likely to disappoint as to encourage.

And we cannot mention feedback without mentioning the research summaries within the Education Endowment Foundation’s Teaching and Learning Toolkit, which continues to rank feedback as second only to metacognition and self-regulation when it comes to boosting learning progress (EEF, 2022).

It states: “Feedback redirects or refocuses the learner’s actions to achieve a goal, by aligning effort and activity with an outcome. It can be about the output or outcome of the task, the process of the task, the student’s management of their learning or self-regulation, or about them as individuals (which tends to be the least effective). This feedback can be verbal or written, or can be given through tests or via digital technology. It can come from a teacher or someone taking a teaching role, or from peers.”

Three further points to consider

“An extensive amount of research has shown that taking a memory test on some learning material can improve long-term retention relative to repeatedly studying the material, a phenomenon known as the testing effect.”
Racsmány et al, 2020

The first point to address is an issue of metacognitive belief and control. Most pupils (90%) use retrieval practice “posthumously” to assess their knowledge at the end of a phase of study – what has been referred to as post-testing. Meanwhile, 64% report not revisiting material once they feel they have mastered it, and only 36% restudy or test themselves later on that information (Hartwig & Dunlosky, 2012; Kornell & Bjork, 2007).

Pupils often ignore the benefits of retrieval practice for learning despite the powerful learning and metacognitive benefits on offer (Rivers, 2021). Hence feedback from retrieval practice often does not even get a look in before it is too late (i.e. the end of the unit or course).

However, Hui et al (2021) report that giving individual feedback about retrieval practice performance leads to enhanced use of retrieval practice in the long term.

In their study, after receiving feedback on actual learning outcomes following retrieval practice and restudy, “students who had experienced the testing effect chose retrieval practice more often than those who had not benefited from retrieval practice”.

It would appear that a spoonful of feedback helps the cough medicine (of retrieval practice) to go down. We will come back to metacognition in article seven.

The second point to address is the importance of motivation. Changing students’ beliefs is not easy, even in light of the evidence – but success breeds motivation. We will come back to motivation in article nine.

And the third point is that where there is feedback, there is emotion. Feedback, metacognitive beliefs, monitoring and control, and motivation are possibly inseparable.

Feedback and retrieval practice

“A complex and differentiated construct” (Wisniewski et al, 2020), feedback is no less complex, no less differentiated when referenced in the context of test-enhanced learning.

Providing feedback after an attempt to retrieve information from memory is critical because it helps to correct memory errors (Pashler et al, 2005) and maintain correct responses (Butler et al, 2008) and, in the case of multiple-choice tests, protects pupils from “learning” the incorrect answers they have selected.

There are many moderators to consider for how (and when) to use feedback during retrieval practice, not least:

  • The distinction between supervised and unsupervised conditions, by which I mean retrieval practice for encoding, learning, and relearning in the classroom, and retrieval practice as preparation for learning or for relearning independently (homework, revision). Is feedback required during the encoding or initial learning phase? Or is it for the retrieval phase?
  • What type of feedback is required for different retrieval practice formats?
  • Should hints be available (see later)?
  • When should feedback be offered? Immediately? After a delay? At the question level? Or summatively?
  • Should that feedback be with reference to previous performance?
  • What learning metrics should be considered feedback? How does feedback work for covert retrieval (which if you remember refers to when pupils answer retrieval questions in their heads)?
  • Offering feedback has a time cost. Is offering feedback efficient?

Most studies report successful testing effects independently of whether feedback is provided or not. Rowland (2014) found that corrective feedback doubles the benefits of testing, but Adesope et al (2017) observed that corrective feedback does not add any additional value. These inconsistent findings have made it “problematic theoretically as well as confusing” (Yang et al, 2021) for teachers to know when, if at all, to use feedback with retrieval practice.

With a substantially larger dataset (48,478 students’ data extracted from 222 independent studies with 573 effects), strictly focusing on classroom research, Yang et al’s (2021) conclusions were more elucidatory.

Consistent with Rowland (2014), Yang et al (2021) report that offering corrective feedback following class quizzes significantly increases learning gains as compared to not providing feedback at all, including inducing greater re-exposure and larger learning gains. No significant difference in effectiveness was detected between recall and recognition tests, regardless of whether feedback was provided.

So, how do we use feedback?

Arguably, the critical mechanism in learning from testing is successful retrieval (even though unsuccessful retrieval has been shown to aid learning too). Hence, if pupils do not retrieve the correct response and have no recourse to learn it, then the benefits of testing are sometimes limited or absent altogether.

Therefore, providing feedback after a retrieval attempt, regardless of whether the attempt was successful or not, helps to ensure that retrieval will be successful in the future. Or to put it another way: feedback helps make up for the lower level of initial performance.

What you might be surprised to learn is that delaying feedback often produces better retention over time (Mullet et al, 2014). This led to Mullet et al’s witty conclusion: “When you take a few extra days to grade student work, feel free to tell them it is for their own good.”

Now, I am one of those teachers who would say that feedback is like sushi – best when it is fresh. However, the point being made here is that delayed feedback acts as another form of spaced learning.

Having said this, it is important to note that if full processing of the delayed feedback cannot be guaranteed, then giving immediate feedback is probably the better choice.

Equally, it appears that correct-answer feedback is critical if graded quizzes are unavailable to pupils when they restudy (McDaniel et al, 2021).

The amount of time processing feedback should be given careful consideration, too. The length of time learners spend studying the correct answer does matter.

Vaughn et al (2017) report: “Where learners fail to retrieve the answer after a few seconds, they should terminate their search and spend more time processing the correct answer.”

And in their meta-analysis of 10 learning techniques, Donoghue and Hattie (2021) confirm the major findings from Dunlosky et al (2013) but stress the important role of feedback: “It is not the frequency of testing that matters, but the skill in using practice testing to learn and consolidate knowledge and ideas.”

In other words: “Spinning your wheels while trying to retrieve does about as much good for learning as spinning your wheels when your car is stuck in the mud. However, spending a few extra seconds processing the correct answer seems to be a worthwhile time investment.” (Vaughn et al, 2016)

Or as Helen Webb – one of the guests on the SecEd Podcast episode accompanying this series – told me: “No need to wait for everyone to finish a quiz or activity before answers are shared. Once the majority of the class has completed the test (or has simply lost focus as they can't do the next bit), review answers to keep the pace of lesson.”

The power of hints as a form of feedback

“Saying you should test yourself without making it fun is like saying you should eat your spinach without making it taste good.”
Vaughn and Kornell, 2019

If one is to increase test-taking, how do we make taking tests more desirable? How do we create a situation where our pupils prefer tests to restudy? Perhaps, when you don’t know the answer, a hint might help?

The route explored by Vaughn and Kornell (2019) was to allow participants to decide the difficulty of the test trials.

In experiment one, participants could choose retrieval with no hint (idea:______), a two-letter hint (idea: s____r), a four-letter hint (idea: se__er), or a pure study trial (idea: seeker). In this experiment, participants chose to test themselves (retrieval) in the majority of the trials (the most popular was the four-letter hint model – 54%).

In experiment two, the hint options were removed and, lo and behold, participants chose to restudy rather than test.

However, you guessed it – all the retrieval trial models yielded significantly better recall performance compared to restudy. So, hints encourage people to test themselves – a good first step.

Experiment three, meanwhile, demonstrated that hints do not decrease learning outcomes – but only if they are not too easy. When the target is too easy, the hints make testing less effective. For example, without knowing anything about the area of study, you know the answer to: king-q__en. Such a hint could potentially impair learning, it seems.

When offered two options, either restudy or taking a test, Vaughn and Kornell’s participants chose the less effective restudying on the majority of trials. But when allowed to request hints during test trials, they preferred testing over restudy by a sizeable margin.

So hints encourage self-testing, making self-regulated, self-testing study more enjoyable and effective.

But as we know, desirable difficulties are not always desirable to the pupils because they typically, but incorrectly, assume that poor short-term performance is equivalent to poor learning (Bjork et al, 2013; Soderstrom & Bjork, 2015).

So, hints are probably especially important when pupils would otherwise fail to answer most of the test questions without them.

Hints may be more effective when encoding/learning new material than when relearning, or when the material is difficult (not necessarily complex) or procedural: “Hints to make the spinach taste good.”

And remember: agency is powerful. It is also fickle. Changing a pupil’s beliefs about the benefits of testing does not always change how they choose to study. Pupils often think testing is good for them. However, Vaughn and Kornell (2019) hypothesise that the reason they often choose restudy instead of testing is not because they think testing is bad, but because they are trying to avoid failure. So lower the failure rate. Get the hint?

Retrieval difficulty-success-feedback

One final point on difficulty, successful retrieval and feedback: there are competing forces at play. The conundrum is how to create conditions that increase initial retrieval success without short-circuiting the benefits of initial retrieval effort.

There is a common thread that “challenging and effortful” learning impedes initial performance but enhances long-term learning and ultimately pupil outcomes (Bjork, 1994). Manipulations that increase initial learning/encoding difficulty enhance delayed memory performance and this is collectively referred to as “desirable difficulties” (Bjork & Bjork, 2011). This is directly relevant to retrieval practice.

More retrieval effort is helpful, but if the test is so hard that retrieval attempts are often unsuccessful, such increased effort is less beneficial for later memory (at least in the absence of correct-answer feedback).

Having said this, easier retrieval is not more effective. Retrieval practice remains more effective when the learning conditions promote increased retrieval difficulty or effortful retrieval.

However, we then add the human factor: pupils (and teachers) are prone to actively seek strategies that safeguard retrieval success – or at least to avoid strategies that might stimulate retrieval failure.

Know that these efforts are often misguided. Difficult retrieval is a very effective way to learn and study.

One caveat: these retrieval practice recommendations are insensitive to the time costs of feedback, as they often are insensitive to difficulty and an individual student’s prior knowledge. What if there was no need for feedback? We will come back to estimates of optimal difficulty when we consider personalisation (article nine).

The danger that consistent retrieval failure will undermine pupils’ motivation is real, however, and should not be taken lightly. Nonetheless, pupils should learn to accept struggle as part of their learning.

Instead of worrying about retrieval success, pupils and teachers should embrace errors as a path to knowledge (and feedback will at least get pupils to the correct answer).

So – over-practise, employ self-assessment, learn and relearn, and success will feed motivation. And when knowledge surfaces in lessons, notice it, harvest and celebrate it – claim it to be the fruits of retrieval’s labour.

And remember – giving feedback on the initial test boosts its effects on later memorability, overcoming issues with poor immediate performance. And this is not forgetting the metacognitive benefits of feedback: reducing the discrepancy between perceived learning and actual performance, correcting previous errors, relearning correct answers, informing restudy activity and subsequent studying (McDaniel & Little, 2019).

Potentiation or pre-testing effects

One potential way of offering effective feedback and supporting metacognition is via the use of pre-testing: “Studies have shown that pre-questions – asking students questions before they learn something – benefit memory retention.” (Carpenter et al, 2018)

And, as peculiar as it sounds, attempting to retrieve a memory enhances subsequent learning even if the attempt is unsuccessful (Kornell et al, 2015).

What we do know is that “extra time processing the answer after a retrieval attempt is more beneficial than spending more time in retrieval mode” (Vaughn et al, 2016). We also know that feedback can often puncture the general overconfidence of pupils. So pre-tests feel like a no-brainer…

What might pre-testing look like in classrooms? Ahead of teaching the content, you might quiz pupils on the timeline of events, introduce key vocabulary, name faces, or categorise materials – not only priming or potentiating learning, but activating prior knowledge too.

The art of elaborative interrogation

What is elaborative interrogation and how does feedback relate to it? Corrective feedback indicates the right answer, whereas elaborate feedback offers an explanation as to why the answer was correct.

Then we have elaborative interrogation, a broad concept among cognitive psychologists. However, most definitions in an education context tell us that “elaborative interrogation” is essentially prompting learners to offer an explanation for an explicitly stated fact (their answer) – in other words, teachers asking “how” and “why” questions.

And further elaborating, asking (and attempting to answer) “how” and “why” questions on the correct answer in the feedback, can be important for later application of that knowledge (Butler et al, 2013).

This involves supporting the integration of new information with prior knowledge, explicitly or implicitly inviting learners to process both the similarities and differences between related concepts, encouraging the organisation of knowledge, or the connecting and integrating of knowledge with new ideas.

Enders et al (2021) showed that “elaborate feedback provides an additional and effective learning gain beyond the potential benefits that testing knowledge and corrective feedback produces”, with students profiting more from elaborate feedback on incorrect answers than correct answers.

As a practical recommendation, Enders et al (2021) suggest that self-administered formative tests with closed question formats should at least provide explanations for why students’ answers are incorrect.

In their review of effective learning techniques, Dunlosky et al (2013) rated elaborative interrogation practice as having moderate utility.

The purpose of elaborative interrogation is primarily to improve our memory of the new information. An additional benefit is that it helps us to organise our knowledge in a more coherent way.

One further step may be to construct meaning from the subject matter by explaining, elaborating, making inferences, and connecting new facts and ideas to prior knowledge – the food and drink of classroom practice.

In one of the largest and most comprehensive meta-analyses of undergraduate STEM education, Freeman et al (2014) reported average examination scores improved by 6% in "active learning sections" (elaboration) and failure rates reduced from 34% to 22% where elaboration was established practice.

Giving pupils the opportunity to elaborate encourages them to infer missing information, to synthesise the presented information and process their thinking, knowing that “those learners encouraged to self-explain outperformed those who read and reread material” (Griffin et al, 2008).

Final thought

The provision of feedback is far from simple. Teachers may want to think about retrieval practice thus: across two phases – initial learning and relearning – without feedback, with corrective feedback, and with delayed corrective feedback, while managing the effort required, the difficulty, and the timing of the feedback. Easy.


  • Feedback punctures general overconfidence and this links with illusions of competence (see article eight).
  • Feedback timing is an interesting area of research – delayed feedback has its advantages.
  • Pre-testing is a no-brainer: priming or potentiating learning and activating prior knowledge too.
  • Hints make testing more attractive to pupils.
  • Retrieval practice is a fertile opportunity for elaborative interrogation – asking “how” and “why” questions.
  • At the very least, provide explanations as to why answers are incorrect.
  • The length of time learners spend studying the correct answer does matter. Spending a few extra seconds processing the correct answer seems to be a worthwhile exercise.
  • And if you only have time to read one paper on this topic: Correcting a metacognitive error: Feedback increases retention of low-confidence correct responses (Butler et al, 2008): https://bit.ly/3AVshyS
  • And if you’re lucky enough to have the time to read two papers, then consider: Does individual performance feedback increase the use of retrieval practice? (Hui et al, 2021): https://bit.ly/3uDpE3N
Kristian Still is deputy head academic at Boundary Oak School in Fareham. A school leader by day, together with his co-creator Alex Warren, a full-time senior software developer, he is also working with Leeds University and Dr Richard Allen on RememberMore, a project offering resources to teachers and pupils to support personalised spaced retrieval practice. Read his previous articles for SecEd via https://bit.ly/seced-kristianstill

References: For all research references relating to this article, go to https://bit.ly/3w3xIdb

Acknowledgement: This article would not have been possible without the author’s on-going conversations with Andy Samms. Not only has Andy helped develop and advocated for RememberMore, he has offered honest, forthright feedback. Andy's feedback rings in my ears: “In this busy world of education, for teachers, simplicity and speed remains key."

ResearchED: Kristian will be speaking at the first ever ResearchED Berkshire taking place at Desborough College in Maidenhead on May 7. Visit https://researched.org.uk/event/researched-berkshire/

RememberMore: RememberMore delivers a free, personalised, and adaptive, spaced retrieval practice with feedback. For details, visit www.remembermore.app or try the app and resources via https://classroom.remembermore.app/
