A teacher’s guide to retrieval practice: Overcoming illusions of competence

Written by: Kristian Still

Continuing his series on the potential of retrieval practice, spaced learning, metacognition, and successive relearning in the classroom, Kristian Still considers how students often mis-assess and mismanage their own learning due to their cognitive biases and beliefs. So – how do we overcome these ‘illusions of competence’?


In this series, I am attempting to elaborate and share what the recipe of test-enhanced learning (more commonly known as retrieval practice), spaced learning, interleaving, feedback, metacognition, and motivation might look like in and out of the classroom.

I am reviewing the research and cognitive science behind these concepts and the modulators underpinning the effective retention of knowledge.

In writing this series, nine clear but interlinked elements emerged, and I am considering these elements across nine distinct but related articles.

I would urge readers to also listen to a recent episode of the SecEd Podcast (SecEd, 2022) looking at retrieval practice, spaced learning, and interleaving and featuring a practical discussion between myself, teacher Helen Webb, and Dr Tom Perry, who led the Education Endowment Foundation’s Cognitive Science Approaches in the Classroom review (Perry et al, 2021).

This series, in reviewing the evidence-base, seeks to help you reflect on what will work for you, your classroom, and your pupils. This is article eight and it focuses on how we can help students to overcome their illusions of competence.


It’s all an illusion? Or is it?

“To be effective in assessing one’s own learning requires being aware that we are subject to both hindsight and foresight biases in judging whether we will be able to produce to-be-learned information at some later time.”
(Bjork et al, 2013).


Managing one’s on-going learning effectively requires accurate monitoring of the degree to which learning has been achieved, coupled with appropriate selection and control of one’s learning activities/techniques in response to that monitoring.

Assessing whether learning has been achieved is difficult because, as we have said repeatedly in the series, conditions that enhance performance during initial learning can fail to support long-term retention and transfer, whereas conditions that appear to create difficulties and slow the acquisition process can enhance long-term retention and transfer.

We have also already discussed how learners tend to test themselves only under conditions that encourage retrieval success, again conflating short-term performance with long-term learning – when, in fact, there is “overwhelming evidence that learning and performance are dissociable” (Soderstrom & Bjork, 2015).

Research on learning, memory and metacognitive processes has clearly demonstrated that most learners are misinformed about how to learn because of various cognitive biases or beliefs – real learning is counter-intuitive and feels counter-productive.

These misconceptions, faulty beliefs, and inaccurate monitoring – what Koriat and Bjork (2005) refer to as “illusions of competence” – often leave learners “misassessing and mismanaging their own learning” (Bjork et al, 2013) and lead to ineffective or sub-optimal learning, relearning, and revision strategies.

Most commonly, learners fail to recognise the mnemonic benefits that testing provides as a learning strategy, almost always rating tested items as less memorable than restudied items (Agarwal et al, 2008; Tullis et al, 2013) – even when provided with corrective feedback after each practice test (Karpicke, 2009) – overestimating their remembering and underestimating their forgetting (Kornell & Bjork, 2009).

Spaced and interleaved practice fares little better: the misinterpreted-effort hypothesis (Kirk-Johnson et al, 2019) posits that learners’ perceptions of greater mental effort lead them to feel that they are actually learning less – particularly concurrently (while learning) and partially retrospectively (on reflection after learning). Even students who practised repeated retrieval consistently predicted lower performance than students who repeatedly studied or engaged in other activities (Karpicke & Blunt, 2011; Roediger & Karpicke, 2006).

As Kirk-Johnson et al (2019) concluded: “The more learners perceived a study strategy as mentally effortful, the less they judged it to be effective for learning.” Not only did perceptions predict whether or not learners chose to employ retrieval practice (test-enhanced learning), that choice in turn predicted performance on a test of retention.

In summary, although testing, spacing, and interleaving are highly robust learning strategies that promote long-term retention and more durable, accessible knowledge, without accurate metacognition their practical usefulness in a classroom setting is undermined and, in a self-regulated learning context, limited.

Yet we also know from Professor Katherine Rawson’s research on successive relearning (see article six) how effective and efficient unsupervised relearning can be.

Remember, as learners travel through their education careers, more and more learning takes place without direct supervision. As such, a failure to understand and monitor one’s own learning process and adopt some of the most powerful learning strategies – namely spacing, retrieval practice and interleaving – is shortsighted. So what can we do?


Easy as KBCP?

McDaniel and Einstein (2020) argue that effective strategy training to promote effective self-regulated learning involves four components. Their KBCP framework proposes:

  • Acquiring knowledge about strategies.
  • Belief that the strategy works.
  • Commitment to using the strategy.
  • Planning of strategy implementation.

Knowledge

McDaniel et al (2021) emphasise three knowledge components:

  • Knowledge about the strategy (what the strategy is).
  • Evidence the strategy is effective.
  • Knowledge about how the strategy is implemented (how to apply it).

The conclusions are very clear for teachers: if you think that “imparting knowledge of specific learning strategies is sufficient” then think again. We have to do more than just tell our students – we have to show them.

Belief

This is about a student’s belief that a certain strategy “works for me”. A key challenge here is to overcome students’ “eagerness to believe that one is unique as a learner” (Yan et al, 2016).

Hence gaining first-hand experience is advocated. Direct experience has a powerful influence on a student’s belief in a strategy’s effectiveness: “Experiencing benefits of retrieval practice led students to endorse its use in the remainder of the course.” (Einstein et al, 2012)

Commitment

Commitment is considered in association with increasing the perceived “utility value” of a task (Harackiewicz et al, 2016) by encouraging students to reflect on the inherently positive outcomes associated with effective strategy use. This in turn increases interest, motivation, and persistence in activities related to that task (Hulleman et al, 2017) – fostering a commitment to action and studying with these more difficult learning strategies for an extended period of time.

Planning

Knowing that commitment is not sufficient for effective “follow-through” of intentions, McDaniel et al (2021) recommend supporting the formulation of an action plan for implementing effective strategies.

Biwer et al (2020) investigated the impact of an extensive three-part “Study Smart” intervention on undergraduate students’ metacognitive knowledge and use of learning strategies across 12 weeks. The study took place over three years (2018-2020) involving approximately 1,500 students and 50 teachers in five different faculties at a Dutch university. The intervention involved:

  • Instruction about when and why particular learning strategies are effective.
  • Reflection and discussion on strategy-use, motivation and goal-setting.
  • Gaining experience with an ineffective strategy (highlighting) versus an effective strategy (practice testing).

Compared to a control group, those randomly assigned to receive the intervention gained more accurate declarative and conditional knowledge, rated practice testing as more effective and rereading as less effective, and reported an increased use of quizzing and practice testing after experiencing the intervention.

Did this translate to student outcomes? Neither McDaniel et al (2021) nor Biwer et al (2020) directly reported an answer to this question.


Let them fail first

I could not ignore an anecdote shared by Professor John Dunlosky, who applies a rather interesting “experiencing” approach – what I have coined the “let them fail first, before giving them the strategies to succeed” approach.

Prof Dunlosky explains that many students he encounters at Kent State University believe they know how to study. Rather than try to convince them that there is a more effective or efficient approach, he gives them an early test.

Why? Well, he figures that if a student does poorly on an exam, then “that is pretty good evidence that you did not prepare properly for it”, as he told the Tes Podcast last year. Only after the test does he present his pitch for successive relearning (see article six).

He said: “The students who don’t do as well as they should might have a higher likelihood of embracing the new strategies I will be teaching them.” (TES, 2021)


Personalisation

“The data on personalisation was tantalising, but very limited.”
(Dr Tom Perry, SecEd Podcast, 2022)


Confucius put forward the idea of “teaching students according to their aptitude”.

And David Ausubel wrote that the “most influential factor for learning new things is what the learner already knows”.

Making students aware of effective learning strategies and desirable difficulties, stimulating reflection on the value of these strategies and on motivation, and supporting them to experience the gap between “actual learning” and “experienced learning” are all ways to motivate students to adopt effective learning strategies (we come back to “motivation” in article nine). However, these are still largely one-size-fits-all approaches.

Furthermore, “successive relearning places greater demands on students for effective time-management, organisation, and planning of practice sessions” (Rawson et al, 2020) and too many students resort to ineffective study strategies (e.g., cramming the night before the test) to compensate for their lack of proper time-management (McIntyre & Munson, 2008; Seo, 2012; Taraban et al, 1999).

Another approach is to personalise the learning for students by promoting tailored spaced and/or interleaved retrieval and balancing the spacing effect against the effects of recency, frequency, and fluency – the trio of learning villains.

This requires teachers or indeed students to balance two seemingly opposing goals: (a) maximising the time between repetitions of an item to get the biggest spacing effect and (b) minimising the time between repetitions of an item to make sure it can still be retrieved from declarative memory.
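This trade-off can be made concrete with a small scheduling sketch. It is illustrative only: it assumes an ACT-R-style activation model of the kind underpinning the adaptive systems discussed in this article (van Rijn et al, 2009; Sense et al, 2016), but the function names, decay rate, and retrieval threshold here are hypothetical choices of my own, not values from those papers.

```python
import math

def activation(ages, decay=0.5):
    """ACT-R-style activation: the log of summed decaying memory traces.
    `ages` are the times (arbitrary units) since each past review of an item."""
    return math.log(sum(t ** -decay for t in ages))

def next_review_delay(review_times, now, threshold=-0.8, decay=0.5,
                      step=1.0, max_delay=10_000.0):
    """Search forward for the longest delay before predicted activation
    drops below `threshold` - i.e. schedule the next review at the latest
    moment the item is still expected to be retrievable, maximising spacing
    without losing access to the item."""
    delay = step
    while delay < max_delay:
        ages = [now + delay - t for t in review_times]
        if activation(ages, decay) < threshold:
            return max(step, delay - step)  # last delay still above threshold
        delay += step
    return max_delay
```

Note how each additional review pushes the next one further out: an item reviewed twice can safely wait much longer than an item reviewed once, which is exactly the widening-interval behaviour personalised schedulers exploit.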

As discussed and cited in Sense et al (2016), such personalised practice models have been “developed and implemented with great success” (Lindsey et al, 2009).

Meanwhile, Grimaldi and Karpicke (2014) conclude that “combining the powerful learning produced by retrieval practice with sophisticated scoring algorithms could prove to be a particularly potent way to enhance student learning”.

Van Rijn et al (2009) show that “adapting the sequence (of the presentation of material to be learned) to the characteristics of individual learners improves learning gains considerably, even if the learning session takes only 15 minutes”.

The overall impression was that personalisation generally leads to “less frustration during study sessions”, “improved student and teacher satisfaction”, and “higher test performance and improved long-term knowledge retention” (van Rijn et al, 2009).

Of course, to achieve this level of personalisation, we must rely on technology. Lindsey et al (2014) developed a method using a personalised, systematic, spaced web-based flashcard review system. In the study, 179 students learnt vocabulary and short sentences in English and were required to type the Spanish translation, after which corrective feedback was provided. This took place across three 20 to 30-minute sessions during class time.

The first two sessions began with a “study-to-proficiency” phase for the current chapter and then proceeded to a review phase. During the third session, these activities were preceded by a quiz on the current chapter, which counted toward the course grade.

During the review phase, study items from all chapters could potentially come up. Three schedules were used: massed practice, which focused students’ efforts only on the current chapter; generic spaced practice, which focused on one previous chapter; and personalised spaced practice, which used statistical techniques and a psychological theory of memory to predict which specific material each student would most benefit from reviewing.

In a cumulative exam administered after the end of the semester, the personalised review yielded a 16.5% boost in retention over massed study and a 10% improvement over a one-size-fits-all strategy for spaced study. This is notable considering the experiment took place across three 20 to 30-minute sessions. As the authors said: “We find it remarkable that the review manipulation had as large an effect as it did, considering that the duration of roughly 30 minutes a week was only about 10% of the time students were engaged with the course.”

Mozer and Lindsey (2016) added: “Our experiments go beyond showing that spaced practice is superior to massed practice: taken together (the experiments) provide strong evidence that personalisation of review is superior to other forms of spaced practice.”

The researchers’ conclusion: “Our experiment shows that a one-size-fits-all variety of review is significantly less effective than personalised review. The traditional means of encouraging systematic review in classroom settings – cumulative exams and assignments – is therefore unlikely to be ideal.” (Lindsey et al, 2014).

They add: “Any form of personalisation requires estimates of an individual’s memory strength for specific knowledge.

“Educational failure at all levels often involves knowledge and skills that were once mastered but cease to be accessible due to lack of appropriately timed rehearsal.”

“While it is common to pay lip-service to the benefits of review, providing comprehensive and appropriately timed review is beyond what any teacher or student can reasonably arrange. Our results suggest that a digital tool which solves this problem in a practical, time-efficient manner will yield major pay-offs for formal education at all levels.”

And with the proliferation of handheld devices and technology-use during the pandemic – indeed, with those ever-present mobile phones – personalisation is ever more attractive.

In sum, personalisation offers an “efficient housekeeping” function to ensure that knowledge, once mastered, remains accessible and “a part of each student’s core competency” (Lindsey et al, 2014).

This in turn addresses cognitive bias and protects against “illusions of competence”, promoting time-efficient, high-utility learning as well as making the most of unsupervised study. Not least, it provides learner metrics that can be aggregated to offer teachers insights that can be used to refine and improve teaching in the first place.


Takeaways

  • Assessing learning is difficult because conditions that appear to create difficulties and slow the acquisition process can actually enhance long-term retention and transfer – and learners misread this effort as evidence of poor learning (the misinterpreted-effort hypothesis).
  • Teachers are strongly advised to teach not only the content, but the why and how of these various learning strategies – explaining and showing students why they actually work.
  • Let them fail first, before giving them the strategies to succeed: “The students who don’t do as well as they should might have a higher likelihood of embracing the new strategies I will be teaching them.” Prof Dunlosky (Tes, 2021).
  • Metacognitive monitoring accuracy improves as a result of spaced retrieval practice (for more, see article nine).
  • Personalised spaced retrieval is superior to other forms of massed and spaced practice.
  • Technology and the saturation of devices presents the opportunity to personalise learning.
  • And if you only have time to read one paper on this topic: Perceiving effort as poor learning: The misinterpreted-effort hypothesis of how experienced effort and perceived learning relate to study strategy choice (Kirk-Johnson et al, 2019): https://bit.ly/3B5lsuz

  • Kristian Still is deputy head academic at Boundary Oak School in Fareham. A school leader by day, together with his co-creator Alex Warren, a full-time senior software developer, he is also working with Leeds University and Dr Richard Allen on RememberMore, a project offering resources to teachers and pupils to support personalised spaced retrieval practice. Read his previous articles for SecEd via https://bit.ly/seced-kristianstill


References: For all research references relating to this article, go to https://bit.ly/3NEu8NL

Acknowledgement: This post would not have been possible without the support, conversations, and very real and very immediate feedback from Ambra Carretta's classroom and pupils. As I was drafting this article, Ambra was feeding back live on her efforts to introduce test-enhanced learning to her chemistry pupils. Not all her students were convinced. However, as the months passed, her students' attainment has changed their minds!

RememberMore: RememberMore delivers free, personalised, adaptive spaced retrieval practice with feedback. For details, visit www.remembermore.app or try the app and resources via https://classroom.remembermore.app/

