Flawed assumptions plague our exams system

Written by: Dr Mary Bousted

The ‘mutant algorithm’ made a series of flawed assumptions about students’ exam results. But our exam system has always had at its heart several flawed assumptions, says Dr Mary Bousted


Schools and education have not routinely topped the media agenda. Until now. This summer’s A level grading has changed all that. It is a cut-through issue rivalled only by Dominic Cummings’ drive to Barnard Castle to check his eyesight.

On one thing, all can agree. It must never happen again. That’s the easy part. More difficult is what should happen in the 2021 examining process.

As I write, teachers are starting the school year with no idea of what the content and format of this year’s GCSE and A level exams will be. Politicians with short attention spans should not make the mistake of thinking that they have months to give the profession an answer – they don’t.

Teachers need to know, now, what their pupils will be tested on so that they can prepare them for the exams they will take. Parents are also very interested in this crucial question.

Boris Johnson blamed a “mutant algorithm” for the A level results debacle. The truth is rather more prosaic. What the crisis revealed is not an exam system which went wrong under extraordinary circumstances, but the routine flaws that are replayed each and every year as grades are awarded for A levels and GCSEs.

The “mutant” Ofqual algorithm assumed that 40 per cent of candidates would underperform in their A levels. But that assumption is built into the awarding process every year through Ofqual’s calculation that students will undergo a series of routine mishaps – misunderstanding the question; revising topics that do not appear on the exam paper; failing to turn over the final page; having hayfever; having suffered a recent traumatic event, and so on.

Exams are not, for these 40 per cent of candidates, a reflection of their ability. Rather, they are a marker of how they perform on the day.

And there is another very uncomfortable fact to consider. Exam marking is not an exact science – 25 per cent of exam papers, if marked by different examiners, would be awarded different grades. When it comes to humanities subjects, like English literature and history, that unreliability rises to approaching 50 per cent (Sherwood, 2019).

So, if a lesson is to be learned from this year’s exams calamity it is this: we must do things differently. Michael Gove’s reforms have resulted in a fragile and unreliable A level and GCSE grade awarding system. The danger with putting all the assessment eggs in the exam basket has become abundantly clear.

Other high-performing education systems have broader, more robust processes in place to allocate grades for national qualifications. These countries use exams, of course, but crucially alongside other means of assessing student achievement – including the assessment of practical skills and project work.

The way we assess our qualifications drives the way we teach. England is now third in the international league table for rote learning and memorisation. This is good for basic skills. It is not good for higher order thinking.

When asked why England topped the rote learning league tables, the OECD’s head of education, Andreas Schleicher, said it was completely understandable that in a system so dominated by timed exams teachers would favour rote learning.

We know that this system does not prepare our students well for university courses. It does not prepare them for adult life and the world of work. It is not what the OECD recommends in a high-performing education system. Assessment dominated by exams also discriminates against disadvantaged young people, who need more support throughout the course – more opportunities to “bank” their achievement and get feedback on their progress – than a system reliant on end-of-course exams can provide.

That is why the National Education Union (NEU) is launching a commission into secondary assessment. We want to know what we can learn from other high-performing education nations about the ways they assess achievement and potential. We have, we think, a lot to learn.


  • Dr Mary Bousted is the joint general secretary of the National Education Union. Read her previous articles for SecEd via https://bit.ly/3i6lzvS


Further information & resources

Sherwood: One school exam grade in four is wrong. Does this matter? Higher Education Policy Institute, January 2019: https://bit.ly/3i9yBZl

