During the recent Easter break, Ofqual published an interesting consultation about the awarding of examination grades. In brief, the proposal seems to be that we should return to the old model of norm-referenced grading rather than continuing with our current criterion-referenced system.
For any readers who are too young to have experienced norm-referencing or old enough to have wiped it from memory, may I remind you of the essential difference between these systems.
Norm-referencing involved allocating a fixed proportion of grades in advance, following a statistical bell-curve, for example with a pre-determined five per cent or so achieving the highest and lowest grades and a normal distribution between them.
When grade boundaries were set each year at O level, CSE, GCSE or A level, reference was made to these statistics. The possibility that national cohorts could differ significantly in ability from year to year was not entertained.
This system was then replaced with criterion-referencing, which we now all use. The criteria for gaining particular marks for a question or group of questions were published and anyone who fulfilled those criteria could achieve full marks – theoretically, 100 per cent of entries could achieve an A*.
As teachers, we are inevitably under pressure, both from ourselves and from others, to ensure that our students achieve the best possible results. Once examination criteria were fixed and published, we made it our business to prepare our candidates as rigorously as possible to meet those criteria. Consequently, the proportion of candidates gaining higher grades increased year on year – that is, until last year, when examination questions were deliberately made more difficult at the government's behest.
The fact that examinations also became modularised and that unlimited opportunities were provided for re-sits unsurprisingly added to the so-called “grade inflation”.
Times are changing. Re-sits and modularisation are now being phased out. We have been warned to expect a significant decrease in examination grades this summer. If norm-referencing is also re-introduced, then results will change still further. This will have effects on many other things – university offers will have to be reduced if places are to be filled. Employers will be even more confused about what quality they might expect from candidates.
Already, as an employer, I find myself looking closely at the dates of candidates’ qualifications – is an A grade at A level from 2010 better than a B grade from 1999? Who knows?
Much fuss is being made about the change to numerical grades at GCSE – I am old enough to have a set of O level grades on the 1 to 9 scale – except that grade 1 was the highest and grades 7, 8 and 9 were different degrees of clear failure, "fail" being a term that was lost with the advent of GCSE.
Are we now being told that the qualifications which our students have achieved in the past decade or so are actually not worth as much as they appear to be? How does that make those who hold these qualifications feel? There will be some with A grades from before the days of A* who achieved marks in the high 90s per cent, but others, as we have heard in the case of mathematics GCSE, for example, who gained A grades with marks in the 50s. There is now no way of discovering which candidates were which.
Will the new system that is now being proposed become discredited in a few years’ time and be reformed? Every other grading system seems to have been so.
Would it not be better to give candidates their actual marks as part of their certificate? We are now used to students receiving marks at AS, which are used for selection purposes by some universities. Why not have a final certificate which stated mathematics A* or grade 9 (97 per cent)?
Transparency is a much vaunted concept, but examination grading remains confusing for many.
• Marion Gibbs is head of James Allen’s Girls’ School in south London.