
‘Volatile’ exam results mean we should judge schools over five years, not just one

Research by a major awarding body finds ‘significant volatility’ in exam results due to a range of ‘complex factors’, with one in five schools seeing year-on-year variations in excess of 10 per cent. Its authors are now calling for schools to be judged over a period of at least five years.

Schools should be judged on their examination performance over the period of “at least five years” rather than on just one year’s results, a major awarding body has said.

Research by Cambridge Assessment has found that “significant volatility” exists within exam results, even when the impact of factors such as the reliability of marking is removed.

Its study, entitled Volatility in Exam Results, warns that a range of complex factors may cause exam results in schools to go up or down in unpredictable ways.

In fact, the authors, Cambridge Assessment researchers Tom Bramley and Tom Benton, conclude that more than one in five schools experience year-on-year variations of more than 10 per cent in their results.

Mr Bramley said: “This study shows quite clearly that exam results in a school may go up or down in unanticipated ways, caused by a wide and complex set of factors.

“When swings occur they could be because of what is happening in the school or the children’s lives, they could be to do with the assessment itself or the way that national standards are applied, or to do with teaching and learning. 

“But what our study shows is that when we’ve taken account of the variations which can be attributed to quality of marking and to the location of grade boundaries, surprisingly high levels of year-on-year volatility in exam results remain.

“Schools should still monitor exam results for an administrative error which might have occurred, and should still look for and alert exam boards to peculiar outcomes; but everyone in the system should be aware of the level of volatility typical in the context of the complex system which is schooling.”

The report itself states: “At the level of the individual schools, we have shown that there is considerably more volatility in the system. The analysis we have presented suggests that even if marking is accurate, and even if we deliberately choose grade boundaries purely to minimise volatility, volatility in schools’ results would remain.” 

The research will be of particular interest to school leaders in light of the 2012 GCSE English grading fiasco, which saw huge variations in the awarding of English and English literature GCSEs.

Around 10,000 students missed out on expected C grades in June 2012’s exams because of a decision to dramatically raise grade boundaries. These students would have received Cs had they sat the exams in January 2012.

At the time, the Headmasters’ and Headmistresses’ Conference (HMC) reported variances of 10 per cent or more in some schools, describing this level of variation as “a serious concern”, a statement to which this week’s research alludes. The report states: “More than a fifth of schools would still experience levels of volatility that, according to the HMC, should be seen as a ‘serious concern’.

“Whether or not this level of volatility is concerning remains an open question, and one that cannot be answered without far more detail about the individual circumstances surrounding particular schools. However, what is clear is that volatility alone cannot be taken to imply that either marking or setting of grade boundaries has been performed incorrectly.”

Tim Oates, group director of assessment research and development at Cambridge Assessment, said the research had implications for school accountability measurements.

He explained: “It appears that underlying school-level volatility may be an enduring and persistent feature of education arrangements, which means that school performance – in terms of exam results – should be judged on a five-year picture rather than one-off annual drops or increases.

“This is a very important finding and one which challenges many assumptions, with implications for the approach to accountability and for accountability measurements. The analysis is a valuable contribution to building a far more powerful and analytic approach to system improvement and enhancement of assessment. It is a significant part of a picture that we are continuing to investigate.”

Robin Bevan, headteacher of Southend High School for Boys and school leadership representative on the Association of Teachers and Lecturers’ National Executive, said the “intelligent and insightful” report was welcome.

He continued: “Judgements about school outcomes, and indeed about the performance of individual teachers, need to be based on sound evidence. The rigorous analysis clearly shows that schools will see natural fluctuations from year to year in exam outcomes.

“These variations should not be used, simplistically, to assess the effectiveness of schools on one year’s output or to assume there are problems with the reliability of exam markers.”
