Social justice and schools: Assessment

Written by: David Anderson

In a seven-part series, teacher and school leader Dave Anderson considers how schools can be a key driver for social justice and how we can make our education system more equitable. In part six, he looks at the problems with our system of assessment and examinations.






Here are three ways the assessment system in England sets children up to fail:

  • At age 11, as they leave primary school, roughly one-third of children are judged as not having reached the expected national standard in reading, writing and maths. This brands more than 200,000 children at age 11 as academic failures.
  • Each year, roughly one-third of 16-year-olds do not achieve a grade 4 pass in English and maths at the end of key stage 4 – the so-called Forgotten Third. This is due to the comparable outcomes system, which means 200,000 or so young people are once again deliberately labelled as academic failures. It is likely that many of these will be the same children who were “failing” at age 11.
  • In 2019, only 35 per cent of English 18-year-olds went to university (UK Parliament, 2020). And yet our entire secondary education and assessment system appears to be set up predominantly to pave the way for university education, even though in practice this has always been a minority pathway.

We also know this: schools in England are ranked by public, high-stakes, standardised test outcomes. These tests are administered throughout primary and secondary school and dominate student and teacher actions at key times; they also profoundly affect the curriculum.

The nature and scope of standardised assessment is my focus here and it is worth noting that we have a wider compulsory national assessment regime than most other countries:

  • Reception: Baseline Assessment (postponed until autumn 2021 due to Covid).
  • Year 1: Phonics screening.
  • Year 2: National curriculum assessments in English, maths and science.
  • Year 4: Multiplication tables check.
  • Year 6: SATs in maths and English (reading, writing, and grammar, punctuation and spelling), plus science sample tests every other year, published in national performance tables.
  • Year 11: GCSEs and other Level 1/2 qualifications, published in national performance tables.
  • Year 13: A levels and other Level 3 qualifications, published in national performance tables.

This is considerably more than in other European countries and at odds with some of the highest performing international education systems. Countries such as Finland leave high-stakes compulsory testing until the age of 17/18, at the end of compulsory education (Sahlberg, 2015).

Italy, by contrast, has state exams at age 14 and 18, but these consist of two to three written exams and an interview (European Commission, 2020). External exams in Canada vary considerably between provinces but are generally limited to year 13, with around 50 per cent of the grade based on continuous assessment rather than exams (Crehan, 2016).

Of course, the current Covid-19 pandemic has led us to question the nature and purpose of our exam system. The debate over the future of the GCSEs continues, particularly since their architect, Lord Baker, said in June that they have run their course (Lough, 2020).

Other countries do not conduct their most important high-stakes tests at the age of 16, when adolescents are not necessarily performing at their most representative. The National Baccalaureate Trust and the Rethinking Assessment campaign, led by prominent headteachers from both the state and independent sectors, are but two examples of organisations campaigning for revised models of assessment.

Geoff Barton, general secretary of the Association of School and College Leaders, has written previously in SecEd about removing comparable outcomes and replacing them with an English and maths “passport” (Barton, 2020). The One Nation group of Conservative MPs is also calling for a “radical rethink”, citing as driving forces for change the impact of exam preparation on teaching time, the fact that we have two sets of high-stakes exams within three years, and the negative impact on mental health (Adams, 2020).


Performance tables

Putting this debate to one side for the moment, I want to focus here on how the act of publishing high-stakes assessment data adversely affects equity in our system.

Through the league table culture, our assessment system pits one school against another, highlighting “winners” and “losers”. This serves only to create tensions between schools and perpetuate inequalities.

If you list the schools in your area by the exam outcomes metric currently in favour – Progress 8 – it is highly likely that the rank order will go something like this:

  • Most academically successful school/s – probably faith or selective school/s.
  • Most academically successful non-selective school/s.
  • Moderately successful non-selective school/s.
  • Least academically successful school/s – probably located in the most deprived areas.

It is also likely that if you ranked these schools in order of “desirability” from the parental point of view, the order would be very similar (independent schools do not have to publish performance figures such as Progress 8 – another indication that these schools exist in a parallel, separate and socially segregated system).

Now add in the percentage of students eligible for free school meals (FSM). The sequence is likely to be the same, only in reverse.

So, too often, the most academically successful schools (by the metrics decided upon by government, at least) are the most desirable in the eyes of parents but are also generally those with the lowest proportion of disadvantaged students. The converse is therefore true – the most disadvantaged students find themselves concentrated in the least desirable schools.

As I have mentioned already in this series, this effect is compounded because less desirable schools then find it more difficult to recruit and retain the best teachers and find it more challenging to attract more academically motivated students.

Try ranking the schools in your area using the government’s “Find and compare schools in England” service – I would be pleasantly surprised if it failed to show something like the sequence I have described. I have tried it for many areas, including my local area, my nearest city, a Midlands town, and the borough of London where I used to work – the patterns hold true.


Flawed nature of value-added

Regardless of the pros and cons of particular progress and attainment measures, I would argue that it is the public nature of the “league table” approach that is most damaging to the least advantaged in our society.

However, let us just briefly consider why some educationalists feel that the data used to make these comparisons is flawed. Stephen Gorard, professor of education and public policy at Durham University, is highly critical of any use by our school system of value-added measures (Gorard, 2018). He makes the point that value-added is almost entirely predictable from raw scores – in other words, more able students generally make more progress than less able students. So schools with a higher proportion of more able students will achieve better progress scores and therefore appear to be providing a better quality of education.

He states: “If value-added (VA) scores are as meaningless as they appear to be, there is a serious ethical issue wherever they have been or continue to be used to reward or punish schools or to make policy decisions. VA is zero-sum, meaning that it is inherently competitive and schools can only improve their scores at the expense of others.”
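Gorard’s zero-sum point can be seen with some simple arithmetic. The following sketch uses a deliberately simplified value-added model with invented figures – it is not the DfE’s actual Progress 8 methodology – in which each pupil’s VA is their exam score minus the national average for pupils with the same prior attainment. Summed nationally, these deviations must cancel out, so any school sitting above zero forces others below it.

```python
# Simplified, illustrative value-added (VA) model -- invented figures,
# not the DfE's actual Progress 8 calculation.
from collections import defaultdict

pupils = [
    # (school, prior-attainment band, exam score)
    ("School A", "high", 72), ("School A", "high", 68),
    ("School A", "mid", 55),
    ("School B", "mid", 48), ("School B", "low", 40),
    ("School B", "low", 35),
]

# National average score within each prior-attainment band
totals, counts = defaultdict(float), defaultdict(int)
for _, band, score in pupils:
    totals[band] += score
    counts[band] += 1
band_avg = {band: totals[band] / counts[band] for band in totals}

# A school's VA is the mean of its pupils' deviations from the band average
school_va = defaultdict(list)
for school, band, score in pupils:
    school_va[school].append(score - band_avg[band])

for school, deviations in sorted(school_va.items()):
    print(school, round(sum(deviations) / len(deviations), 2))

# Summed over ALL pupils nationally, the deviations cancel out exactly:
national_total = sum(score - band_avg[band] for _, band, score in pupils)
print("National total:", round(national_total, 2))  # 0.0
```

In this toy model School A lands above zero and School B below it by the same national total; no reform of teaching in either school can change the fact that the measure is relative, which is exactly the competitive dynamic Gorard describes.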

The truth, according to Gorard, is that “published school performance measures based on VA scores are likely to be profoundly misleading, particularly for those such as parents and policy-makers”.

He describes Progress 8 as “really, really flaky” and just another version of value-added (Gorard, 2018).


How to use assessment data

Assessment data is of course important for individuals, cohorts, clusters of schools and at a national level to inform us of standards and progress. However, this data should be used by teachers, school leaders and educational professionals to inform practice and make recommendations. Once in the public domain, such data will always lead to increasing competition and decreasing collaboration between schools. It also makes it much more difficult for some schools to make the improvements they need.

Instead of standardised national tests for all students, samples of students could sit national benchmark tests for Department for Education monitoring purposes. Such a system is employed by other countries, such as Finland. This could make primary schools a zone free of standardised testing, while still providing the DfE with data to demonstrate impact, progress and value for money.

Is it time to re-organise our schools to focus more on the “head, heart and hand”, as Peter Hyman (2020) would say? How could we assess such qualities more effectively going forward? Ranking schools by assessment data league tables would seem rather short-sighted.

With employers increasingly looking to be “qualifications blind” and seeking other qualities such as creativity and collaboration skills, an education system that focuses so heavily on academic progress and continues to pit school against school seems increasingly outmoded and serves only to increase the inequity in our system.

In my final article in this series, I look at our high-stakes accountability regime, led by Ofsted.


  • David Anderson is deputy principal at Uppingham Community College in Rutland.

