Best Practice

Using summative assessment data to improve student outcomes

What is summative assessment data used for, how can we ensure it is purposeful, valid, and reliable, and how can school leaders use it to inform school improvement and to improve student outcomes? Clare Duffy describes her school’s approach

The use of data in school can sometimes get a bit of a rough deal. Many teachers can remember excessive data-gathering in schools, often serving little useful purpose in improving student outcomes.

Thankfully, particularly since the Workload Reduction Taskforce (DfE, 2024) recommended that teachers should not carry out repetitive data entry and that the use of data should be sensible, we have seen considerable improvements in how data is used for monitoring in schools.

Indeed, a recent Teacher Tapp survey (2025) found that approximately 36% of teachers are required to provide pupil data to their senior leadership team once per half-term, a reduction of 14 percentage points since 2019.

We know that Ofsted will not use internal data to make judgements about a school, instead considering it in relation to how assessment is used to inform the delivery of the curriculum (Ofsted, 2024). Therefore, when planning how to use data in school, leaders should ask themselves two questions to ensure impact: what is the data going to be used for and how can we ensure it is purposeful, valid, and reliable?

It is important to note here the difference between summative and formative assessment in terms of the data they can provide. Ongoing formative assessment is invaluable in the classroom, helping teachers check for understanding and address gaps in students’ knowledge. However, this article will focus on how school leaders can use summative assessment data to drive school improvement and improve student outcomes. By summative assessment I am referring to how a student’s knowledge and skills are assessed at the end of a learning period. The data generated compares the student’s performance against a standard, often a single grade judgement such as a GCSE grade or a key stage 3 expectation.

Summative assessment at key stage 3 has posed a number of challenges for school leaders since the removal of national curriculum levels. There is no standardised testing system at key stage 3 for schools to benchmark their data against, so schools have to devise their own internal solutions for key stage 3 data monitoring.

Some use GCSE flight paths worked backwards down to year 7 with success criteria for key stage 3 assessments linked to GCSE grades. Others use a student’s attainment band on entry and track the progress they make using curriculum expectation maps with subject criteria linked to what they would expect a low/mid/high-attaining student to achieve at year 7, 8 or 9.

At my school, Uppingham Community College, we identified three areas where we needed to adapt how we used data to address our contextual challenges and improve student outcomes.

 

Challenge one: Getting the basics right

The first challenge we had to overcome was a lack of consistent data monitoring across subjects. We had recently introduced a new management information system (MIS), but uptake among subject leaders had been slow, with some preferring in-house Excel spreadsheets for marksheets and performance tracking.

While we had good monitoring systems within individual subjects, without anything centralised it was extremely difficult to identify and target underachievement across more than one subject, or for heads of year to support progress.

Over the course of the first autumn term all subject marksheets were migrated onto our MIS. We also stipulated that all subjects for each year group should have a summative assessment each half-term to provide enough data for performance monitoring.

This centralised data system has allowed the senior leadership team to have more productive conversations with subject leaders in their one-to-one meetings, focusing on key students and generating discussions about valid assessments and their place in each subject’s curriculum.

 

Challenge two: Identifying underachievement at key stage 3

The second challenge we identified was with the way we assessed and reported on students at key stage 3. We were using subject-specific criteria linked to age-related expectations, with students either working towards, meeting, or working beyond expectation.

However, for some students, such as those with SEND, it was difficult to show progress, as they could spend all of key stage 3 “working towards” even though they were improving year-on-year. We also found that the criteria were sometimes too vague, meaning underachievement could be missed until a student entered key stage 4 and met the rigour of GCSE grading.

We addressed this in several ways. First, we ensured that all subject leaders created a robust set of criteria aligned with the working towards/meeting/beyond judgement in the form of “curriculum expectation maps”.

These criteria are linked to a subject’s key concepts and knowledge, as identified in their curriculum intent, utilising know/can descriptors aligned with student outcomes.

Second, internal FFT20 targets (Fischer Family Trust targets benchmarked against progress in the top 20% of similar schools) for all years, including year 7, have been shared with teachers. Alongside this, we have introduced end-of-year exams for years 7, 8 and 9 in all subjects. This gives our students early practice of sitting terminal exams as well as providing a good opportunity to teach students retrieval and revision strategies.

A student’s performance in the exam is converted into a rough approximation of a GCSE grade using three-grade boundaries (e.g. a 75% score on the exam may equate to grades 6 to 8, whereas 35% may equate to grades 2 to 4). These grade boundaries are set by the subject leaders who created the exams internally. Teachers then use their best judgement to decide a student’s checkpoint grade at the end of each year (what they think the student is likely to achieve at the end of year 11), basing it on performance both in the exam and throughout the year.

This data is only used internally and is not shared with parents or students. Comparing the checkpoint grade to the FFT20 target grade allows leaders to monitor student progress, check students are on target for the end of year 11, and intervene early when there is underachievement.
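For readers who like to see the mechanics, the sketch below models how an exam percentage might be converted into a three-grade band and how a checkpoint grade might then be compared against an FFT20 target. The boundary percentages, bands and figures are hypothetical illustrations, not our actual values; in practice each subject leader sets boundaries for the exams they have written.

```python
# A minimal sketch of the checkpoint process described above. The boundary
# percentages, bands and student figures are hypothetical illustrations;
# in practice each subject leader sets boundaries for their own exams.

# Each entry maps a minimum exam percentage to a rough three-grade GCSE band.
GRADE_BOUNDARIES = [
    (75, (6, 8)),
    (55, (4, 6)),
    (35, (2, 4)),
    (0, (1, 3)),
]

def grade_band(percentage: float) -> tuple[int, int]:
    """Convert an end-of-year exam percentage into a three-grade GCSE band."""
    for minimum, band in GRADE_BOUNDARIES:
        if percentage >= minimum:
            return band
    return GRADE_BOUNDARIES[-1][1]

def below_target(checkpoint_grade: int, fft20_target: int) -> bool:
    """Flag a student whose checkpoint grade falls short of their target."""
    return checkpoint_grade < fft20_target

# Example: a 62% exam score falls in the 4-6 band; a teacher checkpoint
# grade of 5 against an FFT20 target of 6 flags the student for support.
print(grade_band(62))      # (4, 6)
print(below_target(5, 6))  # True
```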

 

Challenge three: Ensuring key stage 4 data is accurate and reliable

The third data challenge we focused on was that our key stage 4 monitoring relied on predicted grades, which are often motivational rather than an accurate reflection of a student’s likely performance. Year 10 reports also did not provide a single projected grade, instead giving a two or three-grade range, which made tracking progress difficult.

As a result, we now have internal data systems which track a key stage 4 student’s target grade (using FFT20), their projected grade (what their teacher thinks they will ultimately achieve), and a currently working at grade (the grade they would achieve if they sat the exam/qualification today).

The currently working at grades (CWAGs) are collected in April and July of year 10 and November, January, March and May of year 11. Alongside this, subject leaders have set 4+, 5+ and 7+ GCSE targets for their subject using FFT20, internal data, and year-on-year performance.

After each term’s data collection, subject leaders meet individually with the deputy head for quality of education in targeted improvement planning (TIP) meetings. This allows us to identify high-profile students and key marginal groups, supporting intervention planning.
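As an illustration of how a CWAG collection can surface students for these conversations, the sketch below flags anyone working below target in more than one subject. The records and the two-subject threshold are hypothetical examples; in practice the data would come straight from MIS marksheets.

```python
# Hypothetical sketch: surfacing cross-subject underachievement from a
# CWAG data collection. The records mimic a simple MIS marksheet export.
from collections import defaultdict

records = [
    # (student, subject, FFT20 target, currently working at grade)
    ("Student A", "English", 6, 5),
    ("Student A", "Maths", 6, 4),
    ("Student B", "English", 5, 5),
    ("Student B", "Science", 7, 6),
]

below = defaultdict(list)
for student, subject, target, cwag in records:
    if cwag < target:
        below[student].append(subject)

# Students below target in more than one subject become priorities for
# discussion and intervention planning.
priorities = {student: subjects for student, subjects in below.items()
              if len(subjects) > 1}
print(priorities)  # {'Student A': ['English', 'Maths']}
```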

All subjects offer year 11 after-school intervention sessions for 12 weeks between January and May. Additionally, a small number of high-profile students are contextualised for staff each week in our staff bulletin, where we list their current attainment, their self-evaluation of their mock exam performance, and their post-16 aspirations. This ensures the performance of these students remains a high focus for all staff.

 

Next steps

We are still very much at the start of our journey adapting how we use data within school to drive school improvement.

We are moving our year 11 mock exams next academic year from January to November to give us more time to act on any findings and inform interventions. Subjects will then be able to run further mock assessments in class during February/March, meaning that all students will sit, as a minimum, several practice papers during the year.

We are also seeking to redesign our key stage 3 and key stage 4 parent reports. For both key stages our behaviour for learning engagement criteria will be realigned with our new culture curriculum. In key stage 4 the reports will include a student’s target, projected and CWAG grade for each subject with single grades used from the second report of year 10 rather than the previously vague two or three-grade range.

 

Final thoughts

  • Always consider why you need the data and what it will be used for as this will help you decide what data you need to collect.
  • Visit other schools to explore good practice to help you develop creative solutions.
  • Always be mindful of staff workload and excessive data collection – only collect what is valuable and needed.
  • Ensure you get buy-in from your middle leaders as they will be the ones driving the data collection and will be doing much of the monitoring.

 

  • Clare Duffy is senior deputy headteacher at Uppingham Community College in Rutland. Find her previous articles and podcast appearances for SecEd via www.sec-ed.co.uk/authors/clare-duffy 

 

Further information & resources