Making data meaningful in inspections

Written by: Stephen Rollett

Ofsted’s recent inspection update urges schools to focus on ‘meaningful data’ and warns about the misuse of data. Specialist Stephen Rollett advises

It is important to keep data in perspective – and notably, it is the inspectorate itself that is urging caution. If you have not yet read Ofsted’s School Inspection Update: Special edition from September 2017, I would urge you to do so. In it, Sean Harford, Ofsted’s national director of education, makes the point that what really matters is meaningful data.

So, what is “meaningful data” and how best can we use it at inspection time?

Invalid data

Validity is essential when using data. And group size has been a barrier to valid judgement in recent years. The tendency to draw big conclusions from very small groups of pupils has led to some poor data practice.

Cutting data into ever smaller sub-groups is appealing on face value – it seems to promise a sharper and more nuanced analysis. However, when comparing the performance of a few pupils with national benchmarks (drawn from thousands of pupils), it is easy to jump to invalid conclusions. Something as stark as one fewer grade C due to a pupil not sitting an exam could take on a significance in inspection far in excess of what is statistically valid.
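To see how fragile small-group percentages are, here is a minimal sketch. The cohort size, grades and helper function are invented for illustration, not drawn from the article:

```python
# Hypothetical example: how a single pupil shifts a small group's headline figure.
def pct_achieving(grades, target="C"):
    """Percentage of pupils achieving at least the target grade (A*-G scale)."""
    order = ["A*", "A", "B", "C", "D", "E", "F", "G", "U"]
    achieved = [g for g in grades if order.index(g) <= order.index(target)]
    return 100 * len(achieved) / len(grades)

cohort = ["A", "B", "C", "C", "C", "D", "D", "E", "E", "F"]  # 10 pupils
before = pct_achieving(cohort)   # 50.0 — five pupils at grade C or above

cohort[4] = "U"                  # one pupil misses the exam, recorded as U
after = pct_achieving(cohort)    # 40.0 — a 10-point swing from one absence
print(before, after)
```

In a cohort of 10, every pupil is worth 10 percentage points, so one absence produces a swing that would be dramatic at national scale but is statistically trivial here.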

This concern was reflected in inspectors’ training this year. They were cautioned by experts against relying on data drawn from small numbers of pupils. It was also reflected in the School Inspection Update, which said: “The breaking down of data into smaller and smaller groups compromises its use on inspection to inform valid judgements on outcomes. This manifests itself most acutely where we look at the intersection of several characteristics simultaneously, for example previously high-attaining, disadvantaged boys.”

This knowledge should embolden schools to resist invalid use of data by inspectors but also encourage them to consider the validity of their own use of data.

Everyday experience

However, while we should be cautious about making judgements, it is important to understand that inspectors will continue to be interested in the achievements of all pupils, even where group size presents a statistical challenge – as you would expect.

But rather than basing their judgements on the data alone, inspectors should explore the “everyday experience” of such pupils. As such, schools’ self-evaluation activities in relation to key groups should do likewise and seek to understand the full range of evidence including lesson observations, work scrutiny and curriculum quality.

Moreover, leaders at all levels should consider not just how to measure, but how best to affect the outcomes of students who need to catch up, who are disadvantaged or in need of additional support.

The tendency to focus on data from small groups of pupils with intersecting characteristics (e.g. high prior-attaining, disadvantaged) may have inadvertently encouraged schools to target interventions at groups and individuals when, in fact, efforts may yield better results if resources are deployed to support all pupils.

Previously some schools have been anxious about taking this stance with inspectors, fearful that they will be adversely judged. The fact that it is addressed in the School Inspection Update should give confidence to schools to think less about what they think inspectors want to see and more about what is right for their pupils.

Inspection Data Summary Report (IDSR)

The inspection dashboard tended to be the starting point for inspectors who were seeking to understand a school’s data. The new Inspection Data Summary Report (IDSR) replaces the dashboard and will be used every bit as much by inspectors.

But there are some significant changes. First, the old “strengths and weaknesses” section on the cover has been replaced by “areas to investigate”. This is about more than semantics and is designed to underpin the new data methodology. Consequently, inspectors are pointed towards aspects of performance where the data appears to be meaningful.

Second, the context section has been moved from the back to the front – another indicator of a more considered approach.

Another change in the IDSR is the use of quintiles to show trends over time. This is arguably at odds with the widely accepted position that, because of the flux in our qualifications system, you cannot compare results year-on-year. However, Ofsted’s approach is to compare a school’s position relative to others (schools are ranked and divided into quintiles, each containing 20 per cent of schools) so that the scores themselves are not being compared.

It’s not without difficulty, though. The nature of most populations is that they tend to cluster around the middle quintile, and normal variation means a school could move between quintiles two and four without there being anything statistically significant at play.

Therefore, schools and inspectors alike should avoid over-interpreting movement within the middle three quintiles, and instead pay attention when a school moves up into the top 20 per cent or down into the bottom 20 per cent.
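The quintile banding itself is simple rank arithmetic. The function below is my own sketch of the general idea, not Ofsted’s published methodology, and the scores are invented:

```python
import math

def quintile(score, all_scores):
    """Band a score into quintiles 1 (top 20%) .. 5 (bottom 20%) by rank.
    A simplified illustration, not Ofsted's exact method."""
    ranked = sorted(all_scores, reverse=True)  # highest score first
    rank = ranked.index(score) + 1             # 1-based position in the ranking
    return math.ceil(5 * rank / len(ranked))

scores = [0.9, 0.5, 0.3, 0.1, 0.0, -0.1, -0.2, -0.4, -0.6, -0.9]
print(quintile(0.9, scores))   # 1 — top 20 per cent
print(quintile(0.0, scores))   # 3 — middle quintile
```

Note how coarse the bands are: small changes in the underlying score can leave a school in the same quintile, or nudge it across a boundary, without either outcome being meaningful.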


Progress 8

Secondary schools should continue to consider the impact of “outliers” in their Progress 8 data. We know that the Department for Education has said it is looking into the issue of pupils with “extremely negative scores”, but this will not happen in time for 2017 results and accountability.

An interim and complementary measure we have been advocating at the Association of School and College Leaders is for schools to calculate the percentage of pupils who achieved a positive progress score. While this is not an official performance measure, it is potentially a good way of illustrating the impact of extremely negative results on your Progress 8 score.

For example, in one school we looked at, 60 per cent of students achieved positive progress and yet the school’s Progress 8 score was negative. This measure may be a good way of contextualising and, in some cases, challenging the narrative of the Progress 8 score. It could be applied at bucket level as well as the overall school measure.
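Assuming a simple per-pupil list of Progress 8 scores, the calculation is straightforward. The figures below are invented to mirror the 60 per cent example, not taken from the school mentioned:

```python
# Invented scores illustrating the pattern described: most pupils make
# positive progress, while a few extreme negatives drag the mean below zero.
scores = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, -2.0, -2.0, -2.0, -2.0]

pct_positive = 100 * sum(s > 0 for s in scores) / len(scores)
progress8 = sum(scores) / len(scores)

print(pct_positive)  # 60.0 — most pupils made positive progress
print(progress8)     # -0.5 — yet the headline score is negative
```

The same calculation can be run on each Progress 8 bucket (English, maths, EBacc, open) as well as the overall measure.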


Presenting your data

Another thing to consider is the presentation of data. It is helpful to have a clear sense of the main messages in your data so that you can share an accurate view with inspectors and highlight the strengths and improvements it contains.

Too often, though, this clarity gets lost in data sets which are overly lengthy and poorly structured. Provide concise overviews to inspectors in the first instance and draw on extra detail as it’s needed. Simple tables, graphs and transition matrices serve as useful vehicles for highlighting the salient points.
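A transition matrix is just a cross-tabulation of prior attainment against outcomes. A minimal sketch, using invented pupil records and band labels of my own choosing:

```python
from collections import Counter

# Invented pupil records: (prior attainment band, GCSE outcome grade).
pupils = [("high", "7"), ("high", "6"), ("middle", "5"),
          ("middle", "4"), ("middle", "6"), ("low", "3"), ("low", "4")]

matrix = Counter(pupils)  # counts pupils in each (prior band, outcome) cell
bands, grades = ["high", "middle", "low"], ["3", "4", "5", "6", "7"]

print("band    " + "  ".join(grades))
for band in bands:
    row = [str(matrix[(band, grade)]) for grade in grades]
    print(f"{band:8}" + "  ".join(row))
```

Even a simple grid like this lets an inspector see at a glance whether pupils are progressing from their starting points, without wading through a full pupil-level data set.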

And finally

Remember one final point too – your inspectors will have had time to familiarise themselves with your historic data before they set foot through the gate. They will know far less about the performance of your current pupils. This is where you hold all the cards and, used well, this knowledge can make all the difference. Here are five questions to help you prepare:

  • How does your curriculum and assessment system plan for and support progress?
  • How much progress are current pupils making?
  • How do you know?
  • How do you know teachers’ judgements are accurate?
  • What key messages do you want to convey about the progress of current pupils?

Stephen Rollett is an inspections and accountability specialist with the Association of School and College Leaders.

Further information

School Inspection Update: Special edition, Ofsted, September 2017

