Few of us became teachers because we love data. I have only ever been interested in data if it means that we can better support students. But too often in schools we find ourselves awash with data. We invest a lot of time inputting, managing and tracking data, often with minimal impact.
So when I became an assistant head through the Future Leaders programme, I leapt at the chance to sharpen up how we use data at my school – Henbury School in Bristol. My starting principle was that the data needed to work for us and our students, not the other way round.
So what data would be useful to us? How would it be presented? How often would it be produced? Who would see it? These were the questions that I wrestled with. In the end they helped me arrive at a number of principles, which informed the school’s new way of distributing data.
Principle 1: Name it
Too often, we overlook the importance of assigning something a name. We shouldn’t. Without a name, any initiative loses traction. Doug Lemov’s book Teach Like a Champion is a great example of taking naming seriously: each of his 49 teaching techniques has a specific, carefully considered name. The name I chose for my initiative was “Scorecard” – a short, sharp run-down of key data published after every data drop.
Principle 2: Stick to one side of A4
Something happens to most of us when we are presented with data over multiple pages: our eyes glaze over and we switch off. That matters, because there’s no point producing data if no one pays attention to it. My starting principle was that all the data we cared about should fit on one side of A4. Spreadsheets bigger than this are for the backroom: in the busy world of schools, any impact the data might have is lost to difficulties of interpretation or the time it takes to digest.
Principle 3: Make it comparable
Once, as a head of faculty, I remember being shown the number of behaviour incidents in each faculty across the school. However, some faculties had far more lessons than others, which made it very unclear whether my faculty was doing better or worse than average. This in turn made it difficult to act upon the data.
The Scorecard, in contrast, shows the percentage of lessons in which behaviour incidents occur, allowing teachers to compare different subjects and take action. This idea of comparability is continued throughout the card – for example comparing progress in different subjects, rather than attainment.
Comparability is also important over time. If I am trying to improve student engagement in my subject, I need to know that students will be asked the same question for the next Scorecard as for the current one, so I can see what difference has been made. The Scorecard uses the same measures over time.
Principle 4: Use a range of data
Data will never be able to capture everything we care about in education. Relying on one type of data will capture even less. Is one teacher teaching to the test, but not developing students’ oracy or debating skills? Does another teacher freeze the moment anyone comes in to observe, despite being universally respected by students for consistently high-quality lessons? The Scorecard compiles a range of data, covering student progress, the school’s standard operating procedures (SOPs) for lessons, and behaviour. We had all this data already, but we had never put it all together in one place.
There was one important type of data we did not have: student perceptions of teaching. In order to address this, we introduced a student survey. The survey systematically asks students to agree or disagree with a number of statements about the teaching for every subject they study. This generates reliable, comparable and therefore useful data across all subjects.
Principle 5: Agree how to use it
Imagine that the data shows that students are making less progress in science than in maths. It would be tempting to conclude from this that the science teachers are not as good as the maths teachers. But rarely can data let us jump so quickly to conclusions. For example, it might be that there are new staff in science, or that the science faculty has recently made some sensible changes to the curriculum that are taking time to bed in. We must not simply take the data at face value; rather, we must see it as a chance to ask deeper questions about what’s going on in the school.
Principle 6: Publish it to all staff
As a teacher, if my subject is performing badly on some measures, I want to know. It is impossible for staff to hold each other properly to account or to monitor their own efforts without everyone having access to the same information.
Below is an extract of how the Scorecard might look. The data used is not real but has been made up for illustrative purposes.
In the first year the Scorecard ran at Henbury, the percentage of students making expected progress or better rose from 55 to 69 per cent. This was certainly not the result of the Scorecard alone, but I believe it has helped in a number of ways, two of which I would like to highlight.
1. Improving accountability
All line management meetings now use the Scorecard as their starting point. This makes it hard for anyone to avoid addressing even difficult issues. The data for each subject is there, in black and white, for everyone to see. For example, if the data shows that year 10 boys’ progress is poor, that becomes a priority. Heads of faculty do not have to do endless data analysis to prove progress from last year; nor are line managers left wondering if the analysis is robust enough. Heads of faculty can spend more time supporting their staff and improving teaching and learning.
The Scorecard has also improved the accountability of the senior team, as it reveals school-wide issues that need addressing. We already knew that we had a gender gap in some subjects, but now that we have the Scorecard it is impossible for it not to be a school priority.
2. Identifying new issues
Last year, the student survey revealed that teachers didn’t set much homework in some subjects. It also revealed how many students arrived late in the morning and lacked the correct equipment.
These are now major priorities for us this year and we have already seen an impact. We are all able to see that impact directly in the Scorecard, so it helps us to monitor and feel positive about the changes we are making.
Mostly, there has been little resistance to the Scorecard, and where there have been criticisms these have often provided opportunities for reflection and adaptation. That said, the Scorecard is far from perfect. In particular, it is only as good as the data it presents.
Some people have asked if the Scorecard has led to less accurate data. In fact, it has in some cases improved the accuracy of our data, as unrealistic patterns have been revealed very quickly. Others have wondered if we are putting too much emphasis on data. I don’t think we are, but it is a useful reminder that data never tells the whole story; rather, it is a prompt to ask deeper questions.
There is little originality in the idea of the Scorecard. Many organisations use a single set of indicators to track performance consistently in the same format over time. But I don’t know of any other school which has done anything quite like it – although I’d love to hear from any schools that have.
For me, the goal is for data to work in the background, acting as a clear and consistent point of reflection for whatever we are doing – as well as demonstrating our accountability to governors, parents and the wider public. The more embedded data is and the more clearly it is presented, the more we should be freed up to do what we do best: teach.

Future Leaders is a development programme for aspiring headteachers of challenging schools. To apply or nominate, visit www.future-leaders.org.uk