## In Defence of Progress 8 (Parts 4 and 5)

**Problem 4: It is Based on Ancient and Unreliable Key Stage 2 Tests in Maths and English**

(Geralt, Pixabay)

‘Yes Dominic, that’s all very well, but it is all based on student starting points, which are two tests in year 6, subject to all sorts of chance results – it’s the equivalent of building a house on foundations of sand.’

Well, in theory, if those test results were random, then yes.

But what happens? Every time you find a school with a lower average APS score, the average P8 of similar schools (in the comparator 54) also goes down. In other words, those KS2 tests really do measure something worthwhile – those scores clearly do impact how students do in year 11, whether we like it or not.

So, while we can see that it might be harder to get progress from a student coming in at level 3 than one who comes in at level 4, we absolutely can’t argue that we should ignore how other schools with similar intakes do. Our ranking really will mean something when we compare ourselves to those schools.

And of course you can be much more forensic than that. It is easy to work out the average P8 for every level 3 student in the country, and compare your level 3 students to that. Likewise with level 4 and level 5, and indeed a dozen other divisions of your cohort.
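The level-by-level comparison described above can be sketched in a few lines of Python. All of the figures here are invented for illustration; the real numbers would come from the DfE's published national P8 data.

```python
# Hypothetical national average P8 by KS2 starting level (illustrative only).
national_avg_p8 = {"level 3": -0.15, "level 4": 0.00, "level 5": 0.10}

# Hypothetical P8 scores for our own students, grouped by KS2 starting level.
our_students = {
    "level 3": [-0.6, -0.3, -0.45],
    "level 4": [-0.2, 0.1, -0.35, -0.15],
    "level 5": [0.05, -0.1],
}

# For each starting level, compare our average to the national average.
for level, scores in our_students.items():
    our_avg = sum(scores) / len(scores)
    gap = our_avg - national_avg_p8[level]
    print(f"{level}: ours {our_avg:+.2f}, national {national_avg_p8[level]:+.2f}, gap {gap:+.2f}")
```

The same loop works for any other division of the cohort (disadvantaged students, boys and girls, and so on) once you have the matching national averages.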


**Problem 5: Confidence Intervals Mean Schools with Different P8 Scores are Actually Doing Just as Well. The P8 can be a False Positive, or False Negative**

(Geralt, Pixabay)

So, another objection surfaces. Two schools with P8 scores of 0.1 and -0.1 might each have a confidence interval of ±0.2, so the intervals overlap. The difference between them could simply be statistical noise, and not real.

This is true. However, if you have a low P8, the comparator 54 will contain plenty of schools whose entire confidence interval is above yours. For example, perhaps your school has a P8 of -0.35. That is the figure of the headteacher I quoted at the beginning, whose school has a large number of disadvantaged students. Let's imagine this is your data.

When you look at the comparator schools, you will find only 9 schools out of the 54 which have lower P8 scores. Your confidence interval has an upper limit of -0.2. So, you want to learn from schools whose confidence intervals have a lower limit which is better than this. Let's make it much better: we want a lower confidence limit of at least 0.0, so we are confident that their students' progress is *at least* 0.2 of a grade per subject better than ours, which amounts to two whole grades across a student's Attainment 8 slots. How many are there in our 54? 11.
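The confidence-interval filter just described can be sketched as follows. The school names and figures are hypothetical; in practice you would read the lower confidence limits off the DfE comparator tables.

```python
our_upper_ci = -0.2  # the upper limit of our own confidence interval

# (name, lower confidence limit of P8) for some invented comparator schools.
comparators = [
    ("School A", -0.45),
    ("School B", -0.10),
    ("School C", 0.05),
    ("School D", 0.12),
    ("School E", -0.25),
]

# Schools whose whole confidence interval sits above ours...
clearly_better = [name for name, lower in comparators if lower > our_upper_ci]

# ...and the stricter cut: a lower confidence limit of at least 0.0.
much_better = [name for name, lower in comparators if lower >= 0.0]

print(clearly_better)  # ['School B', 'School C', 'School D']
print(much_better)     # ['School C', 'School D']
```

Run over the real comparator 54, the stricter list is the set of schools worth visiting first.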

What if I visited the top 5? How many would I actually have to visit before I had 10 different ideas to improve my school? Probably 2 or 3. I could send my entire leadership team out to a different school each, and report back with the most valuable CPD on leadership we will ever get, because every one of those schools is getting more progress from students much like ours.

To be fair to this head teacher, he can legitimately argue that the comparator schools are not that comparable. His school – your school in this thought experiment – has 139 disadvantaged students in year 11. So, what do you do?

I hope you have decided to visit the national P8 spreadsheet again, and look for all schools with over 100 disadvantaged students whose progress is much better than yours. I'll do that in my next blog post.

I hope this post has shown that the P8 figures are actually the most useful information you have about your leadership – they tell you precisely what your leadership has translated into in terms of student progress for all sorts of groups within your cohort.

Perhaps I haven’t convinced you.

But choosing not to find schools doing better than yours with similar cohorts, choosing not to visit them, choosing not to learn from others with better progress than you, is a damning assessment of your leadership.

If that isn’t your job as a school leader, what’s the point of your leadership?