How should we interpret changes in school exam results?

When the latest GCSE results in a school are higher or lower than in the previous year, it’s natural to ask why.

By Tom Anderson, Head of Research and Statistics

The evidence from research suggests we should be very careful about how we interpret these changes, especially in judging the quality of a school. 

One obvious reason for this is that exam results represent the performance of learners. When judging the quality of a school through the performance of its learners, it is essential to recognise that schools can only ever influence (rather than completely determine) how well their learners perform.

In fact, research evidence consistently suggests that the contribution of schools to how well learners perform is far smaller than the contribution of the learners themselves. For example, recent research published by the Nuffield Foundation showed that school factors explain less than 10% of the differences in exam results across schools. The researchers also suggest this contribution has remained quite stable over time.

That’s not to suggest a lack of ability or effort by schools. It’s simply because, as we should expect, the performance of learners is driven to a greater degree by their own ability and level of effort (as well as the people around them such as their parents and peers). 

So, for the purpose of thinking about what leads to changes in exam results at school level, we could write a simple equation like this: 

changes in school exam results = learners + school + other factors   
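To see what this equation implies in practice, here is a toy calculation. Every number below is invented purely for illustration; none is an estimate of a real effect size.

```python
# Toy illustration of the decomposition above.
# All numbers are invented for the sake of the example.

learner_component = -2.0  # e.g. a less able year group than last year
school_component = +1.0   # e.g. a genuine improvement in the school
other_factors = -0.5      # e.g. chance events around exam time

change_in_results = learner_component + school_component + other_factors
print(change_in_results)  # -1.5: results fall even though the school improved
```

The point of the sketch is that the headline change (-1.5) can have the opposite sign to the school's own contribution (+1.0), which is exactly the risk discussed later in this article.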

In relation to the ‘learners’ component in this equation, it would be reasonable to think that the characteristics, ability and motivation of the learners in a school might vary from one year to the next. Through hard work (and good teaching) all learners can gain new knowledge and skills, so an individual learner’s ability can change over time. But it’s also the case that the overall ability of year groups within a school at the point where they take their exams can vary from year to year.

This example also shows that the different components in our equation can interact: in this case, the school (through its teaching) can change the learners.

Other factors also affect exam results. For example, events can happen to learners just before an exam that affect their performance. It could be that:

  • A learner splits up with their partner the night before an exam. The learner is very upset and does not give the exam their full attention. Their performance is poorer as a result.  
  • A learner has not revised all the material that could appear on the maths exam. However, in the week before the exam they decide to spend a lot of time revising quadratic equations. Unknown to them the exam board has included a question on quadratic equations on the exam which is worth a lot of marks. The learner gets maximum marks. 

These sorts of events are outside of the control of the school and yet they can still impact on learner performance and on changes in school exam results. 

This happens at least partly because the number of learners in a secondary school (‘the sample size’) is quite small compared to, say, the national picture. In 2018, an average secondary school in Wales had around 125 Year 11s, compared to 30,000 or so Year 11s across all secondary schools. The sample size at national level is around 240 times greater than at school level. Statistical theory suggests that there will be much more variation (or change) in exam results from year to year in small groups of learners than in large groups. This is partly because the effects of some of the events discussed above do not cancel out in small groups in the way that they tend to at national level. It is also because the human factors, such as the attitudes and behaviours of teachers and learners, can have a large effect on performance at school level.
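The effect of sample size on year-to-year variation can be sketched with a small simulation. The cohort sizes (125 and 30,000) come from the paragraph above; the 60% pass probability and the assumption that each learner passes independently are simplifications introduced purely for illustration.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def simulated_pass_rate(n_learners, p_pass=0.6):
    """Percentage passing if each learner independently passes with
    probability p_pass -- a deliberate over-simplification."""
    passes = sum(random.random() < p_pass for _ in range(n_learners))
    return 100 * passes / n_learners

def mean_abs_change(n_learners, n_years=200):
    """Average absolute year-on-year change in the simulated pass rate."""
    rates = [simulated_pass_rate(n_learners) for _ in range(n_years)]
    return sum(abs(b - a) for a, b in zip(rates, rates[1:])) / (n_years - 1)

school_mean = mean_abs_change(125)       # a typical Year 11 cohort
national_mean = mean_abs_change(30_000)  # roughly the national cohort

print(f"school cohort:   average swing of {school_mean:.1f} points")
print(f"national cohort: average swing of {national_mean:.1f} points")
```

Under these assumptions the school-sized cohort typically swings by several percentage points a year while the national figure barely moves, consistent with statistical theory: the chance variation in a pass rate shrinks roughly with the square root of the cohort size.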

We can start to see, using this simple equation, the risks involved in judging the quality of a school solely on the basis of a change in results from the previous year. We know that some schools saw decreases in results between 2017 and 2018. For some of those schools, the school component of our equation could have increased (which would be evidence of an improvement in the school). However, this may have been cancelled out by decreases in the learner component or by other factors. If we were to conclude from the decrease in results that the quality of the school had declined, we would be unfair to the school, with potentially adverse effects (for example, on teacher morale).

Given that there are so many variables that might affect results at school level, it’s not surprising that results can be quite different from year to year, and sometimes these changes might be quite large. Just looking at the results in a school from year to year does not tell us what contribution the school made to that change.

So if we want to be as fair as possible to schools when interpreting their performance using exam results, we need to take into account all the factors that influence those results. This may mean using statistics in a more sophisticated way than simply comparing results from one year to the next. Alternative approaches include monitoring trends in school results over several years rather than focusing only on the difference between one year and the next, or using more sophisticated statistical models that try to explain the contribution of different factors to changes in results.
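The trend-monitoring idea can be illustrated with a few lines of code. The results series below is entirely hypothetical, invented to show how a multi-year average reads differently from a single-year change.

```python
# A hypothetical run of results for one school (percentage of learners
# achieving a given benchmark). All numbers are invented for illustration.
results = {2014: 58, 2015: 63, 2016: 57, 2017: 62, 2018: 59}

years = sorted(results)
values = [results[y] for y in years]

# The single-year change that headlines tend to focus on: noisy.
latest_change = values[-1] - values[-2]
print(f"Change from {years[-2]} to {years[-1]}: {latest_change:+} points")

# A three-year rolling average smooths out some year-to-year noise.
rolling = [sum(values[i - 2:i + 1]) / 3 for i in range(2, len(values))]
for year, avg in zip(years[2:], rolling):
    print(f"Three-year average ending {year}: {avg:.1f}")
```

In this invented series the latest single-year change is a fall of 3 points, yet the three-year averages stay within about a point of each other, a much less dramatic picture of the same school.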

In the end, statistics alone can’t provide a full understanding of why changes in results at the level of a school have happened. Improved understanding can only be achieved using a wide range of information about the school and its learners.   

Further reading: 

For those who are interested, Cambridge Assessment and Ofqual have recently published detailed research providing more information on what drives year-on-year changes in GCSE results at school level.

Next steps: 

We will publish experimental statistics this summer showing how qualification results vary at a school level in Wales compared to previous years. These statistics will not identify individual schools but will help schools and other interested users see how changes in results vary across all schools in Wales at a qualification level. We would welcome feedback on those statistics.