An extraordinary year needed a completely new approach

When thinking about changes in attainment gaps this year, we must remember that the national results are different.

 By Tom Anderson, Head of Research and Statistics, Qualifications Wales 

Now that learners in Wales have received their grades, many commentators may seek to draw conclusions from this year’s results. It has been an extraordinary year for learners, schools and colleges, and a completely new approach to assessment.

We said in a blog in July that we believed that national results would be higher, and potentially substantially higher, than in pre-pandemic years. 

On the results days, we published results summaries that confirmed this. Some ‘attainment gaps’ also changed.  

At this stage we don’t know for sure why there have been changes in attainment gaps and it is important that the context of the pandemic is carefully considered, before coming to any conclusions. There could be a few reasons for this, including the different assessment system this year. It is also possible that higher national results can change attainment gaps, and this article shows how that could be the case. 

What are attainment gaps and why might we be interested in them?  

Attainment gaps in qualifications describe how results differ between groups of people, for example, between boys and girls.  

There are lots of good reasons why there is interest in how and why attainment gaps change over time. These reasons include:  

  • Characteristics such as the sex of learners being protected in equalities legislation, as a means of preventing disadvantage and promoting equity and social justice.  
  • Welsh Government policy on education seeking to reduce the attainment gap over the longer term.  
  • Grades being used as an indicator of a learner’s attainment in different subjects. Learners should therefore want their grades to represent their attainment, and not other characteristics, like their sex or their ethnicity. 

Why are national results important to attainment gaps? 

Qualification assessments and grading decisions are usually standardised. Prior to the pandemic national results didn’t change much from one year to the next. In those circumstances, there could be less focus on national results when trying to understand any changes in attainment gaps. 

But when national results are much higher (or lower) than the previous year, more care needs to be taken when understanding the implications of any changes in attainment gaps. This is because the change in the national results could be linked to the change in the gaps. 

It is important to note that, as national results this year are higher than in the past, results for all groups of learners may also be higher than usual, even if attainment gaps have widened. 

How can changing results change an attainment gap? 

Here is a fictional example comparing results and attainment gaps in two scenarios. 

In the table below, the twenty learners (11 males, 9 females) taking a subject have been ordered by how many marks they got in their assessments, with learner 1 getting the most marks and learner 20 getting the fewest marks (the marks aren’t shown in the table to simplify it). Those learners are also recorded as male or female (leaving aside for the purpose of this exercise the complexities around biological sex and gender identity). 

Grades for a subject (* marks learners affected by the grade boundary change in scenario 2)

Learner number   Gender   Scenario 1: grade with       Scenario 2: grade with lower boundaries
                          normal grade boundaries      for grades A, B and C (higher results)
1                F        A                            A
2                F        A                            A
3                M        A                            A
4                M        A                            A
5                F        A                            A
6 *              M        B                            A
7 *              M        B                            A
8                F        B                            B
9                F        B                            B
10               F        B                            B
11 *             M        C                            B
12 *             M        C                            B
13               F        C                            C
14               M        C                            C
15               M        C                            C
16 *             M        D                            C
17 *             F        D                            C
18 *             M        D                            C
19               M        D                            D
20               F        D                            D

In scenario 1, grades are set in the usual way. This results in 5 learners getting a grade A, 5 getting a grade B, 5 a grade C and the remaining 5 a grade D.  

In scenario 2, the number of marks needed to get grades A, B and C is reduced, so results are higher. Because of their positions in the rank order (from 1 to 20), this change affected six males but only one female. Some learners who didn’t achieve as many marks now get a grade A, and other learners also move up from a C to a B or from a D to a C.

The table below summarises the effect of these two scenarios on results across the 20 learners. 

Summary of results and attainment gaps 

 

Grades         Female (F)                Male (M)                  Attainment gap (F-M), percentage points
               Scenario 1   Scenario 2   Scenario 1   Scenario 2   Scenario 1   Scenario 2
% A            33.3         33.3         18.2         36.4         15.2         -3.0
% A or B       66.7         66.7         36.4         54.5         30.3         12.1
% A, B or C    77.8         88.9         72.7         90.9         5.1          -2.0
% A to D       100          100          100          100          0            0
In relation to attainment gaps, it is noticeable that: 

  • Results for males changed a lot because of where they happened to be in the rank order of learners (from 1 to 20) in terms of their attainment in the subject (see the learners highlighted in the grades table). For example, the proportion of males achieving grade A increased from 18.2% in scenario 1 to 36.4% in scenario 2. Overall, six males moved up a grade when the boundaries were lowered, but only one female did.
  • Results for females did not change much. 
  • The attainment gaps between males and females therefore changed. 
  • There were different changes to attainment gaps at different points on the grade scale.  

This is just a simplified example and the real world is much more complex. But the example does show why higher results can lead to changes in attainment gaps.  
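The percentages and gaps in the summary table can be reproduced directly from the fictional grade list. Here is a minimal Python sketch of that calculation (the data structure and function name are illustrative, not part of any published analysis):

```python
# Fictional 20-learner example: (learner number, gender,
# scenario 1 grade with normal boundaries, scenario 2 grade
# with lowered boundaries for A, B and C).
learners = [
    (1, "F", "A", "A"), (2, "F", "A", "A"), (3, "M", "A", "A"),
    (4, "M", "A", "A"), (5, "F", "A", "A"), (6, "M", "B", "A"),
    (7, "M", "B", "A"), (8, "F", "B", "B"), (9, "F", "B", "B"),
    (10, "F", "B", "B"), (11, "M", "C", "B"), (12, "M", "C", "B"),
    (13, "F", "C", "C"), (14, "M", "C", "C"), (15, "M", "C", "C"),
    (16, "M", "D", "C"), (17, "F", "D", "C"), (18, "M", "D", "C"),
    (19, "M", "D", "D"), (20, "F", "D", "D"),
]

def pct(gender, scenario_col, threshold):
    """Percentage of a gender group achieving `threshold` or better.

    scenario_col is 2 (scenario 1 grades) or 3 (scenario 2 grades);
    letter comparison works because 'A' < 'B' < 'C' < 'D'.
    """
    group = [l for l in learners if l[1] == gender]
    achieved = sum(1 for l in group if l[scenario_col] <= threshold)
    return 100 * achieved / len(group)

for threshold in "ABCD":
    for col, label in ((2, "scenario 1"), (3, "scenario 2")):
        f, m = pct("F", col, threshold), pct("M", col, threshold)
        print(f"% A to {threshold}, {label}: "
              f"F {f:.1f}, M {m:.1f}, gap {f - m:.1f} pp")
```

Note that the gaps (for example 15.2 percentage points at grade A in scenario 1) match the summary table because each gap is calculated before rounding.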

The national cohort is split roughly evenly between boys and girls. The effect we have been describing could be more pronounced for attainment gaps where the number of learners in each group differs substantially (e.g. when comparing those eligible for free school meals with those not eligible) or where the number of learners in a group is small (as is the case for many ethnic minority groups in Wales).
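The arithmetic behind that sensitivity is simple: in a group of n learners, a single learner's grade changing moves the group's headline percentage by 100/n percentage points, so small groups produce volatile gaps. A quick illustration (the group sizes are chosen arbitrarily):

```python
# One learner changing grade shifts a group's cumulative percentage
# by 100/n percentage points, where n is the size of the group.
for n in (10, 50, 500, 5000):
    shift = 100 / n
    print(f"group of {n}: one grade change moves the rate by {shift:.2f} pp")
```

For a group of 10 learners a single grade change moves that group's rate by 10 percentage points; for a group of 5,000 it moves it by 0.02.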

Does an increase in an attainment gap show a bias?  

An increasing attainment gap would be problematic if it was caused by factors that were not to do with the attainment of learners. That might mean that grades did not just represent attainment. 

For example, if there was an increasing attainment gap by gender that was not explained by the different attainment of boys and girls, this would suggest a bias against the group with the lower results.  

An increasing attainment gap is not in itself proof that a bias exists; any changes could be related to national results.

In addition, there are other factors that could be causing changes in attainment gaps:

  • Whether there has been a large change in entry leading to a change in the ability of learners taking the qualifications. 
  • Whether there has been a real change in the attainment of learners that has impacted differently on different groups - in the current circumstances, where education has been disrupted in different ways for different people, this could be more likely to happen.  
  • It could be an effect of the assessment arrangements if these have changed – the alternative assessment arrangements in summer 2021 varied more at a centre level and allowed different sorts of evidence to be taken into account. This system of assessment might suit some groups of learners better than others.

What happens next? 

This summer we have seen national results increase and attainment gaps change.  

It will be impossible to say for sure what has caused the changes in the gaps. We do not have good data on how individuals have been impacted by the pandemic or exactly how the detail of the assessment approach has varied across schools and colleges.

To be confident about what had caused the differences in attainment gaps compared with pre-pandemic years, we would have needed the pandemic not to have happened, the usual exams to have gone ahead, and the grades to have been awarded in the usual way. Statisticians call such alternative scenarios a counterfactual – they run ‘counter’ to the reality of what actually happened; an alternative reality.

Then we would be able to see how similar the grades awarded by centres this year were to the grades that the same learners achieved through exams. This would give us a way of deciding if the grades for different groups had been affected by the circumstances. 

However, the pandemic has happened and the exams have not. So we will not be able to make that comparison. 

A weaker sort of (model-based) analysis is possible. This would look at the relationships between the prior attainment of learners (before the pandemic), their characteristics and their grades to see if there is any suggestion that characteristics like sex are playing more of a role in changes in results than in previous years, once evidence of prior attainment is taken into account. 
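To sketch the idea (not the actual planned methodology), we can reuse the fictional 20-learner example, treating rank order as a stand-in for prior attainment and splitting it into three illustrative bands, which is purely an assumption for this sketch. Comparing the gap at grade B or above before and after conditioning on the band shows how controlling for prior attainment can shrink an apparent gap:

```python
# Sketch of the idea behind a model-based analysis: compare the gender
# gap before and after conditioning on prior attainment. The fictional
# learners' rank order stands in for prior attainment, split into three
# illustrative bands. Tuples: (learner number, gender, scenario 2 grade).
learners = [
    (1, "F", "A"), (2, "F", "A"), (3, "M", "A"), (4, "M", "A"),
    (5, "F", "A"), (6, "M", "A"), (7, "M", "A"), (8, "F", "B"),
    (9, "F", "B"), (10, "F", "B"), (11, "M", "B"), (12, "M", "B"),
    (13, "F", "C"), (14, "M", "C"), (15, "M", "C"), (16, "M", "C"),
    (17, "F", "C"), (18, "M", "C"), (19, "M", "D"), (20, "F", "D"),
]

def band(num):
    """Illustrative prior-attainment band based on rank order."""
    return "high" if num <= 7 else "middle" if num <= 14 else "low"

def pct_b_or_above(group):
    """Percentage of a group achieving grade B or better."""
    return 100 * sum(1 for _, _, g in group if g <= "B") / len(group)

females = [l for l in learners if l[1] == "F"]
males = [l for l in learners if l[1] == "M"]

raw_gap = pct_b_or_above(females) - pct_b_or_above(males)
print(f"Raw gap at B or above: {raw_gap:.1f} pp")

for b in ("high", "middle", "low"):
    in_band_f = [l for l in females if band(l[0]) == b]
    in_band_m = [l for l in males if band(l[0]) == b]
    gap = pct_b_or_above(in_band_f) - pct_b_or_above(in_band_m)
    print(f"Gap within {b} prior-attainment band: {gap:.1f} pp")
```

In this toy example the raw gap at grade B or above is 12.1 percentage points, but within each band it is small or zero: the apparent gap here is largely explained by where learners of each gender sit in the attainment distribution, which is the kind of question the model-based analysis asks.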

All models are simplifications of reality and have uncertainty associated with the results, so the model-based analysis cannot provide conclusive proof of what has caused a change, but it can provide a best guess to support further discussion. We are planning to do this analysis and publish it as Official Statistics in October.