
Exploring gaps in teacher judgements across different groups and the implications for HE admissions

By Blog Editor, on 31 January 2024

By Oliver Cassagneau-Francis

This blog was originally published on ADR UK (Administrative Data Research UK)’s website [link to original post].

In this blog post, CEPEO research fellow Oliver Cassagneau-Francis describes how he and the project team (CEPEO director Lindsey Macmillan, deputy director Gill Wyness and affiliate Richard Murphy) will use the Grading and Admissions Data for England dataset to study differences in predicted grades and compare the resulting outcomes for different groups of students. This project is funded through an ADR UK Fellowship.

Students from more advantaged backgrounds are three times more likely to go to university than their peers from less advantaged backgrounds, and they are also more likely to go to highly selective courses. These courses often lead to better careers. Recent work has shown that for students with the same level of academic attainment, the quality of the course they enrol into varies across socio-economic groups. In particular, students from more advantaged backgrounds enrol into more selective university courses than students from less advantaged backgrounds who achieve the same grades at school. This is true across the spectrum of student attainment.

A likely driver of these differences is the important role of teacher-predicted grades in UK university admissions. Students generally apply to university and accept their places before sitting their exams, relying on predictions of their grades made by their teachers (henceforth “predicted grades” or “predictions”). These are generally inaccurate.


Predicted grades became more complex during the pandemic

Unpacking UCAS predicted grades is a difficult task. Teachers are asked to be optimistic in their predictions, so it is unclear whether achieved grades are the correct comparison for predicted grades. However, during the Covid-19 pandemic in 2020, exams were cancelled and teachers, having already given the predicted grades needed for university applications, were asked to give their students grades that would become their actual A-level results. These were called centre-assessment grades. Note that for these grades, teachers were asked to provide a realistic judgement of the grade each student would most likely have achieved if they had taken their exam(s) in a given subject and completed any non-exam assessment; so they lack the element of optimism built into UCAS predictions.

Therefore, we have two groups of students with different information on each: a group who have predicted grades and actual grades (the pre-2020 cohorts); and a group for whom we have predicted grades and centre-assessment grades (the 2020 cohort). By comparing UCAS predictions with centre-assessment grades and with actual grades, we can learn about how teachers make predictions.

In addition, in 2020 teachers were asked to rank students within the centre-assessment grades, meaning it’s possible to see which students just achieved a given grade and which ones just missed out.

How administrative data can provide new insights

For this project, we will use this unique information to study how teacher predictions differ across social groups (e.g. by socio-economic status, gender, or ethnicity). We will also study the impact of receiving different grades – teacher-predicted grades for university applications, and centre-assessment grades – on outcomes, such as which university and course students went on to enrol in.

To do this, we will use the Grading and Admissions Data for England (GRADE) dataset. This contains de-identified data on students from:

  • the Office of Qualifications and Examinations Regulation (Ofqual)
  • the Department for Education (DfE)
  • the Universities and Colleges Admissions Service (UCAS).


The data from these different sources has been linked together, de-identified and made available to accredited researchers. The dataset is very comprehensive, covering nearly all students in school in England who took their GCSEs or A-levels in 2018 or 2019, as well as the 2020 cohort whose exams were cancelled that summer. This project will focus on the A-level students from these cohorts.

Measuring differences in teacher judgements across groups

In this project, we will look carefully at students placed either just above or just below a grade boundary in 2020, using centre-assessment grades and teacher rankings. If the composition of these two groups looks different – for example, if women, or students from ethnic minority or lower socio-economic backgrounds, are more often found at the top of the B grade than at the bottom of the A grade – this will suggest that teachers are judging students from different groups more or less generously. We will expand this analysis to look at specific subjects (e.g. Maths and English) and specific grade boundaries (e.g. A*/A). We can also perform a similar exercise using exam grades and marks (pre-2020), allowing us to compare the distributions of students around grade boundaries determined by exams with those determined by teacher judgements. Ofqual carried out their own analysis of the centre-assessment grades, finding limited evidence that student characteristics influenced grades, and they release equalities analyses for each round of exams.
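To make the boundary comparison concrete, here is a minimal sketch of how such a check might look, assuming one row of data per student and placeholder columns (centre_id, subject, cag, rank_in_grade, female). These names are illustrative only and do not reflect the actual GRADE schema or the project's code.

```python
# A minimal illustrative sketch, not the project's actual analysis. All
# column names (centre_id, subject, cag, rank_in_grade, female) are assumed
# placeholders; the real GRADE schema will differ.
import pandas as pd


def boundary_composition(df: pd.DataFrame, lower: str = "B", upper: str = "A",
                         group_col: str = "female") -> pd.DataFrame:
    """Compare the share of `group_col` among students placed just below an
    upper grade (best-ranked within the lower grade at their centre) with the
    share placed just above it (worst-ranked within the upper grade)."""
    rows = []
    for (_centre, _subject), g in df.groupby(["centre_id", "subject"]):
        below = g[g["cag"] == lower]
        above = g[g["cag"] == upper]
        if below.empty or above.empty:
            continue
        # Rank 1 = strongest student within a grade at this centre, so the
        # lowest rank number in grade B is the student closest to an A.
        just_below = below.loc[below["rank_in_grade"].idxmin()]
        just_above = above.loc[above["rank_in_grade"].idxmax()]
        rows.append({"side": "just_below", group_col: just_below[group_col]})
        rows.append({"side": "just_above", group_col: just_above[group_col]})
    # Systematic differences in composition either side of the boundary would
    # hint at group-specific patterns in teachers' judgements.
    return pd.DataFrame(rows).groupby("side")[group_col].mean().to_frame("share")
```

In the real data the comparison would be run at scale and with appropriate statistical tests, but the logic is the same: systematic differences in who sits just below rather than just above a boundary point to group-specific patterns in teacher judgements.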

In the second part of the project, we will again look closely at students just on either side of a grade boundary and compare their university enrolments and other outcomes. These students are ranked very closely by their teachers but look quite different to universities, as they received different centre-assessment grades. It will also be interesting to compare the outcomes of students who were ranked very closely by teachers but given different predictions pre-Covid. By comparing their outcomes, we will be able to isolate the impact of receiving an A over a B (both at the predicted-grade and at the actual-grade level), for example, on students’ university pathways.
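As a rough illustration of this second step, the sketch below computes the gap in an outcome between students who just received the higher grade and those who just missed it. The `side` labels follow the boundary sketch above, and `enrolled_selective` is a hypothetical 0/1 flag, not a field in the GRADE dataset.

```python
# Again a hedged sketch under assumed names: `side` follows the boundary
# sketch above and `enrolled_selective` is a hypothetical 0/1 outcome flag,
# not a field in the GRADE dataset.
import pandas as pd


def outcome_gap(boundary_sample: pd.DataFrame,
                outcome: str = "enrolled_selective") -> float:
    """Difference in mean outcome between students who just received the
    higher grade and those who just missed it."""
    means = boundary_sample.groupby("side")[outcome].mean()
    return means["just_above"] - means["just_below"]


# Toy usage: in this made-up sample the just-above group enrols in a
# selective course more often, giving a gap of 0.5.
toy = pd.DataFrame({
    "side": ["just_above", "just_above", "just_below", "just_below"],
    "enrolled_selective": [1, 1, 1, 0],
})
print(outcome_gap(toy))  # 0.5
```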

Teachers are one of the main drivers of students’ success at GCSEs and A-levels, and that success goes on to shape future outcomes. Understanding whether there are discrepancies in teachers’ judgements in favour of certain groups over others, resulting in differences in school attainment and university choices, will help us to understand the implications for social mobility and equity.
