Covid-19 and education: Why have we waited until now to improve the accuracy of predicted grades?
By Blog Editor, IOE Digital, on 3 April 2020
By Gill Wyness
For students expecting to take their A-levels and BTECs this summer, the impact of COVID-19 will be profound. Ofqual confirmed today that, instead of sitting the formal examinations they were preparing for, school leavers will be given a set of grades based on teacher judgement, which will in turn form the basis of their university applications. This plan has attracted a fair amount of criticism, with fears that the system may be biased and might lead to certain groups of students missing out on a university place because of a bad prediction.
But it is worth noting that this is already how students apply to university, so it is perhaps surprising that there is suddenly such widespread resistance to the idea of predicted grades. However, my recent study with Richard Murphy (University of Texas at Austin) suggests that fears about the accuracy of these predictions may be well-founded.
The UK’s system of university applications has the peculiar feature that students apply to university on the basis of predicted rather than actual exam grades. Only after they have applied, received offers, and shortlisted their two preferred courses do students go on to sit their A-level exams. If the student achieves grades in line with the offer (i.e. the grade requirement) of their chosen course, the course is bound to accept the student and the student is bound to go. If the student misses their offer (i.e. fails to achieve the grades the course required), the course may still accept them, or they may need to enter a process known as ‘clearing’, in which unplaced students apply to courses that still have vacancies. In short, A-level predictions are a very important feature of the university admissions system.
Surprisingly, then, little is known about how accurate these predictions are, largely due to data constraints. Our study uses aggregate data on university applications to study the accuracy of predicted grades and to examine where students with different predictions end up. Our results show that only 16% of applicants achieve the A-level grade points that they were predicted to achieve, based on their best 3 A-levels. And the vast majority of applicants are over-predicted – i.e. their grades are predicted to be higher than they actually go on to achieve. This is in line with other related work (e.g. Dhillon, 2005).
We also find evidence of socio-economic (SES) gaps in predicted grades: among the highest achieving students, those from disadvantaged backgrounds receive predicted grades that are slightly lower than those from more advantaged backgrounds. This may have consequences for social mobility since under-predicted students are more likely to be overqualified for the course they eventually enrol in.
One potential explanation for the inaccuracy of these grades is that, to date, the guidelines given to teachers have been lacking. Information on the UCAS website advises that “a predicted grade is the grade of qualification an applicant’s school or college believes they’re likely to achieve in positive circumstances”. UCAS also suggest that predicted grades should be “aspirational but achievable”, but that ‘inflated’ predictions are “not without risk, and could significantly disadvantage [applicants]”.
These guidelines are confusing at best, and it may not be surprising that predictions are typically inaccurate. Moreover, teachers may be using predictions as an incentive, a target for students to try to meet, rather than as a true picture of their ability. This is one explanation for the high degree of over-prediction we observe.
So, what does this mean for the students who will receive estimated grades this year (and who unlike previous students, will never learn what their grades would really have been on the day)?
If we see the same pattern of inflated predictions this year, there could be a significant increase in the number of students qualifying for their chosen courses (i.e. students who would otherwise have missed their offers). This could prove tricky for university admissions staff, who may decide to apply additional criteria when selecting students, and could even lead to universities instigating their own entrance tests.
However, a crucial difference this year is that grade predictions are already under much more scrutiny than in previous years. Ofqual has already set out how grades should be judged, and exam boards will be required to “put all centre assessment grades through a process of standardisation using a model developed by Ofqual”.
So we might actually end up with a fairer system than the one we have been using for the last 50 years, which raises the question of why we have waited until now to improve the accuracy of predicted grades, given just how high-stakes they are.
Dr. Gill Wyness is Deputy Director of CEPEO and a Research Associate with the Centre for Economic Performance at LSE.
This blog was first posted on the CEPEO website.
2 Responses to “Covid-19 and education: Why have we waited until now to improve the accuracy of predicted grades?”
1. @TeacherToolkit wrote on 3 April 2020:
With a crisis springs opportunity; fingers crossed for a new model of assessment. https://www.teachertoolkit.co.uk/2020/04/03/just-great-teaching-exams/

2. […] have we waited until now to improve the accuracy of predicted grades? This is a great question posed by UCL. The UK’s system of university applications has the peculiar feature that students apply to […]