
Centre for Education Policy and Equalising Opportunities (CEPEO)


We create research to improve the education system and equalise opportunities for all.


Predicted grades – what do we know, and why does it matter?

By IOE Editor, on 11 August 2020

By Dr. Gill Wyness

Whose grades are being predicted?

Predicted grades are a common feature of the English education system, with teachers’ predictions of pupils’ A level performance forming the basis of university applications each year.

What’s different this year?

The Covid-19 pandemic has put these predictions under the spotlight. The cancellation of exams means that all year 11 and year 13 pupils will instead receive ‘calculated grades’ based on teacher predictions.

How well do teachers predict grades?

Teachers’ predicted grades have been shown to be inaccurate, but the majority of inaccurate grades are overpredicted – in other words, too high.

  • There is limited research on the impact of predicted grades, though studies of prediction accuracy by individual grade (e.g. whether a grade predicted as an A was actually achieved as an A) by Delap (1994) and Everett and Papageorgiou (2011) showed that around half of all predictions were accurate, while 42-44% were overpredicted by at least one grade and only 7-11% of all predicted grades were underpredicted.
  • Studies of prediction accuracy according to a student’s best three A levels show even higher rates of inaccuracy (unsurprisingly, since it is harder to predict all three A levels correctly). For example, Wyness and Murphy find that only 16% of students received accurate predictions for all three grades, with 75% overpredicted and just 8% underpredicted (a rough illustration of how the individual-grade and best-three figures relate follows below).
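
As a rough back-of-the-envelope check (our illustration, not a calculation from the studies above): if each individual grade is predicted correctly about half the time, and prediction errors were independent across a pupil’s three subjects, we would expect only around one in eight pupils to have all three grades predicted correctly – the same order of magnitude as the 16% reported by Wyness and Murphy. A minimal sketch of that calculation, under the independence assumption:

```python
# Back-of-the-envelope check (illustrative only): how per-subject accuracy
# translates into "all three A levels predicted correctly", assuming
# (our assumption) that prediction errors are independent across subjects.
per_grade_accuracy = 0.50                    # roughly half of individual grades are predicted correctly
all_three_correct = per_grade_accuracy ** 3  # probability that all three grades are right

print(f"Expected share with all three grades correct: {all_three_correct:.1%}")
# ~12.5%, of the same order as the 16% reported by Wyness and Murphy (2020).
```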

Who loses out?

Lower-achieving students tend to be overpredicted; higher-achieving students tend to be more accurately predicted.

  • All studies find that higher grades are more accurately predicted than lower grades. This is likely an artefact of teachers’ tendency to overpredict combined with ceiling effects: a pupil predicted the top grade cannot be overpredicted, so their prediction can only be accurate or too low.
  • Thus, AAA students are likely to be accurately predicted (or underpredicted) whereas CCC students are more likely to be overpredicted.
  • It is therefore essential to take into account the achievement level of the student when analysing prediction accuracy by student characteristics. For example, students from low socio-economic status (SES) backgrounds tend to be lower-achieving on average, and so tend to be overpredicted, while high SES students tend to be more accurately predicted (as shown by Wyness and Murphy).

So are teachers biased?

There is little evidence of bias in prediction accuracy according to student characteristics.

  • The majority of the studies above show no compelling evidence of bias in teacher prediction by student characteristics, once achievement is taken into account.
  • However, Wyness and Murphy do show that, among high achievers, state school students receive slightly less generous predictions than those in independent schools, and that those from low SES backgrounds receive slightly less generous predictions than those from high SES backgrounds.
  • This was not a causal finding, and other factors could be driving this apparent bias.

What’s going wrong, then?

Predicting student grades is a near-impossible task for teachers

  • Work by Anders et al (2020) highlighted the difficulty of predicting grades accurately. In this study, the authors attempted to predict A level grades using detailed administrative data on student prior achievement (GCSE) and both statistical and machine learning techniques. Their models could correctly predict 1 in 4 pupils across their best three A levels, versus 1 in 5 for teacher predictions (based on Murphy and Wyness, 2020).
  • Their predictions were incorrect for 74% of pupils (an illustrative sketch of this kind of data-driven prediction follows this list).
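
To give a flavour of what such data-driven prediction involves, the sketch below fits a simple classifier to synthetic pupils whose A level points depend noisily on their GCSE points. It is a minimal illustration built entirely on invented numbers – Anders et al used detailed administrative data and a range of statistical and machine learning models, none of which is reproduced here – but it shows how exact-grade accuracy can remain low even when prior attainment is informative.

```python
# Toy sketch (not Anders et al's data or models): predict an A level grade from
# prior GCSE attainment with a simple classifier, to illustrate why even
# data-driven predictions are often wrong at the level of the individual pupil.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pupils = 5_000

# Synthetic pupils (all numbers invented): average GCSE points plus noise drive A level points.
gcse_points = rng.normal(6.0, 1.2, size=n_pupils)                         # average GCSE grade on a 9-point scale
unobserved = rng.normal(0.0, 1.0, size=n_pupils)                          # everything GCSEs do not capture
a_level_points = np.clip(np.round(0.8 * gcse_points + unobserved), 1, 6)  # 1 = E ... 6 = A*

X_train, X_test, y_train, y_test = train_test_split(
    gcse_points.reshape(-1, 1), a_level_points, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Share of held-out pupils whose (single) grade is predicted exactly right.
print(f"Exact-grade accuracy: {model.score(X_test, y_test):.0%}")
```

Even in this deliberately simple one-subject world, a sizeable share of pupils end up above or below the grade the model expects, because the component that GCSEs do not capture still matters.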

That’s not great. What else do we know?

Certain pupil types appear harder to predict than others

  • Anders et al also found that high achieving pupils in comprehensive schools were more likely to be underpredicted by their models, compared to their grammar and private school counterparts. This highlights the difficult task that teachers face each year, particularly for pupils with more variable trajectories from GCSE to A level.

Can’t we remove the teacher and calculate grades based on past performance?

The ‘calculated grades’ for 2020 are not just based on teacher predictions.

  • Schools have provided predicted grades and pupil rankings (which are known to be easier to produce than predicted grades).
  • These predicted grades may also be more accurate than in previous years, since teachers were given better guidance on how to predict and what information to use.
  • Ofqual will standardise teachers’ predicted grades according to the centre’s historical performance. This will reduce the tendency towards overprediction that all studies of predicted grades have observed. For example, if a school has historically awarded a B to 60% of its pupils on average, it will be expected to do so again this year, and grades will be downgraded where necessary to reflect this.
  • But teachers’ rankings will be preserved so that pupils cannot “change places” after the standardisation (a toy sketch of this rank-based allocation follows this list).
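
A minimal sketch of the rank-based idea – our simplification for illustration, not Ofqual’s actual standardisation model – is to take the centre’s historical grade distribution and hand out this year’s grades down the teacher’s ranking, so that rankings are preserved while the cohort’s grade distribution matches the school’s history:

```python
# Toy sketch of rank-based standardisation (our simplification, not Ofqual's model):
# walk down the teacher's ranking and allocate grades so that the cohort's
# distribution matches the centre's historical grade shares.
def standardise(ranked_pupils, historical_shares):
    """ranked_pupils: pupils ordered best first.
    historical_shares: grade -> historical share (best grade first), summing to 1."""
    n = len(ranked_pupils)
    grades = list(historical_shares)
    # Cumulative rank boundaries: the last rank that still receives each grade.
    boundaries, cumulative = [], 0.0
    for grade in grades:
        cumulative += historical_shares[grade]
        boundaries.append(round(cumulative * n))
    result, grade_index = {}, 0
    for rank, pupil in enumerate(ranked_pupils, start=1):
        while rank > boundaries[grade_index]:
            grade_index += 1
        result[pupil] = grades[grade_index]
    return result

# A school that has historically awarded 20% As, 60% Bs and 20% Cs:
cohort = ["pupil_1", "pupil_2", "pupil_3", "pupil_4", "pupil_5"]  # teacher's ranking, best first
print(standardise(cohort, {"A": 0.2, "B": 0.6, "C": 0.2}))
# {'pupil_1': 'A', 'pupil_2': 'B', 'pupil_3': 'B', 'pupil_4': 'B', 'pupil_5': 'C'}
```

Under a scheme like this, a pupil’s final grade depends only on their rank and the school’s history, not on the grade their teacher predicted for them – which is why an unusually strong pupil at a historically low-performing school can be pulled down, as discussed below.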

Scotland has promised to re-think standardising results based on the school. What will happen in England?

  • It’s a controversial point. Our paper shows that high-achieving comprehensive school pupils are more likely to be underpredicted compared to their grammar and private school counterparts.
  • Among high achievers, where underprediction is most common, the team found 23% of comprehensive school pupils were underpredicted by two or more grades, compared with just 11% of grammar and private school pupils.

What if a student who does less well earlier goes on to study really hard? Isn’t this unfair?

“Outlier” students and disadvantaged students could be disproportionately affected by the standardisation process

  • The standardisation process could affect outlier pupils more than others.
  • For example, an AAA student at a historically low-performing school could be downgraded as a result of standardisation.
  • And a DDD student at a high-performing school could be upgraded.
  • This could serve to entrench existing socio-economic gaps in pupil attainment, to the extent that low SES students are more likely to attend historically low-performing schools and high SES students are more likely to attend high-performing schools.

So what should we do about it?

The cancellation of exams this year has highlighted that the system of using predicted grades as a key part of the university application process urgently needs reform.

  • The research above highlights that predicting student grades is a near-impossible task, even when teachers are removed from the equation and detailed data on pupils’ past achievement are used instead.
  • A better solution would be to reform the university applications system and allow students to apply to university after they have sat their exams.

References
Gill, T., & Benton, T. (2015). The accuracy of forecast grades for OCR A levels in June 2014. Statistics Report Series No. 90. Cambridge, UK: Cambridge Assessment.
Delap, M. R. (1994). An investigation into the accuracy of A‐level predicted grades. Educational Research, 36(2), 135-148.
Everett & Papageorgiou (2011). Investigating the accuracy of predicted A level grades as part of the 2009 UCAS admission process. BIS Research Paper No. 37. London: Department for Business, Innovation and Skills.
Murphy, R., & Wyness, G. (2020). Minority report: The impact of predicted grades on university admissions of disadvantaged groups. Education Economics, 1-18.
UCAS (2015). Factors associated with predicted and achieved A level attainment. Gloucestershire, UK: Universities and Colleges Admissions Service.
