
Institute of Education Blog


Expert opinion from academics at the UCL Institute of Education


Predicted grades – what do we know, and why does it matter?

By Blog Editor, IOE Digital, on 11 August 2020

Gill Wyness.

Whose grades are being predicted?

Predicted grades are a common feature of the English education system, with teachers’ predictions of pupils’ A level performance forming the basis of university applications each year.

What’s different this year?

The Covid-19 pandemic has put these predictions under the spotlight. The cancellation of exams means that all year 11 and year 13 pupils will instead receive ‘calculated grades’ based on teacher predictions.

How well do teachers predict grades?

Teachers’ predicted grades have been shown to be inaccurate, but the majority of inaccurate predictions are over-predictions – in other words, too high.

  • There is limited research on the impact of predicted grades, though studies of prediction accuracy by individual grade (e.g. how many predicted A’s turned out to be A’s) by Delap (1994) and Everett and Papageorgiou (2011) showed around half of all predictions were accurate, while 42-44% were over-predicted by at least one grade, and only 7-11% of all predicted grades were under-predicted.
  • Studies of prediction accuracy according to a student’s best three A levels show even higher rates of inaccuracy (unsurprisingly, since it is harder to predict all three A levels correctly). For example, Wyness and Murphy find that only 16% of students received accurate predictions for all three, with 75% overpredicted and just 8% underpredicted.
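The best-three-A-levels accuracy measure can be illustrated with a short sketch. This is purely illustrative – the grading scale, the comparison rule, and the cohort data are hypothetical, not taken from the studies cited:

```python
# Illustrative sketch: measuring prediction accuracy over a pupil's best
# three A levels. Grades, comparison rule, and cohort are hypothetical.

# Map letter grades to points so predicted and achieved sets can be compared.
POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def classify(predicted, achieved):
    """Compare predicted vs achieved grades across a pupil's best three subjects."""
    pred = sum(POINTS[g] for g in predicted)
    real = sum(POINTS[g] for g in achieved)
    if pred > real:
        return "over"
    if pred < real:
        return "under"
    return "accurate"

# Hypothetical cohort: (predicted, achieved) best-three grades per pupil.
cohort = [
    ("AAB", "ABB"),   # over-predicted by one grade
    ("BBC", "BBC"),   # accurate
    ("CCC", "BCC"),   # under-predicted
    ("ABB", "BCC"),   # over-predicted
]

counts = {"over": 0, "under": 0, "accurate": 0}
for predicted, achieved in cohort:
    counts[classify(predicted, achieved)] += 1

print(counts)  # {'over': 2, 'under': 1, 'accurate': 1}
```

Note that requiring all three grades to match is a much harder test than matching any single grade, which is why accuracy rates on this measure are so much lower.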

Who loses out?

Lower achieving students tend to be overpredicted; higher achieving students tend to be more accurately predicted.

  • All studies find that higher grades are more accurately predicted than lower grades. This is likely an artefact of teachers’ tendency to over-predict combined with ceiling effects: over-prediction is impossible at the top grade, so predictions for the highest achievers can only be accurate or under-predicted.
  • Thus, AAA students are likely to be accurately predicted (or underpredicted) whereas CCC students are more likely to be overpredicted.
  • It is therefore essential to take into account the achievement level of the student when analysing prediction accuracy by student characteristics. For example, low SES students tend to be lower achieving, on average. Therefore, low SES students tend to be overpredicted on average, while high SES students tend to be more accurately predicted (this is shown by Wyness and Murphy).

So are teachers biased?

There is little evidence of bias in prediction accuracy according to student characteristics.

  • The majority of the studies above show no compelling evidence of bias in teacher prediction by student characteristics, once achievement is taken into account.
  • However, Wyness and Murphy show that among high achievers, state school students receive slightly less generous predictions than those in independent schools, and that those from low SES backgrounds receive slightly less generous predictions than those from high SES backgrounds.
  • This was not a causal finding, and other factors could be driving this apparent bias.

What’s going wrong, then?

Predicting student grades is a near-impossible task for teachers.

  • Work by Anders et al (2020) highlighted the difficulty of predicting grades accurately. In this study, the authors attempted to predict A level grades using detailed administrative data on student prior achievement (GCSE) and both statistical and machine learning techniques. Their models could correctly predict all three of a pupil’s best three A levels for only 1 in 4 pupils, versus 1 in 5 for teacher predictions (based on Murphy and Wyness, 2020).
  • Their predictions were incorrect for 74% of pupils.

That’s not great. What else do we know?

Certain pupil types appear harder to predict than others.

  • Anders et al also found that high achieving pupils in comprehensive schools were more likely to be underpredicted by their models, compared to their grammar and private school counterparts. This highlights the difficult task that teachers face each year, particularly for pupils with more variable trajectories from GCSE to A level.

Can’t we remove the teacher and calculate grades based on past performance?

The ‘calculated grades’ for 2020 are not just based on teacher predictions.

  • Schools have provided predicted grades and pupil rankings (which are known to be easier to produce than predicted grades).
  • These predicted grades may also be more accurate than in previous years, since teachers were given better guidelines on how to predict and what information to use.
  • Ofqual will standardise teachers’ predicted grades according to each centre’s historical performance. This will reduce the tendency towards over-prediction that all studies of predicted grades have observed. For example, if 60% of a school’s grades have historically been Bs, it will be expected to award a similar share this year, and grades will be adjusted downwards to reflect this.
  • But teachers’ rankings will be preserved, so that pupils cannot “change places” after the standardisation.
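The mechanism described above – rankings preserved, grades re-awarded to fit the centre’s historical distribution – can be sketched in a few lines. This is a deliberately simplified toy model, not Ofqual’s actual algorithm; the pupil names and historical shares are invented:

```python
# Simplified sketch of rank-preserving standardisation (NOT Ofqual's actual
# model): pupils keep their teacher-assigned rank order, but grades are
# re-awarded to match the centre's historical grade distribution.

def standardise(ranked_pupils, historical_shares, grades=("A", "B", "C", "D", "E")):
    """ranked_pupils: best first. historical_shares: fraction per grade, same order."""
    n = len(ranked_pupils)
    awarded = {}
    cutoff = 0.0
    start = 0
    for grade, share in zip(grades, historical_shares):
        cutoff += share
        end = round(cutoff * n)          # how many pupils fall at or above this grade
        for pupil in ranked_pupils[start:end]:
            awarded[pupil] = grade       # rank order decides who gets which grade
        start = end
    return awarded

# Hypothetical centre: historically 20% A, 60% B, 20% C.
pupils = ["p1", "p2", "p3", "p4", "p5"]  # teacher's rank order, best first
print(standardise(pupils, [0.2, 0.6, 0.2, 0.0, 0.0]))
# {'p1': 'A', 'p2': 'B', 'p3': 'B', 'p4': 'B', 'p5': 'C'}
```

The key property is that no pupil can “change places”: whatever distribution is imposed, the teacher’s ranking fully determines who lands in each grade band.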

Scotland has promised to rethink standardising results based on the school. What will happen in England?

  • It’s a controversial point. Our paper shows that high-achieving comprehensive school pupils are more likely to be under-predicted compared to their grammar and private school counterparts.
  • Among high achievers, where under-prediction is most common, the team found 23% of comprehensive school pupils were underpredicted by two or more grades compared to just 11% of grammar and private school pupils.

What if a student who does less well earlier goes on to study really hard? Isn’t this unfair?

“Outlier” students and disadvantaged students could potentially be disproportionately affected by the standardisation process.

  • The standardisation process could affect outlier pupils more than others.
  • For example, an AAA student at a historically low performing school could be downgraded as a result of standardisation.
  • And a DDD student at a high performing school could be upgraded.
  • This could serve to entrench existing socio-economic gaps in pupil attainment to the extent that low SES students are more likely to attend historically low performing schools, and high SES students are more likely to attend high performing schools.

So what should we do about it?

The cancellation of exams this year has highlighted that the system of using predicted grades as a key part of the university application process urgently needs reform.

  • The research above highlights that predicting student grades is a near-impossible task, even when teachers are removed from the equation and detailed data on pupils’ past achievement is used instead.
  • A better solution would be to reform the university applications system and allow students to apply to university after they have sat their exams.


Photo by Pete via Creative Commons 


3 Responses to “Predicted grades – what do we know, and why does it matter?”

  • 1
    Tim Mercer wrote on 11 August 2020:

    Thanks for this. From the press coverage you would have thought that teachers just guessed and put in any old grade depending on whether they liked the student or not.

    I think there are a couple of other points. The process for calculating CAGs this year was much more rigorous and data-grounded than UCAS grade predictions. We used 5 assessment data points, each was weighted to generate a final percentage, and then we ranked students and overlaid SEN allowances. The predictions were made in April after Jan/Feb mocks, only 2 months from final exams.

    The process for UCAS grades is a departmental discussion based largely on Y12 AS or mock exam performance. The predictions are made in September, when students have a lot more development to do. The guidance is to have ‘high expectations’ and hence to give the student the benefit of the doubt, which will obviously lead to over-predicting.

    Comparing teacher predictions for Covid impacted exams vs UCAS predictions is an apples and oranges comparison, the context and purpose of each is very different.

  • 2
    Charles Jackson wrote on 16 August 2020:

    Given the evidence that grades have been lowered more for some types of school/college than others – notably less for independent schools and more for FE/sixth-form colleges – there is also the question of what evidence there is that some schools’ predictions are more accurate than others. The implication of the system used is that teachers in some types of school/college over-predict more than those in others. Following the comment above, we can reasonably assume that predictions sent to OFQUAL were likely to be more rigorously produced than those typically used on UCAS forms, but we should continue to bear in mind that there is considerable evidence that UCAS predictions are often over-optimistic.

    The evidence quoted above “Among high achievers, where under-prediction is most common, the team found 23% of comprehensive school pupils were underpredicted by two or more grades compared to just 11% of grammar and private school pupils,” also suggests the reverse may be the case.

    Taken together, these two points seem to me to question the fairness of OFQUAL’s process. In particular, without evidence that some types of school were more inaccurate in their predictions, it is not surprising that the fairness of the results is being challenged and that students at certain types of school have been disadvantaged by the process.

  • 3
    A-Levels – Why 2020 destroyed futures | FraserIRL wrote on 16 August 2020:

    […] Now I’m not saying teachers are much better at predicting grades, but they know more of a background of the student and know their learning situation personally compared to this magical being. […]
