IOE Blog

Expert opinion from IOE, UCL's Faculty of Education and Society

The Covid-19 cohort and the ‘mess’ of public exams: reconsidering roles and responsibilities

By Blog Editor, IOE Digital, on 13 August 2020

Melanie Ehren and Christopher Chapman.

On 18 March the Secretary of State for Education told Parliament that, in response to the Coronavirus pandemic, schools and colleges in England would shut to all but the children of critical workers and vulnerable children until further notice. Exams scheduled for the summer would not take place.

The Government worked with the education sector and Ofqual to develop a process for awarding calculated GCSE, AS and A level grades that reflect each student's performance as fairly as possible and ensure consistency across the sector. The process involves the following steps (a brief illustrative sketch follows the list):

  1. Schools and colleges use their professional experience and all available evidence (including non-exam assessment, homework assignments, mock exams and any other existing records of student performance over the course of study) to make a fair and objective judgement of the grade they believe a student would have achieved had they sat their exams this year.
  2. Schools and colleges provide a rank order of students within each grade in a subject.
  3. Exam boards then use the grades and rank order to standardise grades*. For some students, this will result in either a lower or higher grade.
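
To make these inputs concrete, the sketch below shows the kind of record a centre might submit for one subject: a centre assessment grade per student plus a rank order within each grade. The field names and values are purely illustrative assumptions, not the exam boards' actual submission format.

```python
# A minimal, hypothetical sketch of a centre's submission for one subject:
# a centre assessment grade per student and a rank order within each grade.
# Field names and values are illustrative, not the actual submission format.
from dataclasses import dataclass

@dataclass
class CentreSubmission:
    student_id: str
    centre_assessment_grade: str  # the teacher-judged grade, e.g. "A" or "B"
    rank_within_grade: int        # 1 = judged strongest of the students given this grade

# Illustrative submission for a small cohort (invented values)
submission = [
    CentreSubmission("s01", "A", 1),
    CentreSubmission("s02", "A", 2),
    CentreSubmission("s03", "B", 1),
    CentreSubmission("s04", "B", 2),
]
```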

The recent news from Scotland, where the Scottish Qualifications Authority (SQA) used a similar approach, albeit with some important distinctions, indicates that this is not a straightforward process. More than 120,000 grades estimated by teachers were moderated downwards by the SQA. This had a disproportionately large effect on the awards and, ultimately, the life chances of young people from disadvantaged backgrounds. Less than one week after the SQA issued the results, the Deputy First Minister and Cabinet Secretary announced that teachers’ original estimated grades would stand. This has led to a 14.3% rise in pass rates for the Covid cohort of students compared with 2019 results.

A level results for England are being released today, with GCSE results following on 20 August. So this is a good time to reflect on some fundamental questions about the roles and purposes of public exams, and how these were not adequately taken into account when it was decided that alternative grading arrangements should be based on a predominantly psychometric model.

Who is responsible for public exams?

Assessments and exams can serve various functions, from formative (informing teachers about students’ knowledge and understanding, and how to enhance their progress) to summative (informing consequential decisions). Public exams fall into the latter category, as they inform decisions about students’ qualifications for the labour market or the next level of education. Grades in specific subjects determine someone’s access to university, or are used by employers when hiring for a specific role or job.

Because of the importance and high-stakes nature of these decisions, a standardisation process is put in place to secure the legitimacy of the grades awarded. Standardisation should warrant public trust that the awarded grades are a good representation of a student’s competencies, and also that students can compete for opportunities on a level playing field with students from previous and future years.

As public exams have wider consequences than simply informing in-school teaching or decision-making (e.g. on grade repetition or progression), responsibility for an accurate outcome does not rest solely with individual teachers, but also with the school and with education policy-makers and regulators (the DfE and Ofqual in England; the Learning Directorate and the SQA in Scotland).

Who knows best and who can we trust to make an accurate decision?

The question, then, is who is best placed to make an informed decision about the competencies of individual students and the grades they should be awarded. The teacher? The school? An external agency that does not know the individual student personally, such as Ofqual or the SQA?

This question contains various elements: whom do we trust most to know the capabilities of individual students? And are there appropriate incentives and safeguards in place for that person to make an accurate and well-informed assessment? Arguably the teacher who has worked with an individual student would have the most detailed and holistic insight into their performance.

However, we also know that human decision-making is prone to bias, and teachers may hold unconscious views about specific groups of students (or individual students) that affect their assessments. Research evidence points to differences in teacher judgement relating to gender, class and ability, and teachers’ predictions of students’ grades can be highly inaccurate.

Furthermore, within a high-stakes accountability system such as England’s, where schools are ranked on their students’ performance, schools may also face pressure to be somewhat lenient in awarding grades. The extraordinary conditions under which teachers have had to teach, and in which students have had to prepare for exams since lockdown, may also have prompted some leniency. The standardisation process should be viewed as an attempt to ensure as much fairness and consistency as possible in awarding grades, so that only merit, and not background, informs the outcome of the process.

How students prepare for school work and public exams

Finally, we also need to consider how students prepare for school work and public exams. In normal times, when students sit standardised exams, some would put most of their effort into cramming, sometimes supported by private tutors (so-called ‘shadow schooling’). Other students would choose a more balanced strategy, preparing and revising for school work and teacher assessments as well as for their final examinations. These variations in strategy may penalise students who tend only to cram for final exams and are now assessed solely on their performance in teacher assessments for which they did not prepare well. Can we hold students responsible for choosing exam preparation strategies when they were unaware of the consequences?

If there is one lesson to learn from the current debate it is that making a high stakes decision about an individual student’s future needs to be a shared responsibility, but one which has the teacher and the individual student at the heart of the decision-making process.

 

*The standardisation procedure involves the following steps:

  1. For each centre, in every subject, exam boards will use historical performance data to determine the proportion of students who achieved each grade in previous years.
  2. They will check this against prior attainment data for this year’s students, compared with the prior attainment of the students making up the historical data. The predicted grade distribution for the centre in the subject might be adjusted upwards or downwards according to the prior attainment distribution of the 2020 students relative to previous years.
  3. Exam boards will then overlay the centre’s rank order of students onto the predicted grade distribution and allocate grades to students, without changing the rank order. This will have the effect of amending the centre assessment grade to align it with the predicted grade distribution, meaning that, for some students, the grade they are allocated will not be the same as the centre assessment grade that was submitted.
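
As a rough illustration of the overlay step described in this footnote, the sketch below allocates grades to a centre’s ranked list of students according to a predicted grade distribution, without changing the rank order. The function name, inputs and the simple rounding are assumptions made for illustration; the prior-attainment adjustment is omitted, and this is not Ofqual’s actual implementation.

```python
# A rough sketch of the overlay step, assuming the predicted grade distribution
# has already been derived from historical results (and, in the real procedure,
# adjusted for prior attainment). Names and rounding are illustrative only.

def allocate_grades(ranked_students, predicted_distribution):
    """Overlay a centre's rank order onto a predicted grade distribution.

    ranked_students: student ids in the centre's rank order, strongest first
    predicted_distribution: grade -> predicted proportion of the cohort,
        listed from highest grade to lowest (proportions sum to 1)
    """
    cohort_size = len(ranked_students)
    allocated = {}
    position = 0
    for grade, proportion in predicted_distribution.items():
        # Turn the predicted proportion into a whole number of students at this grade.
        n_at_grade = round(proportion * cohort_size)
        for student in ranked_students[position:position + n_at_grade]:
            allocated[student] = grade
        position += n_at_grade
    # Any students left unallocated by rounding receive the lowest grade.
    lowest_grade = list(predicted_distribution)[-1]
    for student in ranked_students[position:]:
        allocated[student] = lowest_grade
    return allocated

# Illustrative use: ten students, predicted 20% A, 40% B, 30% C, 10% D.
grades = allocate_grades(
    ranked_students=[f"s{i:02d}" for i in range(1, 11)],
    predicted_distribution={"A": 0.2, "B": 0.4, "C": 0.3, "D": 0.1},
)
print(grades)  # s01-s02 -> A, s03-s06 -> B, s07-s09 -> C, s10 -> D
```

Under this kind of overlay, a student’s allocated grade depends on the centre’s predicted distribution and their position in the rank order, not only on the centre assessment grade, which is why some students end up with a higher or lower grade than their teachers submitted.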


One Response to “The Covid-19 cohort and the ‘mess’ of public exams: reconsidering roles and responsibilities”

    Terry Pearson wrote on 16 August 2020:

    For sure, there are times in our lives when we have little choice but to rely heavily on the judgements of others. These times can be especially tense when the decisions made are placed into the hands of relevant experts and can have major implications for us. The allocation of grades in public examinations is one example of a high stakes case and results day is a time at which many people expect the most suitable decision to be made on their behalf, whoever or whatever may be making it.

    A Level results day this year has generated a lot of concern, from virtually everyone except the Government and the regulator, about the way expert judgements have been used to arrive at examination grades. For many this is clearly a time when faith in the system has been severely challenged. This really is an opportune time to examine the examination system, a task which may be long overdue.

    For me there seem to be three key areas to investigate: the extent to which algorithms should be used in determining the grades for qualifications, whether past performance should be considered during moderation processes, and the role of trust in both of these activities.