How should we assess students this year, and what are the implications for universities?
By IOE Editor, on 10 November 2020
By Professor Lindsey Macmillan, Dr. Jake Anders, and Dr. Gill Wyness
In summer 2020, to much controversy, the UK government cancelled both GCSE and A level exams and replaced them with “Centre Assessed Grades” based on teacher predictions. While Scotland has cancelled some exams in 2021, and Wales appears to have arranged for something akin to exams to take place in a classroom setting, the English government remains adamant that its exams will go ahead as planned. This strategy is not without its problems, but with some important adjustments, it is still the best and fairest way to assess pupils.
Primary and secondary schools closed their doors in late March 2020 and only fully re-opened six months later, in September. Schooling has continued to be disrupted for many pupils, as classes or other ‘bubbles’ have had to self-isolate due to suspected COVID outbreaks, forcing learning to move online. This situation is likely to result in further unequal “learning loss” as a result of inequalities in home learning environments, including access to the technology needed to reliably follow lessons online.
Recent work by Ofsted reported widespread learning loss as a result of these closures, with younger pupils returning to school having forgotten basic skills, and older children losing reading ability. But the loss is not evenly distributed; Ofsted reported that children with good support structures were doing better than those whose parents were unable to work flexibly. Several analyses (e.g. Andrew et al, 2020; Anders et al, 2020) back this up, reporting that pupils from better-off families spent more time on home learning, and were much more likely to have benefitted from online classes than those from poorer backgrounds. Work by the Sutton Trust found that children in households earning more than £60,000 per year were twice as likely to be receiving tutoring during school closures as those in households earning less than £30,000. While steps have been put in place to help pupils catch up, such as the pupil catch-up premium and the National Tutoring Programme, pupils this year will almost certainly be at a disadvantage compared to previous cohorts when they face this year’s exams, and the severity of that disadvantage is likely to vary by family background.
While this might be evidence enough that exams should be cancelled this year, it is worth first considering the alternatives:
1. Continuous teacher assessment
Perhaps the most obvious alternative to exams is continuous teacher assessment, through the use of coursework, in-class testing and so on. This would negate the need for exams and would mean all students receive a grade even if exams have to be cancelled due to a resurgence in the pandemic. Scotland has already committed to using teacher assessment instead of exams for its National 5s (equivalent to GCSEs) this year. While this does seem like a safe choice to replace exams, research has shown that teacher assessment can contain biases. For example, Burgess and Greaves (2013) compared teacher assessment with exam performance at Key Stage 2, finding evidence that black and minority ethnic students were under-assessed by teachers relative to white students. Campbell (2015) similarly shows that teachers’ ratings of pupils’ reading and maths attainment at age 7 vary according to income, gender, Special Educational Needs, and ethnicity.
Using coursework to assess pupils (whether internally or externally marked and/or moderated) also risks interference from parents and schoolteachers, so that a pupil’s eventual grade could reflect the support they have received more than their own achievement. And levels of support are likely to vary by socio-economic status (SES), again putting those from poorer backgrounds at a disadvantage.
2. Teachers’ predictions
But sticking with exams is not without its risks. We are, after all, in a pandemic, and the government could be forced to cancel exams at the last minute. If it leaves it too late to implement continuous teacher assessment or an alternative form of external assessment, it will have to turn to more reactive measures – such as asking teachers to predict pupils’ grades (the method finally adopted for the 2020 GCSE and A level cohorts). This would at least have the advantage of being consistent with last year but, again, would likely result in biased measures of achievement. Predicted grades have been shown to be inaccurate, with the vast majority overpredicted (causing headaches for university admissions). However, work by Anders et al. (2020) and Murphy and Wyness (2020) showed that among high-achieving pupils, those from low SES backgrounds and state schools are harder to predict and end up with lower predictions than their more advantaged counterparts.
3. A school leaving certificate?
There are more radical possibilities to consider. One is for schools to abandon assessment this year altogether, and simply to issue students with school leaving certificates, similar to the diploma awarded for graduating from high school in America. This would certainly level the playing field among school leavers. But it could lead to some big problems for what comes next. For example, without A level grades, how would universities decide which applicants to accept? Under this scenario, admissions tutors would become increasingly reliant on ‘soft metrics’ such as personal statements, teacher references and interviews. It may also lead to more widespread use of university entry tests, which are already in place at some institutions. All of this is likely to be bad news for social mobility: the use of “soft metrics” has been shown to induce bias (Wyness, 2017; Jones, 2016), while there is very little evidence about the equity implications of using aptitude tests except in highly specific settings (Anders, 2014), so the potential for unintended consequences is substantial.
But in theory, universities shouldn’t need to use entry tests – these pupils already have grades in national tests: their GCSEs. For this university entry cohort, GCSEs were sat before the pandemic, and they are high-stakes, externally marked assessments. Indeed, Kirkup et al. (2010) find no evidence that the SAT (the most widely used aptitude test in the US) provides any additional information on performance at university beyond that given by GCSE results alone. Many universities already use GCSE grades as part of their admissions decisions, alongside predicted A level grades. Yet these grades were measured two years ago, and so will obviously miss any changes in performance since then. Indeed, recent work by Anders et al. (2020) suggests that GCSE performance is a poor predictor of pupils’ achievement by the end of their A levels. Using administrative data and machine learning techniques, they predict A level performance from GCSEs, finding that only 1 in 3 pupils could be accurately predicted, and that certain groups of students (those from state schools and low SES backgrounds) appeared to be “underpredicted” by their GCSEs, going on to outperform those predictions at A level.
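To make concrete what such a prediction exercise involves, the sketch below (in Python) shows one illustrative way of measuring how accurately A level grade bands can be predicted from GCSE results. It uses entirely synthetic data and an off-the-shelf classifier; the variable names, model choice and parameters are assumptions made here for illustration only, not the method used by Anders et al. (2020).

```python
# A minimal, illustrative sketch (not the authors' actual analysis) of testing
# how well GCSE results predict A level grades. All data are synthetic; the
# real study uses administrative records on actual pupils.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pupils = 5_000

# Hypothetical feature: average GCSE points (1-9 scale), plus noise standing
# in for everything GCSEs do not capture (later progress, motivation,
# school disruption, and so on).
gcse_points = rng.normal(6.0, 1.5, n_pupils).clip(1, 9)
unobserved = rng.normal(0.0, 1.0, n_pupils)

# Hypothetical outcome: A level grade band 0-5 (E to A*), only partly
# driven by GCSE performance.
latent = 0.6 * (gcse_points - 1) / 8 * 5 + 0.4 * (unobserved + 2.5)
a_level_band = np.clip(np.round(latent), 0, 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    gcse_points.reshape(-1, 1), a_level_band, test_size=0.3, random_state=0
)

# Fit a standard classifier and measure the share of pupils whose exact
# A level band is predicted correctly from GCSEs alone.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
accuracy = (model.predict(X_test) == y_test).mean()
print(f"Exact-band prediction accuracy: {accuracy:.2f}")
```

Even in this toy setting, because much of what determines the outcome is not captured by the GCSE measure, exact-band accuracy falls well short of 100%, which is the same basic reason a finding like “only 1 in 3 accurately predicted” can arise in the real data.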
An alternative approach to exams?
The alternatives to exams raise many concerns, particularly for those from poor backgrounds. A better solution may be to design A level exams that take account of the learning loss and missed curricula experienced by pupils, and of the fact that different pupils will have experienced this to different degrees. Ofqual was dismissive of this suggestion in its report on examinations for 2020/21, pointing to the burden on exam boards among other factors; while we take seriously the considerations it highlights, we think it underestimates the challenges of the status quo.
For all the headlines about Wales “cancelling” exams, a first look suggests this is rather a simplistic summary. Wales is still planning to hold some kind of examination, which will be both externally set and externally marked, but the timing is now more flexible, and the assessments will happen in class rather than in exam halls – ironically, removing the in-built social distancing normally associated with examinations. This kind of flexibility is needed in these difficult circumstances.
An alternative that has also been discussed in England is that exams could be redesigned so that the majority of questions are optional. In this way, they would look more like university finals, in which students are typically given a set of questions and need only answer a subset of their choice – for example, two out of seven. This would take account of the fact that pupils may have covered different parts of the curriculum but not all of it, since they need only answer the questions they are prepared for. While appreciating that there are challenges with this approach, a carefully designed exam would at least provide pupils with a grade they have earned and would give universities and employers the information needed to assess applicants.
Universities should also be aware that students from different backgrounds will have experienced lockdown in very different ways, and those lacking school and parental support may still struggle to do well, even in well-modified exams. This could and should be tackled with the increased use of contextual admissions. Universities often cite fears that students from contextual backgrounds are more likely to arrive underprepared for university and risk failing their courses. But this year, lack of preparation for university may well be the norm, forcing universities to provide extra tuition and other assistance to help students get “up to speed”. There has never been more need, and more opportunity, for widespread contextual admissions.