By IOE Editor, on 2 February 2020
Welcome to the Centre for Education Policy and Equalising Opportunities (CEPEO) blog. This blog is a forum for staff, students, alumni and guests to write about and around CEPEO’s four thematic areas of research and engagement.
Our focus areas
The Centre concentrates on four thematic areas, each underpinned by the aim to improve the education system and equalise opportunities for all.
Learning About Culture: The importance of arts-based learning, the limits of what we know about it, and the challenges of evaluating it
By Blog Editor, on 8 September 2021
Jake Anders, Kim Bohling, Nikki Shure and Alex Sutherland
There is little doubt about the importance of arts and culture to the education and upbringing of young people. Arts-based education gives young people an important means of creative expression and “arts for arts’ sake” is the best argument for having arts-based education in schools. However, far less is known about the link specifically between arts-based learning activities and pupils’ educational outcomes – partially due to a lack of robust studies on this topic. Yet this is a link that is often invoked as part of the overall importance of these programmes, partly in response to a perception that an increased focus on “core educational outcomes” is squeezing arts-based education out of schooling.
Over the past four years, a team from UCL and the Behavioural Insights Team has been working with the Education Endowment Foundation (EEF), the Royal Society for the Arts (RSA) and five arts-based education organisations on a project called Learning About Culture (see Table 1 below for programme detail). At the heart of this project are five randomised controlled trials (RCTs) involving around 8,500 children in 400 state schools across England. These evaluations were designed to look at the impact of five specific arts-based learning interventions on literacy outcomes. To our knowledge, these trials represent the largest collection of RCTs testing arts-based approaches on attainment outcomes. This body of research represents a significant step forward in understanding how to assess the relationship between creative activities and pupil outcomes, which is in itself important.
Each of the programme reports is linked to below and an overarching report that synthesises the findings, lessons, and recommendations can be found here. What you’ll immediately notice is the diversity of approaches we looked at – including music, storytelling, and journalism – reflecting the richness and diversity of the sector.
Table 1. Learning about Culture programmes
Each programme name is hyperlinked to the EEF project page.
| Programme name (Developer) | Description |
| --- | --- |
| First Thing Music (Tees Valley Music Service) | Programme to train teachers in the Kodály method of music instruction in order to deliver a structured, sequential daily music curriculum of increasing progression. (Key Stage 1) |
| Speech Bubbles (London Bubble) | Weekly drama and storytelling intervention aimed at supporting children’s communication skills, confidence, and wellbeing. (Key Stage 1) |
| The Craft of Writing (Arvon, University of Exeter, Open University) | Programme to develop teachers as writers, combined with an explicit focus on the pedagogical implications for the classroom. (Key Stage 2) |
| The Power of Pictures (Centre for Literacy in Primary Education) | Specialist training from published author-illustrators and expert teachers helps primary teachers to develop their understanding of the craft of picture book creation. (Key Stage 2) |
| Young Journalist Academy (Paradigm Arts) | The project aims to develop pupils’ writing by involving them in journalism, providing pupils with a meaningful purpose for writing and teaching specific writing techniques. (Key Stage 2) |
What did we find?
When compared to ‘business as usual’, we were unable to find improvements in pupil attainment in any of the five trials that we could reliably say weren’t due to chance. However, it’s important to emphasise that this is an extremely challenging bar to clear: most of the trials that the EEF funds do not find impacts of interventions on pupil learning outcomes.
While it is easy to focus on the lack of a positive impact on the outcome measures, we also want to emphasise that the trials found no evidence of detrimental effects from introducing such programmes. That is actually really good news, because it means that including arts-based programmes alongside ‘core curriculum’ subjects isn’t a zero-sum game where increasing time on arts means lower grades elsewhere.
And, as we pointed out above, improving pupil academic attainment is not the best or only reason for schools to implement arts-based interventions. Although the programmes did not improve literacy test scores, in interviews with participating teachers and pupils we found that they generated a great deal of enthusiasm among those who took part. Perceived improvement in pupil engagement was a theme that emerged from the implementation and process evaluations across all five programmes.
In the overarching report, we also stress that these results should absolutely not be seen as the last word on whether arts-based learning is effective in improving outcomes for pupils. Necessarily, in this kind of research, we focused on one set of outcomes, which could be quantified and measured over a fairly short time horizon. But benefits could accrue in many other ways that we just couldn’t capture. For one, having these initiatives available to pupils may have long-term consequences for the subjects these pupils choose at GCSE or A level, or the career paths they choose to follow. We don’t know whether these benefits exist, but our evidence shouldn’t be used to discount such possibilities.
Our reflections as evaluators
The overarching report contains thoughts and lessons for multiple audiences: researchers, funders, and arts organisations. For brevity, we’ve only selected a few takeaways to highlight here.
Evaluators and funders
There is a line of argument against our efforts here that what we can measure in trials (and research more broadly) is not always what ‘matters’, or what we ‘should’ measure. Equally, some will point to challenges in measuring what we did use as outcomes, as well. We know that the measures used are imperfect, but given the choice between imperfect measurement of something versus perfect measurement of nothing – or something further removed from the intervention – then we stand by our decision to do what we can in an imperfect world. This isn’t an abstract research issue: in order to be able to ascertain whether something is effective (or not) we need to be clear what we expect to change and measure that as best we can.
In line with EEF’s policy, reflecting their primary aim as an organisation, our impact evaluations focused on measuring pupil attainment outcomes. While this approach has many strengths given the undoubted importance of such outcomes, these projects – where we see positive signs of engagement based on the implementation and process evaluation but ultimately no impacts on our measured outcomes – highlight one of its key limitations: a null finding leaves a lot of unanswered questions. An alternative approach – with similarities to the increased emphasis on ‘mechanism experiments’ in economics and particularly important where there is a limited evidence base about how interventions work – would focus first on establishing whether the interventions do indeed affect the intermediate steps via which they are thought to improve attainment. This would help us first to establish whether the programme is working as we think it does or if there is more to be done to understand this crucial first stage to achieving impact on pupils’ academic attainment.
We really appreciate the courage and commitment from the arts-based education organisations who put themselves forward to participate in a multi-year evaluation process. The EEF’s support for both an individual and overarching approach to the evaluation meant that we were able to observe themes across the programmes that could be useful to other arts organisations. From these themes, we offer some recommendations for consideration.
Ensure buy-in and engagement from school staff at multiple levels
High teacher buy-in was crucial for the day-to-day delivery of the programme, and senior leadership team (SLT) buy-in was important for supporting the teacher in high-quality delivery. For example, SLT members were able to ensure teachers had access to necessary resources and space, as well as ensure there was time in the timetable for the programme.
Carefully consider programme resource requirements and test assumptions about what’s available in schools
The interventions placed different demands on schools in terms of the resources needed to take part, and even where required resources were considered ‘standard’, challenges were still reported. In some cases, schools did not have resources, such as arts supplies, that were assumed to be available in most schools. In other cases, the schools had the required resources, such as technological equipment, but they were difficult to access. Organisations may want to consider how to surface these challenges early in set-up and whether they can provide any support to schools in overcoming them.
On a more personal note
As independent evaluators, we have a responsibility to be as objective as possible, recognise our biases, and do our best to minimise their influence on our work. We are also all researchers who care deeply about improving outcomes for pupils and furthering our understanding of ‘what works’ to support pupil development. When we are able to take the ‘evaluator hat’ off, this team also broadly supports the inclusion of arts in the school day, and some of us have direct experience of delivering arts-based learning opportunities either in the school day or extended learning space. We would have been thrilled to report that the programmes had a significant impact on attainment outcomes – not only to further enhance the toolkit for improving pupil outcomes, but also to secure further protection for the arts in the school day. Ultimately, we are not able to report those outcomes, and we stand by the findings of the six reports produced. We are still supporters of arts in education and we also enthusiastically support further research in this space, as there is certainly more to learn.
By Blog Editor, on 12 August 2021
By Jake Anders, Claire Crawford, and Gill Wyness
This piece first appeared on theguardian.com.
This week’s GCSE and A level results confirmed the expectations of many who study education policy: the proportion of students achieving top grades in these qualifications has increased substantially compared to 2019, especially at A level. Students themselves should be extremely proud of their results, which were achieved under very difficult circumstances. Likewise, teachers have worked extremely hard to make the best assessment they can of their pupils’ performance. But there is no getting around the fact that these results are different from – and not directly comparable with – pre-Covid results.
It is right to allow for the fact that students taking GCSEs and A levels this year and last are at a disadvantage compared to previous cohorts. In-person exams would have been next to impossible in 2020, and those assessed this year have missed significant amounts of schooling.
To deal with this, the government chose an entirely different means of measuring performance: teacher assessments. (We advocated a different approach, based on more flexible exams, in 2021.) This year’s approach has been rather more orderly than last year’s chaos, but the wide range of measures that teachers could consider – such as mock exams, in-class tests and coursework – inevitably led to variation in how schools assessed their pupils.
This year’s grades may also be capturing average or ‘best’ performance across a range of pieces of work, rather than a snapshot from one or two exams. This seems to have been particularly true at A level, where grades have immediate consequences for university entry decisions. In short, it is unsurprising that grades based on teacher assessment are higher than those based on exams alone: while some have called this grade inflation we think it’s more accurate to say that they are capturing different information.
But given that they have been presented on the same scale, the stark increase in grades compared to pre-Covid times presents significant challenges for current and future cohorts.
Even making comparisons between pupils within the 2021 cohort may be challenging. Using teacher assessment is likely to have disadvantaged some students relative to others. Previous research has shown that Black Caribbean pupils are more likely than white pupils to receive a grade from their teacher below their score in an externally marked test taken at the same time. Similarly, girls have also been found to perform better at coursework, while boys do better at exams on average. Differences by gender have been particularly apparent this year, with girls seeing larger improvements in performance than boys compared to pre-pandemic.
This year’s record high scores raise challenging questions. The much larger proportion of pupils getting As and A*s at A level, for example, may lead to universities relying more heavily on alternative methods of distinguishing between applicants – such as personal statements – which have been shown to entrench (dis)advantage.
There is also the all-important question of what to do next year: are this year’s grade distributions the right starting point, or should we be looking to return to something closer to the 2019 distribution? Is it possible to go back? And would we want to?
Assuming in-person exams are feasible next year, one possibility would be to return to 2019’s system as if nothing had happened. This would probably see substantial reductions in the proportion of students getting top grades, especially at A level. One can only imagine the political challenge of trying to do this.
Even more important is that the next cohorts of GCSE and A level students (and indeed the ones that follow – we are tracking the experiences of those taking GCSEs this year as part of a new UKRI-funded cohort study, COSMO) have also been affected by the pandemic, arguably to a greater degree than this year’s. They are therefore likely to underperform their potential and get lower grades than cohorts who took their exams before the pandemic struck. That is clearly not desirable.
It is important to continue making allowances for the exceptional circumstances young people have faced during this crucial time in their education. During the period affected by pandemic learning loss, our suggestion would be to design exams with more flexibility, allowing candidates to choose which questions to answer based on their strengths, as is common in university exams. This would enable a return to the fairest way to assess students – exams – while still taking account of lost learning.
Either way, any return to exam-based grades is likely to result in an immediate pronounced drop in results compared to the last two years, especially at A level. Gavin Williamson has suggested that the government will aim instead for a “glide path back to a more normal state of affairs”. This would smooth out the unfairness of sharp discontinuities between cohorts. But it would mean moving away from grades being based on the same standard over time, instead setting quotas of students allowed to achieve each grade, gradually reducing the higher grades and increasing the lower ones. Even if that seems a good plan now, it would be very hard to stick to: the fall-out from the small reduction in pass rates seen in Scotland this week would be a taste of things to come for years.
A more radical possibility would be to reset the grading system entirely. This would get around the political issue of there being very large falls – or deliberately engineered small falls – in grades for future cohorts, but one wonders whether this is the right time to undertake such a drastic overhaul. The pandemic will have repercussions on young people’s grades for years to come: is the best approach really a total reset right now?
The question of what to do next is one that policymakers will have to grapple with over the coming months and years. Of more fundamental importance and urgency, however, is that pupils have experienced widespread learning losses due to the pandemic – regardless of what their grades show – and are likely to be affected by these for years. Students require ongoing support throughout the rest of their educational careers, including catch up support throughout school, college and university.
We cannot simply award them GCSE and A level grades that try to look past the learning they have lost and move on – the learning loss remains and must be addressed.
Dr Gill Wyness & Dr Jake Anders are deputy directors of the UCL Centre for Education Policy & Equalising Opportunities (CEPEO). Dr Claire Crawford is an associate professor at CEPEO.
By Blog editor, on 25 June 2021
By Professor Paul Gregg
Lockdown artificially closed down large parts of the economy but to understand where the economy is and will be in the next year or so, it is crucial to make a distinction between economic activity that has been lost and that which has just been delayed. To make this distinction clearer, think of Easter Bank Holidays. Easter normally falls in April but in some years it is in March. In a year when it falls in March, the economic activity for March falls sharply compared to other years, because the Bank Holidays close large parts of the economy. But correspondingly April will see higher output as the economy re-opens. There is no effect here on overall output or underlying economic performance. It is merely delayed by a month.
Lockdown has the same effect. It places a dam in the way of consumer spending, but behind the dam there is a build-up of demand that is released when Lockdown ends and the economy re-opens. This creates a surge of activity. The same can be seen in terms of vacancies. Locked down firms stopped recruiting as they weren’t trading. But staff members were still leaving to start other jobs in open sectors of the economy or leaving the labour force. The positions remain unfilled until the firm re-opens, then we have a surge as 6 months of vacancies appear at once.
There is currently an economic surge building, which started in April as the economy began to re-open. But just as economic activity was artificially suppressed in Lockdown, the re-opening will artificially inflate the level of activity above the underlying level. This raises a number of key questions about where the economy is now and is heading. What is the underlying level of economic activity? How much pent-up economic activity is there to be released? Over what period will the surge occur? And what does this mean for government policy, especially for the government’s fiscal position?
Where is the economy now?
The 13 months from the end of February 2020 to the end of March 2021 saw a shortfall in economic activity of 10% compared to pre-crisis levels. April to June 2021 saw the economy start to re-open, with a mix of released activity and continued partial closure, meaning rapid growth in activity. So from July, hopefully, a fully re-opened economy will see economic activity not just return to underlying levels but surge from the release of pent-up demand.
The US offers a useful comparator here of underlying activity levels. It has not used Lockdowns as widely as the UK, and has not used a furlough programme to preserve employment, instead focusing on supporting the incomes of people who lose jobs (more than in normal times). In the US, economic activity in the first quarter of 2021 was just 1% below pre-crisis levels. In the absence of the crisis the economy would likely have grown, so a reasonable figure is that economic activity stands 3% below what would have happened without the crisis. The employment rate is 3% below peak levels and unemployment just over 2% higher. Note that the employment fall has been larger than the GDP fall in the US. In the UK, economic activity was down nearly 8% from pre-crisis levels in the first quarter of 2021. The US situation suggests that, at most, underlying activity is around 1.5% down in the UK once the artificial effects of enforced Lockdown are stripped out. This is very modest given how scary things looked last year.
How much pent-up economic activity is to come?
There are two parts to gauging the size of this pent-up demand. What has happened to disposable incomes, and the extent of excess saving from that income.
Disposable incomes are about 1.5% down on pre-crisis levels in real terms, reflecting lower employment, the effects of furlough and so on. The proportion of income saved (the Saving Ratio) in the UK has been over 10 percentage points higher than normal since the crisis hit. So there is roughly 10% of people’s annual income that could be spent to take savings back to normal levels – a bit over £3,000 per household.
Now people could consume this slowly over their lifetimes or binge-spend. Evidence from lottery wins suggests that large wins see spending on durable goods like a new car, but a large portion is saved and spending more generally is unaffected. Smaller wins see proportionately more spent and less saved. So people are likely to run this excess saving down over a couple of years, and because of the relief as Lockdown ends this is likely to be front-loaded, starting from April this year. In the second half of this year, therefore, we can reasonably expect the surge of spending on pubs, clubs and holidays to boost economic activity to between 5 and 6% above underlying levels, or around 4% above pre-crisis levels. Then, as the surge eases, next year would see no GDP growth as underlying improvements in the economy are masked by the spending surge ending.
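The per-household figure is straightforward arithmetic; a hedged sketch, with an income level assumed purely to reproduce the post’s “a bit over £3,000” claim:

```python
# Illustrative arithmetic only: the income figure below is an assumption
# chosen to match the post's "a bit over £3,000 per household" claim,
# not an official statistic.
annual_disposable_income = 31_000  # assumed typical household income, GBP
excess_saving_rate = 0.10          # saving ratio ~10 points above normal

excess_savings_per_household = annual_disposable_income * excess_saving_rate
print(f"Excess savings per household: ~£{excess_savings_per_household:,.0f}")
```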
The employment story is very different. Furlough meant that Lockdown didn’t see forced job shedding, just the effects of firms not hiring or closing down. The employment rate fell by 1.6% compared to 10% for GDP. So, the employment fall has been in line with underlying lost output but not the extra driven by forcing firms not to trade and consumers not to consume. The surge will, however, boost employment rapidly. This is already appearing in the data and unemployment should be expected to return to pre-crisis levels by the end of the year.
What does this mean for government policy?
The crisis has seen government debt rise by 20% of GDP by the end of last year, when the current deficit was £65 billion in the final quarter. The coming surge in activity, ending of furlough and other crisis spending should mean that the current deficit should evaporate. The government should be looking to post a surplus by early next year. There will also be a reduction in the debt to GDP ratio because of the boost to growth from the spending surge. The government should be then keeping the deficit below the level of growth to reduce the debt burden slowly.
This still leaves the question of what to do about the large increase in debt over the last year. The answer is: absolutely nothing.
The surge in activity addresses the current deficit, and around a third of the increase in historic debt has been funded by Quantitative Easing from the Bank of England, which leaves the Bank holding one third of all government debt. There are lots of issues about how to manage these holdings, but they do not incur interest payments or require urgent financing. These holdings are a long-term issue, which means that the functional debt is around two thirds of GDP, not 100%, and this level is manageable until we are firmly past the legacy of the Covid Crisis. This will help reduce the current government budget deficit and ease the historic debt concerns enough to avoid a return to the austerity policies of George Osborne. It still, of course, means little room for major spending boosts.
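The “functional debt” argument can be written out as a back-of-the-envelope calculation using the post’s round numbers (assumptions, not official statistics):

```python
# Back-of-the-envelope version of the "functional debt" argument,
# using the post's round numbers rather than official statistics.
gross_debt_to_gdp = 1.00   # headline debt stock, roughly 100% of GDP
boe_qe_share = 1 / 3       # fraction of gilts held by the Bank of England via QE

# Debt requiring conventional financing, once QE-held gilts are netted out
functional_debt = gross_debt_to_gdp * (1 - boe_qe_share)
print(f"Functional debt: {functional_debt:.0%} of GDP")
```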
The economic fallout from the Covid Crisis has been much less than feared last year and the release of excess savings, resulting from Lockdown, will create a temporary economic boom in the second half of this year. The limited economic damage reflects in large part the successful management of the economic fallout by the Chancellor and stands in massive contrast to the extremely poor handing of the health crisis itself.
The Chancellor has in effect used a major fiscal stimulus to overcome the effects of Lockdown. More interestingly, Furlough, the main spending item, acted as a highly targeted stimulus focused on the hard-hit sectors, which stopped the leakage of reduced demand into other sectors. This high degree of targeting is rather like the German Kurzarbeit scheme, where firms in trouble in a recession can apply for government support to put workers on part-time working; wages are then topped up by this support, though not fully – as with the 80% of wages paid under Furlough. The lessons, then, are that fiscal stimulus works; that it should be targeted on jobs rather than consumption (through, say, VAT cuts); and that it should be targeted on stressed firms, sectors or other targeting devices, providing proportionately more support for lower-waged jobs. It would be good to remember these lessons for the next recession, which is due in 2031.*
* Recessions have occurred every 10 years on average since 1980.
By IOE Editor, on 8 June 2021
By Jake Anders, Lindsey Macmillan, Patrick Sturgis, and Gill Wyness
Following a disastrous attempt to assign pupil grades using a controversial algorithm, last year’s GCSE and A level grades were eventually determined using Centre Assessed Grades (CAGs) following public outcry. Now, new evidence from a survey carried out by the UCL Centre for Education Policy and Equalising Opportunities (CEPEO) and the London School of Economics finds that some pupils appear to have gained an unfair advantage from this approach – particularly pupils with graduate parents. As teachers will again be deciding exam grades this year, this finding serves as an important warning of the challenges involved in ensuring that a system using teacher assessments is fair.
The decision to cancel formal exams in 2020 was taken at a late stage in the school year, meaning that there was little time for the government to develop a robust approach to assessment. After a short consultation, the Department for Education (DfE) decided that pupils’ exam grades would be determined by teachers’ assessments of their pupils’ grades, including a ranking of pupils. However, to prevent grade inflation due to teachers over-predicting their pupils’ grades, Ofqual then applied an algorithm to the rankings to calculate final grades, based on the historical results of each school.
A level pupils received their calculated grades on results day 2020, and although Ofqual reporting showed that the calculated grades were slightly higher than in 2019 across the grade range, many pupils were devastated to find their teacher assessed grades had been lowered by the algorithm. More than a third of pupils received lower calculated grades than their original teacher assessed grades. Following widespread public outcry, the calculated grades were abandoned, and pupils were awarded the grades initially assessed by teachers. This inevitably led to significant grade inflation compared to previous cohorts.
This also created a unique situation where pupils received two sets of grades for their A levels – the calculated grades from the algorithm and the teacher allocated “centre assessed grades” or “CAGs”.
While it is now well established that CAGs were, on average, higher than the algorithm-calculated grades, less is known about the disparities between the two sets of grades for pupils from different backgrounds. Understanding these differences is important since it sheds light on whether some pupils received a larger boost from the move to teacher predicted CAGs, and hence to their future education and employment prospects. It is also, of course, relevant to this year’s grading process, as grades will again be allocated by teachers.
Administrative data on the differences between calculated grades and CAGs is not currently publicly available. However, findings from a new UKRI-funded survey of young people by the UCL Centre for Education Policy and Equalising Opportunities (CEPEO) and the London School of Economics (LSE) can help us to understand the issue. The survey provides representative data on over 4,000 young people in England aged between 13 and 20, with interviews carried out online between November 2020 and January 2021.
Respondents affected by the A level exam cancellations (300 respondents) were asked whether their CAGs were higher or lower than their calculated grades. The resulting data reveal stark differences in the extent to which pupils were given a boost by the decision to revert to CAGs. As shown in Figure 1, pupils with graduate parents were 17 percentage points more likely to report that their CAGs were higher than their Ofqual calculated grades. The survey data are linked to administrative data on prior attainment at Key Stages 2 and 4, as well as demographic and background characteristics (such as free school meals status, ethnicity, SEN, and English as an additional language). Even after accounting for differences between pupils across these characteristics, those with graduate parents were still 15 percentage points more likely to report having higher CAGs than calculated grades.
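The “adjusted” 15-point gap comes from conditioning on prior attainment and background characteristics. A minimal sketch of that kind of adjustment – on entirely synthetic data, not the CEPEO/LSE survey – using a linear probability model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # roughly the number of affected A level respondents in the survey

# Entirely synthetic stand-ins for the survey variables.
graduate_parent = rng.integers(0, 2, n)
prior_attainment = rng.normal(0, 1, n) + 0.5 * graduate_parent

# Synthetic outcome: probability that the CAG exceeded the calculated grade.
p = 0.45 + 0.15 * graduate_parent + 0.05 * prior_attainment
cag_higher = (rng.random(n) < np.clip(p, 0, 1)).astype(float)

# Raw gap: difference in the share reporting higher CAGs, in proportion terms
raw_gap = (cag_higher[graduate_parent == 1].mean()
           - cag_higher[graduate_parent == 0].mean())

# "Adjusted" gap: linear probability model controlling for prior attainment
X = np.column_stack([np.ones(n), graduate_parent, prior_attainment])
beta, *_ = np.linalg.lstsq(X, cag_higher, rcond=None)
print(f"raw gap: {raw_gap:.3f}; adjusted gap: {beta[1]:.3f}")
```

In the real analysis the control set is much richer, but the logic is the same: the coefficient on the graduate-parent indicator is the gap net of the controls.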
Figure 1. The proportion of young people reporting their CAGs were better than their calculated grades by whether or not they report that one of their parents has a university degree (left panel: raw difference; right panel: adjusted for demographic characteristics and prior attainment)
There are a number of possible explanations for these differences. First, it could be that pupils with graduate parents are more likely to attend particular types of schools which have a greater tendency to ‘over-assess’ grades. While not directly relevant to this sample, extreme versions of this are the documented cases of independent schools deliberately over-assessing their pupils, but this could also happen in less dramatic and more unconscious ways. It could, for example, be more likely among schools that are used to predicting grades as part of the process for pupils applying to highly competitive university courses, where over-prediction may help more than it hurts.
A second possibility is that graduate parents are more likely to lobby their child’s school to ensure they receive favourable assessments. Such practices are reportedly becoming more common this year, with reports of “pointy elbowed” parents in affluent areas emailing teachers to attempt to influence their children’s GCSE and A level grades ahead of teacher assessed grades replacing exams this summer.
A third possibility is that the relatively high assessments enjoyed by those with graduate parents are a result of unconscious bias by teachers. A recent review by Ofqual found evidence of teacher biases in assessment, particularly against pupils with SEN and those from disadvantaged backgrounds, while a new study from Russia showed that teachers gave higher grades to pupils with more agreeable personalities. Interestingly, we found no differences between FSM and non-FSM pupils, perhaps suggesting teachers were careful not to treat FSM pupils differently. But they may nonetheless exhibit an unconscious positive bias towards pupils from backgrounds that tend to be associated with higher educational achievement.
Our results do not afford any leverage on which of these explanations, if any, is correct. Regardless of what is behind this systematic difference, our findings show that pupils with more educated parents received an unfair advantage in their A level results last year, with potential repercussions for equality and social mobility. They also highlight that this is a substantial risk for this year’s process – perhaps even more so without the expectation of algorithmic moderation: grading pupils fairly in the absence of externally set and marked assessments is setting teachers an almost impossible task.
The working paper ‘Inequalities in young people’s education experiences and wellbeing during the Covid-19 pandemic’ is available here.
Learn more about our project on the impact of the pandemic on young people here.
The UKRI-funded UCL CEPEO / LSE Covid-19 survey records information from a sample of 4,255 respondents, a subset of the 6,409 respondents who consented to recontact as part of the Wellcome Trust Science Education Tracker (SET) 2019 survey. The SET study was commissioned by Wellcome with additional funding from the Department for Education (DfE), UKRI, and the Royal Society. The original sample was a random sample of state school pupils in England, drawn from the National Pupil Database (NPD) and Individualised Learner Record (ILR). To correct for potentially systematic patterns of respondent attrition, non-response weights were calculated and applied to all analyses, aligning the sample profile with that of the original survey and the profile of young people in England.
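The logic of cell-based non-response weighting described above can be sketched as follows. All category shares here are invented for illustration, not the actual SET or CEPEO/LSE figures:

```python
# Illustrative sketch of non-response weighting: each respondent receives the
# ratio of their group's share in the target population to that group's share
# among respondents. All shares below are made-up numbers.

# Target profile (e.g. from the original survey sample)
target_share = {"FSM": 0.15, "non-FSM": 0.85}

# Profile actually observed among survey respondents
respondent_share = {"FSM": 0.10, "non-FSM": 0.90}

# Weight for each group: target share / respondent share
weights = {g: target_share[g] / respondent_share[g] for g in target_share}

# Apply to a toy respondent list and check the weighted profile hits the target
respondents = ["FSM"] * 10 + ["non-FSM"] * 90
total_weight = sum(weights[g] for g in respondents)
weighted_fsm_share = sum(weights[g] for g in respondents if g == "FSM") / total_weight

print(round(weights["FSM"], 2))      # 1.5: under-represented group weighted up
print(round(weighted_fsm_share, 2))  # 0.15: weighting restores the target profile
```

Weighting of this kind corrects the sample profile on observed characteristics, but cannot correct for attrition related to unobserved ones.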
This work is funded as part of the UKRI Covid-19 project ES/V013017/1 “Assessing the impact of Covid-19 on young peoples’ learning, motivation, wellbeing, and aspirations using a representative probability panel”.
This work was produced using statistical data hosted by ONS. The use of the ONS statistical data in this work does not imply the endorsement of the ONS in relation to the interpretation or analysis of the statistical data. This work uses research datasets which may not exactly reproduce National Statistics aggregates.
By IOE Editor, on 3 June 2021
This blog post first appeared on the University of Bristol Economics blog.
Simon Burgess, June 2021
Yesterday saw the resignation of Sir Kevan Collins, who was leading the Government’s Education Recovery Programme. The pandemic has hit young people very hard, causing significant learning losses and reduced mental health; the Recovery Programme is intended to rectify these harms and to repair the damage to pupils’ futures. His resignation letter labelled the Government’s proposal as inadequate: “I do not believe that it is credible that a successful recovery can be achieved with a programme of support of this size.”
The rejection of this programme, and the offer of a funding package barely a tenth of what is needed, is hard to understand. It is certainly not efficient: the cost of not rectifying the lost learning is vastly greater than the £15 billion cost (discussed below). And it is manifestly unfair, for example when compared to the enormous expense incurred to look after older people like me. The vaccination programme is a colossal and brilliant public undertaking; we need something similar to protect the futures of young people. We have also seen educational inequality widen dramatically across social groups: children from poorer families have fallen yet further behind. If we do not have a properly funded educational recovery programme, any talk of “levelling up” is just noise.
Context – Education recovery after learning loss
An education recovery plan is urgently needed because of all the learning lost during school closures. For the first few months of the pandemic and the first round of school closures, we were restricted to just estimating the learning loss. Once pupils started back at school in September, data began to be collected from online assessment providers to actually measure the learning loss. The Education Endowment Foundation is very usefully collating these findings as they come in. The consensus is that the average loss of learning is around 2-3 months, with the most recent results the most worrying. Within that average, the loss is much greater for students from disadvantaged backgrounds, and the loss is greater for younger pupils. To give only the most recent example, the latest data shows that schools with high fractions of disadvantaged kids saw falls in test scores twice as severe as those in low-poverty schools, and that Year 1 and Year 2 pupils experienced much larger falls in attainment. The Government’s proposed “Recovery” spending for precisely these pupils would be next to nothing, as Sir Kevan Collins notes in his Times article today: “The average primary school will directly receive just £6,000 per year, equivalent to £22 per child”.
The Government’s proposals amount to roughly £1 billion for more small-group tutoring and around £500m for teacher development and training. I am strongly in favour of small-group tutoring, but the issue is the scale: this is nowhere near enough. It is widely reported that Sir Kevan Collins’ estimate of what was required was £15 billion, based on a full analysis of the lost learning and the mental health and wellbeing deficits that both need urgent attention. For comparison, EPI helpfully provide these numbers on education recovery spending: the figure for England is equivalent to around £310 per pupil over three years, compared to £1,600 per pupil in the US, and £2,500 per pupil in the Netherlands.
Why might the programme have been rejected? Here are some arguments:
“It’s a lot of money”
It really isn’t. An investment of £15bn is dwarfed by the cost of not investing. Time in school increases a child’s cognitive ability, and prolonged periods of missed school have consequences for skill growth. We now know that a country’s level of skills has a strong (causal) effect on its economic growth rate. This is a very, very large scale problem: all of the 13 cohorts of pupils in school have lost skills because of school closures. So from the mid-2030s, all workers in their 20s will have significantly lower skills than they would otherwise have. And for the 40 years following that, between a quarter and a third of the entire workforce will have lower skills. Lost learning, lower skills, lower economic growth, lower tax revenues. Hanushek and Woessmann, two highly distinguished economists, compute this value for a range of OECD countries. For the UK, assuming that the average amount of lost learning is about half a year, their results project the present discounted value of all the lost economic growth at roughly £2,150 billion (£2.15 trillion). Almost any policy will be worthwhile to mitigate such a loss.
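The scale of such a projection can be illustrated with a stylised present-discounted-value calculation. Every parameter below is an assumption chosen for illustration, not a figure from Hanushek and Woessmann’s actual analysis:

```python
# Stylised present-discounted-value (PDV) of output lost to a small permanent
# reduction in the growth rate. All parameters are illustrative assumptions,
# not the figures used by Hanushek and Woessmann.

gdp = 2_200          # current GDP in £bn (rough UK figure, assumed)
baseline_g = 0.015   # baseline annual real growth rate (assumed)
penalty = 0.0015     # growth-rate reduction from lower skills (assumed)
discount = 0.03      # annual discount rate (assumed)
horizon = 80         # years over which affected cohorts are in work (assumed)

pdv_loss = 0.0
for t in range(1, horizon + 1):
    gdp_baseline = gdp * (1 + baseline_g) ** t
    gdp_damaged = gdp * (1 + baseline_g - penalty) ** t
    # Discount each year's shortfall back to the present
    pdv_loss += (gdp_baseline - gdp_damaged) / (1 + discount) ** t

print(f"PDV of lost output: £{pdv_loss:,.0f}bn")
```

Even a growth penalty of a small fraction of a percentage point, compounded over decades, produces a discounted loss measured in trillions – which is why almost any feasible recovery spending passes a cost-benefit test.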
“Kids are resilient and the lost learning will sort itself out”
This is simply wishful thinking. We should not be betting the futures of 7 million children on this basis. Economists estimate the way that skills are formed, and one key attribute of this process can be summarised as “skills beget skills”. One of the first statements of this came from Heckman and co-authors, and more recent research – including work using genetic data – has confirmed it. This implies that if the level of skills has fallen, then the future growth rate of skills will also be lower, assuming nothing else is done. It has also been widely shown that early investments are particularly productive. Given this, we would expect pupils suffering significant learning losses to fall further behind rather than catch up. Sir Kevan Collins makes exactly this point in his resignation letter: “learning losses that are not addressed quickly are likely to compound”.
Perhaps catch-up can be achieved by pupils and parents working a bit harder at home? There is now abundant evidence from many countries including the UK that learning at home is only effective for some, typically more advantaged, families. For other families, it is not for want of trying or caring, but their lack of time, resources, skills and space makes it very difficult. The time for home learning to make up the lost learning was March 2020 through March 2021; if it was only patchily effective then, it will be less effective from now on.
“There’s no evidence to support these interventions”
This is simply not true, as I set out when recommending small-group tutoring last summer. There is abundant evidence that small-group tutoring is very effective in raising attainment. There is also strong evidence that lengthening the school day is also effective.
This blog is less scientifically cold and aloof than most that I write. I struggle to make sense of the government’s proposals to provide such a half-hearted, watered-down recovery programme, to value so lightly the permanent scar on pupils’ futures. The skills and learning of young people will not magically recover by themselves; the multiple blows to mental health and wellbeing will not heal if ignored. The Government’s proposal appears to have largely abandoned them. To leave the final words to Sir Kevan Collins: “I am concerned that the package announced today betrays an undervaluation of the importance of education, for individuals and as a driver of a more prosperous and healthy society.”
By IOE Editor, on 4 May 2021
By Paul Gregg
The old saying is that “If you ask a stupid question, you get a stupid answer”. The government-sponsored report from the Commission on Ethnic and Racial Disparities does just this on ethnic pay gaps. The central point is about comparing like-with-like when considering access to better-paying jobs in Britain. This blog post starts with a balanced assessment of what ethnic pay gaps in Britain actually look like, before explaining why the ONS analysis that the Commission draws on gets it so wrong.
Ethnic pay gaps from the Labour Force Survey
If we estimate the average (mean) pay gap between a Black, Asian, or Minority Ethnic (BAME) person and their White counterpart, living in the same region, and with similar educational achievement, using the nationally representative Labour Force Survey (LFS) of all with positive earnings, we find an ethnic pay gap of 14%. So similarly educated BAME people from the same place earn 14% less than White people. This is almost exactly the same pay gap as that found between men and women, and for those born into less advantaged families, compared to those born to more affluent families, again given the same educational achievement. The British labour market creates massive inequality of opportunity between people achieving the same education, across ethnicity, gender, and family background.
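The like-with-like logic behind this estimate can be sketched as a within-cell comparison: estimate the gap separately within each region-by-education cell, then average across cells. The pay figures below are invented for illustration; the actual analysis uses the Labour Force Survey:

```python
# Sketch of a like-with-like pay-gap estimate: compare BAME and White mean pay
# *within* each region x education cell, then average the within-cell gaps.
# All hourly pay figures are invented.
from statistics import mean

# (region, education, ethnicity) -> hourly pay observations (invented)
pay = {
    ("London", "degree", "White"): [24.0, 26.0, 30.0],
    ("London", "degree", "BAME"):  [21.0, 22.5, 25.5],
    ("North East", "degree", "White"): [18.0, 20.0],
    ("North East", "degree", "BAME"):  [15.5, 17.5],
}

cells = {(region, edu) for (region, edu, _) in pay}
gaps = []
for region, edu in sorted(cells):
    white = mean(pay[(region, edu, "White")])
    bame = mean(pay[(region, edu, "BAME")])
    gaps.append((white - bame) / white)  # proportional gap within the cell

print(f"Average within-cell pay gap: {mean(gaps):.0%}")
```

In practice this adjustment is done by regression with controls for region and education, but the intuition is the same: only people in the same place with the same qualifications are compared with each other.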
How does this compare with the Commission findings?
The ethnic pay gap comparing like with like, then, is 14%. So how on earth did the Commission come up with a 2.3% gap? There are two major parts to this.
The first is the region people live in. The ONS report that the Commission draws on does not compare people in the same region. But ethnic minorities are not evenly spread across the country. They live disproportionately in London, the South East and major cities like Birmingham and Manchester. These are areas with higher pay but also higher living costs, especially in terms of housing costs. The 2.3% gap is comparing the pay of BAME groups living in high-cost London to White populations living in low-cost Wales and the North East of England etc. This doesn’t make sense. One approach to make this more comparable would be to adjust for the housing costs of where people live, but the easier approach is to compare BAME Brummies to White Brummies, and BAME Londoners to White Londoners – i.e. to compare BAME and White people living in the same region. The ONS study does give a region-by-region breakdown of the ethnic pay gap, which indicates a gap between White and BAME groups, irrespective of where people live, of around 7%. This is one step closer to a balanced assessment but was not the headline figure given by the Commission.
Well Paid Jobs
The second issue needs a little more explanation. Britain’s jobs have a wide distribution of pay levels. The minimum wage means that pay differences at the bottom are not that great. Pay of the person in the middle of the pay distribution was £13.68 per hour in 2020 (pre-pandemic). This is the point where half the employed population earn more and half less – the median. Low-paid people earn between £8.50 and £9 per hour (a little above 60% of the median). One quarter earn more than 1.5 times this median figure, 10% earn more than 3 times it, and 5% more than 7 times. In other words, there is a small minority of jobs with extremely high pay. These are predominantly in law, business, and finance.
The ONS analysis which the Commission draws so heavily on completely ignores access to these top jobs, because it measures pay gaps using the median – the gap between the person in the middle of the White earnings distribution and the person in the middle of the BAME one. This excludes differences in access to high-paying jobs from the analysis. The average based on the mean (which is what most people think of as the average), rather than the median, assesses the gap across all jobs. Doing this moves the pay gap from 7% or so for people in the same regions to 13%. Surely any assessment of disparities in opportunity would include access to the elite jobs in society as well as more typical jobs. It has to – to do otherwise is just stupid. The point is well made in the report in looking at BAME groups in the Civil Service (Figure 9, p12). Across departments as a whole, about 15% of staff are from BAME groups. But in senior roles, the figure is around half that. Ethnic minorities of equal educational attainment systematically do not get opportunities leading to Britain’s higher-paying jobs.
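The difference between median and mean gaps can be seen in a toy example where two groups have identical medians but only one group holds the very highest-paid jobs. The pay figures are invented:

```python
# Toy illustration: the median pay gap can be zero even when one group is shut
# out of the highest-paying jobs. Hourly pay figures are invented.
from statistics import mean, median

group_a = [9, 11, 14, 20, 95]  # includes one very highly paid job
group_b = [9, 11, 14, 20, 30]  # same middle of the distribution, no top job

median_gap = (median(group_a) - median(group_b)) / median(group_a)
mean_gap = (mean(group_a) - mean(group_b)) / mean(group_a)

print(f"median gap: {median_gap:.0%}")  # 0%: the medians are identical
print(f"mean gap:   {mean_gap:.0%}")    # large: the mean picks up the top jobs
```

A median-based comparison is blind, by construction, to everything that happens in the top half of the distribution; a mean-based comparison is not.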
Educational achievement, as highlighted by the Commission report, has been a huge success story: educational levels in the BAME community are now a little higher than in the White population. Adjusting for this too, to compare Black and White people with the same education and so look at disparities in opportunities, pushes the pay gap up a little further to 14%. Comparing individuals with the same education, therefore, makes very little difference to the pay gap, as you would expect. The inequalities of opportunity lie beyond education, in the labour market.
Britain’s ethnic minorities are well educated but are not progressing in the labour market to the highest paid jobs. Yet a key report on ethnic disparities in opportunities chooses to assess pay gaps in a way that ignores this entirely. How stupid is that?
By IOE Editor, on 23 April 2021
Jake Anders and Carl Cullinane
The COVID-19 pandemic and its impact is a generation-defining challenge. One of its most concerning aspects, particularly in the long term, is the already profound effect it has had on young people’s lives. Disruption to their development, wellbeing and education could have substantial, long-lasting effects on later life chances, particularly for those from lower-income homes. Evidence already shows disadvantaged pupils lagging 5 months behind their peers. This poses a unique challenge for educational policy and practice, with the scale of the disruption requiring solutions to match.
In order to address these impacts, it is vital that we fully understand these effects, and in particular, the disproportionate burden falling on those from certain groups, including those from lower socio-economic backgrounds and minority ethnic groups. This needs high-quality data. Recovering from the effects of the past 12 months will be a long-term project, and to reflect this we need research of similar ambition.
The COVID Social Mobility and Opportunity Study (COSMO for short), launched today, seeks to play this role, harnessing the power of longitudinal research to capture the experiences of a cohort of young people for whom the pandemic has had an acute impact, and its effects on their educational and career trajectories.
This country has a grand tradition of cohort studies, including the pioneering 1958 National Child Development Study and the 1970 British Cohort Study. Such studies are a key tool in understanding life trajectories and the complex factors that shape them. And they are particularly vital when it comes to measuring the impact of events that are likely to last through someone’s life course. The existing longitudinal studies, including those run by our colleagues in the UCL Centre for Longitudinal Studies, have played a huge role in understanding the impacts of the pandemic on society in the last year.
But there is a key gap in the current portfolio of cohort studies: and that is the generation of young people at the sharp end of their school education, who would have taken GCSEs this summer, and within a matter of months will be moving onto new pathways at sixth form, further education, traineeships and apprenticeships. The impacts on this group are likely to be profound and long-lasting, and understanding the complex elements that have aggravated or mitigated these impacts is crucial.
A variety of studies have already collected some such data, providing emerging evidence of inequalities in pupils’ outcomes and experiences of remote schooling. This has highlighted alarming challenges for pupils’ learning and wellbeing. However, to develop a full understanding we require the combination of rich, representative, survey data on topics such as learning loss experiences, wellbeing, and aspirations, linked with administrative data on educational outcomes, and concurrent interventions. We also need to follow up those young people over the next few years as they pass through key stages of education and their early career, to understand what has happened next, ideally long into their working lives.
Such evidence will be key in shaping policies that can help to alleviate the long-term impacts on young people. Which groups have suffered most, and how? How long will these impacts persist, and how can we reduce their effect? These will be fundamental questions for national policymakers, education providers, employers and third sector organisations in the coming years, both in the UK and internationally.
That’s why we’re extremely excited to be launching COSMO with funding from UK Research and Innovation (UKRI)’s Ideas to Address COVID-19 response fund. Our study will deliver exactly that data over the coming years, helping to inform future policy interventions that will be required, given that the huge effects of the pandemic are only just beginning. As the British Academy pointed out on the anniversary of the first COVID lockdown – this is not going to go away quickly.
Beginning this autumn, the study will recruit a representative sample of 12,000 current Year 11 pupils across England, with sample boosts for disadvantaged and ethnic minority groups plus targeting of other hard-to-reach groups. Young person, parent, and school questionnaires – enhanced with administrative data from the DfE – will collect rich data on young people’s experiences of education and wellbeing during the past challenging 12 months, along with information on their transitions into post-16 pathways via this summer’s unusual GCSE assessment process.
The study is a collaboration between the UCL Centre for Education Policy & Equalising Opportunities (CEPEO), the UCL Centre for Longitudinal Studies (CLS) and the Sutton Trust. The study will harness CEPEO’s cutting-edge research focused on equalising opportunities across the life course, seeking to improve education policy and wider practices to achieve this goal. The Sutton Trust also brings 25 years of experience using research to inform the public and achieve policy change in the area of social mobility.
COSMO will also be part of the family of cohort studies housed in the UCL Centre for Longitudinal Studies, whose expertise in life course research is world-renowned. We are also working closely with Kantar Public, who will lead on delivering the fieldwork for this large study, alongside NatCen Social Research. More broadly still, all our work will be co-produced with project stakeholders including the Department for Education and the Office for Students. We are also working with partners in Scotland and Wales to maximise comparability across the nations.
We are excited for COSMO to make a big contribution both to the landscape of educational research and to the post-pandemic policy environment, and we are delighted to be getting to work delivering on this promise over the coming years.
By IOE Editor, on 29 March 2021
By Ruth Lupton, Stephanie Thomson, Lorna Unwin and Sanne Velthuis
Inequalities in post-16 progression
The continued use of GCSEs as a blunt instrument for dividing pre-and post-16 education is one of the main causes of inequality in the English system, with impacts extending well into adulthood. The system asks the least confident, least academically successful young people, often (but not always) facing greater social and economic disadvantages, to make the most complex, life-shaping choices at the youngest age. Contemporaries with high academic attainment can progress more straightforwardly in a simpler, better understood, and historically better-funded system, often postponing decisions about occupational directions until age 18, 19 or later.
In our new research, funded by the Nuffield Foundation, we investigated the post-16 trajectories of young people who we described as ‘lower attainers’ – the roughly 40% of each GCSE cohort who each year do not achieve a grade 4 (formerly a C) in both English and maths. We presented our findings at a recent CEPEO webinar.
Our research employed a mixed-methods approach combining analysis of data from the National Pupil Database (NPD) and Individualised Learner Record (ILR), collection and analysis of local data about course and apprenticeship opportunities and entry requirements, and interviews and focus groups.
It shows how, in making the transition to the post-16 phase and attempting to progress beyond GCSEs, ‘lower attainers’ face multiple barriers including: inconsistent careers information and guidance; restrictive entry requirements that are often based on English and maths GCSEs (even when it is not clear why specific grades are needed); considerable local variation in accessible provision; and the low availability and poor visibility of apprenticeships. Apprenticeships are not the accessible pathway for ‘lower attainers’ that many people imagine, with only 5.8% moving into an apprenticeship at 16 in the 2015 cohort, for example.
It also shows that many young people start their post-16 phase on courses below the levels of learning they have already achieved and that learners with similar attainment at 16 enter the post-16 phase at different levels in different places, partly due to local differences in the mix of provision and institutional practices. This has potential repercussions for the achievement of Level 2 and Level 3 qualifications between 16 and 18/19.
Making the problems and solutions more visible
All this points to a complex and locally variable picture that needs to be better understood. But achieving clarity and understanding is very difficult due to the way attainment is measured and administrative data is collected, organised and made accessible.
Published statistics do not make the achievements and trajectories of lower attaining young people very visible, probably because much of the policy focus to date has been on raising KS4 attainment at the standard benchmarks. Coverage of lower-level qualifications (and of spatial variations) still lags behind.
And beyond the published statistics, there are major problems with the capacity for detailed analysis of the underlying data.
One issue is the data itself. Currently, we have two different large-scale administrative datasets for the post-16 phase – the NPD and ILR – with different definitions, variables and standards of documentation, and including different learners. Getting access to these involves a lengthy and difficult application procedure, and working with the data to summarise what learners are doing and achieving is a painstaking process. Looking at academic routes is easier than tracking routes through vocational courses and apprenticeships because matching NPD (Key Stage 4) to NPD (Key Stage 5) is easier than matching NPD to ILR. It is easier to look at outcomes than it is to understand progress and what learners are actually doing. So analysis often focuses on qualifications achieved, as the data is collected in this way.
We tried a different approach. We developed a measure of a learner’s ‘main level of learning’ – the level that they were spending most of their guided learning hours on – and thus were able to illuminate progression (or not) from levels already achieved. If the data sources were easier to access and use, much more could be done to analyse and explain course changes and progression between 16 and 19 and to understand what constitutes success and progress.
At a local level, basic information on the system in terms of the nature of provision at any given time as well as associated entry requirements is not routinely collected. To shed light on these issues, we had to collect and aggregate this information from provider and national agency websites, a labour-intensive task. The lack of available data leaves policy-makers unsighted as to what is on offer, who is missing out, and which gaps need to be plugged.
The other issue is analytic capacity. Even if there were better data, there is a paucity of academics with interests and expertise in further education and training compared with the numbers working on school and higher education research. And we need more research teams who can combine quantitative and qualitative methods to investigate the relationship between the pre and post-16 phases. Changing this now will require not just funding for projects and centres but investment in early-career scholarship, addressing status issues and links to teaching. And there are insufficient links between people who have the skills for data analysis and practitioners who understand how the system works on the ground. Cuts to local authority funding have further diminished local capacity and intelligence.
Thus, if the characteristics and trajectories of lower attainers at GCSE are to be better understood on an ongoing basis, three substantial changes will need to be made:
- Routine reporting of sub-benchmark achievement in more detail, and at relevant subnational scales.
- Improvement in data infrastructure and access.
- Increase in research and analysis capacity, both in local government and in universities and research institutes, and better links between them.
These will not be cheap. But if the government is serious about eroding the long-standing inequalities in post-16 progression, it simply must invest in making the situation more visible.
The research reported here was funded by the Nuffield Foundation, but the views expressed are those of the authors and not necessarily those of the Foundation. Visit www.nuffieldfoundation.org
By IOE Editor, on 18 March 2021
England’s school system has a high level of accountability – and a high level of accountability-related stress.
By John Jerrim
This blog post reports findings from Nuffield Foundation-funded research conducted into teacher health and wellbeing.
It is no secret that many in education dislike certain aspects of England’s school accountability system. Indeed, accountability is often blamed for causing high levels of stress among the teacher workforce.
Yet we know surprisingly little about the link between accountability and teacher wellbeing.
This blog post – based upon a new research paper I am publishing with colleagues today – looks at international evidence on this issue from TALIS 2018. (TALIS is the OECD’s Teaching and Learning International Survey.)
Do high accountability school systems have teachers who are more stressed about this aspect of their job?
As part of TALIS, teachers were asked how much stress was caused by different aspects of their job. This included “being held responsible for pupil achievement” – i.e. accountability.
In another international survey, PISA, headteachers were asked various questions about accountability, such as how school assessment data is used, whether school examination results are made publicly available (e.g. school league tables) and if there is a school inspectorate (e.g. Ofsted).
Using this data, we have created a “school accountability” scale, capturing the extent of school accountability systems used across the world. Countries receive a score between -1 and +1, where a higher number corresponds to more accountability measures.
In the chart below, the extent of accountability in the school system is plotted along the horizontal axis and the percentage of teachers who feel stressed about accountability on the vertical axis.
There are two key points of note.
First, England sits towards the top right-hand corner: we have lots of accountability in our school system, and also a lot of accountability-driven stress among teachers. (68% of teachers in England report feeling accountability-related stress, compared to a cross-country average of around 45%.)
Second, there is a positive cross-national correlation, though this is relatively weak (the correlation coefficient is around 0.3). In other words, teachers do tend to be more stressed about accountability in countries where there is more accountability within the school system. Yet this relationship is not that strong – and certainly not deterministic.
For instance, there are countries with systems of school accountability similar in extent to England’s – most notably New Zealand and the United States – where teachers are a lot less likely to be stressed by this part of their job.
Now, as I have written before, results from such cross-national analyses need to be treated very carefully. The chart above should be treated as a conversation starter, rather than being used as ‘proof’ of anything more.
It does nevertheless raise important questions about the pros and cons of England’s current system of school accountability. In particular, do we have the right balance between quality assurance of schools and ensuring that this does not stress teaching staff out?
How is accountability-induced stress among teachers linked to the stress felt by headteachers?
Within our paper, we also consider how stress induced by accountability is shared among staff within the same school.
For instance, do teachers feel more stressed about accountability when their boss – the headteacher – feels stressed about this part of the job as well?
As the chart below, which relates to all TALIS countries, indicates, the answer is to some extent ‘yes’. Specifically, in comparison to teachers whose head does not feel stressed by accountability at all, teachers are around seven percentage points more likely to feel stressed by accountability if their headteacher says they feel ‘a lot’ of stress about this part of their job as well. To put this figure into context, on average across countries, approximately 45% of teachers say that they feel stressed by accountability.
So, there is indeed a relationship. But the difference is not particularly strong.
Accountability-induced stress is – to some extent – concentrated within particular schools.
Our analysis has also considered whether teachers are more likely to feel stressed about accountability if their colleagues (i.e. other teachers within their school) feel stressed by accountability as well.
Here, we found strong evidence of a positive relationship. For instance, again looking across all TALIS countries, a teacher is twice as likely to say that they feel stressed by accountability if their colleagues also feel stressed by this part of their job.
In other words, there are some schools where the stress caused by accountability is a particularly big problem that needs to be addressed.
We still need to know much more
So, England is a high-accountability, high-accountability-stress country. We know there is a modest link between the stress of headteachers and the stress of their staff. And, to some extent, the problem of accountability-induced stress is clustered among teachers working within specific schools.
Yet, for all the talk about how school league tables and Ofsted inspections negatively affect teachers, we still know relatively little about the pros and cons of England’s extensive system of school accountability.
With the recent pause in many aspects of the school accountability system in England due to the Covid-19 crisis, now could be the ideal time for policymakers to take a moment and consider whether we have the right quality assurance mechanisms in place within our schools.
The project has been funded by the Nuffield Foundation, but the views expressed are those of the authors and not necessarily the Foundation. Visit www.nuffieldfoundation.org.
By IOE Editor, on 17 March 2021
By Patrick Sturgis, Lindsey Macmillan, Jake Anders, Gill Wyness
Children and young people are, mercifully, at extremely low risk of death or serious illness from the coronavirus and, for this reason, they are likely to be the last demographic in the queue to be vaccinated, if they are vaccinated at all. Yet, there are good reasons to think that a programme of child vaccination against covid-19 will eventually be necessary in order to free ourselves from the grip of the pandemic. In anticipation of this future need, clinical trials assessing the safety and efficacy of existing covid-19 vaccines on young people have recently commenced in the UK.
While children and young people experience much milder symptoms of covid-19 than older adults, there is currently a lack of understanding of the long-term consequences of covid-19 infection across all age groups and there have been indications that some children may be susceptible to potentially severe and dangerous complications. Scientists also believe that immunisation against covid-19 in childhood may confer lifetime protection (£), reducing the need for large-scale population immunisation in the future.
Most importantly, perhaps, vaccination of children may be required to minimise the risk of future outbreaks in the years ahead. If substantial numbers of adults refuse immunisation and the vaccines are, as seems likely, less than 100% effective against infection, vaccination of children will be necessary if we are to achieve ‘herd immunity’.
We now know a great deal about covid-19 vaccine hesitancy in general populations around the world from a large and growing body of survey and polling data and, increasingly, from actual vaccine uptake. Much less is known, however, about vaccine hesitancy amongst children and younger adults. Here, we report preliminary findings from a new UKRI-funded survey of young people carried out by Kantar Public for the UCL Centre for Education Policy and Equalising Opportunity (CEPEO) and the London School of Economics. The survey provides high-quality, representative data on over 4,000 young people in England aged between 13 and 20, with interviews carried out online between November 2020 and January 2021. Methodological details of the survey are provided at the end of this blog.
Respondents were asked, “If a coronavirus vaccine became available and was offered to you, how likely or unlikely would you personally be to get the vaccine?”. While the majority (70%) of young people say they are likely or certain to get the vaccine, this includes 25% who are only ‘fairly’ likely. Worryingly, nearly a third express some degree of vaccine hesitancy, saying either that they definitely won’t get the vaccine (9%) or that they are not likely to do so (22%).
Although there are differences in question wording and response alternatives, this represents a substantially higher level of vaccine hesitancy than a recent Office for National Statistics (ONS) survey of UK adults, which found just 6% expressing vaccine hesitancy, although this rose to 15% amongst 16 to 29 year olds.
Differences in vaccine hesitancy across groups
We found little variation in hesitancy between male and female respondents (32% female and 29% male), or between age groups. However, as can be seen in the chart below, there were substantial differences in vaccine hesitancy between ethnic groups. Black young people are considerably more hesitant to consider getting the vaccine than other ethnic groups, with nearly two thirds (64%) expressing hesitancy compared to just a quarter (25%) of those who self-identified as White. Young people who identified as mixed race or Asian expressed levels of hesitancy between these extremes, with a third (33%) of mixed race and 39% of Asian young people expressing vaccine hesitancy. This ordering matches the findings for ethnic group differences in the ONS survey, where 44% of Black adults expressed vaccine hesitancy compared to just 8% of White adults.
To explore potential sources of differences in vaccine hesitancy, respondents were asked to state their level of trust in the information provided by a range of different actors in the coronavirus pandemic. The chart below shows wide variability in expressed levels of trust across different sources between ethnic groups, but most notably between Black young people and those from other ethnic groups. Young people self-identifying as Black were considerably less likely to trust information from doctors, scientists, the WHO and politicians, and more likely to trust information from friends and family, than those from other groups – although in terms of overall levels, doctors, scientists and the WHO were the most trusted sources across all groups. Encouragingly, only 5% of young people say they trust information from social media, a figure which was consistently low across ethnic groups.
We also find evidence of a small social class gradient in vaccine hesitancy, with a quarter (25%) of young people from families with at least one parent with a university degree expressing vaccine hesitancy compared to a third (33%) of young people with no graduate parent.
We can also compare levels of vaccine hesitancy according to how young people scored on a short test of factual knowledge about science. Vaccine hesitancy was notably higher amongst respondents categorised as ‘low’ in scientific knowledge (36%) than amongst those with ‘average’ (28%) or ‘high’ (22%) scientific knowledge. This suggests that vaccine hesitancy may be related, in part, to the extent to which young people are able to understand the underlying science of viral infection and inoculation and to reject pseudoscientific claims and conspiracy theories.
How much are differences in vaccine hesitancy just picking up underlying variation between ethnic groups in scientific knowledge and broader levels of trust? In the chart below, we compare raw differences in vaccine hesitancy by ethnic group, sex, and graduate parent status (blue plots) with differences after taking account of differences in scientific knowledge and levels of trust in different sources of information about coronavirus. The inclusion of these potential drivers of vaccine hesitancy does not account for all of the differences between ethnic and social class groups. While Black young people are around 40 percentage points more likely to express vaccine hesitancy than their White counterparts, this is reduced to 33 percentage points when comparing Black and White young people with similar levels of scientific knowledge and (in particular) levels of trust in sources of coronavirus information.
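To make the logic of this kind of adjustment concrete, here is a minimal sketch with entirely invented numbers (none drawn from the survey): a raw gap in hesitancy rates between two hypothetical groups shrinks once we compare respondents within the same knowledge stratum, because the groups differ in their mix of knowledge levels.

```python
# Hypothetical illustration of raw vs knowledge-adjusted group gaps.
# Group "A" has more 'low knowledge' members than group "B"; within each
# knowledge stratum the gap between groups is smaller than the raw gap.

def make(group, knowledge, n, n_hesitant):
    """Build n synthetic respondents, the first n_hesitant marked hesitant."""
    return [{"group": group, "knowledge": knowledge, "hesitant": i < n_hesitant}
            for i in range(n)]

records = (
    make("A", "low", 60, 30) + make("A", "high", 40, 12) +
    make("B", "low", 20, 6) + make("B", "high", 80, 8)
)

def subset(group=None, knowledge=None):
    return [r for r in records
            if (group is None or r["group"] == group)
            and (knowledge is None or r["knowledge"] == knowledge)]

def rate(rows):
    return sum(r["hesitant"] for r in rows) / len(rows)

# Raw gap: ignores that group A contains more 'low knowledge' respondents.
raw_gap = rate(subset("A")) - rate(subset("B"))  # 0.42 - 0.14 = 0.28

# Adjusted gap: average the within-stratum gaps, weighting each stratum by
# its share of the whole sample (simple direct standardisation).
adjusted_gap = sum(
    (len(subset(knowledge=k)) / len(records)) *
    (rate(subset("A", k)) - rate(subset("B", k)))
    for k in ("low", "high")
)  # 0.4 * 0.2 + 0.6 * 0.2 = 0.20
```

In the blog's analysis the adjustment is done within a statistical model rather than by stratification, but the intuition is the same: part, though not all, of the raw gap reflects differences in the composition of the groups.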
Our survey shows high levels of vaccine hesitancy amongst young people in England, which should be a cause for concern, given the likely need to vaccinate this group later in the year. We also find substantial differences in hesitancy between ethnic groups, mirroring those found in the adult population, with ethnic minorities – and Black young people in particular – more likely to say they are unlikely or certain not to be vaccinated. These differences seem to be related to the levels of trust young people have in different sources of information about coronavirus, with young Black people more likely to trust information from friends and family and less likely to trust health professionals and politicians.
There are reasons to think that actual vaccine take-up may be higher than these findings suggest. First, Professor Ben Ansell and colleagues have found a decrease in hesitancy amongst adults between October and February, a trend which was also evident in the recent ONS survey. It seems that hesitancy is declining amongst adults as the vaccine programme is successfully rolled out with no signs of adverse effects, and this trend may also emerge amongst young people. Second, given that parental consent will be required to vaccinate under-18s, parental hesitancy may be as important for take-up as young people’s own views.
There may also have been some uncertainty in our respondents’ minds about what is meant by ‘being offered’ the vaccine, given there were no vaccines authorised for young people at the time the survey was conducted and no official timetable for immunisation of this group. Nonetheless, this uncertainty cannot explain the large differences we see across groups, particularly those between White young people and those from ethnic minority groups.
If the vaccine roll out is to be extended to younger age groups in the months ahead, we will face a considerable challenge in tackling these high levels of and disparities in vaccine hesitancy.
The UKRI Covid-19 funded UCL CEPEO / LSE survey records information from a sample of 4,255 respondents, a subset of the 6,409 respondents who consented to recontact as part of the Wellcome Trust Science Education Tracker (SET) 2019 survey. The SET study was commissioned by Wellcome with additional funding from the Department for Education (DfE), UKRI, and the Royal Society. The original sample was a random sample of state school pupils in England, drawn from the National Pupil Database (NPD) and Individualised Learner Record (ILR). To correct for potentially systematic patterns of respondent attrition, non-response weights were calculated and applied to all analyses, aligning the sample profile with that of the original survey and the profile of young people in England. Our final sample consists of 2,873 (76%) White, 208 (6%) Black, 452 (12%) Asian, 196 (5%) Mixed, and 50 (1%) Other ethnic groups. The Asian group contains respondents who self-identified as Asian British, Indian, Pakistani, Bangladeshi, Chinese or ‘other Asian’.
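In its simplest post-stratification form, the non-response weighting described above amounts to up-weighting respondents from under-represented cells so the weighted sample matches the target profile. A toy sketch with invented shares (not the survey's actual weighting scheme, which aligns to the original sample and population profile):

```python
# Toy post-stratification: each respondent's weight is the ratio of their
# cell's population share to its sample share. Shares below are invented
# purely for illustration.

population_share = {"graduate_parent": 0.40, "no_graduate_parent": 0.60}

# A sample that over-represents respondents with a graduate parent (50/50
# split, versus 40/60 in the population).
sample = ["graduate_parent"] * 50 + ["no_graduate_parent"] * 50

sample_share = {cell: sample.count(cell) / len(sample) for cell in population_share}
weights = [population_share[cell] / sample_share[cell] for cell in sample]

# After weighting, each cell's weighted share matches the population share.
weighted_grad_share = sum(
    w for cell, w in zip(sample, weights) if cell == "graduate_parent"
) / sum(weights)
```

Real survey weighting typically combines several such margins (e.g. by raking), but the principle of correcting differential attrition is the same.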
 We have not yet linked the survey data to the National Pupil Database and Individualised Learner Records, which will enable us to use indicators of eligibility for free school meals and IDACI. Currently, we use parent graduate status as a proxy for socio-economic status.
 Once the survey is linked to the National Pupil Database we will be able to look across a wider range of measures of school achievement.
 There were ten items in the quiz, ‘low’ knowledge equated to a score of 5 or less, ‘average’ knowledge to a score of 6 to 8, and ‘high’ knowledge to a score of 9 or 10. Note that this test was administered in the previous (2019) wave of the survey.
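The banding described in this note is a simple cut of the 0–10 quiz score; as a sketch (the function name is ours, not from the study):

```python
def knowledge_band(score):
    """Map a 0-10 science quiz score to the bands used in the analysis:
    0-5 -> 'low', 6-8 -> 'average', 9-10 -> 'high'."""
    if score <= 5:
        return "low"
    if score <= 8:
        return "average"
    return "high"
```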
This work is funded as part of the UKRI Covid-19 project ES/V013017/1 “Assessing the impact of Covid-19 on young peoples’ learning, motivation, wellbeing, and aspirations using a representative probability panel”.