Brian Creese
For most of my years working in and around FE and adult education I have not spent too much time thinking about GCSEs. Although GCSE re-sits account for a large cohort in the 16-18 sector, we at the IOE’s NRDC (National Research and Development Centre for Adult Literacy and Numeracy) have spent more of our time on the Skills for Life qualifications and on developing and then bedding in Functional Skills.
But following Alison Wolf’s report, published in the early years of the current administration, GCSEs are the only game in town. I recently attended a consultation at BIS concerning the new English and Mathematics GCSEs and their impact on post-16 education. As I am sure regular blog readers will know, there are changes to the content of both mathematics and English GCSE exams and these will be introduced for 16-18 year olds from 2016/17. Alongside this, all 16-18 students without A*-C English or mathematics now have to study for GCSE or an approved ‘stepping stone’ qualification. By 2020, the ‘ambition’ is for all adults (who now seem to be those over 19) to be on a GCSE path. As the DfE/BIS puts it, ‘GCSEs are as right for adults as they are for…’
Teenagers across England are waiting nervously for their GCSE, AS and A level results. Now new figures show that more of them are choosing to take “academic” subjects, such as the humanities, languages and sciences, until the end of school – an effect attributed to the new English Baccalaureate (EBacc) of five core subjects introduced in 2010 by Michael Gove, the former secretary of state for education.
The Joint Council for Qualifications has published an analysis of the subjects UK teenagers chose to take at A level and AS level in 2014. Its analysis points to some dramatic changes, both for GCSE qualifications taken by 16-year-olds in 2013 and for AS level qualifications taken by 17-year-olds in 2014.
GCSE entries for geography, history, French, German and Spanish all increased markedly from 2012 to 2013 – up 19.2%, 16.7%, 9.4%, 15.5% and 25.8% respectively. AS entries in geography, history and Spanish – all EBacc subjects – increased significantly between 2013 and 2014, as the graph shows. AS science entries increased as well, albeit less dramatically.
The EBacc effect
These increases are chalked up to the first signs of the “EBacc effect”. This is the fallout from the policy to include a measure on school league tables showing the proportion of 16-year-old students at each school who achieved good grades (A* to C) across five core subjects. These subjects are English, mathematics, science, a language other than English and history or geography.
The EBacc effect is real, and to my mind, mostly a good thing. Since its inception, state schools have been entering more and more students onto these GCSEs. In 2013, government figures showed 35% of state school students were entered on programmes that could lead to an EBacc, up from 23% in 2012 (in independent schools the figures are much higher). Of those students, 23% achieved the EBacc goal in 2013, up from 16% in 2012. Language entries, which had decreased sharply since 2004, increased to 48% of students.
This “EBacc effect” has now been shown to continue on to AS level, because students are likely to carry on with the subjects they took at GCSE. Given this parallel uptick in AS subject choices, more students will fit the profile that selective universities are looking for: students who choose “facilitating” subjects, which largely overlap with EBacc subjects.
This means that more and more students are enrolled on courses that will give them the most flexibility in choosing their futures, taking subjects that have both the breadth and depth to prepare them to progress in further or higher education, for work, for family life and for social and civic participation.
Driven by pressure on schools
So why have I qualified my enthusiasm? It’s because these increases are largely due to the accountability pressures schools perceive themselves to be under (pressures that become real in 2016), rather than to a fundamental philosophical shift towards providing all of our students with the curriculum provision they deserve.
Because schools are accountable for their students’ performance on qualifications, the notion of a broad and balanced education (to use a somewhat hackneyed phrase) only seems to apply to higher achievers. In England, there seems to be a policy consensus that lower achievers need a skills-based rather than a subject or knowledge-based curriculum.
The underlying assumption, unfortunately shared across the political spectrum, seems to be that up to 50% of children have a “style of learning” that is simply not compatible with the academic grind of GCSEs and A levels. Consequently – in the conventional wisdom – such students need more applied or vocational qualifications.
But if there’s a worthwhile set of knowledge, skills and understandings enshrined in EBacc subjects, then shouldn’t all students be pursuing them? Michael Young at the Institute of Education has pointed out that until quite recently, government policy on education systematically marginalised knowledge. He argues instead for a curriculum for all that is built around substantive content but is based on the understanding of important concepts and on universal values: that all students should be treated equally and “not just members of different social classes, different ethnic groups or as boys or girls”.
The right direction
The EBacc effect may be a pull in the right direction. The new accountability measures for 2016 that feature the best eight GCSE subjects could be a further incentive, but these are still high-stakes measures that will provoke some schools, understandably, to try to game the system. The unintended consequence could be that schools pay even less attention than they already do to lower achievers in their efforts to chase their slice of an already-cut pie.
For now, I’m reserving judgement because: a) I think the shift to base accountability on the best eight GCSEs is going in the right direction; and b) we don’t really know how schools will change their students’ subject entry patterns. And so many other changes are happening simultaneously.
For both GCSEs and A levels the level of demand has increased, examinations have reverted to being linear rather than modular, and the way GCSEs will be graded has changed. At the moment, we cannot predict whether these changes will also have an effect on which subjects schools offer all of their students, not simply the top half.
Tina Isaacs does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
This article was originally published on The Conversation.
Read the original article.
Chris Husbands
The government’s decision that only the first attempt at a GCSE will henceforth count towards a school’s place in the league tables is sensible. It is a response to widespread gaming of school GCSE outcomes. This has seen some schools entering young people for multiple exams in successive years and, at the extreme, basing their curricula on the demands of assessment and accountability rather than on the school’s (legal and moral) duty to provide a broad, balanced education.
Of course, government has not banned re-takes – although earlier announcements suggested that Ministers would like to tighten up on them. It has simply decided that only the first attempt at an examination will count towards a school’s performance data. For students, re-takes are sensible. I try something; I fail. I learn from my failure and I improve. This is exactly the sort of lesson we want young people to learn for work and for life. But secondary students have been entered early for examinations for entirely different reasons. In too many schools, the Key Stage 4 curriculum – extended in some to three years on often spurious grounds – has become a vehicle for delivering assessment arrangements rather than for meeting the needs of students. In some schools the Key Stage 4 curriculum is simply unfit for purpose.
But the Government’s decision raises questions that go much wider, about the relationship between assessment, curriculum and accountability. We look to a few, simple tools to do too much: assessment dominates our accountability framework. It’s only a couple of weeks since the Government declared that all 16- and 17-year-olds would need to study English and Mathematics until they secured a GCSE grade C or equivalent. Now the message is that only the first attempt at such a qualification will count for accountability purposes. It’s sensible for 16- and 17-year-olds to continue to study English and mathematics. It’s sensible for a system to be structured to ensure that as many as possible achieve good grades at 16 or later. But to confuse this by telling young people that their redoubled efforts won’t help their school makes less sense.
Some argue that there is a different way of looking at this: that young people should take assessments when they are ready. Some very able teenagers are ready for GCSE Mathematics at very high levels at 14, although in most cases they would get a deeper and richer understanding of the subject if they did the exam later. Others don’t reach such examination readiness until they are 18 – or later. So it might be much better if the examination system allowed for assessment when ready, much as graded assessments in music do. But if this is true, it makes little sense to publish accountability tables based on examination results at the arbitrary age of 16. We’d be impressed by a driving school – call it Cautious Cars – which boasted that 98% of its learners passed their driving test first time. But we’d be concerned if we learnt that Cautious Cars did not allow any learner to take the test until they’d completed 100 lessons. We might think another – Dashing Drivers, perhaps – effective if it boasted that learners took their test after just six lessons, but we’d be disappointed to learn that only 10% passed. Cautious Cars is an expensive banker, whereas Dashing Drivers is cheap but risky. Assessment and accountability are at odds.
Accountability matters, but we need to be much clearer about what we are holding schools accountable for: reaching a level – the proportion of young people achieving GCSE grade C or above – or progression (the progress made between entry to a school and leaving it). The two pull in different directions in relation to schools’ decision-making. Factor in curriculum and the challenges multiply: there are tensions between a broad, balanced curriculum – as was mandated in Kenneth Baker’s 1988 Education Reform Act – and the much looser curriculum free-for-all which has developed since the Labour government removed the requirement to study languages after age 14.
These are complex questions. At root, I suspect, most of us think government is right to clamp down, however clumsily, on a practice which has produced some of the most egregious gaming behaviours in secondary schools – though such a decision should have been subject to consultation to consider the implications. But resolving the real tensions around assessment and accountability depends on a deeper discussion about what we want from upper secondary education and how we can use the different tools of assessment, curriculum and external audit to get what we want.
Chris Husbands
The furore about 2012 English GCSE grading continues. For many headteachers, rightly committed to their own students’ success, the concerns are about fairness and what the results say about the success of their own teachers and schools. For Sir Michael Wilshaw, Her Majesty’s Chief Inspector of Schools, the experience provides an opportunity to ask questions about the rigour and fitness for purpose of English examinations. For the Secretary of State, the issue is the long-term reform of examinations.
None of these matters is straightforward. As Tina Isaacs points out in her excellent blog post, the technical challenges of managing an assessment system are daunting. But there are underlying issues emerging from the debates so far which will need resolving before assessment policy can be settled.
The 2012 results were, as is well known, made up of different elements, including controlled assessment in schools and terminal examinations. Grade boundaries for January’s controlled assessments appear, on the evidence of the Ofqual report, to have been set too leniently, so that, as the Chief Executive of Ofqual said in an interview, the January candidates “got lucky”. Not surprisingly, there are renewed calls to reconsider the place of controlled assessments and to focus more on end-of-course terminal examinations.
Such a move would address the long-standing concern that 15 and 16-year-old pupils are “over-assessed”. But controlled assessments in schools allow assessment of aspects of learning which are difficult to capture in a two- or three-hour final examination: speaking and listening in English, practical performance in drama, sport, art and design, music performance, experimental work in science. Any assessment activity is partial: it always samples a candidate’s performance. In other examinations – the driving test, graded assessments in music – assessment of practical performance is core to the final result. The technical challenge is to get the balance right between controlled assessment and external examinations in ways which are fit for the different assessment objectives.
A second feature of the debate, one which has caused immense argument, comes from the introduction of “comparable outcomes” following a 2010 Ofqual decision. “Comparable outcomes”, designed to ensure that outcomes in successive examination years are broadly comparable, underpinned the setting of overall 2012 grade boundaries, and thus began to move examinations toward the pre-1986 model of “norm” rather than “criterion” referenced assessment. Norm and criterion referencing hang over the debate. In norm-referenced assessment, candidates are assessed against each other, and the highest-performing are awarded the highest grades. In criterion-referenced assessment, candidates are assessed against a defined standard, and all who meet it are awarded the corresponding grade, however many candidates that may be.
We know, of course, that the most powerful indicator of any candidate’s likely attainment in examinations is prior performance. Since the Key Stage 3 tests were abolished in 2008, the principal measure of prior performance has been the results of Key Stage 2 assessment, which can be used as a guide to the calibration of subsequent outcomes. The introduction of “comparable outcomes” is not, strictly speaking, norm-referenced assessment, though it is obviously closer to norm referencing in the way it determines awarding practices.
The implications of this are profound for secondary schools and the way they are held accountable. If “comparable outcomes” put a brake on GCSE performance – because prior attainment is the major determinant of subsequent attainment – then the long-term performance of secondary schools is, to some extent, pre-determined. If England is to move more decisively towards norm-referencing GCSE, then the concept of floor targets – which now require each school to secure five A*-C GCSE grades, including English and Mathematics, for at least 40% of students – becomes difficult to manage. There are questions, moreover, about school-to-school support – the government’s principal tool for school improvement – since, as one outstanding headteacher provocatively asked, “why would I help a school next door? Every student in my neighbouring school I help push over the C/D borderline reduces the chance of one of my students securing a grade C”.
Assessment issues are always complex. Looking forward from 2012 there are tough policy questions to be asked. What range of skills and competences do schools, awarding bodies and politicians want GCSE to assess? What assessment approaches are most likely to secure this? Should the examination system ration success or do all young people, appropriately taught, have the potential to achieve at the highest level?
Chris Husbands
The blogosphere is bristling with responses to the Daily Mail’s story about the possible return of O-levels. I began my teaching career in the early 1980s. One of my abiding memories – and bitter frustrations – is that each year, 15-year-olds who had been cajoled, exhorted and motivated to keep going through CSE courses simply left at Easter, and never turned up for their exams. They saw no real point in turning up to complete an examination which they thought of as a dead end – with no progression route and little labour market validity. In this respect, at least, they were showing themselves to be pretty shrewd labour market economists.
So I was part of a generation of teachers which welcomed the introduction of GCSE in 1986. The Conservative secretary of state for education, Sir Keith Joseph, was determined that the new examination would be “tougher, because it would demand more of pupils; would be fairer because pupils would be judged by what they could do and not how they compared to someone else; and would be clearer because everyone would know what had been tested.” Sir Keith’s aim was to get 80-90% of pupils up to the level previously thought to be average. As Caroline Gipps, at the time a senior member of staff at the Institute of Education, pointed out, on norm referenced tests such as O-level, there is no point in trying to get every pupil to achieve an above average score, since, by definition, such tests are designed to have half the population scoring above and half below the mean.
By this measure, GCSE has been an enormous success. Performance rose: 41% of pupils scored A-C grades in 1988, but by 2011 the figure was 69%. School staying on rates increased sharply: they had been 36% in 1979, but rose to 44% in 1988, 73% by 2001 and almost 80% by the end of the decade. GCSE completed the 1973 work of RoSLA – the Raising of the School Leaving Age from 15 to 16. By and large, GCSE achieved the levering up of performance which Joseph had expected.
But none of this makes it unproblematic. One of the challenges was explained as long ago as 1994 by Caroline Gipps. GCSE used criterion-referenced assessment, and so “as the requirements become more abstract and demanding, so the task of defining the performance clearly becomes more complex and unreliable”. Put differently, it becomes more difficult to design assessment criteria which work at both extremes of the performance range. But it is not impossible, and assessment experience here and elsewhere suggests it can be done by ensuring a common core of curriculum entitlement, and a curriculum diet sufficiently varied and stimulating that there are opportunities for all young people at all levels to experience success.
A second challenge was not foreseen in 1988, and followed the annual publication from the early 1990s of examination results focusing on the proportion of pupils gaining five A*-C grades: although in technical terms a GCSE pass was a grade G or better, league tables reinforced the idea – imported from an old O-level equivalence – that the cusp performance was at grade C. There were thus incentives for schools to focus their effort on moving marginal performance at grade D up to grade C, and it became no easier to motivate a pupil on track for a grade G to improve by one grade than it had been to motivate the CSE students of the early 1980s.
The difficulty for the nation is that neither of these problems will be solved by introducing a new binary divide into qualifications, even if, as the leaked reports of DfE thinking suggest, the revamped O-level is “targeted” at the top 75% of the attainment range rather than the 60% target group for O-levels and CSE. There are two reasons. The first is that any system which designs in a selective process at the beginning of examination courses has a backwash effect: a divided system at 14 means making selection decisions by 13. The analysis by the Financial Times’s Chris Cook suggests that this will have a sharply differential effect in different parts of the country. Moreover, with any threshold there will be errors, with pupils mis-classified into the “wrong” route, closing down opportunities and dampening motivation. Ben Levin and Michael Fullan, writing about education system reform, warn that “literacy and numeracy goals must include higher-order skills and connections to other parts of the curriculum, such as science and the arts, to avoid the curriculum… becoming too narrow and disengaging”.
The second reason is that the most serious performance challenge we face as a nation is to do what our major competitors are doing and to seek to bring all young people up to Level 2 performance by the time they leave compulsory education. Given that the education participation age is rising to 17 and then to 18, the challenge is a curriculum rather than an assessment one: how do we secure high-quality, labour-market valid outcomes for all young people? That’s a question of curriculum design, educational quality and learner motivation.