
IOE Blog


Expert opinion from IOE, UCL's Faculty of Education and Society


Adult GCSEs, anyone? We need qualifications that really work for students and employers

By Blog Editor, IOE Digital, on 7 January 2014

Brian Creese, NRDC (National Research and Development Centre for Adult Literacy and Numeracy)
I spent a very interesting day a couple of weeks ago at the IOE London Region Post-14 Network Research and Policy Working Day. A very healthy gathering of FE managers and practitioners was doubtless encouraged by keynote speaker Alison Wolf. Although the Wolf Review had more than 20 recommendations, the one which has had the greatest impact on the ground is the policy of ensuring that all students who are in education continue to study English and maths until they get their grade A*-C. This has many ramifications for schools, sixth form colleges and FE colleges, including the creation of a desperate shortage of suitable staff to teach these subjects to a high standard.
Then last week a dry little document passed my desk, a Statistical First Release from the DfE: ‘Level 1 and 2 Attainment in English and mathematics by 16-18 students’. This tells us about those students who, despite the best efforts of school teachers and Secretaries of State, fail to attain the cherished grade. There are around 220,000 students who fail to get the A*-C grade in English first time around and 244,000 students who do not get their maths. What I found astonishing is that after subsequent study at school or college a mere 17,000 students from each cohort go on to obtain their required GCSE grades. This means that over 90% of those students who did not achieve the benchmark for English and maths initially have still not achieved it when they finish school or college.
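The "over 90%" claim follows directly from those figures; a quick back-of-envelope check, using the approximate cohort sizes quoted above:

```python
# Approximate cohort sizes from the DfE Statistical First Release quoted above.
cohorts = {"English": 220_000, "maths": 244_000}
later_passes = 17_000  # students from each cohort who gain A*-C after further study

for subject, size in cohorts.items():
    still_without = size - later_passes
    print(f"{subject}: {still_without / size:.0%} still without a grade A*-C")
```

Both subjects come out comfortably above 90%: roughly 92% in English and 93% in maths.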
The paper breaks the figures down further by the type of provider students attend after taking GCSEs. Just over 20% go on to attend a school, an academy or a sixth form college. Of these, about 60% are re-entered for GCSE, and success rates vary: from 22% to 42% in English and from 18% to 33% in maths. The picture is very different in FE, where the large majority of the cohort attend: here, fewer than 10% are re-entered for GCSE, with about 4% achieving A*-C.
What goes unremarked in the statistical release is that most students in FE are studying Functional English and Functional Maths. Colleges widely believe that it is pointless plodding away at GCSE English and maths, which most learners will still fail, when there is a qualification that delivers the skills required in the workplace and can be taught in ways that engage this cohort. Current success rates for Functional Skills level 2 are around 55% to 70%, so a much higher proportion of students attending FE colleges leave with a level 2 qualification that makes sense for them and their employers.
So why do we continue to put students and teachers through this torture? The answer is not that GCSE is the ‘Gold Standard’ but that GCSE is the ‘Gold Brand’. Wolf is correct in saying that it is the only level 2 qualification that employers really understand and they have lost patience with learning the ins and outs of new and different types of qualifications foisted on them by hyperactive policy-makers.
However, is the curriculum content of school GCSE English and maths the best for young adults? If GCSE is the required qualification, perhaps we should think about a different GCSE for post-16s? One that concentrates on the skills needed by adults in work and home life? Not ‘functional’ perhaps, but ‘applied’? Adult Applied English/maths has a fine ring to it!
Developing a new GCSE qualification better suited to post-16s would be a long and hard path, but perhaps by creating a clear adult curriculum for GCSE English and maths we can finally provide a qualification that works for young people and adults and is recognised by employers.

We need a full-scale, politically neutral review of accountability and exams

By Blog Editor, IOE Digital, on 2 November 2012

Chris Husbands
If nothing else, today’s Ofqual report into this year’s GCSE reminds us that few things in education are more technically complex than assessment. The controversial report itself is a difficult document to navigate. The differences between marks, grades and awards, between syllabus content and specification structure, between coursework, controlled assessment and terminal assessment and the different things they can tell us are all a reminder that to construct, develop and manage an assessment regime is an enormous challenge. Ofqual picks its way through this complexity and has come up with a clear view: GCSEs went wrong in 2012 because the highly regulated system is overburdened. We expect too much of our assessment system, and as a result, our system drives perverse behaviour.
As Ofqual remind their readers, the English GCSEs which year 11 students completed this year were new: the GCSEs they replaced in 2010 had been in place for eight years and teachers and schools had become used to them. The replacement of coursework by controlled assessment – assessments completed in schools under controlled conditions – had been designed to address perceived problems of external help and plagiarism (para 1.48) – but threw up new challenges about the management of controlled assessment in school. For Ofqual, the results this year were a crisis of regulation and of complexity. They point out that the reliance on controlled assessment – 60% of the marks in English GCSE – placed a big emphasis on the role of schools and “we do not regulate schools” (para 1.49). The report heaps some blame on the now (perhaps, in the circumstances, conveniently) abolished Qualifications and Curriculum Authority for failing to grasp the “difficulties of maintaining standards in a set of new qualifications of such complexity” (para 1.48 again).
These are devastating conclusions. Ofqual claim that regulation failed at the point of specification design, and introduced a major unregulated component into the assessment system. For Ofqual’s numerous critics, this is a whitewash, shifting the blame for the crisis onto teachers who over-marked controlled assessments, and diverting attention away from Ofqual’s own regulation of key aspects of the system – including the moderation of controlled assessment: essentially, examination board moderators did not cavil at schools’ marks. No-one reading the report from a dispassionate perspective can feel satisfied about the regulation and management of a complex examination system.
Tucked away in the report is perhaps the most important sentence: “We have found evidence that this [the use of examination thresholds at grade C] can lead to undue pressure on schools in the way they mark controlled assessments. A recurring theme in our interviews with schools was the pressure exerted by the accountability arrangements, and the extent to which it drives teachers to predict and manage grade outcomes” (para 6.3).
Over the last 30 years, we have placed greater and greater weight on grade boundaries: they determine not only children’s futures, but also the fate of schools and, increasingly, individual teachers’ career progression. Schools below threshold are subject to intervention strategies and may be taken over. For teachers, the mooted possibility of performance related pay systems would simply lay greater emphasis on the importance of examination results.
I blogged earlier this year about the infamous Atlanta testing scandal in the United States, where cheating became endemic because of the rewards for “success”.  We have, collectively, to reflect now on the school accountability system, and whether a crude examination-led accountability system is not always going to lead us into difficulty. Once again, Campbell’s law is vindicated: “The more any quantitative indicator is used for decision-making, the more subject it is to corruption pressures and the more apt it will be to distort the processes it monitors”.
If nothing else, the Ofqual report might put another nail in the coffin of the current school accountability system. Schools need to be held accountable, and the highest standards of attainment matter – but we appear to have created a system which drives the most perverse behaviour – “cheating” as one highly respected journalist puts it.
Teachers are angry about the Ofqual report. They believed that they were acting not only professionally and morally but also with great technical accuracy. No-one who has examined the extraordinary sophistication of schools’ data tracking systems can fail to be impressed. They believed that they were doing what they were expected to do: using all their data internally and externally to map progress, to monitor performance, to predict outcomes and to design interventions. I’m lucky: I get to talk to teachers, school leaders and policymakers from around the world. They are in awe of the technical abilities displayed in monitoring performance which are routine in English schools. They understand that our information and performance systems are exceptional and our schools highly skilled.
Informed commentators in England, such as John Dunford, have argued that the time has come to move away from a system of external assessment to one based on internal assessment led by chartered assessors. Implicitly, the Ofqual report appears to make this more difficult. Its strong undercurrent – and another reason for the widespread professional anger – is that regulated assessment cannot be left to schools. That feels a disappointment, because properly conducted internal assessment can be much richer and more productive than most external examinations.
The Ofqual report is technically complex, and fascinating reading for those absorbed in the complexities of assessment, but it fails to pose really tough questions about the long-term future of assessment in England. It sets out the challenges of running a modern assessment system without really making the point that complexity is inevitable; it accurately highlights the consequences for schools of the over-emphasis on single accountability measures, but it does not yet pursue the logic of this for the long-term development of assessment systems in England.
Perhaps this is because of a structural flaw in the makeup of Ofqual: it is, after all, a regulator. But there is enough in the report which documents the systemic failures of regulation and the perverse behaviour driven by the overlap between our assessment and accountability systems to be clear that something needs to be done. We need a full-scale, politically neutral review of our education accountability framework.

The GCSE debacle: what, if anything, went wrong?

By Blog Editor, IOE Digital, on 4 September 2012

Tina Isaacs
Reading the headlines about the current GCSE furore brought me back to the heady days of September 2002 and the last major examination crisis. Teachers reported marking discrepancies in certain A level coursework modules and complained to awarding bodies and the media. The BBC news headline on 15 September 2002 stated: “Inquiry into exam fixing claims… the exams watchdog is investigating persistent complaints from head teachers that this year’s A-level results were ‘fixed’ to stop grades ballooning”. 
The story ran in the media for weeks and resulted in the Tomlinson inquiry, where modules in 31 A level subjects were re-graded. Grade boundaries changed in 18 units out of a possible 1,200, resulting in just under 2,000 students getting higher AS and A level grades. The Qualifications and Curriculum Authority’s chairman was dismissed and the Secretary of State resigned.
This time around the crisis is more focused. Grade boundaries for GCSE English modules were changed between the January and June series, with the worst offender seemingly AQA’s English Language controlled assessment (coursework) module. Teachers marked their students’ coursework for June bearing in mind what the all-important C/D grade boundary had been in January. But more marks were necessary to gain a grade C in June than in January. Teachers were unsurprisingly upset that students who they believed would gain a comfortable C ended up with Ds.
Once again the press leapt in. For example, the Manchester Evening News reported that “Teachers say pupils have had their futures ruined after exam bosses ‘moved the goalposts’ on a crucial exam”. The BBC ran a story under the headline “Gove admits GCSE pupils treated unfairly”. It looks as though there will be an inquiry along the lines of Tomlinson’s 2002 report, possibly spearheaded by the House of Commons education select committee.
If so, I hope that it makes sure it fully understands the complexity of the issues. 2012 was the first year of the new English specification (syllabus) – actually three new specifications: English (a combination of language and literature), English language, and English literature.  There was also a new element in the specifications — Functional English. The English and English Language qualifications are tiered and foundation tier students’  grades are capped at grade C. Ofqual has introduced the notion of comparable outcomes, meaning that all things being equal, cohorts (but not necessarily individuals) with the same prior performance data should get the same outcomes in 2012 as in 2011. All of this means that setting standards in new qualifications is more demanding than maintaining standards in existing ones. 
For the January modules, the awarding bodies had less robust statistical information than they would have for a mature qualification. In addition, the work of proportionately few students was graded, so awarders had less cohort information than usual. Ofqual found that AQA’s January marking had been lenient; that is, some students who got C grades should have got D grades. Its report states that the June grades, derived from a much larger group of students, fairly represented achievement, taking into account students’ prior performance (mainly from key stage 2 tests) and awarders’ judgements.
I’ve seen press reports claiming that anywhere between 10,000 and 133,906 students who should have got a grade C in June received a grade D. That’s a huge discrepancy. I wish I could weigh in with my own well-researched figures, but the Ofqual report is too light on statistics for me to make any definitive statements. Perhaps this is because the report was, rightly, written quickly, or because Ofqual wanted to make its findings accessible to the general public. Given that a grade C in English is crucial to many young people’s futures, I wish there had been a technical annex or two to help us understand what, if anything, went wrong.