IOE Blog

Expert opinion from IOE, UCL's Faculty of Education and Society

Archive for the 'Chris Husbands' Category

Exporting London Challenge is complex and challenging

By Blog Editor, IOE Digital, on 31 October 2013

 Chris Husbands

London schools are among the best in the country. Many are, simply, amongst the best urban schools in the world. This was not true even half a generation ago. But the evidence on the success of London schools is clear. The headline statistics – as Sam Freedman pointed out in an important blog which draws together a range of evidence – undersell the story. Their real success is in their performance for children from poorer backgrounds: as Sam pointed out, 49% of London pupils eligible for free school meals secure 5 A*-C GCSEs including English and mathematics, compared to just 36% outside London, while disadvantaged London schools appear massively to outperform similarly disadvantaged schools outside London.

Sam’s blog tries to do two things: first to explain why London schools are now so good, and second to work out what we can learn from that for education reform and improvement elsewhere. Both are quite complex. Sam’s explanation itself explores two dimensions. The first is the scale of socio-economic and demographic changes in London over the last 15 years, which have been striking. He quotes a Stephanie Flanders blog post suggesting that the value of property in London has increased by 15% since the recession began – at a time when real wage levels have fallen – and a remarkable map produced by Daniel Knowlson tracking the speed of gentrification in inner London. Indeed, those boroughs whose schools have improved most – Southwark, Newham, Tower Hamlets, Hackney – are those which have changed the most. Put differently: one reason London schools changed is that London pupils changed. London schools became posher.

So London Challenge was working on fertile ground. The review evidence is positive. In 2010, OFSTED concluded that “London Challenge has continued to improve outcomes for pupils in London’s primary and secondary schools at a faster rate than nationally”, and attributed this to clarity of purpose, consistency of monitoring and “programmes of support for schools… managed by experienced and credible advisers”. Above all, “London Challenge… motivated London teachers to think beyond their intrinsic sense of duty to serve pupils well within their own school and to extend that commitment to serving all London’s pupils well”.

The most systematic evaluation of City Challenges, including London Challenge, noted that London head teachers themselves were convinced that London Challenge had made a difference, though it concluded, with due academic caution, that “A great many factors contributed to this improvement, including national policies and strategies and the considerable efforts of head teachers and staff. However, these factors apply everywhere in the country. The most plausible explanation for the greater improvement in Challenge areas is that the City Challenge programme was responsible”.

There were attempts to export London Challenge elsewhere in England: to Manchester and the Black Country. Sam wonders why these were less successful, though he underplays the success of Manchester.

This for me is where the issues become more interesting. There are, I think, three reasons why it has proved more difficult to export London Challenge. The first is the issue of scale: there are 400 secondary schools in London. Performance benchmarks can be finely calibrated: the now famous Families of Schools data put schools into more than 20 families for performance comparison, each of 20 or so schools. It’s difficult to get such rich benchmarking at smaller scale.

The second is the problem of hindsight: London Challenge was a great success. But it was not a single thing: it was a package of policies. It included school-to-school support, it included the development of the Chartered London Teacher scheme, it included improvements to teacher supply; it included the academisation of schools. These were not done to a pre-ordained plan: they were customised to different settings. We look back on a policy initiative called London Challenge, but it wasn’t like that as it developed. London Challenge looked different depending on where you looked from – which makes it difficult to copy and transfer. London Challenge, moreover, was well funded – schools that were already well funded by national standards were even better funded as a result.

And finally, there is the issue of context: London was changing. The changing demographics of London were part of that context. Other towns and cities are changing too, but in different ways.

All these things make the story complex. The key message of London Challenge – ambition for every child in every school – is transferable. Some of the policy levers are transferable. But the way they combine and are used together demands careful attention to context.

Great teachers or great teaching? Why McKinsey got it wrong

By Blog Editor, IOE Digital, on 10 October 2013

Chris Husbands
It’s a fabulous quotation: “The quality of an education system cannot exceed the quality of its teachers.” It has the sense of an underlying educational law, as compelling as Newton’s laws of motion. It’s routinely attributed to the 2007 McKinsey report, How the world’s best-performing school systems come out on top.
But if you dig into that report, you’ll find a footnote acknowledging that the quotation came from a senior government official in South Korea: yet another illustration of the old adage that a management consultant is someone who steals your watch and then tells you the time. But as an aphorism it has done its job, and is now routinely quoted by government ministers, education reformers and academics the world over. A Google search yields over 180,000 uses of the quotation since 2007. It crops up again, in disguised form, in Andrew Adonis’s contribution to last week’s Varkey Gems study on the status of teachers worldwide: “No education system can be better than its teachers”.
It’s a great quote. And it’s wrong. It took me a long while to work out what was wrong with it, until a line from Bob Schwartz, professor of practice in the Harvard Graduate School of Education, triggered my thinking. “What”, asked Schwartz in an OECD essay, “is the most important school-related factor in pupil learning? The answer is teaching.” And that captures the difference. It’s just as good a quotation, but it differs in three important letters: it’s teaching, not teachers.
A moment’s thought tells you that Schwartz has to be right and McKinsey have to be wrong. We can all teach well and we can all teach badly.  Even good teachers teach some lessons and some groups less well; even the struggling teacher can teach a successful lesson on occasion. More generally, we can all teach better: teaching changes and develops. Skills improve. Ideas change. Practice alters. It’s teaching, not teachers.
The three letters also have important policy implications. If you pursue the line that the important thing is teachers, you focus on people. You need to sack the weakest teachers and you need to design a system which guarantees that you can replace them quickly with better ones. Of course, performance managing very poor teachers out of the profession is important, and it is important that we recruit the brightest and the best. But these turn out to be very, very slow routes to improving the quality of an education system.
The English figures bear this out. There are 400,000 teachers in schools in England. About 30,000 new ones are trained each year. Assume the weakest 5,000 recruited each year can be replaced with 5,000 who are definitely going to be better than the remaining 25,000 (there are some heroic assumptions here), and it will still be many years before a visible impact is secured on the profession. It took Finland more than 30 years for recruitment practices to re-shape the profession. Changing teaching by changing teachers is a long, slow slog. And in some of those high performing countries, including South Korea and China, recruitment is – as the Varkey Gems report makes plain – helped by the extraordinary status enjoyed by teaching there. In fact, the status of teaching is a stronger attraction for committed candidates than relative salary levels. The status of teaching determines the extent to which policy can reshape teachers.
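To see just how slow, it helps to make the arithmetic explicit. Here is a minimal back-of-envelope sketch of the replacement maths: the 400,000 and 5,000 figures are the post's, while the 25% target and the assumption of no attrition among the new hires are mine, purely for illustration.

```python
# Back-of-envelope model of how quickly selective recruitment can
# reshape a teaching workforce. Workforce size and hiring figures
# come from the post; the 25% target and the no-attrition assumption
# are illustrative only.

WORKFORCE = 400_000
UPGRADED_PER_YEAR = 5_000   # hires assumed better than those they replace

years = 0
improved = 0
while improved / WORKFORCE < 0.25:   # until a quarter of the profession
    improved += UPGRADED_PER_YEAR
    years += 1

print(f"{years} years for upgraded hires to reach 25% of the workforce")
# -> 20 years, before any visible effect on the other three-quarters
```

Twenty years to touch a quarter of the profession, on generous assumptions; the Finnish experience of 30-plus years looks entirely plausible.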
If you pursue the line that it is teaching that matters, you get a different set of policies. It’s still important to recruit and train those who can develop as excellent teachers, but you need to work continuously to improve the quality of teaching across schools: every teacher, in every classroom, in every school, getting better at teaching. This involves focusing on what drives really good teaching – committed teachers and high quality instruction, which itself depends on rigorous subject knowledge and knowledge of effective pedagogy, both leavened by imagination.
I’ve called it – with tongue partly in cheek – a formula for quality teaching: Q = C + E[Ks + Kt] + I. That is, quality depends on committed teachers (C), plus effective pedagogy (E), based on subject knowledge (Ks) plus knowledge of effective teaching (Kt), supplemented by imagination (I).
Forty years ago, policy assumed that schools made little difference to pupil outcomes:  outcomes were principally determined by social factors. School effectiveness research told us that that was not the case. Schools made a difference. Then we understood that school effects were the sum of classroom effects: teachers make a difference. But the key lesson is that it’s teaching, not teachers, which matters. Every teacher can teach better. That’s an equally great line.

Re-take that! Why the Government should rethink the role of exams in measuring school performance

By Blog Editor, IOE Digital, on 3 October 2013

Chris Husbands
The government’s decision that only the first attempt at a GCSE will henceforth count towards a school’s place in the league tables is sensible. It is a response to widespread gaming of school GCSE outcomes. This has seen some schools entering young people for multiple exams in successive years, and, at the extreme, basing their curricula on the demands of assessment and accountability and not the  school’s (legal and moral) duty to provide a broad, balanced education.
Of course, government has not banned re-takes – although earlier announcements suggested that Ministers would like to tighten up on them. It has simply decided that only the first attempt at an examination will count towards a school’s performance data. For students, re-takes are sensible. I try something; I fail. I learn from my failure and I improve. This is exactly the sort of lesson we want young people to learn for work and for life. But secondary students have been entered early for examinations for entirely different reasons. In too many schools, the Key Stage 4 curriculum – extended in some to three years on often spurious grounds – has become a vehicle for delivering assessment arrangements rather than for meeting the needs of students. In some schools the Key Stage 4 curriculum is simply unfit for purpose.
But the Government’s decision raises questions that go much wider, about the relationship between assessment, curriculum and accountability. We look to a few, simple tools to do too much: assessment dominates our accountability framework. It’s only a couple of weeks since the Government declared that all 16- and 17-year-olds would need to study English and mathematics until they secured a GCSE grade C or equivalent. Now the message is that only the first attempt at such a qualification will count for accountability purposes. It’s sensible for 16- and 17-year-olds to continue to study English and mathematics. It’s sensible for a system to be structured to ensure that as many as possible achieve good grades at 16 or later. But to confuse this by telling young people that their redoubled efforts won’t help their school makes less sense.
Some argue that there is a different way of looking at this: that young people should take assessments when they are ready. Some very able teenagers are ready for GCSE Mathematics at very high levels at 14, although in most cases they would get a deeper and richer understanding of the subject if they did the exam later. Others don’t reach such examination readiness until they are 18 – or later. So it might be much better if the examination system allowed for assessment when ready, much as graded assessments in Music do. But if this is true, it makes little sense to publish accountability tables based on examination results at the arbitrary age of 16. We’d be impressed by a driving school – call it Cautious Cars – which boasted that 98% of its learners passed their driving test first time. But we’d be concerned if we learnt that Cautious Cars did not allow any learner to take the test until they’d completed 100 lessons. We might think another – Dashing Drivers, perhaps – effective if it boasted that learners took their test after just six lessons, but we’d be disappointed to learn that only 10% passed. Cautious Cars is an expensive banker, whereas Dashing Drivers is cheap but risky. Assessment and accountability are at odds.
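The trade-off is easy to put into numbers. A toy calculation using the post's illustrative figures (the "lessons per first-time pass" framing is mine):

```python
# Toy cost-per-pass calculation for the two hypothetical driving
# schools in the post. Lesson counts and pass rates are the post's
# illustrative figures; "lessons per first-time pass" is one way
# to expose what the headline pass rate hides.

schools = {
    "Cautious Cars":   {"lessons": 100, "pass_rate": 0.98},
    "Dashing Drivers": {"lessons": 6,   "pass_rate": 0.10},
}

for name, s in schools.items():
    # expected lessons invested per learner who passes first time
    per_pass = s["lessons"] / s["pass_rate"]
    print(f"{name}: {s['pass_rate']:.0%} pass rate, "
          f"~{per_pass:.0f} lessons per first-time pass")
```

On this crude measure Dashing Drivers is actually the cheaper route to a pass (around 60 lessons per pass against just over 100), but nine in ten of its learners fail; which school looks "better" depends entirely on which number the league table chooses to report.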
Accountability matters, but we need to be much clearer about what we are holding schools accountable for: reaching a level – the proportion of young people meeting GCSE C or above – or progression – the progress made between entry to a school and leaving it. The two pull in different directions for schools’ decision-making. Factor in curriculum and the challenges multiply: there are tensions between a broad, balanced curriculum – as was mandated in Kenneth Baker’s 1988 Education Reform Act – and the much looser curriculum free-for-all which has developed since the Labour government removed the requirement to study languages after age 14.
These are complex questions. At root, I suspect, most of us think government is right to clamp down, however clumsily, on a practice which has produced some of the most egregious gaming behaviours in secondary schools – though such a decision should have been subject to consultation to consider the implications. But resolving the real tensions around assessment and accountability depends on a deeper discussion about what we want from upper secondary education and how we can use the different tools of assessment, curriculum and external audit to get what we want.

The riddle of autonomous schools: how will researchers crack the code?

By Blog Editor, IOE Digital, on 30 September 2013

Chris Husbands
The Riddle of the Labyrinth, Margalit Fox’s hugely readable account of the deciphering of Linear B in the 1940s and 1950s, tells the story of half a century of frankly obsessive work by utterly determined individuals. Some 2,000 clay tablets unearthed at the Cretan Palace of Knossos at the turn of the century were covered in a script the like of which no-one had ever seen. The challenge in deciphering them was that they were written in an unknown language in an unknown script about unknown topics.
The story is ultimately a triumph of research: a combination of mind-numbingly patient transcription, comparison and analysis, with flashes of interpretive genius, before, in 1952, the amateur scholar Michael Ventris cracked the code.
The real fascination of Fox’s book is not in the outcome but in her account of the research process. The hero is Alice Kober, a now almost forgotten classicist in New York who, in a series of technical papers in the 1940s, identified the nature of the script, unlocked syllabic patterns and pointed the way to a solution – a solution which was frustratingly just out of reach when she died, aged 43, in 1950. Times were tough for research. Kober’s communication with other scholars depended on a slow transatlantic postal service. Paper was scarce: when Kober could not get notebooks, she began hand-cutting two-by-three-inch cards from any spare paper she could find: backs of greeting cards, examination book covers, library checkout slips. It was in this unprepossessing environment for research that the breakthrough was made.
Here is another riddle: what – if anything – is the connection between Margalit Fox’s fabulous book and the autonomous world of schooling in which we now find ourselves? I’ll give you a clue: complexity. As the experiences of Kober and Ventris demonstrate, complexity is good for research, and a fecund ground for the imaginative development of practice: in complexity there is a great deal to be explained, much to be studied. Autonomy brings opportunities. Autonomy creates spaces in which differences can be explored and evaluated.
A key feature of the autonomous school system which has emerged since the 2010 Education Act is the school group, or chain. There are now something over 500 such groups – a genuinely new feature of English publicly funded schooling: (normally) non-geographic clusters of schools with integrated management and financial arrangements and, in some cases, strongly corporate approaches to school leadership and teaching.
Autonomous schools can become autarchic schools, looking inward and concerned with their own practices and development. One of the arguments made for the school system being created following the 2010 Act is that it is inter-, not in-dependent: schools are being encouraged to collaborate with one another. But there is some evidence that one consequence is an inward focus within the group or cluster. Where collaborations have a formal and legal form, some of their details are tightly protected.
So we find school clusters or groups developing distinctive curricula, pedagogies and approaches to professional development – and the best of collaborative innovation is structured, coherent and researched. But where concern with branding and commercial intellectual property begins to predominate in schooling, researchability can take second place. Not all innovative practices are open to scrutiny. Moreover, autarchic schools could begin to operate as closed systems, drawing together a strongly coherent but strongly protected set of arrangements in which the defined pedagogy and practices are unexplored and unevaluated.
The tendency for autonomous institutions to face inwards, to be concerned with developing distinctiveness and in some cases protecting that distinctiveness, calls for new and distinctive relationships between academic research and practice. We need to develop practitioner-researchers who can work in schools to diagnose issues and synthesise evidence to support initial teacher education and school improvement. This work needs to be linked to researchers who can stand back and make sense of the bigger picture.
Individual universities can do some of this, but we also need more thinking on how the education research and development infrastructure needs to develop. What might an education equivalent of the National Institute for Clinical Excellence look like, and how might a member-led Royal College of Teaching operate? Isolated, separated researchers did solve the “riddle of the labyrinth”, but it took them a long time – 50 years from the excavation of the tablets to a solution. That’s too slow. Educational researchers, and the universities who employ them, have shown great inventiveness in finding ways to get close to schools. In an autonomous school system, it will be a highly prized skill, and a huge amount may depend on getting it right quickly.
This post is adapted from Chris Husbands’s address to the British Educational Research Association Conference earlier this month

Child protection: Schools want and need clear statutory requirements, not freedom to do their own thing

By Blog Editor, IOE Digital, on 5 August 2013

Chris Husbands
We have been here before. Daniel Pelka’s name is added to the grim roll call of cases of children murdered following months or years of abuse: Maria Colwell, Jasmine Beckford, Victoria Climbie, Lauren Wright. The conviction of Daniel’s parents will now be followed by a serious case review, and amongst the questions which will be asked, according to the BBC report, will be why police and social services did not become involved after staff at Daniel’s school noticed bruising on his neck and what appeared to be two black eyes. The Colwell Inquiry in 1974 found poor communication and liaison between agencies, poor training, and a lack of co-ordination. Lord Laming’s report into the murder of Victoria Climbie in 2000 found that the agencies involved in her care had failed to protect her, noting that on at least 12 occasions staff involved in her case could have prevented her death. Laming went on to recommend radical change in arrangements for child protection which underpinned the system-wide Every Child Matters programme. The murder of Lauren Wright in 2001 by her step-mother followed abuse during which, despite warnings, Lauren was not removed from the family home. In each of these cases, reports criticized the way in which information was – or was not – shared and the extent to which front line teachers, social workers and police officers were able to interpret the information they had. In each case, professionals failed to make sense of what they found.
Concerns arising from the Lauren Wright case produced sections 157 and 175 of the 2002 Education Act, which laid statutory responsibilities on schools and local authorities for the training of teachers and governors in child welfare. The government is currently considering the results of its consultation on amending the requirements. Consistent with its drive to reduce prescription and bureaucracy, government proposes to replace the detailed prescription of sections 157 and 175 with more general guidance, setting out the “minimum legal and statutory requirements and beyond that giving schools and further education colleges autonomy to use their own judgment to decide how to keep children safe”. Amongst the elements which appear to be excluded from statutory prescription are the requirements to update whole-school training every three years, for governors to be trained to understand their duties, and for there to be a nominated governor for child protection. Whilst the consultation recognizes that it is impossible to advise schools and colleges on every detail of safeguarding issues, it no longer sets out where Designated Senior Persons (normally the headteacher) should look for help, nor does it make reference to Local Safeguarding Children Board inter-agency procedures. It insists that “individuals should use their own judgment”, but, as we have learnt, individual judgment is only part of the picture: information matters, judgment matters, communication matters, but sound knowledge and clear guidance are essential.
There are areas where deregulation, school autonomy and diversity are to be celebrated as markers of a vigorous and dynamic school system, and where differences between the practice of different schools are important. Child protection and the arrangements which underpin it are not such areas. We know that teachers, school leaders and governors find safeguarding and child protection difficult and troubling. Clear statutory requirements are actually seen as helpful. Most child abuse takes place within families. The signs are not obvious. They are often hidden. The fact that schools are particularly well placed to notice when children are being mistreated makes it doubly important that practice is not left to local discretion. Serious abuse is rare. Marian Brandon’s most recent study of serious case reviews suggests that there are about 85 violent and maltreatment-related deaths of children in England each year – that is, about 0.77 per 100,000. So any individual school is unlikely to build up experience in case management that supports good practice. The new guidance proposes setting out “minimum legal and statutory requirements” and beyond that giving schools and FE colleges autonomy to use their own judgement to decide how to keep children safe. Some schools will always go beyond the minimum, but it is in those with least awareness of how to keep children safe that detailed statutory requirements make the difference.

How the government is connecting the dots between the pupil premium and KS2 results

By Blog Editor, IOE Digital, on 17 July 2013

Chris Husbands
What is primary education for? Now the Government has spoken. In its long-awaited consultation on Key Stage 2 accountability, primary education has essentially been given a tight remit: its purpose is to make pupils “secondary ready”. Nick Clegg declares that “every primary school should make its pupils ready for secondary school by the time they leave”, whilst David Laws observes that “all children… can arrive in secondary school ready to succeed”.
Their comments are a demonstration of just how far we have moved from the principles of the Plowden Report. Lady Plowden – a former Conservative county councillor – devoted a chapter to the purposes of primary education, concluding (para 505) that “a school is not merely a teaching shop, it must transmit values and attitudes. It is a community in which children learn to live first and foremost as children and not as future adults”.  In place of the Plowden vision of primary education, the consultation document offers – in definitional terms – preparatory schooling.
And there is, of course, merit in this: the consultation’s root argument is that Level 4 attainment is in itself not sufficient for secondary readiness. Too many children secure only a Level 4c, and, as the government declares “the difference in academic achievement at secondary school between these pupils and those who manage a ‘good’ level 4 (level 4a or 4b) is significant”: whereas 81% of those who secured level 4a in English and mathematics went on to secure at least 5 A*-C GCSEs, only 47% of those who secured level 4c did so. The response, much trailed, is to abolish levels.
However, the problem we have currently is not with the principle of levels but with the pitch of level 4. It would be possible to establish level 4 at (quoting again) “the level at which 11 year-olds would be considered ‘secondary ready'”. What this suggests is that the problem is not necessarily levels but an accountability regime that encourages primary schools to ensure pupils score level 4, however insecurely, and the weakness of secondary school literacy and numeracy provision in Year 7. The new “scaled score” “would be the same for all tests and remain the same over years”. 85% of primary pupils will be expected to reach this score – a tough target if the score really is set at the equivalent of Level 4b.
The numerical score will be more precisely calibrated – it will distinguish between pupils scoring (say) 57 and those scoring 58. Most assessment specialists would observe that this suggests a confidence in the validity and reliability of assessment regimes which is hardly borne out by the evidence. Some who secure well above 58 will not merit it – because of on-the-day test performance, or marker error – just as some who score lower should have done better.
This is an inevitable feature of assessment but the consultation document puts its faith in the certainty of numerical scales whilst letting secondaries off the hook for the way they use and respond to scores.
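It is worth seeing how quickly single-mark distinctions dissolve under realistic measurement error. A small simulation sketch: the cut score of 58 echoes the example above, but the size of the error (a standard deviation of 3 scaled-score points) and the spread of pupils are my assumptions, purely for illustration.

```python
# Simulation of the precision point: with plausible measurement error,
# single-mark distinctions near a cut score are unstable. The error SD
# and the range of pupils near the cut are assumed, not official figures.
import random

random.seed(1)
CUT = 58          # hypothetical "secondary ready" scaled score
ERROR_SD = 3.0    # assumed test/marker noise, in scaled-score points

flipped = 0
trials = 100_000
for _ in range(trials):
    true_score = random.uniform(50, 66)           # pupils near the cut
    observed = true_score + random.gauss(0, ERROR_SD)
    if (true_score >= CUT) != (observed >= CUT):  # classification changes
        flipped += 1

print(f"{flipped / trials:.0%} of pupils near the cut are misclassified")
```

Run it and a sizeable minority of pupils near the threshold change sides purely through noise, which is the sense in which a scale that distinguishes 57 from 58 promises more precision than the test can deliver.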
The aspiration is to hold the difficulty of the test constant over time, so that children with similar attributes do equally well in any year. It is not too difficult for PISA to achieve this – questions are kept private so that some can be re-used and the difficulty of new ones scaled against them, whilst the test is administered only every three years to a sample. But it will be impossible to keep KS2 questions private as teachers administer the tests, and they will do so to all pupils in all schools. If questions are not re-used then it will be difficult genuinely to scale the test each year to secure consistency. But if questions are re-used it will be difficult to make the test sufficiently different each year to avoid a repeat of the gradual grade improvement as teachers learn what is expected. It remains fundamentally difficult to separate improvement in test preparation from improvement in knowledge and skills.
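For re-used questions, the scaling step itself is standard psychometrics. The consultation does not specify a method, but a sketch of one textbook approach, "mean-sigma" linear equating on a set of anchor items that appear in both years, shows why the re-use matters (all the data here are invented):

```python
# Sketch of why re-used questions matter for holding difficulty constant:
# "mean-sigma" linear equating, using anchor items taken by samples in
# both years. The numbers are invented for illustration.
import statistics as st

# Scores on the shared anchor items for samples from each year:
anchor_ref = [12, 15, 14, 10, 13, 16, 11, 14]   # reference year
anchor_new = [10, 13, 12,  8, 11, 14,  9, 12]   # new year (harder paper?)

slope = st.stdev(anchor_ref) / st.stdev(anchor_new)
intercept = st.mean(anchor_ref) - slope * st.mean(anchor_new)

def equate(raw_new_score: float) -> float:
    """Map a raw score on the new test onto the reference-year scale."""
    return slope * raw_new_score + intercept

print(equate(60))  # a raw 60 on the new test, expressed on the old scale
```

Without anchor items that can be kept private and re-used, there is nothing to estimate the slope and intercept from, which is exactly the bind described above for an annual, universally administered KS2 test.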
The abolition of national curriculum levels dismantles a national assessment framework which, whatever its weaknesses – and these were more apparent at Key Stage 3 than Key Stage 2 – provided a national standard. Levels have been sorely abused in reporting. There has been too much pressure to level individual pieces of work and to trace specious sub-levels of progress. Husbands’ First Law of Assessment Policy is that the weight applied to any measure will always exceed the validity of the measure.
The consultation essentially takes the system back to the 1980s before the TGAT report, giving “schools… freedom to design their own systems of measuring pupil performance”. Few will do so. Most will buy in commercial systems. Some of these will be good. Others will not.  Ofsted inspection may help to drive out poor systems, but the loss of a common national framework – something which international visitors have generally admired – is a big price to pay.
A further innovation is the introduction of a new reporting method which will place “each pupil” in deciles against peers nationally. The consultation suggests that “only” parents and schools will know this information, but the reality is surely that the press will inevitably get hold of it at regional and perhaps school level through FOI, which suggests that it will be abused. It is not at all clear what purpose this decile information will serve. Rank order tests have their uses, but whether they drive high aspiration is a tough question: it’s useful to know distributions of performance as outputs of an education system. However, if the purpose is secondary readiness, there are obvious questions about how pupils’ place in national distributions affects secondary teachers’ expectations. The secondary assessment system is already tilting strongly towards norm referencing, and the primary system appears to be not far behind.
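Mechanically, decile reporting is simple enough, which is part of the worry about how casually the numbers will travel. A miniature sketch (the scores and their distribution are invented; the consultation says nothing about implementation):

```python
# The consultation's decile reporting in miniature: rank each pupil's
# scaled score against a national distribution and report which tenth
# they fall in. The "national" cohort here is simulated.
import random
from bisect import bisect_right
from statistics import quantiles

random.seed(7)
national = [random.gauss(100, 15) for _ in range(10_000)]  # pretend cohort
cuts = quantiles(national, n=10)                           # 9 decile boundaries

def decile(score: float) -> int:
    """1 = bottom tenth of the national distribution, 10 = top tenth."""
    return bisect_right(cuts, score) + 1

print(decile(88), decile(100), decile(125))  # low, middle and top of the range
```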
All of these measures are about attainment rather than progress, reflecting the government’s view of the core purpose of primary education. The consultation leaves open the question of progress measures through primary schooling, asking whether to take a baseline at seven or at five. Both turn out to be problematic. Testing at age 5 is expensive – it involves one-to-one assessment by teachers and teaching assistants – and outcomes on the Early Years Foundation Stage Profile are not a good predictor of age 11 performance. But testing at age 7 is not a genuine baseline, since it incorporates school performance up to Year 2, and most primary schools cover the age range 5 to 11. Given the sharper accountability at 11 that flows from numerical scores and pupil ranking, measuring progress from age 7 offers schools a perverse incentive to depress children’s performance on the age 7 baseline.
The core message of the consultation is that the concern is with absolute attainment – secondary readiness – rather than the progress made by primary schools. And this, of course, explains what, for the Liberal Democrats, is the big news in the consultation: the significant increase in the pupil premium, which will rise to £1,300 from 2014/15 – a major element in funding. This is a further twist in the evolving purpose of the pupil premium – once intended as an incentive to primary schools to admit more disadvantaged children, then a compensatory payment for the additional costs involved in meeting the needs of disadvantaged children, it is now more clearly a fund to secure threshold levels of attainment. The evidence is that, after a faltering start, schools are becoming cannier in the way they spend it, using research more intelligently to inform decision making. But given the expectations on them, they will need to be: “as more and more children have surpassed [current expectations of a]… basic level, primary schools will now be asked to raise their game”. After each hill, there’s another.
I should like to thank Becky Allen for helpful comments on an earlier version of this post

21st century skills for those who will set the stage for the 22nd

By Blog Editor, IOE Digital, on 27 June 2013

Chris Husbands
Last week the CBI and Pearson published their Education and Skills survey for 2013.  The headline finding: a stubborn shortage in the skills the UK needs to remain competitive and fuel long-term growth… 
A child born this year will start school in 2018; she will complete her schooling in about 2031; and current indications are that her working life will last almost certainly until 2083.
No one has any real idea of what her working life will be like, but we can hazard some guesses: she is likely to change roles two or three times and will probably need to re-train several times; the technologies and work routines she uses will alter repeatedly; and her 50-year or so career will be played out against a background of resource depletion, rising pressures on food stocks and unpredictable climate change.
In his book 21st Century Skills – Learning for Life in our Times, Charles Fadel talks about the inter-relationship of “knowledge, skills, and character” as well as “the meta-layer or fourth dimension that includes learning how to learn”.
He questions the sorts of knowledge needed in a world in which information is instantly accessible. Should the curriculum teach facts and knowledge, or the ability to appraise information? Should engineering, with its focus on the practical application of learning, become a standard part of the curriculum? Is it necessary to teach skills such as long division when there are calculators? Should entrepreneurship be compulsory, and from what age?
Fadel complains the conventional school curriculum is over-burdened with content and urges instead a focus on the 4Cs – creativity, collaboration, critical thinking and communication – and the ability to “learn how to learn”.
It is a powerful, influential cocktail. Unfortunately, it does not add up. Although the dynamics of the workplace will change fundamentally over the next century, it is most unlikely that the basic laws of science will be extensively revised. The way we receive text will change, but the ability to read will still depend on being able to decode meaning. Advanced mathematics will still depend on early acquisition of number bonds and mathematical operations. Twenty-first century skills will turn on some very 20th-century basics.
The skills demanded by employers are already complex. The CBI complains that too many school-leavers struggle to write to the necessary standard, employ basic numeracy or use a computer properly: a far from straightforward mix of familiar content (literacy and numeracy) and more recent innovations (the ability to use a computer). More than this, in its earlier November 2012 First Steps report on schooling, the CBI complained that the curriculum was too crowded and underpinned a conveyor-belt education system that did too little to challenge higher achievers while providing little to support those needing it. The system was not delivering the workforce modern employers need.
The complaints are familiar: new employees too infrequently “possess habits of discipline, ready obedience, self-help, and pride in good work for its own sake“.
But that last quote is not from 2012, but from a Board of Education report of 1906. For as long as we have evidence, employers have been critical of the ability of the education system to provide the workers they need.
For all this, the global experience has been striking. In the 1960s, around the world, young people with minimal qualifications went straight from school – often in their early or mid-teens – and secured low-skill, manufacturing jobs. In the 1960s a third of US workers had dropped out of high school. By 2006, nine out of 10 workers had at least graduated from high school and 69 per cent had some college education. Labour market growth over the past 50 years has been a race to skills and qualifications.
There are still low-skill, low-waged jobs, but increasingly the focus is on higher-skill, added-value careers. The production lines that generated the metal-bashing jobs of the 1960s are now digitised, so that those who operate them need software diagnostic skills; the best social care is provided by those who can diagnose and make intelligent interventions; and so on.
As a result, the world’s most efficient and effective education systems, from Finland to Singapore, have some strikingly common characteristics: they are unremitting in their focus on the core skills of literacy and numeracy, but they set those skills in the wider context of developing higher-order complex thinking.  Most of all, they take equality seriously: they focus, in a way which education systems historically did not, on ensuring that all – not just a privileged few – develop the higher-order skills needed to use and analyse information, and that they have access to rewarding higher-level training. Put at its crudest, conventional subjects still matter, but they need to be taught and learnt in innovative ways.

Assessment: levelling off?

By Blog Editor, IOE Digital, on 17 June 2013

Chris Husbands

On Thursday, OFSTED reported that a quarter of those who secured Level 5 at the age of 11 did not get top grades at GCSE; I blogged on it here.  On Friday, government solved the problem: it abolished National Curriculum levels.  If only all educational challenges could be solved so easily.  I picked up the news on a train returning from a day working with headteachers in Somerset and we were powering across the Somerset Levels, which felt somehow appropriate.

The eight national curriculum levels in use now derive from Paul Black’s Task Group on Assessment and Testing (TGAT) Report of 1988. It is a sophisticated report, still worth reading, which tried to devise a national assessment system precise enough to give information about pupil progress but flexible enough to recognise that pupil learning is not simply cumulative. As Black observed, “no system has yet been constructed that meets all the criteria of progression, moderation, formative and criterion-referenced assessment” (para 5). TGAT’s 10-level scheme, drawing together test results, teacher assessment and teacher moderation, was intended to secure “information about … progress … in such a way that the individual child’s progress can be related to national performance” (para 126).

The implementation of TGAT was – as is always the case – partial. The recommendation that “wherever schools use national assessment results in reports for evaluative purposes, they should report the distribution of pupil achievements” (para 84) was lost as progression was mapped through a level-based system. Assessment at 14 was never implemented as intended. Working groups each generated statements of attainment at each level in each subject – Science alone had seventeen attainment targets, so 170 statements of attainment. TGAT’s proposals were, as reform proposals always are, sharply criticised. Some who wanted “simple” tests disliked the attempt to draw external and teacher assessment into a single 10-level structure. Others thought it would lead teachers to think of pupils as numbers. But level-based assessment was rapidly adopted, not least because in parts of the school system it worked with the grain of existing thinking. Substantial work had been done on graded assessment in languages, GCSE operated levels-of-response mark schemes, and primary mathematics schemes often had a level-based structure. Lord Dearing’s review of the national curriculum in 1994 moved from statements of attainment to more generic level descriptors, and lopped levels 9 and 10 off TGAT’s structure.

Since 1994, the use of levels has become embedded. Some teachers – wrongly – assign levels to every piece of work, which misses the point about generic descriptors. Better practice happens where teachers have used levels to inform assessment and planning and sharpen the relationship between, say, level 5 writing and level 5 speaking in English, or progression in History between levels 5 and 6. Precisely because levels are not disaggregated, they have informed thinking about what performance looks like across different cognitive domains and how domains relate to each other. Primary and secondary classrooms often feature posters using pupil-friendly language to help pupils identify how to consolidate learning and move on. The application of level descriptors itself involves professional judgement. Black’s aspirations for the way the TGAT structure could shape progression and formative assessment have been substantially achieved.

Ironically, it is what TGAT envisaged as criterion-referenced assessments related to national performance that have caused more difficulty. Such has been the focus on key hinge points of the structure – the importance attached to the proportion of pupils securing a Level 4 at Key Stage 2, for example – that schools have engaged in frenetic activity to push pupils over the ‘next’ threshold for key assessments, so that a higher level is achieved, but not always securely. ‘Sub-levels’ – disaggregated progression within levels – have generated data of dubious reliability. Partly for this reason, many secondary schools re-assess pupils soon after they arrive, and, perhaps inevitably, decide to place greater emphasis on these tests than on national curriculum levels.

The government’s announcement that levels are being abandoned suggested that “outstanding schools and teaching schools” would lead the way in devising alternative ways of tracking progress. More likely will be an expansion of external tests, like those from NFER or CEM, or from commercial test and publishing companies – News Corporation has been buying into the market in the USA. National benchmarking will still exist through end of Key Stage tests; without levels, these will presumably generate a numerical score, though TGAT warned (para 10) that where external tests are detached from school assessment practice they tend to be “narrow in scope and employ a limited set of techniques… [and] rarely assist normal teaching work”.

A profusion of local and commercial assessment puts at risk the real long-term achievement of TGAT: its contribution to raising standards across the school system. It made expectations clearer – from Carlisle to Canterbury, from Newquay to Newcastle. Some won’t mourn levels, especially those who have seen them misused. But the level structure focused attention on what pupils need to learn, on learning outcomes, and on progression towards those outcomes.

TGAT recognised that “many schools have established common systems for use throughout the school”, but teachers’ use of assessment was “limited by the range of instruments available, by the lack of time and resources required to develop new instruments, and by the lack of external reference and a basis for comparison”, so the “needs of the less able, or the competence of the most able, have hitherto been under-estimated” (paras 7-8). And that, more or less, brings us back to OFSTED’s concerns.


Ofsted, school accountability and the most able students

By Blog Editor, IOE Digital, on 13 June 2013

Chris Husbands
OFSTED’s report on the progress made by the most able children in non-selective secondary schools has hit the headlines, finding that more than a quarter (27%) of previously high-attaining pupils had failed to achieve at least a B grade in both English and Maths. For the Daily Mail, this is evidence of the failure of the comprehensive system; the Daily Telegraph reported calls for secondary schools to set pupils from the beginning of year 7. The report is in practice – as reports always are – more complex than the press release headlines, but it still makes sobering reading: a significant number of those who do exceptionally well at the age of 11 do not perform to expectation by the age of 16.
The first observation to make is that whilst the report focuses on non-selective (comprehensive) schools, it includes some glancing references to selective (grammar) schools that suggest all is not well there either: in comprehensive schools, 35% of those who secured Level 5 or above in both English and Maths went on to secure an A or A* at GCSE, whereas the figure was 59% in grammar schools. But this means that 41% of those who secured a Level 5 at age 11 and went on to selective secondary education did not secure an A or A* at GCSE.
For over 20 years, we have held English secondary schools to account based on the proportion of 16 year olds who move across a threshold of GCSE grade C or above. In accountability terms, there are no further incentives for schools to address the needs of their highest attaining young people, whereas the threshold puts every pressure on schools to concentrate on middle attainers. In these circumstances, it’s not terribly surprising that the needs of the highest – and, indeed, the lowest – attainers may have been neglected.
Much of the press debate has focused on the issue of setting or – a different concept entirely – streaming, arguing that grouping children by ability would address the problem. In fact, the evidence is much more nuanced. In practice, all classes turn out to be mixed-attainment classes – the only point at issue is the breadth of the attainment span in any given class. Once this point is accepted, the issue is about how teachers provide for pupils of varying talents and attainment, and, though it has barely been reported, the OFSTED report stresses the importance of well-focused teaching and the identification and tracking of individual pupils.
And there’s a further point: over the same twenty-year period, policy and press discussion has tended to divide schools into “successful” and “failing” schools. The OFSTED report on higher attainers demonstrates that it’s a lot more complex than this: it turns out that “successful” schools are often no more successful in meeting the needs of very high attaining pupils than less successful schools. And, for all the difference between comprehensive schools and grammar schools, if grammar schools are not securing the highest grades for two-fifths of their highest attainers, the observation holds there: they, too, are just not doing well enough with higher attainers. Put slightly differently, it does not matter much which school you go to, but it may matter a great deal who teaches you when you get there. In English education, within-school variations in pupil attainment are more significant than between-school variations.

The predictive value of GCSEs and AS-levels: what works for university entrance?

By Blog Editor, IOE Digital, on 23 May 2013

Chris Husbands
Key Stage 4 and 5 qualifications are again at the centre of a controversy: which are most useful for fair university admissions – GCSEs or AS-levels? This matters because the DfE has announced that AS-levels are to become a standalone qualification, rather than the first half of pupils’ A-level results. The DfE argues that decoupling the AS in this way will put an end to time-consuming assessment in Year 12 that takes time away from teaching and learning. It is relaxed about the change, but some universities – most notably Cambridge University – beg to differ.
Cambridge has calculated that apart from the case of Mathematics, a pupil’s performance at AS-level provides a “sound to verging on excellent” indicator of Tripos (BA degree) potential across all its major subjects. STEP, an advanced Mathematics assessment, provided a better indicator in Maths. GCSEs, by contrast, were found to be less effective: around 10% of Cambridge entrants who apply with A-levels present very strong AS performance despite less impressive GCSE performance, and around three-quarters of this group are from state schools and colleges. On that basis, Cambridge claims that the loss of AS-levels will impact on student choice, flexibility and the opportunity for all pupils to apply to university with confidence.
The DfE did its own number crunching. It argued that GCSEs were accurate predictors of university outcomes in 69.5% of cases, and knowing both GCSE and AS results improved the accuracy of the prediction only slightly – to 70.1%. On this basis, it concluded that the added value of AS results for university admissions was very low. What should we make of this disparity, which was analysed in more detail by FullFact?
The two calculations are based on very different methodologies. Cambridge’s sums were based on just the students who were successful in its admissions process – a select group – whilst the DfE’s data drew on a much larger dataset of some 88,000 students. But there were also important differences in the granularity used by Cambridge and the DfE. As input data, the DfE used the overall grades (A, B, C, etc.) secured in GCSE and AS examinations, whereas Cambridge used the much finer grained data of UMS scores on AS units. Universities routinely receive UMS scores, though few in practice make use of them. For outcome data, the DfE again used an overall score – looking at whether students in the global dataset secured a 2:1 or above in 2011 – whereas Cambridge used the results on Part I Tripos examinations between 2006 and 2009. Moreover, the DfE used a single score across all subjects to see whether GCSE and AS results overall were good predictors in general, whereas Cambridge compared GCSE/AS and Tripos scores on a subject-by-subject basis.
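The granularity point is easy to demonstrate in the abstract. The sketch below uses entirely synthetic data, and makes no claim about the real Cambridge or DfE analyses, but it shows how banding a fine-grained score into a handful of grades discards predictive information:

```python
# Toy illustration of the granularity point: banding a fine-grained
# score into letter grades throws away predictive information.
# Everything here is synthetic; it models no real qualification.
import random

random.seed(42)
N = 20_000
pupils = []
for _ in range(N):
    ability = random.gauss(0, 1)
    ums = ability + random.gauss(0, 0.5)           # fine-grained score
    grade_band = round(ums)                        # coarse letter-grade proxy
    got_21 = (ability + random.gauss(0, 0.8)) > 0  # degree outcome
    pupils.append((ums, grade_band, got_21))

def accuracy(predictor_index: int) -> float:
    # Predict a 2:1 whenever the predictor is above its median value.
    values = sorted(p[predictor_index] for p in pupils)
    median = values[N // 2]
    hits = sum((p[predictor_index] > median) == p[2] for p in pupils)
    return hits / N

print(f"coarse grades : {accuracy(1):.1%} correct")
print(f"fine-grained  : {accuracy(0):.1%} correct")
```

On this toy data the fine-grained score predicts the outcome a few percentage points better than its banded version: one illustration of why analyses built on UMS scores and analyses built on letter grades can reach different conclusions from the same underlying performance.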
This is complex stuff. Obviously, we all want policy to be informed by the most robust analysis possible; analysis that is as fine-grained as possible, makes full use of available records and takes account of important variables such as, in this instance, subject and institutional differences. But that is still a major challenge for policy and practice. What is also at stake is qualifications policy, which needs to serve stakeholders beyond the higher education sector.
Perhaps the elephant in the room is the continuing lack of real transparency regarding university admissions. As the debate between Cambridge and the DfE rumbled on, the Higher Education Policy Institute annual conference was hearing about just how in the dark schools feel when it comes to the admissions process – just which types of information do tutors take notice of and prioritise? Why such apparent differences across institutions? Tutors may use prior attainment at Key Stage 4 and/or 5; they may use the personal statement; they may use academic and other references; some will interview candidates and run other aptitude tests. But few universities state publicly the significance they attach to each source of information. If we had more robust data on the predictive value of different factors – at national level – that might help to pave the way for greater consistency and transparency in admissions, and help pupils in choosing which qualifications are right for them.