Paul Morris
The results of PISA 2012 will be announced tomorrow. Billed in the media as “education’s Olympics”, it compares the academic achievements of 15-year-old pupils around the world, ranking countries by performance and generating a compelling narrative of “winners” and “losers”.
From previous exercises, we can expect an epidemic of reform proposals as policy makers in England, and those who seek to influence policy, attempt to identify evidence on which countries are successful, what works elsewhere, and which features can be borrowed to improve the lot of low performers. These questions will be pursued under the rationale of developing (or maintaining) ‘world class’ schools.
It will be portrayed as an objective, scientific exercise in evidence-based policy making, a straightforward case of learning from the best. Yet, in contrast to other countries such as Germany, this has not characterized the prevailing approach in England to date. Rather, the PISA data, along with other international tests, have most often been selectively raided to defend and promote the ideologies and preferences of those who make, and seek to influence, educational policy. The appropriate maxim is less “show me the evidence and I’ll decide policy” and more “I know the right policy; I’ll find the evidence for it.”
I illustrate this point with two examples of how such tests have been appropriated to support existing views, opinions and prejudices about schooling, and add a note of caution.
The first example comes from the recent release, on 8 October, of the results of the Organisation for Economic Cooperation and Development’s (OECD) Programme for the International Assessment of Adult Competencies (PIAAC). There was significant media coverage of the poor results recorded for the UK’s 16- to 25-year-olds, and pundits advanced the following propositions to explain them: young people giving up mathematics and English too early; the absence of performance-based pay for teachers; the poor quality of teachers; the lack of discipline in schools; the dumbing-down of the curriculum; low investment in further education; the reduction of resources for young adults; and grade inflation.
Some of these claims were supported by reference to the opposite feature operating in a country that performed well. If, however, one ventures into the PIAAC report, none of the above school-focussed “explanations” features in the OECD’s attempt to explain the results for the UK. The report identified social background (the children of parents with low levels of education have significantly lower proficiency than those whose parents have higher levels of education) and the nature of employment (low-skilled jobs limit the development of skills) as the major influences on levels of performance in the UK.
The second case is the Coalition Government’s 2010 White Paper on Education, which was based on improving England’s position in international league tables by learning from high performers. As the Foreword explains:
… Alberta to Singapore, Finland to Hong Kong, Harlem to South Korea – have been our inspiration.
Two main problems were identified: the quality of teachers and the lack of school autonomy. Where the White Paper argues the need for high-quality teachers, it relies heavily on a report by McKinsey & Company that cites as its evidence Finland, Singapore, South Korea and other high-performing countries in the 2006 PISA exercise. However, when the McKinsey report later promotes the value of multiple and shortened routes into teacher education, the evidence to support this claim is not derived from any high-performing country in PISA. Rather, it uses evidence from charter schools in the USA and the Teach First scheme in England.
The White Paper uses the same strategy, promoting the need for multiple, shortened and school-based teacher education programmes, which are not a feature of the high performers. With regard to school autonomy, the White Paper bases its claim on the assertion that “everyone knows it is effective”, studiously avoiding reference to the McKinsey report, which argued that there was no connection between school autonomy and pupil performance. Subsequently, in 2010, the OECD argued that there is no simple relationship, and that forms of autonomy designed to foster competition between schools, as in England, did not improve results.
Finally, a note of caution before the circus rolls into town. I have written elsewhere about how some consultancies and think tanks tend to use these reports to promote competition, markets and a small role for the state, in the process making some dubious claims about causality.
We would do well to avoid getting sucked in by the hysteria, and be wary of methodologically and ethically dubious “cure-alls” which will be flogged by educational pundits.