Before we compare mathematics, reading or science, here's some geography
By Blog Editor, IOE Digital, on 13 December 2012
Two major studies of international attainment in education have been published: the four-yearly Trends in International Mathematics and Science Study (TIMSS) and the five-yearly Progress in International Reading Literacy Study (PIRLS). Both have been extensively reported and tell quite different things. On international comparisons, says the BBC, England “has slipped in science, but is top 10 for primary and secondary maths”. The Daily Telegraph account, picking up on the science story, noted that England had fallen behind Slovakia and Hungary, whilst the BBC, looking at mathematics performance, declared that English pupils were amongst the best in the world.
Looking beyond the headlines, Chris Cook in the Financial Times extracted data on the performance of Chinese-heritage students in England from GCSE results and, with the statistical wizardry of which he is capable, showed that such students in England perform almost as strongly as students in high-performing Pacific Rim systems.
International rankings of educational performance are now a routine feature of the education news cycle. The OECD PISA rankings come around every three years, on a different cycle from PIRLS and TIMSS. My calculation is that edu-statistics geeks are going to have to wait until 2052 for the three to coincide – which makes 2052 like one of those rare astronomical events when several planets line up.
Of course, PISA, PIRLS and TIMSS measure slightly different things at different ages – so the discrepancies between their methods, foci and findings prompt endless debate about their significance. The indefatigable North American blogger Yong Zhao has explored the complexities of what the figures show and suggested that the surface interpretation of PISA, TIMSS and PIRLS is almost certainly misleading. Each, moreover, is taken by a changing set of countries, so that arguments about whether performance is rising or falling are endless.
But there is another, intriguing debate to be had about PISA, PIRLS and TIMSS. In this post so far I have written about national performance. But all three studies draw on samples, and the sampling frame is different. So PISA compares, for example, Korea, which is a country, with Hong Kong and Macau, which are special administrative regions, with Ontario, which is a province in Canada. TIMSS includes the results for Massachusetts and Florida, which are not countries or provinces but States of the Union. This is a puzzle for comparison. We know, for example, in England, that schools in London out-perform schools nationally – though they have not always done so – and one report (again in the Daily Telegraph) suggested that the Department for Education in England is considering entering each English region separately in PISA 2015. But English regions are neither provinces, nor states, nor administrative regions, nor countries: they are geographical agglomerations of local authorities.
In most descriptions of international performance, the comparisons are said to be not between countries but between “jurisdictions”. But educational jurisdictions are rarely defined, and sloppy comparisons follow between quite different entities. An educational jurisdiction must be a unit which shares some characteristics: for starters, common curriculum and assessment standards; teacher recruitment, development and remuneration policies; and overall system development policies. It would make little sense to break a jurisdiction (for example, England) which shares these characteristics into smaller units – although that is exactly what PISA does in publishing results for Shanghai rather than for China.
It makes sense to treat Hong Kong separately from Shanghai: they are very different education systems with different histories, linguistic traditions and educational structures. It probably makes sense to deal with Ontario differently from Alberta in PISA: in Canada, education is firmly a provincial responsibility. American states are a different matter, however: as increasing federal funds are allocated to States conditional on conformity with US-wide educational requirements, the responsibilities are shifting. But even national comparisons are fraught with difficulty: is it reasonable to compare New Zealand (population of 4 million) with Japan (population of 120 million)? The conventional argument is that Asian jurisdictions – with often highly centralised education systems in cultures which place a high premium on education – do exceptionally well in mathematics assessment. A cheekier reading of PISA might suggest that smaller jurisdictions do better than larger jurisdictions.
There is a vast amount to be learnt from international comparisons of educational performance, and many countries have reported “PISA shock” when results suggest hitherto unexpected weaknesses. But it’s worth remembering that the unit of comparison is often complex. International comparison can be full of pitfalls – and one of these is “what is a jurisdiction?”