Why we should care about international tests
By Mary Richardson and Tina Isaacs, 7 December 2020
Governments around the world now agree that international comparisons of educational achievement are worth having: they provide extra data to enhance policy-making and practice. Indeed, pupils in more than half the world’s countries take part in international large-scale assessments (ILSAs) such as PISA, and the involvement of the majority of the world’s most advanced economies assures their continued popularity.
Tomorrow sees the publication of results from one of these programmes: England’s will appear in the TIMSS National Report for England. (TIMSS, the Trends in International Mathematics and Science Study, compares children’s knowledge of maths and science at ages 10 and 14.) Much excitement always accompanies the publication of the international and national reports for the ILSAs, because each participating country wants to know where it features in what are essentially global education league tables.
In taking part in ILSAs run by the Organisation for Economic Co-operation and Development (PISA) and the International Association for the Evaluation of Educational Achievement (TIMSS and PIRLS), countries acknowledge that knowledge and skills – human capital – are strategic resources that underpin a nation’s economic performance, and that educational achievement is the key to economic success.
Nations see themselves competing in global markets, with educational achievement one of the main tools for staying, and succeeding, in the race. So measuring educational outcomes and using those results to inform educational policy change have become commonplace.
Countries use the test results to better understand student learning and school performance, given the breadth of information available: performance in mathematics and science topics, as well as pupils’, teachers’ and headteachers’ attitudes and opinions. However, the commitment to ongoing participation in ILSAs also suggests continued acceptance of summative testing as the gold standard for evidencing good practice in teaching and learning.
It is important to remember that being part of the ILSAs is not only about the test results. Participation represents a substantial financial and workforce investment for every country that signs up. There is no doubt that ILSA results influence policy and practice in participating countries, so, not surprisingly, claims that they can improve standards and augment policy are often hotly contested (see, for example, Wagemaker et al, 2020; Waldow and Steiner-Khamsi, 2019). Can these test results really tell us about teaching, learning, knowledge, assessment and pedagogy? Are these realistic assumptions?
We have had the fascinating experience of co-leading the analysis of TIMSS data for England, compiling some comparative information and then reporting initial findings. The actual results are embargoed until tomorrow, so we can’t talk right now about how England has ‘done’ compared to other countries, but we can reflect on the process of working on an ILSA through an entire cycle.
What strikes us both as most important is how little the facts and figures actually tell us. There are raw data on who achieves well and who does less well; on gender differences; on skills and knowledge differences; on differences between native and non-native speakers; on socio-economic differences; on a variety of school factors; and on home life – but at the moment these are just numbers in tables. As educational researchers, we are keen to know not only the what but also the how and why. These data offer only a starting point – it will take time and effort to explore them in sufficient detail to move beyond ‘who did better than we did’ and ‘who did not’ in this cycle. This is where the value of educational research becomes evident: tomorrow is when the research really begins, with a deep dive into the ‘whys’.