Session Information
23 SES 05 A, International Knowledge Assessment and National Reforms
Paper Session
Contribution
PISA rankings have assumed significance in many countries around the world. Within Europe, their influence has been considerable (Grek, 2007, 2009). Finland’s success has made it a Mecca for ‘edu-tourism’. Poland overhauled its stratified system in favour of a comprehensive one, based on PISA advice (World Bank, 2010). Germany went into ‘PISA shock’ after its poor showing in PISA 2000, which generated considerable media attention and triggered a raft of reforms (Ertl, 2006). Beyond Europe, PISA results have influenced curriculum changes in Korea (OECD, 2010). In Australia, the ambition of being placed in the ‘top five’ of the PISA rankings is now enshrined in the Australian Education Act (Gorur & Wu, forthcoming).
Given the widespread influence of these rankings, PISA has also been critiqued widely. One set of critiques has been of a technical nature: PISA’s accuracy, the models that underpin the survey, the indicators used to measure such things as social and cultural capital, some of the test items themselves, and its sampling procedures have all been questioned and debated (Bracey, 2008; Hopmann & Brinek, 2007). For these critics, the objective is to enhance the accuracy of PISA through better mathematics.
A great deal of attention has also been directed at the use and misuse of PISA in politics and policy. Policy makers’ use of PISA to introduce or justify reforms has been widely commented upon (Alasuutari & Rasimus, 2009). Equally, the harmonisation of policy and a global convergence on a narrow set of ideas about what constitutes learning are said to result from the widespread use of PISA (Rizvi & Lingard, 2009). Concern has also been expressed about the growing influence of transnational bodies such as the OECD through the benchmarking and standardising efforts of PISA (Stronach, 2010).
In this paper, we offer a different type of critique, based on an in-depth examination of the extent to which the ‘average performance scores’ (on which rankings are based) can inform a nation’s reform strategy. Using examples from Australian data for the purposes of illustration, we go beyond the average scores to examine three aspects of Australia’s performance in PISA: using a different unit of analysis, examining rankings by item content, and examining rankings by test completion. Based on this analysis, and drawing on data from interviews with measurement and policy experts, we show how uninformative and even misleading the ‘average performance scores’ on which the rankings are based can be. We explore how a more nuanced understanding would point to quite different policy actions. Taking the PISA data and Australia’s ‘top five’ ambition seriously, we argue that neither the rankings nor such ambitions should be given much credence.
This research is the result of a collaboration between a statistician and a sociologist of measurement. It combines statistical analysis with ethnographic material, including in-depth interviews with policy makers and measurement experts. These interviews stretch over several related studies and are used mainly as triggers for thinking, and to gain an understanding of the practices and problems faced in measurement and in policy. Broadly informed by concepts from science and technology studies (STS; Latour, 2005), we eschew a division between the technical/scientific and the social/political, treating PISA as a techno-social object. The critique itself, therefore, is neither purely technical nor purely ideological, but a hybrid.
This paper will be of interest to scholars of PISA and education policy in Europe and elsewhere, because the Australian example illustrates phenomena that apply to the PISA data of any nation. Moreover, the type of sociology offered here should interest European and other scholars, as it is both novel and potentially effective in encouraging a change in policy practices.
Method
Expected Outcomes
References
Alasuutari, P., & Rasimus, A. (2009). Use of the OECD in Justifying Policy Reforms: The case of Finland. Journal of Power, 2(1), 89-109.
Bracey, G. W. (2008). The Leaning (Toppling?) Tower of PISA? Principal Leadership, 9(2), 49-51.
Ertl, H. (2006). Educational standards and the changing discourse on education: The reception and consequences of the PISA study in Germany. Oxford Review of Education, 32(5), 619-634.
Grek, S. (2007). ‘And the winner is…’: PISA and the construction of the European education space. Paper presented at the “Advancing the European Education Agenda” European Education Policy Network Conference, Brussels and Leuven, Belgium.
Grek, S. (2009). Governing by numbers: The PISA ‘effect’ in Europe. Journal of Education Policy, 24(1), 23-37.
Hopmann, S. T., & Brinek, G. (2007). PISA According to PISA - Does PISA Keep What It Promises? In S. T. Hopmann, G. Brinek & M. Retzl (Eds.), PISA According to PISA - Does PISA Keep What It Promises? Berlin: LIT Verlag Dr. W. Hopf.
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
OECD. (2010). PISA 2009 Results: Learning Trends (Vol. V). Paris: OECD.
Rizvi, F., & Lingard, B. (2009). Globalizing Education Policy. London: Routledge.
Stronach, I. (2010). Globalizing Education, Educating the Local. Oxon and New York: Routledge.
World Bank. (2010). Successful Education Reform: Lessons from Poland. Knowledge Brief (Europe and Central Asia ed., Vol. 34).