Session Information
09 SES 14 A JS, Assessment Practices and the Goals of Education: Some Key Issues (Part 2)
Joint Paper Session NW 09 and NW 13 continued from 09 SES 13 A JS
Contribution
This contribution seeks to probe some important issues that arose in a joint symposium of Networks 9 and 13 at ECER 2015, titled "Educational Goals and the PISA Assessments". On this occasion the scope is extended to include objective assessments more widely, not just the PISA instruments. Key research questions for this contribution include:
(1) What can validly be measured by objective assessment strategies – ranging from those used in everyday educational practice to those used by national and international educational bodies?
(2) What is worth measuring?
(3) How can we best measure what is worth measuring?
In relation to the first question, there have been recurring criticisms over the last decade-and-a-half that the theoretical frameworks underlying the assessment practices of the OECD and other international educational bodies are beset by shortcomings. For instance, it is alleged that the range of educational achievements measured is too narrow; that central educational goals become relegated or ignored; and that the energies of schools become concentrated on short-term fixes rather than on long-term educational improvements (Sjøberg, 2007; Jerrim, 2011; Meyer, Zahedi et al., 2014).
The second question, What is worth measuring?, raises in an acute way the rationale for assessment itself in educational practice, particularly the scope and defensibility of that rationale. It is a perennial and problematic question, but one that frequently gets decided in a crudely pragmatic way: what gets tangibly rewarded by the system is worth measuring; other things can be attended to when time permits. Notwithstanding its crudeness, this is not an unreasonable stance for practitioners and school leaders to take, especially where performance metrics have become prevalent in the conduct of educational practice.
The third question can be profitably tackled only when some educationally coherent and defensible answer, albeit a provisional one, can be given to the second. By a coherent and defensible answer is meant something that practitioners can recognise as a worthy candidate for their commitments and moral energies as educational practitioners.
In addressing these questions I propose to start with the familiar notion of measurable achievements in learning. What makes this notion most familiar – though sometimes uncongenial to teachers – is that the measurement is usually done by standardised tests, with results reported purely as numerical scores. This tends to restrict assessment to a single dimension. Yet achievement in learning can be more fully and meaningfully appraised under more headings than one. For instance, in an ongoing R&D project with schools at our university we use the following three headings:
(a) Advances in students’ achievements in learning (as measured by exams, tests, assignments)
(b) Advances in students’ attitudes toward learning
(c) Advances in students’ practices of learning.
Although teachers are well able to carry out assessments under (a), the tests are often marked by persons other than those who administer them – chiefly for reasons of objectivity and transparency. Teachers are uniquely well placed to carry out assessments under (b) and (c). Yet advances in students’ attitudes toward learning, and in their practices of learning, are often neglected where professional cultures are preoccupied with objective test instruments. Unfortunately, a heavy reliance on such instruments can have enduring negative effects on students’ attitudes toward learning and on their practices of learning.
Both of these latter dimensions need to be regularly monitored and evaluated if assessments of quality in education are to be meaningful. Cultivating a systematic capacity to do so with the required degree of objectivity offers some genuinely promising possibilities. I will comment more specifically on this key finding of the R&D programme mentioned above, “Teaching and Learning for the 21st Century” (TL21).
References
Aho, E., Pitkanen, K. & Sahlberg, P. (2006) Policy Development and Reform Principles of Basic and Secondary Education in Finland since 1968. Education Working Paper Series No. 2. Washington, DC: World Bank.
Dewey, J. (1938) Logic: The Theory of Inquiry. New York: Henry Holt and Company.
Hogan, P. (2010) The New Significance of Learning: Imagination’s Heartwork. London & New York: Routledge.
Hogan, P. et al. Website for the TL21 R&D programme: www.maynoothuniversity.ie/TL21
Jerrim, J. (2011) England’s “Plummeting” PISA Test Scores between 2000 and 2009: Is the Performance of Our Secondary School Pupils Really in Relative Decline? DoQSS Working Paper No. 11-09. London: Institute of Education, University of London.
Kuhn, T.S. (1970) The Structure of Scientific Revolutions, Second, Enlarged Edition. Chicago: University of Chicago Press.
Meyer, H-D., Zahedi, K. et al. (2014) Open Letter to Andreas Schleicher, OECD, Paris. Available at: http://bildung-wissen.eu/fachbeitraege/basistexte/open-letter-to-andreas-schleicher-oecd-paris.html
OECD (2013) PISA 2012 Results in Focus: What 15-Year-Olds Know and What They Can Do with What They Know. Available at: http://www.oecd.org/pisa/keyfindings/pisa-2012-results-overview.pdf
Sjøberg, S. (2007) PISA and “Real Life Challenges”: Mission Impossible? In Hopmann, S. (ed.) PISA According to PISA: Does PISA Keep What It Promises? Vienna: LIT Verlag.