Assessment Practices and the Goals of Education: Some Key Issues
Author(s):
Pádraig Hogan (presenting / submitting)
Conference:
ECER 2016
Format:
Paper (Copy for Joint Session)

Session Information

09 SES 14 A JS, Assessment Practices and the Goals of Education: Some Key Issues (Part 2)

Joint Paper Session NW 09 and NW 13 continued from 09 SES 13 A JS

Time:
2016-08-26
15:30-17:00
Room:
NM-F101
Chair:
Paulus Julius Smeyers

Contribution

This contribution seeks to probe some important issues that arose in a joint symposium of Networks 9 and 13 at ECER 2015, titled "Educational Goals and the PISA Assessments". The scope on this occasion is being extended to include objective assessments more widely, not just the PISA instruments. Key research questions for this contribution include:

(1) What can validly be measured by objective assessment strategies – ranging from those used in everyday educational practice to those used by national and international educational bodies?

(2) What is worth measuring?

(3) How can we best measure what is worth measuring?

In relation to the first question, there have been recurring criticisms over the last decade and a half that the theoretical frameworks underlying the assessment practices of the OECD and other international educational bodies are beset by shortcomings. For instance, it is alleged that the range of educational achievements measured is too narrow; that central educational goals become relegated or ignored; and that the energies of schools become concentrated on short-term fixes rather than on long-term educational improvements (Sjøberg, 2007; Jerrim, 2011; Meyer, Zahedi et al., 2014).

The second question, "What is worth measuring?", raises in an acute way the rationale for assessment itself in educational practice, particularly the scope and defensibility of that rationale. It is a perennial and problematic question, but it frequently gets decided in a crudely pragmatic way: what gets tangibly rewarded by the system is worth measuring; other things can be taken care of when time permits. Notwithstanding its crudeness, this is not an unreasonable stance for practitioners and school leaders to take, especially where performance metrics have become prevalent in the conduct of educational practice.

The third question can be profitably tackled only when some educationally coherent and defensible answer can be given to the second, albeit a provisional answer. By a coherent and defensible answer is meant something that practitioners can recognise as a worthy candidate for their commitments and moral energies as educational practitioners.

In addressing these questions I propose to start with the familiar notion of measurable achievements in learning. What makes this notion most familiar – but sometimes inimical to teachers – is that the measurement is usually done by standardised tests, with results reported purely as numerical scores. This tends to restrict assessment to one dimension. But achievement in learning can be more fully and meaningfully appraised under more headings than just one. For instance, in an ongoing R&D project with schools at our university we use the following three headings:

(a) Advances in students’ achievements in learning (as measured by exams, tests, assignments)

(b) Advances in students’ attitudes toward learning

(c) Advances in students’ practices of learning.

Although teachers are well able to carry out assessments under (a), the tests are often marked by persons other than those who administer them – chiefly for reasons of objectivity and transparency. Teachers are uniquely well placed to carry out assessments under (b) and (c). Yet advances in students’ attitudes toward learning, and in their practices of learning, are often neglected where professional cultures are preoccupied with objective test instruments. Unfortunately, a heavy reliance on such instruments can have enduring negative effects on students’ attitudes toward learning and on their practices of learning.

Both of these latter dimensions need to be regularly monitored and evaluated if assessments of quality in education are to be meaningful. Cultivating a systematic capacity to do so with the required degree of objectivity offers some promising possibilities. I will comment more specifically on this key finding of the R&D programme mentioned above, “Teaching and Learning for the 21st Century” (TL21).

Method

The contribution will begin by setting out broadly the contrasting perspectives on assessment that emerged during the initial exchange at ECER 2015: on the one hand, a stance representing a developmental approach to assessment – one governed by a “normal science” orientation (to use Thomas Kuhn’s term); on the other, a stance marked by a critical questioning of any paradigms that have become normalised, or widely institutionalised.

In moving on to address the three main research questions, a Socratic approach will be pursued – not so much as a method or technique, but more as an ethical-practical orientation. Put simply, this approach begins with the presupposition that “the unexamined practice is not worth pursuing” and retains a commitment to the priority of questioning. As a practical philosophical approach, however, it pushes beyond critique, recognising that all critique must – whether explicitly or implicitly – be for the sake of some better state of affairs than the one being critiqued. There is also a recognition that critique cannot have the last word; openness to further criticism and revision is crucial.

Where research practice is concerned, this ethical-practical orientation is embodied in a philosophically alert action research discipline (i.e. one that is both critical and self-critical). The knowledge claims yielded by such research do not answer to a criterion of conclusive objectivity, or to findings that are proven beyond doubt. Rather, the more appropriate criterion here is what Dewey called “warranted assertibility”. This orientation acknowledges the provisional standing of any findings or insights yielded by the research, and it announces its own hospitality to further questioning and critique aimed at identifying shortcomings and disclosing additional constructive possibilities.
Rather than seeking in educational practice the conditions for replication, such a research orientation seeks resonance and recognition in the professional outlooks of educational practitioners, including teachers, school leaders and school inspectors. When brought to words by practitioners, such recognition might take a form like the following: “There are fertile ideas here – ones that are well worth my efforts in my own practice.” This is an empirical warrant, but one that differs clearly from the criteria of verification and refutation that prevail in the more exact sciences. There is no suggestion, however, of an adversarial attitude to what has more customarily counted as “empirical” in educational research. Although keenly critical, the research orientation remains ecumenical where different research genres are concerned.

Expected Outcomes

Evidence to date suggests that a more inclusive approach yields fuller and richer data than that yielded by the prevalent assessment instruments used internationally. The three-dimensional approach involves students themselves more proactively in assessment decisions and actions. It also builds a sophisticated, incisive assessment capability on the part of teachers. Where evidence-gathering for statistical purposes is concerned, this capability includes the monitoring and recording of data on the three dimensions at regular intervals. The record, however, does not identify any student personally. Where schools become accomplished in this kind of data-gathering, more adequate and more meaningful assessment evidence is thus made available nationally. With care and effort, the approach could also be promoted internationally.

Data-gathering on three dimensions of progress, rather than on the traditional one dimension, also strengthens the capacity of schools – and particularly of school leaders – to engage with parents, inspectorates and educational policymakers. This strengthening naturally takes time and effort, but it does not involve a major extra commitment of resources. In fact, it could be a less costly option than the heavy reliance on surveillance and compliance which, with a few notable exceptions, has featured prominently in international educational reforms in recent decades. Inherent in such capacity-building is a more articulate and sure-footed role for teachers and school leaders. No less inherent is the cultivation of a more informed trust between teachers and the different parties they deal with: students, parents, inspectors, managerial bodies, policymakers. To proceed along such a path signifies a long-term research and development commitment in assessment along lines such as those in the Irish project reported above.
Without the success of initiatives like this, assessment is likely to be widely seen as something unwelcome that is done to students, whether by teachers, by state examination systems, or by powerful international bodies.

References

Aho, E., Pitkanen, K. & Sahlberg, P. (2006) Policy Development and Reform Principles of Basic and Secondary Education in Finland since 1968. Education Working Paper Series No. 2. Washington, DC: World Bank.

Dewey, J. (1938) Logic: The Theory of Inquiry. New York: Henry Holt and Company.

Hogan, P. (2010) The New Significance of Learning: Imagination’s Heartwork. London & New York: Routledge.

Hogan, P. et al. Website for the TL21 R&D programme: www.maynoothuniversity.ie/TL21

Jerrim, J. (2011) England’s “plummeting” PISA test scores between 2000 and 2009: Is the performance of our secondary school pupils really in relative decline? DoQSS Working Paper No. 11-09. London: Institute of Education, University of London.

Kuhn, T. S. (1970) The Structure of Scientific Revolutions, Second, Enlarged Edition. Chicago: University of Chicago Press.

Meyer, H-D., Zahedi, K. et al. (2014) Open Letter to Andreas Schleicher, OECD, Paris; available at: http://bildung-wissen.eu/fachbeitraege/basistexte/open-letter-to-andreas-schleicher-oecd-paris.html

OECD (2013) PISA 2012 Results in Focus: What 15-year-olds Know and What They Can Do with What They Know; available at: http://www.oecd.org/pisa/keyfindings/pisa-2012-results-overview.pdf

Sjøberg, S. (2007) PISA and “Real Life Challenges”: Mission Impossible? In Hopmann, S. (ed.) PISA According to PISA: Does PISA Keep What It Promises? Vienna: LIT Verlag.

Author Information

Pádraig Hogan (presenting / submitting)
National University of Ireland Maynooth, Ireland
