“We Have A Lot To Do, If We Are To Succeed”. Development Of Accessible And Reciprocal Assessment Practices
Author(s):
Lisbeth Gyllander (presenting / submitting)
Conference:
ECER 2015
Format:
Paper

Session Information

09 SES 01 C, Assessment Practice and Competency Development: Roles and Perceptions of Students and Teachers

Paper Session

Time:
2015-09-08
13:15-14:45
Room:
334. [Main]
Chair:
Mary-Anne Holfve-Sabel

Contribution

An accountability and performativity wave is currently sweeping the world. International studies and national evaluations are used in political debate, and many school systems are influenced by concepts, models and various forms of protocol drawn from a new public management discourse. However, identifying what caused the downward trend in students' performance does not answer the question of what needs to be done to reverse it. Using assessment for learning may be one way to enhance students' learning. Assessment for learning has been an area of research interest for many years, yet although the research literature is extensive, few studies focus on student perspectives.

This study brings forth students’ voices on how to make assessment practices accessible, tailored to meet students’ needs, and supportive of their learning. The study also problematizes the application of assessment methods without prior consideration of the embedded structures and pre-conditions that enable or constrain assessment practices. These structures and pre-conditions emanate from several practices integrated in the school context. Besides focusing on teaching and learning, this study therefore also ‘zooms in’ on the practices of professional learning, leading and researching.

The study aims to answer the questions:

  • What enables or constrains the development of accessible and reciprocal assessment practices?

  • In what ways may interconnected practices enable or constrain the development of accessible and reciprocal assessment practices?

The theoretical framework consists of a nomological network of assessment, practice and validity theories that together form a construct of valid assessment practices. Empirical data are validated against the construct in order to understand what enables or constrains students and teachers when attempting to develop accessible and reciprocal assessment practices.

The construct builds on prior research that highlights the importance of clarified goals; feedback at the process or regulatory level, based on a solid diagnosis of student performance in relation to intended learning outcomes; and comprehensive suggestions on how to proceed. Drawing on the intended and non-intended impact of assessment on students, access and agency in assessment practices are also brought into focus.

Assessment practices are here understood as practices closely connected to planning, enacting and evaluating teaching and learning activities. Assessment consists of interactive, dynamic and collaborative activities that are integrated in teaching activities and connected to classroom practice. These activities affect our understanding of learning, the learner and what is supposed to be learnt. Teachers need to clarify what students are expected to do with their abilities, and to create qualitative learning practices in which teaching and learning activities stimulate these abilities and enhance student participation. Assessment practices therefore need to be accessible to students and tailored to meet students’ needs.

When developing practices, the structures and pre-conditions in which those practices are embedded need to be considered. Practices can be understood as human activities in which language, activities and relationships hang together in specific ways. A practice is located in space and time and is dependent on cultural-discursive, material-economic and socio-political arrangements that enable or constrain the activities taking place within it. Practices (e.g. teaching) can be affected by interconnected practices (e.g. professional learning, leading), and are enmeshed with pre-conditions that enable and constrain them.

Validity theories give guidance on whether interpretations, decisions and actions in assessment practices are valid. Validation relates to the intent to improve learning, whereas validity depends on how well those intentions are achieved. Within this study, validation is used to examine the intention to enhance learning through assessment practices. The validity framework highlights the enablements and constraints that affect students’ possibilities to access assessment practices. Valid assessment practices depend on the degree to which students are able to reach the learning objectives, and on the intended and non-intended impact of assessment on students’ learning.

Method

Action research serves as the overarching methodological frame. Action research falls within the field of practice- or site-based educational development, in which attempts are made to develop educational practices by involving teachers, students and researchers. Data derive from focus group interviews, participatory observations and dialogical meetings with students and teachers at a secondary school in Sweden.

Seven focus group interviews were initially conducted with students to identify their experiences and perceptions of assessment. The interviews were semi-structured and aided by initial, comprehensive and in-depth questions. During the interviews, the researcher acted as moderator of the ongoing dialogue and introduced new aspects when necessary. Preliminary findings were discussed with the students to validate their adequacy. Development areas were identified through dialogue with the teacher team and put into action.

Subsequent studies in the four-year project used dialogical meetings to capture thoughts, possibilities and difficulties regarding assessment practice. Four rounds of dialogical meetings were conducted with students and teachers. The meetings focused on various aspects of assessment and on collaboration in assessment practice. Themes were selected from an ongoing, parallel observational study, which also provided complementary empirical data. Narratives and photos from the observations, as well as pedagogical plans, formed the basis and inspiration for the dialogues. Mind mapping was used as a supportive method to summarize and visualize the ongoing dialogues.

Data were analyzed in a four-step process. Content analysis was used initially to identify themes capturing students' comprehension of assessment. A complementary mind map aided verification of consistency between meaning units and themes.
Data from the interviews and dialogical meetings with students and teachers were then put through a second screening using Stobart’s validity framework for formative assessment. By connecting four assessment aspects with students’ statements on assessment practices, various factors that shaped these practices emerged. In the third step, an empirically derived model was drafted. The Interconnecting Data Analysis model (IcDA-model) combined themes from the initial analyses with assessment aspects from Stobart’s validity framework. Pre-conditions within assessment practices were brought into the model to visualize enablements and constraints within assessment practices. The IcDA-model enhanced understanding of the interdependency between themes, aspects and pre-conditions. In the fourth step, data were analyzed from an overall perspective using the theory of practice architectures combined with validity theories. The analysis brought forth how cultural-discursive, socio-political and material-economic arrangements, stemming from interconnected and interdependent practices, enabled and constrained assessment practices and the development of these practices.

Expected Outcomes

Findings indicate that students’ understanding of, and access to, assessment practices are dependent on cultural-discursive, socio-political and material-economic arrangements. Viewed from a classroom context, students’ understanding and access depend on relevant teaching, learning and assessment activities; student-friendly language related to goals and feedback; and the relationship between students and teachers. Students construct meaning out of relevant learning activities. Teaching and learning activities, as well as assessment methods, therefore need to be chosen on the basis of a solid understanding of students’ needs, preconceptions and understanding, as well as of the pre-conditions in the specific context. Findings also point to the significance of enhancing student voice and agency in assessment practices.

The analysis visualizes how applying assessment methods is not in itself sufficient to construct valid assessment practices that affect students’ learning. Findings show how, for example, curricula, local educational goals and directives enable and constrain assessment practices and the development of those practices. Assessment practices are embedded in a school context shaped by arrangements from several interconnected and interdependent practices: the practices of teaching, student learning, professional learning, leading and researching. Development of assessment practices thus needs to be placed within the overall context of practices at the specific school.

Findings indicate that accessible and meaning-making assessment practices are not just a question of applying methods. Sustainable development of assessment practices may depend on the simultaneous development of all practices at the specific school, and the interdependencies between the practices that interconnect in the local school context need to be taken into consideration. Pre-conditions differ depending on the local context, and methods need to be chosen with consideration of the specific pre-conditions and of students’ understandings and needs. Hence, applying methods without prior analysis of pre-conditions and needs may lead to non-intended impact on students’ learning.

References

Black, P. & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31.
Cronbach, L. J. (1971). Test validation. In R. L. Thorndike (Ed.), Educational Measurement (2nd ed., pp. 443-507). Washington, D.C.: American Council on Education.
Gipps, C. (1999). Socio-cultural aspects of assessment. Review of Research in Education, 24, 355-392.
Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 17-64). Westport, CT: American Council on Education/Praeger Publishers.
Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1-73.
Kemmis, S. & Grootenboer, P. (2008). Situating praxis in practice: Practice architectures and the cultural, social and material conditions for practice. In S. Kemmis & T. J. Smith (Eds.), Enabling Praxis: Challenges for Education. Rotterdam: Sense Publishers.
Kemmis, S., Wilkinson, J., Edwards-Groves, C., Hardy, I., Grootenboer, P., & Bristol, L. (2014). Changing Practices, Changing Education. Singapore: Springer.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational Measurement (3rd ed., pp. 13-103). New York: American Council on Education/Macmillan.
Moss, P., Girard, B. J. & Haniford, L. C. (2006). Validity in educational assessment. Review of Research in Education, 30(1), 109-162.
Nicolini, D. (2013). Practice Theory, Work & Organization. Oxford: Oxford University Press.
Nusche, D., et al. (2011). OECD Reviews of Evaluation and Assessment in Education: Sweden. Paris: OECD.
OECD (2013). PISA 2012 Results in Focus: What 15-year-olds know and what they can do with what they know. Paris: OECD.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.
Schatzki, T. (2010). The Timespace of Human Activity: On Performance, Society, and History as Indeterminate Teleological Events. Lanham, MD: Lexington Books.
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagné & M. Scriven (Eds.), Perspectives on Curriculum Evaluation (AERA Monograph Series on Curriculum Evaluation). Chicago: Rand McNally.
Shuell, T. (1986). Cognitive conceptions of learning. Review of Educational Research, 56(4), 411-436.
Stobart, G. (2012). Validity in formative assessment. In J. Gardner (Ed.), Assessment and Learning (2nd ed., pp. 233-242). London: Sage Publications.
Taras, M. (2005). Assessment – summative and formative – some theoretical reflections. British Journal of Educational Studies, 53(4), 466-478.
Taylor, P., Fraser, B., & Fisher, D. (1997). Monitoring constructivist classroom learning environments. International Journal of Educational Research, 27, 293-301.

Author Information

Lisbeth Gyllander (presenting / submitting)
University of Gothenburg
Department of Education and Special Education
Tågarp

Update Modus of this Database

The current conference programme can be browsed in the conference management system (conftool) and, closer to the conference, in the conference app.
This database will be updated with the conference data after ECER. 

Search the ECER Programme

  • Search for keywords and phrases in "Text Search"
  • Restrict in which part of the abstracts to search in "Where to search"
  • Search for authors and in the respective field.
  • For planning your conference attendance, please use the conference app, which will be issued some weeks before the conference and the conference agenda provided in conftool.
  • If you are a session chair, best look up your chairing duties in the conference system (Conftool) or the app.