Session Information
99 ERC SES 04 I, Assessment, Evaluation, Testing and Measurement
Paper Session
Contribution
Education across the globe, including in Europe, the United Kingdom and Australia, has seen a resurgence of high-stakes testing, driven by outcome-defined policies and competing pressures, including the ongoing demands of accountability, performativity, and the rigid alignment between high-stakes exams and curricula (Lingard & Sellar, 2016; Löfstedt et al., 2020; Verger, Parcerisa & Fontdevila, 2019). This performance-based orientation has turned education into schooling. Kemmis et al. (2014) describe schooling as the opposite of education: a ‘technical tool’ (p. 25) used to impart and monitor information as outcomes. These pressures shape the objectives, methods, and outcomes of assessment, as well as teachers' pedagogical assessment practices (Cairns, 2020). Given current political preferences for teacher accountability and measurable outcomes, students undertake multiple assessments measuring their learning at single points in time, each affecting their depth of understanding, engagement with the subject, and sense of who they are as a student (Andrade & Brookhart, 2020). Further, teachers' pedagogical practices may be reshaped by the effects of high-stakes testing as they move from collaborative work in the classroom to 'teach to the test' methods, further reducing student engagement (Resnick & Schantz, 2017). A substantial body of research has investigated the advantages of collaborative learning, promoting a constructivist approach to collaboration in the classroom; however, behaviourist methods still prevail when students are assessed. This discrepancy leads to the question: if collaboration is widely recognised as enhancing learning, why does the reliance on individual summative testing persist? Broader recognition of multiple assessment methods is required to improve engagement in high school STEM classes and abate the documented effects of individual summative testing. This leads to the research questions:
- How do students perceive collaborative testing?
- How do students and teachers discern collaborative testing's impact on student learning?
- What are teachers' perceptions of the utility of collaborative testing?
Perspective
Humans exist in a dynamic world: we are part of the world and, being of that world, we interact with other humans and non-humans. Participating in the environment entangles or intertwines us with the material and with others, all affecting each other as we come to understand our place in, and the meaning and knowledge of, the world (Barad, 2007; Murris, 2022; Plauborg, 2018). Knowledge is not separate from the learner; it is interwoven and interconnected, affecting and affected by interactions with both humans and non-humans (Barad, 2007; Gullion, 2018).
Teaching science involves a dynamic intra-mingling between students, teachers, surroundings, and the tools of the classroom. In this intra-play, students attain their understanding and knowledge of science. They do not do so alone; students do not sit apart, and they do not sit quietly; they participate, interact and enhance their understanding of science. These interactions influence our meaning-making (Koro-Ljungberg, 2016). The world acts upon the learner's thinking and actions just as the learner's thinking acts upon the world. There can be no separation between object and subject, between human and non-human; all are intertwined in the knowing. Barad (2007) termed this interconnection of knowing and being onto-epistemology.
Onto-epistemology does not separate object/subject, human/non-human, world/us, or knowledge/learning. This theory places equal importance on the material world: ‘matter matters’ (Barad, 2007). Matter is an active participant in the entanglement of meaning, enfolded alongside material and discursive practices that are constantly reconfiguring to forge reality (Gullion, 2018).
Method
This paper presents early data from a doctoral study exploring student and teacher perceptions of collaborative testing, both within collaborative pedagogy and as an addition to current practices of individual, competitive testing. The study design draws on diffractive ethnography to examine (i) teachers' perceptions of the utility of collaborative testing and (ii) students' and teachers' views on its effectiveness. Additionally, the study examines the effectiveness of assessing students' 21st-century skills while they test collaboratively. To address this novel approach to testing, the researcher used a multi-phase, collaborative practitioner inquiry involving eight teachers and the researcher in a reciprocal relationship. The discussion will include qualitative data gathered through interviews, focus groups, audio recordings of student testing groups, observations, and assessment tasks, outlining student and teacher perceptions of the efficacy of this novel assessment method.
Expected Outcomes
The literature demonstrates a supportive view of the pedagogical value of authentic summative assessments that utilise collaborative ideals to benefit student understanding (Rieger & Rieger, 2020). However, almost all research into collaborative testing has been conducted in undergraduate science classes; this diffractive ethnographic study therefore aims to broaden understanding and highlight alternative assessment choices that can enhance teachers' pedagogy and practice and student engagement. Of equal importance to improving teacher assessment strategies, the study will examine the feasibility of using a rubric to assess 21st-century skills such as collaboration, communication and social skills while students undertake collaborative testing.
References
References Andrade, H. L., & Brookhart, S. M. (2020). Classroom assessment as the co-regulation of learning. Assessment in Education: Principles, Policy & Practice, 27(4), 350-372. doi:10.1080/0969594X.2019.1571992 Barad, K. M. (2007). Meeting the universe halfway quantum physics and the entanglement of matter and meaning. Durham: Duke University Press. Cairns, R. (2020). Exams tested by Covid-19: An opportunity to rethink standardized senior secondary examinations. PROSPECTS. doi:10.1007/s11125-020-09515-9 Gullion, J. S. (2018). Diffractive ethnography : social sciences and the ontological turn. New York, NY: Routledge. Kemmis, S., Wilkinson, J., Edwards-Groves, C., Hardy, I., Grootenboer, P., & Bristol, L. (2014). Praxis, Practice and Practice Architectures. In Changing Practices, Changing Education (pp. 25-41). Singapore: Springer Singapore. Koro-Ljungberg, M. (2016). Reconceptualizing Qualitative Research: Methodologies without Methodology. doi:10.4135/9781071802793 Löfstedt, P., García-Moya, I., Corell, M., Paniagua, C., Samdal, O., Välimaa, R., & Rasmussen, M. (2020). School satisfaction and school pressure in the WHO European region and North America: an analysis of time trends (2002–2018) and patterns of co-occurrence in 32 countries. Journal of adolescent health, 66(6), S59-S69. Murris, K. (2022). Karen Barad as Educator. In Karen Barad as Educator, Agential Realism and Education (1 ed., pp. XV, 95): Springer Singapore. Plauborg, H. (2018). Towards an agential realist concept of learning. Subjectivity, 11, 322-338. doi:https://doi.org/10.1057/s41286-018-0059-9 Rieger, G. W., & Rieger, C. L. (2020). Collaborative Assessment That Supports Learning. J. Mintzes & E. Walter (Eds.), Active Learning in College Science. Springer., https://doi.org/10.1007/978-3-030-33600-4_51 Resnick, L. B., & Schantz, F. (2017). Testing, teaching, learning: who is in charge? Assessment in education: principles, policy & practice, 24(3), 424-432. doi:10.1080/0969594X.2017.1336988 Verger, A., Parcerisa, L., & Fontdevila, C. (2019). The growth and spread of large-scale assessments and test-based accountabilities: A political sociology of global education reforms. Educational Review, 71(1), 5-30.