Session Information
09 SES 10 C, Assessments as Opportunities for Learning and Instruction
Paper Session
Contribution
A continuously growing body of research points to the value of peer assessment (PA) both as an assessment tool (e.g. Cheng & Warren, 1997) and as a learning tool (e.g. Topping, 1998). Peer feedback can be seen as a specific approach to PA that aims to involve students in ‘assessment for learning’ by asking them to provide fellow students with opinions, ideas and suggestions for improvement (Black & Wiliam, 1998). A growing body of research emphasizes that feedback has a powerful impact on both learning and performance (Nelson & Schunn, 2008). Interestingly, the average effects of feedback are among the highest in education, but also among the most unpredictable in their influence (Hattie & Gan, 2011). To this day, however (Evans, 2013), research on peer assessment in higher education remains “very variable in type and quality, scattered and fragmentary in nature” (Topping, 1998, p. 267). Being an example of a more complex learning task that requires high-level cognitive processing (King, 2002), PA is a process in which students assess a peer’s performance, which yields numerous cognitive rewards for the assessee as well as the assessor (Topping, 1998). As high-level PA processes hardly happen spontaneously (Kollar & Fischer, 2010), previous literature recommends the use of collaboration scripts, as they focus on socio-cognitive structuring (Kollar, Fischer, & Hesse, 2006) by specifying, scheduling, and delegating roles and activities in the collaborative process of PA (e.g. Fischer, Kollar, Stegmann, & Wecker, 2013). The effectiveness of different scripting techniques appears to be an important field of study (Kollar & Fischer, 2010).
It is within this frame that the main aim of the present study can be situated: “How can we increase product improvement and peer feedback quality by structuring the peer assessment process?” With respect to this question, several suggestions have been made in the literature. Previous research highlights the need for structure and support to ensure effective feedback (Poverjuc, Brook, & Wray, 2012) and the benefits of offering structure in a CSCL environment (Strijbos & Weinberger, 2010). The instructor could structure the peer assessment process by providing more detailed instructions on expected performance (Kollar, Fischer, & Slotta, 2007), e.g. by providing guiding questions to support the assessor while giving peer feedback (Gielen & De Wever, 2012). One of the remaining questions, however, is how detailed the script should be and what level of structuring is most appropriate (cf. the ‘script granularity’ concept of Kobbe et al., 2007) to benefit students’ product improvement and their peer feedback quality in this process.
Therefore, the present research examines the impact of structuring the PA process in a wiki-based CSCL environment on product improvement and on the quality of students’ feedback. The following hypotheses are examined: a higher level of structuring of the PA process will lead to (H1) higher product quality and (H2) higher feedback quality.
Method
Expected Outcomes
References
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy, and Practice, 5(1), 7–74.
De Wever, B., Van Keer, H., Schellens, T., & Valcke, M. (2007). Applying multilevel modelling to content analysis data: Methodological issues in the study of role assignment in asynchronous discussion groups. Learning and Instruction, 17(4), 436–447.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70–120.
Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66.
Gielen, M., & De Wever, B. (2012). Peer assessment in a wiki: Product improvement, students’ learning and perception regarding peer feedback. Procedia - Social and Behavioral Sciences, 69, 585–594.
Hattie, J., & Gan, M. (2011). Instruction based on feedback. In P. Alexander & R. E. Mayer (Eds.), Handbook of research on learning and instruction (pp. 249–271). New York: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
King, A. (2002). Structuring peer interaction to promote high-level cognitive processing. Theory Into Practice, 41(1), 33–39.
Kobbe, L., Weinberger, A., Dillenbourg, P., Harrer, A., Hämäläinen, R., Häkkinen, P., & Fischer, F. (2007). Specifying computer-supported collaboration scripts. International Journal of Computer-Supported Collaborative Learning, 2(2), 211–224.
Kollar, I., Fischer, F., & Hesse, F. W. (2006). Collaboration scripts – a conceptual analysis. Educational Psychology Review, 18, 159–185.
Kollar, I., Fischer, F., & Slotta, J. D. (2007). Internal and external scripts in computer-supported collaborative inquiry learning. Learning and Instruction, 17, 708–721.
Kollar, I., & Fischer, F. (2010). Peer assessment as collaborative learning: A cognitive perspective. Learning and Instruction, 20(4), 344–348.
Nelson, M. M., & Schunn, C. D. (2008). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science, 37(4), 375–401.
Strijbos, J.-W., & Weinberger, A. (2010). Emerging and scripted roles in computer-supported collaborative learning. Computers in Human Behavior, 26(4), 491–494.
Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276.