Session Information
Paper Session
Contribution
This study is framed within a research project that analyses how higher education students deploy self-assessment (SA) strategies and the factors that affect them. By SA we refer to “a wide variety of mechanisms and techniques through which students describe (i.e., assess) and possibly assign merit or worth to (i.e., evaluate) the qualities of their own learning processes and products” (Panadero et al., 2016, p. 804). Research on formative assessment has shown that SA is a strategy that can positively affect self-regulation (Yan, 2019) and achievement (Brown & Harris, 2013).
In this communication we present part of the results of a randomized experiment carried out in higher education. Specifically, we analyse how different types of feedback affect the strategies and criteria deployed by higher education students during an SA task.
We selected the type of feedback as a key component of the experiment because it is a powerful instructional practice that intertwines with self-regulation (Butler & Winne, 1995) and has a strong effect on academic achievement (Wisniewski et al., 2020). Understanding the effects of external feedback on students’ SA could help us integrate the two for instructional purposes. For this reason, we compared how higher education students self-assessed before and after receiving different types of feedback (rubric vs. instructor’s feedback vs. a combination of both), and we analysed how these two manipulations (feedback occasion and feedback type) affect the quality and quantity of the strategies and criteria students use while self-assessing their work.
Regarding feedback, the conditions included two types. While instructor’s feedback is very common, rubrics have gained a prominent role as feedback tools in recent years due to their positive effects for students, teachers, and programs (Dawson, 2017). Although rubrics seem to be more effective when not combined with exemplars (Lipnevich et al., 2014), we do not know how combining instructor’s feedback with a rubric affects SA.
Contrasting how students self-assess before and after receiving feedback could inform us about when feedback should be provided in relation to SA.
Therefore, this communication explores two research questions:
- RQ1: What are the self-assessment strategies and criteria that higher education students implement before and after feedback?
H1: Self-assessment strategies and criteria will decrease when feedback is provided.
- RQ2: What are the effects of feedback type and feedback occasion on self-assessment behaviors (i.e., the number and type of strategies and criteria)?
H2: Rubric feedback will promote better self-assessment practices than the other feedback types.
Method
Participants
126 undergraduate psychology students (88.1% female), distributed across the first, second, and third year of study (34.9%, 31.7%, and 33.3%, respectively), participated in one of three feedback conditions: rubric (n = 43), instructor’s feedback (n = 43), and rubric + instructor’s feedback combined (n = 40).
Instruments
Think-aloud protocols: participants were asked to state out loud the thoughts, emotions, and other processes they experienced during the SA task, and were prompted to think aloud if they remained silent for more than 30 seconds. The protocols were coded using categories from a previous study by the team, covering the strategies and criteria that students deployed during the SA task.
Procedure
The procedure consisted of two parts. First, participants attended a seminar on academic writing, in which they wrote an essay that was assessed by the instructor (pre-experimental phase). Later, participants came individually to the laboratory, where they self-assessed their original essay while thinking aloud. They were then asked to self-assess again after receiving the feedback corresponding to their condition (rubric vs. instructor vs. combination). During this process they completed questionnaires three times (data not included in this study).
Intervention prompts
Rubric: an analytic rubric created for this study, with three levels of quality (low, average, and high) covering the contents of the workshop: a) writing process, b) structure and coherence of the text, and c) sentences, vocabulary, and punctuation. Instructor’s feedback: the instructor commented on each essay using the same categories as the rubric (except the “writing process” criterion, which the instructor could not observe) and added a grade ranging from 0 to 10 points.
Data analysis
The think-aloud protocols were coded by two judges. After three rounds of coding different videos and discussing discrepancies, they reached Krippendorff’s α = 0.87. The categorical variables were described with multiple-dichotomy frequency tables, as each participant could display more than one behavior. To study the effects of the two factors (feedback occasion and feedback condition) on the frequencies of self-assessment strategies and criteria, we conducted ANOVAs and chi-square tests.
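To make the analysis pipeline concrete, here is a minimal sketch in Python of the reliability and inferential steps described above. It is not the authors’ code: the data layout, column names, and toy values are illustrative assumptions.

    # Sketch of the analyses described above; data and column names are hypothetical.
    import pandas as pd
    import krippendorff                         # third-party: pip install krippendorff
    from scipy.stats import chi2_contingency
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Inter-coder reliability: one row per judge, one column per coded segment.
    judge_codes = [[1, 2, 2, 3, 1, 3],
                   [1, 2, 2, 3, 2, 3]]
    alpha = krippendorff.alpha(reliability_data=judge_codes,
                               level_of_measurement="nominal")
    print(f"Krippendorff's alpha = {alpha:.2f}")

    # Toy long-format data: number of criteria each participant used after feedback.
    df = pd.DataFrame({
        "condition":  ["rubric"] * 4 + ["instructor"] * 4 + ["combined"] * 4,
        "n_criteria": [5, 4, 4, 5, 3, 3, 2, 4, 5, 4, 5, 4],
    })

    # One-way ANOVA: effect of feedback condition on the number of criteria used.
    model = ols("n_criteria ~ C(condition)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Chi-square test on a condition x criterion-use contingency table.
    df["uses_writing_process"] = df["n_criteria"] >= 4    # toy dichotomous behavior
    table = pd.crosstab(df["condition"], df["uses_writing_process"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

In the actual study, the reliability data would contain both judges’ codes for every protocol segment, and the tests would be run on the full 126-participant dataset.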
Expected Outcomes
Regarding RQ1, the most common SA strategies used by participants were of low complexity, although some advanced strategies also appeared (e.g., thinking of different responses). The strategies used before and after feedback were similar, with the logical addition, after feedback, of strategies focused on its content. The criteria used to assess the task were also similar, but after feedback the use of three criteria increased in conditions 1 (rubric) and 3 (rubric + instructor’s feedback) according to binomial χ² comparisons: writing process (p < 0.001 in both conditions), paragraph structure (p < 0.05 in the rubric condition), and punctuation marks (p > 0.05 in both conditions, a non-significant increase). In the instructor’s feedback condition there was a non-significant decrease in the writing process and the analysis of sentences.
Regarding RQ2, after feedback there were no significant differences in the number of strategies used across conditions. However, the number of criteria differed significantly (F(2, 121) = 25.30, p < 0.001, η² = 0.295), with post hoc comparisons showing that the rubric (M = 4.48, SD = 0.165) and combined (M = 4.50, SD = 0.171) conditions outperformed the instructor condition (M = 3.02, SD = 0.169), both at p < 0.001. Also, the pre-post increase in the number of strategies deployed was greater (post hoc p = 0.002) in the rubric condition (M = 0.938, SE = 0.247) than in the instructor’s feedback condition (M = −0.291, SE = 0.253).
The study has several implications. First, rubric feedback seems to be a better scaffold for students’ self-assessment, increasing the number of criteria used and stimulating student reflection (Brookhart, 2018). Second, instructor’s feedback showed worse results in the deployment of SA strategies and criteria, perhaps because it places students in a more passive position. Finally, feedback presented after students have self-assessed could be preferable, since it allows students to exhibit constructive strategies and criteria first.
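As a consistency check (not part of the original report), the effect size above can be recovered from the F statistic and its degrees of freedom using the standard identity for a one-way between-subjects ANOVA:

\eta^2 = \frac{df_1 \cdot F}{df_1 \cdot F + df_2} = \frac{2 \times 25.30}{2 \times 25.30 + 121} = \frac{50.6}{171.6} \approx 0.295,

which matches the reported η² = 0.295.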
References
Brookhart, S. M. (2018). Appropriate criteria: Key to effective rubrics. Frontiers in Education, 3, 22. https://doi.org/10.3389/feduc.2018.00022
Brown, G. T. L., & Harris, L. R. (2013). Student self-assessment. In J. H. McMillan (Ed.), The SAGE handbook of research on classroom assessment (pp. 367–393). Sage.
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281. https://doi.org/10.3102/00346543065003245
Dawson, P. (2017). Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347–360. https://doi.org/10.1080/02602938.2015.1111294
Lipnevich, A. A., McCallen, L. N., Miles, K. P., & Smith, J. K. (2014). Mind the gap! Students’ use of exemplars and detailed rubrics as formative assessment. Instructional Science, 42(4), 539–559. https://doi.org/10.1007/s11251-013-9299-9
Panadero, E., Brown, G. T. L., & Strijbos, J. W. (2016). The future of student self-assessment: A review of known unknowns and potential directions. Educational Psychology Review, 28(4), 803–830. https://doi.org/10.1007/s10648-015-9350-2
Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10, 3087. https://doi.org/10.3389/fpsyg.2019.03087
Yan, Z. (2019). Self-assessment in the process of self-regulated learning and its relationship with academic achievement. Assessment & Evaluation in Higher Education, 45(2), 224–238. https://doi.org/10.1080/02602938.2019.1629390