Session Information
99 ERC SES 05 K, Assessment, Evaluation, Testing and Measurement
Paper Session
Contribution
Student feedback literacy refers to “the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” (Carless & Boud, 2018, p. 1316). Research on this topic has developed primarily through qualitative approaches, in English, and in countries such as the United Kingdom, Australia, and China (Gozali et al., 2023; Nieminen & Carless, 2022).
Quantitative research has gradually gained interest in the field. Since 2022, various scales have been proposed to measure student feedback literacy (e.g., Dawson et al., 2023; Özdemir-Yılmazer & Kabadayı, 2024; Woitt et al., 2023; Yu et al., 2022; Zhang et al., 2023; Zhan, 2022). Existing scales explore feedback literacy, but they focus on different dimensions, include different numbers of items, and present different factorial structures. For example, the Student Feedback Literacy Scale by Zhan (2022) presents a structure with six dimensions: Eliciting, Processing, Enacting, Appreciation, Readiness, and Commitment, while the L2-Student Writing Feedback Literacy Scale by Zhang et al. (2023) includes only two, Using Feedback and Evaluating Feedback.
Given that most instruments measuring student feedback literacy are intended for in-person instruction and English-speaking programs, the Online Feedback Literacy of In-Service Teachers Self-perception Scale was developed specifically for the Mexican context of in-service teachers enrolled in an online continuing education program offered by the National Pedagogical University (UPN). The instrument was designed based on the literature and existing scales, and it followed the rigorous, systematic process recommended by the American Educational Research Association and colleagues (AERA et al., 2014) and by Lane et al. (2016).
In this study, online feedback literacy of in-service teachers is understood as the knowledge, skills, and attitudes necessary to make sense of information and use it to improve the task or learning strategies, taking into account the participation roles of learners and trainers in the asynchronous online context. The hypothetical model of the Online Feedback Literacy of In-Service Teachers Self-perception Scale includes 48 items, four factors (i.e., Knowledge, Skills, Attitudes, and Intention of Transferring Learning), and 17 sub-factors. The factors are conceptualized as follows: Knowledge, the conceptual and technical knowledge that allows the teacher in training to understand and participate in the feedback process; Skills, the capacities developed to use feedback productively; Attitudes, the dispositions and values to understand and use feedback productively; and Intention of Transferring Learning, the personal disposition and purpose of the teacher in training to apply their training experience to their classroom work.
The Standards for Educational and Psychological Testing (AERA et al., 2014) indicate that “validity is a unitary concept. It is the degree to which all accumulated evidence supports the intended interpretation of test scores for the proposed use” (p. 14) and that it is necessary to refer to sources of validity evidence (i.e., evidence based on test content and evidence based on internal structure) instead of types of validity (i.e., content validity or construct validity). Within this framework, this study provides validity evidence for the proposed interpretations and uses of the scores of the Online Feedback Literacy of In-Service Teachers Self-perception Scale.
Method
Three sources of validity evidence were collected: content, response process, and internal structure. Evidence Based on Scale Content (AERA et al., 2014) was obtained through a validation process with independent judges, whose objective was to assess the dimensions, indicators, and items of the scale by rating (from 1 to 4 points) their congruence, relevance, clarity, and sufficiency, and by providing additional written suggestions. One judge is an expert in research on feedback and in the development of measurement instruments in the educational field; another is an expert in the design and delivery of online training programs; and the third is an expert in the online training of in-service teachers. The average ratings were used to identify problematic items, and the written comments made it possible to incorporate the judges’ suggestions systematically.

Validity Evidence Based on the Response Process (AERA et al., 2014) was collected through cognitive interviews (Caicedo Cavagnis & Zalazar-Jaime, 2018, p. 363). The aim was to identify patterns in the difficulties participants had in understanding the content or the wording of the items that make up the scale, as well as to gather information about their response process. Three UPN students participated. The data from the three interviews were processed with the content analysis technique (Krippendorff, 2004), using inductive coding (Saldaña, 2009).

The third phase aimed to obtain validity Evidence Based on the Internal Structure of the scale (AERA et al., 2014). The revised instrument was first piloted, yielding responses from 277 participants. The Statistical Package for the Social Sciences (SPSS), version 29, was used to carry out an Exploratory Factor Analysis (EFA) in five steps: obtaining evidence of the adequacy of the data, factor extraction (Principal Component Analysis), definition of the number of factors to retain, oblique rotation (Promax), and evaluation of the fit of the factor model, as suggested by Yong and Pearce (2013).
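To make the five analysis steps concrete, the following is a minimal sketch of an equivalent workflow in Python using the pandas and factor_analyzer packages; the study itself used SPSS 29, so this is an illustrative analogue rather than the authors' procedure. The file name pilot_responses.csv, the column layout, and the eigenvalue-greater-than-one retention rule are assumptions introduced only for the example.

```python
# Illustrative EFA sketch (assumed data file and retention rule; not the authors' SPSS workflow).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical pilot data: one row per participant, one column per scale item.
data = pd.read_csv("pilot_responses.csv")

# Step 1: adequacy of the data (Bartlett's test of sphericity and KMO index).
chi_square, p_value = calculate_bartlett_sphericity(data)
kmo_per_item, kmo_total = calculate_kmo(data)
print(f"Bartlett chi2 = {chi_square:.2f}, p = {p_value:.4f}, KMO = {kmo_total:.3f}")

# Steps 2-3: principal extraction and inspection of eigenvalues to decide
# how many factors to retain (here, eigenvalues greater than 1 as an example rule).
fa = FactorAnalyzer(rotation=None, method="principal")
fa.fit(data)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Step 4: re-fit with the retained factors and an oblique (Promax) rotation.
fa = FactorAnalyzer(n_factors=n_factors, rotation="promax", method="principal")
fa.fit(data)

# Step 5: evaluate the solution through the loadings and the variance explained.
loadings = pd.DataFrame(fa.loadings_, index=data.columns)
variance, proportional, cumulative = fa.get_factor_variance()
print(loadings.round(2))
print(f"Cumulative variance explained: {cumulative[-1]:.3f}")
```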
Expected Outcomes
The evaluation by independent judges allowed for the improvement of the definitions, content, and structure of the scale. Most rating averages were close to M = 4, and the lowest means were M = 3 (for congruence, two indicators and 13 items; for relevance, three indicators and 11 items; for clarity, one dimension, one indicator, and five items; and for sufficiency, one indicator). Based on the scores and the judges’ suggestions, nine elements were eliminated, nine were relocated, and 18 were adjusted.

Overall, the scale was easy to self-administer. In the cognitive interviews, participants indicated issues or suggestions in only 26 instances, related to technical (two cases), format (four cases), and content (21 cases) aspects of the questionnaire. The instrument was modified to address these suggestions and to improve its comprehensibility and flow.

The EFA showed that the hypothesized dimensions did not coincide with the factorial structure derived from the statistical analysis. The factorial model included 21 items, organized into four factors that explained 69.195% of the total variance. The factors were: Skills (seven items), Attitudes (five items), Utility (five items), and Knowledge (four items). It should be noted that the research background has not established a univocal factorial structure or a similar set of dimensions. In this sense, it is necessary to deepen the empirical study of students' feedback literacy in order to support and complement the conceptual approaches. Nevertheless, this study presents three sources of validity evidence that support the proposed interpretations and uses of the scale scores. This model advances empirical research on feedback literacy in Latin America, in the training of basic education teachers, and in the online asynchronous environment.
References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for Educational and Psychological Testing.

Caicedo Cavagnis, E., & Zalazar-Jaime, M. F. (2018). Entrevistas cognitivas: Revisión, directrices de uso y aplicación en investigaciones psicológicas [Cognitive interviews: Review, usage guidelines and application in psychological research]. Avaliação Psicológica, 17(3), 362-370. http://dx.doi.org/10.15689/ap.2018.1703.14883.09

Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. https://doi.org/10.1080/02602938.2018.1463354

Dawson, P., Yan, Z., Lipnevich, A., Tai, J., Boud, D., & Mahoney, P. (2023). Measuring what learners do in feedback: The feedback literacy behaviour scale. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2023.2240983

Gozali, I., Syahid, A., & Suryati, N. (2023). Ten years after Sutton (2012): Quo vadis feedback literacy? (A bibliometric study). Register Journal, 16(1), 139-167. https://doi.org/10.18326/register.v16i1.139-167

Krippendorff, K. (2004). Content Analysis: An Introduction to Its Methodology. SAGE Publications.

Lane, S., Raymond, M. R., Haladyna, T. M., & Downing, S. M. (2016). Test development process. In S. Lane, M. R. Raymond, & T. M. Haladyna (Eds.), Handbook of Test Development (2nd ed., pp. 3-18). Routledge.

Nieminen, J. H., & Carless, D. (2022). Feedback literacy: A critical review of an emerging concept. Higher Education. https://doi.org/10.1007/s10734-022-00895-9

Özdemir-Yılmazer, M., & Kabadayı, B. (2024). Adaptation of student feedback literacy scale into Turkish culture: A study of reliability and validity. System, 122, Article 103294. https://doi.org/10.1016/j.system.2024.103294

Saldaña, J. (2009). The Coding Manual for Qualitative Researchers. SAGE Publications.

Woitt, S., Weidlich, J., Jivet, I., Orhan Göksün, D., Drachsler, H., & Kalz, M. (2023). Students’ feedback literacy in higher education: An initial scale validation study. Teaching in Higher Education. https://doi.org/10.1080/13562517.2023.2263838

Yıldız, H., Bozpolat, E., & Hazar, E. (2022). Feedback Literacy Scale: A study of validation and reliability. International Journal of Eurasian Education and Culture, 7(19), 2214-2249. http://dx.doi.org/10.35826/ijoecc.624

Yong, A. G., & Pearce, S. (2013). A beginner’s guide to factor analysis: Focusing on exploratory factor analysis. Tutorials in Quantitative Methods for Psychology, 9(2), 79-94. https://doi.org/10.20982/tqmp.09.2.p079

Yu, S., Zhang, E. D., & Liu, C. (2022). Assessing L2 student writing feedback literacy: A scale development and validation study. Assessing Writing, 53, 1-13. https://doi.org/10.1016/j.asw.2022.100643

Zhan, Y. (2022). Developing and validating a student feedback literacy scale. Assessment & Evaluation in Higher Education, 47(7), 1087-1100. https://doi.org/10.1080/02602938.2021.2001430

Zhang, E., Zhou, N., & Yu, S. (2023). Assessing L2 secondary student writing feedback literacy and its predictive effect on their L2 writing performance. Language Teaching Research. https://doi.org/10.1177/13621688231217665