Session Information
99 ERC SES 05 K, Assessment, Evaluation, Testing and Measurement
Paper Session
Contribution
The European Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators (2022) aim to raise awareness about the potential benefits and risks of AI in education. Indeed, AI systems have great potential to significantly improve assessment, contributing to better learning outcomes and enabling more efficient school management. The European Framework for the Digital Competence of Educators (DigCompEdu, 2017) is also a key reference guiding teachers in using digital technologies to improve and innovate education. This framework comprises 22 competences divided into six areas, one of which specifically concerns the application of AI to improve classroom and formative assessment, provide feedback, and monitor students' progress. More specifically, the recent widespread use of AI, especially in STEM education, is deeply affecting formative assessment, understood as a practice in which information gathered about student learning is used, by teachers and students, to adapt teaching and foster learning (Black & Wiliam, 2018). Against the backdrop of the STEM and formative assessment literature, research has pointed out that feedback plays an important role because it allows students to "monitor the strengths and weaknesses of their performance, so that aspects associated with success or high quality can be recognized and reinforced, and unsatisfactory aspects modified or improved" (Sadler, 1989, p. 121).
Building on these assumptions, this paper presents a systematic literature review aimed at investigating the relationship between formative assessment and AI in STEM education.
These research questions framed the study:
- Is AI used for formative assessment in STEM subjects?
- How can the use of AI improve the assessment process?
The systematic review is based on an analysis of 10 scientific articles.
In response to the first research question, almost all the studies examined confirm that AI can be effectively used in formative assessment in STEM disciplines, demonstrating its potential in monitoring student progress and adapting to individual needs. However, as also highlighted by Li and colleagues (2023), it is essential that teachers understand and deal with the limitations of such technologies. Although AI systems can evaluate students' progress on predefined knowledge models, they are not able to assess aspects such as collaboration, social skills or creativity.
Regarding the second research question, the analysis showed that AI can improve the assessment process in several ways, including automated scoring, predictive analysis of student results, and the provision of personalized feedback.
Identifying the strengths and challenges of integrating AI in STEM assessment, this study critically reflects on the future implications of such technologies in education, underlining the need for ethical and conscious use.
Method
The research was conducted by querying several databases, including EBSCO, Scopus, and Web of Science. The search string was: Formative assessment AND STEM education AND (Artificial Intelligence OR AI). The following inclusion criteria were applied:
- the study was published within the last five years;
- the study was a peer-reviewed article written in English;
- the study was conducted in middle school or secondary school.
Articles' titles, keywords, and abstracts were screened using the search terms. EBSCO and Web of Science returned five articles; Scopus returned only one. Abstracts were examined, and articles that did not meet the inclusion criteria were excluded, leaving only three articles. Given this small number, the references of these three articles were analyzed, and 12 further candidate articles were identified; of these, only seven met all the inclusion criteria and were read in full. The final set thus included 10 articles.
All the selected studies were conducted in the USA with K-12 students. More specifically, five studies targeted K-12 education as a whole, three were conducted in middle school, and two in both contexts. It is worth noting that six studies were published in the same journal (Journal of Science Education and Technology). Among the selected studies, seven considered STEM disciplines in general; one focused on physics and chemistry, one on mathematics, and one on science.
In relation to the first research question, almost all the studies examined confirm that AI can be used in formative assessment in STEM disciplines. Some studies (Lee et al., 2021; Li et al., 2022; Mehrabi et al., 2024) point out that integrating AI into the assessment of STEM subjects yields more in-depth information, enabling teachers to refine teaching strategies and support students' learning. Li et al. (2023), instead, focus on the risk of neglecting the human component in the teaching-learning process and stress how AI-based formative assessment can limit creative and divergent learning. Regarding the second research question, the results confirm that AI can improve the assessment process, provided that socio-relational aspects are carefully considered by teachers. More specifically, integrating AI into the assessment process can provide a detailed analysis of student responses, personalized feedback, and continuous performance monitoring (Lee et al., 2021; Li et al., 2022; Maestrales et al., 2021; Mehrabi et al., 2024).
Expected Outcomes
This systematic review on the integration of AI in formative assessment for STEM disciplines offers significant insights for the future of education. Aligned with the European DigCompEdu Framework (2017), the present results confirm the importance of using AI to improve teaching and assessment. The results of the 10 scientific articles included in this systematic review demonstrate that AI can play a crucial role in educational assessment, helping to monitor students' progress and providing them with personalized feedback. However, the importance of teachers' awareness of the inherent limitations of these technologies must be acknowledged. Despite the potential of AI, there are crucial aspects of learning, such as creativity and social skills, that are currently not adequately assessed by AI-based assessment systems. In this perspective, further research on how to integrate AI into assessment is required in order to effectively enhance the educational process without sacrificing the human components of teaching. The small sample of studies included, resulting from the approach and search criteria adopted, may not represent the whole literature on the topic, and this is a limitation of the study. At the same time, however, this study calls for reflection on how to integrate AI into educational assessment for STEM subjects. Further research is needed to understand how, and to what extent, AI can affect the current school context. Finally, it is important to note that all the studies included in this systematic review were conducted in the USA. In light of the European guidelines mentioned above, this is an important insight for implementing such research in the European context.
References
Black, P., & Wiliam, D. (2018). Classroom assessment and pedagogy. Assessment in Education: Principles, Policy & Practice, 25, 551–575.
European Commission, Joint Research Centre, Redecker, C., & Punie, Y. (2017). European framework for the digital competence of educators: DigCompEdu. Publications Office of the European Union. https://data.europa.eu/doi/10.2760/159770
European Commission, Directorate-General for Education, Youth, Sport and Culture (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://op.europa.eu/en/publication-detail/-/publication/d81a0d54-5348-11ed-92ed-01aa75ed71a1/language-en
Lee, H. S., Gweon, G. H., Lord, T., Paessel, N., Pallant, A., & Pryputniewicz, S. (2021). Machine learning-enabled automated feedback: supporting students' revision of scientific arguments based on data drawn from simulation. Journal of Science Education and Technology. https://doi.org/10.1007/s10956-020-09889-7
Li, C., Xing, W., & Leite, W. (2022). Using fair AI to predict students' math learning outcomes in an online platform. Interactive Learning Environments, 1–20. https://doi.org/10.1080/10494820.2022.2115076
Li, T., Reigh, E., He, P., & Adah Miller, E. (2023). Can we and should we use artificial intelligence for formative assessment in science? Journal of Research in Science Teaching, 60(6), 1385–1389. https://doi.org/10.1002/tea.21867
Maestrales, S. Y., Zhai, X., Touitou, I., Schneider, B., & Krajcik, J. (2021). Using machine learning to evaluate multidimensional assessments of chemistry and physics. Journal of Science Education and Technology. https://doi.org/10.1007/s10956-020-09895-9
Mehrabi, A., Morphew, J. W., & Quezada, B. S. (2024). Enhancing performance factor analysis through skill profile and item similarity integration via an attention mechanism of artificial intelligence. Frontiers in Education, 9:1454319. https://doi.org/10.3389/feduc.2024.1454319
Sadler, R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.
Zhai, X. (2019). Applying machine learning in science assessment: Opportunity and challenges. A call for a special issue in Journal of Science Education and Technology. https://doi.org/10.13140/RG.2.2.10914.07365
Zhai, X., Krajcik, J., & Pellegrino, J. W. (2021). On the validity of machine learning-based next generation science assessments: A validity inferential network. Journal of Science Education and Technology, 30, 298–312. https://doi.org/10.1007/s10956-020-09879-
Zhai, X., Shi, L., & Nehm, R. (2020). A meta-analysis of machine learning-based science assessments: factors impacting machine-human score agreements. Journal of Science Education and Technology. https://doi.org/10.1007/s10956-020-09875-z
Zhai, X., Yin, Y., Pellegrino, J. W., Haudek, K. C., & Shi, L. (2020). Applying machine learning in science assessment: a systematic review. Studies in Science Education, 56(1), 111–151.