Session Information
16 SES 05.5 A, General Poster Session
General Poster Session
Contribution
As artificial intelligence (AI) becomes more integrated into education, its role in developing analytical reasoning and problem-solving skills is gaining attention. In secondary education, particularly in STEM subjects like physics and mathematics, AI tools such as ChatGPT and DeepSeek provide instant explanations and solutions. However, their use raises concerns about students' ability to critically evaluate AI-generated content, verify facts, and develop independent analytical thinking. While previous research has explored AI's role in adaptive learning and personalized feedback, fewer studies have examined how AI can be structured to improve students' critical reasoning and fact-checking abilities. This study investigates how AI can be effectively integrated into inquiry-based learning environments to develop students' analytical autonomy rather than passive reliance on AI-generated solutions.
The research focuses on how AI, within a problem-based learning (PBL) framework, can enhance students' ability to critically evaluate AI-generated responses in physics and mathematics. It examines at which stages AI contributes most to analytical skill development, what challenges arise when using AI in STEM subjects, and how these difficulties can be addressed. The study also compares AI-supported learning with traditional problem-solving approaches and explores its implications for European and international policies on AI literacy in schools.
The research is based on three key educational theories. Cognitive Load Theory suggests that AI should be used to reduce unnecessary cognitive load, such as routine calculations, while increasing meaningful cognitive engagement with problem-solving. Metacognitive Theory emphasizes the importance of students actively regulating their learning by verifying AI-generated answers and applying reasoning strategies. Problem-Based Learning (PBL) encourages students to interact with AI-generated solutions critically, improving them instead of accepting them without question. These theories provide a solid foundation for discussing how AI can support the development of analytical skills in STEM education.
A mixed-methods approach was used to collect and analyze data. The study included an experimental design comparing students who used AI-supported PBL with those who followed traditional instruction. Teachers played an essential role by guiding discussions where students examined AI-generated solutions, encouraging them to question, correct, and verify conceptual accuracy. Interviews and surveys with students and teachers provided qualitative insights into AI's role in cognitive development. Pre- and post-tests were used to assess students' ability to identify and correct AI-generated errors and to justify their corrections. AI-generated responses were also analyzed to identify common mistakes, including conceptual misunderstandings, computational errors, and misleading procedural steps.
The results showed that AI was most effective when used for verification rather than as a direct problem-solving tool. Students performed better when asked to validate AI-generated solutions, with 85% showing improved analytical reasoning in this context. However, challenges were identified, particularly in recognizing AI-generated errors. Initially, 72% of students struggled to spot inconsistencies in AI's responses. ChatGPT often produced logically inconsistent solutions, while DeepSeek sometimes misunderstood question parameters, especially in physics problems. Teacher-led discussions played a crucial role in improving outcomes. Explicit instruction on fact-checking helped students develop self-regulated learning strategies, and teachers guided them toward independent reasoning rather than passive acceptance of AI-generated solutions.
There were also subject-specific challenges. In physics, AI had difficulty with conceptual problem-solving, particularly in topics like kinematics and thermodynamics, where real-world assumptions are crucial. In mathematics, AI was useful for calculations but prone to algebraic simplification errors, which students needed to detect and correct. These findings highlight the importance of structured AI integration, ensuring that AI serves as a tool for critical engagement rather than a source of unquestioned answers.
By connecting research with educational practice, this study provides concrete strategies for teachers and researchers to use AI effectively while maintaining a strong focus on academic integrity, fact-checking, and metacognitive development.
Method
This study used a mixed-methods approach to explore how AI tools like ChatGPT and DeepSeek influence students’ ability to analyze and verify information in physics and mathematics. By combining quantitative and qualitative methods, the research aimed to measure both improvements in analytical reasoning and students’ perceptions of AI-assisted learning. The study followed an experimental design, comparing two groups of students: one using AI tools within a problem-based learning (PBL) framework, and a control group following traditional instruction without AI assistance. The AI-supported group solved physics and mathematics problems with the help of ChatGPT and DeepSeek but was then required to identify and correct errors in AI-generated solutions. The control group solved similar problems using textbooks and teacher guidance.
To measure improvements in analytical reasoning, students completed pre- and post-tests. These tests included both multiple-choice questions and open-ended problem-solving tasks, where students had to explain their reasoning when verifying AI-generated answers. Performance differences between pre- and post-tests were analyzed using paired t-tests to check whether the improvements in the AI-supported group were statistically significant.
To understand student and teacher experiences, the study included semi-structured interviews and surveys with 30 students and 10 teachers. The interviews focused on whether students found AI useful for learning, how they approached fact-checking AI-generated responses, and what challenges they faced. Teachers provided insights into how AI affected student engagement, whether students relied too much on AI, and how AI-supported learning compared to traditional methods.
AI-generated responses were analyzed and categorized into three common types of errors:
• Conceptual misunderstandings, where AI applied incorrect physics or math principles.
• Computational errors, including miscalculations or simplification errors.
• Procedural mistakes, where AI solutions were logically inconsistent or incomplete.
Thematic analysis (Braun & Clarke, 2006) was used to analyze interview responses and identify patterns in how students and teachers engaged with AI. The study found that AI was most effective at the verification stage, with 85% of students improving their analytical reasoning when required to evaluate AI-generated steps rather than using AI passively. These findings will help develop AI-integrated teaching frameworks that balance AI use with critical thinking and independent problem-solving.
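The paired pre-/post-test comparison described above could be run along the following lines. This is a minimal illustrative sketch, not the study's actual analysis: the score lists are hypothetical placeholders, and it assumes each student's pre- and post-test scores are matched pairs analyzed with scipy.stats.ttest_rel.

    # Minimal sketch of a paired t-test on pre-/post-test scores (hypothetical data).
    from scipy import stats

    # Matched scores for the same students, e.g. number of AI-generated errors
    # correctly identified and corrected before and after the intervention.
    pre_scores = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]
    post_scores = [16, 18, 13, 17, 15, 16, 14, 19, 15, 17]

    # Paired t-test: checks whether the mean within-student change differs from zero.
    t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Improvement is statistically significant at the 5% level.")
    else:
        print("No statistically significant improvement detected.")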
Expected Outcomes
This study showed that AI improves students’ analytical skills in physics and mathematics, but only when used correctly. The most important finding was that AI worked best for verification rather than direct problem-solving. When students checked and corrected AI-generated solutions instead of accepting them, 85% demonstrated better analytical reasoning. They became more aware of common AI errors and learned to question results rather than rely on AI blindly.
However, many students initially struggled to recognize AI mistakes. About 72% had difficulty identifying errors, especially early on. ChatGPT sometimes provided logically inconsistent answers, while DeepSeek misunderstood question parameters in physics. This highlights the need for training students in fact-checking strategies, as AI is not always reliable.
Another key finding was the important role of teachers. When teachers guided students in fact-checking AI-generated responses, engagement and reasoning skills improved. Without guidance, some students over-relied on AI and failed to question answers. This shows the importance of structured AI integration to ensure AI supports, rather than replaces, critical thinking.
The study also found subject-specific challenges. In physics, AI struggled with conceptual understanding, especially in kinematics and thermodynamics, where real-world conditions matter. In mathematics, AI handled calculations well but often made algebraic simplification errors, requiring students to correct them.
These findings contribute to discussions on AI literacy in European and global education, emphasizing fact-checking and analytical reasoning in digital learning. The study provides practical recommendations for AI-supported teaching, ensuring students actively verify AI responses rather than accept them passively. Future research should explore how AI can be adapted across subjects and education levels to improve learning outcomes.
References
1. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101.
2. Chen, X., Xie, H., & Hwang, G. J. (2020). A multi-perspective study on AI in education: Effects on student learning and implications for future research. Educational Technology & Society, 23(4), 1-14.
3. Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911.
4. Hmelo-Silver, C. E. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16(3), 235-266.
5. Holmes, W., Bialik, M., & Fadel, C. (2022). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.
6. Luckin, R. (2018). Machine Learning and Human Intelligence: The Future of Education for the 21st Century. UCL Institute of Education Press.
7. Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2021). Artificial Intelligence and Education: A Critical View. OECD Education Working Papers, No. 240.
8. Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Social Science Research Council.
9. Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.
10. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1-27.