Session Information
32 SES 03 A, The Trend towards Digitalization - Organizational Education Perspectives
Paper Session
Contribution
The release of the language-based AI application ChatGPT in November 2022 attracted international attention and led to a nuanced scientific debate on the opportunities, challenges and implications of generative AI for research, practice and policy (Dwivedi et al. 2023). Large language models were likewise found to entail both benefits and risks for teaching and learning when used in differentiated educational contexts (Kasneci et al. 2023). In higher education, the changes brought about by these technological developments have led to considerable uncertainty among both teachers and students (Gimpel et al. 2023). In addition to legal issues, particularly those concerning examinations (Fleck 2023), the objectivity, reliability and validity of AI-generated information are also viewed critically (Rademacher 2023). Like the general debate on the possible uses of AI technologies, the debate on AI at universities is largely characterised by weighing up the opportunities and risks of such technologies in areas of application such as governance, administration, research and teaching. These issues concern the support of decision-making processes as well as the promotion of innovation and the personalisation of learning processes (Wannemacher/Bodmann 2021).
Particularly in social science programmes, the question arises as to what importance will be attached to reflexive, ethical, social and pedagogical dimensions in AI-supported teaching in the future (Zawacki-Richter et al., 2020, p. 513). Despite all these uncertainties, there is no doubt that the use of AI-based applications in digitised higher education will intensify. AI technologies have already reached a certain level of diffusion in research, study and teaching at universities (Wannemacher/Bodmann 2021). In higher education in particular, far-reaching automation of didactic interaction patterns can be expected in the near future, with classic teaching formats successively expanded or supplemented by chatbots in sophisticated learning scenarios (Schmohl/Löffl/Falkemeier, 2019).
In view of the growing number of students worldwide, concepts that use AI applications to provide as many students as possible with fast, individualised advice, without a significant loss of quality compared with advice provided by humans, are also gaining in importance. According to a study by the Georgia Institute of Technology, chatbots can be used successfully to provide such advice: learners in selected online courses were unable to distinguish the chatbot from a "real" teacher (Kukulska-Hulme/Bossu/Coughlan et al., 2021, p. 23 f.).
At the same time, various studies in this field show that many university teachers and students harbour a fundamental scepticism towards highly developed AI technology, which makes its use more difficult (Ferguson/Coughlan/Egelandsdal et al., 2019, p. 12 f.). Only a few studies have examined the purely distance learning sector.
The starting point for the empirical study in this paper is that the AI assistant "Syntea" has been used at IU International University since December 2023 to answer students' questions in distance learning. These questions primarily relate to the teaching materials provided, so that the AI has a sound basis for answering them and uncertainty regarding the accuracy of the answers is already reduced. To increase confidence further, the teachers of the individual modules verify the answers provided by Syntea and can change them if necessary. The system is designed so that the AI treats this as a learning process: subsequent questions on the same subject area are then answered accordingly and no further verification is necessary. Students thus do communicate with an AI, but primarily to generate knowledge rather than as part of consultation processes.
Method
In discussions among teachers, it becomes clear that the scope of questions, the content and the verification process vary. Particularly in modules that are not exclusively about knowledge transfer but also about personal and professional development (e.g. practical reflections), there is uncertainty about the extent to which AI can actually provide meaningful advice, above all in the context of students' actual concerns, since in personal consultations it is often a process to clarify the problem and initial situation comprehensively before targeted solutions can be developed. The experiences gathered so far should indicate whether an AI can do this and how it can be implemented. For both students and teachers, the focus will also be on how interaction with the chatbot has changed compared to interaction with real people and the extent to which trust has been built.
The first step in the empirical design is to determine the sample. As far as possible, all degree programmes in the Department of Social Sciences are to be included; for this purpose, modules are identified that involve different types of examination and take place in different semesters (Gläser/Laudel, 2009). The lecturers of these modules will be contacted with a request to participate in the study and to pass information on to their students. The online survey will be divided into two sub-surveys in order to address the target groups of lecturers and students separately. The areas surveyed include, among others:
• Organisational questions about the course, module, semester and examination performance
• Questions about the general use of AI in an academic context
• Questions about the use of AI in the context of the module
• Questions about satisfaction with the AI answers
• Questions about uncertainty and confidence in working with AI
• Questions about criticism and opportunities for improvement
The questions are both closed, with scale-based answer options, and open, enabling both quantitative and qualitative evaluation. The closed answers are analysed statistically, while the open answers are subjected to qualitative content analysis. Combining the methods makes it possible to gain a comprehensive insight into the status quo and into aspects such as uncertainty and trust (Döring/Bortz, 2016; Mayring/Fenzl 2014).
Expected Outcomes
With 130,000 students, IU International University of Applied Sciences is the largest university in Germany and one of the largest and fastest-growing universities in Europe. The distance learning sector in particular is growing rapidly across Europe. The AI-based teaching and learning assistant 'Syntea' was developed to enable personalised interaction with students and improve their learning outcomes, and has now been implemented in almost all social science distance learning modules. This paper presents the results of a mixed-methods study (Brüsemeister, 2008; Kelle, 2014) in which both learners and teachers of the modules supported by Syntea were surveyed. Users are asked about their experiences with Syntea in an online questionnaire. For this purpose, surveys will be conducted in modules of different social science courses over a period of several weeks and then analysed quantitatively and qualitatively. The main focus is on the question of how the learning and teaching experience has changed as a result of the permanent support provided by the AI-based chatbot: which uncertainties have emerged and which may have been reduced? In addition to gaining insights into the general current situation and obtaining feedback from both teachers and students, the aim is to compare the results of the individual modules. In this way, it can be determined whether there are differences between the degree programmes or in the examination results.
References
Brüsemeister, T. (2008). Qualitative Forschung. Wiesbaden: VS Verlag.
Döring, N./Bortz, J. (2016). Forschungsmethoden und Evaluation in den Sozial- und Humanwissenschaften. Berlin, Heidelberg: Springer.
Dwivedi, Y. K. et al. (2023). Opinion Paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
Fleck, T. (2023). Prüfungsrechtliche Fragen zu ChatGPT. Hg. v. Stabsstelle IT-Recht der bayerischen staatlichen Universitäten und Hochschulen. https://www.rz.uni-wuerzburg.de/fileadmin/42010000/2023/ChatGPT_und_Pruefungsrecht.pdf
Ferguson, R. et al. (2019). Innovating Pedagogy 2019: Open University Innovation Report 7. Milton Keynes: The Open University.
Gimpel, H. et al. (2023). Unlocking the power of generative AI models and systems such as GPT-4 and ChatGPT for higher education: A guide for students and lecturers. Hohenheim Discussion Papers in Business, Economics and Social Sciences, No. 02. https://nbn-resolving.de/urn:nbn:de:bsz:100-opus-21463
Gläser, J./Laudel, G. (2009). Experteninterviews und qualitative Inhaltsanalyse als Instrumente rekonstruierender Untersuchungen. Wiesbaden: VS Verlag.
Kasneci, E. et al. (2023). ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. https://osf.io/preprints/edarxiv/5er8f
Kelle, U. (2014). Mixed Methods. In: Baur, N./Blasius, J. (Hrsg.): Handbuch Methoden der empirischen Sozialforschung (S. 153-166). Wiesbaden: Springer VS.
Kukulska-Hulme, A. et al. (2021). Innovating Pedagogy 2021: Open University Innovation Report 9. Milton Keynes: The Open University.
Mayring, P./Fenzl, T. (2014). Qualitative Inhaltsanalyse. In: Baur, N./Blasius, J. (Hrsg.): Handbuch Methoden der empirischen Sozialforschung (S. 543-556). Wiesbaden: Springer VS.
Rademacher, M. (2023). Warum ChatGPT nicht das Ende des akademischen Schreibens bedeutet. https://digiethics.org/2023/01/03/warum-chatgpt-nicht-das-ende-des-akademischen-schreibens-bedeutet/
Schmohl, T./Löffl, J./Falkemeier, G. (2019). Künstliche Intelligenz in der Hochschullehre. In: Schmohl, T./Schäffer, D. (Hrsg.): Lehrexperimente der Hochschulbildung. Didaktische Innovationen aus den Fachdisziplinen. 2., vollständig überarbeitete und erweiterte Auflage. Bielefeld: wbv, S. 117-122.
Wannemacher, K./Bodmann, L. (2021). Künstliche Intelligenz an den Hochschulen: Potenziale und Herausforderungen in Forschung, Studium und Lehre sowie Curriculumentwicklung. Arbeitspapier 59.
Zawacki-Richter, O./Marin, V./Bond, M./Gouverneur, F. (2020). Einsatzmöglichkeiten Künstlicher Intelligenz in der Hochschulbildung – Ausgewählte Ergebnisse eines Systematic Review. In: Fürst, R. A. (Hrsg.): Digitale Bildung und Künstliche Intelligenz in Deutschland. Nachhaltige Wettbewerbsfähigkeit und Zukunftsagenda. Wiesbaden: Springer, S. 501-517.