Session Information
22 SES 04 A, AI and Teaching in HE
Paper Session
Contribution
Artificial Intelligence (AI) has shown tremendous potential across many fields for several decades, leading to a rapid proliferation of its applications. However, its integration into higher education, particularly by educators, is often characterised as slow and cautious (Lee et al., 2024). This caution is rooted in multiple concerns surrounding the black-box nature of AI, trust issues, and a lack of institutional support and guidance (Bhaskar et al., 2024). In this study, 'educators' refers to faculty members, lecturers, and research staff involved in teaching activities.
The emergence of Generative AI (GenAI) technologies has initiated a transformative phase in higher education, eliciting mixed reactions that combine excitement about potential benefits with concerns over possible negative consequences (Bozkurt et al., 2024). While GenAI has the potential to enhance rather than replace educators' expertise, its adoption and integration into higher education are still in the early stages (Lee et al., 2024). Moreover, educators need to trust GenAI tools to use them effectively in their teaching practice, yet the complex challenges and concerns associated with this technology often undermine that trust.
While trust has been identified in the literature as an important determinant of technology adoption in educational settings, the specific trust factors influencing AI and GenAI adoption among educators remain understudied (Bhaskar et al., 2024). This gap is particularly problematic given GenAI's capacity to generate human-like content, which introduces ethical and practical considerations not addressed in earlier studies of trust in AI.
Previous systematic literature reviews and meta-analyses on trust in AI provide cross-disciplinary insights but predate the emergence of GenAI (Wu et al., 2011; Yang & Wibowo, 2022). Their findings reveal significant gaps in the study of trust as a dynamic, multidimensional concept involving emotional, psychological, and behavioural elements within specific domains and interaction contexts. While extant studies emphasise technology-specific factors such as performance, privacy, and explainability, this study focuses on individual and organisational perspectives in education.
The theoretical foundation for understanding AI/GenAI trust factors draws on technology acceptance research, trust theory, socio-technical systems theory, and educational psychology. This study's conceptual framework adapts three established AI trust frameworks: the foundational trust framework (Lukyanenko et al., 2022), the tri-dimensional framework (Li et al., 2024), and the five-dimensional framework (Yang & Wibowo, 2022). The proposed framework also builds on established technology adoption models, such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), to investigate their applicability to GenAI's unique features (Wu et al., 2011).
This research addresses a critical gap by examining how institutional support mechanisms influence educators' trust in GenAI adoption (Bhaskar et al., 2024; Wang et al., 2024). Through systematic analysis of trust factors and their interactions, this study aims to synthesise existing research and establish a foundation for future investigations in this emerging field.
To address these aims, this study seeks to answer a fundamental research question: What factors and institutional strategies in higher education influence educators' trust in adopting GenAI for educational purposes? This question is examined through two specific research sub-questions:
RQ1: What are the trust factors influencing educators' GenAI adoption for educational purposes?
RQ2: How do institutional strategies, including policies, leadership support, and training programs, influence trust in GenAI adoption among educators?
Method
To address the research questions, we performed a systematic literature review following the PRISMA 2020 guidelines (Page et al., 2021) and a comprehensive, structured approach based on established best practices (Gough et al., 2017). Our approach comprised five main stages: searching, screening, organising, analysing, and reporting.

Electronic searches were executed between July 28 and August 4, 2024, across five databases: ERIC, EBSCOhost, Web of Science (WoS), ProQuest, and Scopus, using the following search query: ("generative AI" OR "artificial intelligence" OR ChatGPT OR "large language models") AND ("higher education" OR university) AND (trust OR trustworthy). The 444 articles retrieved were screened against agreed-upon inclusion and exclusion criteria, which admitted only peer-reviewed journal articles published in English between January 2019 and August 2024. This timeframe was chosen because previous reviews indicate a sharp increase in AI-in-education studies after 2018 and a surge of GenAI studies during this period (Wang et al., 2024). Studies focused on students, or on areas other than higher education such as K-12 education, medicine, and business, were excluded.

Screening was conducted manually in three phases through a collaborative, iterative process involving three research team members. First, the first author screened titles and abstracts to identify articles meeting the inclusion and exclusion criteria. Next, two researchers independently assessed 178 articles for eligibility, using a scoring mechanism to evaluate each article's quality and relevance: a score of 1 indicated high relevance to trust or adoption in both GenAI and higher education, and a score of 2 indicated moderate relevance. The researchers met several times to resolve disagreements, ensuring consistency through quality appraisal evaluations and inter-rater reliability checks. Data were organised carefully to ensure transparency throughout the searching and screening stages and are available upon request. Finally, 37 articles met the criteria for the final analysis, which followed a deductive coding approach based on the trust theory framework adapted for this study.
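To make the screening logic concrete, the sketch below renders the Boolean query and the stated inclusion/exclusion criteria as a simple filter. This is a hypothetical Python illustration, not the review's actual tooling: the Article fields, constants, and helper names (matches_query, passes_screening) are assumptions introduced here for clarity.

from dataclasses import dataclass
from datetime import date

@dataclass
class Article:
    """Hypothetical record for one retrieved search result (illustrative fields)."""
    title: str
    abstract: str
    language: str          # ISO code, e.g. "en"
    peer_reviewed: bool
    published: date

# Inclusion window reported in the review: January 2019 to August 2024.
START, END = date(2019, 1, 1), date(2024, 8, 31)

# Settings excluded by the review's criteria (outside higher education).
EXCLUDED_CONTEXTS = ("k-12", "medicine", "business")

def matches_query(text: str) -> bool:
    """Mirror the Boolean search string: (AI terms) AND (setting terms) AND (trust terms)."""
    t = text.lower()
    ai_terms = ("generative ai", "artificial intelligence",
                "chatgpt", "large language model")
    settings = ("higher education", "university")
    trust_terms = ("trust", "trustworthy")
    return (any(s in t for s in ai_terms)
            and any(s in t for s in settings)
            and any(s in t for s in trust_terms))

def passes_screening(a: Article) -> bool:
    """First-pass screen against the stated inclusion/exclusion criteria."""
    text = f"{a.title} {a.abstract}".lower()
    return (a.language == "en"
            and a.peer_reviewed
            and START <= a.published <= END
            and matches_query(text)
            and not any(term in text for term in EXCLUDED_CONTEXTS))

Applied to the 444 retrieved records, a filter of this kind corresponds to the first screening phase only; the subsequent eligibility scoring (1 = high relevance, 2 = moderate) and the inter-rater reliability checks remained human judgement tasks.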
Expected Outcomes
This study synthesises insights into educators' trust in Generative AI (GenAI) through a systematic literature review of 37 studies published between 2019 and 2024. The review reveals that educators' trust in GenAI extends beyond technical acceptance, involving a complex interplay between individual factors, such as familiarity, self-efficacy, and perceived control (i.e. human-in-the-loop), and organisational strategies. Our findings emphasise that educators' trust evolves dynamically through well-designed institutional support (Lukyanenko et al., 2022). Training programs are critical in building familiarity, particularly when introduced incrementally, addressing pedagogical needs, and emphasising human-in-the-loop approaches (Bozkurt et al., 2024). Furthermore, institutions that encourage experimentation with GenAI while openly addressing ethical risks foster greater trust than environments that mandate or restrict adoption without acknowledging educators' concerns; social influences, including leadership and peer support, further shape trust dynamics (Bhaskar et al., 2024).

Theoretically, this research suggests that the specific characteristics of GenAI require more advanced models to accommodate the complex dynamics of trust and adoption, with a focus on educators' perspectives. Practically, institutions should co-develop trust-building strategies with educators to ensure alignment with their values and needs (Herdiani et al., 2024). These strategies need to include human-centric approaches that enhance perceived control and emotional trust (Yang & Wibowo, 2022). Educators also need professional development opportunities that align GenAI use with existing pedagogical practice and explore constructivist approaches (Choi et al., 2023).

Ultimately, fostering trust in GenAI demands an adaptive approach that prioritises collaboration between educators and institutions. Beyond top-down directives, institutions must cultivate environments where educators feel empowered to engage with GenAI responsibly, confidently, and ethically, ensuring its alignment with pedagogical goals and institutional values.
References
Bhaskar, P., Misra, P., & Chopra, G. (2024). Shall I use ChatGPT? A study on perceived trust and perceived risk towards ChatGPT usage by teachers at higher education institutions. The International Journal of Information and Learning Technology, 41(4), 428–447. https://doi.org/10.1108/IJILT-11-2023-0220

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., Stracke, C. M., Singh, L., Crompton, H., Koutropoulos, A., Terentev, E., Pazurek, A., Nichols, M., Sidorkin, A. M., Costello, E., Watson, S., Mulligan, D., Honeychurch, S., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777

Choi, S., Jang, Y., & Kim, H. (2023). Influence of Pedagogical Beliefs and Perceived Trust on Teachers' Acceptance of Educational Artificial Intelligence Tools. International Journal of Human–Computer Interaction, 39(4), 910–922. https://doi.org/10.1080/10447318.2022.2049145

Gough, D., Thomas, J., & Oliver, S. (2017). An introduction to systematic reviews. Sage Publications.

Herdiani, A., Mahayana, D., & Rosmansyah, Y. (2024). Building Trust in an Artificial Intelligence-Based Educational Support System: A Narrative Review. Jurnal Sosioteknologi, 23(1), 101–119. https://doi.org/10.5614/sostek.itbj.2024.23.1.6

Lee, D., Arnold, M., Srivastava, A., Plastow, K., Strelan, P., Ploeckl, F., Lekkas, D., & Palmer, E. (2024). The impact of generative AI on higher education learning and teaching: A study of educators' perspectives. Computers and Education: Artificial Intelligence, 6, 100221. https://doi.org/10.1016/j.caeai.2024.100221

Li, Y., Wu, B., Huang, Y., & Luan, S. (2024). Developing trustworthy artificial intelligence: Insights from research on interpersonal, human-automation, and human-AI trust. Frontiers in Psychology, 15, 1382693. https://doi.org/10.3389/fpsyg.2024.1382693

Lukyanenko, R., Maass, W., & Storey, V. C. (2022). Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities. Electronic Markets, 32(4), 1993–2020. https://doi.org/10.1007/s12525-022-00605-4

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71

Wang, N., Wang, X., & Su, Y.-S. (2024). Critical analysis of the technological affordances, challenges and future directions of Generative AI in education: A systematic review. Asia Pacific Journal of Education, 1–17. https://doi.org/10.1080/02188791.2024.2305156

Wu, K., Zhao, Y., Zhu, Q., Tan, X., & Zheng, H. (2011). A meta-analysis of the impact of trust on technology acceptance model: Investigation of moderating influence of subject and context type. International Journal of Information Management, 31(6), 572–581. https://doi.org/10.1016/j.ijinfomgt.2011.03.004

Yang, R., & Wibowo, S. (2022). User trust in artificial intelligence: A comprehensive conceptual framework. Electronic Markets, 32(4), 2053–2077. https://doi.org/10.1007/s12525-022-00592-6