Session Information
22 SES 04 A, AI and Teaching in HE
Paper Session
Contribution
Acknowledging the explosion of generative AI and its wide-sweeping impacts on education has already become commonplace, despite these tools only becoming widely publicly available 2-3 years ago. Generative AI is currently a major disruptive force in many practices and policies in higher education (Labadze et al., 2023). In this paper, we focus more narrowly on how generative AI, such as ChatGPT, is impacting higher education teaching. We conducted a large survey of teachers at a Danish university in May-July 2024. Our aim is to contribute to the growing literature documenting how generative AI intersects with teaching at the university level (Zawacki-Richter et al., 2019). Specifically, we focused on teachers' use of and beliefs about generative AI.
First, to explore teachers' practices around generative AI, we collected data on self-reported use patterns of generative AI. Our goal here was to survey use trends, but we also collected detailed demographic data to enable exploratory work on what may be key variables in understanding how teachers use generative AI. Our large sample was drawn from only one university in a single country, which limits the generalizability of our findings. However, we focused on rich description, supplementing our survey data with qualitative responses from participants and informational interviews with key stakeholders at the university, to create a rich understanding of how different demographic variables may be related to generative AI adoption. As such, we believe our findings provide a foundation for further, more representative research on AI use among higher education teachers, which relates to the conference theme of "charting the way forward". We explored the use patterns of generative AI through the following research questions:
RQ1: What are the use patterns of generative AI among university teachers during the ‘early adoption’ phase?
RQ2: What are the ways university teachers are using generative AI in relation to their teaching activities?
Second, to explore teachers' beliefs about generative AI, we collected valence ratings of how teachers view generative AI in terms of its contributions to student learning and teaching, and in general (e.g., reliability, sustainability, ethics). We adopted the technology acceptance model (Venkatesh et al., 2003; Yilmaz et al., 2024) to suggest relevant dimensions for studying general beliefs about generative AI, and focused on effort, facilitating conditions, trust in generative AI, and usefulness beliefs related to adopting generative AI (Chatterjee & Bhattacharjee, 2020; Choi et al., 2023; Koponen, 2023). We explored teaching-relevant beliefs about generative AI through the following research questions:
RQ3: How do teachers view generative AI in relation to teaching and learning?
RQ4: How do teachers conceptualize generative AI generally?
As a goal of our research is to explore these four questions in light of relevant demographic variables, about a third of the survey focused on collecting information about the participants. Based on prior survey research on higher education teaching and technology use (e.g., Guillén-Gámez & Mayorga-Fernández, 2020), we measured variables related to the individual (e.g., age, gender, languages spoken), their employment (e.g., employment type, teaching tasks), their discipline, and their teaching (e.g., years of experience, course types, use of active learning). We used these variables to examine between-group differences with regard to our research questions.
Method
This study included a survey of university teachers from a large research-focused university in Denmark. We sent surveys to all teachers at the university between May and June 2024 and recruited 1317 participants (corresponding to approximately 13% of all university employees; Danske Universiteter, 2024). The survey included three sections on 1) relevant demographic variables, 2) teaching practices and beliefs, and 3) generative AI practices and beliefs. The survey was designed based on stakeholder focus-group input and a review of relevant research. Where possible, we included items that had been piloted in previous studies (e.g., Stenalt et al., 2023) and validated in the literature (e.g., Chatterjee & Bhattacharjee, 2020; Choi et al., 2023; Koponen, 2023).

As our analysis centers on exploratory descriptive analysis around salient demographic variables, we summarize key aspects of the composition of our sample here. Participants came from various fields, including the natural sciences (N = 487), health fields (N = 388), humanities (N = 188), and social sciences (N = 164), as well as some from other fields (N = 90). Participants also varied in terms of their employment type (permanent N = 824 versus time-limited N = 448), teaching responsibilities (lecturing N = 765, course management N = 266, teaching assistant N = 198, supervision and other tasks N = 64), teaching experience (Mdn = 13 years, SD = 10), age (M = 46, SD = 13.1), and gender (Nmen = 717, Nwomen = 565, Nother = 18). Most participants were native Danish speakers (72%) compared to native English speakers (9%); however, among non-native speakers the median fluency level was higher in English ("fluent") than in Danish ("some").

We analyzed the data in a two-step process: we first answered each research question for the entire sample, and then looked for salient differences between sub-groups. As this is a descriptive study, we did not use the null-hypothesis testing framework.
Instead, we used a Bayesian framework to estimate effect sizes and the likelihood of a real effect for each observed between-group difference, and we controlled the type-1 error rate across comparisons using false-discovery-rate control.
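As an illustration of the multiple-comparison step, the Benjamini-Hochberg procedure is a standard way to implement false-discovery-rate control; the following minimal sketch uses hypothetical p-values and is not the study's actual code or data.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices
    of hypotheses rejected while controlling the FDR at level alpha."""
    m = len(p_values)
    # Sort p-values ascending, remembering each one's original position.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * alpha:
            k_max = rank
    # Reject all hypotheses ranked at or below k_max.
    return sorted(order[:k_max])

# Hypothetical example: six between-group comparisons.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.60]
rejected = benjamini_hochberg(pvals)  # -> [0, 1]
```

Under this procedure, comparisons that would individually pass a naive .05 threshold (e.g., the p-values near .04 above) are not counted as discoveries once the full set of comparisons is taken into account.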
Expected Outcomes
Here we report the main findings for each research question; salient sub-group differences will be elaborated in the presentation.

First, related to use patterns (RQ 1), we found that most early adopters used ChatGPT, with 31% using it regularly and 46% having tried it, compared to 6% and 14%, respectively, for the campus's approved platform. In contrast, 71% had never encountered the eight other platforms we surveyed (e.g., Copilot or DALL-E).

Second, related to teaching practices (RQ 2), the most common uses (reportedly done "sometimes" or "frequently") included asking questions (N = 442), improving written text (N = 431), and translating materials (N = 359). The least common uses (reportedly done "rarely" or "never") included creating images (N = 882), summarizing multiple texts (N = 893), and chatting (N = 911).

Third, related to views on AI in teaching and learning (RQ 3), a majority of teachers considered students' use of generative AI on tasks such as completing homework, summarizing readings, or creating presentation outlines to be "supportive of learning" (60%-67%) rather than cheating (8%-24%). Only the use of generative AI to combine individual contributions during group projects was rated as not supportive of learning (45%) by most teachers. Across all activities, there were moderate inverse correlations (avg. r = -.61) between viewing generative AI as supportive of learning and viewing it as cheating.

Finally, related to general beliefs about AI based on technology acceptance model (TAM) constructs (RQ 4), on average teachers were neutral (M = 2.9, SD = .9) about generative AI's usefulness (std. alpha = .87) and lacked the facilitating conditions (std. alpha = .73) for adoption (M = 2.5, SD = .9). Additionally, effort and trust (std. alpha = .61), which are distinct TAM constructs, were indistinguishable in our factor analysis; most participants rated "disagree" (M = 2.7, SD = .7) on this combined scale.
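For readers less familiar with the standardized alpha values reported above, standardized Cronbach's alpha depends only on the number of items and their average inter-item correlation. The sketch below shows that relationship; the item count and correlation in the example are hypothetical, not the study's.

```python
def standardized_alpha(avg_inter_item_r, n_items):
    """Standardized Cronbach's alpha from the mean inter-item
    correlation (r_bar) and the number of items (k):
    alpha = k * r_bar / (1 + (k - 1) * r_bar)."""
    k = n_items
    return k * avg_inter_item_r / (1 + (k - 1) * avg_inter_item_r)

# Hypothetical example: a four-item scale with mean inter-item r = .5
# reaches the conventional .8 reliability benchmark.
alpha = standardized_alpha(0.5, 4)  # -> 0.8
```

The formula makes explicit why short scales (such as a combined effort-trust scale) can show lower alphas even with reasonable inter-item correlations: fewer items mechanically lower the value.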
We will contextualize these findings by relating them to relevant scholarly discourses and future research directions (Bearman et al., 2023).
References
Bearman, M., Ryan, J., & Ajjawi, R. (2023). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education, 86(2), 369–385. https://doi.org/10.1007/s10734-022-00937-2

Chatterjee, S., & Bhattacharjee, K. K. (2020). Adoption of artificial intelligence in higher education: A quantitative analysis using structural equation modelling. Education and Information Technologies, 25(5), 3443–3463. https://doi.org/10.1007/s10639-020-10159-7

Choi, S., Jang, Y., & Kim, H. (2023). Influence of pedagogical beliefs and perceived trust on teachers' acceptance of educational artificial intelligence tools. International Journal of Human–Computer Interaction, 39(4), 910–922. https://doi.org/10.1080/10447318.2022.2049145

Danske Universiteter. (2024). Universiteternes statistiske beredskab (personale og studieaktivitet for de enkelte universiteter) [The universities' statistical reporting (staff and study activity for the individual universities)] [Data set]. https://dkuni.dk/det-statistiske-beredskab/

Guillén-Gámez, F. D., & Mayorga-Fernández, M. J. (2020). Identification of variables that predict teachers' attitudes toward ICT in higher education for teaching and research: A study with regression. Sustainability, 12(4), 1312.

Koponen, K. (2023). Acceptance of generative AI in knowledge work [Master's thesis, Tampere University]. Trepo. https://urn.fi/URN:NBN:fi:tuni-202401171577

Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00426-1

Stenalt, M. H., Nøhr, L., Løkkegaard, E. B., & Mathiasen, H. (2023). Underviseres digitale undervisningskompetencer post-corona [Teachers' digital teaching competencies post-corona]. Tidsskriftet Læring og Medier (LOM), 16(27).

Venkatesh, V., Morris, M., Davis, G., & Davis, F. (2003). User acceptance of information technology: Towards a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540

Yilmaz, F. G. K., Yilmaz, R., & Ceylan, M. (2024). Generative Artificial Intelligence Acceptance Scale: A validity and reliability study. International Journal of Human–Computer Interaction, 40(24), 8703–8715. https://doi.org/10.1080/10447318.2023.2288730

Zawacki-Richter, O., Marin, V., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0