Session Information
Paper Session
Contribution
The last decade has seen an extraordinary increase in the global development and use of digital technologies. The Covid-19 pandemic accelerated the pace of this growth by an estimated four years, to the extent that in 2022 the ‘tipping point’ at which computing power (internet storage and datasphere) would outstrip the ‘collective brain capacity’ of humanity was projected to happen sometime between 2031 and 2036 (Kirkman & Walsh, 2024). ChatGPT, released in 2022, was the first public release of a Large Language Model (LLM) Artificial Intelligence (AI). Recent estimates suggest that the introduction of this and other LLMs is likely to have increased the rate of technological growth by a further three years, leading to a revised estimate that puts the ‘tipping point’ potentially as early as 2028 (based on data in Taylor, 2024). Yet a recent Global Education Monitoring Report (UNESCO, 2023) suggests that as few as 33% of teachers in Europe feel adequately prepared to use ICT in teaching, and many have limited knowledge of how AIs could support them. Alongside this, one in four countries has a law, and three in four countries have a ‘policy, plan or strategy’, on teacher technology training (ibid.). Given this political interest and the potential of such technologies to transform the global educational ecosystem, the question of how these technologies may support or constrain education has never been more pressing.
Since the introduction of ChatGPT, a growing body of research has emerged that identifies the benefits of AIs in education, including personalised learning, lesson planning, report writing, research and analysis capabilities, and brainstorming assistance. However, there are concerns around the accuracy, privacy and ethical implications of using AI tools in education (Baidoo-anu & Owusu Ansah, 2023; Chan & Hu, 2023; Su & Yang, 2023). Chan and Hu (2023) identify potential risks to personal development, career prospects and societal values with the consistent use of AIs in education, whilst Watson and Romic (2024) highlight the challenges introduced to truth and academic integrity.
LLMs can generate original content rapidly, potentially enhancing traditional learning methods. For example, they have the potential to use data, analysis and machine learning to identify and provide individualised learning pathways for large numbers of students. Authors have suggested that these developments may move teachers towards performing more pastoral roles (Alier et al., 2024) as students will be able to learn everything from artificially generated content (Halagatti et al., 2023). Yet such discussions tend to conceptualise education as an information delivery and learning process, overlooking more foundational questions about the broader purposes of education. If educators are to make informed decisions about how AIs may support or constrain education, we need a framework that helps them to consider the potential contribution of AIs in this more holistic way: as a collective enterprise for human development. This paper reports on a series of projects that directly respond to this challenge. Education is subsequently italicised and capitalised to emphasise these holistic developmental purposes.
Our theoretical framework builds on international reviews of the aims of Education which suggest it has five key purposes: i) Economic: Education contributes to students’ personal economic wellbeing and to the economic wellbeing of their region and/or nation; ii) Cultural: Education introduces students to their cultural heritage so they can continue its development; iii) Social: Education enables students to participate as members of a complex social ecosystem; iv) Personal: Education promotes intellectual, spiritual, moral and physical development; v) Environmental: Education promotes the health of the natural ecosystem of the planet (adapted from James et al., 2011).
Method
The projects reported in this paper were framed by the research question: “What is a potential framework for evaluating the pedagogic potential of AIs in Education as a collective enterprise for human development?” The research design was framed across two phases.

Phase one focused on articulating a statement of the purposes of Education that had implementable validity (Argyris, 2004) at the organisational level. Contextualising this in the case of UK curricula over the last century (White, 2008; Priestley, 2024), we can see a steady progression away from the collective recognition of the aims of Education (e.g. Morant, 1904) and towards a curriculum that outlines a ‘knowledge entitlement’, positioning schools as the arbiters of the ‘wider curriculum’ (DfE, 2014) and consequently the guardians of Education. Thus, the unit of analysis for any consideration of Educational aims and purposes is the organisation: the school or college. This approach is well supported by the work of Stenhouse: “the primary unit of coordination and support is the school” (1975, p. 166). The study was therefore framed as a multiple case study of projects bounded at the organisational level (Schwandt & Gates, 2017), with sociocultural theory as the theoretical lens that brought together the social and cultural aspects of Education (Cole, 1996). AIs in this context were viewed as mediating cultural and ‘synthetic social’ artefacts. All the organisations were selected due to their degree of engagement with AIs and were located in cities in the East of England.

Phase two moved into a context-rich analysis of the pedagogic potential of AI. The guiding question for phase two was “What are the key affordances of AI that support and constrain Education?” Analysis adopted Activity Theory (Nardi, 1996) as a lens that allowed for an ‘in-depth, multi-faceted understanding’ (ibid.) of the potential social and cultural developmental interactions between learners, teachers and AIs in each context.

Data collection methods across both phases were document collection (n = 86) and semi-structured interviews with twelve professionals across six heterogeneously sampled Educational organisations: two primary schools, two secondary schools, a sixth form college and a university. The researchers were positioned as a teacher educator and a teacher researcher respectively. The project was approved by the ARU ethics committee and adhered to BERA (2024) ethical guidelines.
Expected Outcomes
From the Educational purposes in phase one, we developed a series of questions that elicit links between these purposes and AI interactions: How does this use of an AI help teachers to help students to a) know; b) learn to do; and c) develop understanding of what is needed to do the following: i) be educated citizens?; ii) contribute to employment and wider society?; iii) reach their full development as individuals?; and iv) develop a fairer, more socially just society? We noted five qualitatively different affordances of AIs in phase two: a) Content creation is the creation of something new by combining pre-existing elements of text, images, video etc. into a novel combination; b) Data analysis and synthesis pulls out key information from data provided to the LLM; c) Qualitative feedback uses the LLM’s neural network to analyse text data; d) Data transformation involves changing the form of the data whilst maintaining its meaning; e) Simulated social interactions draw on the LLM’s capacity to use probability-based language responses, allowing users to interact with an AI in a simulated conversation. This final category represents a substantive shift from the traditionally cultural affordances of digital technologies to the simulated social affordances now offered in unmediated interactions with them. Alongside these affordances, response inaccuracy was noted as a significant ‘feature’ of interactions with pedagogic potential. Four qualitatively different user roles (subjects) were identified: teacher, student, administrator and leader. Taken together, our data suggest that a framework for evaluating the pedagogic potential of AIs in Education would need to include three elements: who is using it (subject); why they are using it (purpose); and how they are using it (affordance). Although based on a small sample, these findings have potential for transferability and offer an implementable starting point for a framework for researchers and teacher-professionals to evaluate the pedagogic potential of AIs in Education.
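To illustrate how the three elements of the proposed framework might be recorded in practice, the minimal Python sketch below encodes the subjects, purposes and affordances reported above as simple enumerations attached to a single evaluation record. It is an illustrative assumption for readers, not an implementation produced by the study; all class, field and value names are hypothetical paraphrases of the framework elements.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch only: names paraphrase the framework elements reported above
# and are not part of the study's own materials.

class Subject(Enum):
    TEACHER = "teacher"
    STUDENT = "student"
    ADMINISTRATOR = "administrator"
    LEADER = "leader"

class Purpose(Enum):
    ECONOMIC = "economic"
    CULTURAL = "cultural"
    SOCIAL = "social"
    PERSONAL = "personal"
    ENVIRONMENTAL = "environmental"

class Affordance(Enum):
    CONTENT_CREATION = "content creation"
    DATA_ANALYSIS_SYNTHESIS = "data analysis and synthesis"
    QUALITATIVE_FEEDBACK = "qualitative feedback"
    DATA_TRANSFORMATION = "data transformation"
    SIMULATED_SOCIAL_INTERACTION = "simulated social interaction"

@dataclass
class AIUseEvaluation:
    """One evaluated instance of AI use: who is using it, why, and how."""
    subject: Subject        # who is using it
    purpose: Purpose        # why they are using it (Educational purpose served)
    affordance: Affordance  # how they are using it
    notes: str = ""         # e.g. observed response inaccuracy, contextual detail

# Hypothetical example entry: a teacher drawing on simulated social interaction
# in support of students' personal development.
example = AIUseEvaluation(
    subject=Subject.TEACHER,
    purpose=Purpose.PERSONAL,
    affordance=Affordance.SIMULATED_SOCIAL_INTERACTION,
    notes="Role-played dialogue; responses checked for inaccuracy.",
)
print(example)
```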
References
Alier, M., García-Peñalvo, F., & Camba, J. D. (2024). Generative Artificial Intelligence in Education: From Deceptive to Disruptive.
Argyris, C. (2004). Reasons and Rationalizations: The Limits to Organizational Knowledge (Repr.). Oxford University Press.
Baidoo-anu, D., & Owusu Ansah, L. (2023). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. Journal of AI, 7(1), 52–62.
Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43.
Cole, M. (1996). Cultural Psychology. Belknap Press of Harvard University Press.
Halagatti, M., Gadag, S., Mahantshetti, S., Hiremath, C. V., Tharkude, D., & Banakar, V. (2023). Artificial Intelligence: The New Tool of Disruption in Educational Performance Assessment. In P. Tyagi, S. Grima, K. Sood, B. Balamurugan, E. Özen, & T. Eleftherios (Eds.), Smart Analytics, Artificial Intelligence and Sustainable Performance Management in a Global Digitalised Economy (Vol. 110A, pp. 261–287). Emerald Publishing Limited.
James, M., Oates, T., Pollard, A., & Wiliam, D. (2011). The Framework for the National Curriculum: A report by the Expert Panel for the National Curriculum review. Department for Education.
Kirkman, P., & Walsh, N. (2024). Towards a tipping point: A ‘best guess’ review of digital technological and human expertise. Position paper presented to MBA (ELM) seminar series, ARU, 9/2/2024.
Nardi, B. (1996). Context and Consciousness: Activity Theory and Human-Computer Interaction. MIT Press.
Priestley, M. R. (2024). Curriculum orientations. Available at: https://mrpriestley.wordpress.com/
Schwandt, T. A., & Gates, E. F. (2017). Case Study Methodology. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE Handbook of Qualitative Research. SAGE Publishing.
Stenhouse, L. (1975). An Introduction to Curriculum Research and Development. Pearson Education.
Su, J., & Yang, W. (2023). Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education. ECNU Review of Education, 6(3), 355–366.
Taylor, P. (2024). Data growth worldwide 2010–2028. https://www.statista.com/statistics/871513/worldwide-data-created/
UNESCO. (2023). Global Education Monitoring Report 2023: Technology in education: A tool on whose terms? (1st ed.). GEM Report UNESCO.
Watson, S., & Romic, J. (2024). ChatGPT and the entangled evolution of society, education, and technology: A systems theory perspective. European Educational Research Journal.
White, J. (2008). Aims as Policy in English Primary Education (Primary Review Research Survey 1/1). Cambridge: University of Cambridge Faculty of Education.