Session Information
32 SES 13 A, Governing AI in HE Institutions? An Organizational Education Perspective
Symposium
Contribution
This paper examines how AI chatbots and large language models (LLMs) shape knowledge production within organizational contexts, focusing on "learning in, by and between organizations" (Göhlich et al., 2018, p. 209). While AI-driven learning is often framed in terms of efficiency and personalization, this paper interrogates the epistemic assumptions embedded in AI training data and how LLMs reinforce dominant perspectives while excluding alternative viewpoints (Longpre et al., 2023, p. 12).

At the level of learning in organizations, AI-generated content is increasingly implemented in workplace training, internal knowledge management, and professional development programs (Bucher et al., 2024). However, rather than serving as neutral tools for knowledge dissemination, these systems inadvertently privilege dominant epistemologies, reinforcing institutional hierarchies and constraining critical engagement.

Regarding learning by organizations, the widespread reliance on AI for decision-making and adaptive learning raises concerns about how biases become entrenched within an organization's process of acquiring and institutionalizing knowledge. These biases are not incidental but are embedded in the very selection and structuring of AI training data, obscuring the power relations that shape these technologies (Carter & Wynne, 2024).

Finally, in learning between organizations, AI-mediated discourse structures the way knowledge is exchanged across industries, research networks, and policy arenas, amplifying dominant narratives while restricting the visibility of counter-discourses (Dodge et al., 2021, p. 7).

AI systems do not merely reflect knowledge structures; they actively shape and legitimize them. The authority over what constitutes valid knowledge is not neutral but is exercised by those who curate training data, determine inclusion criteria, and establish epistemic boundaries (Rehak, 2023).
The lack of transparency surrounding data provenance exacerbates epistemic injustices (Fricker, 2007), systematically omitting perspectives that fall outside dominant frameworks. This exclusionary process highlights how AI-driven knowledge formation is deeply embedded in sociopolitical structures that dictate what knowledge is produced, circulated, and institutionalized (Ferrara, 2023). By critically engaging with how AI constructs, validates, and disseminates knowledge, this paper highlights the risks of integrating LLMs into pedagogical practices without a thorough examination of their epistemic assumptions and exclusions. Without transparency in AI training data and a critical interrogation of the knowledge hierarchies it reinforces, the implementation of AI in organizational learning risks perpetuating existing power structures rather than challenging them.
References
Bucher, A., Schenk, B., Dolata, M., & Schwabe, G. (2024). When Generative AI Meets Workplace Learning: Creating A Realistic & Motivating Learning Experience With A Generative PCA.
Carter, W., & Wynne, K. T. (2024). Integrating artificial intelligence into team decision-making: Toward a theory of AI-human team effectiveness. European Management Review.
Dodge, J., Sap, M., Marasović, A., Agnew, W., Ilharco, G., Groeneveld, D., Mitchell, M., & Gardner, M. (2021). Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus.
Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. First Monday.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Göhlich, M., Novotný, P., Revsbæk, L., Schröer, A., Weber, S. M., & Yi, B. J. (2018). Research Memorandum Organizational Education: ECER 2017. Studia Paedagogica, 23(2), 205–215.
Longpre, S., Mahari, R., Chen, A., Obeng-Marnu, N., Sileo, D., Brannon, W., Muennighoff, N., Khazam, N., Kabbara, J., Perisetla, K., Wu, X., Shippole, E., Bollacker, K., Wu, T., Villa, L., Pentland, S., & Hooker, S. (2023). The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing & Attribution in AI.