Session Information
Paper Session
Contribution
Topic, objective and analytical framework
This study follows the trajectories of student evaluations in a research university in the Netherlands. It analyses how they are adjusted and used at different moments by the various actors involved, how they relate to understandings of higher education quality, and which values, purposes and social consequences are thereby taken into account.
Higher education quality is a multiple, elusive and not always clearly articulated concept. Student evaluations of education and teaching are related to different purposes of higher education quality and to the assessment of aspects like student learning, program quality, teacher effectiveness and faculty performance (Harvey & Green, 1993; Tam, 2001; Weenink et al., 2022). While they are used to improve teaching and learning, they have also become a disciplinary device to shape academic conduct (Barrow & Grant, 2016; Hornstein, 2017). It is unclear when, where and for what purposes student evaluations are formally and informally used by different academic actors, and how quality is thereby measured and understood.
Esarey and Valdes (2020) note that the scholarly debate on student evaluations has focused on teacher effectiveness and on aspects like reliability, validity and bias. They identify mixed perspectives concerning the reliability and validity of measuring teaching effectiveness and argue that student evaluations are at best moderately correlated with student learning and/or instructional best practices. Recent studies shift attention to issues of fairness and to the social effects of using them. Heffernan (2022) draws attention to the negative consequences of bias for specific groups such as women and minority groups, who are increasingly subjected to abusive comments. Focus groups with academics furthermore suggest that student evaluations are most consequential for early career scholars’ careers [authors, under review]. Even unbiased, reliable and valid evaluations can be unfair and fail to identify the best teacher (Esarey & Valdes, 2020).
Several studies argue for combining student evaluations with other, dissimilar measurements of teaching, like self-assessment and peer review of courses, in personnel decisions, and for statistical adjustments before using them for any purpose (Esarey & Valdes, 2020; Hornstein, 2017). This ‘broad quality perspective’ can include more than student attainment and also assess the role and performance of lecturers in the educational process (Onderwijsraad, 2016; Tam, 2001; Weenink et al., 2022). One could even include the social consequences of the uses of student evaluations. It is however not known which values are brought forward in constructing and using student evaluations within academia. While student evaluations are widely critiqued, little is known about what they are actually used for, how they relate to quality understandings, and what degrees of freedom exist to adjust them to situated practices and purposes.
This study analyses the trajectories of student evaluations for different social sciences in a Dutch university. Various academic actors, like institutional and faculty management, educational committees, directors, course coordinators, lecturers and students, can engage with them for different purposes and adjust them, for example by adding questions. These actors thereby articulate what they find valuable. Heuts and Mol (2013) conducted such an analysis of values for tomatoes from an Actor-Network Theory perspective, following them from developers and growers to so-called consumers. They identified different registers of worth that are drawn upon, and sometimes clash, when making a ‘good tomato’. We add Norbert Elias’ notion of human figurations (Elias, 1968, 1978) to this perspective to further assess how academic actors engage with their environment in using and adjusting student evaluations.
Research question
What are the trajectories of student evaluations in a Dutch research university, and how are different notions of quality taken into account in their uses and adaptations?
Method
A single case study is conducted at a Dutch research university to provide an extensive analysis of the trajectories that student evaluations go through, and to develop a broad understanding of how different actors shape and engage with them (Flyvbjerg, 2006). For practical reasons, the study focuses on different social sciences, although the trajectories can transcend the social sciences faculty. Norbert Elias’ notion of human figurations provides a human-centered, networked perspective to analyse the roles of the relevant actors and sites within the university. A human figuration is a constellation of mutually oriented and dependent people, with shifting asymmetrical power balances: a nexus of human interdependencies (Elias, 1968). Power develops within relationships as people are mutually dependent; lecturer and student have control over each other, as both are needed to realize educational quality. Interdependencies are at least bipolar, but often multipolar, and may also involve higher management or even policy makers. Figurations are in this sense interdependency networks (Elias, 1978). These interdependencies restrict and enable what people can do with student evaluations, given their relative position in the network: a director might have more room to discuss and adjust uses and scope than a lecturer.
To reconstruct the trajectories of shaping and using student evaluations, different sources are combined (Flick, 2004). The analysis starts with interviews with faculty support staff to reconstruct the formal trajectory and map the process, actors, documents and systems involved; it is not yet clear who is involved in shaping and using the student evaluations, and when and how students and lecturers are engaged. Documents and other sources are then interpreted before proceeding with interviews with the actors identified. These interviews are first used to understand the actors’ roles and positions within the figuration. Second, they are used to assess the actors’ quality views, their uses of the student evaluations, and their values, motivations and room to change them. A previous study addressed the quality views of social science educational directors; these interviews are (with permission) re-interpreted for the uses and adaptations of student evaluations. All interviews are transcribed verbatim and combined with other sources in a network reconstruction using ATLAS.ti. A language-centered grounded theory approach is used to interpret how the student evaluations are used and adjusted by different actors, what they find salient, and how this relates to their views on higher education quality and its measurement (Charmaz, 2014).
Expected Outcomes
The study is in its initial stage, and the analysis of the different trajectories will be finished before the summer. The preliminary analysis of interviews with educational directors indicates that they have some room to change the scope of the student evaluations and to add domain-specific questions for their programs. Their room to change the uses and purposes of student evaluations is however limited by institutional rules, systems and practices. Most have a limited view of the trajectory of student evaluations within the institution beyond their own institute or program. They are aware of bias and of limitations in measuring educational quality, and some try to increase the evaluations’ validity. There is however also reluctance to discuss the social consequences and to change their uses. In line with the ‘broad perspective’ on quality, the student evaluations are enriched and combined with other assessments. Educational directors in the position of full professor display a broader view and seem to have somewhat more room to adjust the student evaluations than assistant or associate professors or support staff. They also have more responsibilities concerning human resource management, and use student evaluations to value academic performance when this is a formal criterion, arguing that they enrich them to broaden their views. While attention is paid to bias, the initial findings suggest that the social consequences of using student evaluations play a limited role in how they are used and adjusted. Our further analysis of the trajectories will provide more insight into this. The preliminary findings that the room for maneuver is limited and that the uses are largely uncontested are consonant with Barrow and Grant (2016) and Pineda and Seidenschnur (2022), who identified a focus on metrification and further disciplinary effects.
References
Barrow, M., & Grant, B. M. (2016). Changing mechanisms of governmentality? Academic development in New Zealand and student evaluations of teaching. Higher Education, 72(5), 589–601. https://doi.org/10.1007/s10734-015-9965-8
Charmaz, K. (2014). Constructing grounded theory (2nd ed.).
Elias, N. (1968). The Civilizing Process: Sociogenetic and Psychogenetic Investigations (E. Dunning, J. Goudsblom, & S. Mennell, Eds.; Revised ed.). Blackwell Publishing Ltd.
Elias, N. (1978). What is Sociology? (S. Mennell, G. Morrissey, & R. Bendix, Eds.). Columbia University Press.
Esarey, J., & Valdes, N. (2020). Unbiased, reliable, and valid student evaluations can still be unfair. Assessment and Evaluation in Higher Education, 45(8), 1106–1120. https://doi.org/10.1080/02602938.2020.1724875
Flick, U. (2004). Triangulation in qualitative research. In U. Flick, E. von Kardorff, & I. Steinke (Eds.), A companion to qualitative research (pp. 178–183). Sage Publications Ltd.
Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219–245. https://doi.org/10.1177/1077800405284363
Harvey, L., & Green, D. (1993). Defining quality. Assessment & Evaluation in Higher Education, 18(1), 9–34. https://doi.org/10.1080/0260293930180102
Heffernan, T. (2022). Sexism, racism, prejudice, and bias: A literature review and synthesis of research surrounding student evaluations of courses and teaching. Assessment and Evaluation in Higher Education, 47(1), 144–154. https://doi.org/10.1080/02602938.2021.1888075
Heuts, F., & Mol, A. (2013). What is a good tomato? A case of valuing in practice. Valuation Studies, 1(2), 125–146. https://doi.org/10.3384/vs.2001-5992.1312125
Hornstein, H. A. (2017). Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance. Cogent Education, 4(1). https://doi.org/10.1080/2331186X.2017.1304016
Onderwijsraad. (2016). De volle breedte van onderwijskwaliteit. https://www.onderwijsraad.nl/upload/documents/publicaties/volledig/De-volle-breedte-van-onderwijskwaliteit1.pdf
Pineda, P., & Seidenschnur, T. (2022). Translating student evaluation of teaching: How discourse and cultural environments pressure rationalizing procedures. Studies in Higher Education, 47(7), 1326–1342. https://doi.org/10.1080/03075079.2021.1889491
Tam, M. (2001). Measuring quality and performance in higher education. Quality in Higher Education, 7(1), 47–54. https://doi.org/10.1080/13538320120045076
Weenink, K., Aarts, N., & Jacobs, S. (2022). ‘We’re stubborn enough to create our own world’: How programme directors frame higher education quality in interdependence. Quality in Higher Education, 28(3), 360–379. https://doi.org/10.1080/13538322.2021.2008290