Session Information
23 SES 10 D, Student Evaluations of Teaching: Structuring Pedagogy, Academic Work and University Leadership
Round Table
Contribution
Student evaluations of teaching (SET) are increasingly used at national and institutional levels for a range of purposes: curriculum and teaching development, performance management, quality assurance, the construction of league tables, and the provision of information to prospective students. SET are administered at national level in several European countries, notably through the UK's National Student Survey (NSS) and the Netherlands' NSSE, and a broader range of SET instruments is used in many other European universities. National, and to some extent institutional, instruments have come to structure public discourse on 'the student experience'. Yet research on SET instruments has tended to focus on their statistical reliability and validity as research instruments (e.g. Richardson et al. 2007; Marsh 2007; Spooren et al. 2012), on the relationship between SET ratings and students' learning (Galbraith et al. 2011; Richardson 2012), or on their potential for improving teaching through consultations with lecturers (Penny and Coe 2004). There has been no empirical research or critical analysis of the social, cultural and political aspects of the production and consumption of survey results. In other words, SET results have been treated largely unproblematically as reflections of reality or as drivers for the improvement of teaching, and research has sought to determine their accuracy and effectiveness.

The premise of the proposed round table is that SET are social and political objects in their own right, with the potential to transform the way prospective and current students think about higher education and to reconfigure the nature of HE work. The round table will bring together analyses of the social and political role of SET in several European countries: the UK, the Netherlands, Portugal and Poland. Each paper will address questions such as:
- What are the assumptions, motivations and discourses that accompany the design of SET instruments?
- How is the work of academics, administrators and leaders transformed through their participation in the production and consumption of SET? What is the impact of SET results on HE work, and on relationships amongst staff and between students and staff in HE?
- How is the production and consumption of SET results influencing pedagogy, and particularly academic judgements in assessment?
- What is the significance of the production and consumption of SET data for representations of ‘the student voice’, the nature of ‘choice’, and the changing role of students as stakeholders in higher education?
- To what extent does the use of SET as a policy instrument embody the evolving nature of higher education institutions’ relationships with national and supra-national structures (e.g. league tables, quality assurance mechanisms embedded in the creation of a European Higher Education Area)?
Method
Expected Outcomes
References
de Santos, M. (2009). Fact-totems and the statistical imagination: The public life of a statistic in Argentina 2001. Sociological Theory 27(4): 466-489.
Espeland, W. N. and M. Sauder (2007). Rankings and reactivity: How public measures recreate social worlds. American Journal of Sociology 113(1): 1-40.
Galbraith, C., G. Merrill and D. Kline (2011). Are student evaluations of teaching effectiveness valid for measuring student learning outcomes in business related classes? A neural network and Bayesian analyses. Research in Higher Education: 1-22.
Hagel, P., R. Carr and M. Devlin (2012). Conceptualising and measuring student engagement through the Australian Survey of Student Engagement (AUSSE): A critique. Assessment and Evaluation in Higher Education 37(4): 475-486.
Le Grand, J. (2003). Motivation, Agency, and Public Policy: Of Knights and Knaves, Pawns and Queens. Oxford: Oxford University Press.
Marsh, H. W. (2007). Students' evaluations of university teaching: A multidimensional perspective. In R. P. Perry and J. C. Smart (Eds.), The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective (pp. 319-384). New York: Springer.
Morley, L. (2003). Quality and Power in Higher Education. Buckingham: SRHE/Open University Press.
Penny, A. R. and R. Coe (2004). Effectiveness of consultation on student ratings feedback: A meta-analysis. Review of Educational Research 74(2): 215-253.
Porter, T. M. (1995). Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press.
Richardson, J. T. E., J. Slater, et al. (2007). The National Student Survey: Development, findings and implications. Studies in Higher Education 32(5): 557-580.
Sauder, M. and W. N. Espeland (2009). The discipline of rankings: Tight coupling and organizational change. American Sociological Review 74(1): 63-82.
Spooren, P., D. Mortelmans and P. Thijssen (2012). 'Content' versus 'style': Acquiescence in student evaluation of teaching? British Educational Research Journal 38(1): 3-21.