Standardized Module Evaluation for Teaching Excellence and Enhancement: Views of Students at a Single UK Higher Education Institution
Author(s):
Christopher Wiley (presenting / submitting)
Conference:
ECER 2014
Format:
Paper (Copy for Joint Session)

Session Information

10 SES 07 E JS, Teachers' Involvement in Educational Effectiveness

Paper Session, Joint Session NW 10 and NW 11

Time:
2014-09-03
17:15-18:45
Room:
B231 Sala de Aulas
Chair:
Peter Gray
Discussant:
Samuel Gento

Contribution

Aspects of teaching quality in Higher Education have come under renewed scrutiny across the UK sector in recent years. Changing global economic conditions and the 2011 governmental White Paper led to tuition fees for home undergraduate students rising to £9,000 per annum from 2012 entry. Within this more client-oriented market environment, in which value for money is sought, UK-wide surveys such as the National Student Survey (NSS) for final-year undergraduates and the Postgraduate Taught Experience Survey (PTES) for Master's-level students have gained increasing weight since their inception in 2005 and 2009 respectively, as have the league tables featured in publications such as The Times Good University Guide and The Complete University Guide. Institutions have also been required to publish specific information relating to teaching quality and provision within the Key Information Sets (KIS) introduced for undergraduate degree programmes from 2013-14.

One way in which UK universities have responded to these changing circumstances has been to introduce standardized module evaluation questionnaires across their respective institutions by way of quality assurance, to identify and address under-performing areas. Yet the mechanisms by which such evaluation is typically implemented raise a number of general problems. A recent UK-based study (Smith, Morris, & Bohms, 2011) cites a lack of student engagement, inadequate communication to students about changes made in response to their feedback, the burden of survey administration and management, and the need to solicit student feedback before the end of a module if timely action is to be taken. Darby (2007) has argued that such evaluations may reflect influences other than the course itself and that survey results should be interpreted in this light, and Nulty (2008) has examined the point at which survey response rates become large enough to yield meaningful results.

This paper draws upon the views of students in order to explore the principles underpinning the standardization of module evaluation, its advantages and disadvantages, and the extent to which it facilitates teaching enhancement and the recognition of teaching excellence. Taking as its case study the system implemented across a single UK Higher Education institution, City University London, in March 2011, it identifies the local and national contexts for the change and compares the new centralized procedures with some of the more localized processes it superseded. At the heart of this study is original data solicited from Student Representatives at the institution concerned (see methodology below), which is analyzed in order to probe key issues such as whether standardized evaluation processes necessarily provide a uniformly accurate measure of teaching excellence. On one hand, they offer a formal mechanism whereby all academic staff are evaluated in an equitable manner, and which may subsequently inform appraisals; on the other, they run the risk of not capturing the discipline-specific information that might facilitate teaching enhancement at a local level.

Ancillary questions to be explored in this study include the relative merits and shortcomings of different approaches to module evaluation, as well as the most effective timeframes for its implementation. The European dimension will be addressed (within the context of NW11's subtopic of quality assurance at district, region, or country level) by considering the implications of this research's findings for evaluating teaching in other countries, not least given that the economic conditions that were a driver for change were felt across the world. Finally, the paper will consider whether other measures, such as analysis of the grades awarded to students in a module, might offer alternative means of recognizing teaching excellence and of identifying best practices in order to enhance future teaching.

Method

This study is based on a questionnaire completed by some 40 Student Representatives (around 10% of the active student representation network) at a single UK Higher Education institution between February and June 2013. Responses were solicited via the University's annual internal Student Representatives (STARS) conference as well as at the School-wide Student Experience Forums (SEFs) of three of the institution's six Schools, selected at random. Student Representatives were recruited for this research because they are the students best placed to reflect accurately the opinions of the cohorts they represent, over and above their own individual views. The questionnaire comprised 18 questions, including tick-box, Likert-scale, priority-ranking, and free-text items, designed to gather students' views on various issues concerning the module evaluation system current at the institution as well as wider questions about teaching recognition and surveys more generally. The questions were arranged into five categories: about you; about module evaluation; about module evaluation in the context of your programme; about module evaluation and teaching excellence; and about surveys at the institution. The data were collected anonymously and in full compliance with the institutional policy on research ethics, ethical approval having been obtained from the University before the study was undertaken. The data were analyzed using a mixed-methods approach, combining quantitative methods (such as calculating means for Likert-scale responses and the percentages of students who answered a given question in a particular way) with thematic analysis of the qualitative data.
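The quantitative side of this analysis can be illustrated with a minimal sketch. The question labels and response data below are invented for illustration only; the paper does not specify the questionnaire items, and this is not the author's actual analysis code.

```python
# Sketch of the quantitative analysis described above: means for
# Likert-scale responses and percentages of students choosing each
# scale point. All data here are hypothetical.
from collections import Counter

def likert_mean(responses):
    """Mean of Likert-scale responses (1-5), rounded to two decimals."""
    return round(sum(responses) / len(responses), 2)

def response_percentages(responses):
    """Percentage of respondents choosing each scale point."""
    counts = Counter(responses)
    n = len(responses)
    return {point: round(100 * count / n, 1)
            for point, count in sorted(counts.items())}

# Hypothetical responses on a 1-5 scale (1 = strongly disagree,
# 5 = strongly agree) for two invented questionnaire items.
likert = {
    "aware_of_change": [4, 5, 3, 4, 2, 5, 4],
    "survey_timing_appropriate": [2, 3, 2, 4, 3, 2, 3],
}

for question, responses in likert.items():
    print(question, likert_mean(responses), response_percentages(responses))
```

Free-text answers would instead be coded thematically, which is not shown here.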

Expected Outcomes

Drawing on the data collected from the Student Representatives, this paper will assess the initial impact of the implementation of a standardized module evaluation system at a single UK Higher Education institution and its effectiveness, in the eyes of students, as a quality assurance process. Findings to be discussed include students' level of awareness of the change, the aspects of teaching and learning that they consider the new standardized questionnaire primarily to assess, and the general effectiveness of the system, together with its advantages and disadvantages. In particular, the students were asked for their views on the extent to which an institution-wide module evaluation process captures information specific to the localized context of a programme or department, and whether such surveys are administered at the most appropriate point in the module to capture this information. The findings also shed light on students' views on the effectiveness of this module evaluation system as a means of recognizing teaching excellence, as compared with other possible measures such as the grades awarded in a given module or the recent rise of student-led teaching award schemes, in which the students themselves nominate teachers for recognition. Finally, the study proposes a series of recommendations arising from the students' narratives in order to facilitate future action planning on module evaluation.

References

Alderman, G. (2007). League tables rule – and standards inevitably fall. The Guardian, 24 April 2007.

Beard, M. (2012). A Point of View: When students answer back. BBC News Magazine, 2 December 2012.

The Complete University Guide.

Darby, J. (2007). Evaluating course evaluations: The need to establish what is being measured. Assessment & Evaluation in Higher Education, 32(4), pp. 441-55.

Higher Education Academy (2011). The UK Professional Standards Framework for teaching and supporting learning in higher education [UKPSF].

Huxham, M., Laybourn, P., Cairncross, S., Gray, M., Brown, N., Goldfinch, J., & Earl, S. (2008). Collecting student feedback: A comparison of questionnaire and other methods. Assessment & Evaluation in Higher Education, 33(6), pp. 675-86.

The National Student Survey (NSS).

National Union of Students ([2009]). NUS Student Experience Report. London: National Union of Students.

Nulty, D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), pp. 301-14.

Postgraduate Taught Experience Survey (PTES).

Smith, P., Morris, O., & Bohms, E. (2011). Effective Course Evaluation: The Future for Quality and Standards in Higher Education. Electric Paper Ltd.

The Times Good University Guide.

Unistats.

Yorke, M. (2009). 'Student experience' surveys: Some methodological considerations and an empirical investigation. Assessment & Evaluation in Higher Education, 34(6), pp. 721-39.

Author Information

Christopher Wiley (presenting / submitting)
University of Surrey, United Kingdom
