Session Information
09 SES 02 B, Assessments and Feedback in Higher Education
Paper Session
Contribution
First steps in designing higher education courses based on competences rather than only on content were taken years ago. In the US, this development can be traced back to the 1960s, when innovative teacher education programs were developed; if competence-orientation is understood as a shift towards outcome-oriented teaching and learning, the origins reach back as far as the 1920s (Nodine, 2016). In the EU, the demand for competence-orientation in higher education appeared explicitly on the political agenda in 1999 with the Bologna Declaration (European Higher Education Area, 1999). Since then, however, little has actually changed in the design of examinations and assessments in higher education. They are still regarded more as tests of students' recall of last semester's content than as measures of achieved competences, and university teachers generally do not use explicitly stated competence models when creating their exams, even where competence-oriented learning objectives exist (Schaper et al., 2012). Hence, the only feedback systems implemented remain point-based scores or letter grades, despite a substantial body of research on functional feedback that recommends alternatives to grade-based scores. The most important criteria for useful feedback in this context are: stimulating effective learning opportunities, informing students about their competence-related performance against explicit quality criteria, and providing immediate feedback on the achieved competence levels in order to optimize learning processes (Carless, 2006).
Biggs (2003) operationalized competence-based teaching and learning by developing the theory-based and practice-oriented approach of Constructive Alignment (CA). The framework brings learning objectives, learning arrangements and learning outcome assessments into alignment with one another (Biggs & Tang, 2011). While a number of higher education institutions have adopted this approach, it is still not widespread (Nodine, 2016; Wildt & Wildt, 2011). At the same time, competence-based approaches to teaching and learning in higher education, especially workshops on creating competence-based examinations and authentic assessments, remain in high demand (Schindler, 2015).
This talk applies a method that enriches current implementations of CA with empirical evidence of competence-orientation, in the sense of the validity of criterion-referenced testing (Haertel, 1985). The method enables teachers to gain deeper knowledge about their teaching and about their procedures for measuring learning. In addition, students receive more detailed feedback on their abilities than grades can provide. I use the example of traditional classroom courses in mechanical engineering at the University of Stuttgart, Germany, a university with a strong STEM orientation. The talk highlights factors concerning feasibility, ease of implementation and student acceptance, so that teachers are motivated and able to implement the approach themselves.
Method
The basic statistical method is Cognitive Diagnostic Modeling (CDM), a family of probabilistic test models. Central to the analysis are skills that are tested through different tasks: solving a task requires mastering one or more of these skills. The results include, among other things, individual skill profiles of the respondents and various fit measures (George et al., 2016). The dichotomous nature of the model results (a skill is classified as mastered or not mastered) keeps the interpretation simple for teachers and students, which increases acceptance while still allowing for more complex structures. The introduced approach uses competence-based learning objectives as the skill set in CDM. They are created by subject matter experts and experts in higher education teaching methodology, who develop and align the criteria for competence-orientation. The assessments are developed by experts in mechanical engineering and are thus grounded in years of experience in measuring learning and skill acquisition. By specifying the assignment matrix of skills to tasks (the Q-matrix) and interpreting selected fit measures, these assumptions can be checked. Furthermore, the validity of the resulting models can be examined using four criteria: (1) content-specific analyses by subject matter experts and experts in higher education teaching methodology, for content validity; (2) the fit of learning objectives and exams, as described above, for internal validity; (3) the covariance matrix of the skills, for internal validity and, using further exams based on similar learning objectives, for external validity; and (4) the individual skill profiles and external criteria – namely sex, course of study and grades in different exams – for criterion validity. The latter can be used in two ways: (a) with statistical methods of Differential Item Functioning (Penfield & Camilli, 2007) as a measure of internal criterion validity, and (b) for diagnostic purposes as a measure of external criterion validity.
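To make the procedure more concrete, the following minimal sketch shows how such a model could be fitted with the R package CDM cited above (George et al., 2016). It is illustrative only and not the analysis script used in the study: the objects responses (dichotomously scored exam tasks) and q_matrix (the expert-specified assignment of tasks to learning objectives) are assumed placeholders, and the output components named in the comments follow the package documentation for din objects.

library(CDM)

# q_matrix: one row per task, one column per skill; 1 = task requires the skill
# responses: one row per student, one column per task; 1 = task solved correctly
mod <- din(data = responses, q.matrix = q_matrix, rule = "DINA")

summary(mod)       # guessing/slipping parameters, item fit, skill mastery rates

mod$skill.patt     # estimated proportion of students mastering each skill
mod$IDI            # item discrimination indices; negative values flag tasks
                   # that may draw on skills not specified in the Q-matrix
head(mod$pattern)  # individual skill profiles, usable as student feedback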
Expected Outcomes
The analyses are based on a sample of 2,682 students in mechanical engineering at the University of Stuttgart over a period of xxxx semesters/years. The sample is drawn from two courses, taught in alternating sequence by two teachers. One major advantage of combining CDM and CA is the interpretation of atypical findings and misfits: these can be understood as misspecifications within CA rather than as data- or model-based misspecifications within CDM. This opens up the opportunity to identify specific potential for optimizing teaching and learning processes, especially course and module learning objectives and the examination and assessment tasks. Only a few Q3 statistics larger than .3 indicate local item dependence (LID), which can be resolved using different recoding strategies. The model shows that examination grades do not consist solely of information provided by the subject-specific learning objectives (SRMR = .090). Indeed, a few tasks show a negative item discrimination index (IDI), indicating that students seem to employ a different set of skills when they engage with these tasks; subsequent qualitative studies would be necessary to gain insight into the approaches actually employed. The correlations between mastered skills and achieved grades range from .32 to .71. These correlations identify critical learning objectives that must be specifically addressed during the learning process. Furthermore, the empirical findings can be used to rationalize a normative weighting of the different skills. Discussions between psychometricians and the involved teachers ensure that the findings provide meaningful feedback for teaching and learning processes striving for a well-fitting CA. The introduced approach therefore offers a practicable way to ensure competence-based constructs and models within higher education.
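As an illustration of where figures such as the overall fit and the skill-grade correlations could come from, the following sketch continues the example above. It assumes the fitted model mod from the previous sketch, a numeric vector grades, and a 0/1 matrix skill_profiles of individual mastery classifications derived from the model output; these are hypothetical placeholders, not the study's actual objects or its exact diagnostics.

fit <- modelfit.cor.din(mod)   # absolute model fit statistics of the CDM package
fit                            # printed output includes measures such as SRMSR

# correlation of each skill's mastery classification with the achieved grades
# (skill_profiles: students x skills matrix of 0/1 mastery classifications)
apply(skill_profiles, 2, function(skill) cor(skill, grades))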
References
Biggs, J. (2003). Aligning teaching and assessment to curriculum objectives. Imaginative Curriculum Project, LTSN Generic Centre.
Biggs, J., & Tang, C. (2011). Teaching for quality learning at university: What the student does (4th ed.). Maidenhead: Open University Press.
Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219-233. https://doi.org/10.1080/03075070600572132
European Higher Education Area (1999). Joint declaration of the European Ministers of Education of 19th June 1999. Bologna. http://www.ehea.info/media.ehea.info/file/Ministerial_conferences/02/8/1999_Bologna_Declaration_English_553028.pdf (accessed 26.01.2020)
George, A. C., Robitzsch, A., Kiefer, T., Gross, J., & Uenlue, A. (2016). The R package CDM for cognitive diagnosis models. Journal of Statistical Software, 74(2), 1-24. https://doi.org/10.18637/jss.v074.i02
Haertel, E. (1985). Construct validity and criterion-referenced testing. Review of Educational Research, 55(1), 23-46. https://doi.org/10.3102/00346543055001023
Nodine, T. R. (2016). How did we get here? A brief history of competency-based higher education in the United States. Competency-based Education, 1(1), 5-11. https://doi.org/10.1002/cbe2.1004
Penfield, R. D., & Camilli, G. (2007). Differential item functioning and item bias. In C. R. Rao & S. Sinharay (Eds.), Handbook of Statistics 26: Psychometrics (pp. 125-167). Amsterdam: Elsevier.
Schaper, N., Reis, O., Wildt, J., Horvath, E., & Bender, E. (2012). Fachgutachten zur Kompetenzorientierung in Studium und Lehre. https://www.hrk-nexus.de/fileadmin/redaktion/hrk-nexus/07-Downloads/07-02-Publikationen/fachgutachten_kompetenzorientierung.pdf (accessed 24.07.2018)
Schindler, C. (2015). Herausforderung Prüfen. Eine fallbasierte Untersuchung der Prüfungspraxis von Hochschullehrenden im Rahmen eines Qualitätsentwicklungsprogramms. München: Technische Universität München. (accessed 17.05.2018)
Wildt, J., & Wildt, B. (2011). Lernprozessorientiertes Prüfen im "Constructive Alignment". Ein Beitrag zur Förderung der Qualität von Hochschulbildung durch eine Weiterentwicklung des Prüfsystems. Neues Handbuch Hochschullehre, 50(2), 1-46.