Session Information
02 SES 10 D, Competence Assessment: Methodological Issues And Approaches
Parallel Paper Session
Contribution
For many years, competency-based assessment has been recognized as a promising strategy for the development of assessment tools and systems in VET (Wolf, 2002). It favours the use of performance methods over so-called “objective” tests (standardized scoring), the argument being that the former give higher priority to occupational validity than to curricular validity. However, there is little consensus among VET researchers about the type of procedures that accord with such an objective (McDonald et al., 1994; Tillema, 2000; Rychen & Salganik, 2003; Haasler & Erpenbeck, 2008; Becker et al., 2010). In the present paper we explore these issues by reviewing two pilots in Norwegian VET that evaluated core elements of contrasting models for the development of large-scale assessment instruments. They take as their points of departure, on the one hand, an initiative called the VET-LSA (Baethge et al., 2006) and, on the other, the project COMET (Competence Development and Assessment in TVET; Rauner et al., 2009). Whereas the VET-LSA studies aimed at constructing a tool that would primarily support international comparisons, the COMET project is designed as a longitudinal study “focusing on the effects of different pedagogical approaches on the competence development of students and not so much on the description between test groups” (Rauner, 2009). However, this type of “competence diagnostics” of domain-specific cognitive abilities is also meant to provide the basis for comparative studies, and the present paper discusses preliminary experiences with piloting the COMET model in different trades of the Norwegian VET system. We use these as stepping stones for a discussion of different strategies for the occupational validation of competence assessment in VET.
In validating assessments, distinctions are often made between different types of validity: face validity, content validity, criterion-related validity (concurrent and predictive), and construct validity. Usually, other quality dimensions such as reliability, evidential support and fairness are added. Scholars like Messick (1994) argue that these forms could be collapsed into one or subsumed under construct validity. Others have asked for a differential weighting that takes into account the purpose of the assessment (Wiliams, 1998). We propose that these validity types can serve as heuristics for the design process when developing measurement instruments – from the framing of constructs to face validity and content definitions. Decisions about the validation of items and forms also have to be made against internal (other tests) or external (vocational performance) criteria. This structure is not strictly linear and should be understood as a recognition of the dilemmas that arise when trying to integrate the different concerns. For example, a low degree of face validity may reflect a lack of consensus among assessors and pose challenges for the mapping of content elements and dimensions. This framework for design is explored in the following studies.
Method
Expected Outcomes
References
Baethge, M. & Arends, L. (2009). Feasibility Study VET-LSA: A comparative analysis of occupational profiles and VET programmes in 8 European countries – International report. Vocational Training Research, Volume 8. Federal Ministry of Education and Research, Germany. Bielefeld: Bertelsmann Verlag.
Baethge, M. et al. (2006). PISA-VET: A feasibility study. Stuttgart: Franz Steiner.
Becker, M., Fischer, M. & Spöttl, G. (Eds.) (2010). Von der Arbeitsanalyse zur Diagnose beruflicher Kompetenzen: Methoden und methodologische Beiträge aus der Berufsbildungsforschung. Frankfurt am Main: Peter Lang.
Haasler, B. & Erpenbeck, J. (2008). Assessing vocational competences. In Rauner, F. & MacLean, R. (Eds.), Handbook of technical and vocational education and training research. Springer Media.
Lahn, L.C. (2010). Professional learning as epistemic trajectories. In Ludvigsen, S. & Säljö, R. (Eds.), Learning across sites: New tools, infrastructures and practices. Oxford: Pergamon.
Olsen, O.J. & Mikkelsen, S. (2009). Feasibility Study VET-LSA: National report from Norway. Department of Sociology, University of Bergen, 13 May 2009.
Rauner, F. et al. (2009b). Messen beruflicher Kompetenzen, Band II: Ergebnisse KOMET 2008. Berlin: LIT Verlag.
Rauner, F., Haasler, B., Heinemann, L. & Grollmann, P. (2009a). Messen beruflicher Kompetenzen, Band I: Grundlagen und Konzeption des KOMET-Projektes. Berlin: LIT Verlag.
Spöttl, G. (2008). Expert-worker-workshops. In Rauner, F. & MacLean, R. (Eds.), Handbook of technical and vocational education and training research. Springer Media.