Session Information
02 SES 03 B, Transitions: VET Skills and Competencies
Paper Session
Contribution
For many years, competency-based assessment has been recognized as a promising strategy for the development of assessment tools and systems in VET (Wolf, 2001). It favours performance-based methods over so-called "objective" tests with standardized scoring, the argument being that the former give higher priority to occupational validity than to curricular validity. There is, however, little consensus among VET researchers about which procedures serve this objective (Becker et al., 2010). Within this framework, researchers have pursued a number of strategies to propose and design new types of assessment tools. Most of them can be subsumed under the category "assessment for learning", but initiatives have also been taken to develop instruments for large-scale assessment and international comparative studies of competence in VET. In this paper we will discuss to what extent equivalence can be established in such projects. We will provide data from the feasibility study PISA-VET, or VET-LSA (Baethge & Arends, 2009), and the project KOMET (Rauner et al., 2009), both coordinated by German universities. Our focus here is on the construction of tasks and their theoretical and methodological foundation.
In research on measurement across cultures, a number of biases are listed as methodological and practical challenges when constructing instruments for valid international comparisons (Van de Vijver & Leung, 1997). Since equivalence is the inverse of bias, we can distinguish between different types of equivalence (Church, 2010). Conceptual equivalence refers to variation in the definition of a construct such as competence. Linguistic equivalence concerns the accuracy of instrument translations, whereas instrument equivalence refers to the degree of familiarity with item formats. Sample equivalence is crucial but often difficult to achieve. Administrative equivalence calls for provisions for standardising the test situation. Item equivalence can be seen as the opposite of differential item functioning (DIF), which occurs when individuals at about the same level of performance respond to an item in unexpected ways, possibly due to cultural norms or translation biases. This typology has mainly been used in cross-cultural research on attitudes, with questionnaires as the favoured instrument. Since our theoretical and methodological framing differs somewhat from this main track, we will use the distinctions above less rigorously. We will also add that rater equivalence is rarely addressed in the psychometric literature, mainly because scoring is automatic or semi-automatic, but it has to be considered seriously when ratings involve professional judgment of realistic, complex tasks.
The aim of VET-LSA was to identify comparable occupational profiles and learning outcomes at the end of initial VET. It also proposed guidelines for subsequent phases in the development of an assessment instrument. The item format should be a realistic task, carried out in a computer-simulated work environment. In order to strengthen the international comparison, generalized descriptions of occupations and work activities were selected from the US database O*NET and then validated for their relevance to VET and for national characteristics in four workshops with experts from the eight participating European countries. Competences, defined as internal conditions for performance (domains, declarative/procedural knowledge etc.), guided the construction of items and the selection of existing scales.
Compared with the VET-LSA project, which draws on standard work profiles and psychometric elements from different sources, the KOMET model is a construction based on theoretical perspectives and the use of "expert-trade workshops" (Becker et al., 2010). The output is authentic, open task assignments for each trade, together with rating protocols based on judgmental consensus and criterion-related guidance. The MECVET project, on which this paper is based, pilots the KOMET model in Norwegian VET.
Method
Expected Outcomes
References
Baethge, M. & Arends, L. (2009) Feasibility Study VET-LSA. A comparative analysis of occupational profiles and VET programmes in 8 European countries – International report. Federal Ministry of Education and Research, Germany: Vocational Training Research volume 8. Bielefeld: Bertelsmann Verlag.
Becker, M., Fischer, M. & Spöttl, G. (eds) (2010) Von der Arbeitsanalyse zur Diagnose beruflicher Kompetenzen: Methoden und methodologische Beiträge aus der Berufsbildungsforschung. Frankfurt am Main: Peter Lang.
Church, A.T. (2010) Measurement Issues in Cross-cultural Research. In Walford, G. et al. (eds) The Sage Handbook of Measurement. Los Angeles: Sage.
Hovdhaugen, E., Opheim, V., Sjaastad, J. & Sweetman, R. (2013) AHELO mulighetsstudie. Oppsummering av erfaringene ved å gjennomføre OECDs AHELO mulighetsstudie i Norge (in Norwegian with English summary). Oslo: NIFU report 11/2013.
Olsen, O.J. (2009) Feasibility Study VET-LSA. National report from Norway. Department of Sociology, University of Bergen, 13 May 2009.
Rauner, F., Haasler, B., Heinemann, L. & Grollmann, P. (2009) Messen beruflicher Kompetenzen, Band 1: Grundlagen und Konzeption des KOMET-Projektes. Berlin: LIT Verlag.
Rauner, F., Heinemann, L. & Maurer, A. (2011) COMET China: Implementation of learning areas: testing a competence model as a basis for test tasks and learning tasks (pilot project large-scale competence diagnostics in Beijing VET schools and VET colleges). University of Bremen: Working Paper of FG I:BB.
van de Vijver, F.J. & Leung, K. (1997) Methods and Data Analysis for Cross-cultural Research. Thousand Oaks, CA: Sage.
Wolf, A. (2001) Competence-based assessment. In Raven, J. & Stephenson, J. (eds) Competence in the Learning Society. New York: Peter Lang.