Session Information
Session 4C, New principles and tools for the evaluation of VET systems in the context of current changes and trends
Symposium
Time:
2003-09-18
13:00-14:30
Room:
Chair:
Carl James
Discussant:
Ludger Deitmer
Contribution
Evaluation is becoming an increasingly important activity at different levels within European VET systems. It is being emphasised by policy makers, funding bodies, strategists and practitioners. The cycle of innovating, piloting, evaluating and refining, together with dissemination of the process and its outcomes, is a widely accepted model of development. However, the so-called information explosion of the last decade has meant that the rate of change has increased exponentially, and that new knowledge is generated in parallel with the application and trialling of existing knowledge. It is likely that systematising and formalising the 'natural' processes of reflection and assessment under the banner of evaluation (which typically comes at the end of the innovation cycle) ensures that they are not short-circuited under the pressure to move on. It is a way of ensuring that the reflective processes, which may be slow and cumbersome, are speeded up so that they do not fall behind the rapidly accelerating processes of knowledge creation. That said, the supply of knowledge, and the speed of knowledge generation, about the evaluation process itself has lagged behind the demand for its application. As a science, or an area of discrete study, evaluation is in its infancy. Its practitioners are drawn from a wide range of disciplines (social policy, social administration, psychology, management). This diversity of background disciplines is both its strength and its weakness. On the one hand, emerging theories of evaluation are eclectic and novel. On the other hand, knowledge transfer is low, and redeployment and adaptation of evaluation theory and practice outside the parent discipline is not commonplace. The inevitable consequence is that the general level of knowledge about evaluation is low although the level of evaluation activity is high. The majority of people engaged in practitioner-level evaluation generate practices based on notions of objectivity, input-output models and a concept of evaluation as feedback, without challenging whether those assumptions are necessarily the only valid starting points.
The symposium brings together researchers who are collaborating through the Leonardo da Vinci CERN/EVAL network to develop theory and practice in the evaluation of VET and lifelong learning. The symposium uses different methodological approaches involving theory, field research, case study and live demonstration of new technological tools which support the evaluation process. The papers address issues facing evaluators at different levels within VET systems, from policy learning and changing systems to the evaluation of specific educational programmes. This is discussed within the broadening concept of VET as lifelong learning and non-formal learning, and the role that evaluation can play in the process of change and development.
Paper 1: Change Politics and the Use of Evaluation in Danish and Dutch VET
Presenter: Loek Nieuwenhuis (Stoas Research)
Joint Author: Hanne Shapiro (DTI, Denmark)
Abstract
European states are seeking to flexibilise their VET systems. Systems' change is complex and chaotic because of its multi-layer, multi-actor and multi-purpose character. Social systems are embedded: VET sits between the economy, education and social affairs. Changing complex social systems requires coherent, persistent and consistent political action at all levels of the system. The complexity of systems' change makes simple, single evaluation useless and impossible.
Evaluation involves balancing between inside and outside; evaluators are co-actors in the system. Sanderson (2000) argues for evaluation as policy learning, where evaluation creates a relatively de-politicised setting for a rational discourse. This paper uses three perspectives to discuss the evaluation of VET systems' change. The first perspective is the VET approach, looking at goal-orientated versus development-orientated, and instructive versus constructive methods. The second perspective is the systems' innovation approach, which looks at systems' layers, consistency and interactivity, and change concepts and models. The third perspective is evaluation policy, which looks at evaluation as policy learning versus top-down 'stick to hit'. These perspectives form a framework of analysis used to describe two cases of systems change in the Netherlands and Denmark. The paper finishes with some conclusions and developmental views as to how evaluation is used in systems' change processes.
Paper 2: Evaluation Cultures
Presenter: Prof. Nikitas Patiniotis, University of Patras
Presenter: Dr Eduardo Figueira, ACADEMUS LDA - Consultadoria, Investigação e Formação
Abstract
Evaluation is becoming increasingly important in European policy and development. However, there are different understandings of evaluation in each EU country. In some countries, evaluation is an institutionalised procedure within VET systems; in others it is much less recognised. The northern European countries seem to be moving at a faster pace than the southern European countries in developing evaluation as a profession, with training in evaluation techniques and electronic tools to assist the evaluation process. This paper looks at how VET is evaluated in eight different EU countries and asks: can we speak about an evaluation culture, or at least an evolving evaluation culture? What is meant by "evaluation" in each country? Is there a sense of evaluation as an institution? Are there institutional evaluation systems in place? Are these comparable across different EU countries? Who does the evaluation and who pays for it? Do factors and criteria change according to the specific target group? What are the implications for European VET programmes? The basis for this paper has arisen from discussions between VET practitioners, evaluators and researchers on the CERN network. The paper presents further results and development of work over the past year. A case study illustration is provided, focusing on evaluation culture in Portugal.
Paper 3: Benchmarking and Performance Indicators in Vocational Education and Training
Presenter: John Konrad, Leeds Metropolitan University
Joint Author: Jenny Hughes, CRED, Wales
Abstract
This proposal reflects the work of the CERN project in the policy context of recent approaches developed by the European Commission, such as those outlined at the Danish Presidency Conference 2002, which looked at quality in educational systems. CERN has identified a number of issues in this field, principally inter-state and inter-cultural comparability, in particular with reference to non-formal education, training in the workplace and processes of lifelong learning. This paper looks at benchmarking and performance indicators and how they relate to processes of development planning for lifelong learning provision. The paper makes use of work by Messner and Ruhl concerning a model for the application of performance indicators. It also looks at the issues involved in engaging SMEs in learning.
The implications of this analysis for the management and evaluation of VET will be presented and proposals made for further research.
Paper 4: Case Study in Applying a Computer-based Evaluation Model to an Educational Programme
Presenter: Brian Dillon, Nexus Research Co-operative
Joint Author: Dave Slater, Nexus Research Co-operative
Abstract
The EVAL project has carried out research to develop ICT-based solutions for use by evaluators and project managers. This paper includes an electronic demonstration of the Nexus model, which can be used to evaluate programmes by enabling the individual projects within a programme to conduct self-evaluation. The model is divided into four areas: Operational Environment, Project Environment, Outputs and Impact. Each of these areas comes with a series of electronic forms to capture information, and the forms can be adapted to suit different types of VET programme. The model takes a developmental approach to evaluation which involves project staff in an ongoing cycle of dialogue, learning and change. It provides a new process for evaluation which actively relates achievements back to the contextual problems the programme is addressing. In this way, the use of prescriptive performance indicators is reduced and the real added value of the programme can be assessed. The paper will identify how the model can be mainstreamed for use by different European VET programmes and projects.
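As a purely illustrative aid, the sketch below shows one way the four-area, form-based structure described in this abstract could be represented in code. Only the four area names are taken from the abstract; everything else (the class names ProjectSelfEvaluation and ProgrammeEvaluation, the field names, and the aggregation helper) is a hypothetical assumption for illustration and not the actual Nexus model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# The four areas named in the abstract.
AREAS = ("Operational Environment", "Project Environment", "Outputs", "Impact")


@dataclass
class EvaluationForm:
    """One adaptable electronic form: question prompts mapped to responses."""
    area: str
    responses: Dict[str, str] = field(default_factory=dict)


@dataclass
class ProjectSelfEvaluation:
    """Self-evaluation for a single project, holding one form per area."""
    project_name: str
    forms: Dict[str, EvaluationForm] = field(default_factory=dict)

    def record(self, area: str, question: str, answer: str) -> None:
        # Reject anything outside the four-area model.
        if area not in AREAS:
            raise ValueError(f"Unknown area: {area}")
        self.forms.setdefault(area, EvaluationForm(area)).responses[question] = answer


@dataclass
class ProgrammeEvaluation:
    """Collects project self-evaluations for programme-level review."""
    projects: List[ProjectSelfEvaluation] = field(default_factory=list)

    def responses_by_area(self, area: str) -> Dict[str, Dict[str, str]]:
        # Gather every project's responses for one area of the model.
        return {
            p.project_name: p.forms[area].responses
            for p in self.projects
            if area in p.forms
        }


# Illustrative usage: one project records an answer, the programme view reads it back.
project = ProjectSelfEvaluation("Pilot project A")
project.record("Outputs", "What was produced in this cycle?", "Two training modules")
programme = ProgrammeEvaluation(projects=[project])
print(programme.responses_by_area("Outputs"))
```

The point of the sketch is simply the relationship described in the abstract: each project fills in adaptable forms under the four areas, and a programme-level view aggregates those self-evaluations for dialogue and review.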