External School Evaluation As a Tool for School Development
Author(s):
Anne Berit Emstad (presenting / submitting), Trond Buland
Conference:
ECER 2016
Format:
Paper

Session Information

09 SES 05 C, External School Evaluations and School Self-evaluations

Paper Session

Time:
2016-08-24
13:30-15:00
Room:
NM-F107
Chair:
Jana Poláchová Vaštatková

Contribution

School evaluation can provide schools with guidelines and parameters to follow in order to make positive changes that facilitate school development and student learning (Dahler-Larsen, 2006). This paper is the result of an analysis of External School Evaluation (ESE), an evaluation method initiated by the Norwegian Education Directorate as a tool for school development. ESE is an evaluation of school practice in an area chosen by the school itself, after the school leaders have carried out an analysis of its development needs. Two external evaluators conduct the assessment. They prepare the work by developing a Future Image: a schema that describes signs of good practice in the chosen area for the school in question. The evaluators visit the school for three to four days, collect information and data in line with the Future Image, analyse the information, and deliver an evaluation report on their last day. The feedback the school receives from this process should contribute to its quality development. Some schools receive help from a national corps of supervisors organised by the Directorate. Where these supervisors are associated with ESE, they contribute knowledge, experience, and advice to schools and school authorities, and they are able to follow up after the evaluation and help the school begin its development work. In this paper we examine:

  1. How does ESE contribute to school development?
  2. How do external supervisors contribute?

The Committee for Quality in Primary and Secondary Education in Norway (2003) defined three areas of quality: results, structure, and process. The quality of results describes the desired outcomes of educational activities, that is, students' learning in a broad sense (their knowledge, skills, and attitudes). Structural quality describes the external presuppositions: the establishment's organisation and its available resources. Process quality refers to the internal activities of the establishment: the work on education itself.

ESE aims to enhance the quality of a school, and this definition of quality formed the basis for the interview guides used to gather knowledge about how the different actors experienced ESE's contribution to increased quality in process, structure, and results in schools. In the analysis and interpretation of our data, we have benefited from the so-called actor-network approach, which derives primarily from social studies of science and technology (Latour, 1987), but also from studies of the implementation of public policy (Buland, 1996). This perspective provides a good tool for understanding how actors build networks consisting of both human and non-human entities and actors in order to realise their goals. This, in turn, gives us a good foundation for understanding the importance of ESE in concrete school development. The actor-network approach (Latour, 1987) operates with the concept of non-human actors and analyses and explains change as the result of actors' construction of heterogeneous actor-networks around scenarios or images of reality. In relation to the school, such non-human actors can be regulations, guidelines, curricula, strategic plans, work plans, textbooks, and research results, among others. In order to realise a project, actors develop such a network around a particular scenario, story, or narrative about a desired reality or future, about how to get there, and about how the school's different stakeholders can contribute to achieving their common goals. ESE and the process report can therefore be understood as non-human actors: allies in the effort to develop the school in a chosen direction. This underlines the fact that, to follow Prior (2011), documents do things as well as contain things, and ESE becomes an instrument for local action and change.

Method

The evaluation of ESE was based on a variety of data collection methods, including surveys, case studies, and four in-depth interviews. This methodological triangulation ensures that information and data from the evaluation are supported by multiple sources. The qualitative data sources provided in-depth knowledge, while the quantitative data describe the extent of the investigated phenomena. Two surveys were conducted. One targeted schools that had undertaken government-initiated ESE between 2011 and 2014; the other targeted the external evaluators. The Education Directorate provided information about the schools and the evaluators. Thirty-five of 80 schools completed the survey (44%), as did 31 of 47 evaluators (66%). The surveys were administered with Enalyzer, a system for data collection, processing, and reporting. The quantitative data were used to strengthen the qualitative data by giving us breadth of information, while the qualitative data contributed more comprehensive, in-depth knowledge about ESE. The main data were collected through the case study, which involved four schools: two were followed and observed during the process, and two had undertaken ESE two years earlier (in 2013). Two of the four schools had supervisors appointed by the Directorate who participated in the process and in the work that followed. The purpose of the case study was to gain knowledge about the whole process, from the beginning, when the evaluators form the Future Image, to the end, when the schools receive the evaluation report and follow it up. The goal was to see what kind of results ESE can produce, both with and without supervisors. The case study was based on observation of the evaluators' preparation and of the evaluation week, and on document studies and interviews with teachers and school leaders in the schools that had already undertaken ESE. In the analysis, we carried out a general categorisation or conceptualisation, closely related to the key questions and the data, which serves to emphasise the central features of the cases (Thagaard, 2006; Postholm, 2010). In addition to the case study, we conducted four in-depth interviews: two with employees of the Norwegian Education Directorate working with ESE, and two with municipal coordinators of ongoing ESE processes.

Expected Outcomes

ESE appears to be, potentially, a very strong policy instrument. Findings from both the surveys and the case studies indicate that schools perceive ESE as leading to changes in process and structure, whereas changes in student results are experienced to a lesser degree. The model's orientation towards facts and its similarity to research establish an indisputable basis for the school's future work. The model itself provides legitimacy; it has been tried out in a number of schools, and the schools are largely satisfied with the results. This, however, requires that the principal and others who take the initiative for the evaluation manage to "sell" it to the teaching staff and ensure follow-up after the evaluation. We have used the terms scenarios and narratives to illustrate this. Our findings indicate that the supervisors have had an important role to play in the aftermath of the evaluation, because the follow-up is a difficult process. It requires that the supervisors possess expertise, especially in analysis and the ability to theorise. If the results are perceived as relevant and appropriate by the school management, they will have great significance for the further development of the school. This also gives the external evaluators a large responsibility with respect to the implementation, analysis, and presentation of their results. An evaluator has a high degree of influence and power, and that role and the model therefore require that these assessments be treated with caution. We believe that the majority of evaluators are aware of this responsibility. Nevertheless, we also believe that ethical, methodological, and technical choices are sometimes made without a high degree of reflection, or at least reflection that is explicitly expressed. We therefore question the methodology and expertise of the external evaluators in this area.

References

Argyris, C., & Schön, D. A. (1978). Organizational learning: A theory of action perspective. Reading, Mass.: Addi-son-Wesley. Randi Boelskifte Skovhus & Rie Thomsen (2015): Popular problems, British Journal of Guidance & Counselling, Buland, T. (1996). Den store planen. Norges satsing på informasjonsteknologi 1987-90, avhandling levert til vurdering for graden Dr.polit. ved Universitetet i Trondheim, Trondheim: Senter for teknologi og samfunn, NTNU Dahler-Larsen, P. (2006). Evalueringskultur: Et begreb bliver til. Odense: Syddansk Universitetsforlag. Earl, L.M., Katz, S., & Ben Jaafar, S. (2009). Building and connecting learning communities: The power of net-works for school improvement. Thousand Oaks, California: Corwin Press. Elmore, R. (2008). Leadership as the practice of improvement. I B. Pont, D. Nusche, & D. Hopkins, Improving school leadership, volume 2: Case studies on system leadership (s. 37–67). London: OECD. Emstad, A.B. (2011). The principals role in the post evaluation process. How does the principal engage in the work carried out after the school evaluation? Educational Assessment, Evaluation and Acoountability. 23 (4), s.271-288 Emstad, A. B. (2012). Rektors engasjement i arbeidet med oppfølging av skolevurdering: En kvalitativ kasusstudie av hvordan seks norske barneskoler har brukt skolevurdering i sitt arbeid med forbedring av skolen som læringsarena. Trondheim: NTNU Emstad, A.B., & Robinson, V.M.J. (2011) The role of eadership in evaluation utilization: Cases from Norwegian primary schools. Nordic Studies in Edication, 31 (4), 245-247 Ertsås, T., & Irgens, E.J. (2012). Teoriens betydning for profesjonell yrkesutøvelse. I M.B. Fleischer, D., & Christie, C. (2009). Evaluation Use: Results From a servey of US American Evaluation Association Members. American Journal of Evaluation, 30(2), 158–175. Forss, K., Cracknell, B., & Samset, K. (1994). Can evaluation help an organization to learn? Evaluation review, 18(5), 574 - 591. Sage publication Goddard, D., & Leask, M. (1992). The search for quality: Planning for improvement and managing change. Lon-don: Paul Chapman. Mathews, D. (2010). Improving learning through whole-school evaluation: moving towards a model of internal evaluation in Irish post-primary schools. Phd avhandling, National university of Irland, Maynooth. Nusche, D., Earl, L., Maxwell, W., & Shewbridge, C. (2011). OECD reviews of evaluation and assessment in edu-cation. OECD-vurdering av norsk utdanningspolitikk. Oslo: Aschehoug. Timperley, H. (2008). Teacher professional learning and development. Brussels: The International Academy of Education. Utdanningsdirektoratet (2015). TTegn på god prakiss. Kom igang med skoletuvikling Oslo: Forfatteren

Author Information

Anne Berit Emstad (presenting / submitting)
NTNU
Department of Teacher Education
Trondheim, Norway
