Session Information
11 SES 09 A, School Performance and Quality Models
Paper Session
Contribution
School performance feedback (SPF) systems present educational professionals with student achievement data to support self-evaluation and data-based decision making (Schildkamp & Teddlie, 2008; Visscher & Coe, 2003). However, to arrive at (formative) conclusions based on SPF, recipients need to make sense of the (summative) data they are presented with (Schildkamp, 2019; van der Kleij et al., 2015). Attribution, or reflecting on the causes of (learning) outcomes, is an integral part of this sensemaking process (Coburn & Turner, 2011). In line with the basic propositions of attribution theory (Weiner, 1985, 2010), the nature of educators’ causal explanations for student outcomes has been found to affect their emotions and subsequent (instructional) behavior (Wang & Hall, 2018).
Research shows that educational professionals often struggle to pinpoint the factors that (may) have led to certain outcomes, particularly when those outcomes are unfavorable (Verhaeghe et al., 2010). Moreover, contrary to the ideals of data-based decision making, student performance is often attributed to external causes such as student characteristics, rather than to matters internal to educational professionals, such as instruction (Evans et al., 2019; Schildkamp et al., 2016). This is especially apparent in cases of student failure (Wang & Hall, 2018). Consequently, it can be difficult to formulate productive decisions and constructive actions based on SPF (Schildkamp et al., 2016).
In the present study, we examine educational professionals’ causal explanations for results presented in an SPF report from a low-stakes national assessment (NA) in Flanders, Belgium. Like typical external standardized assessments, the Flemish NA relates school performance to standards and to the performance of reference groups (AERA et al., 2014; Visscher & Coe, 2003). We investigate educational professionals’ attributions for these data, with a particular interest in the locus of causality of the attributions they make. To what extent is the SPF interpreted introspectively (i.e., with regard to aspects of school policy and instructional practice that can be improved or sustained), and to what extent is school performance ascribed to external factors (such as aspects of the test itself, or input from students)?
Our review of the literature suggests that the perceptions of school leaders remain somewhat underexamined in studies on attribution in educational data use, even though SPF is intended to inform both school policy and instructional practice. We therefore focus not only on teachers’ attributions, but also on the causal explanations made by school leaders. Furthermore, we examine causal explanations both for outcomes perceived as favorable and for those perceived as unfavorable. Perhaps in line with the very term “diagnosis”, the attributions and attributional processes discussed in the empirical literature predominantly concern explanations for student failure (Van Gasse & Mol, 2021; Verhaeghe et al., 2010) and far less for student success. However, school improvement is not only a narrative of identifying problems, but also one of fostering what works.
In summary, the three research questions we address are:
RQ1 To which internal and external factors do teachers and school leaders attribute their school’s performance on a national assessment?
RQ2 Do attributions differ according to the attributor’s work role?
RQ3 Do attributions differ according to the perceived favorability of the result?
We adopt a qualitative approach and make use of authentic educational data, because our aim is to illuminate how individuals and groups make meaning of something they experience (here, their schools’ SPF) from their own perspectives (Savin-Baden & Major, 2013).
Method
Data were collected in Flanders, the Dutch-speaking region of Belgium. Periodically, national assessments (NA) are organized to monitor whether attainment targets are met at the system level and to explore whether school-, class-, or student-level variables explain differences in achievement. These NA are conducted in representative samples of schools, which afterwards receive a personalized SPF report. School results are never publicized, nor do outcomes carry any formal consequences for participants.

Participants for the present study were recruited from the Flemish primary schools that had taken part in the May 2019 NA of People and Society (formerly a subdomain of the World Studies curriculum) in the sixth grade. In pursuit of maximum variation (Savin-Baden & Major, 2013), schools were categorized into four profiles based on aspects of their criterion-referenced and norm-referenced results on one focal test: Spatial use, traffic and mobility. In the autumn of 2020, approximately one week after they had received the SPF report, a random selection of schools within each profile was approached. In total, semi-structured interviews were conducted with 22 participants (11 school leaders and 11 sixth-grade teachers) from 13 schools. The interview protocol included open-ended questions about participants’ appraisal of the schools’ results and about how they causally explained these results. These questions followed a think-aloud section in which participants were asked to describe and interpret the tables and graphs in their schools’ SPF report. Due to societal restrictions relating to the COVID-19 pandemic, the interviews were conducted online using video-conferencing software.

Recordings were transcribed verbatim and the data were coded in NVivo. Framework analysis served as the overall analytical method, as it is suited to both organizing and interpreting data, allows for a combination of inductive and deductive techniques, and facilitates the development of matrices to condense findings and explore patterns (Gale et al., 2013). In order to identify trends, the qualitative interview data were also ‘quantitized’ (Sandelowski et al., 2009).
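To make the quantitizing step concrete, the following minimal Python sketch shows one way coded interview segments could be tallied into a role-by-code frequency matrix of the kind used to explore patterns across work roles (RQ2). The roles, codes, and records shown are illustrative placeholders, not the study’s actual coding scheme or data.

# Hypothetical sketch of 'quantitizing' coded interview data
# (Sandelowski et al., 2009): counting how often each attribution
# code occurs per work role. Codes and records are illustrative,
# not the study's actual coding scheme or data.
from collections import Counter

# Each tuple: (participant work role, attribution code assigned during coding)
coded_segments = [
    ("school leader", "school: curricular line"),
    ("school leader", "class: teacher professionalism"),
    ("teacher", "student: cognitive capacity"),
    ("teacher", "assessment: test design"),
    ("teacher", "student: cognitive capacity"),
]

# Tally code frequencies into a role-by-code matrix
matrix = {}
for role, code in coded_segments:
    matrix.setdefault(role, Counter())[code] += 1

# Report counts and within-role proportions to surface trends
for role, counts in matrix.items():
    total = sum(counts.values())
    for code, n in counts.most_common():
        print(f"{role:13} | {code:32} | {n} ({n / total:.0%})")

A matrix of this kind mirrors the framework matrices described by Gale et al. (2013) and would allow comparisons such as how often school leaders versus teachers invoke external factors.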
Expected Outcomes
Participants attribute their schools’ performance to a wide array of factors at the level of the school, the classroom, the student, and the assessment itself. School-level and class-level factors can be categorized as internal or external, depending on the source of the attribution: school leaders, for instance, reflect on factors such as teacher professionalism, a factor internal to teachers but external to themselves. Overall, school-level factors (such as the curricular line for the subject that was tested) and student-level factors (such as pupils’ cognitive capacity) are invoked most frequently, especially by school leaders and teachers, respectively.

Throughout the dataset, external attributions dominate. We might relate this to educational professionals’ sense of professional responsibility: how far does my responsibility for student outcomes extend? Overall, we find few differences in attributional patterns between participants from schools that scored well and those that scored more poorly. However, reservations and concerns about the design and the conditions of the assessment (an external factor) are voiced primarily to explain negative results.

Most participants mention a whole range of factors when making causal ascriptions for their schools’ results in the SPF report. This suggests that educational professionals acknowledge that learning outcomes are the product of different building blocks, but it also helps explain why it is not easy or straightforward to formulate an unambiguous analysis and an actionable diagnosis based on SPF. Finally, the finding that teachers and school leaders (even within the same school) emphasize different factors to interpret the same outcomes illustrates the importance of collective sensemaking in piecing together a complete story. These insights are relevant not only for research on data-informed decision making in schools, but also for educational professionals themselves, for those who train them, and for those who design, offer, and mandate assessments.
References
AERA, APA, & NCME. (2014). Standards for educational and psychological testing. American Educational Research Association.
Coburn, C. E., & Turner, E. O. (2011). Research on data use: A framework and analysis. Measurement: Interdisciplinary Research and Perspectives, 9(4), 173–206.
Evans, M., Teasdale, R. M., Gannon-Slater, N., La Londe, P. G., Crenshaw, H. L., Greene, J. C., & Schwandt, T. A. (2019). How did that happen? Teachers’ explanations for low test scores. Teachers College Record, 121(2), 1–40.
Gale, N. K., Heath, G., Cameron, E., Rashid, S., & Redwood, S. (2013). Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Medical Research Methodology, 13(1), 117.
Sandelowski, M., Voils, C. I., & Knafl, G. (2009). On quantitizing. Journal of Mixed Methods Research, 3(3), 208–222.
Savin-Baden, M., & Major, C. H. (2013). Qualitative research: The essential guide to theory and practice. Routledge.
Schildkamp, K. (2019). Data-based decision-making for school improvement: Research insights and gaps. Educational Research, 61(3), 257–273.
Schildkamp, K., Poortman, C. L., & Handelzalts, A. (2016). Data teams for school improvement. School Effectiveness and School Improvement, 27(2), 228–254.
Schildkamp, K., & Teddlie, C. (2008). School performance feedback systems in the USA and in the Netherlands: A comparison. Educational Research and Evaluation, 14(3), 255–282.
van der Kleij, F. M., Vermeulen, J. A., Schildkamp, K., & Eggen, T. J. H. M. (2015). Integrating data-based decision making, assessment for learning and diagnostic testing in formative assessment. Assessment in Education: Principles, Policy & Practice, 22(3), 324–343.
Van Gasse, R., & Mol, M. (2021). Student guidance decisions at team meetings: Do teachers use data for rational decision making? Studia Paedagogica, 26(4), 99–117.
Verhaeghe, G., Vanhoof, J., Valcke, M., & van Petegem, P. (2010). Using school performance feedback: Perceptions of primary school principals. School Effectiveness and School Improvement, 21(2), 167–188.
Visscher, A. J., & Coe, R. (2003). School performance feedback systems: Conceptualisation, analysis, and reflection. School Effectiveness and School Improvement, 14(3), 321–349.
Wang, H., & Hall, N. C. (2018). A systematic review of teachers’ causal attributions: Prevalence, correlates, and consequences. Frontiers in Psychology, 9, 1–22.
Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92(4), 548–573.
Weiner, B. (2010). The development of an attribution-based theory of motivation: A history of ideas. Educational Psychologist, 45(1), 28–36.