Session Information
09 SES 16 B, Assessment in Context: Perceptions, Culture, and Social Meaning
Paper Session
Contribution
Evaluation plays a fundamental role in socio-educational services, as it makes both results and processes visible while providing data that can potentially improve service quality (Stufflebeam & Kellaghan, 2003). The literature consistently attributes two primary functions to evaluation: accountability and learning. Evaluations driven by accountability aim to assess achieved results (Altieri, 2009), whereas those oriented toward learning seek to understand the mechanisms and processes that foster growth and learning (Stame, 2016). Although these two functions have often been seen in opposition to one another, it is now widely recognized that they are interconnected and can reinforce each other (Regeer, de Wildt-Liesveld, van Mierlo, & Bunders, 2016; Lumino & Gambardella, 2020).
Regardless of its primary rationale, evaluation in socio-educational services can focus on multiple aspects (Stufflebeam, Madaus & Kellaghan, 2012; Rossi, Lipsey & Freeman, 2004; Montalbetti, 2024). A key element, particularly in terms of sustainability and transferability, involves assessing beneficiaries' needs and satisfaction. Needs assessments, typically conducted in the early stages of service delivery, provide valuable data for designing interventions tailored to the target population. Satisfaction surveys, usually administered at service completion, capture beneficiaries' immediate reactions and levels of satisfaction. Additional evaluation criteria include effectiveness, defined as the extent to which objectives are met, and efficiency, which measures the relationship between achieved results and allocated resources.
However, limiting evaluation to these dimensions alone does not fully capture a central element of socio-educational services: the mechanisms and factors that drive specific outcomes (Pawson & Tilley, 1997). In this regard, process evaluation proves useful, as it aims to open the "black box" of service delivery by identifying both explicit and implicit educational theories (Rogers & Weiss, 2007). Focusing on this aspect can generate valuable insights for improving service provision and enhancing educators' professional development.
In recent years, increasing attention in Italy has been directed toward evaluating the impact of services, an especially challenging aspect to assess. Impact evaluation requires a long-term perspective, emphasizing broad and transformative changes within the local context (Orizio, 2024).
The elements discussed above can be assessed using different timelines and methods. Ex-ante evaluation is useful for exploring contexts, identifying needs and expectations, or anticipating trends. Evaluation can also accompany interventions as they unfold, focusing on emerging processes. Lastly, ex-post evaluation can be conducted immediately after the end of an intervention or after a medium-to-long-term interval, so as to capture the service's outcome chain (e.g., outcomes, impacts…).
Regarding data collection methods and strategies, evaluation largely draws on social research techniques. Commonly used tools include questionnaires, interviews, focus groups, and observational grids. However, evaluation should not be reduced to the choice of tools alone. Instead, data must be analyzed within a specific evaluative framework that informs decision-making rather than serving merely as a descriptive exercise.
Method
The dimensions of socio-educational service evaluation outlined in the previous section form the framework of this study, which seeks to explore how these elements manifest empirically. To this end, an observational study was conducted across 87 socio-educational services managed by social cooperatives operating in Brescia, a city in northern Italy, and its province (approximately 1,300,000 inhabitants in total). The study was guided by the following research questions:
- Is evaluation carried out in socio-educational services in this study context?
- What drives the decision to evaluate or not?
- What aspects are evaluated?
- Who is responsible for conducting evaluations?
- When is evaluation carried out?
- What methods and strategies are used in the evaluation process?
- How are the results of the evaluation used?
To collect the data, a semi-structured questionnaire was developed, covering three main areas: respondent profile, service characteristics, and evaluation practices. The questionnaire was distributed digitally to coordinators of five main types of socio-educational services: elderly care, disability services, mental health support, child and youth services, and programs addressing substance abuse and severe marginalization. The target population was identified through the Brescia territorial section of the Italian Confederation of Cooperatives, the primary association representing and safeguarding the cooperative movement in Italy. The questionnaire was administered during August-September 2024 via Google Forms. To maximize response rates, targeted follow-up phone calls were conducted.
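As a minimal sketch of how responses collected this way might be screened, assuming the Google Forms answers are exported as a CSV file (the file name and column label below are hypothetical, not the study's actual instrument):

```python
# Minimal sketch (not the study's actual pipeline): assumes the Google Forms
# responses were exported as a CSV; file name and column label are hypothetical.
import pandas as pd

N_CONTACTED = 87  # services contacted, as reported above

responses = pd.read_csv("questionnaire_responses.csv")

# Response rate against the 87 services contacted.
print(f"Response rate: {len(responses) / N_CONTACTED:.0%}")

# Distribution of respondents across the five service types.
print(responses["service_type"].value_counts(normalize=True).round(2))
```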
Expected Outcomes
A total of 74 coordinators responded to the survey (response rate: 85%), mainly women (73%), with 38% aged between 41 and 50. Approximately half of the respondents hold a bachelor's degree (51%), while about a quarter have a master's degree. The respondents are experienced professionals, with 41% having more than ten years of experience in coordination roles. They work primarily in semi-residential (59%) and residential (41%) services.

The collected data provide a comprehensive and detailed overview of evaluation practices in socio-educational services. Overall, slightly more than half of the services conduct regular evaluations (55%), about a quarter do so occasionally (28%), and a minority never engage in evaluation (16%). Among those who conduct evaluations (n=62), the most frequently assessed aspects include service effectiveness (97%), beneficiary needs (92%), satisfaction levels (90%), and efficiency (82%). Less attention is given to organizational performance (65%), service process analysis (56%), long-term impacts (52%), and staff performance (50%). On a scale from 1 (not at all) to 10 (very much), coordinators report conducting evaluations primarily to understand service performance (mean = 7.9), promote learning and improvement (mean = 7.8), and ensure accountability (mean = 7.6). Consistent with the literature, the functions of learning and accountability appear to be complementary.

Further analyses are currently underway to determine whether factors such as service type, size, longevity, and training initiatives influence evaluation practices. The working hypothesis is that larger, more structured services with dedicated evaluation training programs conduct more systematic and rigorous evaluations and use them more effectively to enhance service quality.
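A minimal sketch of the shape such an analysis could take, assuming respondent-level data: the column names and the toy rows below are placeholders, not the study's data, and the real analysis would use the full n = 74 dataset.

```python
# Illustrative sketch only: tests whether evaluation frequency is independent
# of service size, in line with the working hypothesis above. Column names
# and rows are placeholders, not the study's actual data.
import pandas as pd
from scipy.stats import chi2_contingency

toy = pd.DataFrame({
    "service_size": ["small", "small", "small", "large", "large", "large"],
    "evaluation":   ["never", "occasional", "never",
                     "regular", "regular", "occasional"],
})

# Cross-tabulate evaluation frequency by service size and test independence.
table = pd.crosstab(toy["service_size"], toy["evaluation"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

With cell counts as small as in this toy example, an exact test would be preferable; the sketch only illustrates the structure of the planned comparison.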
References
Altieri, L. (2009). Valutazione e partecipazione. Metodologia per una ricerca interattiva e negoziale. Milano: Franco Angeli.
Lumino, R., & Gambardella, D. (2020). Re-framing accountability and learning through evaluation: Insights from the Italian higher education evaluation system. Evaluation, 26(2), 147-165.
Montalbetti, K. (2024). La valutazione in campo educativo e formativo: logiche, scenari, esperienze. Milano: Vita & Pensiero.
Orizio, E. (2024). Navigare nella complessità: la valutazione di impatto nei contesti socio-educativi. Lecce: PensaMultiMedia.
Patton, M. Q. (2014). Qualitative research & evaluation methods: Integrating theory and practice (4th ed.). Thousand Oaks, CA: Sage Publications.
Pawson, R., & Tilley, N. (1997). Realistic evaluation. Thousand Oaks, CA, and London: Sage.
Regeer, B. J., de Wildt-Liesveld, R., van Mierlo, B., & Bunders, J. F. (2016). Exploring ways to reconcile accountability and learning in the evaluation of niche experiments. Evaluation, 22(1), 6-28.
Rogers, P. J., & Weiss, C. H. (2007). Theory-based evaluation: Reflections ten years on. New Directions for Evaluation, (114).
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage Publications.
Stame, N. (2016). Valutazione pluralista. Milano: Franco Angeli.
Stufflebeam, D. L., & Kellaghan, T. (2003). International handbook of educational evaluation. Dordrecht: Kluwer Academic Publishers.
Stufflebeam, D. L., Madaus, G. F., & Kellaghan, T. (2012). Evaluation models: Viewpoints on educational and human services evaluation (2nd ed.). Boston, Dordrecht, London: Kluwer Academic Publishers.