Session Information
12 SES 12 A, Literature, Reviews and AI
Paper Session
Contribution
Research syntheses have the potential to reliably bundle research findings and thus capture the current state of research in an evidence-based manner. The aim is to better validate individual results through synthesis and to uncover contradictory statements. However, the question arises to what extent this approach, originally developed for medical research, can be transferred to highly heterogeneous, interdisciplinary and international research contexts such as educational research. This contribution investigates the analyses in research syntheses in educational research, based on an examination of 20 critical reviews. These syntheses examine research questions from the field of digital education across various educational sectors. As critical reviews, they address rather broad questions, and the quality assessment of the primary literature is subject to less strict standards (Grant & Booth, 2009; Sutton et al., 2019).
The analysis step in research synthesis is described in various handbooks (e.g., Cooper et al., 2019; Higgins et al., 2023) and discussed in relation to specific requirements of the social sciences in general (e.g., Petticrew & Roberts, 2006) and of educational science in particular (e.g., Gough et al., 2017). The main aim is to combine data from all included studies in order to draw conclusions from a body of evidence (McKenzie et al., 2023). Quality assessments evaluate both the relevance of the included studies in relation to the original review question and the quality of the methodology applied in the studies under consideration (Liabo et al., 2017). Depending on the research design of the included studies, this may concern, for example, questions of validity and generalizability, but also content-related questions in the case of qualitative studies.
What exactly characterizes the analysis in critical reviews? In addition to their narrative format, Sutton et al. (2019, p. 206) emphasize the possibility of integrating different types of research (e.g., quantitative and qualitative research) and the ability to respond to current topics if this aligns with the goal of the review. In the reviews under consideration, the focus was likewise less on the one-to-one comparability of study results and more on the critical discussion of these results and their contextualization within a larger research field. Wright and Michailova (2023) call for critical reviews to focus more on content, moving away from a pure presentation of results towards an explicit critical discussion of the literature examined, including a clear positioning of the reviews' authors. It is precisely here that they see the added value of a critical review compared to a systematic review. Evidence presentation nevertheless remains crucial as a basis for critical discussion. In addition, Paré et al. (2015) emphasize that the primary advantage of critical reviews lies in their capacity to highlight problems and contradictions in research as well as inadequate research contexts, thus advancing the development of the research field. In return, however, critical reviews must accept the weakness that they often contain subjective assessments (Paré et al., 2015, p. 189).
This intermediate position of critical reviews between more traditional and more systematic approaches (cf. Sutton et al., 2019), and consequently the opportunities and limitations of the analysis in our project, are investigated in more detail below. To this end, we first examine retrospectively how the evidence from the included studies is presented and discussed in the critical review format. This includes, for example, how often results were confirmed multiple times and whether, and how often, contradictions were discovered. On this basis, we aim to draw conclusions about the reliability of the analyses and their impact on the discussion of the research results.
Method
In sum, the 20 critical reviews, published between 2020 and 2023 and each addressing a specific research question in the field of digital education, are based on an examination of 48,806 international scientific publications. These studies were screened in two steps (title and abstract screening; full-text screening), and a coding scheme was applied. Overall, 570 studies met the formal and content-related inclusion criteria and were included in the final analyses. All included studies were categorized according to specific factors such as their countries of origin in Europe or on a global scale. In the individual reviews, the number of included studies ranges from 6 to 122, with both ends being exceptions: most reviews included between 10 and 40 studies. Within the research syntheses, findings are presented in a narrative text format that discusses the relevant results of the included studies and draws conclusions at a meta-level. These narrative passages are usually accompanied by tables indicating the nature and main characteristics of the included studies (e.g., the country context). To examine the analyses from all 20 reviews regarding their characteristics and affordances, we applied a quantifying approach. While reading all sections of all reviews in which the selected studies and their respective results are presented, we noted all instances of multiple affirmation of a finding in a table document. Mentions of contradicting evidence were also noted. The recorded statements included the number of referenced studies reporting similar results (twofold verification, threefold, etc.) as well as the respective page number in the review. Additionally, we extracted the research questions, the total number of selected studies, and peculiarities or noteworthy features of the presentation or the presented studies. This process was conducted independently by three different persons, following the four-eyes principle.
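The extraction table described above can be thought of as a list of records per noted instance of multiple affirmation. A minimal sketch of how such records could be tallied; all identifiers and values below are invented for illustration and are not data from the study:

```python
from collections import Counter

# Hypothetical extraction records, one per noted instance of multiple
# affirmation: (review_id, page_in_review, number_of_concurring_studies).
records = [
    ("R01", 12, 2), ("R01", 14, 3),
    ("R02", 7, 2), ("R02", 9, 2),
    ("R03", 21, 4),
]

# How often a finding was affirmed twofold, threefold, and so on
multiplicity = Counter(n for _, _, n in records)

# Number of multiple-affirmation instances recorded per review
per_review = Counter(review_id for review_id, _, _ in records)
```

Such tallies make it straightforward to compare, per review, the number of affirmation instances against the number of included studies.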
Intercoder agreement was computed for every pairing, resulting in a kappa according to Brennan and Prediger (1981) of κ = 0.72 between coders A and B and κ = 0.79 between coders B and C. Agreement between coders nevertheless varied considerably across some of the reviews. This may be due to the few multiple affirmations that could be identified in individual reviews and to the narrative form of the findings, which posed challenges for interpretation (see below). All individually recorded instances of multiple verification and contradiction underwent subsequent communicative validation, through which all discrepancies between raters could be resolved.
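The Brennan-Prediger coefficient fixes chance agreement at 1/q for q coding categories, i.e. κ = (p_o − 1/q) / (1 − 1/q), where p_o is the observed proportion of agreement. A minimal sketch of this computation (function and variable names are ours, not from the study):

```python
def brennan_prediger_kappa(ratings_a, ratings_b, n_categories):
    """Brennan-Prediger kappa: chance agreement fixed at 1/q categories."""
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("rating lists must be non-empty and equal length")
    # Observed proportion of items on which both coders agree
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    # Expected agreement under uniform chance
    p_e = 1.0 / n_categories
    return (p_o - p_e) / (1 - p_e)
```

Unlike Cohen's kappa, this coefficient does not depend on the coders' marginal distributions, which makes it robust when some codes are rarely used, as with the few multiple affirmations in individual reviews.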
Expected Outcomes
We encountered several challenges in classifying the nature and weight of evidence: (1) We regularly encountered instances where similar, but not identical, evidence was presented. This is both a result of reviewing a heterogeneous body of research and a consequence of the narrative form of presenting evidence: findings were predominantly presented in narrative form, with varying degrees of interpretation by the authors, rather than as quoted statistical results. We expected a link between the number of selected studies, the number of multiple affirmations and the ‘weight’ of affirmations; however, we could not establish such an interdependence. (2) While multiple affirmations of evidence were common, this was not the case for contradictory evidence. Overall, we found only 15 instances in nine reviews where contradictory evidence was analyzed. This observation applies regardless of how open-ended the respective research questions had been phrased. We assume that this may be due to the heterogeneity of the field and the difficulty of comparing single study results. (3) As the reviews tended towards open-ended research questions, they also tended to cover larger topics that the authors split into different aspects, which showed less overlap than one would assume. This increased thematic multiplicity, an effect augmented by our observation that many terms and theoretical concepts related to the field were not clearly defined. In sum, the analyses proved useful for describing the wider research fields but were not always precise when it came to judging single results. The reviews were able to identify research gaps. The dynamics within the research field of digitalization then called for different strategies of critical discussion, ranging from providing contextual evidence to discussing implications with practitioners and conducting additional investigations. Knowledge transfer was ultimately based on a variety of complementary activities.
References
Brennan, R. L., & Prediger, D. J. (1981). Coefficient Kappa: Some Uses, Misuses, and Alternatives. Educational and Psychological Measurement, 41(3), 687–699. https://doi.org/10/d22q4b
Cooper, H., Hedges, L. V., & Valentine, J. C. (Eds.). (2019). The Handbook of Research Synthesis and Meta-Analysis. Russell Sage Foundation. https://doi.org/10.7758/9781610448864
Gough, D., Oliver, S., & Thomas, J. (Eds.). (2017). An Introduction to Systematic Reviews (second edition). SAGE.
Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x
Higgins, J., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (Eds.). (2023). Cochrane Handbook for Systematic Reviews of Interventions version 6.4. www.training.cochrane.org/handbook
Liabo, K., Gough, D., & Harden, A. (2017). Developing justifiable evidence claims. In D. Gough, S. Oliver, & J. Thomas (Eds.), An Introduction to Systematic Reviews (second edition, pp. 261–277). SAGE.
McKenzie, J. E., Brennan, S. E., Ryan, R. E., Thomson, H. J., & Johnston, R. V. (2023). Chapter 9: Summarizing study characteristics and preparing for synthesis. In J. Higgins, J. Thomas, J. Chandler, M. Cumpston, T. Li, M. J. Page, & V. A. Welch (Eds.), Cochrane Handbook for Systematic Reviews of Interventions version 6.4.
Paré, G., Trudel, M. C., & Jaana, M. (2015). Synthesizing information systems knowledge: A typology of literature reviews. Information and Management, 52(2), 183–199. https://doi.org/10.1016/j.im.2014.08.008
Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Blackwell Publishing. https://doi.org/10.1002/9780470754887
Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: exploring review types and associated information retrieval requirements. Health Information & Libraries Journal, 36(3), 202–222. https://doi.org/10.1111/hir.12276
Wright, A., & Michailova, S. (2023). Critical literature reviews: A critique and actionable advice. Management Learning, 54(2), 177–197. https://doi.org/10.1177/13505076211073961