Session Information
16 SES 07 C, Methodological Considerations
Paper session
Contribution
Technology has enabled teachers to implement a vast array of e-Learning tools in their classrooms to enhance student learning. However, applying digital media does not necessarily result in increased student engagement or deep learning (Kirkwood, 2009); as Tamim et al. (2011) found in their second-order meta-analysis, it has only a small to moderate impact on student achievement. It is therefore vital to investigate the conditions under which educational technology enhances student engagement, in order to harness technology's full potential in the classroom.
The increasing number of systematic reviews within the area of educational technology reflects the recent interest in evidence-based policy and practice, as well as the need to keep abreast of the ever-growing body of research in the field (Gough, Oliver, & Thomas, 2012). Systematic reviews are generally understood to be "a review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyse data from the studies that are included in the review" (Moher et al., 2009, p. 1). Over the past decade, systematic reviews addressing educational technology or tools used for teaching and learning in K-12 and higher education include, for example, an analysis of Web 2.0 technologies for student learning (Hew & Cheung, 2013), augmented reality in education (Akcayir & Akcayir, 2016), and engagement and learning within MOOCs (Joksimovic et al., 2017).
The current debate in Germany on using digital media for educational purposes in higher education is being fueled by large-scale government funding, such as the project line "Research into Digitalized Higher Education" [Forschung zur digitalen Hochschulbildung]. This illustrates the increased interest of educational policy in this line of research and practice, which includes the explicit encouragement of systematic reviews (BMBF, 2016). The first of these reviews have been published, including one on digital learning infrastructures (Pensel & Hofhues, 2017) and one on student media usage in higher education (Steffens, Schmitt & Aßmann, 2017).
Research conducted in the ActiveLeaRn project, funded by the German Federal Ministry for Education and Research (BMBF) from 2016-2019, focuses on the central question:
Under which conditions does educational technology support student engagement in higher education?
Through a systematic review, existing primary research from 2007-2016 is synthesized based on the Kahu (2013) model of student engagement, in order to describe the body of research and to provide guidance for practical decision-making. In this paper, the authors present the method of conducting systematic reviews and, using their own systematic review as an example, discuss and reflect on the advantages and limitations of using this method in the dynamic field of educational technology research.
Method
This systematic review was conducted using explicit, pre-defined, formal criteria (Gough, Oliver, & Thomas, 2012): peer-reviewed articles in the English language, published between 2007 and 2016, targeting students in higher education, and addressing educational technology and student engagement. In order to search four central databases (ERIC, Web of Science, PsycINFO, Scopus), a search string was iteratively developed using reviewer knowledge (O'Mara-Eves et al., 2013) and previous literature (e.g. Kahu, 2013; Henrie et al., 2015). After removing duplicates caused by database overlaps, 18,068 references were retrieved, and four researchers screened the first 150 titles and abstracts in order to confirm a shared understanding of the inclusion criteria. Following initial screening, 4,153 articles remained in the sample for further consideration. As this population of 4,153 articles is too extensive for the entire analysis, we drew two samples: one to develop the results exploratively, the other to verify them confirmatorily, which allows an estimation of the stability of the results. To determine the sample size, we used the accuracy in parameter estimation method based on Kupper and Hafner (1989), estimating a sample size of 351 articles from the population size, alpha level (α = 0.05), margin of error (5%), and expected proportion (0.5). The sampling is then stratified over the publication years to keep the standard error as small as possible. At this point of the review, the two samples of 351 articles are being screened at full text, in order to identify their suitability for data extraction and inclusion in quality appraisal and synthesis.
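The sample-size estimation described above can be illustrated with a short sketch. This is a minimal illustration, not the authors' actual code: it uses the standard formula for estimating a proportion with a finite population correction, which yields approximately 351–352 for the stated parameters (population 4,153, α = 0.05, 5% margin of error, p = 0.5); the yearly counts in the stratification example are hypothetical.

```python
import math

def sample_size_fpc(population, z=1.96, margin=0.05, p=0.5):
    """Sample size for estimating a proportion, with a finite
    population correction (cf. Kupper & Hafner, 1989).
    z = 1.96 corresponds to alpha = 0.05 (two-sided)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2        # infinite-population size
    return n0 / (1 + (n0 - 1) / population)           # finite-population correction

def stratify_by_year(counts_per_year, n):
    """Proportionally allocate a total sample n over yearly strata."""
    total = sum(counts_per_year.values())
    return {year: round(n * c / total) for year, c in counts_per_year.items()}

# Population of 4,153 screened articles; result is ~351.7,
# consistent with the 351 articles reported in the review.
n = sample_size_fpc(4153)
print(round(n))

# Hypothetical yearly counts, for illustration only.
print(stratify_by_year({2014: 500, 2015: 300}, 8))
```

Proportional allocation over years (rather than simple random sampling) keeps each stratum's share of the sample equal to its share of the population, which is what keeps the standard error small.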
Expected Outcomes
The execution of the systematic review to this point has revealed several aspects that are worth discussing from a methodological point of view, including: sensitivity versus precision during the initial literature search (Brunton, Stansfield, & Thomas, 2012), time constraints versus completeness during the screening process, and reconciling quantitative and qualitative research during quality appraisal and synthesis. This presentation will be of interest to practitioners and researchers who wish to conduct systematic reviews in the field of educational technology.
References
Brunton, G., Stansfield, C., & Thomas, J. (2012). Finding relevant studies. In D. Gough, S. Oliver, & J. Thomas (Eds.), An introduction to systematic reviews (pp. 107–134). London: SAGE Publications.
Gough, D., Oliver, S., & Thomas, J. (2012). Introducing systematic reviews. In D. Gough, S. Oliver, & J. Thomas (Eds.), An introduction to systematic reviews (pp. 1–16). London: SAGE Publications.
Henrie, C. R., Halverson, L. R., & Graham, C. R. (2015). Measuring student engagement in technology-mediated learning: A review. Computers & Education, 90, 36–53. https://doi.org/10.1016/j.compedu.2015.09.005
Hew, K. F., & Cheung, W. S. (2013). Use of Web 2.0 technologies in K-12 and higher education: The search for evidence-based practice. Educational Research Review, 9, 47–64. https://doi.org/10.1016/j.edurev.2012.08.001
Kahu, E. R. (2013). Framing student engagement in higher education. Studies in Higher Education, 38(5), 758–773. https://doi.org/10.1080/03075079.2011.598505
Kirkwood, A. (2009). E-learning: You don't always get what you hope for. Technology, Pedagogy and Education, 18(2), 107–121.
Kupper, L. L., & Hafner, K. B. (1989). How appropriate are popular sample size formulas? The American Statistician, 43(2), 101–105.
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. British Medical Journal, 339. https://doi.org/10.1136/bmj.b2535
O'Mara-Eves, A., Brunton, G., McDaid, D., Kavanagh, J., Oliver, S., & Thomas, J. (2013). Techniques for identifying cross-disciplinary and 'hard-to-detect' evidence for systematic review. Research Synthesis Methods, 5, 50–59.
Pensel, S., & Hofhues, S. (2017). Digitale Lerninfrastrukturen an Hochschulen. Systematisches Review zu den Rahmenbedingungen für das Lehren und Lernen mit Medien an deutschen Hochschulen [Digital learning infrastructures at higher education institutions: A systematic review of the conditions for teaching and learning with media at German universities]. Retrieved January 24, 2017, from http://your-study.info/publikationen/
Tamim, R., Bernard, R., Borokhovski, E., Abrami, P., & Schmid, R. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(1), 4–28.