Session Information
09 SES 14 A, Exploring Factors Influencing Teaching Quality and Student Learning Outcomes
Paper Session
Contribution
Teaching quality has been empirically shown to be a key predictor of student learning (Stronge, 2013). In studying teaching quality, teaching effectiveness researchers have for years focused on the opportunities provided to students for learning, as these are crafted through teacher-student and student-student interactions with the content. Yet, following Fend's (1981) distinction between opportunity and use, and more recent work on this issue in German-speaking countries (cf. Vieluf et al., 2020), teaching effectiveness researchers worldwide have increasingly attended not only to the opportunities created for student learning but also to how students make use of these opportunities, on the grounds that the former without the latter can only partially explain student learning.
Of particular interest in this line of research are the opportunities provided for student cognitive activation, often identified as the potential for cognitive activation, and students' use of these opportunities, often identified as cognitive activity (Groß-Mlynek et al., 2022; Rieser & Decristan, 2023). This heightened interest in cognitive activation is justified both by empirical findings corroborating its role in students' cognitive and affective learning (e.g., Lazarides & Buchholz, 2019) and by studies showing cognitively activating teaching to be highly needed worldwide (cf. OECD, 2020).
Despite this increased interest, our review of the literature showed that in most extant studies scholarly attention has been directed to the potential for cognitive activation without also exploring students' cognitive activity. Only three studies concurrently attended to and measured both the opportunity for and the use of cognitive activation in relation to student learning (Lipowsky et al., 2009; Merk et al., 2021; Rieser & Decristan, 2023). These studies, however, differ not only in reporting mixed findings but also in their methodological design: whereas the former two employed expert classroom observers' ratings to capture the potential for cognitive activation and student ratings to capture cognitive activity, the latter utilized student ratings to measure both. Concurrently attending to different sources of information (e.g., expert classroom observers and students) is, however, critical, given scholarly calls (e.g., Fauth et al., 2020) to more systematically examine how the source of information contributes to the predictive validity of the teaching quality measures employed.
The scarcity of studies that concurrently attend to the predictive validity of opportunity and use (in cognitive activation); the mixed findings of these studies; and the fact that none of them concurrently used different sources of information to capture the opportunity for cognitive activation—note that the use of opportunities is typically captured only through student ratings—raise two questions:
- How does the predictive validity of ratings on the opportunity for cognitive activation compare with that of ratings on the use of cognitive activation?
- Does this differ when different sources of information (expert classroom observers vs. students) are employed to capture the opportunity for cognitive activation?
Addressing these questions can have important methodological implications for measuring aspects of teaching quality in better ways, as well as practical implications for teachers' formative evaluation. However, to answer these questions adequately, and especially the second one, attention needs to be paid to ensuring that the measures obtained from the different sources are aligned, in the sense of tapping into similar (and if possible identical) aspects of teaching quality. Doing so becomes particularly important given that our review of the literature showed only a few studies comparing aligned measures of teaching quality from different sources (e.g., van der Scheer et al., 2018), and even in those cases, not with respect to their predictive validity.
Method
Sample and measures. A sample of 31 elementary school teachers and their sixth-grade students (n=542) participated in the study. For comparability purposes, all participating teachers were observed teaching the same three algebra lessons. Students' algebra performance before and after these lessons was measured through a validated mathematics test (Authors, 2019). We measured the potential for cognitive activation in two ways. (a) Expert observer ratings: The 93 lessons were coded by three expert raters trained and certified for this purpose; the raters first rated the lessons individually and then met in pairs to discuss and reconcile their scores. For this study, we utilized the raters' reconciled scores on the Common Core-Aligned Student Practices dimension of the Mathematical Quality of Instruction framework (cf. Charalambous & Litke, 2018), which captures the opportunities provided to students for cognitive activation through working on challenging tasks, providing explanations, and engaging in reasoning. (b) Student ratings: Drawing on prior work (e.g., Fauth et al., 2014), we used 8 survey items capturing students' perceptions of how frequently their teacher gave them opportunities to engage in cognitively activating teaching (e.g., through handling different solutions, providing explanations, or working on complex tasks/new content). Student ratings were aggregated to the classroom level to reflect the class's overall perception of the opportunities provided. Four items, drawing on existing scales (e.g., Merk et al., 2021), were utilized to measure student cognitive activity. Unlike for the potential for cognitive activation, we used these student ratings at the individual rather than the classroom level, given that they were taken to reflect students' individual self-perceptions of how cognitively challenged they themselves felt. We also administered a validated survey (Kyriakides et al., 2019) measuring students' SES, gender, and ethnicity. Finally, we collected information on teachers' gender, years of experience, and education credentials.
Analyses. Two-level (students nested within teachers) multilevel modeling was utilized, with students' performance at the culmination of algebra teaching as the dependent variable. After controlling for student and teacher background characteristics as well as students' initial algebra performance, we introduced observer and student ratings on cognitive activation (first in isolation and then in combination). We ran these analyses twice: first for the ratings as composites, and then for individual items (those that were aligned in content). In comparing the predictive validity of the examined predictors, we considered both their statistical significance and the percentage of the unexplained variance they explained.
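To make the analytic approach concrete, the sketch below shows how such a two-level model and the variance-explained comparison could be implemented in Python with statsmodels. It is an illustration only: all file, column, and variable names (algebra_study.csv, post_score, obs_opportunity, stu_use, etc.) are hypothetical stand-ins, not the study's actual data, covariate set, or code.

```python
# Minimal sketch of a two-level (students-within-teachers) analysis,
# assuming hypothetical column names; not the study's actual data or code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("algebra_study.csv")  # hypothetical file: one row per student

# Baseline model: background controls and initial performance only,
# with a random intercept per teacher.
baseline = smf.mixedlm(
    "post_score ~ pre_score + ses + gender",
    data=df,
    groups=df["teacher_id"],
).fit(reml=False)

# Extended model: add observer-rated opportunity (a classroom-level score)
# and student-rated cognitive activity (an individual-level score).
extended = smf.mixedlm(
    "post_score ~ pre_score + ses + gender + obs_opportunity + stu_use",
    data=df,
    groups=df["teacher_id"],
).fit(reml=False)

# Proportion of unexplained variance accounted for at each level:
# the teacher level uses the random-intercept variance (cov_re),
# the student level uses the residual variance (scale).
tau2_b, tau2_e = baseline.cov_re.iloc[0, 0], extended.cov_re.iloc[0, 0]
sigma2_b, sigma2_e = baseline.scale, extended.scale
print(f"Teacher-level variance explained: {(tau2_b - tau2_e) / tau2_b:.2%}")
print(f"Student-level variance explained: {(sigma2_b - sigma2_e) / sigma2_b:.2%}")
```

Comparing a controls-only baseline with an extended model in this way lets the contribution of each rating source be read off as a reduction in the remaining variance at each level, which matches the comparison logic described above.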
Expected Outcomes
For the composites, both the potential for cognitive activation (opportunity) and cognitive activity (use) were predictive of student learning, regardless of how they were measured. When introduced in isolation into the model, each significantly contributed to student learning. For opportunity, expert observer ratings explained a much higher percentage of the unexplained variance (4.20% total, all at the teacher level, corresponding to about 70% of the unexplained variance at that level) than student ratings did (1% total, all at the teacher level, corresponding to 16% of the unexplained variance at that level). Compared to student opportunity ratings, student use ratings explained a slightly higher percentage of the total variance (1.5% total, corresponding to about 7% and 5% of the unexplained variance at the teacher and student levels, respectively). When all three ratings were introduced, student opportunity ratings were no longer significant. Interestingly, the combination of expert ratings on opportunity and student ratings on use explained the highest percentage of the unexplained variance of all the models considered (5.30% total, explaining 70% and 3% of the unexplained variance at the teacher and student levels, respectively).
When comparing the aligned survey and MQI items (e.g., providing explanations; working on challenging tasks/new content), we noticed that whereas the expert observer ratings had a significant contribution to student learning in all cases, student ratings did not have such a consistent contribution (and also explained a smaller percentage of the unexplained variance).
Collectively, these findings underline the value of concurrently attending to both opportunity and use. They also suggest that classroom observer ratings might have more predictive validity than student ratings when it comes to the opportunities provided to students for cognitive activation. Future replication studies with different student populations and in different subjects are, however, needed to test the veracity of these arguments.
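For readers unfamiliar with this metric, the level-specific percentages above can be read as proportional reductions in variance between a baseline (controls-only) model and an extended model. The formalization below uses our own illustrative notation and assumes the conventional statistic is what underlies the reported figures.

```latex
% Proportional reduction in variance at each level, comparing a baseline
% (controls-only) multilevel model with an extended model. Notation is
% illustrative: \tau^2 is the teacher-level intercept variance and
% \sigma^2 the student-level residual variance.
\[
  \mathrm{PRV}_{\text{teacher}}
    = \frac{\tau^2_{\text{baseline}} - \tau^2_{\text{extended}}}{\tau^2_{\text{baseline}}},
  \qquad
  \mathrm{PRV}_{\text{student}}
    = \frac{\sigma^2_{\text{baseline}} - \sigma^2_{\text{extended}}}{\sigma^2_{\text{baseline}}}
\]
```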
References
Authors (2019). [Blinded for peer-review purposes].
Charalambous, C. Y., & Litke, E. (2018). Studying instructional quality by using a content-specific lens: The case of the Mathematical Quality of Instruction framework. ZDM, 50(3), 445–460. https://doi.org/10.1007/s11858-018-0913-9
Fauth, B., Decristan, J., Rieser, S., Klieme, E., & Büttner, G. (2014). Student ratings of teaching quality in primary school: Dimensions and prediction of student outcomes. Learning and Instruction, 29, 1–9. https://doi.org/10.1016/j.learninstruc.2013.07.001
Fauth, B., Göllner, R., Lenske, G., Praetorius, A.-K., & Wagner, W. (2020). Who sees what? Conceptual considerations on the measurement of teaching quality from different perspectives. Zeitschrift für Pädagogik, 66, 63–80. https://doi.org/10.15496/publikation-41013
Fend, H. (1981). Theorie der Schule. Urban & Schwarzenberg.
Groß-Mlynek, L., Graf, T., Harring, M., Gabriel-Busse, K., & Feldhoff, T. (2022). Cognitive activation in a close-up view: Triggers of high cognitive activity in students during group work phases. Frontiers in Education, 7. https://doi.org/10.3389/feduc.2022.873340
Kyriakides, L., Charalambous, E., Creemers, B. P. M., & Dimosthenous, A. (2019). Improving quality and equity in schools in socially disadvantaged areas. Educational Research, 61(3), 274–301. https://doi.org/10.1080/00131881.2019.1642121
Lazarides, R., & Buchholz, J. (2019). Student-perceived teaching quality: How is it related to different achievement emotions in mathematics classrooms? Learning and Instruction, 61, 45–59. https://doi.org/10.1016/j.learninstruc.2019.01.001
Lipowsky, F., Rakoczy, K., Pauli, C., Drollinger-Vetter, B., Klieme, E., & Reusser, K. (2009). Quality of geometry instruction and its short-term impact on students' understanding of the Pythagorean theorem. Learning and Instruction, 19(6), 527–537. https://doi.org/10.1016/j.learninstruc.2008.11.001
Merk, S., Batzel-Kremer, A., Bohl, T., Kleinknecht, M., & Leuders, T. (2021). Nutzung und Wirkung eines kognitiv aktivierenden Unterrichts bei nicht-gymnasialen Schülerinnen und Schülern. Unterrichtswissenschaft, 49(3), 467–487. https://doi.org/10.1007/s42010-021-00101-2
OECD. (2020). Global Teaching InSights: A video study of teaching. OECD Publishing. https://doi.org/10.1787/20d6f36b-en
Rieser, S., & Decristan, J. (2023). Kognitive Aktivierung in Befragungen von Schülerinnen und Schülern. Zeitschrift für Pädagogische Psychologie, 1–15. https://doi.org/10.1024/1010-0652/a000359
Stronge, J. (2013). Effective teachers = student achievement: What the research says. Routledge.
van der Scheer, E. A., Bijlsma, H. J. E., & Glas, C. A. W. (2018). Validity and reliability of student perceptions of teaching quality in primary education. School Effectiveness and School Improvement, 30(1), 30–50. https://doi.org/10.1080/09243453.2018.1539015
Vieluf, S., Praetorius, A., Rakoczy, K., Kleinknecht, M., & Pietsch, M. (2020). Angebots-Nutzungs-Modelle der Wirkweise des Unterrichts: Ein kritischer Vergleich verschiedener Modellvarianten. Zeitschrift für Pädagogik, 66, 63–80. https://doi.org/10.25656/01:25864