Session Information
09 SES 12 B, Effective Instruction Across Contexts
Paper Session
Contribution
Classroom processes are an important source of variation in student achievement (Fauth et al., 2014). Based on empirical research findings, several theoretical models have been proposed to explain how classroom processes promote student learning, each emphasizing different aspects of teaching (Panayiotou et al., 2021). The Three Basic Dimensions (TBD) model of instructional quality assumes three core components essential for learning: classroom management, student support, and cognitive activation (Praetorius et al., 2018). Classroom management (CM) focuses on providing opportunities to learn by securing time for learning and coping with disruptions. Student support (SS) captures caring attitudes towards students as learners, individual learning support, and a constructive approach to students’ mistakes and misconceptions. Cognitive activation (CA) relates to pedagogical practices used by teachers to promote student engagement in higher-order thinking and is determined by the nature of learning tasks. For example, cognitive engagement with the learning content may be prompted by challenging learning tasks, by integrating new knowledge into existing knowledge structures, or by class discussion of possible solutions to a problem (Baumert et al., 2010). Based on established concepts from educational science and educational psychology, such as time on task, self-determination, and cognitive-constructive learning, it is expected that CM positively affects student achievement and motivation, SS affects student motivation, and CA affects student achievement (Praetorius et al., 2018).
The TBD model is a dominant framework of instructional quality in German-speaking Europe and has increasingly been applied in other European countries. However, the operationalization and measurement of the three basic components vary between studies, especially regarding SS and CA (Praetorius et al., 2014). For example, clarity of instruction can be included as a subcomponent of CA or treated as a separate fourth dimension (Bergem et al., 2020). Classroom observation by external observers, student perceptions, and teacher self-ratings are commonly used methods of data collection (Praetorius et al., 2018). Because they are easily obtained and less prone to social desirability bias, student ratings have been increasingly used to collect data on instructional quality. In this case, students are regarded as informants on instructional practices in their classrooms, and it is the collective perception of the learning environment that is of interest to the researcher. Substantive research based on student ratings of the learning environment is justified only when sufficient agreement within the same class is obtained (Lüdtke et al., 2009; Marsh et al., 2012).
This work-in-progress contributes to research on instructional quality by providing empirical evidence on the TBD model components, as measured by student ratings, and their relationship to student outcomes, using representative data from Czechia, where investigation has so far been limited to the scales available in ILSA studies. While the research was motivated by the authors’ interest in exploring the potential of high-quality instruction to reduce socioeconomic inequalities in student achievement, having valid and reliable measures of instructional quality is a necessary first step. The study aims to answer the following research questions: 1) Are the three dimensions of instructional quality distinguishable as separate class-level constructs when using student ratings of classroom instruction? 2) To what extent are the three dimensions related to student achievement at the class level? 3) Do the dimensions of instructional quality moderate the relationship between student achievement and socioeconomic status (SES)?
Method
The study uses questionnaire and achievement data from a nationally representative sample of sixth-grade students in mainstream-track schools assessed in 2023. A two-step stratified sampling design was used to select schools proportionally to their size and one sixth-grade class within each selected school. The analysis excludes students with missing data on all questionnaire items and classes in which fewer than 10 students had valid questionnaire data. The analytical sample includes 2330 students from 133 classes (average class size 17.5). Instructional quality was measured by student ratings of their mathematics classes. Both the content coverage of the three basic dimensions of instructional quality and the psychometric properties reported in prior studies were considered when selecting items for the measurement scales. CM was measured by the PISA disciplinary climate scale (OECD, 2005), an established measure of the effective use of the instructional unit in terms of time and lack of interruptions. The SS and CA scales were taken from the study by Fauth et al. (2014), which confirmed the expected 3-factor structure at the class level. All items were rated on a four-point Likert-type scale. Student achievement in mathematics was measured by a 30-item test employing a combination of item types, and student scores were estimated by a mixed 2PL/3PL IRT model. Student SES was measured by a composite score integrating information on parental education, parental occupation, number of books, and other home-possession items using principal component analysis. One-level confirmatory factor analysis (CFA) was used to provide an initial evaluation of the hypothesized 3-factor structure of instructional quality. Intra-class correlation coefficients ICC(1) and ICC(2) of the observed variables were calculated to determine whether the model dimensions can be regarded as class-level constructs (Lüdtke et al., 2009; Marsh et al., 2012).
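The abstract does not state the exact ICC estimator used. As a minimal sketch, the following Python function illustrates one common way to obtain ICC(1) (the share of variance between classes) and ICC(2) (the reliability of the class mean) from one-way ANOVA mean squares; the function name and the use of the average class size are our simplifying assumptions, not the study's procedure:

```python
import numpy as np

def icc(scores, groups):
    """One-way ANOVA estimates of ICC(1) and ICC(2).

    ICC(1): proportion of variance lying between groups (classes).
    ICC(2): reliability of the group mean, given the group size.
    Uses the average group size as a simplification for unequal sizes.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    k, n = len(labels), len(scores)
    grand = scores.mean()
    sizes = np.array([np.sum(groups == g) for g in labels])
    means = np.array([scores[groups == g].mean() for g in labels])
    # Between- and within-group sums of squares and mean squares
    ss_b = np.sum(sizes * (means - grand) ** 2)
    ss_w = np.sum([np.sum((scores[groups == g] - m) ** 2)
                   for g, m in zip(labels, means)])
    ms_b = ss_b / (k - 1)
    ms_w = ss_w / (n - k)
    n_bar = sizes.mean()  # average class size
    icc1 = (ms_b - ms_w) / (ms_b + (n_bar - 1) * ms_w)
    icc2 = (ms_b - ms_w) / ms_b  # reliability of the class mean
    return icc1, icc2
```

With equal group sizes these formulas reduce to the familiar variance-component expressions ICC(1) = τ²/(τ² + σ²) and ICC(2) = τ²/(τ² + σ²/n).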
When the CFA and ICCs did not support the 3-factor structure at the class level, the scales were modified using an iterative process of exploratory factor analysis (EFA) and CFA (Bellens et al., 2019). The scales that best captured instructional quality as a class-level construct were then entered into two-level structural equation models (SEMs) to estimate their relationships with student achievement and SES. All analyses were performed in Mplus 8.
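The SES composite described in the Method section can be sketched as the first principal component of standardized indicators. The following illustration is a hedged sketch only: the function name, the assumption of complete cases, and the coding of indicators (higher values = more resources) are ours, not details reported by the study:

```python
import numpy as np

def ses_composite(indicators):
    """First principal component of standardized SES indicators.

    indicators: (n_students, n_indicators) array, e.g. parental
    education, parental occupation, number of books, home possessions.
    Returns one standardized composite score per student.
    Assumes complete cases and higher values = more resources.
    """
    X = np.asarray(indicators, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # z-standardize columns
    # First principal component via SVD of the standardized matrix
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    w = vt[0]
    # Orient the component so that higher scores mean higher SES
    if w.sum() < 0:
        w = -w
    scores = X @ w
    return (scores - scores.mean()) / scores.std()
```

In practice, large-scale studies additionally handle missing indicator values (e.g. by imputation) before extracting the component; that step is omitted here.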
Expected Outcomes
The hypothesized 3-factor structure was not supported by the data. Exploration of item inter-correlations and modification indices revealed problems with the internal consistency of the original CA scale, and EFA suggested a 4-factor model comprising classroom management, student support, cognitive activation by challenging tasks (CAT), and cognitive activation by drawing on existing knowledge (CAK). Furthermore, low ICC values for the CA indicators, especially those related to challenging tasks, along with the poor fit of a 2-factor CA model at the between level, suggested that CAT items should not be interpreted as indicators of a class-level construct. Only three components, namely CM, SS, and CAK, were therefore retained as dimensions of instructional quality and evaluated by a series of two-level CFAs. The data structure was best described by a 3/3 model in which the items loaded on three factors both within and between classes. Regarding the relationship with achievement in mathematics, only CM had a significant positive effect at the class level, controlling for individual SES and class socioeconomic composition. Results of the moderation analysis are not yet available and will be presented at the conference. While student ratings are generally accepted as a reliable and valid data source for measuring the classroom learning environment (Lüdtke et al., 2009), this study raises questions about students’ capacity to notice cognitively activating classroom instruction independently of their individual level of knowledge and skills. Our findings align with a recent study by Bellens et al. (2019), which was the first to explicitly point out issues with the CA scale in three European education systems. The limited ability to measure CA by easily obtained questionnaire data might be one reason for the inconsistency between the theoretically assumed positive effect on student learning and empirical research findings, reported, e.g., by Praetorius et al. (2018).
References
Baumert, J., Kunter, M., Blum, W., Brunner, M., Voss, T., Jordan, A., et al. (2010). Teachers’ mathematical knowledge, cognitive activation in the classroom, and student progress. American Educational Research Journal, 47(1), 133–180.
Bellens, K., Van Damme, J., Van Den Noortgate, W., Wendt, H., & Nilsen, T. (2019). Instructional quality: Catalyst or pitfall in educational systems’ aim for high achievement and equity? An answer based on multilevel SEM analyses of TIMSS 2015 data in Flanders (Belgium), Germany, and Norway. Large-Scale Assessments in Education, 7(1).
Bergem, O. K., Nilsen, T., Mittal, O., & Ræder, H. G. (2020). Can teachers’ instruction increase low-SES students’ motivation to learn mathematics? In T. S. Frønes, A. Pettersen, J. Radišić, & N. Buchholtz (Eds.), Equity, Equality and Diversity in the Nordic Model of Education (pp. 251–272). Springer.
Fauth, B., Decristan, J., Rieser, S., Klieme, E., & Büttner, G. (2014). Student ratings of teaching quality in primary school: Dimensions and prediction of student outcomes. Learning and Instruction, 29, 1–9.
Lüdtke, O., Robitzsch, A., Trautwein, U., & Kunter, M. (2009). Assessing the impact of learning environments: How to use student ratings of classroom or school characteristics in multilevel modeling. Contemporary Educational Psychology, 34, 120–131.
Marsh, H. W., Lüdtke, O., Nagengast, B., Trautwein, U., Morin, A. J. S., Abduljabbar, A. S., & Köller, O. (2012). Classroom climate and contextual effects: Conceptual and methodological issues in the evaluation of group-level effects. Educational Psychologist, 47(2), 106–124.
OECD. (2005). PISA 2003 Technical Report. OECD Publishing.
Panayiotou, A., Herbert, B., Sammons, P., & Kyriakides, L. (2021). Conceptualizing and exploring the quality of teaching using generic frameworks: A way forward. Studies in Educational Evaluation, 70, 101028.
Praetorius, A.-K., Klieme, E., Herbert, B., & Pinger, P. (2018). Generic dimensions of teaching quality: The German framework of Three Basic Dimensions. ZDM Mathematics Education, 50, 407–426.
Praetorius, A.-K., Pauli, C., Reusser, K., Rakoczy, K., & Klieme, E. (2014). One lesson is all you need? Stability of instructional quality across lessons. Learning and Instruction, 31, 2–12.