Session Information
09 SES 02 A, Relating Quality of Instruction to Students’ Cognitive and Affective Outcomes
Paper Session
Contribution
Theoretical Framework
Teachers who provide high-quality instruction can foster students' cognitive and affective learning outcomes (Klieme, Pauli, & Reusser, 2009; Scheerens, Luyten, Steen, & de Thouars, 2007; Seidel & Shavelson, 2007). The construct of instructional quality (INQUA) reflects those high-quality aspects of instruction known to be related to student outcomes (Nilsen & Gustafsson, 2016). In Europe, one of the most prominent frameworks conceptualizes INQUA through three basic dimensions: Classroom Management, Cognitive Activation, and Supportive Climate (Klieme, Schümer, & Knoll, 2001).
Classroom Management comprises a teacher's ability to establish and maintain clear rules regarding content and social norms. To do so, a teacher needs to create stable routines, plan and pace lessons well, and keep students engaged (Brophy, 1983; Klieme et al., 2009).
Supportive Climate features the social and emotional support that teachers provide to students, including a supportive teacher-student relationship, positive and constructive teacher feedback, a positive approach to student errors and misconceptions, individual learner support, and caring teacher behaviour (Klieme et al., 2009, p. 141).
Cognitive Activation consists of several key instructional features that promote and encourage understanding, including challenging tasks, activating prior knowledge, and content-related discourse and participation practices (Klieme et al., 2009).
This conceptualization rests on strong theoretical foundations and has been confirmed by several empirical studies that assessed its psychometric properties (see Praetorius, Klieme, Herbert, & Pinger, 2018 for a full overview).
Student ratings of INQUA
One approach to assessing INQUA is to obtain student ratings of teachers' behaviour through student questionnaires. A large body of studies has evaluated the psychometric properties and usefulness of this approach in the context of higher education. However, there is a remarkable lack of such research in primary and secondary education (Marsh, Dicke, & Pfeiffer, 2019). Bridging this gap is crucial, as student ratings are an increasingly accepted and widely used measure of INQUA in primary and secondary schools.
To date, several studies have investigated the properties and usefulness of student ratings of INQUA in primary and secondary school (e.g. Fauth, Decristan, Rieser, Klieme, & Büttner, 2014; Kyriakides et al., 2014; Rowley, Phillips, & Ferguson, 2019; van der Scheer, Bijlsma, & Glas, 2019; Wagner, Göllner, Helmke, Trautwein, & Lüdtke, 2013; Wisniewski, Zierer, Dresel, & Daumiller, 2020). However, these studies seldom combine an investigation of the reliability and validity of student ratings of INQUA with an examination of their contribution to student outcomes. In addition, they often focus on a specific subject and educational level, leaving the comparability across educational levels and subjects understudied.
The current study aims to shed light on the properties and usefulness of student ratings of INQUA. To this end, we wanted to test the assumption that students can provide reliable and valid assessments of three dimensions of INQUA. We outlined three research questions (RQs) to guide this study:
RQ1: To what extent does the factorial structure of students' ratings of INQUA reflect the proposed three-dimensional conceptualization of classroom management, cognitive activation, and supportive climate?
RQ2: Do students perceive INQUA similarly across educational levels and subjects?
RQ3: To what extent do student ratings of INQUA in primary and secondary education relate to student achievement in mathematics and science?
Method
The Trends in International Mathematics and Science Study (TIMSS) is an international large-scale educational assessment of student achievement in mathematics and science in fourth and eighth grades (fifth and ninth grades in Norway). In addition to the mathematics and science assessments, TIMSS administers school, teacher, student, and home questionnaires to gather extensive information about contextual factors associated with learning and student achievement. TIMSS uses a two-stage stratified cluster sample design, with schools at the first stage and classes with students at the second (LaRoche, Joncas, & Foy, 2020). This allows for analyses on the classroom level by aggregating individual ratings that reflect a shared perception of the classroom.

A renewed focus on INQUA in TIMSS 2019 contributed to new and updated items (Mullis & Martin, 2017). In addition, Norway added a number of national items to measure INQUA, providing a novel opportunity to investigate student ratings of INQUA across educational levels (Grade 5 vs. Grade 9) and subjects (mathematics vs. science) using a representative sample of students. The sample consists of 3951 fifth-grade and 4575 ninth-grade students, each grade distributed over 231 classes. All students completed mathematics and science assessments as well as context questionnaires. The assessments measure achievement in mathematics and science; the context questionnaires measure a number of variables related to student learning, including several aspects of INQUA and control variables.

The data were analyzed using a multilevel approach with individual students at the first level and classrooms at the second. Three analytical steps, each pertaining to a separate research question, were employed.
To this end, we investigated (1) the factorial structure of student ratings of INQUA with multilevel confirmatory factor analysis; (2) the comparability across educational levels and subjects with multilevel measurement invariance analysis; and (3) the relationship between student ratings of INQUA and student achievement with multilevel structural equation modelling.
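The classroom-level aggregation underlying this multilevel approach can be illustrated with a small sketch. The data, class sizes, and variable names below are hypothetical, and the study itself would use specialized multilevel SEM software; the ICC(1), however, is the standard statistic used to justify treating aggregated student ratings as a shared classroom-level perception.

```python
import numpy as np

def icc1(ratings, classes):
    """One-way random-effects ICC(1): the share of rating variance
    attributable to the classroom. A clearly positive value supports
    aggregating individual student ratings to classroom means."""
    classes = np.asarray(classes)
    ratings = np.asarray(ratings, dtype=float)
    groups = [ratings[classes == c] for c in np.unique(classes)]
    k = np.mean([len(g) for g in groups])  # average class size
    grand = ratings.mean()
    # Between-class and within-class mean squares (one-way ANOVA)
    msb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(ratings) - len(groups))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical data: 3 classes of 20 students rating one INQUA item,
# with class-specific offsets around a common mean.
rng = np.random.default_rng(0)
class_effects = {0: 0.0, 1: 0.8, 2: -0.6}
classes = np.repeat([0, 1, 2], 20)
ratings = np.array([2.5 + class_effects[c] for c in classes]) + rng.normal(0, 0.5, 60)

print(round(icc1(ratings, classes), 2))
```

Because the simulated classes differ systematically while students within a class agree fairly closely, the ICC(1) comes out clearly above zero, mirroring the rationale for the shared-perception aggregation described above.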
Expected Outcomes
RQ1. The empirical data were in line with the proposed three-dimensional factor structure, reflecting the three Basic Dimensions of INQUA based on Klieme et al. (2001): Classroom Management, Supportive Climate, and Cognitive Activation. This model showed an acceptable fit (RMSEA=0.016, CFI=0.984, TLI=0.981, SRMRwithin=0.023, SRMRbetween=0.085). In addition, factor correlations were below .85 in all models, providing evidence that students can adequately distinguish between the three dimensions of instructional quality.

RQ2. The comparability of each of the three dimensions was tested between educational levels (grade 5 vs. grade 9) and between subjects (mathematics vs. science). Cognitive activation was excluded from the analysis between subjects because different items were used to measure the construct in science and mathematics. Findings from the analysis (table 2) indicate that the empirical data are in line with a three-dimensional factor structure across all investigated groups. Thus, students in both primary and secondary school can adequately distinguish between the different dimensions, regardless of whether they rate their mathematics or science teachers' instruction. However, the dimensions need to exhibit at least metric invariance to allow a meaningful comparison of correlations across groups (RQ3). Therefore, in the results of RQ3, we can compare the relationships of student ratings of INQUA with achievement across both educational levels and subjects for classroom management and supportive climate, but not for cognitive activation.

RQ3. Findings from multilevel structural equation modelling indicate that classroom management significantly relates to student achievement in mathematics and science regardless of grade and subject. However, the results indicated significant differences between educational levels.
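The fit evaluation for RQ1 can be made concrete with a short check of the reported indices against conventional cutoffs. The cutoffs below (RMSEA ≤ .06, CFI/TLI ≥ .95, SRMR ≤ .08, in the spirit of Hu and Bentler's recommendations) are illustrative assumptions, not necessarily the criteria the study applied; the between-level SRMR is deliberately left out because it is often judged more leniently in multilevel models.

```python
# Fit indices reported for the three-dimensional model in this study.
fit = {"RMSEA": 0.016, "CFI": 0.984, "TLI": 0.981,
       "SRMR_within": 0.023, "SRMR_between": 0.085}

# Illustrative conventional cutoffs (assumed here, e.g. Hu & Bentler, 1999).
CUTOFFS = {"RMSEA": ("<=", 0.06), "CFI": (">=", 0.95),
           "TLI": (">=", 0.95), "SRMR_within": ("<=", 0.08)}

def acceptable_fit(fit, cutoffs=CUTOFFS):
    """Return True if every listed index meets its cutoff."""
    ops = {"<=": lambda v, c: v <= c, ">=": lambda v, c: v >= c}
    return all(ops[op](fit[name], cut) for name, (op, cut) in cutoffs.items())

print(acceptable_fit(fit))  # True: all reported indices clear the cutoffs
```

Under these assumed cutoffs every reported index passes comfortably, consistent with the "acceptable model fit" conclusion stated above.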
The findings from this study could offer important insights into improving the assessment of INQUA, particularly using student ratings. These insights could have direct implications for teacher education and educational policy aimed at enhancing INQUA in mathematics and science classrooms.
References
Brophy, J. (1983). Classroom Organization and Management. The Elementary School Journal, 83(4), 265-285. doi:10.1086/461318
De Jong, R., & Westerhof, K. (2001). The quality of student ratings of teacher behaviour. Learning Environments Research, 4(1), 51-85. doi:10.1023/A:1011402608575
Fauth, B., Decristan, J., Rieser, S., Klieme, E., & Büttner, G. (2014). Student ratings of teaching quality in primary school: Dimensions and prediction of student outcomes. Learning and Instruction, 29, 1-9. doi:10.1016/j.learninstruc.2013.07.001
Ferguson, R. (2012). Can Student Surveys Measure Teaching Quality? Phi Delta Kappan Magazine, 94(3), 24-28. doi:10.1177/003172171209400306
Klieme, E., Pauli, C., & Reusser, K. (2009). The Pythagoras Study: Investigating effects of teaching and learning in Swiss and German mathematics classrooms. In T. Janik & T. Seidel (Eds.), The power of video studies in investigating teaching and learning in the classroom (pp. 137-160). Münster: Waxmann.
Klieme, E., Schümer, G., & Knoll, S. (2001). Mathematikunterricht in der Sekundarstufe I: "Aufgabenkultur" und Unterrichtsgestaltung. (Nebst) CD-ROM. In TIMSS - Impulse für Schule und Unterricht (pp. 43-57). Bonn: Bundesministerium für Bildung und Forschung.
Mullis, I. V. S., & Martin, M. O. (2017). TIMSS 2019 Assessment Frameworks. Retrieved from Boston College, TIMSS & PIRLS International Study Center website: http://timssandpirls.bc.edu/timss2019/frameworks/
Nilsen, T., & Gustafsson, J.-E. (2016). Teacher Quality, Instructional Quality and Student Outcomes: Relationships Across Countries, Cohorts and Time. IEA Research for Education, Vol. 2.
Praetorius, A.-K., Klieme, E., Herbert, B., & Pinger, P. (2018). Generic dimensions of teaching quality: The German framework of Three Basic Dimensions. ZDM Mathematics Education, 50(3), 407-426. doi:10.1007/s11858-018-0918-4
Rakoczy, K., Klieme, E., Drollinger-Vetter, B., Lipowsky, F., Pauli, C., & Reusser, K. (2007). Structure as a Quality Feature in Mathematics Instruction: Cognitive and Motivational Effects of a Structured Organisation of the Learning Environment vs. a Structured Presentation of Learning Content. In (pp. 102-121).
Scherer, R., Nilsen, T., & Jansen, M. (2016). Evaluating Individual Students' Perceptions of Instructional Quality: An Investigation of their Factor Structure, Measurement Invariance, and Relations to Educational Outcomes.
Seidel, T., & Shavelson, R. J. (2007). Teaching Effectiveness Research in the Past Decade: The Role of Theory and Research Design in Disentangling Meta-Analysis Results. Review of Educational Research, 77(4), 454-499. doi:10.3102/0034654307310317
Taut, S., & Rakoczy, K. (2016). Observing instructional quality in the context of school evaluation. Learning and Instruction, 46, 45-60. doi:10.1016/j.learninstruc.2016.08.003