27 SES 05 A, The Theoretical-Empirical Relations in Didactic Research
Teaching is at the core of schooling, so it is important for educational research to directly examine teaching quality (Ball & Forzani, 2007). Classroom observation systems (Bell et al., 2018) are an important tool for measuring teaching quality (Klette & Blikstad-Balas, 2018). While most research on observation systems has focused on identifying what should be measured (e.g., Praetorius et al., 2018; van der Lans et al., 2018), we must also carefully consider how we measure teaching quality, as common statistical approaches (e.g., average scores, factor analyses) can be inconsistent with theoretical views of teaching quality. Below, we highlight three widely accepted principles about instruction and discuss how common statistical approaches are at odds with those principles, such that conclusions drawn using those approaches may be misleading.
Principle 1: Teaching is a goal-directed activity. Teachers set and pursue specific instructional goals when planning lessons (Danielson, 2007), engaging in specific actions to achieve specific goals. This principle undermines the meaning of average scores, which are commonly estimated when using observation systems (e.g., Kane et al., 2012; Tremblay & Pons, 2019). An average assumes that the sheer amount of a practice is what matters, but goal-directedness suggests that the alignment between the current goal and instructional actions is far more informative. Moreover, many observation systems are not well structured to examine the alignment of goal and action. For example, the common approach of scoring equal-interval segments (e.g., CLASS, PLATO) is fundamentally an approach to estimating average levels of quality.
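The contrast between averaging and goal-alignment can be made concrete with a minimal sketch. The data, coding scheme, and alignment rule below are hypothetical illustrations of the idea, not part of any published observation system:

```python
# Illustrative sketch (hypothetical data): averaging equal-interval segment
# scores vs. scoring goal-action alignment. "score" is a 1-4 quality rating
# per segment; "action" and "goal" are hypothetical codes for what the
# teacher did and what the stated lesson goal was.
segments = [
    {"score": 4, "action": "modeling",   "goal": "modeling"},
    {"score": 4, "action": "discussion", "goal": "modeling"},
    {"score": 2, "action": "modeling",   "goal": "modeling"},
]

# Common approach: the mean segment score ignores the instructional goal entirely.
average_score = sum(s["score"] for s in segments) / len(segments)

# Goal-directed alternative: proportion of segments whose action matches the goal.
alignment = sum(s["action"] == s["goal"] for s in segments) / len(segments)

print(round(average_score, 2))  # 3.33
print(round(alignment, 2))      # 0.67
```

The two numbers can diverge freely: a lesson full of high-quality but goal-irrelevant actions earns a high average yet low alignment, which is the mismatch the principle points to.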
Principle 2: There are multiple pathways to achieve specific goals. Here, consider the Protocol for Language Arts Teaching Observation (PLATO; Grossman et al., 2013), which proposes instructional scaffolding as an important domain of practice, decomposing it into four approaches (dimensions) to providing instructional scaffolding. Theoretically, the observed level of each scaffolding dimension may not be important for determining quality; instead, quality may result whenever any form of scaffolding is sufficient to allow students to fully access the content. In that case, the specific dimensions are alternative pathways to achieving the same goal of scaffolding. This would imply that the domain as a whole defines teaching quality (not the dimensions individually), so that examinations of how individual dimensions relate to student learning (e.g., Grossman et al., 2013) are not theoretically justified.
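The "multiple pathways" argument can be sketched as a scoring rule. The dimension names echo PLATO's scaffolding domain, but the scores and both aggregation rules below are our illustrative assumptions, not PLATO's scoring procedure:

```python
# Hypothetical 1-4 scores on four scaffolding dimensions for one lesson.
scaffolding = {
    "modeling": 1,
    "strategy_instruction": 4,
    "feedback": 1,
    "accommodations": 1,
}

# Averaging implicitly treats every dimension as necessary, penalizing a
# teacher who relied on a single (sufficient) pathway.
mean_score = sum(scaffolding.values()) / len(scaffolding)

# A multiple-pathways rule: the domain is as strong as its strongest pathway,
# because any one sufficient form of scaffolding achieves the goal.
pathway_score = max(scaffolding.values())

print(mean_score)     # 1.75
print(pathway_score)  # 4
```

Under the averaging rule this lesson looks weak (1.75 of 4); under the pathways rule it looks strong (4 of 4), because one dimension fully served the scaffolding goal.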
Principle 3: Some aspects of instructional quality are not directly observable. Observation systems measure aspects of teaching quality that are directly observable (e.g., scaffolding, as just discussed) and dimensions that are not directly observable (e.g., emotional climate). When a dimension of teaching quality is not directly observable, observation systems score the dimension based on observable features that indicate whether the unobservable construct is present. Different statistical models are appropriate in these two cases (Jarvis et al., 2003). For non-observable dimensions (e.g., climate), reflective (or factor) models are appropriate because we conceptualize the unobserved dimension as causing the observed indicators (e.g., a positive emotional climate causes students to express positive emotions). For observable dimensions, formative models are appropriate because the implied construct has no causal powers (e.g., scaffolding cannot cause specific scores in the modeling dimension; rather, scaffolding is formed from modeling and the other dimensions). Where formative models are appropriate (i.e., for observable constructs), common statistical approaches such as internal consistency estimation, factor analysis, and differential item functioning analysis are not meaningful.
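The structural difference between the two model families can be sketched in a few lines. This is a toy formulation of the general reflective/formative distinction (Jarvis et al., 2003), with invented variable names and weights; it is not an estimation procedure:

```python
# Toy sketch of the reflective vs. formative distinction.
# Reflective: observed indicators are caused by one latent construct (plus
# noise), so indicators should inter-correlate and internal-consistency
# checks make sense. Formative: the construct is *defined* as a composite
# of its indicators, which need not correlate at all.
import random

random.seed(0)

def reflective_indicators(latent, noise=0.2, k=3):
    # Each indicator reflects the same latent value with independent noise.
    return [latent + random.gauss(0, noise) for _ in range(k)]

def formative_construct(indicators, weights):
    # The construct is formed from its indicators; it has no causal power
    # over them, so factor-analytic checks are not meaningful here.
    return sum(w * x for w, x in zip(weights, indicators))

# e.g., emotional climate as a latent cause of three observed student behaviors:
climate_items = reflective_indicators(latent=3.0)

# e.g., a scaffolding composite formed from four dimension scores
# (equal weights are an arbitrary illustrative choice):
scaffolding_score = formative_construct([1, 4, 1, 1], weights=[0.25] * 4)
print(scaffolding_score)  # 1.75
```

The direction of the arrow is the whole point: in the reflective function the construct appears on the input side and the indicators are outputs; in the formative function the indicators are inputs and the construct is the output.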
These principles (and others) have important implications for the measurement of teaching. Current work using observation systems does not appropriately consider such theoretical principles to ensure that the measurement approach aligns with the theory underlying the observation system. Building greater alignment is vital if we want to generate data that help refine theories.
This paper is a conceptual/theoretical analysis of observation systems and the data analysis approaches that are used when working with them. We start from the premise that data should be used to inform and refine theories, that theories have specific implications for measurement, and that measurement approaches (including data analysis) determine the data available to test theories. If the measurement approaches are not properly informed by theory, then the resulting data will not be useful in refining and improving the theory. Starting from this foundation that emphasizes the importance of good measurement, we considered the measurement implications of some common theoretical understandings of teaching quality. In doing so, we identified substantial gaps between common theoretical understandings of instruction and the empirical approaches taken in measuring instruction using observation systems, which we highlight in this paper. Aligning the measurement of instruction with theories of instruction has the potential to bring about improvements to both theory and practice that can make a positive contribution to our understandings of teaching quality. In the European educational research context, instructional quality is a popular research topic, engaging scholars across the continent. Several studies have been and are being conducted measuring teaching quality in, for example, German contexts (e.g., Praetorius et al., 2018), Central European contexts (e.g., van der Lans et al., 2018), and Nordic contexts (Klette, Blikstad-Balas, & Roe, 2017). It is thus timely to discuss implications of misalignment between theory of measurement and theory of instruction in this line of research from a European perspective.
In order to maximize the potential benefit of observation systems in building our understanding of effective instructional practices, there is a need to align the ways that observation systems measure and analyze data with the theories of instruction that underlie those systems. This requires careful consideration of whether and when to adopt common statistical approaches. It also means that different dimensions captured by an observation system may have to be treated with different statistical approaches (see the implications of Principle 3 above). Further, additional data on instructional contexts (e.g., instructional goals) may need to be collected to analyze observation scores (e.g., if alignment between action and goal is important; see Principle 1). This paper also shows that developers of observation rubrics will need to provide far more guidance to users of observation systems with regard to which types of statistical models are appropriate. For example, developers will have to provide guidance on whether some dimensions are only relevant for specific types of lessons or lesson goals; on when multiple dimensions capture different important pathways to achieving the same goals; on which statistical models are appropriate (e.g., reflective or formative models); and on whether each dimension measures a lesson-level (e.g., instructional modeling) or classroom-level (e.g., emotional climate) construct. The work in this paper supports these efforts by highlighting how theoretical views have direct implications for measurement and by providing principles for rubric developers to consider when advising users. Through better aligning theory and measurement, we can improve our ability to learn about effective teaching practices.
Ball, D. L., & Forzani, F. M. (2007). 2007 Wallace Foundation Distinguished Lecture—What makes education "research educational"? Educational Researcher, 36, 529–540. https://doi.org/10.3102/0013189X07312896
Bell, C. A., Dobbelaer, M. J., Klette, K., & Visscher, A. (2018). Qualities of classroom observation systems. School Effectiveness and School Improvement. https://doi.org/10.1080/09243453.2018.1539014
Danielson, C. (2007). Enhancing professional practice: A framework for teaching (2nd ed.). Association for Supervision & Curriculum Development.
Grossman, P., Loeb, S., Cohen, J. J., & Wyckoff, J. (2013). Measure for measure: The relationship between measures of instructional practice in middle school English language arts and teachers' value-added scores. American Journal of Education, 119(3), 445–470. https://doi.org/10.1086/669901
Jarvis, C. B., MacKenzie, S. B., & Podsakoff, P. M. (2003). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research, 30(2), 199–218. https://doi.org/10.1086/376806
Kane, T. J., Staiger, D. O., McCaffrey, D., Cantrell, S., Archer, J., Buhayar, S., & Parker, D. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. Bill & Melinda Gates Foundation, Measures of Effective Teaching Project. http://eric.ed.gov/?id=ED540960
Klette, K., & Blikstad-Balas, M. (2018). Observation manuals as lenses to classroom teaching: Pitfalls and possibilities. European Educational Research Journal, 17(1), 129–146. https://doi.org/10.1177/1474904117703228
Klette, K., Blikstad-Balas, M., & Roe, A. (2017). Linking instruction and student achievement: A research design for a new generation of classroom studies. Acta Didactica Norge, 11(3), 19. https://doi.org/10.5617/adno.4729
Praetorius, A.-K., Klieme, E., Herbert, B., & Pinger, P. (2018). Generic dimensions of teaching quality: The German framework of Three Basic Dimensions. ZDM, 50(3), 407–426. https://doi.org/10.1007/s11858-018-0918-4
Tremblay, K., & Pons, A. (2019). The OECD TALIS Video Study—Progress report (No. JT03445173). Organisation for Economic Co-operation and Development. https://www.oecd.org/education/school/TALIS_Video_Study_Progress_Report.pdf
van der Lans, R. M., van de Grift, W. J. C. M., & van Veen, K. (2018). Developing an instrument for teacher feedback: Using the Rasch model to explore teachers' development of effective teaching strategies and behaviors. The Journal of Experimental Education, 86(2), 247–264. https://doi.org/10.1080/00220973.2016.1268086