09 SES 09 A, Curriculum, Instruction and Student Achievement in Primary Schools: Findings from TIMSS
In the past decades, several studies have related the amount of instructional time to student achievement, following Carroll's (1963) sensible premise that learning something is a function of the time allocated to learn it. Although some studies identified effects of learning time on student achievement, the effects are often inconsistent across grades and outcomes, and on average modest (e.g., Fredrick & Walberg, 1980; Karweit & Slavin, 1982; Scheerens, 2014; Seidel & Shavelson, 2007).
A possible explanation for the mixed findings on the effects of instructional time is that many studies are based on relatively small, non-representative samples. Another limitation of research in this area is that most studies have been conducted in the US. To address these limitations, Baker, Fabrega, Galindo, and Mishook (2004) analyzed representative samples of secondary school students from three international large-scale assessments (PISA, TIMSS, CIVIC) that were conducted in 28 to 38 countries. They used the cross-sectional variation to estimate the association between instructional time and student achievement. The study essentially replicated the inconsistent findings of previous research: in some countries the associations were statistically significantly positive, in others significantly negative, and often they were non-significant. The scope of the study is limited, though, because selection effects may have biased the cross-sectional estimates of the association between instructional time and achievement.
In a recent study, Lavy (2015) proposed an alternative approach to identifying the effect of instructional time that takes possible selection bias into account when analyzing cross-sectional PISA data. The study pooled the data from all countries participating in PISA 2000, 2003, and 2006 to estimate a model with student fixed effects. The basic idea is that the student fixed effects absorb any student, school, and country characteristics that are not subject-specific, so that these characteristics cannot bias the estimation of the instructional-time effect. The study found a small but statistically significant positive effect. Furthermore, it showed that methodology matters: simple OLS estimates (correlations) were up to three times larger, indicating severe selection bias (Hanushek, 2015). Although Lavy's study has a strong research design, it also has limitations. First, the study estimated the effect of instructional time on achievement for the pooled international data, not for individual countries; instructional time may be more effective in some countries than in others. Second, the study is limited to secondary school data, so the findings may not be valid for primary school. Third, PISA surveyed students about the amount of instructional time, and this information may be unreliable.
The main aim of the present study is to estimate the effect of instructional time on student achievement. We build on the work of Lavy (2015) by exploiting differences in instructional time across subjects, but we extend the analyses in three ways. First, we focus on students in primary school instead of secondary school. Second, we conduct the analyses country by country instead of pooling the international data; this approach allows us to explore international differences in the effectiveness of instructional time. Third, we use teacher-reported data on instructional time, which we consider a more valid measure of instructional time than student reports.
We used the data from the 35 countries that participated in TIMSS and PIRLS 2011 with the same student sample. Between 3,275 (Norway) and 14,961 (United Arab Emirates) students were sampled per country, for a total sample of 192,302 students. The outcomes were the achievement scores in mathematics, reading, and science, which are available for all students. Each achievement scale was transformed to a metric with a mean of zero and a standard deviation of one within each country. The instructional time in mathematics, reading, and science (in minutes per week) is the main explanatory variable. It was reported by the respective subject teachers (e.g., "In a typical week, how much time do you spend teaching mathematics to the students in this class?"). A central issue in identifying the effect of instructional time on student achievement is selection. If schools that operate in, for example, deprived areas provide more instructional time to their students, children from disadvantaged families will receive more instructional time. In the same vein, additional learning time may serve as a compensatory measure to support low-performing classes. To circumvent such bias, we exploit the within-student, between-subject variation in the explanatory and outcome variables. The basic idea is to relate performance in one subject to the amount of instructional time in that subject while controlling for performance and instructional time in the other domains. For this purpose, we estimate a linear model with subject and student fixed effects, in which student achievement is regressed on instructional time and on dummies for the three subjects and for each student. We replicated the analyses for all 35 countries using all five plausible values and sampling weights.
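The within-country standardization of the achievement scores described above can be sketched as follows; the scores shown are hypothetical illustrative values, not TIMSS data:

```python
import numpy as np

# Hypothetical raw achievement scores for two countries (illustrative values only)
scores = {
    "Country A": np.array([480.0, 520.0, 500.0, 560.0]),
    "Country B": np.array([430.0, 470.0, 450.0, 410.0]),
}

# Transform each scale to a mean of zero and a standard deviation of one
# *within* each country, so that estimates are comparable across countries
z_scores = {c: (s - s.mean()) / s.std() for c, s in scores.items()}
```

Because the standardization is done separately per country, a one-unit effect estimate always corresponds to one within-country standard deviation of achievement.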
As a baseline for our within-student, between-subject analyses below, we regressed student achievement on instructional time. This simple analysis estimates the mean association between achievement and instructional time across subjects, based on the cross-sectional variation. The results show statistically significant (5% level) negative associations in nine countries, significant positive associations in seven countries, and non-significant associations in 19 countries. A serious concern with this mixed pattern of results is that confounders may hide actual effects of instructional time. The main model with fixed effects for subjects and students effectively circumvents bias from any confounding student- and higher-level characteristics that are not subject-specific, because we exploit within-student variation. As this strategy essentially relates within-student differences in the outcome to within-student differences in the explanatory variable, constant confounders cannot bias the estimation of the instructional-time effect. The results of these analyses reveal that the previously observed negative associations vanish, while several positive effects remain: we observe statistically significant (5% level) negative effects in only two countries, significant positive effects in seven countries, and non-significant effects in 19 countries. These findings suggest that the previously reported cross-sectional associations do not reflect the effect of instructional time on achievement but rather bias from confounding factors. Our main analyses thus provide at least some evidence for an effect of instructional time on school achievement.
Carroll, J. (1963). A model for school learning. Teachers College Record, 64(8), 723-733.
Fredrick, W. C., & Walberg, H. J. (1980). Learning as a function of time. The Journal of Educational Research, 73(4), 183-194. doi:10.1080/00220671.1980.10885233
Hanushek, E. A. (2015). Time in education: Introduction. The Economic Journal, 125(588), F394-F396. doi:10.1111/ecoj.12266
Karweit, N., & Slavin, R. E. (1982). Time-on-task: Timing, sampling, and definition. Journal of Educational Psychology, 74(6), 844-851.
Lavy, V. (2015). Do differences in schools' instruction time explain international achievement gaps? Evidence from developed and developing countries. The Economic Journal, 125(588), F397-F424. doi:10.1111/ecoj.12233
Scheerens, J. (2014). Effectiveness of time investments in education: Insights from a review and meta-analysis. Heidelberg: Springer.
Seidel, T., & Shavelson, R. J. (2007). Teaching effectiveness research in the past decade: The role of theory and research design in disentangling meta-analysis results. Review of Educational Research, 77(4), 454-499.