Session Information
09 SES 04 A, Methodological Issues in Large-Scale Assessments
Paper Session
Contribution
The paper focuses on the relationship between cognitive abilities and students’ achievement in reading, mathematics and science at the end of primary school. The joint administration of the Progress in International Reading Literacy Study (PIRLS) and the Trends in International Mathematics and Science Study (TIMSS) to a common representative sample in 2011 provides a unique dataset that allows common patterns to be identified across the achievement domains of reading literacy, mathematics and science (Martin & Mullis, 2013). The study at hand presents analyses of the German PIRLS/TIMSS 2011 sample, in which 3,928 learners from 197 schools participated in all three assessment domains (Bos, Tarelli, et al., 2012; Bos, Wendt, et al., 2012). In Germany, a national extension of the instruments allows the analysis of data from two subscales (figural and verbal) of a cognitive ability test (KFT; Heller & Perleth, 2000).
The theoretical basis of this work rests on two distinct concepts: cognitive abilities and subject-specific competencies. Of particular relevance are the definition of competencies by Weinert (1999, 2001), which can be considered an overarching framework for research on competency concepts in international comparisons of student achievement, and a comparison of the characteristics of the concepts of competence and intelligence (Hartig & Klieme, 2006). The core of the presented work discusses the confounding of the concepts of competence and intelligence in educational measurement. Responding to Rindermann and other researchers, who argue that large-scale student assessment studies measure little more than intelligence, Baumert et al. (2007, 2009) analysed to what extent such studies measure more than a single general cognitive ability. The authors report a comparison of g-factor and nested-factor models by Brunner (2005, 2008) using data from 9th-grade students in the German national extension of PISA. Their interpretation of these model comparisons is that intelligence (g) influences performance on all tasks, and that, in addition, domain-specific abilities affect the achievement test results.
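To make the distinction between the two model families concrete, the following minimal Python sketch (not part of the original study; all loadings and the sample size are hypothetical) simulates data under a nested-factor structure, in which each achievement score loads on a general factor g plus one orthogonal domain-specific factor, and prints the implied correlations and variance shares.

```python
# Minimal simulation sketch (not the authors' analysis): it illustrates the
# structural difference between a pure g-factor model and a nested-factor
# (bifactor) model, in which every achievement score loads on a general
# factor g plus one orthogonal domain-specific factor. All loadings and the
# sample size below are hypothetical illustration values.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000                       # hypothetical sample size
domains = ["reading", "math", "science"]

# Hypothetical standardized loadings on the general and domain-specific factors.
lambda_g = {"reading": 0.70, "math": 0.65, "science": 0.60}
lambda_s = {"reading": 0.40, "math": 0.50, "science": 0.45}

g = rng.normal(size=n)                                   # general ability
specific = {d: rng.normal(size=n) for d in domains}      # orthogonal specifics

scores = {}
for d in domains:
    # Residual standard deviation chosen so each observed score has unit variance.
    resid_sd = np.sqrt(1.0 - lambda_g[d] ** 2 - lambda_s[d] ** 2)
    scores[d] = (lambda_g[d] * g
                 + lambda_s[d] * specific[d]
                 + resid_sd * rng.normal(size=n))

X = np.column_stack([scores[d] for d in domains])
print("Observed correlations:\n", np.round(np.corrcoef(X, rowvar=False), 2))

# Variance decomposition per domain: share explained by g vs. the specific factor.
for d in domains:
    print(f"{d:8s}  g: {lambda_g[d]**2:.2f}  specific: {lambda_s[d]**2:.2f}  "
          f"residual: {1 - lambda_g[d]**2 - lambda_s[d]**2:.2f}")
```

Setting the domain-specific loadings to zero reduces the sketch to a pure g-factor model, which is the contrast at the heart of the Baumert et al. (2007, 2009) argument.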
For students at the end of primary school, existing research covers only the relationships among achievement in reading, mathematics and science (Martin & Mullis, 2013; Bos, Tarelli, et al., 2012; Bos, Wendt, et al., 2012). Research on the relationship between subject-specific test results from large-scale assessments and cognitive ability tests at the end of primary school in Germany is not yet available.
The work focuses on the following research questions:
1) How can the relationship between the results of the administered figural and verbal cognitive ability tests and achievement in reading, mathematics and science at the end of primary school be described using latent correlations?
2) Does a one-dimensional (single general ability), a four-dimensional (reading literacy, mathematics, science, cognitive abilities) or a ten-dimensional (reading: literary, informational; mathematics: number, geometric shapes and measures, data display; science: life science, physical science, earth science; cognitive abilities: figural, verbal) model fit the data best? (A minimal model-comparison sketch follows the research questions below.)
3) To what extent can the variance of achievement in the different competence domains be explained by the measured cognitive abilities? How high is the residual variance?
4) Is it possible to identify and categorize items that fit well in a general one-dimensional model but not in a subject-specific multidimensional model, and vice versa?
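The model comparison in question 2 is typically decided by penalised fit statistics. The following sketch is purely illustrative: the deviances and parameter counts are made up, and the AIC/BIC formulas are applied to the kind of summary output that multidimensional IRT software such as ConQuest reports; it is not the authors' actual analysis.

```python
# Hypothetical illustration of the model-comparison step in research question 2:
# given the final deviance and number of estimated parameters of competing
# multidimensional IRT models, smaller AIC/BIC values indicate better fit after
# penalising model complexity. The deviances and parameter counts below are
# invented for illustration only.
import math

n_students = 3928  # German PIRLS/TIMSS 2011 sample reported in the abstract

models = {
    # name: (deviance, number of estimated parameters) -- hypothetical values
    "1-dimensional":  (250_400.0, 210),
    "4-dimensional":  (249_100.0, 225),
    "10-dimensional": (248_700.0, 270),
}

def aic(deviance: float, n_par: int) -> float:
    return deviance + 2 * n_par

def bic(deviance: float, n_par: int, n: int) -> float:
    return deviance + math.log(n) * n_par

print(f"{'model':15s} {'deviance':>10s} {'pars':>5s} {'AIC':>12s} {'BIC':>12s}")
for name, (dev, n_par) in models.items():
    print(f"{name:15s} {dev:10.1f} {n_par:5d} "
          f"{aic(dev, n_par):12.1f} {bic(dev, n_par, n_students):12.1f}")
```

The same deviance-based comparison can be extended to questions 3 and 4, for example by regressing domain scores on the cognitive ability factors to quantify explained and residual variance, or by contrasting item fit statistics between the one-dimensional and multidimensional solutions.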
Method
Expected Outcomes
References
Adams, R. J., Wilson, M. R. & Wang, W. L. (1997). The multidimensional random coefficients multinomial logit model. Applied Psychological Measurement, 21(1), 1–24.
Baumert, J., Brunner, M., Lüdtke, O. & Trautwein, U. (2007). Was messen internationale Schulleistungsstudien? – Resultate kumulativer Wissenserwerbsprozesse. Eine Antwort auf Heiner Rindermann [What do international large-scale assessment studies measure?]. Psychologische Rundschau, 58(2), 118–128.
Baumert, J., Lüdtke, O., Trautwein, U. & Brunner, M. (2009). Large-scale assessment studies measure the result of processes of knowledge acquisition: Evidence in support of the distinction between intelligence and student achievement. Educational Research Review, 4(3), 165–176.
Bos, W., Tarelli, I., Bremerich-Vos, A. & Schwippert, K. (Eds.). (2012). IGLU 2011. Lesekompetenzen von Grundschulkindern in Deutschland im internationalen Vergleich [Results of PIRLS 2011 in Germany]. Münster: Waxmann.
Bos, W., Wendt, H., Köller, O. & Selter, C. (Eds.). (2012). TIMSS 2011. Mathematische und naturwissenschaftliche Kompetenzen von Grundschulkindern in Deutschland im internationalen Vergleich [Results of TIMSS 2011 in Germany]. Münster: Waxmann.
Brunner, M. (2005). Mathematische Schülerleistung: Struktur, Schulformunterschiede und Validität [Student achievement in mathematics: Structure, school type differences and validity]. Dissertation, Humboldt-Universität. Retrieved 16.02.2013 from http://library.mpib-berlin.mpg.de/diss/Brunner_Dissertation.pdf.
Brunner, M. (2008). No g in education? Learning and Individual Differences, 18(2), 152–165.
Hartig, J. & Klieme, E. (2006). Kompetenz und Kompetenzdiagnostik [Competence and competence diagnostics]. In K. Schweizer (Ed.), Leistung und Leistungsdiagnostik (pp. 127–143). Heidelberg: Springer Medizin Verlag.
Heller, K. A. & Perleth, C. (2000). Kognitiver Fähigkeitstest für 4. bis 12. Klassen, Revision – KFT 4-12+R: Manual. Göttingen: Beltz Test.
Kaufman, S. B., Reynolds, M. R., Liu, X., Kaufman, A. S. & McGrew, K. S. (2012). Are cognitive g and academic achievement g one and the same g? An exploration on the Woodcock-Johnson and Kaufman tests. Intelligence.
Martin, M. O. & Mullis, I. V. S. (2013). TIMSS and PIRLS 2011: Relationships Among Reading, Mathematics, and Science Achievement at the Fourth Grade—Implications for Early Learning. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College.
Weinert, F. E. (2001). Concept of competence: A conceptual clarification. In D. S. Rychen & L. Hersh Salganik (Eds.), Defining and Selecting Key Competencies (pp. 45–65). Seattle: Hogrefe & Huber Publishers.
Wu, M., Adams, R., Wilson, M. R. & Haldane, S. (2007). ACER ConQuest 2.0. Generalised item response modelling software [Computer software]. Camberwell: ACER Press.