Session Information
24 SES 15 A, Affective Aspects in Mathematics Learning and Teacher Selection
Paper Session
Contribution
Due to increasing numbers of study applicants in Germany, which often exceed the number of available study places, universities are confronted with the challenge of selecting the "most suitable" candidates. According to a decision of the German Federal Constitutional Court of December 2017, this selection must not be based solely on the university entrance qualification grade (e.g. Abitur, A-level, high school diploma, French baccalauréat, gaokao, etc.). Subject-specific admission tests in particular provide a valid prediction of academic success (Schult et al., 2019). At Leuphana University of Lueneburg, applicants for a teacher training programme in mathematics therefore have the opportunity to gain additional credits in the selection process, among other things by taking a mathematics test, and thereby to increase their chances of being admitted to university.
The mathematics test used for this purpose comprises 77 items, which are spread across all content-related and process-related competencies of the lower secondary level of the German educational standards model (e.g. Klieme et al., 2004; Leuders et al., 2005). The test was piloted with 654 students from different universities and shows satisfactory internal consistency (see Besser et al., 2020). The content validity of the test was confirmed by experts who were involved in the development of the German educational standards. However, construct and criterion validity still have to be confirmed empirically. With respect to this desideratum, two expectations can be stated: (1) Regarding the assessment of mathematical knowledge and skills according to the theoretical model of the educational standards, students in upper grades (with a longer duration of school attendance) can be expected to perform better in this test than students in lower grades (with a correspondingly shorter duration of school attendance). (2) Additionally, it is known that students' performance in existing mathematics tests which reflect the model of the educational standards usually correlates negatively with mathematics grades (given that better performance is coded with higher test scores while better grades are coded with lower numbers, e.g. Ferla et al., 2009; Fischbach et al., 2013; Graf et al., 2016; Wiberg, 2019), positively with mathematics self-concept and mathematics self-efficacy, not at all or slightly positively with mathematics interest, and negatively with mathematics anxiety (e.g. Ferla et al., 2009; Hattie, 2010; Lee & Stankov, 2018; Pipere & Mieriņa, 2017; Stankov & Lee, 2014).
Based on these considerations, the research questions of this paper are as follows:
Question 1: How do lower secondary school pupils perform compared to university applicants in a mathematics test on lower secondary content (which has been developed to assess the mathematical knowledge of applicants for a mathematics teacher training programme)?
Question 2: How does performance in this specific test correlate with mathematics grade, mathematics self-concept, mathematics self-efficacy, mathematics interest and mathematics anxiety?
In particular, the following hypotheses shall be tested:
H1: University applicants perform better than pupils at lower secondary school.
H2: Test performance is negatively correlated with mathematics grade, positively correlated with mathematics self-concept and mathematics self-efficacy, weakly positively or not correlated with mathematics interest and negatively correlated with mathematics anxiety.
Method
The mathematics test was administered in late summer 2019 in a multi-matrix design to 140 university applicants and 294 pupils from the 7th, 9th and 11th grades of grammar schools (Gymnasium) and comprehensive schools (Gesamtschule) in Germany. For the tests in grades 7 and 9, only those items were included which, according to the curriculum, could be completed by pupils in these grades. All items are (complex) multiple-choice items. The answers were coded dichotomously (0 = not correct, 1 = correct) and Rasch-scaled one-dimensionally with ConQuest. To answer question 1, analyses of variance were computed using SPSS with the six groups resulting from the three grade levels and the two different school types. Since the sizes of these groups differed considerably, Hochberg's GT2 post-hoc test was calculated to determine homogeneous groups. To answer question 2, the mathematics grade (1 = very good, 6 = insufficient) as well as mathematics self-concept (4 items, Weber & Freund, 2017), mathematics self-efficacy (4 items, Ramm et al., 2006), mathematics interest and enjoyment (4 items, Ramm et al., 2006; OECD, 2005) and mathematics anxiety (5 items, Ramm et al., 2006; OECD, 2005) were surveyed with paper-pencil questionnaires using four-point Likert scales (1 = strongly disagree, 4 = strongly agree). The test scaling yields an EAP reliability of 0.71. The internal consistencies of the administered scales are all in a good range (Cronbach's α between .80 and .91).
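To illustrate the kind of analysis described above, the following Python sketch shows a one-way comparison of group means and a Cronbach's alpha computation. It is purely illustrative and not the study's actual syntax (the study used ConQuest and SPSS): the data file, column names and group labels are assumptions, and scipy offers no Hochberg GT2 post-hoc test, so only the omnibus test is sketched.

```python
# Minimal, illustrative sketch (hypothetical data): one-way ANOVA over the
# grade-by-school-type groups and Cronbach's alpha for one questionnaire scale.
# Assumes person ability estimates from a Rasch scaling were exported to
# "persons.csv" with hypothetical columns "ability", "group" and
# "anx_1" ... "anx_5" for the five anxiety items (Likert codes 1-4).
import pandas as pd
from scipy import stats

df = pd.read_csv("persons.csv")

# Omnibus one-way ANOVA across the groups (the study additionally applied
# Hochberg's GT2 post-hoc test in SPSS to identify homogeneous subsets).
samples = [g["ability"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*samples)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha from item-level responses (rows = persons, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

anxiety_items = df[[f"anx_{i}" for i in range(1, 6)]]
print(f"Cronbach's alpha (mathematics anxiety): {cronbach_alpha(anxiety_items):.2f}")
```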
Expected Outcomes
The analysis of variance shows that the test performances of the different groups are not all equal (confirming H1): The post-hoc test yields three homogeneous groups, the first comprising the pupils of the 7th grades and the pupils of the 9th grades of comprehensive schools, the second comprising the pupils of the 9th and 11th grades of grammar schools, and the third comprising the university applicants. The correlations of test performance with the other constructs confirm hypothesis H2: Test performance correlates negatively with mathematics grade (all students: r = -.362; group 1: r = -.266; group 2: r = -.418), positively with mathematics self-concept (all students: r = .239; group 1: r = .216; group 2: r = .575) and mathematics self-efficacy (all students: r = .206; group 1: r = .178; group 2: r = .527), not at all or positively with mathematics interest (all students: n.s.; group 1: n.s.; group 2: r = .489), and negatively with mathematics anxiety (all students: r = -.235; group 1: r = -.235; group 2: r = -.490). The results confirm both hypotheses: University applicants perform better than pupils from the 9th and 11th grades of grammar schools, who in turn perform better than pupils from the 7th grades and the 9th grades of comprehensive schools. The correlations are all consistent with the empirical results of previous studies. Altogether, the findings support the quality of the test instrument, which validly measures different levels of mathematical competence and can thus be used to evaluate applicants. In the context of the above-mentioned decision of the German Federal Constitutional Court of December 2017, this result is of special interest for the development of standardized, subject-specific instruments for university selection processes. The next steps in evaluating the validity of the test are its use in further samples with additional external constructs (e.g. other standardized mathematics tests) and, in particular, the investigation of its predictive validity concerning student teachers' performance at university.
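Correlation coefficients of the kind reported above could, in principle, be reproduced from person-level data with a short script such as the following Python sketch. It is an assumption-laden illustration only: the data file and all column names ("ability", "grade", "self_concept", "self_efficacy", "interest", "anxiety", "group") are hypothetical placeholders, not the study's actual variables.

```python
# Illustrative sketch: Pearson correlations of ability estimates with the
# questionnaire scales, overall and per group (all names are hypothetical).
import pandas as pd
from scipy import stats

df = pd.read_csv("persons.csv")
constructs = ["grade", "self_concept", "self_efficacy", "interest", "anxiety"]

def report(frame: pd.DataFrame, label: str) -> None:
    """Print r and a significance flag for ability vs. each construct."""
    for construct in constructs:
        r, p = stats.pearsonr(frame["ability"], frame[construct])
        flag = "" if p < .05 else " (n.s.)"
        print(f"{label}: ability x {construct}: r = {r:.3f}{flag}")

report(df, "all students")
for name, group in df.groupby("group"):
    report(group, str(name))
```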
References
Besser, M., Göller, R., Ehmke, T., Leiss, D., & Hagena, M. (2020). Entwicklung eines fachspezifischen Kenntnistests zur Erfassung mathematischen Vorwissens von Bewerberinnen und Bewerbern auf ein Mathematik-Lehramtsstudium. Journal für Mathematik-Didaktik. https://doi.org/10.1007/s13138-020-00176-x
Ferla, J., Valcke, M., & Cai, Y. (2009). Academic self-efficacy and academic self-concept: Reconsidering structural relationships. Learning and Individual Differences, 19(4), 499–505. https://doi.org/10.1016/j.lindif.2009.05.004
Fischbach, A., Keller, U., Preckel, F., & Brunner, M. (2013). PISA proficiency scores predict educational outcomes. Learning and Individual Differences, 24, 63–72. https://doi.org/10.1016/j.lindif.2012.10.012
Graf, T., Harych, P., Wendt, W., Emmrich, R., & Brunner, M. (2016). Wie gut können VERA-8-Testergebnisse den schulischen Erfolg am Ende der Sekundarstufe I vorhersagen? Zeitschrift für Pädagogische Psychologie, 30(4), 201–211. https://doi.org/10.1024/1010-0652/a000182
Hattie, J. (2010). Visible learning: A synthesis of over 800 meta-analyses relating to achievement (Reprinted). Routledge.
Klieme, E., Avenarius, H., Blum, W., Döbrich, P., Gruber, H., Prenzel, M., Reiss, K., Riquarts, K., Rost, J., Tenorth, H.-E., & Vollmer, H. (2004). The development of national educational standards: An expertise. Bundesministerium für Bildung und Forschung.
Lee, J., & Stankov, L. (2018). Non-cognitive predictors of academic achievement: Evidence from TIMSS and PISA. Learning and Individual Differences, 65, 50–64. https://doi.org/10.1016/j.lindif.2018.05.009
Leuders, T., Barzel, B., & Hußmann, S. (2005). Outcome standards and core curricula: A new orientation for mathematics teachers in Germany. Zentralblatt für Didaktik der Mathematik, 37(4), 275–286. https://doi.org/10.1007/BF02655815
OECD (2005). PISA 2003 Technical Report. Paris: OECD Publishing. https://doi.org/10.1787/9789264010543-en
Pipere, A., & Mieriņa, I. (2017). Exploring non-cognitive predictors of mathematics achievement among 9th grade students. Learning and Individual Differences, 59, 65–77. https://doi.org/10.1016/j.lindif.2017.09.005
Ramm, G., Adamsen, C., Neubrand, M., & Deutsches PISA-Konsortium (Eds.). (2006). PISA 2003: Dokumentation der Erhebungsinstrumente. Münster: Waxmann.
Schult, J., Hofmann, A., & Stegt, S. J. (2019). Leisten fachspezifische Studierfähigkeitstests im deutschsprachigen Raum eine valide Studienerfolgsprognose? Ein metaanalytisches Update. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 51(1), 16–30. https://doi.org/10.1026/0049-8637/a000204
Stankov, L., & Lee, J. (2014). Quest for the best non-cognitive predictor of academic achievement. Educational Psychology, 34(1), 1–8. https://doi.org/10.1080/01443410.2013.858908
Weber, K. E., & Freund, P. A. (2017). Erfassung des Selbstkonzepts von Kindern im Grundschulalter: Validierung eines deutschsprachigen Messinstruments. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 49(1), 38–49. https://doi.org/10.1026/0049-8637/a000165
Wiberg, M. (2019). The relationship between TIMSS mathematics achievements, grades, and national test scores. Education Inquiry, 10(4), 328–343. https://doi.org/10.1080/20004508.2019.1579626