Session Information
09 SES 04 B, Understanding Student Responses in Large-Scale Assessments
Paper/Pecha Kucha Session
Contribution
Making an error seems to be an essential element of learning irrespective of the subject domain (Tulis, 2013), and student errors may provide prized information in the context of educational assessment and evaluation (O’Brien et al., 2017). Since mathematics is a critical component of every national curriculum and vital to success and inclusion in the future labour market (OECD, 2016), it is important to understand the specific difficulties students may encounter in mathematics from an early stage. In that sense, as Kingsdorf and Krawec (2014) stress, error analysis has been shown to be effective in providing more detailed information about different levels of students’ mathematical skills, as well as about their misconceptions in procedural skills or underlying conceptual misunderstandings.
The tradition of international large-scale assessments (ILSA) such as the Trends in International Mathematics and Science Study (TIMSS) or the Programme for International Student Assessment (PISA) has rather been directed towards evaluating students’ answers as right or wrong. While right answers demonstrate that students have adopted particular concepts and/or mastered certain reasoning strategies, wrong answers may be rooted in quite different grounds and need not result from a lack of the relevant competence (Urdan & Schoenfelder, 2006).
TIMSS assesses achievement in mathematics and science in grades 4 and 8, and its assessment is strongly grounded in the curriculum model through the lenses of the intended curriculum, the implemented curriculum, and the achieved curriculum (Mullis & Martin, 2013). These lenses capture the mathematics that society intends students to learn, while acknowledging that what is taught in classrooms may differ greatly across the participating countries. Content-wise, in grade 4, TIMSS assesses students’ knowledge in three content domains: number; geometric shapes and measures; and data display (Mullis & Martin, 2013), each spanning three cognitive domains (knowing, applying, and reasoning).
Since it began participating in the TIMSS grade 4 survey, Serbia has achieved above-average results in mathematics (516 score points in 2011 and 518 in 2015, respectively). At the same time, students typically perform best in the number domain, while data display and geometric shapes and measures prove relatively more difficult. Given that the data display domain is underrepresented in the Serbian curriculum and that class teachers themselves perceive such tasks as more difficult for students (Authors, 2015), in this paper we focus on this domain. More specifically, we examine in detail (1) the types and frequency of errors students make across all items in the data display category and, based on this, (2) the difficulties pupils encounter while solving two TIMSS items, as well as the strategies they use to overcome them.
Although our analyses focus on data from a single country, seemingly excluding the European perspective, our study aims to show how ILSA data, used in a slightly different fashion, can contribute to the inclusion and recognition of the processes underlying educational testing and evaluation. In this way, ILSA data gain power not only in informing educational policy and practice, but also in refining the assessment process itself.
Method
The study is organised in two consecutive steps, using TIMSS 2011 and 2015 data for Serbia. In the first step, we analyse students’ mistakes by directly inspecting students’ answer sheets for all data display items in TIMSS 2015 grade 4. For this purpose, from the total pool of student booklets in the 2015 cycle, we randomly selected 100 booklets of each even-numbered booklet type (i.e., booklets 2, 4, 6, ...). This ensured a sample of 100 student answers for each item in the data display domain. Each item was then examined from the perspective of both correct and incorrect answers. For the former, this included both the overall percentage of correct answers (i.e., based on the entire sample of students in Serbia participating in the TIMSS survey) and the types of correct answers as defined by the TIMSS developers. For the latter, the analyses distinguished between non-categorized and categorized mistakes, according to the TIMSS categorization.
In the second step, we focus more directly on how students construct their answers. For this purpose, 12 students in grade 4, the same grade as in the TIMSS study, were recruited and organized into four triads. Data collection was organized as a single session with three consecutive phases, following each other in a fixed order for each triad: an individual phase, a group interaction phase, and a second individual phase. This organization allowed us to mimic the conditions of the individual TIMSS test-taking situation, to put students in a position to argue why they solved a task in a particular way, and then to give them an opportunity to offer a new solution if they found it more appropriate. Each triad was balanced for gender, school marks, and joint time spent in school (Psaltis & Duveen, 2006), and was exposed to two items from the data display category. Item selection was guided by two considerations: given that TIMSS no longer releases items in the manner evident in previous cycles, the two items were chosen from the TIMSS 2011 released pool, taking into account both the current analyses of student errors and the relative difficulty of each item in the 2011 sample.
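For illustration, the booklet sampling in the first step amounts to a stratified random draw. The following is a minimal Python sketch, assuming a standard 14-booklet TIMSS rotation; the booklet records, student counts, and seed here are hypothetical, not taken from the actual dataset:

```python
import random

random.seed(2015)  # fixed seed so the illustrative draw is reproducible

# Simulated pool of completed booklets: (booklet_type, student_id) pairs.
# The 14 booklet types and 500 students per type are hypothetical values.
pool = [(booklet, student)
        for booklet in range(1, 15)
        for student in range(1, 501)]

# Draw 100 booklets at random from each even-numbered booklet type.
sample = []
for booklet_type in range(2, 15, 2):  # booklets 2, 4, ..., 14
    stratum = [rec for rec in pool if rec[0] == booklet_type]
    sample.extend(random.sample(stratum, 100))

print(f"{len(sample)} student booklets sampled")  # 7 strata x 100 = 700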
Expected Outcomes
Looking at the data display items that students solved most successfully (over 80% answering correctly) and least successfully (less than 40% answering correctly), no distinctive pattern emerges. It is noticeable, however, that none of the most successfully solved items belong to the highest cognitive domain, reasoning. Of the two items solved by less than 25% of the students, the first required them to transform information into a pie chart using their knowledge of fractions; although the item allowed for partially correct answers, none were present at all. The second item asked students to read information off a graph. Judging from the types of errors visible in the students’ answers, the majority already struggled with recognizing which axis of the graph holds the piece of information they need. Since the values on both axes were numerical, students would often “read” the information from the wrong end. By contrast, among the tasks solved by more than 85% of the students, two also asked students to read information from a graph; in these items, however, only one axis was numerical, enabling students to “read” the correct information more easily. Across all tasks that less than 40% of the students were able to solve, students’ errors were also bound up with their failure to provide an explanation for their own answers based on the given display of information, or their failure to apply a particular procedure provided in the task in order to solve it. The data from the student triads are still under analysis.
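To make concrete the fraction-to-pie-chart reasoning the first low-success item demands, consider a hypothetical example (not an actual TIMSS item): a category accounting for 6 of 24 recorded responses should fill one quarter of the circle,

$$\frac{6}{24} = \frac{1}{4}, \qquad \frac{1}{4} \times 360^{\circ} = 90^{\circ},$$

that is, a sector of one quarter of the pie. The complete absence of partially correct answers suggests that students failed already at this first step of expressing the data as a fraction of the whole.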
References
Authors (2015).
Kingsdorf, S., & Krawec, J. (2014). Error analysis of mathematical word problem solving across students with and without learning disabilities. Learning Disabilities Research & Practice, 29(2), 66–74.
Mullis, I. V. S., & Martin, M. O. (Eds.) (2013). TIMSS 2015 Assessment Frameworks. TIMSS & PIRLS International Study Center, Boston College.
O’Brien, R., Pan, X., Courville, T., Bray, M. A., Breaux, K., Avitia, M., & Choi, D. (2017). Exploratory factor analysis of reading, spelling, and math errors. Journal of Psychoeducational Assessment, 35(1–2), 7–23.
OECD (2016). Equations and Inequalities: Making Mathematics Accessible to All. Paris: OECD Publishing.
Psaltis, C., & Duveen, G. (2006). Social relations and cognitive development: The influence of conversation type and representations of gender. European Journal of Social Psychology, 36(3), 407–430.
Tulis, M. (2013). Error management behavior in classrooms: Teachers’ responses to student mistakes. Teaching and Teacher Education, 33, 56–68.
Urdan, T., & Schoenfelder, E. (2006). Classroom effects on student motivation: Goal structures, social relationships, and competence beliefs. Journal of School Psychology, 44, 331–349.