Session Information
09 SES 08 A JS, Assessment and Curriculum Reforms: Understanding Impacts and Enhancing Assessment Literacy
Joint Paper Session, NW 09 and NW 24
Contribution
Assessment literacy has been a recurrent term in the assessment literature since it was popularized by Stiggins (1991) (Koh et al., 2018). Assessment literacy mainly concerns teachers’ practices and skills in selecting, designing, or using assessments for various purposes (Stiggins, 1991). The term also encompasses knowledge of the principles behind selecting, adapting, or designing assessment tasks, judging students’ work, and using the resulting data to enhance students’ learning (Koh et al., 2018).
Mathematical thinking arises when students work on problem-like tasks (Jones & Pepin, 2016). However, traditional mathematics instruction and assessment mainly emphasize memorization rather than creative thinking or reasoning, a claim supported by several studies (see Jäder et al., 2015; Stein et al., 2009; Stevenson & Stigler, 1992; Vacc, 1993). Such instruction and assessment fail to enhance students’ competencies in mathematics and lead them toward rote learning (Hiebert, 2003). Hence, students must encounter challenging and unfamiliar problems that activate their higher-order thinking skills (HOTS).
HOTS involve explaining, interpreting, and making decisions. Students with HOTS can learn how to improve their achievement and address their weaknesses (Tanujaya, 2016). Hence, mathematics teachers should be knowledgeable about HOTS and about how to foster these skills in order to carry out quality mathematics instruction and assessment. For this reason, teacher education programs must help preservice mathematics teachers (PMTs) understand the significance of engaging students with higher-level tasks.
Several categorizations of HOTS have been proposed in the field of education. In Bloom’s taxonomy, the analysis, synthesis, and evaluation levels involve HOTS (McDavitt, 1994). Stein et al. (1996) described higher levels of cognitive demand as doing mathematics or using procedures with connections to concepts, understanding, or meaning. A national categorization of mathematics competence levels was also developed in the Monitoring and Evaluating Academic Skills Study (MEASS) project. This framework comprises four categories, the last two of which address students’ higher-order thinking skills (MoNE, 2015). The framework will be introduced during the presentation.
Although challenging tasks can promote students’ HOTS, research has shown that designing worthwhile mathematical tasks is not trivial (Leavy & Hourigan, 2020), and preservice teachers (PTs) struggle to create such tasks (Silver et al., 1996). This is unsurprising, since they have few opportunities to write tasks in their teacher education programs (Crespo & Sinclair, 2008). Relatively little is known about PTs’ ability to develop mathematical tasks (Crespo, 2003), and few studies examine how to help PTs recognize and discuss the quality of their mathematical tasks (Crespo & Sinclair, 2008). Thus, professional development (PD) studies must be conducted to increase PMTs’ capacity to develop tasks.
The study’s purpose is to improve PMTs’ skills in writing quality assessment tasks through feedback provided by different agents, namely researchers, peers, and students. It specifically aimed to answer the research question, “How does feedback provided by different agents improve the quality of assessment tasks developed by PMTs?” This study also aimed to introduce the framework for mathematics competence levels (MoNE, 2015) to the European research community. Eggen and Kauchak (2004) defined feedback as the information that teachers or students receive regarding the accuracy or relevance of their work through classroom practices. In this study, the term refers to the information preservice mathematics teachers receive from researchers, students, and their peers about the quality and cognitive levels of their tasks.
Method
The study’s data were drawn from design-based research that aimed (1) to examine and improve senior preservice middle school mathematics teachers’ (PMTs’) understanding and development of cognitively demanding, quality mathematical tasks for assessing students’ learning and (2) to develop, test, and revise a conjecture map and a professional development sequence serving the first aim. The research was conducted in an elective course that met for three class hours each week. Ten fourth-year PMTs enrolled in a four-year middle grades (grades 5-8) mathematics teacher education program at a public university in Türkiye participated in the course.

The course consisted of two phases, each including several PD activities. In the first phase, PMTs investigated sample tasks and criticized and revised them with respect to their quality and cognitive demand. In the second phase, they conducted an independent study in which they developed two cognitively demanding, quality assessment tasks. The development of both tasks was a cyclic process that required revisions based on the researchers’, peers’, and students’ feedback. This study focuses on the development of a contextual task written by one of the preservice teachers (Mert). The task development process involved four cycles and was based on the iterative task design cycle suggested by Liljedahl et al. (2007), consisting of predictive analysis, trial, reflective analysis, and adjustment. Our process emphasized the importance of feedback for re-developing the task and reflecting on experiences; hence, each cycle ended with a new version of the task. PMTs wrote the first versions of their contextual tasks at baseline. In task development cycle 1 (TDC1), the tasks were peer-reviewed by pairs of PMTs and critiqued during a class discussion regarding their cognitive demand and quality. In TDC2, the researcher provided written feedback on the second versions of the tasks.
In TDC3, PMTs interviewed middle school students using the third versions of their tasks, while in TDC4, they implemented the tasks in real classrooms. After TDC3 and TDC4, they shared students’ thinking with their peers in the PD course. They then revised their tasks in light of what they noticed about students’ thinking or difficulties and their peers’ feedback, and prepared the final versions. Mert’s reflections at the end of each cycle, his reflections on the interviews with students and the class implementation, his project report, and the post-interview provided the data for this study.
Expected Outcomes
At baseline, Mert developed a cognitively demanding quality assessment task set in a context in which a car turns around a center of rotation; it consisted of two multiple-choice questions. The first question asked students to compare the speeds of all wheels shown in the figures. The second asked them to choose the correct interpretation of the ratio of the front-right wheel to the rear-left wheel. Mert categorized the task’s cognitive level as the highest level (level 4) and provided reasonable explanations. In TDC1, peers critiqued the task and gave feedback on its pedagogical and mathematical qualities, such as clarity, appearance, cognitive level, and mathematical language. Mert changed the figures and the wording of the second question, asking students to compare the rear wheels’ speeds instead of the speeds of a front and a rear wheel. However, he thought the second version’s cognitive level was slightly weakened. In TDC2, Mert revised the second question’s options and its appearance in light of the researcher’s feedback, and judged the task to be of higher quality. He made radical changes to his task in TDC3: students’ perspectives led him to change the figure again and to convert the questions from multiple-choice to open-ended. He completely rewrote the second question, asking for the difference between the distances traveled by the right and left rear wheels, and wanted students to support their explanations with algebraic expressions. He did not revise his task in TDC4, since “it was sufficient to be a cognitively demanding quality task” (Mert). In sum, each cycle contributed to the task’s quality. The opportunity to enact the task with students, especially in a one-to-one setting, made the greatest contribution to its pedagogical and mathematical quality. Hence, this process revealed the significance of assessing students’ responses in order to recognize the quality of tasks (Norton & Kastberg, 2012).
References
Acknowledgment: The paper was supported by the H2020 project MaTeK, no. 951822.

Crespo, S. (2003). Learning to pose mathematical problems: Exploring changes in preservice teachers’ practices. Educational Studies in Mathematics, 52(3), 243–270.
Crespo, S., & Sinclair, N. (2008). What makes a problem mathematically interesting? Inviting prospective teachers to pose better problems. Journal of Mathematics Teacher Education, 11(5), 395–415.
Hiebert, J. (2003). What research says about the NCTM standards. In J. Kilpatrick, G. Martin, & D. Schifter (Eds.), A research companion to principles and standards for school mathematics (pp. 5–26). Reston, VA: NCTM.
Jäder, J., Lithner, J., & Sidenvall, J. (2015). A cross-national textbook analysis with a focus on mathematical reasoning: The opportunities to learn. Licentiate thesis, Linköping University.
Jones, K., & Pepin, B. (2016). Research on mathematics teachers as partners in task design. Journal of Mathematics Teacher Education, 19(2), 105–121.
Koh, K. H., Burke, L. E. C., Luke, A., Gong, W., & Tan, C. (2018). Developing the assessment literacy of teachers in Chinese language classrooms: A focus on assessment task design. Language Teaching Research, 22(3), 264–288. https://doi.org/10.1177/13621688166843
Leavy, A., & Hourigan, M. (2020). Posing mathematically worthwhile problems: Developing the problem-posing skills of prospective teachers. Journal of Mathematics Teacher Education, 23(4), 341–361. https://doi.org/10.1007/s10857-018-09425-w
Liljedahl, P., Chernoff, E., & Zazkis, R. (2007). Interweaving mathematics and pedagogy in task design: A tale of one task. Journal of Mathematics Teacher Education, 10(4–6), 239–249.
McDavitt, D. S. (1994). Teaching for understanding: Attaining higher order learning and increased achievement through experiential instruction. Technical report. Retrieved from https://files.eric.ed.gov/fulltext/ED374093.pdf
Ministry of National Education [MoNE] (2015). Akademik becerilerin izlenmesi ve değerlendirilmesi [Monitoring and evaluating academic skills]. Retrieved from https://abide.meb.gov.tr/
Norton, A., & Kastberg, S. (2012). Learning to pose cognitively demanding tasks through letter writing. Journal of Mathematics Teacher Education, 15, 109–130.
Silver, E. A., Mamona-Downs, J., & Leung, S. S. (1996). Posing mathematical problems: An exploratory study. Journal for Research in Mathematics Education, 27, 293–309.
Stevenson, H. W., & Stigler, J. W. (1992). The learning gap: Why our schools are failing and what we can learn from Japanese and Chinese education. New York: Summit Books.
Stiggins, R. J. (1991). Assessment literacy. Phi Delta Kappan, 72, 534–539.
Tanujaya, B. (2016). Development of an instrument to measure higher order thinking skills in senior high school mathematics instruction. Journal of Education and Practice, 7(21), 144–148.
Vacc, N. (1993). Questioning in the mathematics classroom. Arithmetic Teacher, 41(2), 88–91.