Session Information
16 SES 01 A, Computer Science and Computational Thinking
Paper Session
Contribution
Within the last decade, several national curricula have introduced computational thinking (CT), either as a dedicated subject or integrated within existing subjects (Bocconi et al., 2022). Several reviews of CT highlight the lack of a unified definition (e.g. Weintrop et al., 2016), which in turn leads to disparity in how CT is assessed. Nevertheless, there is a consensus that CT includes concepts and practices that are foundational in computing and crucial in a wide range of problem-solving. With the ever-growing use of digital tools in schools and across professional practices, introducing CT in learning contexts becomes increasingly relevant.
CT frameworks vary from generic to subject-specific. Examples of more generic frameworks are Brennan and Resnick (2012) and Grover and Pea (2013), which overlap on aspects such as decomposition, abstraction, algorithms, and debugging. The more subject-specific frameworks relevant to the integration of CT into mathematics and science highlight approaches such as formulating problems, gathering and analysing data, and modelling (e.g. Weintrop et al., 2016). Integrating CT into curricula will affect learning processes and should therefore have implications for assessment.
Only a few CT frameworks are operationalised as assessment frameworks. There are numerous assessment tools, but many focus on programming practices or form part of a computer science subject. Furthermore, the tools are often automated and provide summative assessment, and many frameworks are targeted towards intervention studies rather than assessment criteria for educators. As CT is a growing educational field, there is a need for a CT framework that teachers can apply and that can inform their design of teaching and assessment. Such guidelines for both teachers and students can be understood as shared learning intentions and criteria (Wiliam, 2011) and as measures that may enable educators to evaluate the effectiveness of incorporating CT in curricula (Grover & Pea, 2013).
Discussions regarding the assessment of CT have frequently focused on CT as a generic skill (e.g. Román-González et al., 2019) or on assessments of students’ programming or computing skills (Tang et al., 2020). There is a need to develop an assessment framework that provides formative and summative assessment relevant to integrating CT into subjects (Tang et al., 2020).
CT was included in several subjects in the Norwegian curriculum in 2020. In mathematics, students are introduced to basic programming concepts such as variables, loops, and conditions. Building on the skills developed in mathematics, students are expected to use programming in science, arts and crafts, and music. Although the curriculum uses the term programming, it is broadly understood as a concept close to CT. Through programming, students should explore the subject matter, thereby enhancing their learning outcomes (Norwegian Directorate for Education and Training, 2019).
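To illustrate the kind of task these curriculum aims imply, the following is a minimal sketch in Python of how variables, a loop, and a condition might be combined to explore a mathematical pattern. It is a hypothetical classroom example, not drawn from the study’s materials or the curriculum documents:

    # Hypothetical pupil task: which of the first 20 triangular numbers are even?
    total = 0                      # variable accumulating the current triangular number
    for n in range(1, 21):         # loop over the first 20 natural numbers
        total = total + n          # n-th triangular number: 1 + 2 + ... + n
        if total % 2 == 0:         # condition: test whether the number is even
            print(n, total, "even")
        else:
            print(n, total, "odd")

In such a task the programming concepts are a means of exploring the mathematics (noticing the parity pattern of triangular numbers) rather than a learning goal in themselves, mirroring how the curriculum positions programming within the subjects.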
The aim of this study is to explore assessment constructs aligned both with relevant CT definitions and with subject-matter knowledge. Thus, in this study we raise the following research questions:
RQ1: What constructs inform a framework for the assessment of CT?
RQ2: How can a set of CT assessment constructs support practitioners’ teaching and assessment?
Formative and summative assessment are related, and both play an important role in students’ learning (Wiliam, 2011). CT is complex, and it is therefore recommended to develop rich and complementary systems of assessment (Grover, 2017; Román-González et al., 2019). To create the framework, we draw on a literature review, teacher interviews, and classroom observations. The framework is furthermore tested in close collaboration with teachers.
Method
The overall study was designed to examine the use of CT in primary and secondary education; this paper focuses on the assessment of CT. The overall research design is design-based research (Juuti & Lavonen, 2006). The project [blinded] is a longitudinal study that addresses the emerging need for a CT assessment framework that can support teachers’ practice as well as teacher education. During the initial phase, a literature review of CT assessment strategies was conducted. Concurrently, classroom observation sessions and semi-structured teacher interviews were carried out in order to map the status quo and understand teachers’ needs. In phase two, a criteria framework for the assessment of CT was developed, based on the results of the review and the needs identified. In the third phase, an intervention was designed together with teachers and conducted in two classrooms at two schools over two semesters (the first trial in spring 2022 and the second in spring 2023), primarily in mathematics and science lessons. Results from the interventions were then evaluated. Data were collected by means of focus group interviews with all the teachers in the study prior to data collection, observations (one video camera focusing on the teacher as well as GoPro cameras worn by the students), and interviews with the teachers and group interviews with students at the end of each trial. Integration of assessment principles into the teaching units was a central design principle throughout the intervention. The data corpus includes video and interview material, which was first content-logged and then categorised. The categories were initially developed by screening the literature review and were later elaborated further. A subset of these categories was transcribed and analysed.
Expected Outcomes
In this paper, we advance a framework for both formative and summative assessment of CT within mathematics and science. The literature review yielded 46 articles, of which 31 were included. Several assessment constructs emerged from the review. A substantial proportion of the articles took Brennan and Resnick’s (2012) framework as their point of departure. The review also indicated that the focus has mostly been on generic formative assessment, in line with Tang et al.’s (2020) findings. Grover (2017) recommends “systems of assessment”, and Román-González et al. (2019) recommend the use of multiple means of assessment. Drawing on and extending Tang et al.’s (2020) and Grover’s (2017) findings and their directions for further research, we focus on assessment constructs that align with corresponding CT definitions as well as with subject-matter knowledge, in order to highlight the integration between CT and subject domains. The framework developed in this study operationalises the identified CT assessment constructs so that they inform both formative and summative assessment across different subject contexts, spanning both schools and teacher education. The aim is to contribute to better integration between CT and subject domains as well as a tighter coupling between subject-domain assessment and CT assessment.
References
Bocconi, S., Chioccariello, A., Kampylis, P., Dagienė, V., Wastiau, P., Engelhardt, K., Earp, J., Horvath, M., Jasutė, E., & Malagoli, C. (2022). Reviewing computational thinking in compulsory education: State of play and practices from computing education (No. JRC128347). Publications Office of the European Union.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of computational thinking. Proceedings of the 2012 annual meeting of the American Educational Research Association, Vancouver, Canada.
Grover, S. (2017). Assessing algorithmic and computational thinking in K-12: Lessons from a middle school classroom. In P. J. Rich & C. B. Hodges (Eds.), Emerging Research, Practice, and Policy on Computational Thinking (pp. 269-288). Springer International Publishing. https://doi.org/10.1007/978-3-319-52691-1_17
Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher, 42(1), 38-43. https://doi.org/10.3102/0013189X12463051
Juuti, K., & Lavonen, J. (2006). Design-based research in science education: One step towards methodology. Nordic Studies in Science Education, 2(2), 54-68.
Norwegian Directorate for Education and Training. (2019). Curriculum for Natural Science. https://www.udir.no/lk20/nat01-04?lang=eng
Román-González, M., Moreno-León, J., & Robles, G. (2019). Combining assessment tools for a comprehensive evaluation of computational thinking interventions. Computational Thinking Education, 79-98.
Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of empirical studies. Computers & Education, 148, 103798. https://doi.org/10.1016/j.compedu.2019.103798
Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology, 25(1), 127-147. https://doi.org/10.1007/s10956-015-9581-5
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14. https://doi.org/10.1016/j.stueduc.2011.03.001