Session Information
16 SES 01 A, Computer Science and Computational Thinking
Paper Session
Contribution
Computational Thinking (CT) regained interest from researchers, policymakers, and educators in the aftermath of Wing’s (2006) position article, where CT was defined as a fundamental skill that “includes a range of mental tools that reflect the breadth of the field of computer science” (p. 33). However, despite the apparent consensus that CT is a crucial skill, one that has been implemented in school curricula in several countries (Hsu et al., 2019), there is limited consensus as to how CT should be defined.
Brennan and Resnick’s (2012) definition divides CT into three key dimensions: computational concepts, i.e., understanding fundamental programming concepts such as loops, operators, and conditionals; computational practices, which concern the processes of thinking and learning; and computational perspectives, which concern how a person understands themselves, their connections with others, and the technical world surrounding them, and how these understandings evolve. However, Brennan and Resnick’s (2012) understanding of CT is closely connected to the Scratch environment, which might introduce constraints to their framework. Based on their review of the state of the field, Grover and Pea (2013) described CT as comprising abstractions and pattern generalizations; systematic processing of information; algorithmic notions of flow of control; structured problem decomposition; conditional logic; efficiency and performance constraints; debugging and systematic error detection; and iterative, recursive, and parallel thinking. Weintrop et al.’s (2016) CT definition focuses on four main categories: data practices, modelling and simulation practices, computational problem-solving practices, and systems thinking practices, aiming at a nuanced understanding of CT in mathematics and science. Shute et al. (2017) proposed a model for CT that aimed to be useful across disciplines and instructional settings.
They defined “CT as the conceptual foundation required to solve problems effectively and efficiently (i.e., algorithmically, with or without the assistance of computers) with solutions that are reusable in different contexts” (p. 151). Consequently, they understood CT as a logical way of thinking. Further, they categorized CT into decomposition, abstraction, algorithms, debugging, iteration, and generalization.
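To make these abstract component lists concrete, the following minimal sketch (our own illustration, not taken from any of the cited frameworks) shows how several of the components named above, such as decomposition, abstraction, algorithmic/iterative thinking, generalization, and debugging, can surface even in a few lines of ordinary code:

```python
def mean(values):
    """Abstraction: the idea of 'average' captured once, reusable anywhere."""
    total = 0
    for v in values:  # algorithmic and iterative thinking: an explicit loop
        total += v
    return total / len(values)

def describe(values):
    """Decomposition: a larger task (describing data) split into smaller parts."""
    spread = max(values) - min(values)
    return spread, mean(values)  # generalization: reusing the same abstraction

# Debugging: simple checks that would catch a faulty change to mean().
assert mean([2, 4, 6]) == 4
assert describe([1, 5, 3]) == (4, 3.0)
```

The function names (`mean`, `describe`) are of course hypothetical; the point is only that the components the frameworks enumerate are observable practices, not tied to any particular environment.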
Tang et al. (2020) argued that an essential difference between the various definitions of CT is whether they focus on CT as programming and computing, or on CT as competencies needed both for domain-specific knowledge and for general problem-solving. Shute et al. (2017) raised an important issue in their definition of CT, namely the need to define CT across different contexts. Yadav et al. (2022) pointed out the lack of studies focusing on CT in initial teacher education, highlighting the need to develop not necessarily a consensual definition of CT, but rather a working framework that can span both school and teacher education.
Consequently, this study aims to develop a flexible model for CT competencies that can be used across different education levels, by teachers, teacher educators and student teachers. To do this, we investigate the following research questions:
RQ1: What characterizes the definitions and operationalizations of CT used in empirical studies of CT?
RQ2: What are the converging and diverging understandings of CT used in empirical studies of CT?
In our study, we view CT as a ‘boundary object’, drawing on Star and Griesemer (1989). As a boundary object, CT is viewed as an ‘ill-structured’ concept that has resulted in a definitional tug-of-war. However, from a boundary object perspective, it is this very lack of consensus that can contribute to developing a model that is flexible and adaptable across different contexts.
Method
To create an overview of the characteristics of definitions and operationalizations of CT, we conducted a systematic review, guided by the seven-step procedure proposed by Fink (2019) to ensure independently reproducible results: 1) we identified the research questions by conducting a systematic umbrella review (Authors, 2022); 2) we identified search terms together with inclusion and exclusion criteria; 3) the results from the database searches were screened using Rayyan; 4) a pilot review was conducted; 5) the systematic review was conducted, with two coders coding each article based on a codebook agreed upon in advance; 6) the results were synthesized drawing on the directed content analysis approach described by Hsieh and Shannon (2005); and 7) a descriptive review was performed that led to the CT framework.
The database searches were run in eight databases (Scopus, ProQuest, Web of Science, ACM Digital Library, ERIC, EBSCO, IEEE Xplore, and JSTOR), selected based on the experiences from the umbrella review (Authors, 2022). The search terms were combinations of computational thinking, algorithmic thinking, problem-solving, programming, and coding, crossed with different levels of education ranging from primary school to teacher education, including abbreviations and synonyms. The search was limited to journal articles published in English between 2012 and 2022. The database searches returned 2253 articles, including duplicates. After removing duplicates, 1526 articles were imported into Rayyan for screening. The screening process, in which two researchers screened each abstract, reduced the number of included articles to 179. In this process, articles were excluded if they were not empirical, focused on special education or non-compulsory education, or addressed pure computer science in higher education. After the screening, each article was coded by two coders to support inter-coder reliability. The codes were decided in advance based on the research questions.
The results were cross-checked and discussed between coders. Some of the articles were excluded during the coding process based on the inclusion and exclusion criteria. Where there were disagreements between the coders, the first and second authors made a final decision. After this step, 113 articles were included. The data were synthesized and further analysed to answer the research questions.
Expected Outcomes
The 113 empirical articles used a wide variety of CT definitions, often from a generic viewpoint. Interestingly, most of the empirical articles included in our review were published after 2017, supporting the claim of increasing interest in CT. Our preliminary results indicate that Brennan and Resnick’s (2012) understanding of CT is the most used framework. However, this could be an obstacle, as Brennan and Resnick’s definition is based on the Scratch environment. Although Shute et al. (2017) claimed that their framework is adaptable across disciplines and instructional settings, it appears to be less used. Further analysis might reveal whether there are contextual differences between the various frameworks. Furthermore, there are indications that the operationalization of CT revolves around constructs such as abstraction, decomposition, pattern recognition, algorithmic design, evaluation, and generalization. Based on these preliminary findings, there is a need for a model of CT competencies that encompasses different CT perspectives. By identifying the different indicators of CT, collecting the most frequent ones, and dividing them into subject-specific or generic approaches to CT, we arrived at several dimensions of CT competencies. These competencies take into account the perspectives of students, student teachers, teacher educators, and teachers, aiming to ensure the flexibility of the CT competencies model across education levels. Furthermore, the CT competencies model targets both generic and subject-specific approaches.
References
Authors (2022)
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of computational thinking [Paper presentation]. Annual American Educational Research Association Meeting, Vancouver, BC, Canada (pp. 1–25).
Fink, A. (2019). Conducting research literature reviews: From the internet to paper. Sage Publications.
Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher, 42(1), 38–43. https://doi.org/10.3102/0013189x12463051
Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
Hsu, Y.-C., Irie, N. R., & Ching, Y.-H. (2019). Computational thinking educational policy initiatives (CTEPI) across the globe. TechTrends, 63, 260–270. https://doi.org/10.1007/s11528-019-00384-4
Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research Review, 22, 142–158. https://doi.org/10.1016/j.edurev.2017.09.003
Star, S. L., & Griesemer, J. R. (1989). Institutional ecology, ‘translations’ and boundary objects: Amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39. Social Studies of Science, 19(3), 387–420. https://doi.org/10.1177/030631289019003001
Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of empirical studies. Computers & Education, 148, 103798. https://doi.org/10.1016/j.compedu.2019.103798
Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology, 25(1), 127–147. https://doi.org/10.1007/s10956-015-9581-5
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35. https://doi.org/10.1145/1118178.1118215
Yadav, A., Caeli, E. N., Ocak, C., & Macann, V. (2022). Teacher education and computational thinking: Measuring pre-service teacher conceptions and attitudes. In Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education (Vol. 1, pp. 547–553).