Session Information
16 SES 02 A, Technology in Assessment
Paper Session
Contribution
Many daily activities at home, at work, and at school require that people can use digital technologies and engage with digital information (Fraillon et al., 2018) in a secure, critical, cooperative, and creative way to participate effectively in our digital society. Digital competence can therefore be seen as a key competence for lifelong learning (Carretero et al., 2017). According to the DigComp 2.1 framework (Carretero et al., 2017), digital competences are not defined as merely technical, but as key transversal skills, knowledge, and attitudes. DigComp 2.1 describes the competences required today to use digital technologies in a confident, critical, cooperative, and creative way in activities in the context of work, learning, and participation in society (Carretero et al., 2017). Previous research (e.g., Authors, 2013; Ilomäki et al., 2016) recognizes the importance of developing digital competences at an early age (Juhaňák et al., 2019). Although children are exposed to all kinds of digital information at a young age through films, games, apps, and various devices such as smartphones and tablets, they are not as computer savvy as is often assumed (Authors, 2018; Kirschner & De Bruyckere, 2017). This is exemplified by van Deursen and Helsper (2015), whose participants mainly mastered basic operational (and communication) skills but struggled with higher-order digital skills, such as using technology for creative purposes.
Furthermore, access to technological opportunities, and those opportunities themselves, are constantly changing and are challenging educational practices (Siddiq & Scherer, 2019). In response to these developments, educational systems around the world are expected to emphasize the importance of digital competences by including them in their national curricula as formal learning objectives (Ilomäki et al., 2016). Alongside these curricular innovations, reliable and valid assessment instruments are needed to monitor students' mastery of these competences (Siddiq et al., 2016).
In the past decade, research on the assessment of digital competences has been growing (see e.g., Fraillon et al., 2018; Heitink, 2018). However, most instruments measure digital competences in an indirect and inauthentic way, using self-report or multiple-choice formats (Siddiq et al., 2016). These forms of assessment only provide an indirect judgement of actual performance and do not capture the authentic contexts in which people usually perform (Law et al., 2009; Litt, 2013). Next to the problem of indirect assessment, most instruments measuring students' digital competences are based on the principles of classical test theory (CTT). Research (e.g., Blömeke et al., 2015) shows that item response theory (IRT), in contrast to CTT, supports the validation of assessment instruments to obtain standardized measures of primary students' competences that are not sample dependent.
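To make the contrast with CTT concrete, consider the Rasch model, the IRT model applied later in this paper (the formulation below is the standard one, not taken from the study itself). The probability that student i answers item j correctly depends only on the difference between a person parameter and an item parameter:

\[
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},
\]

where \(\theta_i\) is the ability of student i and \(b_j\) is the difficulty of item j, both on a common logit scale. Because person and item parameters are separated in this way, item difficulties can in principle be estimated independently of the ability distribution of the particular sample, which is what underlies the sample-independence claim above.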
Therefore, this paper highlights the importance of test development and validation, specifically for the assessment of students' digital competences. This aim can be translated into the following research objective: to construct a standardized and performance-based assessment instrument that can be used to measure primary school students' digital competences in a direct, authentic, and valid way.
The developed assessment instrument is characterized by performance-based authentic assessment, in which students' actual competences are elicited through the application of these competences in authentic tasks that trigger the thinking processes and actions required to perform with digital technology (e.g., Heitink, 2018). This study focuses particularly on primary school students' proficiency in information and data literacy, communication and collaboration, digital content creation, and the safe and secure use of digital information. In what follows, we briefly explain the steps of item development and the validation process of the assessment instrument. Finally, we discuss the practical contributions of the developed scale.
Method
Based on the curriculum objectives of the Flemish government concerning ICT in primary school, three core objectives were selected for measurement: (1) students can use digital technology in a safe way; (2) students can use digital technology to search for, process, and store digital information; and (3) students can use digital technology to communicate in a safe, responsible, and effective way. This resulted in 13 higher-order competences and 20 technical-operational skills, which were registered in a test matrix. An expert panel of test developers, teachers, and ICT researchers reviewed this matrix of competences. Students had to demonstrate their level of competence by completing tasks on a computer using applications and software. Consequently, a computer-based environment with simulation-based tasks was developed. The simulated tasks emulate real-world situations, e.g., recognizing risky behavior on the Internet, while the system automatically logs students' click behavior. The higher-order competences and technical-operational skills were clustered into three assignments. Throughout the item development process, the items were administered multiple times to students and an expert panel so that necessary adaptations could be made. To validate the performance-based digital competence test, the 84 developed items of the instrument were administered to a representative sample of 445 sixth-grade primary school students (aged between 10.42 and 14.60 years) in Flanders in May 2021. First, a classical item analysis was conducted to investigate the difficulty and discriminatory value of each item (see the sketch below). In a second step, a measurement scale was developed using item response theory (IRT). The items were checked for dimensionality and model-data fit. Finally, the reliability of the scale was calculated.
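The paper does not report the exact statistics or cut-offs used, but for dichotomously scored items a classical item analysis typically amounts to computing a difficulty index (proportion correct) and a discrimination index (corrected item-total correlation) per item. The following minimal Python sketch illustrates that step under those assumptions; the response data and the flagging thresholds are illustrative, not those used in the study.

```python
import numpy as np

def classical_item_analysis(responses: np.ndarray):
    """Classical item statistics for a dichotomous (0/1) response matrix.

    responses: array of shape (n_students, n_items).
    Returns per-item difficulty (proportion correct) and discrimination
    (corrected item-total point-biserial correlation).
    """
    n_students, n_items = responses.shape
    difficulty = responses.mean(axis=0)  # p-value: higher = easier item

    total = responses.sum(axis=1)
    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest = total - responses[:, j]  # exclude item j from the total score
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

if __name__ == "__main__":
    # Simulated data matching the study's dimensions (445 students, 84 items).
    rng = np.random.default_rng(0)
    data = (rng.random((445, 84)) < rng.uniform(0.2, 0.9, size=84)).astype(int)
    p, r = classical_item_analysis(data)
    # Hypothetical flagging rules for illustration only.
    print(f"flagged: {(p > 0.95).sum()} too easy, {(p < 0.10).sum()} too hard, "
          f"{(r < 0.20).sum()} low discrimination")
```

Using the corrected total (the total score minus the item itself) avoids inflating the discrimination index, since an item would otherwise be correlated with a sum that already contains it.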
Expected Outcomes
Simulation-based tasks that reflect real-life situations, i.e., performance-based assessment, offer a way to measure digital competences directly. During the psychometric validation of the ICT scale, 35 of the 84 items were removed based on the difficulty and discrimination parameters (19 items too easy, 7 items too difficult), dimensionality (the scale is unidimensional; 5 items were removed because of too-low factor loadings), and model-data fit (Rasch model; no items were removed). The final measure consists of 49 items. The results indicate that the instrument is reliable (EAP reliability = .901). Thus, a digital competence scale was developed through item response theory that can be used in primary education to measure students' digital competences in a direct and authentic way, in line with the Flemish ICT curriculum. This study leads to several practical implications. First, at the school level, the results of the assessment instrument could inform school leaders and teachers about the level of digital competence among students at their school and about which competences need more attention in class. Furthermore, teachers can also test themselves on the extent to which they possess these competences and, on that basis, engage in professional development in a goal-oriented way. Next, such large-scale direct assessments can serve as a starting point for curriculum developers and policy makers to gain insight into the general level of digital competence and to make informed choices in the design of ICT curricula and education. Further research might explore direct and authentic assessment of other, underexposed sub-competences of digital competence, such as problem solving, digital creativity, and digital citizenship. Finally, further work should be undertaken to investigate differences in digital competences of students at the end of primary education and to identify characteristics at the student, teacher, and school level that might help explain these differences.
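For readers unfamiliar with the reported reliability index: EAP reliability is commonly defined from the expected a posteriori (EAP) ability estimates as the share of their variance not attributable to posterior uncertainty (this is a standard definition used in common IRT software; the paper does not specify its exact computation):

\[
\mathrm{rel}_{\mathrm{EAP}} = \frac{\operatorname{Var}\!\big(\hat{\theta}^{\mathrm{EAP}}\big)}{\operatorname{Var}\!\big(\hat{\theta}^{\mathrm{EAP}}\big) + \frac{1}{N}\sum_{i=1}^{N} \sigma^2_{\mathrm{post},\,i}},
\]

where \(\hat{\theta}^{\mathrm{EAP}}_i\) is student i's posterior mean ability and \(\sigma^2_{\mathrm{post},\,i}\) the corresponding posterior variance. Under this definition, a value of .901 indicates that roughly 90% of the variance in the ability estimates reflects systematic differences between students rather than measurement uncertainty.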
References
Authors (2013).
Authors (2018).
Blömeke, S., Gustafsson, J. E., & Shavelson, R. J. (2015). Beyond dichotomies: Competence viewed as a continuum. Journal of Psychology, 223(1), 3–13. https://doi.org/10.1027/2151-2604/a000194
Carretero, S., Vuorikari, R., & Punie, Y. (2017). DigComp 2.1: The Digital Competence Framework for Citizens with eight proficiency levels and examples of use. Retrieved from https://op.europa.eu/s/oNl8
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Duckworth, D. (2018). Preparing for life in a digital age: The IEA International Computer and Information Literacy Study international report. Springer International Publishing. Retrieved from https://www.iea.nl/sites/default/files/2020-04/IEA%20International%20Computer%20and%20Information%20Literacy%20Study%202018%20International%20Report.pdf
Heitink, M. (2018). Eliciting teachers' and students' technological competences: Assessing technological skills in practice (Doctoral dissertation, University of Twente, Enschede, the Netherlands). Retrieved from https://doi.org/10.3990/1.9789036546850
Ilomäki, L., Paavola, S., Lakkala, M., & Kantosalo, A. (2016). Digital competence – an emergent boundary concept for policy and educational research. Education and Information Technologies, 21(3), 655–679. https://doi.org/10.1007/s10639-014-9346-4
Juhaňák, L., Zounek, J., Záleská, K., Bárta, O., & Vlčková, K. (2019). The relationship between the age at first computer use and students' perceived competence and autonomy in ICT usage: A mediation analysis. Computers & Education, 141, 1–14. https://doi.org/10.1016/j.compedu.2019.103614
Kirschner, P., & De Bruyckere, P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67, 135–142. https://doi.org/10.1016/j.tate.2017.06.001
Law, N., Lee, Y., & Yuen, H. K. (2009). The impact of ICT in education policies on teacher practices and student outcomes in Hong Kong. In F. Scheuermann & F. Pedro (Eds.), Assessing the effects of ICT in education: Indicators, criteria, and benchmarks for international comparisons (pp. 143–164). OECD. Retrieved from https://www.academia.edu/2579082/The_impact_of_ICT_in_education_policies_on_teacher_practices_and_student_outcomes_in_Hong_Kong
Litt, E. (2013). Measuring users' internet skills: A review of past assessments and a look toward the future. New Media and Society, 15(4), 612–630. https://doi.org/10.1177/1461444813475424
Siddiq, F., & Scherer, R. (2019). Is there a gender gap? A meta-analysis of the gender differences in students' ICT literacy. Educational Research Review, 27, 205–217. https://doi.org/10.1016/j.edurev.2019.03.007
Siddiq, F., Hatlevik, O. E., Olsen, R. V., Throndsen, I., & Scherer, R. (2016). Taking a future perspective by learning from the past: A systematic review of assessment instruments that aim to measure primary and secondary school students' ICT literacy. Educational Research Review, 19, 58–84. https://doi.org/10.1016/j.edurev.2016.05.002
van Deursen, A. J. A. M., & Helsper, E. J. (2015). The third-level digital divide: Who benefits most from being online? In L. Robinson, S. R. Cotton, & J. Schulz (Eds.), Communication and Information Technologies Annual (pp. 29–52). Emerald Group Publishing Limited.