Session Information
16 SES 05 B JS, Students’ Computer and Information Literacy from a European Perspective. Findings from ICILS 2013. (Part 2)
Symposium Joint Session NW 09 with NW 16 continues from 16 SES 04 B JS
Contribution
Although computer-based tests (CBT), such as the simulation-based test of the International Computer and Information Literacy Study (ICILS), exhibit high face validity in measuring Computer and Information Literacy (CIL), many large-scale studies across Europe are still conducted using paper-pencil tests (PPT) with multiple-choice (MC) questions (e.g. the NEPS in Germany, Senkbeil, Ihme & Wittwer, 2013; Assessing Digital Competence in Norway, Hatlevik & Christophersen, 2013). The construct validity of PPTs measuring CIL by means of MC questions has so far scarcely been compared with that of MC-based or simulation-based CBTs. Available results suggest that a PPT does not measure the same construct as a performance-based CBT and shows lower construct validity (Senkbeil & Ihme, 2014), whereas an MC-based and a simulation-based CBT show equal construct validity when task complexity is comparable (Goldhammer, Kröhne, Keßel, Senkbeil & Ihme, 2014).
In this work, we investigate the factorial structure of CIL measures composed of paper-pencil or computer-based MC tasks and computer-based simulation tasks, examine their convergent and discriminant validity with respect to motivational variables, and compare these results across Europe. Twelve European countries participated in ICILS; their samples completed the ICILS test, which contains computer-based MC and simulation tasks. The German sub-sample additionally completed the NEPS (German National Educational Panel Study) ICT literacy test, a paper-pencil multiple-choice test (Senkbeil, Ihme & Wittwer, 2013), and a cognitive skills test. First results support a model with distinguishable factors for each task format (MC and simulation tasks) in the European samples as well as method factors for test medium (PPT vs. CBT), and show comparably high convergent and discriminant validity for the ICILS and the NEPS test with regard to cognitive skills and motivational patterns in the German sub-sample.
Practical implications of the results for constructing future CIL measures that comprise different item formats are discussed.
References
Goldhammer, F., Kröhne, U., Keßel, Y., Senkbeil, M. & Ihme, J. M. (2014). Diagnostik von ICT-Literacy: Multiple-Choice- vs. simulationsbasierte Aufgaben. Diagnostica, 60, 10-21.
Hatlevik, O. E. & Christophersen, K.-A. (2013). Digital competence at the beginning of upper secondary school: Identifying factors explaining digital inclusion. Computers & Education, 63, 240-247.
Senkbeil, M. & Ihme, J. M. (2014). Wie valide sind Papier-und-Bleistift-Tests zur Erfassung computerbezogener Kompetenzen? Diagnostica, 60, 22-34.
Senkbeil, M., Ihme, J. M. & Wittwer, J. (2013). The Test of Technological and Information Literacy (TILT) in the National Educational Panel Study: Development, empirical testing, and evidence for validity. Journal for Educational Research Online, 5, 139-161.