Session Information
09 SES 16 A, Assessing and Investigating ICT competencies
Paper Session
Contribution
The development of ICT skills among the population is one of the major goals of education systems worldwide. These skills have become necessary for full participation in contemporary society. The growing number of initiatives in this area raises the need to measure ICT skills on a large scale, so that the level and development of these skills can be monitored across groups of people differing in gender, cultural background, or socioeconomic status. Self-assessment questionnaires are commonly used in research on ICT literacy and appear to be a suitable tool for large-scale measurement of ICT skills, especially because of the low cost and time demands of their administration.
However, despite the frequent use of questionnaire items with rating scales in educational research, questions have been raised about their comparability across different (groups of) respondents. For example, if two students with the same level of ICT skills assess themselves on a five-point scale, (a) excellent, (b) very good, (c) good, (d) poor, (e) very poor, one might choose very good while the other chooses only good. This heterogeneity in reporting behavior may lead to erroneous conclusions about the students' relative levels of ICT skills.
Indeed, paradoxical findings have been documented at both the international (e.g. Kyllonen & Bertling, 2013; He & van de Vijver, 2016; Vonkova, Zamarro, & Hitt, 2018) and the within-country level (e.g. Vonkova & Hrabak, 2015; West et al., 2016). Specifically, in the ICT domain, Vonkova and Hrabak (2015) compared student self-assessments of ICT skills and found that students studying ICT reported a lower level of ICT skills than students studying business and pedagogy. These striking, counterintuitive results warrant further examination of differences in reporting behavior and of possible solutions to this problem. Recently, several innovative methodological approaches have been proposed to analyze and resolve it, such as the overclaiming technique (OCT), the anchoring vignette method, and the identification of response styles independent of item content. Here we focus on the overclaiming technique and its potential to enhance the comparability of self-reports among different groups of students.
Using the overclaiming technique in the domain of mathematics knowledge, Vonkova, Papajoanu, and Stipek (2018) identified notable differences in response patterns worldwide and across Europe. For example, they documented high exaggeration tendencies in Southern Europe and low exaggeration tendencies in Western Europe. Their results indicate that, in the European context, differences in response patterns might indeed cause problems with the comparability of self-assessment data, and that these differences require further examination at both the international and the within-country level in different domains. Their results also support the further use of the overclaiming technique as a promising means of increasing data comparability.
In our study, we employ the overclaiming technique in the domain of ICT skills measurement on a representative sample of Czech upper-secondary schools. The study builds on the previous literature by analyzing within-country differences in response patterns between different groups of students in more detail and by presenting a methodology that can be used to analyze response patterns of respondents internationally or within their respective countries.
Specifically, our research questions are:
(1) How do response patterns in self-assessments of ICT knowledge differ between students studying at different types of schools?
(2) How does adjusting student self-assessments of ICT knowledge using the overclaiming technique alter their relationship with external variables such as ICT test scores?
Method
The overclaiming technique was initially introduced as an alternative methodology for capturing socially desirable responding (Bensch, Paulhus, Stankov, & Ziegler, 2017). More recently, it has been proposed as one of the approaches to enhance the cross-cultural comparability of data. The basic principle of the method is to let respondents rate their familiarity with a set of concepts from a particular field of knowledge (e.g. history, mathematics), some of which (usually about 20%) are actually non-existent (foils). Based on respondents' ratings of existing and non-existing concepts, we can calculate four basic indices: the proportion of hits (PH; the proportion of existing concepts a respondent claims to be familiar with), the proportion of false alarms (PFA; the proportion of non-existing concepts a respondent claims to be familiar with), the index of accuracy (IA; calculated as IA = PH - PFA), and the index of exaggeration (IE; calculated as IE = (PH + PFA) / 2).
For use in the domain of ICT knowledge, we developed a set of existing ICT concepts (reals) and non-existing concepts (foils). The reals were chosen based on the digital competence areas of the DigComp framework (information, communication, content creation, safety). Examples of reals are "spam" and "promiscuous mode"; examples of foils are "cacheswitcher" and "swapping coprocessor". Respondents were asked to assess their familiarity with these concepts on a 4-point scale: (a) I have never heard of this word, I don't know it; (b) I have heard of this word, but I don't know what it means; (c) I partially know this word but can't explain it properly; (d) I know this word very well and can explain it. For each student in our sample, we calculate the averaged PH, PFA, IA, and IE indices as in Vonkova, Papajoanu, et al. (2018). We identify the response patterns of different groups of students based on their accuracy and exaggeration values. The PH index is then interpreted as an unadjusted ICT familiarity score and the index of accuracy as an OCT-adjusted ICT familiarity score.
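To make the index definitions concrete, the following minimal Python sketch computes the four indices for a single respondent. The mapping of the 4-point scale onto a binary familiarity claim is our illustrative assumption (here, points (c) and (d) count as a claim; the threshold is not specified in this abstract), and the function and variable names are hypothetical.

def oct_indices(real_ratings, foil_ratings, claim_threshold=3):
    """Return PH, PFA, IA, and IE for one respondent.

    real_ratings / foil_ratings: ratings of reals and foils on the
    4-point scale above, coded 1 = (a) ... 4 = (d).
    claim_threshold: lowest rating counted as a familiarity claim
    (an illustrative choice, not taken from the abstract).
    """
    ph = sum(r >= claim_threshold for r in real_ratings) / len(real_ratings)
    pfa = sum(r >= claim_threshold for r in foil_ratings) / len(foil_ratings)
    return {"PH": ph,              # proportion of hits
            "PFA": pfa,            # proportion of false alarms
            "IA": ph - pfa,        # index of accuracy (OCT-adjusted score)
            "IE": (ph + pfa) / 2}  # index of exaggeration

# Example: 16 reals of which 12 are claimed, 4 foils of which 1 is claimed
print(oct_indices([4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1, 1],
                  [1, 3, 2, 1]))
# -> {'PH': 0.75, 'PFA': 0.25, 'IA': 0.5, 'IE': 0.5}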
Expected Outcomes
Our preliminary results from the pilot study revealed differences between OCT-adjusted and unadjusted self-reports among different groups of students. First, the correlations between ICT test scores and OCT-adjusted and unadjusted self-reports are similar overall, though at the technical type of school the OCT-adjusted self-reports correlate significantly more strongly with ICT test scores than the unadjusted self-reports do. Second, students at the technical and ICT types of school report higher familiarity with both existing and non-existing concepts, which implies that their exaggeration index is much higher than at other types of schools. Third, we also documented gender differences in ICT test and overclaiming scores. For boys, the correlation between the test score and the OCT-adjusted ICT self-report is higher than that for the unadjusted index. Also, the exaggeration index is higher for boys than for girls. At the international level, the technique has shown great promise in adjusting student self-reports of mathematics knowledge (Vonkova, Papajoanu, et al., 2018). It remains to be investigated how the technique performs at the within-country level and in the domain of ICT knowledge, an issue investigated in our study.
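As a sketch of the correlational comparison described above, one could correlate an external ICT test score with both the unadjusted (PH) and the OCT-adjusted (IA) familiarity scores, for instance with NumPy. The arrays below are invented for illustration only and are not the study's data.

import numpy as np

test_score = np.array([55, 72, 64, 80, 48, 90, 60, 75])           # external ICT test (invented)
ph = np.array([0.50, 0.75, 0.80, 0.85, 0.45, 0.90, 0.70, 0.80])   # unadjusted self-report (PH)
pfa = np.array([0.25, 0.10, 0.40, 0.15, 0.20, 0.05, 0.30, 0.20])  # false alarms (PFA)
ia = ph - pfa                                                      # OCT-adjusted score (IA)

print("r(test, PH) =", round(np.corrcoef(test_score, ph)[0, 1], 3))
print("r(test, IA) =", round(np.corrcoef(test_score, ia)[0, 1], 3))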
References
Bensch, D., Paulhus, D. L., Stankov, L., & Ziegler, M. (2017). Teasing apart overclaiming, overconfidence, and socially desirable responding. Assessment. Advance online publication. doi:10.1177/1073191117700268
He, J., & van de Vijver, F. J. R. (2016). The motivation-achievement paradox in international educational achievement tests: Toward a better understanding. In R. B. King & A. B. I. Bernardo (Eds.), The psychology of Asian learners: A festschrift in honor of David Watkins (pp. 253-268). Singapore: Springer Science.
Kyllonen, P. C., & Bertling, J. (2013). Innovative questionnaire assessment methods to increase cross-country comparability. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), A handbook of international large-scale assessment data analysis: Background, technical issues, and methods of data analysis (pp. 277-286). London, England: Chapman Hall/CRC Press.
Vonkova, H., & Hrabak, J. (2015). The (in)comparability of ICT knowledge and skill self-assessments among upper secondary school students: The use of the anchoring vignette method. Computers & Education, 85, 191-202.
Vonkova, H., Papajoanu, O., & Stipek, J. (2018). Enhancing the cross-cultural comparability of self-reports using the overclaiming technique: An analysis of accuracy and exaggeration in 64 cultures. Journal of Cross-Cultural Psychology, 49(8), 1247-1268.
Vonkova, H., Zamarro, G., & Hitt, C. (2018). Cross-country heterogeneity in students' reporting behavior: The use of the anchoring vignette method. Journal of Educational Measurement, 55(1), 3-31.
West, M. R., Kraft, M. A., Finn, A. S., Martin, R. E., Duckworth, A. L., Gabrieli, C. F., & Gabrieli, J. D. (2016). Promise and paradox: Measuring students' non-cognitive skills and the impact of schooling. Educational Evaluation and Policy Analysis, 38(1), 148-170.