Session Information
09 SES 12 B, Reimagining Assessment Practices and Teacher Autonomy
Paper Session
Contribution
Children today are growing up in a digitally connected world, which sets them apart from previous generations. For example, 42% of 5- to 7-year-olds have their own tablet and 93% of 8- to 11-year-olds spend an average of 13.5 hours online (Ofcom 2022). Digital media provides opportunities for easier access to information and communication with peers. However, it also presents a range of risks, especially for children, who are particularly vulnerable due to their young age. This becomes clear when they are confronted with violent, sexual, advertising, or judgmental content in the digital space (Livingstone et al. 2015). Other challenges in digital communication and information channels include fake news, propaganda, and deepfakes. Against this background, children need skills that enable a critical examination of information. For this reason, information evaluation is considered an important subskill for social participation and for learning both inside and outside of school.
When examining the internet preferences of children and young people, it becomes apparent that they are primarily interested in extracurricular activities rather than the child-friendly services commonly discussed in school settings, such as children's search engines. The four most frequent internet activities are using WhatsApp, watching films and videos, using YouTube, and using search engines (Feierabend et al. 2023). Accordingly, WhatsApp, YouTube, and TikTok are the most popular (social media) platforms (Reppert-Bismarck et al. 2019). The evaluation of content is not limited to online research; it can also occur in other scenarios, such as browsing the internet for entertainment or out of boredom. The strategies for evaluating content vary depending on the purpose of engaging with it (Weisberg et al. 2023), allowing information, data, and content to be assessed from different angles. One approach to evaluating content is to verify its credibility. In the research literature, credibility encompasses multiple aspects, including the trustworthiness of the content, such as recognizing its intention, and the expertise of the author. However, studies show that young people tend to lack critical evaluation skills when it comes to the credibility of online content (Kiili et al. 2018) and are insufficiently prepared to verify the truthfulness of information (Hasebrink et al. 2019). In the context of social media in particular, the question of the realism of shared content (e.g., factuality or plausibility) arises. Recipients are faced with the challenge of multiplicity resulting from the different 'realities' on social media. These realities are shaped by different motivations, attitudes, and political or social contexts, which can blur boundaries (Cho et al. 2022). Overall, the evaluation of online content is influenced by various factors. For instance, research suggests that reading competence affects the evaluation process. Furthermore, socioeconomic status has been found to influence the digitalization-related skills of young people (see ICILS results). Another important aspect is the influence of platform-specific knowledge, such as understanding the YouTube algorithm, and of topic-specific knowledge, such as familiarity with the subject of a news video. In addition, the design of both the platform and the content can have an impact, including factors such as image-to-text ratio, layout, effects, and the focus of the central message.
To what extent these findings apply to primary school children is unclear, as most empirical results relate to adults or adolescents. Therefore, the overarching goal of the project is to develop a standardized measurement instrument for primary school children in order to assess the extent to which they are able to evaluate internet content. The creation of a standardized measurement instrument involves several sub-steps, which are outlined below.
Method
Model
The development of a measurement instrument requires a theoretical and empirical foundation. To our knowledge, only a limited number of models specifically address the evaluation of online content by primary school children. We therefore examined constructs related to the subcompetence of 'evaluation' in order to develop a theoretically and empirically grounded measurement model. For this purpose, we used normatively formulated standards, theoretical models, and empirical studies that systematize, assess, or discuss information, media, digital, internet, and social media skills. The analysis of these constructs yields various criteria for evaluating online content, such as credibility or realism. For instance, context is crucial when evaluating content (e.g., advertising content; Purington Drake et al. 2023). As most of these analyses do not relate to primary schools, all German curricula (e.g., those based on DigComp, Ferrari 2013) were also examined for relevant subcompetencies and content areas. The aim is to compare the research results with normative requirements in the primary school sector to ensure that competence targets are not set unrealistically high.
Assessment instrument
Based on the measurement model, we developed a digital performance test with 20 multiple-choice tasks. To increase content validity, the instrument includes multimodal test items drawn from the age group's most popular platforms (e.g., YouTube). The operationalization includes platform-specific phenomena (e.g., clickbait). Assessment criteria were derived for each content area and subcompetency and adapted to the specific platform content, such as a promotional video with child influencers. Expert interviews in the online children's sector additionally contributed to the development of age-appropriate content and evaluation criteria (Brückner et al. 2020).
Validation steps/procedures
To validate the 20 test items, a qualitative comprehensibility analysis was conducted in small group discussions with school and university experts (n=12). Subsequently, five children were observed using the think-aloud method while they solved the test items (Brandt and Moosbrugger 2020). Both validation steps led to linguistic and content-related adjustments.
Pilot study
An initial test of the measurement instrument was conducted with 81 pupils (56.8% female) in Grades 3 and 4 (age: M=10.4, SD=0.64). For 57 children, parental permission was given to provide information on socioeconomic status (HISEI=47.44, SD=16.42); 51.9% predominantly speak another language at home. The aim of the pilot study was to perform an initial descriptive item analysis to determine task difficulty, variance, and selectivity. The calculation of an overall score requires item homogeneity, for which high selectivity indices serve as an initial indication (Kelava and Moosbrugger 2020).
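For illustration, the following is a minimal sketch of the kind of descriptive item analysis described above, not the project's actual analysis code. It assumes a binary-scored response matrix (pupils × items), computes the difficulty index as percentage correct, the item variance, and the selectivity as the corrected (part-whole) item-total correlation; all names, thresholds, and the simulated data are illustrative.

```python
import numpy as np

def descriptive_item_analysis(responses: np.ndarray):
    """Descriptive item analysis for a binary response matrix.

    responses: 2-D array (n_pupils x n_items), 1 = correct, 0 = incorrect.
    Returns per item: difficulty (percentage correct), variance, and
    corrected item-total correlation (selectivity).
    """
    n_pupils, n_items = responses.shape
    difficulty = responses.mean(axis=0) * 100        # P_i in percent
    variance = responses.var(axis=0, ddof=1)         # item variance
    total = responses.sum(axis=1)                    # raw test score per pupil

    selectivity = np.empty(n_items)
    for i in range(n_items):
        rest = total - responses[:, i]               # part-whole correction
        selectivity[i] = np.corrcoef(responses[:, i], rest)[0, 1]
    return difficulty, variance, selectivity

# Example with simulated data for 81 pupils and 20 items (illustrative only)
rng = np.random.default_rng(0)
data = (rng.random((81, 20)) < rng.uniform(0.25, 0.78, 20)).astype(int)
P, var, r_it = descriptive_item_analysis(data)
print("items with 45 <= P_i <= 78:", int(np.sum((P >= 45) & (P <= 78))))
print("items with low selectivity (r_it < .30):", int(np.sum(r_it < 0.30)))
```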
Expected Outcomes
The results of the piloting showed that 15 of the 20 test items had a task difficulty of 45 ≤ Pi ≤ 78. Five items were more difficult (25 ≤ Pi ≤ 39); these items primarily dealt with phishing, clickbait, the use of third-party data, and bots. The item-total correlations showed an inconsistent picture across tasks, resulting in low selectivity indices (rit < .30) in some cases. Due to the small sample size, it was not possible to determine definitively whether the data had a unidimensional or multidimensional structure (principal component analysis with varimax rotation). As a result, the selectivity indices were not interpreted further (Kelava and Moosbrugger 2020). It is not surprising that students struggled with test tasks involving deception and interference with personal rights, as even adults find phenomena such as bots challenging (Wineburg et al. 2019). This raises the question of whether such content is appropriate for primary schools despite its real-world relevance. Methodological challenges in investigating such phenomena and implications for school support are discussed in the main study. As a result of the pilot study, the five most challenging tasks were adjusted in terms of difficulty without altering their core content (e.g., linguistic adaptations of questions and answers, replacement of videos). To obtain more precise information on unidimensionality, IRT models are used for data analysis in the main study (Kelava and Moosbrugger 2020). Data collection for the main study was completed in December 2023 (n=672); the analysis aims to provide more precise insights into item and test quality. The quality results of the measurement instrument will be reported at the conference, with a focus on the area of deception. The study asks whether primary school children are able to evaluate deceptive content and what methodological challenges this poses for measurement, and it will also investigate whether individual variables (socioeconomic status, migration history) influence the evaluation of deceptive content.
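As a rough, self-contained sketch of the analysis steps mentioned above (not the study's actual procedure): an eigenvalue screening of the inter-item correlation matrix stands in here for the principal component analysis, and a minimal joint maximum-likelihood Rasch (1PL) fit stands in for the IRT analysis. It assumes binary item scores; in practice a dedicated IRT package and proper identification constraints would be used, and all names and the simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def eigenvalue_check(responses: np.ndarray) -> np.ndarray:
    """Rough unidimensionality screening: sorted eigenvalues of the
    inter-item Pearson correlation matrix (Kaiser criterion and the
    first-to-second eigenvalue ratio serve as heuristics)."""
    corr = np.corrcoef(responses, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

def rasch_joint_mle(responses: np.ndarray):
    """Minimal joint maximum-likelihood sketch of a Rasch (1PL) model,
    estimating person abilities (theta) and item difficulties (beta).
    The model is only identified up to a constant shift; estimates are
    centred afterwards for reporting."""
    n_persons, n_items = responses.shape

    def neg_log_lik(params):
        theta, beta = params[:n_persons], params[n_persons:]
        p = expit(theta[:, None] - beta[None, :])      # P(correct | theta, beta)
        eps = 1e-9
        return -np.sum(responses * np.log(p + eps)
                       + (1 - responses) * np.log(1 - p + eps))

    def grad(params):
        theta, beta = params[:n_persons], params[n_persons:]
        resid = responses - expit(theta[:, None] - beta[None, :])
        return np.concatenate([-resid.sum(axis=1), resid.sum(axis=0)])

    result = minimize(neg_log_lik, np.zeros(n_persons + n_items),
                      jac=grad, method="L-BFGS-B")
    return result.x[:n_persons], result.x[n_persons:]

# Example with simulated data for 672 pupils and 20 items (illustrative only)
rng = np.random.default_rng(1)
data = (rng.random((672, 20)) < rng.uniform(0.25, 0.78, 20)).astype(int)

eig = eigenvalue_check(data)
print("eigenvalues > 1:", int(np.sum(eig > 1)))
print("first/second eigenvalue ratio:", round(eig[0] / eig[1], 2))

theta_hat, beta_hat = rasch_joint_mle(data)
print("centred item difficulties:", np.round(beta_hat - beta_hat.mean(), 2))
```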
References
Brandt, Holger; Moosbrugger, Helfried (2020): Planungsaspekte und Konstruktionsphasen von Tests und Fragebogen. In: Helfried Moosbrugger and Augustin Kelava (Eds.): Testtheorie und Fragebogenkonstruktion. Berlin, Heidelberg: Springer, pp. 41–66.
Brückner, Sebastian; Zlatkin-Troitschanskaia, Olga; Pant, Hans Anand (2020): Standards für pädagogisches Testen. In: Helfried Moosbrugger and Augustin Kelava (Eds.): Testtheorie und Fragebogenkonstruktion. Berlin, Heidelberg: Springer, pp. 217–248.
Cho, Hyunyi; Cannon, Julie; Lopez, Rachel; Li, Wenbo (2022): Social media literacy: A conceptual framework. In: New Media & Society. DOI: 10.1177/14614448211068530.
Feierabend, Sabine; Rathgeb, Thomas; Kheredmand, Hediye; Glöckler, Stephan (2023): KIM-Studie 2022. Kindheit, Internet, Medien. Basisuntersuchung zum Medienumgang 6- bis 13-Jähriger. Edited by Medienpädagogischer Forschungsverbund Südwest (mpfs). Available at https://www.mpfs.de/studien/kim-studie/2022/.
Ferrari, Anusca (2013): DIGCOMP: A Framework for Developing and Understanding Digital Competence in Europe. European Commission Joint Research Centre. Available at https://publications.jrc.ec.europa.eu/repository/handle/JRC83167, last accessed 16.05.2023.
Hasebrink, Uwe; Lampert, Claudia; Thiel, Kira (2019): Online-Erfahrungen von 9- bis 17-Jährigen. Ergebnisse der EU Kids Online-Befragung in Deutschland 2019. 2nd revised edition. Hamburg: Verlag Hans-Bredow.
Kelava, Augustin; Moosbrugger, Helfried (2020): Deskriptivstatistische Itemanalyse und Testwertbestimmung. In: Helfried Moosbrugger and Augustin Kelava (Eds.): Testtheorie und Fragebogenkonstruktion. Berlin, Heidelberg: Springer, pp. 143–158.
Kiili, Carita; Leu, Donald J.; Utriainen, Jukka; Coiro, Julie; Kanniainen, Laura; Tolvanen, Asko et al. (2018): Reading to Learn From Online Information: Modeling the Factor Structure. In: Journal of Literacy Research 50 (3), pp. 304–334. DOI: 10.1177/1086296X18784640.
Livingstone, S.; Mascheroni, G.; Staksrud, E. (2015): Developing a framework for researching children's online risks and opportunities in Europe. EU Kids Online. Available at https://eprints.lse.ac.uk/64470/1/__lse.ac.uk_storage_LIBRARY_Secondary_libfile_shared_repository_Content_EU%20Kids%20Online_EU%20Kids%20Online_Developing%20framework%20for%20researching_2015.pdf, last accessed 11.01.2024.
Ofcom (2022): Children and parents: media use and attitudes report. Available at https://www.ofcom.org.uk/__data/assets/pdf_file/0024/234609/childrens-media-use-and-attitudes-report-2022.pdf.
Purington Drake, Amanda; Masur, Philipp K.; Bazarova, Natalie N.; Zou, Wenting; Whitlock, Janis (2023): The youth social media literacy inventory: development and validation using item response theory in the US. In: Journal of Children and Media, pp. 1–21. DOI: 10.1080/17482798.2023.2230493.
Reppert-Bismarck; Dombrowski, Tim; Prager, Thomas (2019): Tackling Disinformation Face to Face: Journalists' Findings From the Classroom. Lie Detectors.
Weisberg, Lauren; Wan, Xiaoman; Wusylko, Christine; Kohnen, Angela M. (2023): Critical Online Information Evaluation (COIE): A comprehensive model for curriculum and assessment design. In: Journal of Media Literacy Education 15 (1), pp. 14–30. DOI: 10.23860/JMLE-2023-15-1-2.
Wineburg, Sam; Breakstone, Joel; Smith, Mark; McGrew, Sarah; Ortega, Teresa (2019): Civic Online Reasoning: Curriculum Evaluation (Working Paper 2019-A2, Stanford History Education Group, Stanford University). Available at https://stacks.stanford.edu/file/druid:xr124mv4805/COR%20Curriculum%20Evaluation.pdf, last accessed 29.06.2023.