Session Information
09 SES 06 A, ICT and Education: Perspectives from ICILS and PIRLS
Symposium
Contribution
International large-scale assessments (ILSAs) administer context questionnaires to students, teachers, and principals to collect information about school, classroom, and learning conditions. These questionnaires usually consist of series of rating-type items, which often face issues such as social desirability, self-presentation, and acquiescence bias (e.g., Lelkes & Weiss, 2015; Schaeffer & Dykema, 2020). There are alternatives to rating scales, such as forced-choice items, rankings, anchoring vignettes, or situational judgement tasks, and these alternative item types can address some of the issues found with rating items. Ranking, for example, has been found to reduce response-style effects and to improve data quality (Krosnick & Alwin, 1988). Furthermore, computer-based surveys enable the administration of items or response scales that are difficult to implement on paper, offering functions such as sliders, drag-and-drop, or drop-down menus. In the field trial of the International Computer and Information Literacy Study (ICILS) 2023, the Q-sort was introduced as an alternative question type to assess the teaching beliefs of secondary school teachers. Q-sort is a technique initially developed for clinical interviews that requires respondents to arrange and rank a series of cards according to their preference. In this paper, we investigate the feasibility of using the Q-sort (ranking) format to collect data about teaching beliefs in an international survey, and we explore and compare the quality and usefulness of the data gathered by the two question types, ranking and rating. We use teacher data from 28 countries participating in the ICILS 2023 field trial to investigate the effect of the question format using multiple criteria of data quality. The two question types were randomly distributed across the participating teachers within countries.
We compare the two versions by the amount of missing data, the distribution of responses, item and scale means, and the correlations between scale scores and teacher characteristics. A higher proportion of missing values was observed for the ranking version, because the cognitive load of sorting a total of 18 items in parallel is higher than that of answering rating items individually. In addition, we observed more variance in the responses to the ranking version than to the rating version. Ranking removes the possibility that respondents agree equally with all statements and can thus reduce acquiescence bias. Although some advantages were found for the ranking format, we cannot recommend implementing the current version in further data collections because of the high amount of missing data observed.
References
Krosnick, J. A., & Alwin, D. F. (1988). A test of the form-resistant correlation hypothesis: Ratings, rankings, and the measurement of values. Public Opinion Quarterly, 52(4), 526–538.
Lelkes, Y., & Weiss, R. (2015). Much ado about acquiescence: The relative validity and reliability of construct-specific and agree–disagree questions. Research & Politics, 2(3), 2053168015604173.
Schaeffer, N. C., & Dykema, J. (2020). Advances in the science of asking questions. Annual Review of Sociology, 46, 37–60.