Session Information
31 SES 13 JS, Assessing Language Competencies – Theoretical Considerations and Empirical Testing
Symposium Joint Session NW 09 and NW 31
Contribution
Technology-based assessment extends the possibilities of traditional paper-and-pencil testing with respect to standardization, construct validity, automatic coding, adaptive test design, and access to behavioral logfile data. This presentation gives an overview of these advances and then presents empirical examples from our own research on reading assessment. In one study we investigated the speed-ability tradeoff in word recognition and sentence verification: if subjects differ in how they trade speed against accuracy, an untimed accuracy measure is confounded with each subject's decision on speed. We compared the convergent validity of untimed and timed measures of word recognition and sentence verification with reading competence. Results suggest that timed administration increases convergent validity by eliminating this confounding with speed. In another study we investigated the construct validity of the Digital Reading Assessment from PISA 2012. Reading texts presented in a non-linear format, as on the Internet, poses demands that go beyond reading “traditional” printed texts. We tested the impact of reading-related, computer-related, and general cognitive skills needed for proficient reading of online texts. Results revealed that reading comprehension of printed texts and basic computer skills significantly predicted digital reading competence.
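The speed-ability confound described above can be sketched in a small simulation (a minimal illustration with invented parameters, not the study's actual data or measurement model): if subjects differ in how cautiously they respond, untimed accuracy mixes ability with caution, whereas a timed (deadline) measure reflects ability more directly, so the timed score correlates more strongly with an external reading-competence criterion.

```python
# Toy simulation of the speed-ability confound; all parameters are
# invented for illustration only.
import random

random.seed(1)
n = 2000

ability = [random.gauss(0, 1) for _ in range(n)]   # latent reading ability
caution = [random.gauss(0, 1) for _ in range(n)]   # individual speed-accuracy compromise

# Untimed accuracy rewards cautious (slow) responding regardless of ability;
# a response deadline removes the caution component from the score.
untimed = [a + 0.8 * c + random.gauss(0, 0.5) for a, c in zip(ability, caution)]
timed = [a + random.gauss(0, 0.5) for a in ability]
criterion = [a + random.gauss(0, 0.5) for a in ability]  # external competence score

def pearson(x, y):
    # Plain Pearson correlation, written out to stay dependency-free.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r_untimed = pearson(untimed, criterion)
r_timed = pearson(timed, criterion)
print(f"untimed r = {r_untimed:.2f}, timed r = {r_timed:.2f}")
```

In this sketch the timed measure shows the higher convergent correlation, mirroring the pattern the abstract reports: the caution variance inflates the untimed score's variance without adding criterion-relevant information.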