Conference:
ECER 2009
Format:
Symposium Paper
Session Information
09 SES 11 A, Rasch Measurement in Educational Contexts (Part 4)
Symposium. Continued from 09 SES 10 A
Time:
2009-09-30
16:45-18:15
Room:
HG, HS 50
Chair:
Pekka Antero Kupari
Discussant:
Wolfram Schulz
Contribution
Large-scale assessments typically administer items arranged in booklet designs. The same item then often appears at different positions in different booklets, which can bias the item's difficulty depending on its position and, in consequence, bias the estimated ability of the test taker. Another important aspect of test administration is the choice of response format, with various multiple-choice and constructed-response formats available. It is conceivable that using a different response format also affects the difficulty of an item.
The LLTM (linear logistic test model) is a suitable IRT approach for examining the effects of item position as well as response format on the probability of solving an item (Kubinger, 2008). Using this method, the impact of item position was analyzed in a mathematics competence test of the Austrian national standard tests for 4th grade. The test was administered as a typical large-scale assessment with several test forms, the items appearing at varying positions across booklets. The data set comprised nearly 1,800 students. Results indicate a small fatigue effect in this test (Hohensinn et al., 2008). The impact of the response format was examined in a mathematics competence test of the Austrian national standard tests for 8th grade, which was administered to around 3,000 students. For this analysis it is expected that different response formats alter the probability of solving an item even when the item content is the same. All IRT analyses were conducted with eRm, a software package for R.
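The core idea of the LLTM can be sketched as follows: each item's Rasch difficulty is decomposed into a weighted sum of basic parameters, so that effects such as item position or response format enter the model as additive components of difficulty. The sketch below illustrates this decomposition in Python; the design weights, parameter values, and the "fatigue" and "format" components are purely hypothetical and are not estimates from the Austrian data or the eRm package.

```python
import math

# Hypothetical LLTM sketch: an item's difficulty beta_i is a weighted
# sum of basic parameters eta_j via a row of the design matrix Q.
# All numbers below are illustrative, not estimated values.

def lltm_difficulty(q_row, eta):
    """beta_i = sum_j q_ij * eta_j  (LLTM decomposition of item difficulty)."""
    return sum(q * e for q, e in zip(q_row, eta))

def solving_probability(theta, beta):
    """Rasch model: P(X = 1) = exp(theta - beta) / (1 + exp(theta - beta))."""
    return math.exp(theta - beta) / (1 + math.exp(theta - beta))

# Assumed basic parameters eta:
# [content difficulty, per-position fatigue effect, multiple-choice format effect]
eta = [0.8, 0.05, -0.3]

# Same item content, multiple-choice format, presented at position 1 vs. 20.
beta_early = lltm_difficulty([1, 1, 1], eta)    # 0.8 + 0.05 - 0.3 = 0.55
beta_late = lltm_difficulty([1, 20, 1], eta)    # 0.8 + 1.00 - 0.3 = 1.50

theta = 0.5  # person ability parameter
print(round(solving_probability(theta, beta_early), 3))  # 0.488
print(round(solving_probability(theta, beta_late), 3))   # 0.269
```

Under these assumed parameters, the same item placed later in the booklet accumulates a larger fatigue component and is therefore solved with lower probability, which is the kind of position effect the LLTM is used to detect.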
Method
Expected Outcomes
References
Kubinger, K. D. (2008). On the revival of the Rasch model-based LLTM: From constructing tests using item generating rules to measuring item administration effects. Psychology Science Quarterly, 50(3), 311-327.
Hohensinn, C., Kubinger, K. D., Reif, M., Holocher-Ertl, S., Khorramdel, L. & Frebort, M. (2008). Examining item-position effects in large-scale assessment using the Linear Logistic Test Model. Psychology Science Quarterly, 50(3), 397-402.