Conference:
ECER 2009
Format:
Symposium Paper
Session Information
09 SES 09 A, Rasch Measurement in Educational Contexts (Part 2)
Symposium. Continued from 09 SES 08 A
Time:
2009-09-30
10:30-12:00
Room:
HG, HS 50
Chair:
Sarah Howie
Discussant:
Eugenio Gonzalez
Contribution
Cross-national (or cross-cultural) surveys such as the international reading assessment study PIRLS (Progress in International Reading Literacy Study) face the problem of data comparability across nations and cultures. One necessary task is to ensure a correct translation of the items used in the tests and background questionnaires into the target languages, including an appropriate adaptation to their cultural context (Malak & Trong, 2007). But even instruments in a single language may be biased when data from different countries or regions are compared (Schaffer & Riordan, 2003).
Different versions of the German translation of the reading tests from PIRLS 2006 have been used in Austria, Germany, Luxembourg and the German-speaking Community in Belgium. This allows a closer look at differences in item difficulty between these countries.
Using data from (1) PIRLS 2006 (only the German-speaking participating countries: Austria, Germany and Luxembourg), (2) IGLU Belgien (a study in the German-speaking Community in Belgium using the same reading tests as PIRLS 2006) and (3) LESELUX (a follow-up study to PIRLS Luxembourg 2006 using released items from PIRLS 2006), this paper examines differential item functioning (DIF) between the German-speaking countries as well as between the different versions of the German translation of the PIRLS reading assessment instruments. DIF analyses are carried out with Parscale 4, using both the three-parameter logistic model (as in the official PIRLS scaling; Foy, Galia & Li, 2007) and the one-parameter logistic model.
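As a point of reference (standard IRT notation, not drawn from the paper itself), one common parameterization of the three-parameter logistic model gives the probability of a correct response to item $i$ for a person with ability $\theta$ as
$$P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp\bigl(-a_i(\theta - b_i)\bigr)},$$
where $a_i$, $b_i$ and $c_i$ denote the item's discrimination, difficulty and pseudo-guessing parameters; the one-parameter logistic (Rasch-type) model corresponds to fixing $a_i$ to a common value and setting $c_i = 0$. DIF is then present when an item's parameters, most notably the difficulty $b_i$, differ systematically between groups (here: countries or translation versions) beyond what sampling error would explain.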
Substantial differential item functioning is found both when comparing results across the four German-speaking countries under consideration and when comparing the different translations used. This raises the question of the extent to which such differences in item difficulty might lower the international comparability of large-scale assessment studies.
As cross-national studies rely on the comparability of the measured constructs, close attention must be paid to keeping item bias to a minimum.