Session Information
09 SES 13 B JS, Translation and Cross-cultural Comparability in Large Scale Assessments
Joint Paper Session NW 09 and NW 12
Contribution
International large-scale assessment (iLSA) studies like the Programme for International Student Assessment (PISA) or the Programme for the International Assessment of Adult Competencies (PIAAC) have become widespread. They are used for “monitoring the achievement level in a particular population, for comparing assessed (sub)populations”, and may therefore “form the basis for developing and/or revising educational policies” (Upsing, Gissler, Goldhammer, Rölke, & Ferrari, 2011, pp. 44–45). PISA tests, for example, compare the literacy levels of 15-year-olds across multiple countries, whereas PIAAC tests measure the competencies of adults.
iLSA studies cannot take place without prior translation of their test items into the languages of the participating countries. The validity of cross-national surveys or tests is at stake when the translated versions of test items contain errors or deviations. The main goal of test item translation is that “a person of the same ability will have the same probability of answering any assessment item successfully independent of his or her linguistic or cultural background” (Thorn, 2009, p. 9). In other words, a test should not become easier or harder to respond to because of its translation.
For studies like PISA or PIAAC, stringent quality control procedures have been set up to support translators and to check their work, as most translators are not familiar with test item translation and may not know which factors unduly affect the difficulty of a translated test item. Translators therefore receive general guidelines that point out the peculiarities of test translation (see PISA Consortium, 2010, for an example). Furthermore, the source-language test items are supplemented with item-specific guidelines, which explain difficult English expressions or idioms, ask translators (where possible) to use the same translated expression in the question (or item) as in the stimulus (i.e., the material the test person works with to answer the question), or ask them to use a local name instead of the English name (cf. Organisation for Economic Co-operation and Development [OECD], 2014, p. 95).
Despite these efforts, iLSA translations have been criticized. On the one hand, the quality of the translations themselves or of the translation process has been questioned (cf. Arffman, 2012a, 2012b; Dolin, 2007; Ercikan, 1998; Karg, 2005; Wuttke, 2007). On the other hand, unexpected statistical differences between language versions of tests have been pointed out (cf. Grisay, Gonzalez, & Monseur, 2009, p. 80; Eivers, 2010, p. 102).
This study analyzes the information process and the information needs of the translators who produce translations for iLSA tests, and compares these needs with the information supplied for the task. The translation process for the PIAAC study serves as the basis for this analysis.
Method
To understand the interplay between translators' information needs and the information provided, a qualitative approach is used. Qualitative content analysis (Mayring, 2010) was applied to explore both the general guidelines and the item-specific guidelines that translators received for the PIAAC study. In addition, all comments and corrections made during the translation process for the 34 language versions of the PIAAC study were analyzed, with the aim of identifying the questions and problems that arose and the corrections that were implemented. The underlying question was whether translators across target languages face similar difficulties in their work, and whether they find similar ways of resolving these (linguistic or technical) issues. The broader aim was to qualitatively analyze the impact of information on the adaptation process. The results of this analysis provided the background for an interview study with 20 actors in the translation process. All but four interview partners were involved in the translation process for PIAAC and/or PISA. The interviews focused on the information needs and the information environment that the different actors face when completing their tasks, and were designed to better understand the preferences and priorities of the various parties. A particular emphasis was on how these individuals deal with the information they receive when working for iLSA studies, and how this setup compares with other (translation) tasks they encounter.
Expected Outcomes
The results of the qualitative analysis show that information deficits arise during the translation of PIAAC test items (for example, because there are not enough empirical findings on which parts of a translated test item cause bias). For other translation tasks, translators do not receive information in the form of guidelines (as they do for iLSA), and one assumption is that they over-interpret this information: the analysis shows that the information provided to translators influenced their translation strategies and choices, and that they tended to attach a disproportionate level of importance to this information at the expense of their own competence. They may show this behavior because they lack experience with translating test items, while at the same time placing high confidence in the skills of the item developers, who create this information. The findings can inform further considerations regarding information needs, for example by giving stronger consideration to the information source and to the status and role of the information user.
References
Arffman, I. (2012a). Translating International Achievement Tests: Translators’ View. Finnish Institute for Educational Research Reports: Vol. 44. Jyväskylä: University of Jyväskylä. Retrieved from https://ktl.jyu.fi/julkaisut/julkaisuluettelo/julkaisut/2012/g044
Arffman, I. (2012b). Unwanted Literal Translation: An Underdiscussed Problem in International Achievement Studies. Education Research International, (2), 1–13. https://doi.org/10.1155/2012/503824
Dolin, J. (2007). PISA – An Example of the Use and Misuse of Large-Scale Comparative Tests. In S. T. Hopmann, G. Brinek, & M. Retzl (Eds.), Schulpädagogik und Pädagogische Psychologie: Vol. 6. PISA zufolge PISA – PISA According to PISA: Hält PISA, was es verspricht? – Does PISA Keep What It Promises? (pp. 93–125). Wien: LIT Verlag.
Eivers, E. (2010). PISA: Issues in Implementation and Interpretation. Irish Journal of Education, (38), 94–118. Retrieved from http://www.erc.ie/documents/vol38chp5.pdf
Ercikan, K. (1998). Translation Effects in International Assessments. International Journal of Educational Research, 29(6), 543–553. https://doi.org/10.1016/S0883-0355(98)00047-0
Grisay, A., Gonzalez, E., & Monseur, C. (2009). Equivalence of Item Difficulties across National Versions of the PIRLS and PISA Reading Assessment. In M. von Davier & D. Hastedt (Eds.), IERI Monograph Series: Issues and Methodologies in Large-Scale Assessment (Vol. 2, pp. 63–83). Retrieved from http://www.ierinstitute.org/fileadmin/Documents/IERI_Monograph/IERI_Monograph_Volume_02_Chapter_03.pdf
Karg, I. (2005). Mythos PISA. Vermeintliche Vergleichbarkeit und die Wirklichkeit eines Vergleichs. Göttingen: V-&-R-Unipress.
Organisation for Economic Co-operation and Development (OECD). (2014). PISA 2012 Technical Report. Programme for International Student Assessment. Retrieved from http://www.oecd.org/pisa/pisaproducts/PISA-2012-technical-report-final.pdf
PISA Consortium. (2010). Translation and Adaptation Guidelines for PISA 2012. Retrieved from https://www.oecd.org/pisa/pisaproducts/49273486.pdf
Thorn, W. (2009). International Adult Literacy and Basic Skills Surveys in the OECD Region. OECD Publishing. Advance online publication. https://doi.org/10.1787/221351213600
Upsing, B., Gissler, G., Goldhammer, F., Rölke, H., & Ferrari, A. (2011). Localisation in International Large-scale Assessments of Competencies: Challenges and Solutions. Localisation Focus, 10(1), 44–57. Retrieved from http://www.localisation.ie/sites/default/files/publications/Vol10_1UpsingGissleretAl.pdf
Wuttke, J. (2007). Uncertainties and Bias in PISA. In S. T. Hopmann, G. Brinek, & M. Retzl (Eds.), Schulpädagogik und Pädagogische Psychologie: Vol. 6. PISA zufolge PISA – PISA According to PISA: Hält PISA, was es verspricht? – Does PISA Keep What It Promises? (pp. 241–263). Wien: LIT Verlag.