Session Information
MC_Poster, Poster Session Main Conference
Contribution
International reading literacy assessments have attracted wide interest in recent years, and their results often form the basis on which nations make educational decisions. These assessments require extensive translation work, because all the texts and items they use have to be translated into the languages of the participating countries. Moreover, as measuring instruments the translations have to satisfy exceptionally high standards: they have to be equivalent, or comparable, to each other; that is, they have to measure the same construct at a comparable level of difficulty. If this is not the case, the entire assessment and its results risk being invalid.
Over the years, criticism has been expressed and suspicions aroused concerning the quality and equivalence of the instruments translated and used in international assessments. It has been claimed that, because of linguistic and cultural differences alone, it will never be possible to attain full equivalence in these assessments (e.g. Bonnet, 2002; Bonnet et al., 2003). Moreover, research on these translations has revealed deficiencies in them, suggesting that equivalence may not always have been attained (e.g. Arffman, 2007; Bechger et al., 1998). All in all, the prevailing view seems to be that while the instruments translated and used in international reading literacy assessments have improved, remaining deficiencies may still endanger the validity of the assessments (e.g. Bechger et al., 1998; Bonnet, 2002).
Translations are always the end result of, and therefore dependent on, the translation process followed. This also holds for the instruments translated in international reading literacy assessments: if these instruments contain deficiencies, those deficiencies are largely a function of deficiencies in the translation procedures. Therefore, to improve the quality and increase the equivalence of the instruments used in international reading literacy assessments, it is important to explore the translation procedures behind them and to develop those procedures.
The purpose of the study was to examine the translation procedures followed when translating instruments for international reading literacy assessments, and to identify factors in these procedures that complicate the translation work and make it difficult to ensure equivalence between the different-language test versions. Knowledge of such factors helps in developing translation procedures that are better able to yield equivalent instruments. Ultimately, the study thus aimed at increasing the validity of international reading literacy assessments. However, since the translation procedures followed in other types of international studies are often to a significant degree similar to those used in international reading literacy assessments, the study can also inform the development of translation procedures in these other types of international studies.
Method
Expected Outcomes
References
Arffman, I. (2007). The problem of equivalence in translating texts in international reading literacy studies. A text analytic study of three English and Finnish texts used in the PISA 2000 reading test. Jyväskylä: University of Jyväskylä, Institute for Educational Research.
Bechger, T., van Schooten, E., de Glopper, C., & Hox, J. (1998). The validity of international surveys of reading literacy: The case of the Reading Literacy Study. Studies in Educational Evaluation, 24, 99-125.
Bonnet, G. (2002). Reflections in a critical eye: On the pitfalls of international assessment [Review of the book Knowledge and skills for life: First results from PISA 2000]. Assessment in Education, 9, 387-399.
Bonnet, G., Daems, F., de Glopper, C., Horner, S., Lappalainen, H.-P., Nardi, E., Remond, M., Robin, I., Rosen, M., Solheim, R., Tonnessen, F.-E., Vertecchi, B., Vrignaud, P., Wagner, A., & White, J. (2003). Culturally balanced assessment of reading. C-bar. Available: http://cisad.adc.education.fr/reva/pdf/cbarfinalreport.pdf [2009, May 12].
Danks, J., & Griffin, J. (1997). Reading and translation. A psycholinguistic perspective. In J. Danks, G. Shreve, S. Fountain & M. McBeath (Eds.), Cognitive processes in translation and interpreting (pp. 161-175). Thousand Oaks, CA: Sage.
Grisay, A. (2002). Translation and cultural appropriateness of the test and survey material. In R. Adams & M. Wu (Eds.), PISA 2000 technical report (pp. 57-70). Paris: OECD.
Hambleton, R. (2005). Issues, designs, and technical guidelines for adapting tests into multiple languages and cultures. In R. Hambleton, P. Merenda & C. Spielberger (Eds.), Adapting educational and psychological tests for cross-cultural assessment (pp. 3-38). Mahwah, NJ: Erlbaum.
Nord, C. (1991). Text analysis in translation. Theory, methodology, and didactic application of a model for translation-oriented text analysis. Amsterdam: Rodopi.
Vermeer, H. J. (1989). Skopos and commission in translational action. In A. Chesterman (Ed.), Readings in translation theory (pp. 173-187). Helsinki: Finn Lectura.