Session Information
09 SES 05 C, Competencies and Attitudes of Teachers (Part 1)
Paper Session: to be continued in 09 SES 06 C
Contribution
The presentation focuses on the comparability of results across cross-cultural studies. Globalization has led to a large number of cross-cultural studies in many areas, especially in education and psychology. Researchers have to use multiple-language versions of tests and questionnaires and to involve individuals from different languages and cultures. One of the most important tasks of cross-cultural research is to ensure that results obtained in different cultures and languages are comparable; for researchers, this means that tests in different languages and cultures must be equivalent.
Estimates derived from surveys always contain errors (Groves, 2004). These errors may be random or systematic, and either kind can bias results; systematic errors in particular threaten the comparability of data obtained from different cultural groups. The literature identifies five sources of incomparability, pertaining to the construct, the instrument, the administration, the sample, and the response procedure (Chen, 2013). This presentation focuses on construct equivalence, which refers to the similarity of the construct measured by international assessments across languages and cultures.
It is not sufficient simply to assume that an instrument developed in one culture, based on specific cultural values and conceptions, measures the same construct in another culture. In fact, the literature emphasizes that constructs, especially psychological constructs, are likely to involve culture-specific attributes and meanings (Cooper & Denner, 1998). For example, constructs such as attitudes, classroom climate, or even socio-economic status are likely to carry different meanings in different countries, cultures, and languages (Ercikan & Lyons-Thomas, 2013). Linguistic equivalence is one of the most investigated aspects of measurement comparability, but little research has been conducted on the comparability of constructs.
Several methods exist for testing construct equivalence: exploratory and confirmatory factor analysis, Item Response Theory (IRT) methods, differential item functioning (DIF) analysis, Latent Class Analysis (LCA), nomological networks, and some qualitative methods. In this work, two approaches, IRT modeling and factor analysis (EFA and CFA), are combined to analyze the cross-cultural comparability of survey data.
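As an illustration of one common factor-analytic check, the similarity of factor loadings estimated separately in two cultural groups can be summarized with Tucker's coefficient of congruence. The sketch below uses invented loadings; the item values, the two country groups, and the conventional 0.95 cut-off are illustrative assumptions, not NorBA results.

```python
import numpy as np

def tucker_congruence(a, b):
    """Tucker's coefficient of congruence between two factor-loading
    vectors; values above roughly 0.95 are commonly read as evidence
    of factor equivalence across groups."""
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Hypothetical loadings of the same five items on one belief factor,
# estimated separately in two country samples (illustrative numbers).
loadings_country_a = np.array([0.71, 0.65, 0.58, 0.62, 0.70])
loadings_country_b = np.array([0.69, 0.60, 0.55, 0.66, 0.72])

print(round(tucker_congruence(loadings_country_a, loadings_country_b), 3))
```

A coefficient near 1 suggests the factor is defined by the items in the same way in both groups; markedly lower values would flag possible construct non-equivalence.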
The problem of cross-cultural comparability is considered in relation to NorBA, a survey aimed at assessing mathematics teachers' beliefs in the Nordic and Baltic countries (Lepic & Pipere, 2011). In this study, teachers' beliefs are understood as the conceptions, attitudes, and personal ideology that underlie teachers' practice. Teachers' beliefs are an extremely important aspect of pedagogical research because they are closely connected with teachers' practice and, therefore, with students' achievement.
The most widely used current model of teachers' beliefs about the nature of teaching and learning distinguishes two groups of beliefs: "traditional beliefs" and "constructivist beliefs" (OECD, 2009). In the traditional approach, the teacher communicates knowledge in a clear and structured way, explains correct solutions, gives learners clear and resolvable problems, and ensures peace and concentration in the classroom. In the constructivist approach, students are active participants in the acquisition of knowledge, and students' own inquiry is emphasized in developing problem solutions (Underhill, 1988; OECD, 2009). This model is used in the NorBA international survey.
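Under such a two-scale model, a teacher's position is typically summarized by averaging Likert responses within each belief scale. The sketch below is a minimal illustration: the six items, the 1-5 response values, and the item-to-scale assignment are invented for the example and are not taken from the NorBA instrument.

```python
import numpy as np

# Hypothetical 1-5 Likert responses of four teachers to six belief items.
# Items 0-2 are assumed to measure "traditional" beliefs, items 3-5
# "constructivist" beliefs (the assignment is illustrative only).
responses = np.array([
    [5, 4, 5, 2, 2, 1],
    [2, 1, 2, 5, 4, 5],
    [4, 4, 3, 4, 4, 3],
    [3, 2, 3, 4, 5, 4],
])

# One mean score per teacher on each scale.
traditional_score = responses[:, :3].mean(axis=1)
constructivist_score = responses[:, 3:].mean(axis=1)

print(traditional_score)
print(constructivist_score)
```

Note that comparing such raw scale means across countries presupposes construct equivalence, which is precisely what the equivalence analyses described earlier are meant to test.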
The questionnaire used in NorBA was first developed in English and then translated into the languages of the participating countries. Preliminary analysis of the survey data, however, has shown that the comparability of data obtained from the different countries is threatened by possible non-equivalence of constructs.
The research question is therefore: How can the beliefs of mathematics teachers from different countries be compared under conditions of possible construct non-equivalence?
Method
Expected Outcomes
References
1. Chen, Y. M. (2013). Cross-cultural comparability of surveys. The University of British Columbia, October 7, 2013.
2. Cooper, C. R., & Denner, J. (1998). Theories linking culture and psychology: Universal and community-specific processes. Annual Review of Psychology, 49(1), 559-584.
3. Ercikan, K., & Lyons-Thomas, J. (2013). Adapting tests for use in other languages and cultures. In K. Geisinger (Ed.), APA Handbook of Testing and Assessment in Psychology, Volume 3 (pp. 545-569). Washington, DC: American Psychological Association.
4. Groves, R. M. (2004). Survey errors and survey costs (Vol. 536). New York: Wiley.
5. Lepic, M., & Pipere, A. (2011). Baltic-Nordic comparative study on mathematics teachers' beliefs and practices. Acta Paedagogica Vilnensia, 27, 115-123.
6. Linacre, J. M. (2011). A user's guide to WINSTEPS. Program manual 3.71.0. http://www.winsteps.com/a/winsteps.pdf
7. OECD. (2009). Creating Effective Teaching and Learning Environments: First Results from TALIS. Paris: OECD Publishing.
8. Smith, E. V., Jr. (2002). Detecting and evaluating the impact of multidimensionality using item fit statistics and principal component analysis of residuals. Journal of Applied Measurement, 3(2), 205-231.
9. Underhill, R. G. (1988). Mathematics teachers' beliefs: Review and reflections. Focus on Learning Problems in Mathematics, 10(3), 43-58.