Session Information
99 ERC SES 03 B, Interactive Poster Session
Contribution
When taking language tests, students employ not only their knowledge but also three types of strategies: language learner strategies; test-management strategies (TMSs), i.e. “strategies for responding meaningfully to test items and tasks” (Cohen, 2011, p. 306); and test-wiseness strategies (TWSs), i.e. “strategies for using knowledge of test formats and other peripheral information to answer test items without going through the expected linguistic and cognitive processes” (Cohen, 2011, p. 306). The latter two groups are jointly referred to as test-taking strategies (TTSs). The use of TTSs is of interest to item writers, who, when creating tests, try to minimize the possible influence of TWS use on test performance to ensure test validity, as well as to teachers and students, who perceive TTS use as a means of maximizing test scores.
Research into the relationship between TTS use and test performance has yet to yield conclusive results. While Ghafournia (2013) claims that the group with the highest scores in an English reading test used the highest number of strategies, Wu, Chen and Stone (2018) concluded that, in a test where item difficulty rose progressively, the best-performing (highest-proficiency) group used the lowest number of strategies, whereas lower-proficiency test-takers had to resort to additional strategies to arrive at an answer. As for practical implications, Winke and Lim (2017) conclude that explicit instruction in TTS use has no significant impact on test performance; becoming acquainted with the test format and item types proved sufficient. Existing research is predominantly qualitative, drawing on various strategy inventories, e.g. Oxford’s (1989) SILL or Cohen and Upton’s (2007), and deals mainly with university students whose studies are related to English.
This work addresses issues of TTS use that need further investigation, mainly which TTSs are used by students at various levels of proficiency in English; the research also puts particular emphasis on how these strategies are employed and on the factors influencing the choice of strategies. The research is qualitative: semi-structured interviews with 18 informants were conducted, and the data were then analyzed in accordance with the principles of grounded theory.
The qualitative study also aims to provide data for the development of an instrument (a questionnaire) for the subsequent quantitative stage of the research.
Tests in accordance with STANAG 6001 are high-stakes, criterion-referenced language proficiency tests for NATO member nations. STANAG 6001 is a standardization agreement covering language curricula, test development, and the recording and reporting of “standardized language profiles”. The tests are standardized; however, each nation is responsible for its own system of testing. In the Czech Republic, the reading and listening tests consist solely of multiple-choice questions. The results of the study could be of use to test writers focusing on language proficiency testing, as well as to students and teachers, not only in the Czech Republic but in other countries too.
Method
This study focuses on the TTSs which STANAG 6001 English proficiency exam candidates employ; the main research questions are:
1. What TTSs are used in listening and reading tests by test-takers sitting STANAG 6001 English proficiency exams?
1.1. Is there a difference in the use of TTSs between groups of students at different levels of proficiency in English?
1.2. Is there a difference in the use of TTSs between test-takers who have taken part in an exam preparation course and other test-takers?
2. How do test-takers employ TTSs when taking a test and dealing with a test item?
The sample for this study comprised people taking the STANAG 6001 English exams in the Czech Republic, i.e. students of the University of Defence in Brno and employees of the Ministry of Defence of the Czech Republic. Maximal variation sampling was used to select informants covering all of the following traits (and their dimensions):
1. Exam level – candidates taking Level 1 / Level 2 / Level 3 exams
2. Exam preparation course participation – exam preparation course students / University of Defence students / other candidates
3. Soldiers / civilians
Eighteen people were interviewed, six taking each level of the exam. Eight test-takers were University of Defence students, seven were exam preparation course participants, and three were so-called external exam candidates. Fifteen informants were soldiers (including the University of Defence students) and three were civilians. Data collection took place between March and September 2019. A semi-structured interview was used to collect the data, and the interviews were recorded. The interviews, which lasted from fourteen to thirty-one minutes, were transcribed in full. The data were analyzed in accordance with the principles of grounded theory: initial open coding was followed by axial coding, gradually arriving at a theory of TTS use in a test.
Expected Outcomes
The results show (similarly to e.g. Nikolov, 2006) that the use of TTSs is highly individual and depends on the task and the context. The informants have a preferred strategy when dealing with an item – their primary TTS – which in most cases applies to both reading and listening test items. No differences were identified in TTS use between candidates taking exams of different levels. Informants used TMSs as well as TWSs; at the same time, the strategies could also be divided into two other groups, overlapping with the former: strategies applying to an item and strategies applying to the test as a whole. Subjective assessment of item difficulty appears to be a metastrategy (a new concept, as far as the author is aware) influencing the selection of a particular strategy (or strategies) for a particular item: perceiving an item as difficult leads to the use of a wider array of strategies. A similar concept appears in Wu, Chen, and Stone (2018); however, the items in their study increased in difficulty objectively, whereas this study deals with subjective assessment. Other factors influencing strategy selection include test properties, students’ concepts of item construction, and students’ opinions regarding the usefulness of a strategy. The research was conducted with a relatively small sample of candidates, so its findings need to be confirmed with a bigger sample. All findings apply only to TTSs used in multiple-choice language tests. The study relies on reported data; further steps in the research (a quantitative study) should yield data for triangulation. Nevertheless, the research offers interesting insights into TTS employment.
References
Cohen, A. (2011). Strategies in learning and using a second language. London: Taylor & Francis.
Cohen, A. & Upton, T. A. (2007). "I want to go back to the text": Response strategies on the reading subtest of the New TOEFL. Language Testing, 24(2), 209–250. https://doi.org/10.1177/0265532207076364
Ghafournia, N. (2013). The relationship between using multiple-choice test-taking strategies and general language proficiency levels. Procedia - Social and Behavioral Sciences, 70, 90–94.
Nikolov, M. (2006). Test-taking strategies of 12- and 13-year-old Hungarian learners of EFL. Language Learning, 56(1), 1–51.
Oxford, R. L. (1989). Language learning strategies: What every teacher should know. New York: Newbury House.
STANAG 6001 Ed. 5. Retrieved from https://www.natobilc.org/files/ATrainP-5%20EDA%20V2%20E.pdf
Winke, P. & Lim, H. (2017). The effects of test preparation on second-language listening test performance. Language Assessment Quarterly, 14(4), 380–397. https://doi.org/10.1080/15434303.2017.1399396
Wu, A. D., Chen, M. Y. & Stone, J. E. (2018). Investigating how test-takers change their strategies to handle difficulty in taking a reading comprehension test: Implications for score validation. International Journal of Testing, 18(3), 253–275.