Session Information
Contribution
Electronic measures and tools for testing competences are meeting increasing awareness in European education. There is some evidence that e-testing is no longer viewed as a mere means of automating an existing process, but as a reconceptualisation of it, with desirable improvements and inevitable challenges. The underlying idea is that electronic testing could improve both effectiveness, i.e. better identification of skills, and efficiency, by reducing costs (financial effort, human resources, etc.).

When dealing with the evaluation of skills, an obvious advantage of computer-based assessment over traditional pencil and paper is computer adaptive testing (CAT). CAT is usually developed on the basis of the item response theory (IRT) family of psychometric models. One advantage of this approach is reduced test execution time, since the test taker is always faced with a "realistic" challenge matched to his or her estimated ability. CAT is also claimed to be more secure: each test taker is given a tailored test, so cheating becomes difficult. On the other hand, CAT is completely dependent on technology, cannot as yet handle open-ended questions, and requires a rather large pool of items.

Cost-benefit considerations aim at verifying the strengths and weaknesses, potentials and barriers of the delivery modes in real situations. Paper-and-pencil and computer-based delivery each comprise advantages and challenges which can hardly be compared, especially in relation to estimated costs. Computer-based testing (CBT) offers additional benefits from an organisational, psychological, analytical and pedagogical perspective.

Whereas the use of computers in the assessment of skills is increasingly common in the US, there is little experience at European level as far as large-scale assessments are concerned. Some small-scale experiences are reported at project level, e.g. DIALANG, a project on computer-based language testing.
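The adaptive loop described above (administer the most informative item, score the response, re-estimate ability, repeat) can be illustrated with a minimal simulation. This is a sketch, not the algorithm of any platform discussed here: it assumes the simplest IRT model (the one-parameter Rasch model) and a crude grid-based maximum-likelihood ability estimate, and all function names are illustrative.

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) probability that a test taker of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a 1PL item at ability theta; for the
    Rasch model it peaks when the difficulty b equals theta."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def estimate_theta(responses):
    """Crude maximum-likelihood ability estimate over a fixed grid
    from -4 to 4. responses: list of (difficulty, correct) pairs."""
    grid = [x / 10.0 for x in range(-40, 41)]
    def loglik(theta):
        ll = 0.0
        for b, correct in responses:
            p = p_correct(theta, b)
            ll += math.log(p if correct else 1.0 - p)
        return ll
    return max(grid, key=loglik)

def run_cat(item_bank, answer_fn, test_length=10):
    """Minimal adaptive loop: start at theta = 0, repeatedly pick the
    unused item with the highest information at the current estimate,
    record the answer, and re-estimate theta.
    item_bank: list of item difficulties; answer_fn(b) -> bool."""
    theta, responses, used = 0.0, [], set()
    for _ in range(test_length):
        candidates = [i for i in range(len(item_bank)) if i not in used]
        if not candidates:
            break
        best = max(candidates,
                   key=lambda i: item_information(theta, item_bank[i]))
        used.add(best)
        correct = answer_fn(item_bank[best])
        responses.append((item_bank[best], correct))
        theta = estimate_theta(responses)
    return theta
```

In an operational CAT the item bank would carry calibrated 2PL/3PL parameters, the ability estimate would use EAP or a similar Bayesian estimator, and item selection would be constrained by exposure-control and content-balancing rules; the sketch only shows why such a test converges quickly on a "realistic" challenge for each test taker.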
DIALANG is an assessment system intended for language learners who want to obtain diagnostic information about their language proficiency; it also provides advice about how to improve language skills. DIALANG is Internet-based freeware, currently offering diagnostic tests in 14 European languages.

TAO (Test Assisté par Ordinateur) is a modular platform for Internet-based computer-aided testing. The platform allows the management of knowledge pertaining to subjects (individuals whose competencies and knowledge may be assessed), groups of subjects, tests and items (elements of tests requiring an answer from the user).

PISA pilot studies include additional national pilots (2003) and CBAS (2006).

The study draws on:
- a literature review
- expert interviews
- case studies
- an evaluation of platforms, tools and services

Experiences are not yet sufficiently documented, but results so far suggest that IT-based tools can support the assessment process and the analysis of results. In language skills assessment, certain specific barriers have to be taken into account. Whether undertaken in an electronic mode or not, the most important challenge of assessing productive skills is that heavy investments are needed to deliver tests and generate results at large scale. As demonstrated by PISA, the provision of open questions is rather cost-intensive.

The intention of this presentation is to reflect on the challenges of large-scale testing at European level and to outline quality dimensions and criteria for the implementation of eTesting and the selection of platforms.

References

- Alderson, J. C. (2005). Diagnosing Foreign Language Proficiency: The Interface between Learning and Assessment. London: Continuum. See also http://www.dialang.org.
- Baker, F. B. (2001). The Basics of Item Response Theory. Madison, WI: ERIC Clearinghouse on Assessment and Evaluation.
- Cassady, J. C. & Gridley, B. E. (2005). The effects of online formative and summative assessment on test anxiety and performance. The Journal of Technology, Learning, and Assessment, 4(1). Available from http://www.jtla.org.
- EDU/PISA/GB(2006)27. Assessing the reading of electronic texts: proposal for inclusion in PISA 2009.
- EDU/PISA/GB(2006)31. International option for the assessment of reading of electronic texts. Minutes.
- Ferris, S. (2002). The effects of computers on traditional writing. The Journal of Electronic Publishing, 8(1), August 2002. Also available at http://www.press.umich.edu/jep/08-01/ferris.html, last accessed 30.11.2006.
- Goldberg, A., Russell, M. & Cook, A. (2003). The effect of computers on student writing: a meta-analysis of studies from 1992 to 2002. The Journal of Technology, Learning, and Assessment, 2(1), February 2003. Also available at http://www.bc.edu/research/intasc/jtla/journal/pdf/v2n1_jtla.pdf, last accessed 30.6.2006.
- Grotjahn, R. (Ed.) (2006). Der C-Test: Theorie, Empirie, Anwendungen / The C-Test: Theory, Empirical Research, Applications.
- Madsen, H. S. (1991). Computer adaptive testing of listening and reading comprehension: the Brigham Young University approach. In P. Dunkel (Ed.), Computer Assisted Language Learning and Testing: Research Issues and Practice. New York, NY: Newbury House.
- Mills, C. N. & Steffen, M. (2000). The GRE Computer Adaptive Test: operational issues. In Van der Linden & Glas (2000), pp. 75-100.
- Parshall, C. G., Spray, J. A., Kalohn, J. C. & Davey, T. (2002). Practical Considerations in Computer-Based Testing. Berlin: Springer.
- Plichart, P., Jadoul, R., Vandenabeele, L. & Latour, T. (2004). TAO, a collaborative distributed computer-based assessment framework built on Semantic Web standards. In International Conference on Advances in Intelligent Systems - Theory and Applications, AISTA 2004, in cooperation with IEEE Computer Society, 15-18 November 2004, Luxembourg.
- Poggio, J., Glasnapp, D. R. & Yang, X. (2005). A comparative evaluation of score results from computerized and paper & pencil mathematics testing in a large scale assessment program. The Journal of Technology, Learning, and Assessment, 3(6), February 2005. Also available at www.bc.edu/research/intasc/jtla/journal/pdf/v3n6_jtla.pdf, last accessed 30.11.2006.
- Reichert, M., Keller, U. & Martin, R. (2006). Le Test de Connaissance du Français et le C-Test: étude sur la comparabilité entre les deux instruments. University of Luxembourg.
- Rudner, L. M. (1998). An on-line interactive computer adaptive testing tutorial. Available at http://edres.org/scripts/cat, last accessed 15/03/2006.
- Wainer, H. (1990). Computer Adaptive Testing: A Primer. Hillsdale, NJ: Lawrence Erlbaum.
- Wynne, L. (2006). e-Assessment and value (v.1). To be published by Pearson VUE.