Session Information
22 SES 06 B, Research-Led Strategies, Innovations and Practices for Achieving Learning Through Assessment in Higher Education
Symposium
Contribution
In recent years, competences have gained considerable importance in professional contexts and, consequently, in education. The resulting focus on competence-based education has posed substantial challenges for assessment. To assess competences adequately, students are asked to provide evidence of a performance; ideally, open and real-life tasks are used for such performance assessment. However, it has proven difficult to assess these performances reliably. One way of dealing with this low reliability has been to introduce specific criteria in the form of rubrics, which in turn hampers the validity of the performance assessment. In this talk an alternative method is introduced that does not require standardisation of the assessment. Comparative Judgement (CJ) is based on the idea that people can compare two performances more easily and reliably than they can assign a score to a single one. We will present findings from the D-PAC project that support the validity and reliability of CJ. Van Daal, Lesterhuis, Coertjens, Donche and De Maeyer (2016) provided evidence that the validity of CJ rests on the holistic nature of the judgement. This judgement draws on professional expertise, which is reflected in the result of the CJ assessment, called the shared consensus. Regarding reliability, an overview study showed that the reliability measure in CJ can be interpreted as a measure of consistency over time and among judges (Verhavert, 2016). Finally, in a study on writing assessment, Coertjens et al. (2015) found that for lower levels of reliability CJ is as efficient as rubrics, while when higher reliability levels are desired, CJ outperforms rubrics in terms of efficiency.
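To make the CJ procedure concrete, the sketch below shows how a set of pairwise judgements can be turned into a scale of scores. This is not code from the D-PAC project; it is a minimal, hypothetical illustration assuming judgements are recorded as (preferred, other) pairs and fitted with a Bradley-Terry model, one common modelling choice for comparative judgement. The data and the bradley_terry helper are invented for illustration only.

```python
import numpy as np

def bradley_terry(comparisons, n_items, n_iter=200, tol=1e-8):
    """Estimate item quality scores from pairwise judgements via the MM algorithm."""
    wins = np.zeros(n_items)                    # number of comparisons each item won
    pair_counts = np.zeros((n_items, n_items))  # how often each pair was compared
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[winner, loser] += 1
        pair_counts[loser, winner] += 1

    strength = np.ones(n_items)                 # initial ability estimates
    for _ in range(n_iter):
        # MM update: strength_i = wins_i / sum_j [ n_ij / (strength_i + strength_j) ]
        denom = (pair_counts / (strength[:, None] + strength[None, :])).sum(axis=1)
        new_strength = wins / np.maximum(denom, 1e-12)
        new_strength /= new_strength.sum()      # fix the overall scale
        if np.abs(new_strength - strength).max() < tol:
            strength = new_strength
            break
        strength = new_strength
    return np.log(strength)                     # scores on a log (logit-like) scale

# Hypothetical judgements on four student texts (indices 0-3);
# each tuple records (preferred text, other text) for one comparison.
judgements = [(0, 1), (0, 2), (1, 2), (3, 2), (0, 3), (2, 1), (3, 1), (1, 3)]
print(bradley_terry(judgements, n_items=4))
```

In an actual CJ assessment, the resulting scale would additionally be summarised with a reliability estimate (such as the scale separation reliability discussed by Verhavert, 2016); that step is omitted from this sketch.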
References
Coertjens, L., Verhavert, S., Lesterhuis, M., Goossens, M., & De Maeyer, S. (2016, November). Is comparative judgement more efficient? An explorative study into the reliability-efficiency trade-off when using rubrics or comparative judgement. Paper presented at the Association for Educational Assessment Europe conference, 2015.

van Daal, T., Lesterhuis, M., Coertjens, L., Donche, V., & De Maeyer, S. (2016). Validity of comparative judgement to assess academic writing: examining implications of its holistic character and building on a shared consensus. Assessment in Education: Principles, Policy & Practice. http://dx.doi.org/10.1080/0969594X.2016.1253542

Verhavert, S., De Maeyer, S., Donche, V., & Coertjens, L. (2016, November). Comparative Judgement and Scale Separation reliability: Yes, but what does it mean? Paper presented at the Association for Educational Assessment Europe conference, Cyprus.